Implementing Dynamic User-Defined Events

In the previous post, we started looking at how to deal with the situation where you have reentrant VIs that need UDEs that are specific to particular instances of the VI. That post covers a lot of the basic theory and design issues, so if you haven’t read it, please take a few moments to check it out, as much of the following won’t make sense without that background.

Filling in the Blanks

The basic approach we took in designing our example consisted of first defining the program’s overall intended operation. With that theory sorted out, we then began creating the program’s high-level structure, being sure to leave blanks or prototypes as placeholders for details that we hadn’t yet defined. At one time this sort of approach would have been considered heresy. The feeling was you had to have everything worked out in detail before you wrote even one line of code or created even a single VI.

Clearly there are a lot of practical problems with this approach, but one of the biggest (in terms of long-term impact on the project) is that it leads to a breakdown of the walls that information hiding is trying so hard to erect. Think about it for a second: if an integral part of designing a process launcher includes figuring out the internal operation of the processes it is going to launch, it’s going to be very difficult to keep that knowledge from contaminating your launcher design. Or to put it another way, there will be a strong tendency to create code that depends on the process VIs behaving in a particular way.

You can see a better approach in the way we designed the launcher for our reentrant VI. Last time we defined a VI that (beyond the mechanics of how to launch a reentrant VI) only concerned itself with the data that the process VIs need to do their work. Based on a black-box description of what the routines were supposed to do, we were able to create the logic for providing that data without worrying about what the routines were going to do with it. This is a Very Good Approach.

Getting Registered

One of the “blanks” that we left to be filled in until now was a VI called DAQ Operations.lvlib:Interval Registry.vi. When we first considered this VI’s external persona we noted that, when passed an enumerated value Check and an interval, it would return a Boolean flag indicating whether or not that interval already existed. However, we made no assumptions about how it might go about completing that task. Having taken, so to speak, a step inside the veil, we can look at how it works.

Interval Registry - Check

This is the logic that implements the Check function, and as it turns out, there isn’t really very much to it. In fact, we see logic that implements a simple FGV (functional global variable). All the function in question really does is search an array of intervals to see if the one we asked about exists. Given this level of simplicity, a logical question would be, “Why bother with the subVI? The process manager has a loop, why not just use it to maintain the list of active intervals?”

This approach would handle half of the problem quite well – the half dealing with whether or not a particular interval was launched. But if you think about it, that isn’t what we really need to know. Operationally, we don’t care whether a particular interval was ever launched. We want to know whether the interval is still running, and the interval’s current state is something of which the manager has no knowledge. Remember that when we defined the rules governing the high-level behavior of the acquisition subsystem, we said that when an acquisition process sees that it has no addresses left to poll it will shut itself down. How is the process manager supposed to know that happened?

Beyond this practical consideration, there is also a conceptual problem associated with storing the interval list in the process manager. You might not realize it, but all data has an associated “scope” that defines the area, or context, which owns the data. If we store the interval list in the process manager we are, conceptually, giving ownership of a piece of information that should belong to the subsystem as a whole, to one specific VI. Moreover, that one specific VI would then be required to manage all accesses to that data. Now while that functionality could certainly be implemented, it would come at the cost of added complications in the form of extra signalling and event handlers to respond to those messages.

By contrast, moving the array into an FGV that any VI in the subsystem can access makes the data immediately available to the subsystem as a whole: no event handlers, no added signalling. As we get into the VI that performs the (simulated) Modbus IO (Collect Data.vi), we will see how the FGV’s remaining two functions work.
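If it helps to see the idea in text form, here is a rough JavaScript rendering of the Check operation – a sketch of my own devising, not the actual LabVIEW code, with the FGV’s shift-register contents modeled as a file-level array:

var intervals = [];   // plays the part of the FGV's shift register

// Check: is a process already running at this sample interval?
function intervalRegistryCheck(interval) {
  return intervals.indexOf(interval) !== -1;
}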

Keeping Time

Next, let’s look at the reentrant VI (Metronome.vi) that is responsible for triggering acquisitions at a fixed interval. You will recall that the VI is passed three parameters when it is launched: the desired interval between acquisitions, the event to fire when the interval times out, and the event that will fire when the interval is shutting down. Here is the logic for initializing the VI.

Metronome - Initialization

We see that the interval value drives the Timeout node of an event loop, the shutdown event is registered, and the event that this VI will fire is simply passed into the event loop. As you might expect, the event loop itself only has two events. Here is the Timeout event:

Metronome - Timeout

No surprises here: all the event needs to do is fire the acquisition event – which this code does with alacrity. The shutdown event (which handles both types of shutdown) is likewise uncomplicated:

Metronome - Stop Interval

It destroys the two dynamic UDEs and stops the event loop. The acquisition event is destroyed because once a shutdown is initiated, it is no longer needed. The interval stop event is destroyed because it is only fired in the acquisition VI, so by the time we get to this point in the code it has already been fired by the only other VI that needs it. The small delay between the two operations is to ensure that the acquisition VI has time to start shutting itself down.
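Reduced to JavaScript-flavored pseudocode (the event and timing calls here are stand-ins of my own invention, not real APIs), the metronome’s entire life looks something like this:

function metronome(interval, acquireEvent, stopEvent) {
  var running = true;
  while (running) {
    // wait up to 'interval' msec for the stop event; null means timeout
    var fired = waitOnEvent(stopEvent, interval);
    if (fired === null) {
      fireEvent(acquireEvent);      // Timeout case: trigger an acquisition
    } else {
      destroyEvent(acquireEvent);   // shutdown has started; no more triggers
      shortDelay();                 // let the acquisition VI begin shutting down
      destroyEvent(stopEvent);      // it has already served its purpose
      running = false;              // stop the event loop
    }
  }
}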

Getting into the Publishing Business

The last code we need to look at is the reentrant VI that reads the simulated data and publishes it for use (Collect Data.vi). Again starting with the initialization logic, we see something very similar to the metronome VI:

Collect Data - Initialization

The main difference is that this VI needs to register to receive both dynamic UDEs. Although the VI will be generating the Delete Interval event, it also needs to be able to respond to it because that context is the best place to put the logic for deinitializing any acquisition logic that might be used. Another conceptual difference from the metronome is that this VI relates to the interval input in a fundamentally different way. For the metronome the interval is a number with a specific meaning: it is the number of milliseconds between triggers. For this data collection routine, however, it is simply a number that provides a way for the process to identify itself. The broader point is that you need to manage expectations when different processes look at the same value and see different things.

Because this VI is going to be acquiring data, and performing other operations, its event loop is also more complex. Let’s look at the 5 events it will handle:

Timeout – This application is only generating simulated data, but in most other situations, there will need to be some sort of initialization performed, like opening DAQ references or establishing connections to one or more Modbus devices, and this event is a good place to handle such initialization. The way this logic is written, it is easy to make the initialization run just once, but still have it readily available if it ever needs to be run again.

Collect Data - Timeout

However, there is other initialization that needs to be performed even for simulated data. Specifically, if the initialization logic completes with no errors, we need to tell the rest of the application that this process is open and ready for business. To record that state change, we use our “repository” VI again, but this time running the Insert logic…

Interval Registry - Insert

…which is the very soul of simplicity.

Start Addresses – This event’s purpose is to maintain the process’ internal list of addresses in response to the user starting additional addresses. In completing this work, there are two cases that it will need to address.

Collect Data - Start Addresses - Adding

First there is the case where the interval number matches the value that the process is using to identify itself. In that situation the code needs to add the new addresses to the existing list – or more correctly, it needs to add the addresses that don’t already exist in the array. This logic protects the process from what would be a very common operator mistake: adding addresses that already exist. The second situation is one where the interval number does not match the value that the process is using to identify itself.

Collect Data - Start Addresses - Delete Case

In this case, we need to enforce the rule that only one process can be polling a given address. Hence, the logic needs to remove from its array any addresses contained in the new event. Of course this operation raises the spectre of the entire polling array being emptied out, with the consequent requirement that the interval shut itself down. The logic handles that scenario with a two-step process. First it tells the rest of the application that it’s stopping by calling the registry VI using the delete logic.

Interval Registry - Delete

After removing itself from the registry, the VI fires the Stop Interval UDE to close both itself and its associated metronome process.
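To summarize the two cases in pseudocode form (the names here are placeholders of mine, not the actual event or VI names):

function onStartAddresses(eventInterval, eventAddresses) {
  if (eventInterval === myInterval) {
    // our interval: add only the addresses we aren't already polling
    eventAddresses.forEach(function(addr) {
      if (pollingList.indexOf(addr) === -1) pollingList.push(addr);
    });
  } else {
    // another interval claimed these addresses: stop polling them here
    pollingList = pollingList.filter(function(addr) {
      return eventAddresses.indexOf(addr) === -1;
    });
    if (pollingList.length === 0) {
      intervalRegistryDelete(myInterval);  // remove ourselves from the registry
      fireStopInterval();                  // shut down this clone and its metronome
    }
  }
}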

Stop Addresses – This event has logic that is similar to the previous event’s delete logic, but is simpler because the only thing that matters is whether the indicated address is in the process’ address list.

Collect Data - Stop Addresses

Get Data – This event generates an array of simulated data and passes it to the static Publish Data UDE, along with an array of addresses associated with the data values. In addition, for troubleshooting purposes, the code also writes the data to a front panel table.

Collect Data - Get Data

Stop Interval; Stop Application – Finally, this event stops this VI if either the interval or the application as a whole is shutting down.

Collect Data - Stop Interval

As the comment says, if the acquisition logic needs to be deinitialized, this is the place to put that logic. But why is the code interested in the front panel’s state? Although this VI usually runs in the background unseen, there are times when you want to be able to view its front panel so you can verify its operation. In this example, I implemented that functionality using the VI properties to force the front panel open when it starts running. This logic checks to see if the front panel is open and, if so, closes it.

Testing the Code

So with all the code implemented, here are the Subversion links to the application and the toolbox of reusable code:

Dynamic Registration – Release 1
Toolbox – Release 18

The first thing I would recommend after downloading the code is to go through it while re-reading both this post and the previous one. Oftentimes things that are a bit obscure when all you have are pictures of the code become much clearer when you can explore the code itself.

Next, run the top-level VI (Dynamic Registration.vi). When the front panel opens, click on the Add Addresses button to define some addresses. For the purpose of this example, I created a simple dialog box that lets you specify a starting address, the number of consecutive addresses to collect, and the sample interval. The starting address needs to be between 40000 and 49999, the number of consecutive addresses must be less than 1000, and the sample interval needs to be between 300 and 5000 (milliseconds). These parameter limits are set in the dialog box code – feel free to change them as you desire. Likewise, the output of this dialog box is a list of addresses, so you can also change the selection interface if you so desire.

To get started, define 5 addresses starting at 40000 with a sample interval of 1000-msec. You should see the front panel of the acquisition VI pop open, showing the 5 addresses being updated once per second. In addition, the main GUI should show data from the same 5 addresses.

Click the Add Addresses button again, but this time define 5 addresses starting at 40008, updating every 2000-msec. A second acquisition VI window will open showing the 5 new addresses, and the main GUI will show a total of 10 addresses with the results changing at different rates.

Let’s next see what happens if you specify addresses that are already being polled. Click the Add Addresses button one more time and define 5 addresses starting at 40004, and updating every 3000-msec. This action defines a range where 40004 is already being polled once a second and 40008 is being polled every 2 seconds. In response you will see a third acquisition window open, but you will observe that one address is removed from each of the two existing polling lists, and that the main GUI shows a total of 13 addresses being polled.

Finally, to test the auto-shutdown operation, click the Delete Addresses button and in the resulting dialog box tell the system to delete 5 addresses starting at 40008. Because the 2000-msec interval is emptied out, the acquisition VI window associated with that interval will close. In addition, the polling list for the 3000-msec interval will be reduced by one.

The Big Tease

So that’s about all for now on this topic, but what’s in store for next time? One of the things that I like to talk about in this venue are things that can cause unexpected complications, so next time I’m going to discuss what happens when a DLL misbehaves as you are trying to close it.

Until Next Time…
Mike…

Dynamic UDEs: the Power for Reentrant Processes

If you have an application that you want to construct from multiple parallel processes, a key requirement is signalling – telling the various parts of the application that something important has happened, or is happening, in one of the other processes. When it comes to fulfilling that requirement, a valuable tool to have at hand is the User Defined Event, or UDE. In fact, over the past year this blog has considered a variety of ways to use this tool. However, all these implementations have one thing in common: They all use static UDEs. In other words, the event that will be used to signal a particular occurrence is decided when the code is created and it never changes.

But what if you don’t know until runtime what event you want to use? For example, it is common with reentrant code that you won’t have a fixed set of UDEs because there isn’t a fixed set of VIs that are running. Sometimes you need to be able to send a message to one particular clone. In such a situation, you aren’t just sending general signals, but signals that are unique to a particular instance of a VI. The solution is to use UDEs that are dynamically generated in the same way that the reentrant VIs are.

To demonstrate this technique, the next couple of posts will highlight this use case, starting this time with a discussion of some of the design considerations. Our intro to this exploration is an application that I was once assigned to maintain.

The Problem Defined

The job asked me to expand an existing application that had been developed by one of the large LabVIEW consulting firms located here in the US. The problem was that the software wasn’t designed for expandability. Specifically, a key part of the program was a subsystem that polled a user-defined list of Modbus registers at a rate that was also user-defined. Because the user-defined inputs could change at any time, the decision was made to make the acquisition loops event-driven, and create a separate “metronome” process that would fire an acquisition event at the user-defined rate. So far, so good. The real issue is with the implementation of this concept.

Apparently, there was originally only going to be one timed interval but, as you might expect, a requirement was later added to create a second one. To meet this scope change, with as little effort as possible, the decision was made to simply duplicate all the VIs used for the first process while appending “2” to the end of the names – an expedient that is unfortunately common in code developed by “large LabVIEW consulting firms”. To make matters worse, the modularization was poor so the program was basically built around a huge cluster containing literally dozens of references for UDEs, notifiers and queues that ran through nearly every VI in the application.

In the end, the only way to implement the required functionality was to completely redesign and reimplement that portion of the code. The really ironic part is that it took me less time to implement the functionality correctly than it did to do it poorly the first time. Using this description as a jumping-off point, I obviously won’t be discussing the solution that I implemented for the original application. What we will do is use it as “inspiration” for examining techniques that could cover a wide range of similar requirements.

Getting Moving in the Right Direction

First, we should recognize that while our earlier conversation incorporates a pretty good description of what the code basically needs to do, we do need to flesh it out a bit: At run time, we need to be able to create multiple independently-timed data reads with varying intervals between reads. In addition, the results from these timed acquisitions need to be “published” somehow so they can be used by the rest of the program.

With this broader functional definition in place, we can begin to consider the appropriate API for accessing that functionality. As I have said many times before, one of the cornerstones of a good API is the concept of information hiding – the process of deciding what information to expose to the calling code, and what information to keep private. So like a politician running for reelection, our next job is to decide what to hide and where to hide it.

The basic principle in play is to hide any information that the calling code either doesn’t need to know, or which would be counter-productive for it to know. If we think about it, there are exactly three pieces of information that the calling VI actually needs to know in order to define and use an acquisition task:

  1. The Modbus address to read
  2. The interval between reads
  3. How to receive the published data

On the other hand, there is a (much) larger list of things that the calling code doesn’t need to know, among which are things like:

  1. How the Modbus is read
  2. How the timers work
  3. The signalling that the timers use
  4. How many acquisition processes there are
  5. How the timing is implemented
  6. Internal data structures
  7. etc…

Now that we have a handle on what we want to expose – and just as importantly, what we do not want seen – we can start designing the outward interface.

The Outside View

The obvious place to start is with the VI that will configure or setup the polling. Given that we have already decided that we only want to expose two parameters (addresses to read, and the read interval) we can go ahead and create a placeholder VI that provides the appropriate IO, but which is for now empty.

Start Addresses

Note that with the exception of the error cluster, this VI has no other outputs. Remember that all this VI is doing is identifying a group of addresses that some hidden “something” is going to read at the specified interval. Consequently, the only response that this VI can give is whether or not the specification process was successful. The assumption is that other parts of the software will independently report errors that occur during the data reads. In the same way, we are also going to need a VI to tell the “something” that is doing the reading that we no longer wish to poll particular addresses.

Stop Addresses

The interface to this VI is even more basic than the one for starting the polling of addresses, and the reason is simple. At this point we don’t care at what rate a given address is being polled, we just want it to stop. You could argue that knowing the polling interval would make it easier for the code to find the addresses to delete. While that is true, it would also mean that the calling code would have to keep track of the addresses that it is monitoring – which could quickly become an awful lot of redundant information for the caller to remember and manage.

Last but not least, the third interface VI that we need to specify is the data publication mechanism. To keep things simple, we should use a technique that is easy to implement. So I am picking the logical equivalent of the callback techniques found in other languages: a User Defined Event. For this application, the event will return an array of address/value pairs that the calling application can use as it desires. Note that in this implementation, all the acquisition processes will be sending their data to the same place, so this can be a static UDE.

The Test Application

Finally, while we’re talking about interfaces, let’s also look at the calling application. Because all the “heavy lifting” will be encapsulated in subVIs, the calling code can be very simple. It has buttons for identifying addresses that we wish to poll, addresses we want to stop polling, and stopping the application. To display the results, the application’s front panel incorporates a table that shows the addresses in ascending order, and the last data value acquired for that address.

Main GUI

The block diagram is, likewise, pretty plain. It is event-driven with one event for each of the three buttons on the front panel, plus one to handle the UDE that publishes the data. You can check out its code in the source later.

Crawling Under the Covers

With the front end interfaces thus defined, we now can start thinking about code that will make the interfaces do something useful. The simple part is the UDE for publishing the results because it uses the same technique that we have used many times over the past year. To summarize the implementation, a library named for the event (Publish Data.lvlib) has four subVIs: one (structured as an FGV) for creating/buffering a reference to the event, and one each for registering to receive the event, generating the event, and destroying the event. In addition, it incorporates a typedef that defines the event data.

Publish Data Event Library

The process for managing the addresses to read requires a bit more thought. To begin with, we know that there are only two address management operations: adding and removing addresses. However, we also know that if we are to conform to our API, we need to hide those explicit operations from the calling application. This situation is one of those development scenarios where the words that we use to talk about what we are doing can help or hinder our understanding of what we are trying to accomplish. To see what I mean, let’s consider a similar case that is part of LabVIEW itself: the logic for handling queues.

You may have noticed that with the built-in API you don’t “create” or “destroy” queues. Rather, you “acquire” and “release” references to the queue. While you may wonder what difference this wording makes, we need to remind ourselves that it isn’t simply a matter of an API developer running amuck with a thesaurus. It actually describes a very real and very important distinction. Instead of creating a queue, you are simply telling LabVIEW that you want to acquire a reference to a queue. Now, when you make this request, there are two possible situations:

  1. The queue doesn’t currently exist and LabVIEW needs to create one.
  2. The queue already exists so all you need is to get a reference to the one that is there.

Likewise, you don’t destroy a queue when you are done using it, you simply tell LabVIEW that you have no further need for it by releasing the reference you previously acquired. Because LabVIEW keeps track of how many open references are associated with each queue, LabVIEW can tell when the queue is no longer in use and destroy it automatically. Now consider for a moment the degree to which this hidden functionality simplifies your code. You no longer need to worry about what or who is using the queue, and when it is safe to destroy it. All that potentially complex logic is hidden in the way that the functionality is encapsulated, and the difference in terminology highlights that difference.
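To make the distinction concrete, here is a little JavaScript sketch of the acquire/release idea – my illustration of the concept, not a peek at LabVIEW’s internals:

var queues = {};   // queue storage, keyed by name

function acquireQueue(name) {
  if (!queues[name]) {                       // situation 1: doesn't exist, create it
    queues[name] = { items: [], refCount: 0 };
  }
  queues[name].refCount++;                   // situation 2: just add a reference
  return queues[name];
}

function releaseQueue(name) {
  var q = queues[name];
  if (q && --q.refCount === 0) {
    delete queues[name];                     // last reference gone: destroy automatically
  }
}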

As we are designing our API, we need to adhere to the same idea. So instead of “adding” and “removing” addresses, let’s think about this problem in terms of “starting” and “stopping” acquisition from lists of addresses. To grasp the benefits that we can garner from this change in language, let’s consider what actually needs to happen behind the scenes for each of these operations. Just to be clear, this complexity has nothing to do with how we choose to implement the functionality; it is inherent in what we are trying to do. This logic will have to be created regardless of how we structure the code.

Starting Acquisition: This might seem pretty easy, but what if a user starts acquiring data from an address at one rate, but then later changes their mind (or makes a mistake) and starts the same address at a different rate? There is no point in having the same address read at two different rates. Likewise, it is not clear that this action should be considered an error. Therefore, to do what the user is requesting you first have to remove that address from the process that is currently polling it, and then reassign it to a different (perhaps new, perhaps preexisting) acquisition process. Taking the point further, what if the address you remove from a process is the only address that it is currently polling? Removing that address would leave that acquisition loop with nothing to do, and so we need some way to stop it.

Stopping Acquisition: For its part, stopping the acquisition of addresses can hide some complexity of its own. For example, say the user identifies a list of 4 addresses to be stopped. There is no guarantee that all the addresses are being polled by the same acquisition process – and even if they are all together, we don’t know which process is reading them. This fact implies a need to be able to search all the acquisition processes for a particular address. Plus, as before, if we remove all the addresses associated with a particular interval we need to stop that acquisition task.

Remember, this functionality will always be needed, it is simply more robust (and therefore smarter) to hide it from the calling application by encapsulating it in our API.

Getting Down With the Acquisition

To this point we have described the acquisition processes as having two loops: one performs the acquisition, and one is a “metronome” function that periodically fires an event that causes the acquisition loop to acquire one scan of all the channels contained in its current configuration. In addition, both of these processes need to be reentrant so multiple copies of each can be launched as needed. Now we need to refine that basic description by specifying the rules that will govern their operation.

First, let’s state that each process is self-maintaining, both in terms of its own operation and its data. What that requirement means is that each instance of the acquisition process will maintain for itself a list of the addresses that it is polling. Consequently, when a process receives a system message (via UDE) to stop polling one or more addresses, it will examine its own list of addresses and remove any that are in the “stop polling” list.

Second, we will state that there will be only one acquisition process running for each acquisition interval. Hence, if a process receives a system message specifying its own polling interval, it will add those addresses to the list of addresses it is already polling. For example, say there is a process running that is acquiring data from 4 addresses every 1000 msec and it receives a system message that the user wants to start an additional 5 addresses at that same sample interval. The code will add those 5 new addresses to the 4 that it is already polling.

Third, if an acquisition process receives a system start message that does not specify its polling interval, it will prevent duplicate polling by automatically searching its configuration for the addresses in the message and deleting any that it finds.

Fourth, if after stopping one or more addresses a process finds that its polling list is empty, it will shut itself down by firing an event that is unique to that particular instance.

Fifth, in the event that the user starts addresses for an interval that is not currently running, the logic incorporates a manager function that will start up a new acquisition process to handle that interval.

Let’s Build Some Code

Finally we are ready to start writing some code to materialize what has to this point been mostly words. A good place to start this work is with the manager VI that will launch new acquisition processes for us. I like this approach because it will give us the opportunity to look at how the pieces fit together.

How the Pieces Fit Together

The operational rules we listed earlier provide a number of clues as to where we are going next. To begin with, we talked a lot about messages. This information by itself is enough to let us design and implement the inner workings of the two interface routines we prototyped earlier. They are simply the event generation VIs of two more static UDEs (Start Addresses and Stop Addresses) – and we already know how to build those.

Next, because we have defined what the manager basically needs to do we can create an event-driven shell for it that can respond to two events: Stop Application and Start Addresses. Again, if you have been reading this blog for a while, the Stop Application event is an “old friend” so we will concentrate on the manager’s response to the Start Addresses event. Since this response will vary depending upon whether or not the specified interval already exists, we need to design a way for the code to make that determination, while continuing to bear in mind the principles of information hiding, to wit, the manager doesn’t need to know how the determination was made, just what the result was.

Start Addresses Event Handler

To support this functionality, we will create a VI (called Interval Registry.vi) that the acquisition processes will maintain as they start and stop. This function will support three operations: Check, Insert and Delete. For the Check operation we see here, the routine checks to see if there is already a process running at the indicated sample interval. If there isn’t, the VI returns a false Boolean value that causes the manager to call a launcher VI (Process Launcher.vi) that kicks off all the VIs needed to service the interval. If, however, the interval is already running there is nothing more for the manager to do, so the true case (not shown) does nothing.

Turning our attention now to the VI that launches the new process VIs, we have built this sort of launcher several times before, so we already know its basic structure. What we need clarity on is the details of the data that needs to be passed to the two VIs.

Starting with the simpler of the two (which we will call Metronome.vi), its only purpose is to fire an acquisition event at some predefined interval. However, if it is going to stop when it has nothing more to do, it also needs to know what event will tell it to stop. Note that both of these events are specific to each interval that is created. Consequently, they both need to be created on the fly when the acquisition process is launched, with their references passed into the new clone as parameters.

In the same way, looking at the acquisition VI (we’ll call it Modbus Reader.vi) we see that it is going to be receiving the acquisition event that the metronome fires, and is also going to need to shut down like the metronome, so it will need to get references to the same two events. The only other messages it will need to receive are the static ones that we have discussed earlier, but because they are static we don’t have to be concerned with them here.

So we add in the event definition logic, and this is what our finished launcher VI looks like:

Process Launcher
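In pseudocode terms, the launcher’s job boils down to this (the function names are placeholders standing in for the event-creation and dynamic-launch logic, not real calls):

function processLauncher(interval) {
  // create the two events that are unique to this instance
  var acquireEvent = createUserEvent();   // fired by the metronome
  var stopEvent = createUserEvent();      // fired when the interval shuts down

  // pass the references into both clones as parameters
  launchClone('Metronome.vi', { interval: interval,
                                acquireEvent: acquireEvent,
                                stopEvent: stopEvent });
  launchClone('Modbus Reader.vi', { interval: interval,
                                    acquireEvent: acquireEvent,
                                    stopEvent: stopEvent });
}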

The Big Tease

The next thing we need to look into is the VIs that are being launched and the logic that resides inside Interval Registry.vi, but this post is getting long so that will have to wait until next time.

Until Next Time…
Mike…

If the socket fits, wear it…

One of this blog’s recurring themes is the importance of modularity as an expression of the age-old tactic of “divide and conquer”. What is perhaps new (or at least daunting) to some readers is the idea of spreading tasks across not just separate processes on the same computer, but across multiple networked computers. Of course if this strategy is to be successful, the key is communications, and to that end we have been examining ways of incorporating remote access capabilities into our testbed application.

Last time out, we implemented the first interface for remote applications to monitor and control our application. That interface took the form of a custom TCP protocol that used packets of JSON data to carry messages over a vanilla TCP connection. I started there because it provides a simplified mechanism for exploring some of the issues concerning basic code structure. Although this interface worked well, and in fact would prove adequate for a wide variety of applications, it did exhibit one big issue. To wit, clients had to be written in a specific way in order to use it. This fact is a problem for many applications because users are growing increasingly reluctant to install special software. They want to know why they need to load special code to do a job. The way they see it, their PCs (and cell phones for that matter) come with a bunch of networking software preloaded on them – and they have a valid point! Why should they have to install something new?

A complete answer to that question is far beyond the scope of this post, but we can spend a few useful moments considering one small niche of the overall problem, and a standardized solution to that problem. Specifically, how can we leverage some of those networking tools (read: browsers) to support remote access to our testbed application? As we have discussed before, the web environment provides ample tools for creating some really nice interfaces. The real sticking point is how that “really nice” interface can communicate with the testbed application. You may recall that a while back we considered one technique that I characterized as a “drop box” solution. The idea was to take advantage of the database underlying a web application by using it to mediate the communications. In other words, the LabVIEW application writes new data to the database and the web application reads and displays the data from the database – hence the “drop box” appellation.

While we might be able to force-fit this approach into providing a control capability, it would impose a couple big problems: First, it would mean that the local application would have to be constantly polling a remote database to see if there have been any changes. Second, it would be really, Really, REALLY slow. We need something faster. We need something more interactive. We need WebSockets.

What are WebSockets?

Simply put, the name WebSockets refers to a message-based protocol that was standardized in 2011 as RFC 6455. The protocol that the standard defines is low-overhead, full-duplex and content agnostic, meaning that it can carry data of any type – even JSON-encoded text data (hint, hint).

An interesting aspect of this protocol is that its default port for establishing a connection is port 80 – the same as the default port for HTTP. While this built-in conflict might be confusing, it actually makes sense. You see when a client initiates an HTTP connection, the first thing it does is pass to the server a number of headers that provide information on the requested connection. One of those headers allows the client to request an Upgrade connection. The original purpose of this header was to allow the client to request an upgraded connection with, for example, enhanced security. However, in recent years it has become a mechanism to allow multiple protocols to listen to the same port.

The way the process works is simple: the client initiates a normal HTTP connection to the server but sets the request headers to indicate that it is requesting a specific non-HTTP protocol. In the case of a request for the WebSockets protocol, the upgrade value is websocket. The server responds to this request with a return code of 101 (Switching Protocols). From that point on, all further communications are made using the WebSockets protocol. It is important to note that while this initial handshake leads some to assume that WebSockets is in some way dependent upon, or rides on top of, the HTTP protocol, such is not the case. Aside from the initial connection handshake, the WebSockets protocol is a distinct process that shares nothing with HTTP. Consequently, while the most common application of the technique might be web-based client-server operation, the WebSockets protocol is equally well-suited to peer-to-peer messaging. The only limitation is that one of the two peers needs to be able to respond correctly to the initial handshake.
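For the curious, the opening exchange looks something like this (the key and accept strings shown are the sample values from RFC 6455):

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=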

It is also worth understanding why the basic idea of using Port 80 for the initial connection is significant. A conversation on Stackoverflow gives a pretty good explanation of several issues, but for me the major advantage of using port 80 is that it avoids IT-induced complications. Many corporate IT departments will lock down ports that they don’t recognize. While there are some that try to lock down port 80, it is much less common. Before continuing on, if you’re interested, you also can find the details of the initial handshake here.

The LabVIEW Connection

Ok, so it sounds like WebSockets could definitely have a place in our communications toolbox, but how are we going to take advantage of it from LabVIEW? The answer to that question lies in the work of LabVIEW CLA Sam Sharp. He has developed a set of “pure G” VIs that allows you to implement either side of the connection. Because these are written in nothing but G, there are no DLLs involved so they can run equally well on any supported LabVIEW platform. Making the deal even sweeter, he has documented his code, created a tutorial on them, released his VIs for anyone to use, and all the compensation he requests is “…it would be great if you credit me…”. So, Sam, may you have a million click-throughs.

The following discussion is written assuming Sam’s VIs, which I have converted to LabVIEW 2015. One quick note: if you don’t or can’t use the VIPM, you can still use the *.vip file – all you have to do is change the “v” to a “z” and you are good to go. As a first taste of how these VIs work, let’s look (like we did with the TCP example last time) at an oversimplified example to get a sense of the overall logical flow.

The Simplest WebSockets Server

For our purposes here, the testbed application will be the “server” so our code starts by listening for a connection attempt on the default Port 80. When it receives a connection, a reference to that connection is passed to a VI (DoHandshake.vi) that implements the initial handshake to activate the WebSockets protocol. Note that a key part of this process is the passing of a couple of “magic strings” between the client and server to validate the connection and protocol selection.

With the handshake completed and both ends of the connection satisfied that the WebSockets protocol is going to be used, the following subVI (Read.vi) reads a data packet from the client that, in our application, represents a data or control request. Next comes the subVI (Write.vi) that writes a response back to the client. Finally the code calls a subVI (Close.vi) that sends a WebSockets command to close the connection, and then closes the TCP connection reference that LabVIEW uses.

Building the Interface

To build this bare logic into something usable, the structure of the server task is essentially identical to that of the TCP process we built last time. In fact, the only differences between the two are the ports to which they listen, and the specific reentrant handlers that they launch in response to a TCP connection. So let’s concentrate on that alternate process. During initialization, the handler calls the subVI that implements the initial handshake.

Handler Initialization

In addition to the connection reference, this routine also outputs a string that is the URI that was used to establish the connection. Although we don’t need it for our application, it could be used to pass additional information to the server. Once initialization is complete the main event loop starts, but unlike the TCP handler we wrote earlier, it is not based around a state-machine structure.

Main Event Loop

While we could have broken up the process into separate states, the fact that Sam has provided excellent subVIs implementing the read and write functionality makes such a structure feel a bit contrived – or at least it does to me. When the timeout event fires, the code waits 500 msec for the first user data coming from the connection. If the read times out, the loop waits for another 500 msec and then tries again. This polling technique is important because it allows other things (like the system shutdown event) to interrupt the process. Likewise, because we are waiting for a response that is, at least potentially, coming from a remote computer, the polling allows us to wait as long as necessary for the response.

When the request data does arrive, the JSON data string is processed by a pair of subVIs that we originally created for the TCP protocol handler. They create the appropriate Remote Access Commands object and pass it on to the dynamic dispatch VI (Process Command.vi) that executes the command and returns the response. The response data is next flattened to a JSON string and written to the connection. Because the current implementation assumes a single request/response cycle per connection, the code closes the WebSockets connection and the TCP connection reference. However, it would be easy to visualize a structure that would not close the connection, but rather repeat one of the data read commands at a timed interval to create a remote “live” interface.

In terms of the errors that can occur during this process, the code has to correctly respond to two specific error codes. First is error code 56, a built-in LabVIEW error that flags a network operation timeout. Because this is the error that is generated if the server hasn’t yet received the client’s request, the code basically ignores it. Second is error code 6066, which is a WebSockets-specific error defined in RFC 6455 to flag the situation where the remote client closes a WebSockets connection. Our code responds by closing the TCP connection reference and stopping the loop.

Testing our Work

Now that we have our new server up and running we need to be able to test its operation. However, rather than creating another LabVIEW application to act as the test platform, I built it into a web application. The interface consists of a main screen that provides a pop-up menu for selecting what you want to do and 5 other screens, each of which focus on a specific control action. As these things go, I guess it’s not a great web application, but it is serviceable enough for our purposes. If you need a great application, talk to Sam Sharp – that’s what his company does.

The HTML and CSS

As I have preached many times before, one of the things that makes web development powerful is the strict “division of labor” between its various components: the HTML defines the content, the CSS specifies how the content should look, JavaScript implements client-side interactivity, and a variety of languages (including JavaScript!) provide server-side programmability. So let’s start with a quick look at the HTML that defines my web interface, and the CSS that makes it look good in spite of me… In order to provide some context for the following discussion, here is what the main screen looks like:

Main Screen

It has a title, a header and a pop-up menu from which you can select what you want to do. As a demonstration of the effect that CSS can have, here’s the part of the HTML that creates the pop-up menu.

<button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown">Available Actions<span class="caret"></span></button>
<ul class="dropdown-menu">
  <li><a href="ReadGraphData.html">Read Graph Data</a></li>
  <li><a href="ReadGraphImage.html">Read Graph Image</a></li>
  <li class="divider"></li>
  <li><a href="SetAcquisitionRate.html">Set Acquisition Rate</a></li>
  <li><a href="SetDataBufferDepth.html">Set Data Buffer Depth</a></li>
  <li><a href="SetTCParameters.html">Set TC Parameters</a></li>
</ul>

You’ll notice that the pop-up menu is constructed from two separate elements: a button and an unordered list – normally a set of bullet points – where each item in the list is defined as an anchor with a link to one of the other pages. However, as the picture shows, when this code runs we don’t see a button and a set of bullet points, we see one pop-up menu. How can this be? The magic lies in CSS that dramatically changes the appearance of these elements to give them the look of a menu. Likewise, some custom JavaScript makes the visually transformed elements behave like a menu. What is very cool, however, is that the resources making this transformation possible are part of a standard package, called Twitter Bootstrap, that is free for anyone to use. In a similar vein, let’s look at the page that displays a plot of data acquired from the testbed application:

Graph Screen - Blank

At the top of the screen there’s a small form where the user enters information defining the task to be performed, and a button to initiate the operation that the user is requesting. Below that form is a blank area where the software will draw the graph of the acquired data. Let’s look at two specific bits of HTML, first the code that builds the data entry form…

<form>
  <fieldset class="input-box">
    <legend>View Graph Data</legend>
    <input type="text" class="str-input" id="ipAddr" value="localhost"> <label for="ipAddr">Host</label><br>
    <input type="number" class="num-input" id="portNum" value="80"> <label for="portNum">Port Number</label><br>
    <select id="targetPlugin">
      <option value="Sine Source">Sine Source</option>
      <option value="Ramp Source">Ramp Source</option>
      <option value="Hen House TC">Hen House TC</option>
      <option value="Dog House TC">Dog House TC</option>
      <option value="Out House TC">Out House TC</option>
    </select> <label for="targetPlugin">Select Target for Action</label><br>
    <input type="button" id="just-submit-button" value="Send Command">
  </fieldset>
</form>

…and now the code that defines the graph:

<div id="container" style="min-width: 310px; height: 400px; margin: 0 auto"></div>

But something seems to be missing. The first snippet will create data-entry fields and a button, but what happens when the button is clicked? Apparently, nothing. Likewise, consider the graphing element. We can see how large the area is to be, but where is the data coming from? And where are the graphing operations? To answer those questions, we need to look elsewhere.

The JavaScript

The power behind much of the web in general – and our application in particular – is the interpreted language JavaScript. In addition to being able to access a wide range of resources from within the browser, JavaScript can interact directly with web pages and their underlying structures. For folks who like to split hairs, JavaScript is “object-based” because it does support the concept of objects, but it is not “object-oriented” because it doesn’t explicitly support classes.

More important for what we are going to be doing is that it supports the concept of “callbacks” (read: User Defined Events). In other words, you can tell JavaScript to automatically perform functions when certain events occur. For example, because our JavaScript code is going to be interacting with the web page that loaded it, we need to be sure that the page is fully loaded before that code starts running. In order to accomplish that goal, the JavaScript file associated with the page includes this structure:

$(window).load(function() {
	...  // a lot of stuff goes here
});

This code creates a callback for the .load() event. The parameter passed to the .load() event is a reference to the function that JavaScript will run when the event fires. As is common in JavaScript, the code declares the function in line so everything between the opening and closing curly brackets will be executed when the event fires. So after declaring a few variables the code includes this:

$("#just-submit-button").click(function(){
  //The code here retrieves all of the input data and formats the request.
  target = $("#targetPlugin").val();
  remAddr = $("#ipAddr").val();
  remPort = $("#portNum").val();
  jsonData = '\"Read Graph Data\":' + JSON.stringify({"Target":target}); 

  // the websocket logic
  wc_connect(remAddr, remPort, parseData);
  wc_send(jsonData);
});

So the first thing the code does when the page finishes loading is register another callback, but this one defines what JavaScript will do when the user clicks the button in the form. The first three lines read the values of the form data entry fields, and the fourth assembles that data into the JSON string that will be sent to the server. The last two lines are the interface to the WebSockets logic. The first of these lines establishes the connection to the server, while the other one sends the command. But what about the response? Shouldn’t there be a line with a command like wc_receive? You really should be expecting this by now: Inside the wc_connect command the code registers another callback to handle the response.
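For a sense of what that registration involves, wc_connect is presumably built around the browser’s native WebSocket object – something along these lines (my guess at the shape of the code, not Sam’s actual implementation):

var ws;

function wc_connect(addr, port, parser) {
  ws = new WebSocket('ws://' + addr + ':' + port);
  ws.onmessage = function(event) {    // the callback described below
    parser(event.data);               // hand the payload to the page's parser
  };
}

function wc_send(message) {
  ws.onopen = function() {            // wait for the handshake to finish
    ws.send(message);
  };
}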

The event (called onmessage) that is tied to this callback fires when a message is received from the server. The code implementing the callback resides in the file websockets.js (in case you’re curious) and its job is to read the JSON response data packet, check for errors, parse the data and generate the output – the graph. The only question now is, “How does it know how to parse the data and generate the graph?” And the answer is (all together now): “There’s another callback!” See the third parameter of wc_connect, the one named parseData? That value is actually a reference to a function contained in the JavaScript code for this particular page, and is an example of how JavaScript implements a “plugin architecture”. So here is how the data parser for this page starts…

var parseData = function(rawData){
  var plotData = JSON.parse(rawData);
  // trim decimal places
  plotData.forEach(function(element, index, array){
    plotData[index] =  Number(element.toFixed(3));
  });

At this point in the process, the data portion of the response is still a string, so to make processing the data easier, we first parse it to convert it into a JSON object. In the case of this particular response, the resulting object is an array of numbers – really long numbers. You see, when LabVIEW encodes a number as a JSON string it includes far more digits of precision than are really needed, so forEach element in the array, I convert the value to a number with 3 decimal places. Here’s the rest of the code:

  // logic for drawing the graph
  $('#container').highcharts({
    title: { text: 'Recent Data', align: 'center' },
    subtitle: { text: 'System: '+remAddr+':'+remPort, align: 'center' },
    xAxis: { title: { text: 'Samples' }, tickInterval: 1 },
    yAxis: { title: { text: 'Amplitude' }, gridLineColor: "#D8D8D8" },
    tooltip: { headerFormat: '<small>Sample: {point.key}</small><br>' },
    series: [{ turboThreshold: 0, name: target, data: plotData, lineWidth: 1, marker:{enabled: false}, color: '#000000' }]
  });
}

This is the code that does the plotting, and as we shall see in a moment, this small amount of code produces a beautiful and highly functional chart that displays the values of individual points in a tooltip when you hover over them with the mouse, and even provides a pop-up menu that allows you to save the plot image in a variety of image formats. This functionality is possible thanks to a plotting library called Highcharts that uses the structure defined in the HTML as a placeholder for what it is going to draw. I have used this library before in demonstrations because in my experience it is stable, easy to use, and very well-documented. I also like the fact that regardless of what kind of plot I am trying to create, they have a demo online that gets me about 95% of the way to my goal. Please note that this library is a commercial product, but they make it available for free for “non-Commercial” applications – however, even for commercial usage, the one-time license fee is really pretty reasonable. Finally, even though it doesn’t appear that they actively police their licensing with things like crippled versions or the like, if you are using this on a professional project, pay the people. They have certainly earned their bread.

Testing the Pages

So at last we have our server in place and some test web pages (and supporting code) created. We need to consider how to run the web client. Here you have three options: First, you could just double-click the top-level file in Windows Explorer and Windows will dutifully open the file in your browser and everything will work as it should. Second, if you have access to an existing web server you can copy the dozen or so files to it and test it from there. Third, you could create a small temporary server strictly for testing. If you choose that path, a good option is a server called Express.js. As its name implies, it is written in JavaScript, which means it runs under the Node.js execution engine. You can set one up sufficient to test our current code in about 10 minutes – including the time required to download the code.
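If you go the Express route, a throwaway static server really is only a few lines. This sketch assumes Node.js is installed, Express has been added via npm install express, and the web files are sitting in a folder named public:

var express = require('express');
var app = express();

app.use(express.static('public'));   // serve the HTML, CSS and JS files

app.listen(8080, function() {
  console.log('Test server running at http://localhost:8080');
});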

The overall test process is similar to what we did to test the custom TCP server last time. The only significant change is the interface. First, test things that should work to make sure they do. Second, test the things that shouldn’t work and make sure they don’t. Here are examples of what you can expect to see on the graphing and image-fetch screens:

Graph Screen

Image Screen

Testbed App – Release 20
Toolbox – Release 17
WebSockets Client – Release 1

Big Tease

So what’s next? We have looked at access via a custom TCP interface and the standard WebSockets interface. How about next time we look at how to embed this connectivity in a C++ program using a DLL?

Until Next Time…
Mike…

Laying the good foundation, with TCP…

In case you are just joining the conversation, we are in the midst of a project to modify the testbed application that we have been slowly assembling over the past year. I would heartily recommend that you take some time and review the past posts.

To this point in our latest expansion project, we have created a remote control interface, embedded it in our testbed and performed some basic testing to verify that the interface works. Our next step is to create the first of several “middleware” processes. I call them middleware because they sort of sit between our application’s basic code and the external applications and users. In future installments we will look at middleware for .NET, ActiveX and WebSockets, but we will start with a more fundamental interface: TCP/IP.

The Roadbed for the Information Highway

Aside from giving me the opportunity to air out some tired metaphors, TCP/IP is a good place to start because it gives us the opportunity to examine the protocol that underlies a host of other connection options.

Just the basics

Although the idea of creating a “server” can have a certain mystique, there really isn’t much to it – at least when you are working with LabVIEW. The underlying assumption of the process is that there is something monitoring the computer’s network interface waiting for a client application to request a TCP/IP connection. In network parlance, this “something” is called a “listener” because it listens to the Ethernet interface for a connection. However, a given listener isn’t simply listening for any connection attempt; rather, the network standards define the ability to create multiple “ports” on a single interface, and then associate particular ports with particular applications. Thus, when you create the listener you have to tell it what port it is to monitor. In theory, a port number can be any U16 value, but existing standards specify what sorts of traffic are expected on certain port numbers. For example, by default HTTP connections are expected on ports 80 or 8080, port 21 is the default for FTP, and LabVIEW by default listens to port 3363. All you have to do is pick a number that isn’t being used for anything else on your computer. To create a listener in LabVIEW, there is a built-in function called TCP Create Listener. It expects a port number, and returns a reference to the listener that it creates – or an error if you pick a port to which some other application is already listening.

Once you have created the listener, you have to tell it to start listening by calling the built-in function TCP Wait On Listener. As its name implies, it waits until a connection is made on the associated port, though you will typically want to specify a timeout. When this function sees and establishes an incoming connection it outputs a new reference specific to that particular connection. A connection handler VI can then use that reference to manage the interactions with that particular remote device or process.

Finally, when you are done with your work, you kill the server by closing the listener reference (TCP Close Connection), along with any connection references that you have open. Putting these three phases together, you come up with something like this.

The Simplest TCP Server

This simple code creates a listener, waits for a connection, services that connection (it reads 4 bytes from it), and then quits. While this code works, it isn’t really very useful. For example, what good is a server that only waits for one connection and then quits? Thankfully, it’s not hard to expand this example. All you have to do is turn it into a mini state-machine.
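By the way, LabVIEW’s TCP primitives map closely onto the Berkeley sockets API, so if you want to see the same three-phase structure in a text language, here is a minimal sketch in Python (the port number is simply the LabVIEW default mentioned above):

import socket

# Phase 1: create the listener on an unused port (3363 is LabVIEW's default)
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", 3363))
listener.listen()

# Phase 2: wait for a connection, then service it (here, read 4 bytes)
connection, remote_address = listener.accept()
data = connection.recv(4)

# Phase 3: close the connection reference and then the listener
connection.close()
listener.close()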

One Step at a Time

As usual, the state-machine is built into the timeout event, so the loop carries a shift register pointing to the next state to be executed, and a second one holding the delay that the code will impose before going to that state. But before we get into the specifics, here’s a state diagram showing the process’ basic flow.

State Diagram

Execution starts with the Initialize Listener state. Its main job at this point is to create the TCP listener. Next is the aptly named state, Wait for Connection. It patiently waits for a connection by looping back to itself with a short timeout. As long as there is no connection established, this state will execute over and over again. This series of short waits gives other events (like the one for shutting down the server) a chance to execute.

When a connection is made, the machine transitions to the Spawn Handler state. Since it is critical that the state machine gets back to waiting for a new connection as soon as possible, this state dynamically launches a reentrant connection handler VI and immediately transitions back to the Wait for Connection state.

The state machine continues ping-ponging between these last two states until the server is requested to stop. At that point, the code transitions to the Close Listener state which disposes of the listener and stops the state machine. So let’s look at some real code to implement these logical states – which, by the way, resides in a new process VI named TCP-IP Server.vi.
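Before we dig into the individual LabVIEW states, here is a hedged sketch of the same state machine in Python. The state names mirror the diagram; a thread stands in for each reentrant handler clone, and the details (the port, the stop signal, the handler body) are placeholder assumptions:

import socket
import threading

def handle_connection(connection):
    # Placeholder for the reentrant connection handler discussed later
    try:
        pass  # service the remote client here
    finally:
        connection.close()

def run_server(port, stop_event):
    state = "Initialize Listener"
    listener = None
    while True:
        if state == "Initialize Listener":
            listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            listener.bind(("", port))
            listener.listen()
            listener.settimeout(0.005)       # the short "Wait On Listener" timeout
            state = "Wait for Connection"
        elif state == "Wait for Connection":
            if stop_event.is_set():
                state = "Close Listener"
                continue
            try:
                connection, _ = listener.accept()
                state = "Spawn Handler"
            except socket.timeout:
                pass                          # no connection yet; loop and retry
        elif state == "Spawn Handler":
            # Get back to listening ASAP by handing the work to another thread
            threading.Thread(target=handle_connection, args=(connection,)).start()
            state = "Wait for Connection"
        elif state == "Close Listener":
            listener.close()
            break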

The Initialize Listener State

This state at present only executes once, and its job is to create the listener that accepts connections from remote clients. The TCP Create Listener node has two inputs, the first of which is the port that the listener will monitor. Although I could have hard-coded this number, I instead chose to derive this value from the application’s Server.Port property. In a standalone executable, the application reads this value from its INI file at start-up, thus making it reconfigurable after the application is deployed. If the server.tcp.port key does not exist in the INI file, the runtime engine defaults to LabVIEW’s official port number, 3363.

Initialize Listener

When running in the development environment, this value is still reconfigurable, but it is set through the My Computer target’s VI Server settings. To change this value, right-click on My Computer in the project explorer window and select Properties. In the resulting dialog box, select the VI Server Category. At this point, the port number field is visible in the Protocols section of the VI Server page, but it is disabled. To edit this value, check the TCP/IP box to enable the setting, make the desired change and then uncheck the TCP/IP box, and click the OK button. It is critical that you remember to uncheck TCP/IP before leaving this setting. If you don’t, the project will be linked to the specified port and the TCP server in the testbed application will throw an error 60 when it tries to start.

The other input to the TCP Create Listener node is a timeout. However, this isn’t the time that the node will wait to finish creating the listener. Rather, it specifies the amount of time that the listener will wait for a connection to complete once a connection attempt starts. We will be testing this code on a single computer and so don’t have to worry about such things as the network going down – even momentarily. In the broader world, though, there are a plethora of opportunities for things to go wrong. For example, the network could go south while a client is in the middle of connecting to our server, and this timeout addresses exactly that sort of situation.

The Wait for Connection State

This state waits for connection attempts, and when one comes, completes the connection. Unfortunately, LabVIEW doesn’t support events based on a connection attempt so this operation takes the form of a polling operation where the code checks for a connection attempt, and if there is none, waits a short period of time and then checks again. The short wait period is needed to give the process as a whole the chance to respond to other events that might occur.

Wait for Connection

The code implementing this logic starts with a call to the built-in TCP Wait On Listener node with a very short (5-msec) timeout. If there is no connection attempt pending when the call is made, or an attempt is not received during that 5-msec window, the node terminates with an error code 56. The following subVI (Clear Errors.vi) looks for, and traps, that error code so its occurrence can be used to decide what to do next. If the subVI finds an error 56, the following logic repeats the current state and sets the timeout to 1000-msec. If there is no error, the next state to be executed is Spawn Handler and the timeout is 0.

If there is a successful connection attempt, the TCP Wait On Listener node also outputs a new reference that is unique to that particular connection. This new reference is passed to a shift-register that makes it available to the next state.

The Spawn Handler State

In this state, the code calls a subVI (Launch Connection Handler.vi) that spawns a process to handle the remote connection established in the previous state. This connection handler takes the form of a reentrant VI that accepts two inputs: a reference to a TCP connection and a boolean input that enables debugging operations – which at the current time consist of opening the clone’s front panel when it launches, and closing it when the clone stops.

Spawn Handler

It is important that the connection handler be a reentrant process for two reasons: First, we want the code to be able to handle more than one connection at a time. Second, the listener needs to get back to listening for another new connection as quickly as possible. We’ll discuss exactly what goes into the connection handler in a bit.

The Close Listener State

Finally, when the process is stopping, this event closes open connections, sets the timeout to -1, and stops the event loop.

Close Listener

But why are there two connections to be closed? Doesn’t the connection handler that gets launched to manage the remote connection handle closing that reference? While that is true, relying on it alone is flawed. There is a small, but finite, delay between when the remote connection is completed and the Spawn Handler state starts executing. If the command to stop should occur during that small window of time, the handler will never be launched, and so can’t close that new connection and its associated reference.

Turning States into a Plugin

Now that we have an understanding of the process’ basic operation, we need to wrap a bit more logic around it to turn it into usable code.

Adding Shutdowns and Error Handling

To begin with, if this new process is going to live happily inside the structure we have already defined for testbed application plugins, it is going to need a mechanism to shut itself down when the rest of the application stops. Since that mechanism is already defined, all we have to do is register for the correct event (Stop Application) and add an event handler to give it something to do.

Loop Shutdown

Nothing too surprising here: When the shutdown event fires, the handler sets the next state to be executed to Close Listener and the timeout to 0. Note that it does not actually stop the loop – if it did the last state (which closes the listener reference) would never get the chance to execute. Finally, we also need to provide for error handling…

Add in Error Handling

…but as with the shutdown logic, this enhancement basically consists of adding in existing code. In this case, the application’s standard error reporting VI.

Defining the Protocol

With the new middleware plugin ensconced happily in the testbed framework, we need to create the reentrant connection handler that will handle the network interactions. However, before we can do that we need to define exactly what the communications protocol will look like. In later posts, I will present implementations of a couple of standardized protocols, but for now let’s explore the overall communications process by “rolling our own”.

As a quick aside, you may have noticed that I have been throwing around the word “protocol” a lot lately. Last time, I talked about creating a safe protocol for remote access. Then this time we discussed the TCP protocol, and now I am using the word again to describe the data we will be sending over our TCP connection. A key concept in networking is the idea of layers. We have discussed the TCP protocol for making connections, but that isn’t the whole story. TCP is built on top of a lower-level protocol called IP – which is itself built on even lower-level protocols for handling such things as physical interfaces. In addition, this protocol stack can also extend upwards. For example, VI Server is at least partially built on top of TCP, and we are now going to create our own protocol that will define how we want to communicate over TCP.

This layering may seem confusing, but it offers immense value because each layer is a modular entity that can be swapped out without disrupting everything else. For example, if you swap out the NIC (Network Interface Card) in your computer, the only parts of the stack that need to change are the very lowest levels that interface to the hardware.

The first thing we need to do is define the data that will be passed back and forth over the connection, and how that data will be represented while it is in the TCP communications channel. Taking the more basic decision first, let’s look at how we want to represent the data. Ideally, we want a data representation that is flexible in terms of capability, is rigorous in its data representations and is easy to generate in even primitive languages like C++. The first standard that was created to fill this niche was XML, a spin-off of SGML (the same standard from which HTML descends). The problem is that while it excels on the first two points, the third is a problem because when used to encode small data structures the same features that make it incredibly flexible and rigorous conspire to make it very verbose. Or to put it another way, for small data structures the data density in an XML document is very low.

Fortunately, there is an alternative that is perfect for what we need to do: JSON. The acronym stands for “JavaScript Object Notation”, and as the name implies, it is the notation originally used to facilitate the passing of data within JavaScript applications. The neat part is that a lot of the JSON concepts map really well to native LabVIEW data structures. For example, in terms of datatypes, you can have strings, numbers and booleans, as well as arrays of those datatypes. When you define a JSON object, you define it as a collection of those basic datatypes – sort of like what we do with clusters in LabVIEW. But (as they say on the infomercials), “Wait there’s more…” JSON also allows you to include other JSON objects in the definition of a new object, just as LabVIEW lets us embed clusters within clusters. Finally, to put icing on the cake, nearly every programming language on the planet (including LabVIEW) incorporates support for this standard.

To see how this will work, let’s consider the case of the temperature controller parameters. When wanting to configure these values, the remote application will need to send the following string: (Note: As with JavaScript itself, the presence of “white space” in JSON representations is not significant. I’m showing this “pretty-printed” to make it easier to understand.)

{
    "Target":"Dog House TC", 
    "Data":{
        "Error High Level":100,
        "Warning High Level":90,
        "Warning Low Level":70,
        "Error Low Level":60,
        "Sample Interval":1
    }
}

This string defines a JSON object that contains two items. The first is labeled Target and it holds a string identifying the specific plugin that it wants to configure – in this case the Dog House TC. In the same way, the second item is labeled Data, but look at its value! If you think that looks like another JSON object definition, you’d be right. This sub-object has 5 values representing the individual parameters needed to configure a temperature controller. In case you’re wondering, this is what the code looks like that parses this string back into a LabVIEW data structure:

Unflattening JSON

That’s right, all it takes is one built-in function and one typedef cluster. The magic lies in the fact that the string and the cluster represent the exact same logical structure so it is very easy for LabVIEW’s built-in functions to map from one to the other.

The Unflattened JSON Data

The other thing to note is that the Sample Interval value in the cluster has a unit associated with it, in this case milliseconds. The way LabVIEW handles this situation is consistent with how it handles units in general: When converting data to a unitless form (like a JSON value) it expresses the value using the base unit for the type of data that it is. In the example shown, Sample Interval is time, and the base unit for time is seconds, so LabVIEW expresses the 1000 msecs as 1 sec in JSON. Likewise when unflattening the string back to a LabVIEW data structure, the function interprets the input value in the value’s base units as defined in the cluster.
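This one-step mapping is not unique to LabVIEW. As a point of comparison, here is a hedged Python sketch that parses the same example string; note that the nested Data object simply comes back as a nested structure, and that Sample Interval arrives in base units (seconds):

import json

command_string = (
    '{"Target":"Dog House TC","Data":{"Error High Level":100,'
    '"Warning High Level":90,"Warning Low Level":70,'
    '"Error Low Level":60,"Sample Interval":1}}'
)

command = json.loads(command_string)       # one built-in call, like Unflatten From JSON
print(command["Target"])                   # Dog House TC
print(command["Data"]["Sample Interval"])  # 1 (seconds - the base unit for time)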

We are about done with what our message will look like, but there are still a couple of things we need to add before we can start shooting our data down a wire. To begin with, we need to remember that Ethernet is a serial protocol and as such it’s much easier to use if a receiver can know ahead of time how much data to expect. To meet that need, we will prepend a 2-byte binary value that is the total message length. The other thing we need is some way to tell whether the message arrived intact and without corruption, so we will also append a 2-byte CRC. Moreover, to make the CRC easy for other applications to generate, we will use the standard 16-bit CCITT form of the calculation. So this is what one of our command data packets will look like:

Message Format
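To make the framing concrete, here is a hedged Python sketch of a packet builder. It assumes the 2-byte count is a big-endian U16 covering the JSON payload plus the trailing CRC, and that the CRC uses the common CRC-16/CCITT parameters (polynomial 0x1021, initial value 0xFFFF, no final XOR) – check both assumptions against your own implementation before relying on them:

import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    # Bitwise CRC-16/CCITT, polynomial 0x1021
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_packet(json_text: str) -> bytes:
    payload = json_text.encode("utf-8")
    crc = crc16_ccitt(payload)
    length = len(payload) + 2                      # payload plus the 2-byte CRC
    return struct.pack(">H", length) + payload + struct.pack(">H", crc)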

In the same way, we can use the same basic structure for response messages. All we have to do is redefine the JSON “payload” as a JSON object with two items: a numeric error code (where 0 = “No Error”), and a string that contains any data that the response needs to return. As you would expect, this string would itself be another JSON-encoded data structure.

Creating the Connection Handler

We are finally ready to implement the reentrant command handler that manages these messages, and the important part of that job is to ensure that it is fully reentrant. By that I mean that it does little good to make the VI itself reentrant if its major subVIs are not. So what is a “major” subVI? The two things to consider are:

  • How often does the SubVI execute? If the subVI rarely executes or only runs once during initialization, it might not be advantageous to make it reentrant.
  • How long does it take to execute? In the same way, subVIs that implement simple logic and so execute quickly, might not provide a lot of benefit as reentrant code.

As I am wont to do, I defined the handler’s overall structure as a state machine with three states corresponding to the three phases of the response interaction. So the first thing we need to do (and the first state to be executed) is Read Data Packet. Its job is to read an entire message from the new TCP connection, test it for validity and, if valid, pass the command on to the Process Command state.

Read Data Packet

The protocol we have defined calls for each message to start with a 2-byte count, so the state starts by reading two bytes from the interface, casting the resulting binary value to a U16, and then using that number to read the remainder of the message. Then to validate the message, the code performs a CRC calculation on the entire message, including the CRC at the end. Due to the way the CRC calculation works, if the message and CRC are valid, the result of this calculation will always be 0. Assuming the CRC checks, the code strips the CRC from the end of the string and sends the remaining part of the string to a subVI that converts the JSON object into a LabVIEW object. I chose an object-oriented approach here because it actually simplifies the code (no case structures) and it provides a clear roadmap of what I need to do if I ever decide to add more interface commands in the future. If the CRC does not check, the next state to execute is either Send Response if no error occurred during the network reads, or Stop Handler if one did.
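The same read-and-validate sequence is easy to express in a text language. Here is a hedged Python sketch; it reuses the crc16_ccitt function from the packet-builder sketch above and carries the same framing assumptions:

import json
import socket
import struct

def recv_exact(connection: socket.socket, count: int) -> bytes:
    # TCP is a stream, so loop until exactly `count` bytes have arrived
    data = b""
    while len(data) < count:
        chunk = connection.recv(count - len(data))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        data += chunk
    return data

def read_message(connection: socket.socket) -> dict:
    (length,) = struct.unpack(">H", recv_exact(connection, 2))  # 2-byte count
    message = recv_exact(connection, length)                    # JSON plus 2-byte CRC
    if crc16_ccitt(message) != 0:      # a valid message + CRC always computes to 0
        raise ValueError("CRC check failed")
    return json.loads(message[:-2])    # strip the CRC, parse the JSON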

Moving on, the Process Command state calls a dynamic dispatch method (Process Command.vi) that is responsible for interfacing to the rest of the application through the events we defined last time, and for formatting a response to be sent to the caller. The object model for this part of the code has 5 subclasses (one for each command) and the parent class is used as the default for when the JSON command structure does not contain a valid command object. It should surprise no one that the command processing subclass methods look a lot like the test VIs we created last time to verify the operation of the remote access processor; consequently, I am not going to take the time or space to present them all again here. However, I will highlight the part that makes them different:

Parsing Response for Error or Data

This snippet shows the logic that I use to process the response coming back from the remote access engine in response to the event that reads the graph data. Because the variant returned in the response notifier can be either a text error message, or an array of real data, the first thing the code does is attempt to convert the variant into a string. If this attempt fails and generates an error, we know that the response contains data and so can format it for return to the remote caller. If the variant converts successfully to a string, we know the command failed and can pass an error back to the caller.

At this point, we now have a response ready to send back to the caller, so the state machine transitions to the Send Response state. Here we see the logic that formats and transfers the response to the caller:

Send Response

Since the core of the message is a JSON representation of a response cluster, the code first flattens the cluster to a JSON string. Note, however, that the string it generates contains no extraneous white space, so it will look very different from the JSON example I showed earlier. The logic next calculates the length of the return message and the CRC of the JSON. Those two values are added to the beginning and end, respectively, of the JSON string and the concatenated result is written back to the TCP connection.

Finally, the Stop Handler state closes the TCP connection and stops the state machine loop, which also stops and removes from memory the reentrant clone that has been running.

Testing the Middleware

Finally, as always, we need to test what we have done, and to do that I have written a small LabVIEW test client program. However, if you know another programming language, feel free to write a short program to implement the transactions that we have defined. The program I created is included as a separate project. The top-level VI opens a window that allows you to select the action you want to perform, the plugin that it should target and (if required) enter the data associated with the action. Because this is a test program, it also incorporates a Boolean control that forces an invalid CRC, so you can test that functionality as well.
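If you do decide to roll your own client, a hedged Python sketch might look like the following. It assumes the build_packet and read_message helpers from the earlier sketches are in scope, and the command contents shown are illustrative placeholders – the actual command vocabulary is whatever your Process Command subclasses define:

import json
import socket

# assumes build_packet() and read_message() from the earlier sketches

command = {
    "Target": "Sine Source",          # illustrative plugin name
    "Data": {"Sample Interval": 1},   # illustrative command payload
}

with socket.create_connection(("localhost", 3363)) as connection:
    connection.sendall(build_packet(json.dumps(command)))
    response = read_message(connection)
    print(response)   # e.g. an error code (0 = "No Error") plus a data string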

So open both projects and run the testbed application – nothing new here, or so it seems. Now run the simple TCP client; its IP address and port number are correct for this test scenario. As soon as the client starts, the waveform graph for displaying the plugin graph data appears, so let’s start with that. You should be able to see the data from each of the 5 testbed plugins by selecting the desired target and clicking the Send Command button. You should also be able to see all 5 graph images.

Now try generating some errors. Turn on the Force CRC Error check-box and retry the tests that you just ran successfully. The client’s error cluster should now show a CRC Error. Next turn the Force CRC Error check-box back off and try doing something illegal, like using the Set Acquisition Rate action on one of the temperature controllers. Now you should see an Update Failed error.

Continue trying things out, verifying that the things which should work do, and that the things that shouldn’t work, don’t. If you did the testing associated with the last post, you will notice that there is a lag between sending the command and getting the results, but that is to be expected since you are now running over a network interface. Finally, assuming the network is configured correctly, and the desired ports are open, the client application should be able to work from a computer across the room, across the hall or across the world.

Testbed Application – Release 19
Toolbox – Release 16
Simple TCP Client – Release 1

Big Tease

So what is in store for next time? Well, let’s extend things a bit further and look at a way to access this same basic interface, but this time from a web browser! Should be fun.

Until Next Time…
Mike…

It’s a big interconnected world – and LabVIEW apps can play too

If I were asked to identify one characteristic that sets modern test systems apart from their predecessors, my answer would be clear: distributed processing. In the past, test applications were often monolithic programs – but how things have changed! We have talked several times about how the concept of distributed processing has radically altered our approach to system architecture. In fact, the internal design of our Testbed Application is a very concrete expression of that architectural shift. However, the move away from monolithic programs has had a profound impact in another area as well.

In the days of yore, code you wrote rarely had to interact with outside software. The basic rule was that if you wanted your program to do something you had to implement the functionality yourself. You had to assume this burden because there was no alternative. There were no reusable libraries of software components that you could leverage, and almost no way to pass data from one program to another. About all you could hope for were some OS functions that would allow you to do things like read and write disk files, or draw on the screen. In this blog we have looked at ways that our LabVIEW applications can use outside resources through standardized interfaces like .NET or ActiveX. But what if someone else wants to write a program that uses our LabVIEW code as a drop-in component? How does that work?

As it turns out, the developers saw this possibility coming and have provided mechanisms that allow a LabVIEW application to present the same sort of standardized interface that we see other applications provide. Over the next few posts, we are going to look at how to incorporate basic remote interfaces ranging from traditional TCP/IP-based networking to building modern .NET assemblies.

What Can We Access?

However, before we can dig into all of that, we need to think about what these interfaces are going to access. In our case, because we have an existing application that we will be retrofitting to incorporate this functionality, we will be looking at the testbed to identify some basic “touchpoints” that a remote application might want to access. By contrast, if you are creating a new application, the process of identifying and defining these touchpoints should be an integral part of your design methodology from day one.

The following sections present the areas where we will be implementing remote access in our testbed application. Each section will describe the remote interface we desire to add and discuss some of the possible implementations for the interface’s “server” side.

Export Data from Plugins

The obvious place to start is by looking at ways of exporting the data. This is, after all, why most remote applications are going to want to access our application: They want the data that we are gathering. So the first thing to consider is, where does the data reside right now? If you go back and look at the original code, you will see that, in all cases, the primary data display for a plugin is a chart that is plotting one new point at a time. Here is what the logic looked like in the Acquire Sine Data.vi plugin.

Simple Acquisition and Charting

As you can see, the only place the simulated data goes after it is “acquired” is the chart. Likewise, if you looked at the code for saving the data, you would see that it was getting the data by reading the chart’s History property.

Save Chart Data to File

Now, we could expand on that technique to implement the new export functionality, but there is one big consequence to that decision. Approaching the problem in this way would have the side-effect of tying the number of data points that are saved to the chart’s configuration. Hence, because the amount of data that a chart buffers can’t be changed at runtime, you would have to modify the LabVIEW code to change the amount of data that is returned to the calling application.

A better solution is to leverage what we have learned recently about DVRs and in-place structures to create a storage location the size of which we can control without modifying the application code. A side-effect of this approach is that we will be able to leverage it to improve the efficiency of the local storage of plugin data – yes, sometimes side-effects are good.

To implement this logic we will need three storage buffers: One for each of the two “acquisition” plugins and one for the reentrant “temperature controller” plugin. The interface to each storage buffer will consist of three VIs, the first one of which is responsible for initializing the buffer:

Initialize Buffer

This screenshot represents the version of the initialization routine that serves the Ramp Signal acquisition process. The basic structure of this code is to create a circular buffer that will save the last N samples – where “N” is a reconfigurable number stored in the database. To support this functionality, the DVR incorporates two values: The array of datapoints and a counter that operates as a pointer to track where the current insertion point is in the buffer. These VIs will be installed in the initialization state of the associated plugin screen’s state machine. With the buffer initialized, we next need to be able to insert data. This is typical code for performing that operation:

Insert Data Point

Because the DVR data array is initialized with the proper number of elements at startup, all this logic has to do is replace an existing value in the array with a newly acquired datapoint value, using the counter (of course) to tell it which element to replace. Although we have a value in the DVR called Counter, we can’t use it without a little tweaking. Specifically, the DVR’s counter value increments without limit each time a value is inserted; however, there is only a finite number of elements in the data array. What we need for our circular buffer is a counter that starts at 0, counts to N-1 and then returns to 0 and starts over. The code in the image shows the easiest way to generate this counter: simply take the limitless count and modulo divide it by the number of points in the buffer. The output of the modulo division operation is a quotient and a remainder. The remainder is the counter we need.

Modulo division is also important to the routine for reading the data from the buffer, but in this case we need both values. The quotient output is used to identify when the buffer is still in the process of being filled with the initial N datapoints:

Read All Data.1

During this initial period, when the quotient is 0, the code uses the remainder to trim off the portion of the buffer contents that is yet to be filled with live data. However, once the buffer is filled, the counter ceases being a marker identifying the end of the data and becomes a demarcation point between the new data and the old data. Therefore, once the quotient increments past 0, slightly different processing is required.

Read All Data.2

Once the circular buffer is full, the element that the remainder is pointing at is the oldest data in the array (chronologically speaking), while the datapoint one element up from it is newest. Hence, while the remainder is still used to split the data array, the point now is to swap the two subarrays to put the data in correct chronological order.
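To pull the insert and read logic together, here is a hedged Python sketch of the whole circular buffer. The class and method names are mine rather than the VIs’, but the counter arithmetic is the same as just described:

class CircularBuffer:
    def __init__(self, depth):
        self.data = [0.0] * depth        # pre-allocated, fixed-size array
        self.counter = 0                 # increments without limit

    def insert(self, value):
        # Modulo divide the limitless count by the buffer size; the
        # remainder tells us which element to replace
        self.data[self.counter % len(self.data)] = value
        self.counter += 1

    def read_all(self):
        quotient, remainder = divmod(self.counter, len(self.data))
        if quotient == 0:
            # Still filling: trim off the elements not yet written
            return self.data[:remainder]
        # Full: the element at `remainder` is the oldest, so swap the two
        # subarrays to put everything in chronological order
        return self.data[remainder:] + self.data[:remainder]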

Retrieve Graph Images from Plugins

The next opportunity for remote access is to fetch not the data itself, but a graph of the data as it is shown on the application’s GUI. This functionality could form the basis for a remote user interface, or perhaps serve as an input to a minimalistic web presentation. Simplifying this operation is a control method that allows you to generate an image of the graph and the various objects surrounding it, like the plot legend or cursor display. Consequently, the VI that will be servicing the remote connections only needs to be able to access the chart control reference for each plugin. To make those references available, the code now incorporates a buffer that is structurally very similar to the one that we use to store the VI references that allow the GUI to insert the plugins into its subpanel. Due to its similarity to existing code, I won’t cover it in detail, but here are a few key points:

  • Encapsulated in a library to establish a namespace and provide access control
  • The FGV that actually stores the references is scoped as Private
  • Access to the functionality is mediated through publicly-scoped VIs

This FGV is the only new code we will need to add to the existing code.

Adding Remote Control

One thing that both of the remote hooks we just discussed have in common is that they are pretty passive – or to make this point another way, they both monitor what the application is doing without changing what it is doing. Now we want to look at some remote hooks that will allow remote applications to control the application’s operation, at least in a limited way.

Since the way the application works is largely dependent upon the contents of the database, it should surprise no one that these control functions will need to provide provisions for the remote application to alter the database contents in a safe and controlled way.

Some Things to Consider

The really important words in that last sentence are “safe” and “controlled”. You see, the thing is that as long as you are simply letting people read the data you are generating, your potential risk is often limited to the value of the data that you are exposing. However, when you give remote users or other applications the ability to control your system, the potential exists that you could lose everything. Please understand that I take no joy in this conversation – I still remember working professionally on a computer that didn’t even have a password. However, in a world where “cyber-crime”, “cyber-terrorism” and “cyber-warfare” have become household terms, this conversation is unavoidable.

To begin with, as a disclaimer you should note that I will not be presenting anything close to a complete security solution, just the part of it that involves test applications directly. The advice I will be providing assumes that you, or someone within your organization, has already done the basic work of securing your network and the computers on that network.

So when it comes to securing applications that you write, the basic principle in play here is that you never give direct access to anything. You always qualify, error-check and validate all inputs coming from remote users or applications. True, you should be doing this input validation anyway, but the fact of the matter is that most developers don’t put a lot of time into validating inputs coming from local users. So here are a couple of recommendations:

Parametrize by Selecting Values – This idea is an expansion on a basic concept I use when creating any sort of interface. I have often said that anything you can do to take the keyboard out of your users’ hands is a good thing. By replacing data that has to be typed with menus from which users can select, you make software more robust and reduce errors. When working with remote interfaces, you do have to support typed strings because unless the remote application was written in LabVIEW, typing is the only option. But what you can do is limit the inputs to a list of specific values. On the LabVIEW side, the code can convert those string values into either a valid enumeration, or a predefined error that cancels the operation and leaves your system unaltered. When dealing with numbers, be sure to validate them also by applying common-sense limits to the inputs.

Create Well-Defined APIs – You want to define a set of interfaces that specify exactly what each operation does, and with as few side-effects as possible. In fancy computer-science terms, this means that operations should be atomic functions that either succeed or fail as a unit. No half-way states allowed! Needless to say, a huge part of being “well-defined” is that the APIs are well-documented. When someone calls a remote function, they should know exactly what is expected of them and exactly what they will get in response.

Keep it Simple – Let’s be honest, the “Swiss Army Knife” approach to interface design can be enticing. You only have to design one interface where everything is parametrized and you’re done, or at least you seem to be for a while. The problem is that as requirements change and expand you have to be constantly adding to that one routine and sooner or later (typically sooner) you will run into something that doesn’t quite fit well into the structure that you created. When that happens, people often try to take the “easy” path and modify their one interface to allow it to handle this new special case – after all, “…it’s just one special case…”. However once you start down that road, special cases tend to multiply like rabbits and the next thing you know, your interface is a complicated, insecure mess. The approach that is truly simple is to create an interface that implements separate calls or functions for each logical piece of information.

With those guidelines in mind, let’s look at the three parameters that we are going to be allowing remote users or applications to access. I picked these parameters because each shows a slightly different use case.

Set the Acquisition Sample Interval

One of the basic ways that you can store a set of parameters is using a DVR, and I demonstrated this technique by using it to store the sample rates that the “acquisition” loops use to pace their operation. In the original code, the parameter was already designed to be changed during operation. You will recall that the basic idea for the parameter’s storage was that of a drop box. It wasn’t important that the logic using the data know exactly when the parameter was changed, as long as it got the correct value the next time it tried to use the data. Consequently, we already have a VI that writes to the DVR (called Sample Rate.lvlib:Write.vi) and, as it turns out, it is all we will need moving forward.

Set Number of Samples to Save

This parameter is interesting because it’s a parameter that didn’t even exist until we started adding the logic for exporting the plugin data. This fact makes it a good illustration of the principle that one change can easily lead to requirements that spawn yet other changes. In this case, creating resizable data buffers leads to the need to be able to change the size of those buffers.

To this point, the libraries that we have defined to encapsulate these buffers each incorporate three VIs: one to initialize the buffer, one to insert a new datapoint into it, and one to read all the data stored in the buffer. A logical extension of this pattern would be the addition of a fourth VI, this time one to resize the existing buffer. Called Reset Buffer Size.vi, these routines are responsible for both resizing the buffer and correctly positioning the existing data in the new buffer space. So the first thing the code does is borrow the logic from the buffer reading code to put the dataset in the proper order, with the oldest samples at the top and the newest samples at the bottom.

Put the Data in Order

Next the code compares the new and old buffer sizes in order to determine whether the buffer is growing larger, shrinking smaller or staying the same size. Note that the mechanism for performing this “comparison” is to subtract the two values. While a math function might seem to be a curious comparison operator, this technique makes it easy to identify the three conditions that we need to detect. For example, if the values are the same the difference will be 0, and the code can use that value to bypass further operations. Likewise, if the two numbers are not equal, the sign of the result will indicate which input is larger, and the absolute magnitude of the result tells us how much difference there is between the two.

This is the code that is selected when the result of the subtraction is a positive number representing the number of elements that are to be added to the buffer.

Add points to Buffer

The code uses the difference value to create an array of appropriate size and then appends it to the bottom of the existing array. In addition, the logic has to set the Counter value to point to the first element of the newly appended values so the next insert will go in the correct place. By contrast, if the buffer is shrinking in size, we need to operate on the top of the array.

Remove points from buffer

Because the buffer is getting smaller, the difference is a negative number representing the number of elements to be removed from the buffer data. Hence, the first thing we need to do is extract the number’s absolute value and use it to split the array, effectively removing the elements above the split point. As before, we also need to set the Counter value, but the logic is a little more involved.

You will remember that the most recent data is on the bottom of the array, so where does the next data point need to go? That’s right, the buffer has to wrap around and insert the next datapoint at element 0 again, but here is where the extra complexity comes in. If we simply set Counter to 0, the data insert logic won’t work correctly. Reviewing the insert logic you will notice that the first pass through the buffer (modulo quotient = 0) is handled differently. What we need is to reinitialize Counter with a number that, when subjected to the modulo division, will result in a remainder of 0 and a quotient that is not 0. An easily derived value that meets those criteria is the size of the array itself.

Finally we have to decide what to do when the buffer size isn’t changing, and here is that code. Based on our discussions just now, you should be able to understand it.

buffer size not changing
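Extending the hedged Python sketch from earlier, all three cases can be expressed as one short method added to the CircularBuffer class; note how the Counter reinitialization follows exactly the reasoning above:

    def resize(self, new_depth):
        # Method added to the CircularBuffer class sketched earlier.
        # Keep at most the newest `new_depth` points, oldest first
        ordered = self.read_all()[-new_depth:]
        padding = new_depth - len(ordered)
        self.data = ordered + [0.0] * padding
        # If there is empty space, the buffer is "still filling" (quotient 0);
        # otherwise wrap to element 0 by setting Counter to the array size
        # (remainder 0, quotient not 0)
        self.counter = len(ordered) if padding else new_depth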

Set Temperature Controller Operating Limits

Finally, there are two reasons I wanted to look at this operation: First, it is an example of where you can have several related parameters that logically form a single value. In this case, we have 5 separate numbers that, together, define the operation of one of the “temperature controller” processes. You need to be on the look-out for this sort of situation because, while treating this information as 5 distinct values would not be essentially wrong, that treatment would result in you needing to generate a lot of redundant code.

However, this parameter illustrates a larger issue, namely that changes in requirements can make design decisions you made earlier – let’s say – problematic. As it was originally designed, the temperature controller values were loaded when the plugins initialized, and they were never intended to be changed while the plugin was running. However, our new requirement to provide remote control functionality means that this parameter now needs to be dynamic. When confronted with such a situation, you need to look for a solution that will require the least rework of existing code and have the fewest side-effects. So you could:

  1. Redesign the plugin so it can restart itself: This might sound inviting at first because the reloading of the operating limits would occur “automatically” when the plugin restarted. Unfortunately, it also means that you would have to add a whole new piece of functionality: the ability for the application to stop and then restart a plugin. Moreover, you would be creating a situation where, from the standpoint of a local operator, some part of the system would be restarting itself at odd intervals for no apparent reason. Not a good idea.
  2. Redesign the plugin to update the limits on the fly: This idea is a bit better, but because the limits are currently being carried through the state machine in a cluster that resides in a shift-register, to implement this idea we will need to interrupt the state machine to make the change. Imposing such an interruption risks disrupting the state machine’s timing.

The best solution (as in all such cases) is to address the fundamental cause: the setups only load when the plugin starts and so are carried in the typedef cluster. The first step is to remove the 5 numbers associated with the temperature controller operating parameters from the cluster. Because the cluster is a typedef, this change conveniently doesn’t modify the plugin itself, though it does alter a couple of subVIs – which even more conveniently show up as being broken. All that is needed to repair these VIs is to go through them one by one and modify the code to replace the now-missing cluster data values with the corresponding values that the buffered configuration VI reads from the database. Said configuration VI (Load Machine Configuration.vi) also requires one very small tweak:

Reload Enable

Previously, the only time the logic would force a reload of the data was when the VI had not been called before. This modification adds an input that allows the calling code to request a reload by setting the new Reload? input to true. To prevent this change from impacting the places where the VI is already being called, the default value for this input is false, the input is tied to a heretofore unused terminal on the connector pane, and the terminal is marked as an Optional input.

Building Out the Infrastructure

At this point in the process, all the modifications that need to be made to the plugins themselves have been accomplished, so what we need now is a place for the external interface functionality itself to live. One of the basic concepts of good software design is to look at functionality from the standpoint of what you don’t know or what is likely to change in the future, and then put those things into abstracted modules by themselves. In this way, you can isolate the application as a whole, and thus protect it from changes that occur in the modularized functionality.

The way this concept applies to the current question should be obvious: There is no way that we can, in the here and now, develop a complete list of the remote access functionality that users will require in the future. The whole question is, at its essence, open-ended. Regardless of how much time you spend studying the issue, users have an inherently different view of your application than you do and so they will come up with needs that you can’t imagine. Hence, while today we might be able to shoe-horn the various data access and control functions into different places in the current structure, to do so would be to start down a dead-end road because it is unlikely that those modifications would meet the requirements of tomorrow. What we need here is a separate process that will allow us to expand or alter the suite of data access and control functionality we offer.

Introducing the Remote Access Engine

The name of our new process is Remote Access.vi and (like most of the code in the project) it is designed utilizing an event-driven structure that ensures it is quiescent unless it is being actively accessed. The process’ basic theory of operation is that when one of its events fires, it performs the requested operation and then sends a reply in the form of a notification. The name of the notification used for the reply is sent as part of the event data. This basic process is not very different from the concept of “callbacks” used in languages such as JavaScript.

Although this process is primarily intended to run unseen in the background, I have added three indicators to its front panel as aids in troubleshooting. These indicators show the name of the last event that it received, the name of the plugin that the event was targeting, and the name of the response notifier.

The Read Graph Data Event

The description of this event handler will be longer than the others because it pretty much sets the pattern that we will see repeated for each of the other events. It starts by calling a subVI (Validate Plugin Name.vi) that tests to see if the Graph Name coming from the event data is a valid plugin name, and if so, returns the appropriate enumeration.

Validate plugin name

The heart of this routine is the built-in Scan from String function. However, due to the way the scan operation works, there are edge conditions where it might not always perform as expected when used by itself. Let’s say I have a typedef enumeration named Things I Spend Too Much Time Obsessing Over.ctl with the values My House, My Car, My Cell Phone, and My House Boat, in that order. Now as I attempt to scan these values from strings, I notice a couple of “issues”. First, there is the problem of false positives. As you would expect, it correctly converts the string “My House Boat” into the enumerated value My House Boat. However, it would also convert the string “My House Boat on the Grand Canal” to the same enumeration and pass the last part of the string (“ on the Grand Canal”) out its remaining string output. Please note that this behavior is not a bug. In fact, in many situations it can be a very useful behavior – it’s just not the behavior that we want right now because we are only interested in exact matches. To trap this situation, the code marks the name as invalid if the remaining string output is not empty.

The other issue you can have is what I call the default output problem. The scan function is designed such that if the input string is not scanned successfully, it outputs the value present at the default value input. Again, this operation can be a good thing, but it is not the behavior that we want. To deal with this possibility, the code tests the error cluster output (which carries an error code 85 for a failed scan) and marks the name as invalid if there is an error.
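For comparison, in a text language an exact-match lookup sidesteps both issues at once. A hedged Python sketch (the name list is an illustrative subset, not the full plugin roster):

def validate_plugin_name(name):
    # Return (enum_index, is_valid); only exact matches pass
    valid_names = ["Sine Source", "Ramp Source", "Dog House TC"]  # illustrative subset
    if name in valid_names:     # exact match: no trailing-text false positives
        return valid_names.index(name), True
    return 0, False             # no default-output problem: explicitly flagged invalid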

When Validate Plugin Name.vi finishes executing, we have a converted plugin name and a flag that tells us whether or not we can trust it. So the first thing we do is test that flag to see whether to continue processing the event or return an error message to the caller. Here is the code that executes when the result of the name validation process is Name Not Valid.

Name Not Valid

If the Response Notifier value from the event data is not null, the code uses it to send the error message, “Update Failed”. Note that this same message is sent whenever any problem arises in the remote interface. While this convention certainly results in a non-specific error message, it also ensures that the error message doesn’t give away any hints to “bad guys” trying to break in. If the Response Notifier value is null (as it will be for local requests) the code does nothing – remember, we also want to leverage this logic locally.

If the result of the name validation process is Name Valid, the code now considers the Plugin Name enumeration and predicates its further actions based on what it finds there. This example for Sine Source shows the basic logic.

Name Valid - Remote

The code reads the data buffer associated with the signal and passes the result into a case structure that does one of two different things depending on whether the event was fired locally, or resulted from a remote request. For a remote request (Response Notifier is not null), the code turns the data into a variant and uses it as the data for the response notifier. However, if the request is local…

Name Valid - Local

…it sends the same data to the VI that saves a local copy of the data.

The Read Graph Image Event

As I promised above, this event shares much of its basic structure with the one we just considered. In fact, the processing for a Name Not Valid validation result is identical. The Name Valid case, however, is a bit simpler:

Read Graph Image

The reason for this simplification is that regardless of the plugin being accessed, the datatypes involved in the operation are always the same. The code always starts with a graph control reference (which I get from the lookup buffer) and always generates an Image Data cluster. If the event was fired locally, the image data is passed to a VI (Write PNG File.vi) that prompts the user for a file name and then saves it locally. However, if instead of saving a file you want to pass the image in a form that is usable in a non-LabVIEW environment, a bit more work is required. To encapsulate that added logic, I created the subVI Send Image Data.vi.

Send Image Data

The idea is to convert the proprietary image data coming from the invoke node into a generic form by rendering it as a standard format image. Once in that form, it is a simple matter to send it as a binary stream. To implement this approach, the code first saves the image to a temporary png file. It then reads back the binary contents of the file and uses it as the data for the response notifier. Finally, it deletes the (now redundant) temporary file.

The Set Acquisition Rate Event

This event is the first one to control the operation of the application. It also has no need to be leveraged locally, so there is no dual operation depending on the contents of the Response Notifier value.

Set Acquisition Rate

Moreover, because the event action is a command and not a request, the response can only have one of two values: “Update Failed” or “Update Good”. The success message is only sent if the plugin name is either Sine Source or Ramp Source, and no errors occur during the update. While on the topic of errors, there are two operations that need to be performed for a complete update: the code must modify both the database and the buffer holding the live copy of the setting that the rest of the application uses. In setting the order of these two operations, I considered which of the two is most likely to generate an error and put it first. When you consider that most of the places storing live data are incapable of generating an error, the database update clearly should go first.

So after verifying the plugin name, the subVI responsible for updating the database (Set Default Sample Period.vi) looks to see if the value is changing. If the “new” value and the “old” value are equal, the code returns a Boolean false to its Changed? output and sets the Result output to Update Good. It might seem strange to return a value to the remote application that the update succeeded when there was no update performed, but think about it from the standpoint of the remote user. If I want a sample period of 1000ms, an output of Update Good tells me I got what I wanted – I don’t care that it didn’t have to change to get there. If the value is changing…

Set Default Sample Period

…the code validates the input by sending it to a subVI that compares it to some set limits (500 < period < 2500). Right now these limits are hardcoded, and in some cases that might be perfectly fine. You will encounter many situations where the limits are fixed by the physics of a process or a required input to some piece of equipment. Still, you might want these limits to be programmable too, but I’ll leave that modification as, “…an exercise for the reader.” In any case, if the data is valid, the code uses it to update the database and sets the subVI’s two outputs to reflect whether the database update generated an error. If the data is not valid, it returns the standard error message stating so.
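Pulled out of its LabVIEW context, the decision logic of this subVI reduces to just a few lines. A hedged Python sketch (the database update itself is left as a placeholder):

def set_default_sample_period(new_period, old_period):
    # Return (changed, result_message); periods are in milliseconds
    if new_period == old_period:
        return False, "Update Good"     # no change needed, but the caller got what they asked for
    if not 500 < new_period < 2500:     # hardcoded limits from the text
        return False, "Update Failed"
    # ...update the database here, reporting "Update Failed" on any error...
    return True, "Update Good"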

The Set Data Buffer Depth Event

The basic dataflow for this event is very much like the previous one.

Set Data Buffer Depth

The primary logical difference between the two is that all plugins support this parameter. The logic simply has to select the correct buffer to resize.

The Set TC Parameters Event

With our third and (for now at least) final control event, we return to one that is only valid for some of the plugins – this time the temperature controllers.

Set TC Parameters

The interesting part of this event processing is that, because its data was not originally intended to be reloaded at runtime, the live copy of the data is read and buffered in the object-oriented configuration VIs.

Save Machine Configuration

Consequently, the routine to update the database (Save Machine Configuration.vi) first creates a Config Data object and then uses that object to read the current configuration data. If the data has changed, and is valid, the same object is passed on to the method that writes the data to the database. Note also that the validation criteria are more complex.

Validate TC Parameters

In addition to simple limits on the sample interval, the Error High Level cannot exceed 100, the Error Low Level cannot go below 30, and all the levels have to be correct relative to each other.
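Expressed in the same hedged Python form, the TC validation rules read almost exactly like the sentence above (the Sample Interval limits are not spelled out in the text, so that check is noted only as a placeholder):

def validate_tc_parameters(p):
    # Validate a dict holding the five TC values described above
    return (
        p["Error High Level"] <= 100
        and p["Error Low Level"] >= 30
        # All the levels have to be correct relative to each other
        and p["Error Low Level"] < p["Warning Low Level"]
            < p["Warning High Level"] < p["Error High Level"]
        # plus simple limits on the Sample Interval (values not given here)
    )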

Testing

With the last of the basic interface code written and in place, we need to look at how to test it. To aid in that effort, I created five test VIs – one for each event. The point of these VIs is simply to exercise the associated event so we can observe and validate the code’s response. For instance, here’s the one for reading the graph data:

Test Read Graph Data

It incorporates two separate execution paths because it has two things that it has to be doing in parallel: sending the event (the top path) and waiting for a response (the bottom path). Driving both paths is the output from a support VI from the notifier library (not_Generic Named Notifier.lvlib:Generate Notifier Name.vi). Its job is to generate a unique notifier name based on the current time and a 4-digit random number. Once the upper path has fired the event, its work is done. The bottom path displays the raw notifier response and graphs of the data that is transferred. Next, the test VI for reading the graph image sports similar logic, but the processing of the response data is more complex.

Test Read Graph Image

Here, the response notifier name is also used to form the name for a temporary png file that the code uses to store the image data from the event response. As soon as the file is written, the code reads it back in as a png image and passes it to a subVI that writes it to a 2D picture control on the VI’s front panel. Finally, the three test VIs for the control operations are so similar, I’ll only show one of them.

Test Resizing Data Buffers

This exemplar is for resizing the data buffers. The only response processing is that it displays the raw variant response data.

To use these VIs, launch the testbed application and run the test VIs manually one at a time. For the VIs that set operating parameters, observe that entering invalid data generates the standard error message. Likewise, when you enter a valid setting, look for the correct response in both the program’s behavior and the data stored in the database. For the VIs testing the read functions, run them and observe that the data they display matches what the selected plugin shows on the application’s GUI.

Testbed Application – Release 18
Toolbox – Release 15

The Big Tease

In this post, we have successfully implemented a remote access/control capability. However, we don’t yet have any way of accessing that capability from outside LabVIEW. Next time, we start correcting that by creating a TCP/IP interface into the logic we just created. After that introduction, there will be posts covering .NET, ActiveX and maybe even WebSockets – we’ll see how it goes.

Until Next Time…
Mike…

More Than One Kind of Modularity

When learning something that you haven’t done before – like .NET – it’s not uncommon to go through a phase where you look at some of the code you wrote early on and cringe (or at least sigh deeply). The problem is that you are often not only learning a new interface or API, you are also learning how to best use that interface or API. The cause of all the cringing and sighing is that as you learn more, you begin to realize that some of the assumptions and design decisions you made were misguided, if not flat-out wrong. If you look at the code we put together last time to help us learn about .NET in general, and the NotifyIcon assembly in particular, we see a gold-plated example of just such code. Although it clearly accomplished its original goal of demonstrating how to access .NET functionality and illustrating how the various objects can relate to one another, it is certainly not reusable – or maintainable, or extensible, or any of the other “-ables” that good software needs to be.

In fact, I created the code that way so that this time we can take the lesson one step further and fix those shortcomings, thus demonstrating how you can go about cleaning up code (your own or inherited) that makes you cringe or sigh. Remember, it is always worth your time to fix bad design. I can’t tell you how many times I have seen people struggling with bad decisions made years before. Rather than taking a bit of time to fix the root cause of their trouble, they continue to waste hours on project after project in order to work around the problem.

Ok, so where do we start?

Clearly this code would benefit from cleaning up and refactoring, but where and how should we start? Well, if you are working on an older code base, the question of where to start will not be a problem: you start where the most pain is. To put it another way, start with the things that cause you the biggest problems on a day-to-day basis.

This point, however, doesn’t mean that you should just sit around and wait for problems to arise. As you are working, always be asking yourself whether what you are doing has limitations or embodies assumptions that might cause problems in the future.

The next thing to remember is that this work can, and should, be iterative. In other words, you don’t have to fix everything at once. Start with the most egregious errors and address the others as you have the opportunity. For example, if you see the code doing something stupid like using a string as a state variable, you can fix that quickly by replacing the strings with a typedef enumeration. I have even fixed some long-standing bugs by making this replacement, because it resolved places where states were subtly misspelled or contained extraneous spaces.

Finally, remember that the biggest payoffs, in terms of long-term benefit, come from improved modularity that corrects basic architectural problems. As we shall see in the following discussion, I include under this broad heading modularity in all its forms: modular functionality, modular logic and modular data.

Revisiting Modular Functionality

Modular functionality is the result of taking small reusable bits of code and encapsulating them in routines that simplify access, standardize the interface or ensure proper usage. There are good examples of all these usages in the modified application code, starting with Create NotifyIcon.vi:

Create NotifyIcon VI

Your first thought might be to wonder why I bothered turning this functionality into a subVI. After all, it’s just one constructor node. Well, yes, that is true, but it’s also true that in order to create that one node you have to remember multiple steps and object names. Even though this subVI appears rather simple, if you consider what it would take to recreate it multiple times in the future, you realize that it actually encapsulates quite a bit of knowledge. Moreover, I want to point out that this knowledge is largely stuff for which there is no overwhelming benefit to be gained from committing it to memory.

Next, let’s consider the question of standardizing interfaces. Our example in this case is a new subVI I created to handle the task of assigning an icon to the interface we are creating. I have named it Set NotifyIcon Icon.vi:

Set NotifyIcon Icon VI

You will remember from our previous discussion that this task involves passing a .NET object encapsulating the icon we wish to use to a property node for the NotifyIcon object. Originally, this property was combined with several others on a single node. A more flexible approach is to break up that functionality and standardize the interfaces for all the subVIs that will be setting NotifyIcon properties: each simply consists of an object reference plus the data to be used to set the property, expressed in a standard LabVIEW datatype – in this case a path input. This approach also clearly simplifies access to the desired functionality.

Finally, there is the matter of ensuring proper usage. A good place to highlight that feature is in the last subVI that the application calls before quitting: Drop NotifyIcon.vi.

Drop NotifyIcon VI

You have probably been warned many times about the necessity of closing references that you open. However, when working with .NET objects, that action by itself is sometimes not sufficient to completely release all the system resources that the assembly had been using. Most of the time, if you don’t completely close out the assembly, you may notice memory leaks or errors from attempting to access resources that still think they are busy. With the NotifyIcon assembly, however, you will see a problem that is far more noticeable, and embarrassing. If you don’t call the Dispose method, your program will close and release all the memory it was using, but if you go to the System Tray you’ll still see your icon. In fact, you will be able to open its menu and even make selections – it just doesn’t do anything. Moreover, the only way to make it go away is to restart your computer.

Given the consequences of forgetting to call this method in your shutdown sequence, it is a good idea to make it an integral part of the code so that you can’t forget to include it.

Getting Down with Modular Logic

But as powerful as this technique is, there can still be situations where the basic concept of modularity needs to be expressed in a slightly different way. To see such a situation, let’s look at the structure that results from simply applying the previous form of modularity to the problem of building the menus that go with the icon.

Create ContextMenu VI

Comparing this diagram to the original one from last time, you can see that I have encapsulated the repetitive code that generated the MenuItem objects into dedicated subVIs. By any measure this change is a significant improvement: the code is cleaner, better organized, and far more readable. For example, it is pretty easy to visualize which menu items are on submenus. However, in cases such as this one, the improved readability can be a bit of a double-edged sword. To see what I mean, consider that for the structure of your code to allow you to visualize your menu organization, said organization must be hard-coded into the structure of the code. Consequently, changes to the menus will, as a matter of course, require modification to the fundamental structure of the code. If the justification for modularity includes concepts like flexibility and reusability, you just missed the boat.

The solution to this situation is to realize that there is more than one flavor of modularity. In addition to modularizing specific functionality, you can also modularize the logic required to perform complex and changeable tasks (like building menus) that you don’t want to hard-code. If this seems like a strange idea to you, consider that computers spend most of their time using their generalized hardware to perform specialized tasks defined by lists of instructions called “programs”. The thing that makes this process work is a generalized bit of software called a “compiler” that turns the programs into data structures that the generalized hardware can use to perform specialized actions.

Carrying forward with this line of reasoning, what we need is a simple way of defining a menu structure that is external to our program, and a “menu compiler” that turns that definition into the MenuItem references that our program needs. So let’s build one…

Creating the Data for Our Menu Compiler

So what should this menu definition look like? To answer that question, we need to start with the data required to define a single MenuItem. At a minimum, every item in a menu has to have a name for display to the user, a tag to identify it, and a parent tag that says whether the item has a parent item (and if so, which item is its parent). In addition, we haven’t really talked about it, but the order of references in an array of menu items defines the order in which the items appear in the menu or submenu – so we need a way to specify each item’s menu position as well. Finally, because in the end the menu will consist of a list (array) of menu item references, it makes sense to express the overall menu definition that we will eventually compile into that array of references as a list as well (and eventually also an array).

But where should we store this list of menu item definitions? At least part of the answer to this question depends on who you want to be able to modify the menu, and the level of technical expertise that person has. For example, you could store this data in text files as INI keys, or as XML or JSON strings. These files have the advantage of being easy to generate and are readily accessible to anyone who has access to a text editor – of course, that is their major disadvantage as well. Databases, on the other hand, are more secure, but not as easy to access. For the purposes of this discussion, I’ll store the menu definitions in a JSON file because, when done properly, the whole issue of how to parse the data simply goes away.

To see what I mean, here is a nicely indented JSON file that describes the menu that we have been using for our example NotifyIcon application:

[
	{
		"Menu Order":0,
		"Item Name":"Larry",
		"Item Tag":"Larry",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Moe",
		"Item Tag":"Moe",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"The Other Stooge",
		"Item Tag":"The Other Stooge",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":3,
		"Item Name":"-",
		"Item Tag":"",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":4,
		"Item Name":"Quit",
		"Item Tag":"Quit",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":0,
		"Item Name":"Curley",
		"Item Tag":"Curley",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Shep",
		"Item Tag":"Shep",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"Joe",
		"Item Tag":"Joe",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	}
]

And here is the LabVIEW code that will convert this string into a LabVIEW array (even if it isn’t nicely indented):

Read JSON String

JSON has a lot of advantages over techniques like XML: for starters, it’s easier to read and a lot more efficient. But the real reason I like using JSON is that it is so very convenient.
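In fact, in PHP terms the whole diagram above boils down to one function call (the file name here is just an example):

<?php
	// One-step parse of the menu definition; $items ends up as an array of
	// associative arrays with keys like "Menu Order", "Item Tag" and "Parent Tag"
	$items = json_decode(file_get_contents("menu.json"), true);
?>

The LabVIEW unflattening function gives you the same one-step experience, with the cluster typedef standing in for the associative array.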

Starting the Compilation

Now that we have our raw menu definition string read into LabVIEW and converted into a datatype that will simplify the next step in the processing, we need to ensure that the data is in the right order. To see why, remember that the final data structure we are building is hierarchical, so the order in which we build it matters. For instance, “The Other Stooge” is a top-level menu item, but it is also a submenu, so we can’t build it until we have references to all the menu items under it. Likewise, if one of the items under it is itself a submenu, we can’t build that one until all its children are created.

So given the importance of order, we need to be careful how we handle the data, because none of the available storage techniques can on their own guarantee proper ordering. The string formats can all be edited manually, and it’s not reasonable to expect people to always type in data in the right order. And even though databases can sort the results of queries, there isn’t enough information in the menu definition to allow them to do so.

The menu definition we created does have a numeric value that specifies the order of items within their respective menus and submenus. We don’t, however, yet have a way of telling the level at which an item resides relative to the overall menu structure. Logically we can see that “Larry” is a top-level menu item and “Shep” is one level down, but we can’t yet determine that information programmatically. Still, the information we need is present in the data; it just needs to be massaged a bit. Here is the code for that task:

Ordering the Menu Items

As you can see, the process is basically pretty simple. I first rewrite the Item Tag value by appending the original Item Tag value to the colon-delimited list that starts with the Parent Tag. I then count the number of colons in the resulting string, and that is my Menu Level value. The exceptions to this processing are the top-level menu items, which are easy to identify due to their null parent tags. I simply force their Menu Level values to zero and replace the null string with a known value that will make the subsequent processing easier. The real magic, however, occurs after the loop stops. The code first sorts the array in ascending order and then reverses the array. Due to the way the 1D array sort works when operating on arrays of clusters, the array will be sorted first by Menu Level and then by Menu Order – the first two items in the cluster. This sorting, in concert with the array reversal, guarantees that the children of a submenu will always be processed before the submenu item itself.
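For those who prefer reading logic as text, here is the same ordering pass as a PHP sketch (PHP 7 or later; field names are taken from the JSON file above, and “[top-menu]” is the known value just mentioned). As written, it handles the one-level-deep menus of our example:

<?php
	foreach ($items as &$item) {
		if ($item["Parent Tag"] == "") {
			// Top-level item: force level 0 and substitute the sentinel parent
			$item["Menu Level"] = 0;
			$item["Parent Tag"] = "[top-menu]";
		} else {
			// Build the colon-delimited tag; the colon count is the menu level
			$item["Item Tag"]   = $item["Parent Tag"] . ":" . $item["Item Tag"];
			$item["Menu Level"] = substr_count($item["Item Tag"], ":");
		}
	}
	unset($item);

	// Sort ascending by (Menu Level, Menu Order), then reverse, so that the
	// children of a submenu always come before the submenu item itself
	usort($items, function ($a, $b) {
		return [$a["Menu Level"], $a["Menu Order"]] <=> [$b["Menu Level"], $b["Menu Order"]];
	});
	$items = array_reverse($items);
?>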

Some of you may be wondering why we go to all this trouble. After all, couldn’t we just add a value to the menu definition data to hold the Menu Level? Yes, we could, but it’s not a good idea, and here’s why. In some areas of software development (like database development, for instance) the experts put a lot of store in reducing “redundancy” – which they define basically as storing the same piece of information in more than one place. The problem is that if you have redundant information, you have to decide how to respond when the two pieces of information that are supposed to be the same, aren’t. So let’s say we add a field to the menu definition for the menu level. Now we have the same piece of information stored in two different places: explicitly in the Menu Level value, and implicitly in the Parent Tag. If the two ever disagree, which one do you believe?

Generating the Menu Item “Code”

In order to turn this listing into the MenuItem references we need, we will pass this sorted and ordered array into a loop that will process one element at a time. And here it is:

Compiling the Menu-1

You can see that the loop carries two shift registers. The top SR holds a 1D array of strings that consists of the submenu tags that the loop has encountered so far. The other SR also carries a 1D array but each element in it is a cluster containing an array of MenuItem references associated with the submenu named in the corresponding element of the top SR.

As the screenshot shows, the first thing that happens in the loop is that the code checks to see if the indexed Item Tag is contained in the top SR. If the tag is missing from the array, it means that the item is not a submenu, so the code uses its data to create a non-submenu MenuItem. In parallel with that operation, the code is also determining what to do with the reference that is being created by looking to see if the item’s Parent Tag exists in the top SR. If the item’s parent is also missing from the array, the code creates entries for it in both arrays. If the parent’s tag is found in the top SR, it means that one or more of the item’s sibling items has already been processed, so the code adds the new MenuItem to the array of existing ones:

Compiling the Menu-2

Note that the new reference is added to the top of the array. The reason for this departure from the norm is that, due to the way the sorting works, the menu order is also reversed, and this logic puts the items on each submenu back in their correct order. Note also that during this processing the references associated with the menu items are accumulated in a separate array that will be used to initialize the callbacks. Because the array indexing operation is conditional, only a MenuItem that is not a submenu will be included in this array.

Generating the Submenu “Code”

If the indexed Item Tag is found in the top SR, the item is a submenu and the MenuItem references needed to create its MenuItem should be in the array of references stored in the bottom SR.

Compiling the Menu-3

So the first thing the code does is delete the tag and its data from the two arrays (since they are no longer needed) and use the data thus obtained to create the submenu’s MenuItem. At the same time, the code is also checking to see if the submenu’s parent exists in the top SR. As before, if the Parent Tag doesn’t exist in the array, the code creates an entry for it, and if it does…

Compiling the Menu-4

…adds the new MenuItem to the existing array – again, at the top of the array. By the time this loop finishes, there should be only one element in each array. The only item left in the top SR should be “[top-menu]”, and the bottom SR should be holding the references to the top-level menu items. That array of references is in turn used to create the ContextMenu object, which is written to the NotifyIcon object’s ContextMenu property.
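Pulling the last several paragraphs together, the whole compile loop looks something like this in PHP-flavored pseudocode (make_menu_item() is an invented stand-in for the MenuItem constructor subVIs, and the final check is discussed below):

<?php
	$submenuTags  = ["[top-menu]"];   // the "top SR"
	$submenuItems = [[]];             // the "bottom SR"

	foreach ($items as $item) {
		$idx = array_search($item["Item Tag"], $submenuTags);
		if ($idx === false) {
			// Not a submenu: create a plain MenuItem
			$ref = make_menu_item($item["Item Name"]);
		} else {
			// A submenu: its children have already been accumulated
			$ref = make_menu_item($item["Item Name"], $submenuItems[$idx]);
			array_splice($submenuTags,  $idx, 1);   // these entries are no longer needed
			array_splice($submenuItems, $idx, 1);
		}
		$pdx = array_search($item["Parent Tag"], $submenuTags);
		if ($pdx === false) {
			// First child seen for this parent: create entries in both arrays
			$submenuTags[]  = $item["Parent Tag"];
			$submenuItems[] = [$ref];
		} else {
			// Siblings already processed: add at the TOP to undo the reversal
			array_unshift($submenuItems[$pdx], $ref);
		}
	}
	// Cheap sanity check: only "[top-menu]" should remain (more on this below)
	if (count($submenuTags) != 1) die("Orphaned submenu: " . $submenuTags[1]);
	$topLevelRefs = $submenuItems[0];
?>

Traced against the JSON file above, this produces Larry, Moe, The Other Stooge (with its three children in order), the separator and Quit: exactly the order given in the file.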

What Could Possibly Go Wrong?

At this point, you can run the example code and see an iconic system tray interface that behaves pretty much as it did before, but with a few extra selections. However, we need to have a brief conversation about error checking, and frankly there are two schools of thought on this topic. There is ample opportunity for errors to creep into the menu structure. Something as simple as misspelling a parent tag name could result in an “orphan” menu that would never get displayed – or could end up being the only one that is displayed. So the question is, how much error checking do we really need to do? There are those who think you should spend a lot of time going through the logic looking for and trapping every possible error.

Given that most menus should be rather minimal, and errors in them are really obvious, I tend to concentrate on the low-hanging fruit. For example, one simple check that will catch a large number of possible errors is to look, at the end of the processing, for more than one menu name left in the top SR – and on finding an extra one, assert an error that gives the name of the extra menu. You should probably also use this error as an opportunity to abort the application launch, since you could otherwise be left in a situation where you can’t shut down the program because the “Quit” option is missing.

Something else that you might want to consider is what to do if the external file containing the menu definitions comes up missing. The most obvious solution is, again, to abort the application launch with some sort of appropriate error message. However, depending on the application, it might be valuable to provide a hard-coded default menu that doesn’t depend on external files and provides a certain minimum level of functionality. In fact, I once worked on an application where this was an explicit requirement, because one of the things the program allowed the user to do was create custom menus, the structure of which was stored in external files.

Stooge Identifier – Release 2
Toolbox – Release 11

The Big Tease

So what are we going to talk about next time? Well, something that I have seen coming up a lot lately on the user forum is the need to be able to work with very large datasets. Often, this issue arises when someone tries to display the results of a test that ran for several hours (or days!) only to discover that the complete dataset consists of hundreds of thousands of separate datapoints. While LabVIEW can easily deal with datasets of this magnitude, it should be obvious that you need to really bring your memory management “A” game. Next time we’ll look into how to plot and manage VLDs (Very Large Datasets).

Until Next Time…

Mike…

Building a Web Backend in LabVIEW

When building a web-based application, one of the central goals is often to maximize return on investment (ROI) by creating an interface that is truly cross-platform. One of the consequences of this approach is that if you are using a LabVIEW-based program to do so-called “backend” tasks like data acquisition and hardware control, the interface to that program will likely be hidden from the user. You see, the thing is, people like consistent user interfaces, and while LabVIEW front panels can do a pretty good job in a conventional application, put that same front panel in a web page and it sticks out like the proverbial sore thumb.

Plus, simply embedding a LabVIEW front panel in a web page breaks one of the basic tenets of good web design by blurring the distinction between content and appearance. Moreover, it makes you — the LabVIEW developer — at least partially responsible for the appearance of the website. Trust me, that is a situation nobody likes. You are angered by what you see as a seemingly endless stream of requests for what are, to you, “pointless” interface tweaks: “So big deal! The control is 3 pixels too far to the left, what difference does that make?” And for their part, the full-time web developers are rankled by what they experience as a lack of control over an application for which they are ultimately responsible.

If you think about it, though, this situation is just another example of why basic computer science concepts like “low coupling” are, well, basic computer science concepts. Distinguishing between data collection and data presentation is just another form of modularity, and we all agree that modularity is A Very Good Thing, right?

It’s All About the (Data)Base

So if the concept that is really in play here is modularity, then our immediate concern needs to be the structure of the interface between that part of the system which is our backend responsibility, and the part which is the “web guy’s” customer-facing, web-based GUI.

Recalling what we have learned about the overall structure of a web application, we know that the basic way most websites work is that PHP (or some other server-side programming tool) dynamically builds the web pages that people see, based on data extracted from a database. Consequently, as a LabVIEW developer trying to make data available to the web, the basic question really is, “How do I get my data into their database?”

The first option is to approach communications with the website’s database as we would any other database. Although we have already talked at length about how to access databases in general, there is a problem: while most website admins are wonderful people who will share with you many, many things, direct access to their database is usually not one of them.

The problem is security. Opening database access to your LabVIEW program would create a security hole through which someone not as kind and wholesome as you could wiggle. The solution is to let the web server mediate the data transfer — and provide the security. To explore this technique, we will consider two simple examples: one that inserts data and one that performs a query. In both cases, I will present a LabVIEW VI, and some simple PHP code that it will be accessing.

However, remember that the point of this lesson is the LabVIEW side of the operations. The PHP I present simply exists so you can get a general idea of how the overall process works. I make no claims that the PHP code bears any similarity to code running on your server, or even that it’s very good. In fact, the server you access might not even use PHP, substituting instead some other server-side tool such as Perl. But regardless of what happens on the other end of the wire, the LabVIEW code doesn’t need to change.

As a quick aside, if you don’t have a friendly HTTP server handy and you want to try some of this stuff out, there is a good option available. An organization called Apache Friends produces an all-in-one package that will install a serviceable server on just about any Windows, Linux or Apple computer you may have sitting around not being used. Note that you do not want to actually put a server that you create in this way on the internet. This package is intended for training and experimentation so they give short shrift to things like security.

The following two examples will be working with a simple database table with the following definition:

CREATE TABLE temp_reading (
   sample_dttm  TIMESTAMP PRIMARY KEY,
   temperature  FLOAT
   )
;

Note that although the table has two columns, we will only be interacting directly with the temperature column. The sample_dttm column is configured in the DBMS such that if you don’t provide a value, the server will automatically generate one for the current time. I wanted to highlight this feature because the handling of date/time values is one of the least-standardized parts of the SQL standard. It will therefore simplify your life considerably if you can get the DBMS to handle the inserting or updating of time data for you.
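For example, with the table defined this way, logging a reading takes nothing more than this (sketched in PHP; $dbc is an open mysqli connection, as in the examples below):

<?php
	// The DBMS supplies sample_dttm automatically, so the
	// insert only has to name the temperature column
	mysqli_query($dbc, "INSERT INTO temp_reading (temperature) VALUES (23.5)");
?>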

Writing to the Database

The first (and often the most common) thing a backend application has to do is insert data into the database, but to do that you need to be able to send data to the server along with the URL you are accessing. Now, HTTP’s GET method, which we played with a bit last week, can pass parameters, but it has a couple of big limitations. First is the matter of security.

The way the GET method passes data is by appending it to the URL it is getting. For example, say you have a web page that is expecting two numeric parameters called “x” and “y”. The URL sent to the server would be in the form of:

http://www.mysite.com/page.html?x=3.14&y=1953

Consequently, the data is visible to even the most casual observer. Ironically though, the biggest issue is not the values that are being sent. The biggest security hole is that the URL exposes the names of the variables that make the page work. If a person with nefarious intent knows the names of the variables, they can potentially discover a lot about how the website works by simply passing in different values and seeing how the page responds.

A second problem with the GET method’s way of passing data is that it limits the amount of data you can send. In practice, URLs are limited to a maximum of about 2048 characters.

To deal with both of these issues, the HTTP protocol defines a method called POST that works much like GET, but without exposing the internal operation or limiting the size of the data it can pass. Here is an example of a simple LabVIEW VI for writing a temperature reading to the database.

Insert to database with PHP

This code is pretty similar to the other HTTP client functions that we have looked at before, but there are a few differences. To begin with, it is using the POST method we just talked about. Next, it is formatting and passing a parameter to the method. Likewise, after the HTTP call it verifies that the text returned to it is the success message. But where do those values (the parameter name and the success message) come from? As it so happens, I didn’t just pull them out of thin air.

Check out the URL the code is calling and you’ll note that it is not a web page, but a PHP file. In fact, what we are doing is asking the server to run a short PHP program for us. And here is the program:

<?php
	// Attempt the database connection
	$dbc = mysqli_connect("localhost","myUserID","myPassword","myDatabase");
	if (!$dbc) {
		// There was a connection error, so bail with the error data
		echo "Failed to connect to MySQL: " . mysqli_connect_error();
	} else {
		// Create the command string, casting the POSTed value to a float
		// so a malicious caller can't splice SQL of their own into the insert
		$insert = "INSERT INTO temp_reading (temperature) VALUES (" . (float)$_POST["tempData"] . ")" ;
		
		// Execute the insert
		if(!mysqli_query($dbc,$insert)){
			// The insert generated an error so bail with the error data
			die('SQL Error: ' . mysqli_error($dbc));
		}
		
		// Close the connection and return the success message
		mysqli_close($dbc);
		echo "1 Record Added";
	}
?>

Now the first thing that you need to know about PHP code is that whenever you see the keyword echo being used, some text is being sent back to the client that is calling the PHP. The first thing this little program does is try to connect to the database. If the connection fails, the code echoes back to the client a message indicating that fact and telling it why the connection failed.

If the connection is successful, the code next assembles a generic SQL statement to insert a new temperature into the database. Note in particular the phrase $_POST["tempData"]. This statement tells the PHP engine to look through the data that the HTTP method provided and insert, into the string it is building, the value associated with the parameter named tempData. (The float cast in front of it is cheap insurance: it reduces whatever the caller sends to a number before it gets spliced into the SQL.) This is where we get the name of the parameter we have to use to communicate the new temperature value. Finally, the code executes the insert that it assembled and echoes back to the client either an error or the success message.
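By the way, if you want to exercise this PHP without LabVIEW in the loop, the same transaction can be sketched with PHP’s own curl functions (the URL is a placeholder, but the parameter name and success message must match the code above):

<?php
	// Stand-in for the LabVIEW client: POST one reading and check the reply
	$ch = curl_init("http://www.mysite.com/WriteTemp.php");
	curl_setopt($ch, CURLOPT_POST, true);
	curl_setopt($ch, CURLOPT_POSTFIELDS, ["tempData" => "23.5"]);
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
	$response = curl_exec($ch);
	curl_close($ch);
	
	// Mirror the VI's check for the success message
	if ($response !== "1 Record Added") {
		echo "Insert failed: " . $response;
	}
?>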

Although this code would work “as is” in many situations, it does have one very large problem. There is an extra communications layer in the process that can hide or create errors. Consider that a good response tells you that the process definitely worked, and receiving one of the PHP-generated errors tells you that the process definitely did not work. A network error, however, is ambiguous: you don’t know whether the process failed because the transmission to the server was interrupted, or whether the process worked but the response from the server got trashed on its way back to you.

PHP Mediated Database Reads

The other thing that backend applications commonly have to do is fetch information (often configuration data) from the database. To demonstrate that process, we go back to the GET method:

Fetch from database with PHP

As with the first example, the target URL is for a small PHP program, but this time the goal is to read a time-ordered list of all the saved temperatures and return them to the client. However, the data needs to be returned in such a way that it will be readable from any client — not just one written in LabVIEW. In the web development world the first, and most common, standard is to return datasets like this as an XML string. This standard is very complete and rigorous. Unfortunately, it tends to produce strings that are very large compared to the data payload they are carrying.

Due to the size of XML output, and the complexity of parsing it, a new standard is gaining traction. Called JSON, for JavaScript Object Notation, this standard is really just the way that the JavaScript programming language defines complex data structures. It is also easy for human beings to read and understand, and unlike XML, features a terseness that makes it very easy to generate and parse.

Although there are standard libraries for generating JSON, I have chosen in this PHP program to generate the string myself:

<?php
	// Attempt the database connection
	$dbc = mysqli_connect("localhost","myUserID","myPassword","myDatabase");
	if (!$dbc) {
		// There was a connection error, so bail with the error data
		echo "Failed to connect to MySQL: " . mysqli_connect_error();
	} else {
		// Clear the output string and build the query
		$json_string = "";
		$query = "SELECT temperature FROM temp_reading ORDER BY sample_dttm ASC";
		
		// Execute the query and store the resulting data set in a variable (called $result)
		$result = mysqli_query($dbc,$query);
		
		// Parse the rows in $result and append the values to a comma-delimited list
		while($row = mysqli_fetch_array($result)) {
			$json_string = $json_string . $row['temperature'] . ",";
		}
		
		// Replace the trailing comma with the closing square bracket and send
		// the string to the waiting client
		$json_string = "[" . substr_replace($json_string, "]", -1);
		echo $json_string;
		
		// Close the connection
		mysqli_close($dbc);
	}
?>

As in the first PHP program, the code starts by opening a database connection and defining a small chunk of SQL — this time a query. The resulting recordset is stored in a data structure that the code can access on a row-by-row basis. A while loop processes this data structure, appending the data into a comma-delimited list enclosed in square brackets, which is the JSON notation for an array; three readings, for example, would come back as a string like [23.5,24.1,22.9]. This string is echoed to the client, where a built-in LabVIEW function turns it into a 1D array of floats.

This technique works equally well for most complex datatypes, so you should keep it in mind even if you are not working on a web project. I once had an application where the client wanted to write the GUI in (God help them) C# but do all the DAQ and control in LabVIEW-generated DLLs. In this situation the C# developer had a variety of structs containing configuration data that he wanted to send me. He, likewise, wanted to receive data from me in another struct. The easy solution was to use JSON. He had a standard library that would turn his structs into JSON strings, and LabVIEW would turn those strings directly into LabVIEW clusters. Then on the return trip, my LabVIEW data structure (a cluster containing a couple of arrays) could be turned, via JSON, directly into a C# struct. Very, very cool.

The Big Tease

So what is in store for next time? Well something I have been noticing is that people seem to be looking for new ways to let users interact with their applications. With that in mind I thought I might take some time to look at some nonstandard user interfaces.

For my first foray into this area, I was thinking about a user interface where there are several screens showing different data, but by clicking a button you can pop the current screen out into a floating window. Of course, when you close the floating window, it goes back to being a screen that you can display in the main GUI. Should be fun.

Until Next Time…

Mike…

Finishing the Configuration Management

For the last couple of posts we have been looking at how to best utilize object-oriented programming methodologies. In that quest, we have taken as our example the goal of converting the parts of our testbed application that use stored configuration data to a configuration manager based on object classes. In this implementation, the classes represent the various types of potential data repositories.

Maximizing Flexibility by Leveraging Existing Code

When we stopped last time, we had created all the basic code infrastructure, and all we had left to do was construct the dynamic dispatch methods that retrieve the data the application needs. In creating these methods, our guiding principle is to reduce the amount of code we have to create by reusing as much code as we can. In other words, we need to really spend some time thinking about how to structure our code so that it combines maximum reuse with maximum flexibility.

One good way of attaining that ideal is to allow for multiple levels of dynamic dispatch within the same method. Let’s say for the sake of argument that you have a method that will need to be accessible from 10 different subclasses. Furthermore, let’s say that 4 of those subclasses all need to do the same thing, an additional 4 are mostly the same but with a few differences, and the last 2 subclasses require fundamentally different logic to perform the same task. You could implement the logic that is common to the largest group of subclasses (the first 4) in the parent and let the remaining 6 override the parent to define their own solutions. This solution would work, but could potentially result in a lot of duplicated code, depending on how similar the second group of 4 subclasses is to the first.

In creating the Initialize New method last week, we saw a far better way to optimize the code: for the 4 subclasses that are similar, we can call the parent method in the child (thus taking advantage of that existing code) and then add a little logic to customize the functionality. This solution works well in many situations, but one place where it will not is in scenarios where the common parts of the code need to pass data to the unique parts.

To address those situations, a very useful solution can be to write the parent method such that the similar-but-unique bits are encapsulated in a subVI that is itself a dynamic dispatch VI. By providing multiple levels of dynamic dispatch, you create a situation that is easy to understand and minimizes duplication of code. In our example, the first 4 subclasses would use the parent implementations of both the dynamic dispatch VI and the subVI. The 4 subclasses that are similar but a bit different would use the parent method but override the dynamic dispatch subVI, and the two that are fundamentally different would override the parent method VI itself.
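If you are more used to seeing this concept in text-based OOP terms, the pattern looks something like this (the class and method names here are invented for illustration; readSetting plays the role of the method VI and fetchRaw the role of the dynamic dispatch subVI):

<?php
	class ConfigData {
		public function readSetting(string $name) {
			$raw = $this->fetchRaw($name);   // overridable inner step
			return trim($raw);               // common post-processing shared by most subclasses
		}
		protected function fetchRaw(string $name) {
			return " default ";              // behavior the first 4 subclasses share
		}
	}
	
	// "Similar but different" subclasses override only the inner step...
	class ConfigDataIni extends ConfigData {
		protected function fetchRaw(string $name) {
			return " value read from an INI file ";
		}
	}
	
	// ...while fundamentally different subclasses override the whole method
	class ConfigDataDb extends ConfigData {
		public function readSetting(string $name) {
			return "value read straight from a database";
		}
	}
?>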

Getting Down to Business

So let’s start looking at what we need to do to finish our conversion — while remembering that most of this discussion will be about the reorganization and repurposing of existing code. For the most part, I won’t be going into how the basic functionality works since we discussed that code when it was originally introduced.

The basic internal structure of our four “blueprint” VIs will be largely similar. We will call the VI that creates the object we want, followed by a dynamic dispatch method that does what we need done. However, note that there isn’t (and shouldn’t be) a direct one-to-one correlation between the blueprint VIs and the underlying methods. We still need to be looking for ways to minimize redundant code. Back when we were setting up the testbed application originally, we considered the need for unstructured configuration values — the kind you would usually store in an INI file. In fact, we implemented that capability and used it to store the default sample period. Consequently, one of the three methods we end up implementing will be used in a couple of different places now, and will be reusable in the future for other unstructured configuration data. So, let’s start with that one.

Get Misc Setups

Creating this method follows the same procedure as we have used before, except that this method has two input parameters and an output string. So the finished parent method has a front panel that looks like this:

misc read front panel

Another difference with this method is that if it is called for a subclass that does not override it, the parent VI does not provide any default functionality. To see what I mean, check out the VI’s block diagram: it does nothing. This structure might seem rather pointless, but in reality it can at times be pretty handy. Say you have a method that is really only needed by one subclass. Without dynamic dispatch you would have to put a case structure around the code so the VI is only called in that one particular circumstance. However, with dynamic dispatch, this selection takes place automatically and without cluttering up the code with case structures.

With the parent VI (Read Misc Settings.vi) created and saved, we can now build the two subclass overrides. Starting with the Text version, to read the data from the INI file, we create an override VI in Config Data_Text.lvclass containing code that uses the built-in configuration VIs to fetch the data:

read misc - ini

But wait! This won’t work. Notice that the path to the INI file comes from the class data — which is fine, except that we forgot to initialize it. I wanted to highlight this point because it is a very easy and common mistake to make. Initializing the DVR is not the same as initializing the data in the DVR. All we have to do to fix this memory lapse is modify the class’ version of the Initialize New method, like so:

finish initializing the DVR - text

The subVI with the green banner builds the path to the application’s INI file programmatically. By the way, in case you’re wondering about whether the same issue applies to the other override to this method (the one for databases), the answer is “Yes”. However the solution there is a bit more complex, so we’ll deal with it in a moment.

The other thing to notice about the setup-reading method’s code is that if the read doesn’t find the desired value in the INI file, it creates the value and gives it a default. Adding this extra bit is often helpful for recovering from situations where a user has gotten in and mucked around with the INI file and deleted something they should not have. In any case, to see this code in action, we need to finish up the code in the Get Default Sample Period.vi “blueprint” VI:

get default sample period

You can now run this VI and get a valid result from the INI file. Of course, if you set the configuration data selector value in the INI file to Jet, you will get a zero back. This result is because we haven’t provided an override for that subclass yet. Note that you do not get an error, because not overriding a parent method is a perfectly valid thing to do.

Updating the Database Initialization

But we need the database data too, so let’s backtrack to the Initialize New VI in Config Data_DB.lvclass. The class data for this subclass is a string that the code will use in one way or another to connect to the database. However, there are three possible sources for this string:

  • Most ADO interfaces use a rather complex and specialized connection string that can identify logical names, network paths, security parameters and a lot more.
  • Although Jet does use ADO drivers, its connection string is much simpler. In fact, with the exception of the file path, it can be largely treated as a big string constant.
  • SQLite (which we aren’t going to support right now, but which exists in the class hierarchy) doesn’t use any ADO parameters. Keeping with its minimalist approach, all it needs is a file path.

So what are we going to do? Well, if you said this sounds like a job for dynamic dispatch, you’re right! We need to create a method that will derive the correct string depending on which specific subclass is present. So let’s create a virtual folder and a subdirectory named _protected in Config Data_DB.lvclass with Protected access scope. Inside the virtual folder we will create a new VI using the dynamic dispatch template called Get Connection String.vi, and save it in the subdirectory. In addition to the I/O the template provides, we need to add a string output:

get connection string

After saving this new parent class method, we can fill in the overrides. However, that code is pretty trivial, mostly scavenged from the original code, and has very little to do with OOP. Consequently, I won’t take time here to highlight it, but feel free to check it out for yourself. Then, with this work completed, we can finish the initialization code.

finish initializing the DVR - jet
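In outline, those connection string overrides boil down to something like this (sketched in PHP-style pseudocode; the Jet string shown is the standard ADO provider form, but your own provider string will vary):

<?php
	abstract class ConfigDataDbBase {
		abstract protected function getConnectionString(string $path): string;
	}
	class ConfigDataJet extends ConfigDataDbBase {
		protected function getConnectionString(string $path): string {
			// With the exception of the file path, the Jet string is a big constant
			return "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" . $path;
		}
	}
	class ConfigDataSqlite extends ConfigDataDbBase {
		protected function getConnectionString(string $path): string {
			return $path;   // SQLite needs nothing but the file path
		}
	}
?>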

Accessing the Database

Although we will be using a Jet database, note that the override for the Read Misc Settings method will reside in Config Data_DB_ADO.lvclass, not Config Data_DB_ADO_Jet.lvclass. The reason for this placement is that, with the exception of the connection string (which we handle during initialization), the query logic is the same for Jet as for most other ADO databases. By the way, what would we do if we ever did find a DBMS that needed something different? That’s right, we would just create a subclass for it and override the method in that new subclass.

In any case, most of the code we need now was lifted wholesale from the old configuration library:

read misc - jet

And with that, we are done with the first of our blueprint VIs. This is also the procedure we will follow for the VIs that read the error handling parameters, read the startup processes, and load the machine configurations for each state machine clone. Again, I won’t step through the creation of this code because, from the standpoint of this post’s main topic (object-oriented code development), there is nothing in these that is any different from the one we did go through. As a friend of mine used to say, “From here on, it’s just a matter of turnin’ the crank.”

Testing

To make the results of these changes easier to verify, I have made slight differences between the configuration in the INI file, and the configuration in the database. Notice that if you start the application with the INI file set to Text, the application launches with only two TC state machines running and the acquisition sample rate defaults to 500 msec. However, a setting of Jet produces the functionality we have seen before (3 state machines and a 1000 msec default sample rate).

Notice also that you have to make the INI file setting changes before starting the application. The logic could certainly have been written to make the settings reconfigurable on the fly, but it would have added another level of complication, so I will leave that, “…as an exercise for the reader…”.

Before tagging this release of the code in SVN, I also went through the “recycled” code and made sure that the VIs were all saved in their proper locations and all the redundant code had been removed. A project window feature that made the latter task much easier was the ability to right-click on a folder and have LabVIEW list all the files with no callers. One thing to be careful about, however, is deleting “unused” files that are in classes. The logic behind this feature can’t identify VIs that are being dynamically linked — which pretty much describes dynamic dispatch VIs.

Finally, before closing out this post, I want to remind you of something important. I make no claim that this code is the best implementation for all possible situations. Although a lot of it is based on software that I developed for deliverable systems, the point of this blog is to provide you with examples, demonstrations, and models that you can then customize and mold to fit your customers’ specific needs. In music, there is the idea of “variations on a theme”. In fact, in a few cases the variations have become more famous than the original. So take the themes I offer here and feel free to deconstruct, rearrange and reassemble them into something that is new and exciting.

Testbed application Release 15
Toolbox Release 7

The Big Tease

What if I told you that LabVIEW incorporates a feature that can improve the quality of your code and reduce errors by helping to validate your math? Hey, this is a no-brainer! Everybody can use help validating their code. But, what would you say if I told you that most people never use this feature? Crazy…

Until Next Time…

Mike…

How to Make Dynamic SubVIs

The last two posts discussed different ways to use dynamic linking and calling in situations where you want the target VI to run in parallel with the rest of the application. Another major use case for dynamic calling is to create software plugins.

I use the term “plugin architecture” to indicate a technique that strives to simplify code by facilitating runtime changes to limited portions of the code while allowing the basic logic for the function as a whole to remain intact and unchanged. For example, say you are implementing a control algorithm that has, for the sake of argument, two inputs: a position and a load. As long as the readings are accurate, timely and in the proper units, most of the control algorithm doesn’t care how these inputs are measured. It would, therefore, simplify the code base as a whole to incorporate a technique that allows the reuse of all the common code by simply “plugging in” different acquisition modules that support the various types of sensors that can measure position and load.

This isn’t rocket science

This goal might sound lofty, but the hurdle for getting into it is actually pretty low. In fact, when I teach a LabVIEW Core 1 and 2 week, I will sometimes introduce what LabVIEW can do by demonstrating a simple plugin architecture that only uses concepts introduced and discussed in those two classes.

I start the demo by opening an application that I have written to implement a simple calculator. It has two numeric inputs labeled “X” and “Y” and an indicator labeled “Output”. The only other control is a popup menu that lists the two math operations it can perform: addition and subtraction. I first show that you can use the simple program to add and subtract numbers.

I then comment that it would be nice if my program could multiply numbers as well. So I drag the program window to one side but leave it running. I then open a new window and add two inputs and an output (labeled like the ones in the program). On the block diagram I drop down a multiply node, wire it up, and save the VI with the name Multiply.vi. After closing that VI’s front panel, I tell the class that although they just watched me create the ability to multiply numbers, the program — which was running the whole time — has already acquired it. At this point, I pull the program’s front panel back over and show that the menu which only moments before said “Add” and “Subtract” now has a third option: “Multiply”. Moreover, the new selection does indeed allow me to multiply numbers. Finally, I make the point that this capability to dynamically expand and change functionality was created using nothing but things that they will be learning in the coming week.
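I won’t spoil the Friday reveal here, but in rough PHP-flavored pseudocode the trick amounts to something like this (run_dynamically() is an invented stand-in for the dynamic-call nodes we cover below, and the folder name is just an example):

<?php
	// The popup menu is nothing but a directory listing of plugin VIs
	$operations = [];
	foreach (glob("Plugins/*.vi") as $path) {
		$operations[basename($path, ".vi")] = $path;   // e.g. "Multiply" => ".../Multiply.vi"
	}
	// When the user picks an operation, the matching VI is called dynamically
	$result = run_dynamically($operations["Multiply"], ["X" => 6, "Y" => 7]);
?>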

While the students at this point might not be cheering, they are awake and have some motivation to learn. You better believe that when I show them the code Friday afternoon and they can recognize how the program works, they are excited. So let’s see what I can do now to motivate and excite you…

It’s all about the packaging

As we begin to get into the following use cases, you should notice that the actual code needed to implement each solution is not really very complicated. In fact, in the following sections we will be dealing with many of the same functions as when we were dynamically launching separate processes. The tricky part here is that we need to fit all the data management and launch logic in a space that would otherwise be occupied by a single VI. For this reason we need to be very careful when packaging this code. To demonstrate what I mean, let’s consider three common use cases that all call a simple test VI (called, amazingly enough, Test.vi) that simply returns a random number. I have ordered these cases such that each example builds on what went before:

System Initialization

I like to start here when explaining these concepts because it is the simplest structure and so is easy to understand.

System Initialization

Here you see laid out all the basic pieces required to dynamically call a subVI. The first VI (the one marked “Dummy”) is the one we will use throughout this post to represent the data management needed to get the path to the VI that will be called dynamically. Depending upon your application, this path could come from an INI file, a database, or what have you. We won’t go into a lot of detail with that VI because we have discussed the various techniques pretty thoroughly in other posts.

You should recognize the next node (Open VI Reference), as we have already used it a lot. It opens a reference to the VI specified in the input VI name or path. This action also has the effect of loading the VI into memory. Just as statically linked subVIs can be reentrant, you can specify reentrancy here as well, if needed.

Once the VI is open, we pass its reference to a node that looks a lot like the Start Asynchronous Call node we have already used. This node is Call by Reference, and like the prior one it expects a strict VI reference and in return provides a connector pane for passing data to the target. However, unlike the asynchronous call, this node always waits until the target finishes executing before continuing; hence its representation of the target’s connector pane also allows us to get data back from the target.

Finally, in order to remove the target VI from memory, we need to close the reference when we are done with it — which is what the last node does.

When using this technique, it is important to keep in mind that, because it loads, executes and unloads the VI all at once, it is very inefficient when used in a situation where it will be run multiple times. Sort of like trying to use an Express VI inside a loop — but worse.

However, one place where this technique can be used very effectively is in things like system initialization. It’s not uncommon for large and complex applications to require an equally large and complex initialization process, especially the first time they are run. Although you could just write a VI to do this initialization and install it in the program, why should you burden your program with a bunch of code that may only execute one time — ever? This technique allows you to load and execute a VI only if you actually need it.

Function Substitution

This use case is perhaps the most common of the three. It uses the same nodes we just saw, but because it can be used essentially anywhere, the packaging has to be very efficient. A solution I often use is to create a mini state machine that has states for loading, executing and unloading the target VI, as well as one more for deciding what to do first. The user interface has one (enumerated) control that allows you to specify whether you want to Load the target, Run the target, or Unload the target.

The inputs are pretty simple, and the state-machine logic mirrors that simplicity. If the function requested is Load, the Startup state transitions directly to the Open VI state which opens and buffers the VI reference. Here is the code for these two states:

Function substitution - Startup - Load

Function substitution - Open VI

If the function requested is Run (the input’s default value, by the way), the Startup state first checks to see if the VI reference is valid and if it is, transitions to the Execute state.

Function substitution - Startup - Run

Function substitution - Execute

If the reference is not valid, execution instead continues with the Open VI state which returns to the Execute state after opening the reference.

Finally, if the function requested is Unload, the Startup state transitions directly to the Close VI state:

Function substitution - Startup - Unload

Function substitution - Close VI

The result of all this work is a flexible VI that can be used in a variety of different ways depending upon system requirements. For example, if the target VI loads quickly or the calling process doesn’t have tight timing constraints, you can just install it and let the first call both load the VI into memory and run it. Alternatively, if you don’t want the first call to be delayed by that initial load, you can call it once in a non-time-critical part of the code to just Load the target, and then Run it as many times as you like later.

In the same way, the Unload function, which normally isn’t needed, can be used to release the memory resources used by a large dynamic subVI when you know that you aren’t going to be needing it for a while.
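Abstracting away the LabVIEW specifics, the whole Load/Run/Unload pattern amounts to caching a reference, something like this sketch (load_target(), run_target() and close_target() are invented stand-ins for Open VI Reference, Call by Reference and Close Reference):

<?php
	class DynamicCall {
		private $ref = null;
		public function load(string $path) {
			if ($this->ref === null) {
				$this->ref = load_target($path);   // open and buffer the reference
			}
		}
		public function run(string $path) {
			$this->load($path);                    // lazy-load on the first call if needed
			return run_target($this->ref);
		}
		public function unload() {
			if ($this->ref !== null) {
				close_target($this->ref);          // release the memory resources
				$this->ref = null;
			}
		}
	}
?>

The point of the cache is that the expensive step (loading) happens once, at a time of your choosing, while the cheap step (running) can repeat as often as needed.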

This approach could even be extended to create a simple test executive that dynamically loads and unloads whole sets of VIs. In such a situation, though, you probably don’t want to be tied to a single connector pane, so you should consider changing the VI reference to non-strict and modifying the Execute state to use our old friend the Run VI method, like so:

Function substitution - Execute - Run VI Method
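In the Python analogy, that change amounts to dropping the fixed parameter list and matching values up by name, much as the Run VI method sets control values by label rather than through a fixed connector pane:

    import importlib

    def execute_by_name(module_name, func_name, **params):
        # non-strict reference analogue: no fixed connector pane, so the
        # parameters are matched up by name at run time
        mod = importlib.import_module(module_name)
        return getattr(mod, func_name)(**params)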

Interprocess Communication

This use case is in many ways the most expansive of the three because it supports communication between different processes regardless of their physical location. In terms of code, there is very little difference between this case and the previous one. In fact, the only change that is really required is to the Open VI state. This is what the network-enabled version looks like:

Function substitution - Open VI - IPC

That new icon in front of Open VI Reference is the one that makes the magic happen. Its name is Open Application Reference and its job is to return a reference to the instance of LabVIEW that is hosting the VI you want to access. To make this connection, you need to know the host name or IP address of the computer running the LabVIEW application, and the port number the instance is using for incoming connections. If the application you want to access is on your own computer, you can either leave the host name string empty or use the name localhost. The port can be any number that isn't already being used. One common mistake is to simply accept the default value, which is the port that LabVIEW itself listens to by default. This causes problems at test time because there are then two processes (the LabVIEW development environment and the application) trying to use the same port.

An important point to remember is that while the Call by Reference node passes data back and forth, the called VI actually executes on the remote system. Hence, this sort of operation works independently of the target system's operating system, or even the version of LabVIEW it is running.
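If you have worked with remote procedure calls in other environments, the shape of the operation will feel familiar; in Python terms it is roughly analogous to this (an analogy only, since LabVIEW's networking is its own protocol; the host, port and function here are invented):

    import xmlrpc.client

    # analogue of Open Application Reference: a host name (or "localhost")
    # plus a port that nothing else is using
    remote = xmlrpc.client.ServerProxy("http://localhost:3365")

    # analogue of Call by Reference: the inputs cross the wire, the code
    # runs on the remote system, and the results come back
    # result = remote.read_status(42)   # hypothetical remote function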

The main task in setting up a system to use this technique is properly configuring the application to which you are going to be connecting, though this often doesn't require any code changes. That remarkable benefit is possible because the underlying networking is handled by the runtime engine, not the application code you write. I have, on a couple of occasions, come into a place and added remote-control functionality to an old application that was never designed with that capability in mind.

You will, however, have to make changes to the target application's INI file to enable networking, and you will obviously need access to the application's source code so you know which VIs to call. Likewise, if you are accessing a computer outside your local network, there can be a variety of network communication details to sort out that are beyond the scope of this post. One other thing to bear in mind is that it is always easier to connect to a remote VI if it is already in memory. The reason is that if the VI is already in memory, all you need to know is the VI's name. If you are also loading it into memory, you have to know the complete path, the notation for which can change between platforms.
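For reference, the VI Server settings live in a section of the INI file named after the executable; the tokens look something like the following, though you should verify the exact names against the LabVIEW documentation for your version (the application name and port here are, of course, placeholders):

    [MyApplication]
    server.tcp.enabled=True
    server.tcp.port=3365
    server.vi.access="+*"

Note that the port is deliberately not LabVIEW's default, for the reason discussed above.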

But assuming you establish the connection, what exactly can you do with it? The answer to that lies in what VIs you choose to execute from the remote process. If you execute a VI that fires an event, that event gets fired on the remote system, so you can use it as a channel for controlling that application.

Alternately, say you choose to execute a VI that is a functional global variable (FGV). Depending on whether you are reading from or writing to the FGV, you are now either passing data to the remote system or collecting data from it. By the way, this is the method I still typically use for passing data over a network between LabVIEW applications, not network-enabled shared variables. Unlike that later "enhancement", a dynamically called FGV doesn't need to be "deployed", is more memory efficient, has a smaller code footprint, is far easier to troubleshoot if there is a problem, and works across all releases of LabVIEW back to Version 6.
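Conceptually, an FGV is nothing more than a non-reentrant routine wrapped around persistent storage, so its read/write behavior reduces to something like this sketch (hypothetical names again):

    _shift_register = {}   # persists between calls, like an FGV's shift register

    def fgv(command, tag, value=None):
        # "Write" stores the data; "Read" simply returns what is stored
        if command == "Write":
            _shift_register[tag] = value
        return _shift_register.get(tag)

Executed through the Call by Reference node, a Write performed from one machine makes the data available to Reads on the other.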

The only real limit to what you can do with this technique is your imagination.

So there you have the three major use cases for the dynamic linking of subVIs. This discussion is by no means a complete consideration of the topic, but hopefully it will whet your appetite to experiment a bit. The link below is to an archive containing all three examples to get you started.

Dynamic Linking Examples

Next time: Command Line Parameters are not a relic of the past.

If you have spent any time poking into some of the more esoteric corners of the application builder, you may have noticed a checkbox in the Advanced section labeled "Pass all command line arguments to application", and wondered what that is all about. Well, wonder no more: that little checkbox is what we are going to discuss next time out, and while we're at it I'll cover a really neat use for it.

Until next time…

Mike…

Adding Basic Error Handling

Now that we have a basic application running, there is something important we need to add. The first release of the testbed application makes a mistake that is common to much commercial LabVIEW code: it assumes that nothing will ever go wrong. To address that failing, we are going to look at a couple of refinements that will report and record errors, as well as provide the hooks for later refinement of the error-handling process.

Startup Errors

The acquisition process right now simply passes a random number to the display process. Consequently, the acquisition needs no initialization, and so presents no real opportunity for an error to occur. However, if you decide to use this testbed as the basis for a real-world application, you will discover that most kinds of acquisition require some sort of initialization when the process starts.

Now the most obvious place to put this initialization is before the event registrations. Unfortunately, this placement can result in a situation where an error early in the initialization keeps your events from registering, so you have no way of stopping the process, or even of reporting that an error has occurred.

You could, of course, put the initialization that can generate errors after the event registrations, and so preserve the functionality of your events. As it turns out, though, that move really isn't very helpful because you still don't have any way of reporting the error, stopping the process, or retrying the initialization. Remember, it is perfectly possible to have transient problems, like an instrument that was turned off or a momentary network outage.

What I want to show you is an easy way to leverage something you already have in the code to handle initialization errors. I’ll first show you an easy technique that simply stops the process and closes it. This technique is useful for many situations. Then I’ll discuss how to build on that technique to implement a more complex approach that supports retrying the initialization process. Both techniques are based on the fact that all event structures can have a timeout event.

Stop Process on Errors

Implementing the first technique is easy because the event structure already has a timeout case in it. All we have to do is add a little functionality to it. Since the timeout right now is fixed at 1000 msec, the first thing we need to do is provide a way of changing the value being passed to the timeout terminal. The idea is that we will carry the timeout value in a shift register that is initialized with a zero. This setting means that the first time through the loop, the timeout will fire immediately. Inside the timeout event, if the timeout value for the current iteration is zero, the code knows it has just finished initializing and so does nothing but check the state of the error cluster. If it is showing an error, the process quits. This case also sets the timeout shift register to the normal 1000 msec value so that, if there are no errors, the loop will continue with its normal operation. This is what the check for initialization errors looks like:

Acquire Data w-error handling

The default case (not shown) for the inner case structure contains the code that is currently in the timeout event case.
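In sketch form, the modified loop behaves something like this, where wait_for_event stands in for the event structure and the other callables are placeholders for the real initialization and acquisition code:

    TIMEOUT = "timeout"

    def acquisition_process(wait_for_event, initialize, acquire_data):
        error = initialize()      # start-up code ahead of the loop
        timeout_ms = 0            # shift register: 0 = just initialized
        while True:
            event = wait_for_event(timeout_ms)
            if event == TIMEOUT:
                if timeout_ms == 0:
                    # first pass: only check whether initialization failed
                    if error:
                        return error       # stop the process
                    timeout_ms = 1000      # no error: resume normal operation
                else:
                    acquire_data()         # the default case: normal work
            # ...handlers for the registered user events go here...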

Retry on Error

I haven’t implemented the complete retry technique since it tends to be very specific to what a given process is doing, but based on the previous simple example it isn’t hard to visualize some of the tweaks the code would need.

To begin with, in the simple example there were only two possible states for the timeout case to address:

  • The VI has been initialized
  • The VI has not been initialized

Because there are only two possibilities, it is logical — and valid — to essentially “encode” the knowledge about whether the VI is initialized in the timeout value, where a zero means it is not initialized and any other value means it is initialized.

Now, however, the situation is more complex because there can be two reasons why the VI is not initialized: the VI could be just starting, or the last attempt at initialization could have failed. Additionally, if a previous initialization attempt has failed, you might not want the code to sit there forever retrying, so it needs to know how many times it has failed. Finally, if you are going to retry the initialization at some interval, you need to be able to define that interval independently of the normal loop timeout.

Clearly, we are going to need more than one value to represent all that information, so I would recommend two numbers (in separate shift registers). One would be the timeout value and it would have no meaning other than how long to wait for the timeout. The second numeric value would encode the retry state like so:

  • ..-1 = The VI has been initialized
  • 0 = All retry attempts have been exhausted
  • 1.. = The number of retry attempts remaining

It is this new second value that would now control the inner case structure such that the acquisition code would go in the ..-1 case and pass both shift register values through unmodified. The 0 case would stop the loop, and the 1.. case would attempt to initialize the VI. If the attempt fails, the logic would set the timeout to the retry timeout value and decrement the value controlling the case structure. If the attempt is successful, the logic would set the timeout to the acquisition timeout value, and set the control value to -1.
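Put into the same sketch form as before (wait_for_event and friends remain placeholders), that logic would look something like this:

    TIMEOUT = "timeout"
    RETRY_MS = 5000       # retry interval, defined independently
    NORMAL_MS = 1000      # normal acquisition timeout

    def acquisition_with_retry(wait_for_event, initialize, acquire_data):
        timeout_ms = 0    # shift register 1: only controls the wait
        retries = 3       # shift register 2: -1 done, 0 give up, >0 tries left
        while True:
            if wait_for_event(timeout_ms) == TIMEOUT:
                if retries <= -1:
                    acquire_data()          # the ..-1 case: normal operation
                elif retries == 0:
                    break                   # the 0 case: attempts exhausted
                else:
                    # the 1.. case: attempt to initialize the VI
                    if initialize():        # a returned error means it failed
                        timeout_ms = RETRY_MS
                        retries -= 1
                    else:
                        timeout_ms = NORMAL_MS
                        retries = -1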

And remember, this example is but one of many possible variations on the theme.

Displaying and Recording Errors

Having collected errors, your application also needs a mechanism for displaying them to the operator and/or recording their occurrence. I have seen approaches that try to decentralize this functionality, but for many reasons the results are often less than ideal. Chief among those reasons are the additional errors that can result from trying to access a common resource, such as a file or database, from multiple locations in the code. The solution to this conundrum is to create a separate process, the sole purpose of which is to report and display errors. That is what I am presenting here:

Exception Handler

As you can see, the code is very simple. When something fires the error event, the parameters needed to make the error handler VI work are read from the system configuration information. This information consists of what the VI should do with the error, and the labels for the dialog box buttons. The error handler VI itself (Show Error.vi) is a simplified version of the error handler that ships with LabVIEW. For safety reasons, the VI no longer has the ability to abort the running application.

The process's event structure services only three events:

  1. The error event (shown) — <sys:System Error>:User Event
    This event is fired by other processes when they detect an error in their operation. The first VI it calls can either display the error to the operator in a dialog box, save it to a file, or both. The output from the VI is an enumeration that specifies whether to continue execution of the application or command a shutdown. If the reporting VI presents the error in a dialog box, this output is set in accordance with the user's selection. If the reporting VI is only recording the error, the output is always Continue.
  2. The application stop event — <sys:Stop Application>:User Event
    This event fires whenever the application is stopping. The only thing unique about this process's implementation of the event handler is that before stopping itself, it pauses and waits 5 seconds for all the other processes to at least get started on their shutdown operations.
  3. The Stop button value change event
    Operationally, there is no real need for this button; the front panel will, after all, be closed. Its main use is to assist in troubleshooting or preliminary testing where the formal shutdown logic may not yet be complete. It simply fires the shutdown event.

But how do you pass errors to this error handler process? Like all events, the Handle Errors event has a Generate Event.vi. In this case, however, the operation is tweaked slightly.

Generate Error Event

Because we only want the event generated when an error occurs, the code is structured such that the event is only fired if there is an incoming error, in which case the data for the event is the incoming error cluster. The output error cluster, however, is the result of the event generation. If there is no incoming error, the error cluster is passed through unmodified.
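The tweak is small enough to capture in a few lines; in this sketch, fire_user_event is a placeholder for the actual event-generation logic:

    def generate_error_event(fire_user_event, error_in):
        # only fire the event when an error actually occurred
        if error_in is not None:
            # the event data is the incoming error cluster; error out
            # reflects the result of the event generation itself
            return fire_user_event("System Error", error_in)
        # no incoming error: pass the error cluster through unmodified
        return error_in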

The other way this VI varies from the typical pattern is that, because it is likely to be used in multiple locations, I have set its execution to be shared clone reentrant. This causes LabVIEW to preallocate a pool of clones that can be used as needed. This setting is useful where you don't want instances of the code blocking one another, but don't need each instance to have a unique memory space.

The event generator is placed in the acquisition and display VIs as the last thing to execute before the loop repeats. The testbed code with the two modifications discussed in this post is available from:

http://svn.notatamelion.com/blogProject/testbed application/Tags/Release 2

You should know the drill for grabbing a copy of it by now.

Until next time…

Mike…