If the socket fits, wear it…

One of this blog’s recurring themes is the importance of modularity as an expression of the age-old tactic of “divide and conquer”. What is perhaps new (or at least daunting) to some readers is the idea of spreading tasks across not just separate processes on the same computer, but across multiple networked computers. Of course, if this strategy is to be successful, the key is communications, and to that end we have been examining ways of incorporating remote access capabilities into our testbed application.

Last time out, we implemented the first interface for remote applications to monitor and control our application. That interface took the form of a custom TCP protocol that used packets of JSON data to carry messages over a vanilla TCP connection. I started there because it provides a simplified mechanism for exploring some of the issues concerning basic code structure. Although this interface worked well, and in fact would prove adequate for a wide variety of applications, it did exhibit one big issue. To wit, clients had to be written in a specific way in order to use it. This fact is a problem for many applications because users are growing increasingly reluctant to install special software. They want to know why they should have to load special code just to do their job. The way they see it, their PCs (and cell phones for that matter) come with a bunch of networking software preloaded on them – and they have a valid point! Why should they have to install something new?

A complete answer to that question is far beyond the scope of this post, but we can spend a few useful moments considering one small niche of the overall problem, and a standardized solution to that problem. Specifically, how can we leverage some of those networking tools (read: browsers) to support remote access to our testbed application? As we have discussed before, the web environment provides ample tools for creating some really nice interfaces. The real sticking point is how that “really nice” interface can communicate with the testbed application. You may recall that a while back we considered one technique that I characterized as a “drop box” solution. The idea was to take advantage of the database underlying a web application by using it to mediate the communications. In other words, the LabVIEW application writes new data to the database and the web application reads and displays the data from the database – hence the “drop box” appellation.

While we might be able to force-fit this approach into providing a control capability, it would introduce a couple of big problems: First, it would mean that the local application would have to be constantly polling a remote database to see if there have been any changes. Second, it would be really, Really, REALLY slow. We need something faster. We need something more interactive. We need WebSockets.

What are WebSockets?

Simply put, the name WebSockets refers to a message-based protocol that was standardized in 2011 as RFC 6455. The protocol that the standard defines is low-overhead, full-duplex and content agnostic, meaning that it can carry data of any type – even JSON-encoded text data (hint, hint).

An interesting aspect of this protocol is that its default port for establishing a connection is port 80 – the same as the default port for HTTP. While this built-in overlap might seem confusing, it actually makes sense. You see, when a client initiates an HTTP connection, the first thing it does is pass to the server a number of headers that provide information on the requested connection. One of those headers allows the client to request an Upgrade connection. The original purpose of this header was to allow the client to request an upgraded connection with, for example, enhanced security. However, in recent years it has become a mechanism to allow multiple protocols to listen to the same port.

The way the process works is simple: The client initiates a normal HTTP connection to the server but sets the request headers to indicate that it is requesting a specific non-HTTP protocol. In the case of a request for the WebSockets protocol, the upgrade value is websocket. The server responds to this request with a return code of 101 (Switching Protocols). From that point on, all further communications are made using the WebSockets protocol. It is important to note that while this initial handshake leads some to assume that WebSockets is in some way dependent upon, or rides on top of, the HTTP protocol, such is not the case. Aside from the initial connection handshake, the WebSockets protocol is a distinct process that shares nothing with HTTP. Consequently, while the most common application of the technique might be web-based client-server operation, the WebSockets protocol is equally well-suited for peer-to-peer messaging. The only limitation is that one of the two peers needs to be able to respond correctly to the initial handshake.
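
For reference, the opening handshake is nothing more than a pair of HTTP-style messages. It looks something like this (the path is just an example, and the key and accept strings shown are the sample values from RFC 6455):

GET /testbed HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=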

It is also worth understanding why the basic idea of using Port 80 for the initial connection is significant. A conversation on Stackoverflow gives a pretty good explanation of several issues, but for me the major advantage of using port 80 is that it avoids IT-induced complications. Many corporate IT departments will lock down ports that they don’t recognize. While there are some that try to lock down port 80, it is much less common. Before continuing on, if you’re interested, you also can find the details of the initial handshake here.

The LabVIEW Connection

Ok, so it sounds like WebSockets could definitely have a place in our communications toolbox, but how are we going to take advantage of it from LabVIEW? The answer to that question lies in the work of LabVIEW CLA Sam Sharp. He has developed a set of “pure G” VIs that allows you to implement either side of the connection. Because these are written in nothing but G, there are no DLLs involved so they can run equally well on any supported LabVIEW platform. Making the deal even sweeter, he has documented his code, created a tutorial on them, released his VIs for anyone to use, and all the compensation he requests is “…it would be great if you credit me…”. So, Sam, may you have a million click-throughs.

The following discussion is written around Sam’s VIs, which I have converted to LabVIEW 2015. One quick note: if you don’t or can’t use VIPM, you can still get at the contents of the *.vip file – all you have to do is change the “v” in the extension to a “z” (turning it into a *.zip file) and you are good to go. As a first taste of how these VIs work, let’s look (like we did with the TCP example last time) at an over-simplified example to get a sense of the overall logical flow.

The Simplest WebSockets Server

For our purposes here, the testbed application will be the “server” so our code starts by listening for a connection attempt on the default Port 80. When it receives a connection, a reference to that connection is passed to a VI (DoHandshake.vi) that implements the initial handshake to activate the WebSockets protocol. Note that a key part of this process is the passing of a couple of “magic strings” between the client and server to validate the connection and protocol selection.

With the handshake completed and both ends of the connection satisfied that the WebSockets protocol is going to be used, the following subVI (Read.vi) reads a data packet from the client that, in our application, represents a data or control request. Next comes the subVI (Write.vi) that writes a response back to the client. Finally the code calls a subVI (Close.vi) that sends a WebSockets command to close the connection, and then closes the TCP connection reference that LabVIEW uses.
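
LabVIEW block diagrams don’t paste well into a blog post, so if it helps to see that same listen-handshake-read-write-close sequence in text form, here is a minimal sketch using Node.js and the third-party ws package. Treat it purely as an illustration of the logical flow; the real testbed code is pure G, and the port and response string here are placeholders.

// Minimal WebSockets "server" sketch using the Node.js ws package.
// The package performs the opening handshake for us.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 80 });   // listen (port 80 may need admin rights)

wss.on('connection', function(ws) {
  ws.on('message', function(request) {            // read one request from the client
    console.log('Received: ' + request);
    ws.send('{"Error":0,"Data":""}');             // write one response
    ws.close();                                   // send the WebSockets close and drop the link
  });
});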

Building the Interface

To build this bare logic into something usable, the structure of the server task is essentially identical to that of the TCP process we built last time. In fact, the only differences between the two are the port to which they are listening, and the specific reentrant handlers that they launch in response to a TCP connection. So let’s concentrate on that alternate process. During initialization, the handler calls the subVI that implements the initial handshake.

Handler Initialization

In addition to the connection reference, this routine also outputs a string that is the URI that was used to establish the connection. Although we don’t need it for our application, it could be used to pass additional information to the server. Once initialization is complete the main event loop starts, but unlike the TCP handler we wrote earlier, it is not based around a state-machine structure.

Main Event Loop

While we could have broken up the process into separate states, the fact that Sam has provided excellent subVIs implementing the read and write functionality makes such a structure feel a bit contrived – or at least it does to me. When the timeout event fires, the code waits 500 msec for the first user data coming from the connection. If the read times out, the loop waits another 500 msec and then tries again. This polling technique is important because it allows other things (like the system shutdown event) to interrupt the process. Likewise, because we are waiting for a response that is, at least potentially, coming from a remote computer, the polling allows us to wait as long as necessary for the response.

When the request data does arrive, the JSON data string is processed by a pair of subVIs that we originally created for the TCP protocol handler. They create the appropriate Remote Access Commands object and pass it on to the dynamic dispatch VI (Process Command.vi) that executes the command and returns the response. The response data is next flattened to a JSON string and written to the connection. Because the current implementation assumes a single request/response cycle per connection, the code closes the WebSockets connection and the TCP connection reference. However, it would be easy to visualize a structure that would not close the connection, but rather repeat one of the data read commands at a timed interval to create a remote “live” interface.
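
To make that last idea a little more concrete, here is a rough browser-side sketch of what such a “live” client might look like using the native WebSocket object. The one-second interval and the hard-coded target are assumptions for illustration only, and remember that the server as currently written closes the connection after a single exchange.

// Hypothetical "live" client: keep the connection open and poll for fresh data.
var ws = new WebSocket('ws://localhost:80');

ws.onopen = function() {
  setInterval(function() {
    // re-issue the read command once a second
    ws.send('"Read Graph Data":' + JSON.stringify({"Target": "Sine Source"}));
  }, 1000);
};

ws.onmessage = function(evt) {
  var response = JSON.parse(evt.data);   // parse the reply and redraw the graph here
};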

In terms of the errors that can occur during this process, the code has to correctly respond to two specific error codes. First is error code 56, a built-in LabVIEW error that flags a network operation timeout. Because this is the error that is generated if the server hasn’t yet received the client’s request, the code basically ignores it. Second is error code 6066, a WebSockets-specific error that the library uses to flag the situation, defined in RFC 6455, where the remote client closes a WebSockets connection. Our code responds by closing the TCP connection reference and stopping the loop.

Testing our Work

Now that we have our new server up and running we need to be able to test its operation. However, rather than creating another LabVIEW application to act as the test platform, I built it into a web application. The interface consists of a main screen that provides a pop-up menu for selecting what you want to do and 5 other screens, each of which focuses on a specific control action. As these things go, I guess it’s not a great web application, but it is serviceable enough for our purposes. If you need a great application, talk to Sam Sharp – that’s what his company does.

The HTML and CSS

As I have preached many times before, one of the things that makes web development powerful is the strict “division of labor” between its various components: the HTML defines the content, the CSS specifies how the content should look, JavaScript implements client-side interactivity, and a variety of languages (including JavaScript!) provide server-side programmability. So let’s start with a quick look at the HTML that defines my web interface, and the CSS that makes it look good in spite of me… In order to provide some context for the following discussion, here is what the main screen looks like:

Main Screen

It has a title, a header and a pop-up menu from which you can select what you want to do. As a demonstration of the effect that CSS can have, here’s the part of the HTML that creates the pop-up menu.

<button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown">Available Actions<span class="caret"></span></button>
<ul class="dropdown-menu">
  <li><a href="ReadGraphData.html">Read Graph Data</a></li>
  <li><a href="ReadGraphImage.html">Read Graph Image</a></li>
  <li class="divider"></li>
  <li><a href="SetAcquisitionRate.html">Set Acquisition Rate</a></li>
  <li><a href="SetDataBufferDepth.html">Set Data Buffer Depth</a></li>
  <li><a href="SetTCParameters.html">Set TC Parameters</a></li>
</ul>

You’ll notice that the pop-up menu is constructed from two separate elements: A button and an unordered list – normally a set of bullet points – where each item in the list is defined as an anchor with a link to one of the other pages. However, as the picture shows, when this code runs we don’t see a button and a set of bullet points, we see one pop-up menu. How can this be? The magic lies in CSS that dramatically changes the appearance of these elements to give them the appearance of a menu. Likewise, some custom JavaScript makes the visually transformed elements behave like a menu. What is very cool, however, is that the resources making this transformation possible are part of a standard package, called Twitter Bootstrap, that is free for anyone to use. In a similar vein, let’s look at the page that displays a plot of data acquired from the testbed application:

Graph Screen - Blank

At the top of the screen there’s a small form where the user enters information defining the task to be performed, and a button to initiate the operation that the user is requesting. Below that form is a blank area where the software will draw the graph of the acquired data. Let’s look at two specific bits of HTML, first the code that builds the data entry form…

<form>
  <fieldset class="input-box">
    <legend>View Graph Data</legend>
    <input type="text" class="str-input" id="ipAddr" value="localhost">  Host</input><br>
    <input type="number" class="num-input" id="portNum" value="80">  Port Number</input><br>
    <select id = "targetPlugin">
      <option value = "Sine Source">Sine Source</option>
      <option value = "Ramp Source">Ramp Source</option>
      <option value = "Hen House TC">Hen House TC</option>
      <option value = "Dog House TC">Dog House TC</option>
      <option value = "Out House TC">Out House TC</option>
    </select><label>  Select Target for Action</label><br>
    <input type="button" id="just-submit-button" value="Send Command">
  </fieldset>
</form>

…and now the code that defines the graph:

<div id="container" style="min-width: 310px; height: 400px; margin: 0 auto"></div>

But, something seems to be missing. The first snippet will create data-entry fields and a button, but what happens when the button is clicked? Apparently, nothing. Likewise, consider the graphing element. We can see how large the area is to be, but where is the data coming from? And where are the graphing operations? To answer those questions, we need to look elsewhere.

The JavaScript

The power behind much of the web in general – and our application in particular – is the interpreted language JavaScript. In addition to having access to a wealth of resources in the browser environment, JavaScript can interact directly with web pages and their underlying structures. For folks that like to split hairs, JavaScript is “object-based” because it does support the concept of objects, but it is not “object-oriented” because it doesn’t explicitly support classes.

More important for what we are going to be doing is that it supports the concept of “callbacks” (read: User Defined Events). In other words, you can tell JavaScript to automatically perform functions when certain events occur. For example, because our JavaScript code is going to be interacting with the web page that loaded it, we need to be sure that the page is fully loaded before that code starts. In order to accomplish that goal, the JavaScript file associated with the page includes this structure:

$(window).load(function() {
	...  // a lot of stuff goes here
});

This code creates a callback for the .load() event. The parameter passed to the .load() event is a reference to the function that JavaScript will run when the event fires. As is common in JavaScript, the code declares the function in line so everything between the opening and closing curly brackets will be executed when the event fires. So after declaring a few variables the code includes this:

$("#just-submit-button").click(function(){
  //The code here retrieves all of the input data and formats the request.
  target = $("#targetPlugin").val();
  remAddr = $("#ipAddr").val();
  remPort = $("#portNum").val();
  jsonData = '\"Read Graph Data\":' + JSON.stringify({"Target":target}); 

  // the websocket logic
  wc_connect(remAddr, remPort, parseData);
  wc_send(jsonData);
});

So the first thing the code does when the page finishes loading is register another callback, but this one defines what JavaScript will do when the user clicks the button in the form. The first three lines read the values of the form data entry fields, and the fourth assembles that data into the JSON string that will be sent to the server. The last two lines are the interface to the WebSockets logic. The first of these lines establishes the connection to the server, while the other one sends the command. But what about the response? Shouldn’t there be a line with a command like wc_receive? You really should be expecting this by now: Inside the wc_connect command the code registers another callback to handle the response.
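
I’m not going to reproduce the actual websockets.js here, but to show the shape of the pattern, a wrapper along these lines would do the job. This is a simplified sketch built directly on the browser’s native WebSocket object, not the real file, and details like the send queuing and error checking are my own assumptions.

// Simplified sketch of a connect/send wrapper built on the native WebSocket API.
var socket;

function wc_connect(addr, port, parseData) {
  socket = new WebSocket('ws://' + addr + ':' + port);
  socket.onmessage = function(evt) {        // the "another callback"
    // the real code checks the response for errors before handing it off
    parseData(evt.data);                    // page-specific plugin does the parsing/drawing
  };
}

function wc_send(jsonData) {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(jsonData);
  } else {
    socket.onopen = function() { socket.send(jsonData); };   // wait for the connection first
  }
}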

The event (called onmessage) that is tied to this callback fires when a message is received from the server. The code implementing the callback resides in the file websockets.js (in case you’re curious) and its job is to read the JSON response data packet, check for errors, parse the data and generate the output – the graph. The only question now is, “How does it know how to parse the data and generate the graph?” And the answer is (all together now): “There’s another callback!” See the third parameter of wc_connect, the one named parseData? That value is actually a reference to a function contained in the JavaScript code for this particular page, and is an example of how JavaScript implements a “plugin architecture”. So here is how the data parser for this page starts…

var parseData = function(rawData){
  var plotData = JSON.parse(rawData);
  // trim decimal places
  plotData.forEach(function(element, index, array){
    plotData[index] =  Number(element.toFixed(3));
  });

At this point in the process, the data portion of the response is still a string, so to make processing the data easier, we first parse it to convert it into a JSON object. In the case of this particular response, the resulting object is an array of numbers – really long numbers. You see, when LabVIEW encodes a number as a JSON string it includes far more digits of precision than are really needed, so forEach element in the array, I convert the value to a number with 3 decimal places. Here’s the rest of the code:

  // logic for drawing the graph
  $('#container').highcharts({
    title: { text: 'Recent Data', align: 'center' },
    subtitle: { text: 'System: '+remAddr+':'+remPort, align: 'center' },
    xAxis: { title: { text: 'Samples' }, tickInterval: 1 },
    yAxis: { title: { text: 'Amplitude' }, gridLineColor: "#D8D8D8" },
    tooltip: { headerFormat: '<small>Sample: {point.key}</small><br>' },
    series: [{ turboThreshold: 0, name: target, data: plotData, lineWidth: 1, marker:{enabled: false}, color: '#000000' }]
  });
}

This is the code that does the plotting, and as we shall see in a moment, this small amount of code produces a beautiful and highly functional chart that displays the values of individual points in a tooltip when you hover over them with the mouse, and even provides a pop-up menu that allows you to save the plot image in a variety of image formats. This functionality is possible thanks to a plotting library called Highcharts that uses the structure defined in the HTML as a placeholder for what it is going to draw. I have used this library before in demonstrations because in my experience it is stable, easy to use, and very well-documented. I also like the fact that regardless of what kind of plot I am trying to create they have a demo online that gets me about 95% of the way to my goal. Please note that this library is a commercial product, but they make it available for free for “non-Commercial” applications – and even for commercial usage, the one-time license fee is really pretty reasonable. Finally, even though it doesn’t appear that they actively police their licensing with things like crippled versions or the like, if you are using this on a professional project, pay the people. They have certainly earned their bread.

Testing the Pages

So at last we have our server in place and some test web pages (and supporting code) created. We need to consider how to run the web client. Here you have three options: First, you could just double-click the top-level file in Windows Explorer and Windows will dutifully open the file in your browser and everything will work as it should. Second, if you have access to an existing web server you can copy the dozen or so files to it and test it from there. Third, you could create a small temporary server strictly for testing. If you choose that path, a good option is a server called Express.js. As its name implies, it is written in JavaScript, which means it runs under the Node.js execution engine. You can set one up sufficient to test our current code in about 10 minutes – including the time required to download the code.
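
In fact, if you go the Express.js route, the test “server” can be as small as this (assuming the HTML, CSS and JavaScript files live in a folder named public; the folder name and port number are just examples):

// Minimal Express.js static file server for exercising the client pages.
const express = require('express');
const app = express();

app.use(express.static('public'));      // serve the test pages as-is
app.listen(8080, function() {
  console.log('Test server listening on http://localhost:8080');
});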

The overall test process is similar to what we did to test the custom TCP server last time. The only significant change is the interface. First, test things that should work to make sure they do. Second, test the things that shouldn’t work and make sure they don’t. Here are examples of what you can expect to see on the graphing and image-fetch screens:

Graph Screen

Image Screen

Testbed App – Release 20
Toolbox – Release 17
WebSockets Client – Release 1

Big Tease

So what’s next? We have looked at access via a custom TCP interface and the standard WebSockets interface. How about next time, we look at how to embed this connectivity in a C++ program using a DLL?

Until Next Time…
Mike…

Laying the good foundation, with TCP…

In case you are just joining the conversation, we are in the midst of a project to modify the testbed application that we have been slowly assembling over the past year. I would heartily recommend that you take some time and review the past posts.

To this point in our latest expansion project, we have created a remote control interface, embedded it in our testbed and performed some basic testing to verify that the interface works. Our next step is to create the first of several “middleware” processes. I call them middleware because they sort of sit between our application’s basic code and the external applications and users. In future installments we will look at middleware for .NET, ActiveX and WebSockets, but we will start with a more fundamental interface: TCP/IP.

The Roadbed for the Information Highway

Aside from giving me the opportunity to air out some tired metaphors, TCP/IP is a good place to start because it gives us the opportunity to examine the protocol that underlies a host of other connection options.

Just the basics

Although the idea of creating a “server” can have a certain mystique, there really isn’t much to it – at least when you are working with LabVIEW. The underlying assumption to the process is that there is something monitoring the computer’s network interface waiting for a client application to request a TCP/IP connection. In network parlance, this “something” is called a “listener” because it listens to the Ethernet interface for a connection. However, a given listener isn’t simply listening for any connection attempt; rather, the network standards define the ability to create multiple “ports” on a single interface, and then associate particular ports with particular applications. Thus, when you create the listener you have to tell it what port it is to monitor. In theory, a port number can be any U16 value, but existing standards specify what sorts of traffic are expected on certain port numbers. For example, by default HTTP connections are expected on ports 80 or 8080, port 21 is the default for FTP, and LabVIEW by default listens to port 3363. All you have to do is pick a number that isn’t being used for anything else on your computer. To create a listener in LabVIEW, there is a built-in function called TCP Create Listener. It expects a port number, and returns a reference to the listener that it creates – or an error if you pick a port to which some other application is already listening.

Once you have created the listener, you have to tell it to start listening by calling the built-in function TCP Wait On Listener. As its name implies, it waits until a connection is made on the associated port, though you will typically want to specify a timeout. When this function sees and establishes an incoming connection it outputs a new reference specific to that particular connection. A connection handler VI can then use that reference to manage the interactions with that particular remote device or process.

Finally, when you are done with your work, you kill the server by closing the listener reference (TCP Close Connection), and all connection references that you have open. Put these three phases together and you come up with something like this.

The Simplest TCP Server

This simple code creates a listener, waits for a connection, services that connection (it reads 4 bytes from it), and then quits. While this code works, it isn’t really very useful. For example, what good is a server that only waits for one connection and then quits? Thankfully, it’s not hard to expand this example. All you have to do is turn it into a mini state-machine.
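
If it helps to see that minimal listen-wait-service-quit sequence in text form before we start expanding it, here is a rough Node.js equivalent. It is only an illustration of the flow described above, not part of the project, and reading exactly 4 bytes from the first incoming chunk is a simplification.

// The simplest possible TCP "server": listen, accept one connection,
// read 4 bytes, then quit.
const net = require('net');

const server = net.createServer(function(socket) {
  socket.once('data', function(chunk) {
    console.log('First 4 bytes:', chunk.slice(0, 4));
    socket.end();          // close the connection...
    server.close();        // ...then the listener, and the process exits
  });
});

server.listen(3363);       // LabVIEW's "official" port, per the text above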

One Step at a Time

As usual, the state machine is built into the timeout event, so the loop includes a shift register pointing to the next state to be executed, and a second one carrying the delay that the code will impose before going to that state. But before we get into the specifics, here’s a state diagram showing the process’ basic flow.

State Diagram

Execution starts with the Initialize Listener state. Its main job at this point is to create the TCP listener. Next is the aptly named state, Wait for Connection. It patiently waits for a connection by looping back to itself with a short timeout. As long as there is no connection established, this state will execute over and over again. This series of short waits gives other events (like the one for shutting down the server) a chance to execute.

When a connection is made, the machine transitions to the Spawn Handler state. Since it is critical that the state machine gets back to waiting for a new connection as soon as possible, this state dynamically launches a reentrant connection handler VI and immediately transitions back to the Wait for Connection state.

The state machine continues ping-ponging between these last two states until the server is requested to stop. At that point, the code transitions to the Close Listener state which disposes of the listener and stops the state machine. So let’s look at some real code to implement these logical states – which, by the way, resides in a new process VI named TCP-IP Server.vi.

The Initialize Listener State

This state at present only executes once, and its job is to create the listener that initiates connections with remote clients. The TCP Create Listener node has two inputs, the first of which is the port that the listener will monitor. Although I could have hard-coded this number, I instead chose to derive this value from the application’s Server.Port property. In a standalone executable, the application reads this value from its INI file at start-up, thus making it reconfigurable after the application is deployed. If the server.tcp.port key does not exist in the INI file, the runtime engine defaults to LabVIEW’s official port number, 3363.
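
For reference, the corresponding INI file entry might look something like this; the section name is just a placeholder, since in a built executable the section normally carries the executable’s name.

[Testbed]
server.tcp.port=3363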

Initialize Listener

When running in the development environment, this value is still reconfigurable, but it is set through the My Computer target’s VI Server settings. To change this value, right-click on My Computer in the project explorer window and select Properties. In the resulting dialog box, select the VI Server Category. At this point, the port number field is visible in the Protocols section of the VI Server page, but it is disabled. To edit this value, check the TCP/IP box to enable the setting, make the desired change and then uncheck the TCP/IP box, and click the OK button. It is critical that you remember to uncheck TCP/IP before leaving this setting. If you don’t, the project will be linked to the specified port and the TCP server in the testbed application will throw an error 60 when it tries to start.

The other input to the TCP Create Listener node is a timeout. However, this isn’t the time that the node will wait to finish creating the listener. We will be testing this code on a single computer and so don’t have to worry about such things as the network going down – even momentarily. In the broader world, though, there are a plethora of opportunities for things to go wrong. For example, the network could go south while a client is in the middle of connecting to our server. This timeout addresses this sort of situation by specifying the amount of time that the listener will wait for the connection to complete, once a connection attempt starts.

The Wait for Connection State

This state waits for connection attempts, and when one comes, completes the connection. Unfortunately, LabVIEW doesn’t support events based on a connection attempt so this operation takes the form of a polling operation where the code checks for a connection attempt, and if there is none, waits a short period of time and then checks again. The short wait period is needed to give the process as a whole the chance to respond to other events that might occur.

Wait for Connection

The code implementing this logic starts with a call to the built-in TCP Wait On Listener node with a very short (5-msec) timeout. If there is no connection attempt pending when the call is made, or an attempt is not received during that 5-msec window, the node terminates with an error code 56. The following subVI (Clear Errors.vi) looks for, and traps, that error code so its occurrence can be used to decide what to do next. If the subVI finds an error 56, the following logic repeats the current state and sets the timeout to 1000-msec. If there is no error, the next state to be executed is Spawn Handler and the timeout is 0.

If there is a successful connection attempt, the TCP Wait On Listener node also outputs a new reference that is unique to that particular connection. This new reference is passed to a shift-register that makes it available to the next state.

The Spawn Handler State

In this state, the code calls a subVI (Launch Connection Handler.vi) that spawns a process to handle the remote connection established in the previous state. This connection handler takes the form of a reentrant VI that accepts two inputs: a reference to a TCP connection and a boolean input that enables debugging operations – which at the current time consist of opening the clone’s front panel when it launches, and closing it when it closes.

Spawn Handler

It is important that the connection handler be a reentrant process for two reasons: First, we want the code to be able to handle more than one connection at a time. Second, the listener needs to get back to listening for another new connection as quickly as possible. We’ll discuss exactly what goes into the connection handler in a bit.

The Close Listener State

Finally, when the process is stopping, this event closes open connections, sets the timeout to -1, and stops the event loop.

Close Listener

But why are there two connections to be closed? Doesn’t the connection handler that gets launched to manage the remote connection handle closing that reference? While that point is true, the logic behind it is flawed. There is a small, but finite, delay between when the remote connection is completed and when the Spawn Handler state starts executing. If the command to stop should occur during that small window of time, the handler will never be launched, and so can’t close that new connection and its associated reference.

Turning States into a Plugin

Now that we have an understanding of the process’ basic operation, we need to wrap a bit more logic around it to turn it into usable code.

Adding Shutdowns and Error Handling

To begin with, if this new process is going to live happily inside the structure we have already defined for testbed application plugins, it is going to need a mechanism to shut itself down when the rest of the application stops. Since that mechanism is already defined, all we have to do is register for the correct event (Stop Application) and add an event handler to give it something to do.

Loop Shutdown

Nothing too surprising here: When the shutdown event fires, the handler sets the next state to be executed to Close Listener and the timeout to 0. Note that it does not actually stop the loop – if it did the last state (which closes the listener reference) would never get the chance to execute. Finally, we also need to provide for error handling…

Add in Error Handling

…but as with the shutdown logic, this enhancement basically consists of adding in existing code. In this case, the application’s standard error reporting VI.

Defining the Protocol

With the new middleware plugin ensconced happily in the testbed framework, we need to create the reentrant connection handler that will handle the network interactions. However, before we can do that we need to define exactly what the communications protocol will look like. In later posts, I will present implementations of a couple of standardized protocols, but for now let’s explore the overall communications process by “rolling our own”.

As a quick aside, you may have noticed that I have been throwing around the word “protocol” a lot lately. Last time, I talked about creating a safe protocol for remote access. Then this time we discussed the TCP protocol, and now I am using the word again to describe the data we will be sending over our TCP connection. A key concept in networking is the idea of layers. We have discussed the TCP protocol for making connections, but that isn’t the whole story. TCP is built on top of a lower-level protocol called IP – which is itself built on even lower-level protocols for handling such things as physical interfaces. In addition, this protocol stack can also extend upwards. For example, VI Server is at least partially built on top of TCP, and we are now going to create our own protocol that will define how we want to communicate over TCP.

This layering may seem confusing, but it offers immense value because each layer is a modular entity that can be swapped out without disrupting everything else. For example, say you swap out the NIC (Network Interface Card) in your computer; the only parts of the stack that need to change are the very lowest levels that interface to the hardware.

The first thing we need to do is define the data that will be passed back and forth over the connection, and how that data will be represented while it is in the TCP communications channel. Taking the more basic decision first, let’s look at how we want to represent the data. Ideally, we want a data representation that is flexible in terms of capability, is rigorous in its data representations and easy to generate in even primitive languages like C++. The first standard that was created to fill this niche was an HTML-like markup standard called XML. The problem is that while it excels in the first two points, the third is a problem because when used to encode small data structures the same features that make it incredibly flexible and rigorous conspire to make it very verbose. Or to put it another way, for small data structures the data density in an XML document is very low.

Fortunately, there is an alternative that is perfect for what we need to do: JSON. The acronym stands for “JavaScript Object Notation”, and as the name implies it is the notation originally used to facilitate the passing of data within JavaScript applications. The neat part is that a lot of the JSON concepts map really well to native LabVIEW data structures. For example, in terms of datatypes, you can have strings, numbers and booleans, as well as arrays of those datatypes. When you define a JSON object, you define it as a collection of those basic datatypes – sort of like what we do with clusters in LabVIEW. But (as they say on the infomercials), “Wait there’s more…” JSON also allows you to include other JSON objects in the definition of a new object, just as LabVIEW lets us embed clusters within clusters. Finally, to put icing on the cake, nearly every programming language on the planet (including LabVIEW) incorporates support for this standard.

To see how this will work, let’s consider the case of the temperature controller parameters. To configure these values, the remote application will need to send the following string: (Note: As with JavaScript itself, the presence of “white space” in JSON representations is not significant. I’m showing this “pretty-printed” to make it easier to understand.)

{
    "Target":"Dog House TC", 
    "Data":{
        "Error High Level":100,
        "Warning High Level":90,
        "Warning Low Level":70,
        "Error Low Level":60,
        "Sample Interval":1
    }
}

This string defines a JSON object that contains two items. The first is labeled Target and it holds a string identifying the specific plugin that it wants to configure – in this case the Dog House TC. In the same way, the second item is labeled Data, but look at its value! If you think that looks like another JSON object definition, you’d be right. This sub-object has 5 values representing the individual parameters needed to configure a temperature controller. In case you’re wondering, this is what the code looks like that parses this string back into a LabVIEW data structure:

Unflattening JSON

That’s right, all it takes is one built-in function and one typedef cluster. The magic lies in the fact that the string and the cluster represent the exact same logical structure so it is very easy for LabVIEW’s built-in functions to map from one to the other.

The Unflattened JSON Data

The other thing to note is that the Sample Interval value in the cluster has a unit associated with it, in this case milliseconds. The way LabVIEW handles this situation is consistent with how it handles units in general: When converting data to a unitless form (like a JSON value) it expresses the value using the base unit for the type of data that it is. In the example shown, Sample Interval is time, and the base unit for time is seconds, so LabVIEW expresses the 1000 msecs as 1 sec in JSON. Likewise when unflattening the string back to a LabVIEW data structure, the function interprets the input value in the value’s base units as defined in the cluster.

We are about done with what our message will look like, but there are still a couple of things we need to add before we can start shooting our data down a wire. To begin with, we need to remember that Ethernet is a serial protocol, and as such it’s much easier to use if a receiver knows ahead of time how much data to expect. To meet that need, we will prepend a 2-byte binary value that is the total message length. The other thing we need is some way to tell whether the message arrived intact and without corruption, so we will also append a 2-byte CRC. Moreover, to make the CRC easy for other applications to generate we will use a standard 16-bit CCITT form of the calculation. So this is what one of our command data packets will look like:

Message Format

In the same way, we can use the same basic structure for response messages. All we have to do is redefine the JSON “payload” as a JSON object with two items: a numeric error code (where 0 = “No Error”), and a string that contains any data that the response needs to return. As you would expect, this string would itself be another JSON-encoded data structure.
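
To make the packet structure concrete, here is a sketch of how a client written in JavaScript might frame an outgoing message. Be aware of the assumptions baked into it: I am using the common CRC-16/CCITT-FALSE variant (polynomial 0x1021, initial value 0xFFFF), big-endian byte ordering, and a length field that counts the JSON payload plus the trailing CRC, so check those details against the actual LabVIEW code before relying on it.

// Illustrative packet builder for the "length + JSON + CRC" format described above.
function crc16ccitt(bytes) {
  let crc = 0xFFFF;                                   // assumed initial value
  for (const b of bytes) {
    crc ^= b << 8;
    for (let i = 0; i < 8; i++) {
      crc = (crc & 0x8000) ? ((crc << 1) ^ 0x1021) & 0xFFFF
                           : (crc << 1) & 0xFFFF;
    }
  }
  return crc;
}

function buildPacket(jsonString) {
  const payload = Array.from(jsonString, c => c.charCodeAt(0));   // assumes plain ASCII JSON
  const crc = crc16ccitt(payload);
  const length = payload.length + 2;                              // payload plus 2-byte CRC
  return Uint8Array.from([
    (length >> 8) & 0xFF, length & 0xFF,                          // 2-byte count (big-endian)
    ...payload,
    (crc >> 8) & 0xFF, crc & 0xFF                                 // 2-byte CRC (big-endian)
  ]);
}

On the receiving side, running the same CRC routine over the JSON plus the appended CRC returns 0 when nothing was corrupted, which is exactly the check the connection handler relies on below.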

Creating the Connection Handler

We are finally ready to implement the reentrant command handler that manages these messages, and the important part of that job is to ensure that it is fully reentrant. By that I mean that it does little good to make the VI itself reentrant if its major subVIs are not. So what is a “major” subVI? The two things to consider are:

  • How often does the SubVI execute? If the subVI rarely executes or only runs once during initialization, it might not be advantageous to make it reentrant.
  • How long does it take to execute? In the same way, subVIs that implement simple logic and so execute quickly, might not provide a lot of benefit as reentrant code.

As I am wont to do, I defined the handler’s overall structure as a state machine with three states corresponding to the three phases of the response interaction. So the first thing we need to do (and the first state to be executed) is Read Data Packet. Its job is to read an entire message from the new TCP connection, test it for validity and, if valid, pass the command on to the Process Command state.

Read Data Packet

The protocol we have defined calls for each message to start with a 2-byte count, so the state starts by reading two bytes from the interface, casting the resulting binary value to a U16 number and then using that number to read the remainder of the message. Then to validate the message, the code performs a CRC calculation on the entire message, including the CRC at the end. Due to the way the CRC calculation works, if the message and CRC are valid, the result of this calculation will always be 0. Assuming the CRC checks, the code strips the CRC from the end of the string and sends the remaining part of the string to a subVI that converts the JSON object into a LabVIEW object. I chose an object-oriented approach here because it actually simplifies the code (no case structures) and it provides a clear roadmap of what I need to do if I ever decide to add more interface commands in the future. If the CRC does not check, the next state to execute is either Send Response if no error occurred during the network reads, or Stop Handler if there was.

Moving on, the Process Command state calls a dynamic dispatch method (Process Command.vi) that is responsible for interfacing to the rest of the application through the events we defined last time, and for formatting a response to be sent to the caller. The object model for this part of the code has 5 subclasses (one for each command) and the parent class is used as the default for when the JSON command structure does not contain a valid command object. It should surprise no one that the command processing subclass methods look a lot like the test VIs we created last time to verify the operation of the remote access processor; consequently, I am not going to take the time or space to present them all again here. However I will highlight the part that makes them different:

Parsing Response for Error or Data

This snippet shows the logic that I use to process the response coming back from the remote access engine in response to the event that reads the graph data. Because the variant returned in the response notifier can be either a text error message, or an array of real data, the first thing the code does is attempt to convert the variant into a string. If this attempt fails and generates an error, we know that the response contains data and so can format it for return to the remote caller. If the variant converts successfully to a string, we know the command failed and can pass an error back to the caller.

At this point, we now have a response ready to send back to the caller, so the state machine transitions to the Send Response state. Here we see the logic that formats and transfers the response to the caller:

Send Response

Since the core of the message is a JSON representation of a response cluster, the code first flattens the cluster to a JSON string. Note, however, that the string it generates contains no extraneous white space, so it will look very different from the JSON example I showed earlier. The logic next calculates the length of the return message and the CRC of the JSON. Those two values are added to the beginning and end, respectively, of the JSON string and the concatenated result is written back to the TCP connection.

Finally, the Stop Handler state closes the TCP connection and stops the state machine loop, which also stops and removes from memory the reentrant clone that has been running.

Testing the Middleware

Finally, as always we need to again test what we have done, and to do that I have written a small LabVIEW test client program. However, if you know another programming language, feel free to write a short program to implement the transactions that we have defined. The program I created is included as a separate project. The top-level VI opens a window that allows you to select the action you want to perform, the plugin that it should target and (if required) enter the data associated with the action. Because this is a test program, it also incorporates a Boolean control that forces an invalid CRC, so you can test that functionality as well.

So open both projects and run the testbed application; nothing new here – or so it seems. Now run the simple TCP client; its IP address and port number are already correct for this test scenario. As soon as the client starts, the waveform graph for displaying the plugin graph data appears, so let’s start with that. You should be able to see the data from each of the 5 testbed plugins by selecting the desired target and clicking the Send Command button. You should also be able to see all 5 graph images.

Now try generating some errors. Turn on the Force CRC Error check-box and retry the tests that you just ran successfully. The client’s error cluster should now show a CRC Error. Next turn the Force CRC Error check-box back off and try doing something illegal, like using the Set Acquisition Rate action on one of the temperature controllers. Now you should see an Update Failed error.

Continue trying things out, verifying that the things which should work do, and that the things that shouldn’t work, don’t. If you did the testing associated with the last post, you will notice that there is a lag between sending the command and getting the results, but that is to be expected since you are now running over a network interface. Finally, assuming the network is configured correctly, and the desired ports are open, the client application should be able to work from a computer across the room, across the hall or across the world.

Testbed Application – Release 19
Toolbox – Release 16
Simple TCP Client – Release 1

Big Tease

So what is in store for next time? Well, let’s extend things a bit further and look at a way to access this same basic interface, but this time from a web browser! Should be fun.

Until Next Time…
Mike…

It’s a big interconnected world – and LabVIEW apps can play too

If I were asked to identify one characteristic that sets modern test systems apart from their predecessors, my answer would be clear: distributed processing. In the past, test applications were often monolithic programs – but how things have changed! We have talked several times before about how the concept of distributed processing has radically altered our approach to system architecture. In fact, the internal design of our Testbed Application is a very concrete expression of that architectural shift. However, the move away from monolithic programs has had a profound impact in another area as well.

In the days of yore, code you wrote rarely had to interact with outside software. The basic rule was that if you wanted your program to do something you had to implement the functionality yourself. You had to assume this burden because there was no alternative. There were no reusable libraries of software components that you could leverage, and almost no way to pass data from one program to another. About all you could hope for were some OS functions that would allow you to do things like read and write disk files, or draw on the screen. In this blog we have looked at ways that our LabVIEW applications can use outside resources through standardized interfaces like .NET or ActiveX. But what if someone else wants to write a program that uses our LabVIEW code as a drop in component? How does that work?

As it turns out, the developers saw this possibility coming and have provided mechanisms that allow a LabVIEW application to present the same sort of standardized interface that we see other applications present. Over the next few posts, we are going to look at how to incorporate basic remote interfaces ranging from traditional TCP/IP-based networking to building modern .NET assemblies.

What Can We Access?

However, before we can dig into all of that, we need to think about what these interfaces are going to access. In our case, because we have an existing application that we will be retrofitting to incorporate this functionality, we will be looking at the testbed to identify some basic “touchpoints” that a remote application might want to access. By contrast, if you are creating a new application, the process of identifying and defining these touchpoints should be an integral part of your design methodology from day one.

The following sections present the areas where we will be implementing remote access in our testbed application. Each section will describe the remote interface we desire to add and discuss some of the possible implementations for the interface’s “server” side.

Export Data from Plugins

The obvious place to start is by looking at ways of exporting the data. This is, after all, why most remote applications are going to want to access our application: They want the data that we are gathering. So the first thing to consider is, where does the data reside right now? If you go back and look at the original code, you will see that, in all cases, the primary data display for a plugin is a chart that is plotting one new point at a time. Here is what the logic looked like in the Acquire Sine Data.vi plugin.

Simple Acquisition and Charting

As you can see, the only place the simulated data goes after it is “acquired” is the chart. Likewise, if you looked at the code for saving the data, you would see that it was getting the data by reading the chart’s History property.

Save Chart Data to File

Now, we could expand on that technique to implement the new export functionality, but there is one big consequence to that decision. Approaching the problem in this way would have the side-effect of tying the number of data points that are saved to the chart’s configuration. Hence, because the amount of data that a chart buffers can’t be changed at runtime, you would have to modify the LabVIEW code to change the amount of data that is returned to the calling application.

A better solution is to leverage what we have learned recently about DVRs and in-place structures to create a storage location the size of which we can control without modifying the application code. A side-effect of this approach is that we will be able to leverage it to improve the efficiency of the local storage of plugin data – yes, sometimes side-effects are good.

To implement this logic we will need three storage buffers: One for each of the two “acquisition” plugins and one for the reentrant “temperature controller” plugin. The interface to each storage buffer will consist of three VIs, the first one of which is responsible for initializing the buffer:

Initialize Buffer

This screenshot represents the version of the initialization routine that serves the Ramp Signal acquisition process. The basic structure of this code is to create a circular buffer that will save the last N samples – where “N” is a reconfigurable number stored in the database. To support this functionality, the DVR incorporates two values: The array of datapoints and a counter that operates as a pointer to track where the current insertion point is in the buffer. These VIs will be installed in the initialization state of the associated plugin screen’s state machine. With the buffer initialized, we next need to be able to insert data. This is typical code for performing that operation:

Insert Data Point

Because the DVR data array is initialized with the proper number of elements at startup, all this logic has to do is replace an existing value in the array with a newly acquired datapoint value, using the counter of course to tell it which element to replace. Although we have a value in the DVR called Counter we can’t use it without a little tweaking. Specifically, the DVR’s counter value increments without limit each time a value is inserted, however, there is only a finite number of elements in the data array. What we need for our circular buffer is a counter that starts at 0, counts to N-1 and then returns to 0 and starts over. The code in the image shows the easiest way to generate this counter. Simply take the limitless count and modulo divide it by the number of points in the buffer. The output of the modulo division operation is a quotient and a remainder. The remainder is the counter we need.

Modulo division is also important to the routine for reading the data from the buffer, but in this case we need both values. The quotient output is used to identify when the buffer is still in the process of being filled with the initial N datapoints:

Read All Data.1

During this initial period, when the quotient is 0, the code uses the remainder to trim off the portion of the buffer contents that is yet to be filled with live data. However, once the buffer is filled, the counter ceases being a marker identifying the end of the data, and becomes a demarcation point between the new data and the old data. Therefore, once the quotient increments past 0, a little different processing is required.

Read All Data.2

Once the circular buffer is full, the element that the remainder is pointing at is the oldest data in the array (chronologically speaking), while the datapoint immediately before it is the newest. Hence, while the remainder is still used to split the data array, the point now is to swap the two subarrays to put the data in correct chronological order.
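
Since block diagrams are hard to reproduce in text, here is the same circular-buffer logic sketched in JavaScript. The three functions correspond to the initialize, insert and read-all operations described above; the plain object stands in for the DVR, and the details are mine rather than a literal translation of the G code.

// Text-language sketch of the circular buffer: a fixed-size array plus a
// free-running counter, with modulo division doing the heavy lifting.
function createBuffer(size) {
  return { data: new Array(size).fill(0), counter: 0 };
}

function insertPoint(buf, value) {
  const remainder = buf.counter % buf.data.length;   // where to write this point
  buf.data[remainder] = value;
  buf.counter++;                                     // limitless count
}

function readAllData(buf) {
  const quotient  = Math.floor(buf.counter / buf.data.length);
  const remainder = buf.counter % buf.data.length;
  if (quotient === 0) {
    // still filling: only the first "remainder" elements hold live data
    return buf.data.slice(0, remainder);
  }
  // full: the element at "remainder" is the oldest, so swap the two subarrays
  return buf.data.slice(remainder).concat(buf.data.slice(0, remainder));
}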

Retrieve Graph Images from Plugins

The next opportunity for remote access is to fetch not the data itself, but a graph of the data as it is shown on the application’s GUI. This functionality could form the basis for a remote user interface, or perhaps an input to a minimalistic web presentation. Simplifying this operation is a control method that allows you to generate an image of the graph and the various objects surrounding it, like the plot legend or cursor display. Consequently, the VI that will be servicing the remote connections only needs to be able to access the chart control reference for each plugin. To make those references available, the code now incorporates a buffer that is structurally very similar to the one that we use to store the VI references that allow the GUI to insert the plugins into its subpanel. Due to its similarity to existing code, I won’t cover it in detail, but here are a few key points:

  • Encapsulated in a library to establish a namespace and provide access control
  • The FGV that actually stores the references is scoped as Private
  • Access to the functionality is mediated through publicly-scoped VIs

This FGV is the only new code we will need to add to the existing code.

Adding Remote Control

One thing that both of the remote hooks we just discussed have in common is that they are both pretty passive – or to put it another way, they both monitor what the application is doing without changing what it is doing. Now we want to look at some remote hooks that will allow remote applications to control the application’s operation, at least in a limited way.

Since the way the application works is largely dependent upon the contents of the database, it should surprise no one that these control functions will need to provide provisions for the remote application to alter the database contents in a safe and controlled way.

Some Things to Consider

The really important words in that last sentence are “safe” and “controlled”. You see, the thing is that as long as you are simply letting people read the data you are generating, your potential risk is often limited to the value of the data that you are exposing. However, when you give remote users or other applications the ability to control your system, the potential exists that you could lose everything. Please understand that I take no joy in this conversation – I still remember working professionally on a computer that didn’t even have a password. However, in a world where “cyber-crime”, “cyber-terrorism” and “cyber-warfare” have become household terms, this conversation is unavoidable.

To begin with, as a disclaimer you should note that I will not be presenting anything close to a complete security solution, just the part of it that involves test applications directly. The advice I will be providing assumes that you, or someone within your organization, has already done the basic work of securing your network and the computers on that network.

So when it comes to securing applications that you write, the basic principle in play here is that you never give direct access to anything. You always qualify, error-check and validate all inputs coming from remote users or applications. True, you should be doing this input validation anyway, but the fact of the matter is that most developers don’t put a lot of time into validating inputs coming from local users. So here are a couple of recommendations:

Parametrize by Selecting Values – This idea is an expansion on a basic concept I use when creating any sort of interface. I have often said that anything you can do to take the keyboard out of your users’ hands is a good thing. By replacing data that has to be typed with menus from which users can select values, you make software more robust and reduce errors. When working with remote interfaces, you do have to support typed strings because, unless the remote application was written in LabVIEW, typing is the only option. But what you can do is limit the inputs to a list of specific values. On the LabVIEW side, the code can convert those string values into either a valid enumeration, or a predefined error that cancels the operation and leaves your system unaltered. When dealing with numbers, be sure to validate them also by applying common-sense limits to the inputs.
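
As a rough illustration of that recommendation (the plugin names and numeric limits below are placeholders, not the testbed's actual values), converting a remote string into a trusted enumeration and range-checking a number might look like this in Python:

from enum import Enum

class PluginName(Enum):
    SINE_SOURCE = "Sine Source"
    RAMP_SOURCE = "Ramp Source"

def to_plugin_enum(text):
    # Accept only an exact match against the published list; anything else becomes
    # a predefined error that cancels the operation and leaves the system unaltered.
    for member in PluginName:
        if text == member.value:
            return member
    raise ValueError("Update Failed")

def check_sample_period(period_ms, low=500, high=2500):
    # Apply common-sense limits to numeric inputs as well.
    if not (low < period_ms < high):
        raise ValueError("Update Failed")
    return period_ms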

Create Well-Defined APIs – You want to define a set of interfaces that specify exactly what each operation does, and with as few side-effects as possible. In fancy computer-science terms, this means that operations should be atomic functions that either succeed or fail as a unit. No half-way states allowed! Needless to say, a huge part of being “well-defined” is that the APIs are well-documented. When someone calls a remote function, they should know exactly what is expected of them and exactly what they will get in response.

Keep it Simple – Let’s be honest, the “Swiss Army Knife” approach to interface design can be enticing. You only have to design one interface where everything is parametrized and you’re done, or at least you seem to be for a while. The problem is that as requirements change and expand you have to be constantly adding to that one routine and sooner or later (typically sooner) you will run into something that doesn’t quite fit well into the structure that you created. When that happens, people often try to take the “easy” path and modify their one interface to allow it to handle this new special case – after all, “…it’s just one special case…”. However once you start down that road, special cases tend to multiply like rabbits and the next thing you know, your interface is a complicated, insecure mess. The approach that is truly simple is to create an interface that implements separate calls or functions for each logical piece of information.

With those guidelines in mind, let’s look at the three parameters that we are going to be allowing remote users or applications to access. I picked these parameters because each shows a slightly different use case.

Set the Acquisition Sample Interval

One of the basic ways that you can store a set of parameters is using a DVR, and I demonstrated this technique by using it to store the sample rates that the “acquisition” loops use to pace their operation. In the original code, the parameter was already designed to be changed during operation. You will recall that the basic idea for the parameter’s storage was that of a drop box. It wasn’t important that the logic using the data know exactly when the parameter was changed, as long as it got the correct value the next time it tried to use the data. Consequently, we already have a VI that writes to the DVR (called Sample Rate.lvlib:Write.vi) and, as it turns out, it is all we will need moving forward.

Set Number of Samples to Save

This parameter is interesting because it’s a parameter that didn’t even exist until we started adding the logic for exporting the plugin data. This fact makes it a good illustration of the principle that one change can easily lead to requirements that spawn yet other changes. In this case, creating resizable data buffers leads to the need to be able to change the size of those buffers.

To this point, the libraries that we have defined to encapsulate these buffers each incorporate three VIs: one to initialize the buffer, one to insert a new datapoint into it, and one to read all the data stored in the buffer. A logical extension of this pattern would be the addition of a fourth VI, this time one to resize the existing buffer. Called Reset Buffer Size.vi, these routines are responsible for both resizing the buffer and correctly positioning the existing data in the new buffer space. So the first thing the code does is borrow the logic from the buffer reading code to put the dataset in the proper order with the oldest samples at the top and the newest samples at the bottom.

Put the Data in Order

Next the code compares the new and old buffer sizes in order to determine whether the buffer is growing, shrinking or staying the same size. Note that the mechanism for performing this “comparison” is to subtract the two values. While a math function might seem to be a curious comparison operator, this technique makes it easy to identify the three conditions that we need to detect. For example, if the values are the same the difference will be 0, and the code can use that value to bypass further operations. Likewise, if the two numbers are not equal, the sign of the result will indicate which input is larger, and the absolute magnitude of the result tells us how much difference there is between the two.

This is the code that is selected when the result of the subtraction is a positive number representing the number of elements that are to be added to the buffer.

Add points to Buffer

The code uses the difference value to create an array of appropriate size and then appends it to the bottom of the existing array. In addition, the logic has to set the Counter value to point to the first element of the newly appended values so the next insert will go in the correct place. By contrast, if the buffer is shrinking in size, we need to operate on the top of the array.

Remove points from buffer

Because the buffer is getting smaller, the difference is a negative number representing the number of elements to be removed from the buffer data. Hence, the first thing we need to do is extract the number’s absolute value and use it to split the array, effectively removing the elements above the split point. As before, we also need to set the Counter value, but the logic is a little more involved.

You will remember that the most recent data is on the bottom of the array, so where does the next datapoint need to go? That’s right, the buffer has to wrap around and insert the next datapoint at element 0 again, but here is where the extra complexity comes in. If we simply set Counter to 0 the data insert logic won’t work correctly. Reviewing the insert logic you will notice that the first pass through the buffer (modulo quotient = 0) is handled differently. What we need is to reinitialize Counter with a number that, when subjected to the modulo division, will result in a remainder of 0 and a quotient that is not 0. An easily derived value that meets those criteria is the size of the array itself.
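
To make the three cases easier to follow side by side, here is a Python sketch of the whole resize operation. It assumes the buffer is already full, and the names are mine rather than the VI's:

def reset_buffer_size(data, counter, new_size):
    # Step 1: put the existing data in chronological order (oldest first),
    # reusing the quotient/remainder logic from the read routine.
    quotient, remainder = divmod(counter, len(data))
    ordered = data[:remainder] if quotient == 0 else data[remainder:] + data[:remainder]

    # Step 2: the sign of the difference says grow/shrink, the magnitude says how much.
    difference = new_size - len(data)
    if difference > 0:
        # Growing: append empty elements; the counter points at the first new element.
        return ordered + [0.0] * difference, len(ordered)
    if difference < 0:
        # Shrinking: drop the oldest elements from the top. The counter is set to the
        # new array size so the next insert wraps to element 0 with a non-zero quotient.
        return ordered[-new_size:], new_size
    # Same size: nothing needs to change.
    return data, counter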

Finally we have to decide what to do when the buffer size isn’t changing, and here is that code. Based on our discussions just now, you should be able to understand it.

buffer size not changing

Set Temperature Controller Operating Limits

Finally, there are two reasons I wanted to look at this operation. First, it is an example of where you can have several related parameters that logically form a single value. In this case, we have 5 separate numbers that, together, define the operation of one of the “temperature controller” processes. You need to be on the lookout for this sort of situation because, while treating this information as 5 distinct values would not be wrong as such, that treatment would result in you needing to generate a lot of redundant code.

However, this parameter illustrates a larger issue, namely that changes in requirements can make design decisions you made earlier – let’s say – problematic. As it was originally designed, the temperature controller values were loaded when the plugins initialized, and they were never intended to be changed while the plugin was running. However, our new requirement to provide remote control functionality means that this parameter now needs to be dynamic. When confronted with such a situation, you need to look for a solution that will require the least rework of existing code and the fewest side-effects. So you could:

  1. Redesign the plugin so it can restart itself: This might sound inviting at first because the reloading of the operating limits would occur “automatically” when the plugin restarted. Unfortunately, it also means that you would have to add a whole new piece of functionality: the ability for the application to stop and then restart a plugin. Moreover, you would be creating a situation where, from the standpoint of a local operator, some part of the system would be restarting itself at odd intervals for no apparent reason. Not a good idea.
  2. Redesign the plugin to update the limits on the fly: This idea is a bit better, but because the limits are currently being carried through the state machine in a cluster that resides in a shift-register, to implement this idea we will need to interrupt the state machine to make the change. Imposing such an interruption risks disrupting the state machine’s timing.

The best solution (as in all such cases) is to address the fundamental cause: the setups only load when the plugin starts and so are carried in the typedef cluster. The first step is to remove the 5 numbers associated with the temperature controller operating parameters from the cluster. Because the cluster is a typedef, this change conveniently doesn’t modify the plugin itself, though it does alter a couple of subVIs – which even more conveniently show up as being broken. All that is needed to repair these VIs is to go through them one by one and replace the now-missing cluster data values with the corresponding values that the buffered configuration VI reads from the database. Said configuration VI (Load Machine Configuration.vi) also requires one very small tweak:

Reload Enable

Previously, the only time the logic would force a reload of the data was when the VI had not been called before. This modification adds an input to allow the calling code to request a reload by setting the new Reload? input to true. To prevent this change from impacting the places where the VI is already being called, the default value for this input is false, the input is tied to a heretofore unused terminal on the connector pane, and the terminal is marked as an Optional input.

Building Out the Infrastructure

At this point in the process, all the modifications that need to be made to the plugins themselves have been accomplished, so now what we need is a place for the external interface functionality itself to live. One of the basic concepts of good software design is to look at functionality from the standpoint of what you don’t know or what is likely to change in the future, and then put those things into abstracted modules by themselves. In this way, you can isolate the application as a whole, and thus protect it from changes that occur in the modularized functionality.

The way this concept applies to the current question should be obvious: there is no way that we can, in the here and now, develop a complete list of the remote access functionality that users will require in the future. The whole question is, at its essence, open-ended. Regardless of how much time you spend studying the issue, users have an inherently different view of your application than you do and so they will come up with needs that you can’t imagine. Hence, while today we might be able to shoe-horn the various data access and control functions into different places in the current structure, to do so would be to start down a dead-end road because it is unlikely that those modifications would meet the requirements of tomorrow. What we need here is a separate process that will allow us to expand or alter the suite of data access and control functionality we will offer.

Introducing the Remote Access Engine

The name of our new process is Remote Access.vi and (like most of the code in the project) it is designed around an event-driven structure that ensures it is quiescent unless it is being actively accessed. The process’ basic theory of operation is that when one of its events fires, it performs the requested operation and then sends a reply in the form of a notification. The name of the notification used for the reply is sent as part of the event data. This basic process is not very different from the concept of “callbacks” used in languages such as JavaScript.
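
The pattern may be easier to see in a text language, so here is a rough Python analogue with queues standing in for the user events and the reply notifiers (everything here is illustrative, not a translation of the actual VI):

import queue, threading

def remote_access_engine(requests, handlers):
    while True:
        event = requests.get()                # quiescent until an "event" fires
        if event is None:
            break                             # shutdown sentinel
        handler = handlers.get(event["name"])
        result = handler(event) if handler else "Update Failed"
        reply_to = event.get("reply_to")
        if reply_to is not None:              # a null notifier means a local, fire-and-forget call
            reply_to.put(result)              # send the reply on the caller's notifier

# Caller side: fire the event, then wait on the private reply queue it supplied.
requests = queue.Queue()
handlers = {"Read Graph Data": lambda event: [1.0, 2.0, 3.0]}   # stand-in operation
threading.Thread(target=remote_access_engine, args=(requests, handlers), daemon=True).start()

reply = queue.Queue()
requests.put({"name": "Read Graph Data", "plugin": "Sine Source", "reply_to": reply})
print(reply.get(timeout=1))
requests.put(None)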

Although this process is primarily intended to run unseen in the background, I have added three indicators to its front panel as aids in troubleshooting. These indicators show the name of the last event that it received, the name of the plugin that the event was targeting, and the name of the response notifier.

The Read Graph Data Event

The description of this event handler will be longer than the others because it pretty much sets the pattern that we will see repeated for each of the other events. It starts by calling a subVI (Validate Plugin Name.vi) that tests to see if the Graph Name coming from the event data is a valid plugin name, and if so, returns the appropriate enumeration.

Validate plugin name

The heart of this routine is the built-in Scan from String function. However, due to the way the scan operation works, there are edge conditions where it might not always perform as expected when used by itself. Let’s say I have a typedef enumeration named Things I Spend Too Much Time Obsessing Over.ctl with the values My House, My Car, My Cell Phone, and My House Boat, in that order. Now as I attempt to scan these values from strings, I notice a couple of “issues”. First there is the problem of false positives. As you would expect, it correctly converts the string “My House Boat” into the enumerated value My House Boat. However, it would also convert the string “My House Boat on the Grand Canal” to the same enumeration and pass the last part of the string (” on the Grand Canal”) out its remaining string output. Please note that this behavior is not a bug. In fact, in many situations it can be a very useful behavior – it’s just not the behavior that we want right now because we are only interested in exact matches. To trap this situation, the code marks the name as invalid if the remaining string output is not empty.

The other issue you can have is what I call the default output problem. The scan function is designed such that if the input string is not scanned successfully, it outputs the value present at the default value input. Again, this operation can be a good thing, but it is not the behavior that we want. To deal with this situation, the code tests the error cluster output (a failed scan generates error code 85) and marks the name as invalid if there is an error.
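
A Python approximation of those two extra checks might read as follows; the list of names is assumed, and a leading-substring match stands in for the Scan from String behavior:

PLUGIN_NAMES = ["Sine Source", "Ramp Source"]   # assumed list of valid plugin names

def validate_plugin_name(text):
    for index, name in enumerate(PLUGIN_NAMES):
        if text.startswith(name):            # the scan itself: matches on a leading substring
            if text[len(name):] != "":       # false positive: non-empty "remaining string"
                return None, False
            return index, True               # exact match: the name is valid
    return None, False                       # failed scan: no silent fall-back to a default value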

When Validate Plugin Name.vi finishes executing, we have a converted plugin name and a flag that tells us whether or not we can trust it. So the first thing we do is test that flag to see whether to continue processing the event or return an error message to the caller. Here is the code that executes when the result of the name validation process is Name Not Valid.

Name Not Valid

If the Response Notifier value from the event data is not null, the code uses it to send the error message, “Update Failed”. Note that this same message is sent whenever any problem arises in the remote interface. While this convention certainly results in a non-specific error message, it also ensures that the error message doesn’t give away any hints to “bad guys” trying to break in. If the Response Notifier value is null (as it will be for local requests) the code does nothing – remember, we also want to leverage this logic locally.

If the result of the name validation process is Name Valid, the code now considers the Plugin Name enumeration and predicates its further actions based on what it finds there. This example for Sine Source shows the basic logic.

Name Valid - Remote

The code reads the data buffer associated with the signal and passes the result into a case structure that does one of two different things depending on whether the event was fired locally, or resulted from a remote request. For a remote request (Response Notifier is not null), the code turns the data into a variant and uses it as the data for the response notifier. However, if the request is local…

Name Valid - Local

…it sends the same data to the VI that saves a local copy of the data.

The Read Graph Image Event

As I promised above, this event shares much of its basic structure with the one we just considered. In fact, the processing for a Name Not Valid validation result is identical. The Name Valid case, however, is a bit simpler:

Read Graph Image

The reason for this simplification is that regardless of the plugin being accessed, the datatypes involved in the operation are always the same. The code always starts with a graph control reference (which I get from the lookup buffer) and always generates an Image Data cluster. If the event was fired locally, the image data is passed to a VI (Write PNG File.vi) that prompts the user for a file name and then saves it locally. However, if instead of saving a file you want to pass the image in a form that is usable in a non-LabVIEW environment, a bit more work is required. To encapsulate that added logic, I created the subVI Send Image Data.vi.

Send Image Data

The idea is to convert the proprietary image data coming from the invoke node into a generic form by rendering it as a standard format image. Once in that form, it is a simple matter to send it as a binary stream. To implement this approach, the code first saves the image to a temporary png file. It then reads back the binary contents of the file and uses it as the data for the response notifier. Finally, it deletes the (now redundant) temporary file.
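
In a text language the same render-to-file, read-back, clean-up sequence could be sketched like this; the save_png callable is a stand-in for whatever actually writes the image to disk:

import os, tempfile

def image_to_png_bytes(save_png):
    fd, path = tempfile.mkstemp(suffix=".png")
    os.close(fd)
    try:
        save_png(path)                  # render the proprietary image data as a standard PNG file
        with open(path, "rb") as f:
            return f.read()             # a generic binary stream, ready for the response notifier
    finally:
        os.remove(path)                 # delete the now-redundant temporary file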

The Set Acquisition Rate Event

This event is the first one to control the operation of the application. It also has no need to be leveraged locally, so there is no dual operation depending on the contents of the Response Notifier value.

Set Acquisition Rate

Moreover, because the event action is a command and not a request, the response can only have one of two values: “Update Failed” or “Update Good”. The success message is only sent if the plugin name is either Sine Source or Ramp Source, and no errors occur during the update. While on the topic of errors, there are two operations that need to be performed for a complete update: the code must modify both the database and the buffer holding the live copy of the setting that the rest of the application uses. In setting the order of these two operations, I considered which of the two is most likely to generate an error and put it first. When you consider that most of the places storing live data are incapable of generating an error, the database update clearly should go first.

So after verifying the plugin name, the subVI responsible for updating the database (Set Default Sample Period.vi) looks to see if the value is changing. If the “new” value and the “old” value are equal, the code returns a Boolean false to its Changed? output and sets the Result output to Update Good. It might seem strange to tell the remote application that the update succeeded when no update was actually performed, but think about it from the standpoint of the remote user. If I want a sample period of 1000ms, an output of Update Good tells me I got what I wanted – I don’t care that it didn’t have to change to get there. If the value is changing…

Set Default Sample Period

…the code validates the input by sending it to a subVI that compares it to some set limits (500 < period < 2500). Right now these limits are hardcoded, and in some cases that might be perfectly fine. You will encounter many situations where the limits are fixed by the physics of a process or a required input to some piece of equipment. Still, you might want these limits to be programmable too, but I’ll leave that modification as “…an exercise for the reader.” In any case, if the data is valid, the code uses it to update the database and sets the subVI’s two outputs to reflect whether the database update generated an error. If the data is not valid, it returns the standard error message stating so.
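
Pulling those rules together, here is a hedged Python sketch of the update sequence (the function name, the read/write hooks and the hardcoded limits are placeholders for the corresponding LabVIEW pieces):

def set_default_sample_period(new_period, read_db, write_db, low=500, high=2500):
    old_period = read_db()
    if new_period == old_period:
        return False, "Update Good"     # nothing to do, but the caller still got what it asked for
    if not (low < new_period < high):
        return False, "Update Failed"   # out-of-range value: the database is left untouched
    try:
        write_db(new_period)            # database first, since it is the step most likely to fail
    except Exception:
        return False, "Update Failed"
    return True, "Update Good"          # Changed? = True; the caller then updates the live copy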

The Set Data Buffer Depth Event

The basic dataflow for this event is very much like the previous one.

Set Data Buffer Depth

The primary logical difference between the two is that all plugins support this parameter. The logic simply has to select the correct buffer to resize.

The Set TC Parameters Event

With our third and (for now at least) final control event, we return to one that is only valid for some of the plugins – this time the temperature controllers.

Set TC Parameters

The interesting part of this event processing is that, because its data was not originally intended to be reloaded at runtime, the live copy of the data is read and buffered in the object-oriented configuration VIs.

Save Machine Configuration

Consequently, the routine to update the database (Save Machine Configuration.vi) first creates a Config Data object and then uses that object to read the current configuration data. If the data has changed, and is valid, the same object is passed on to the method that writes the data to the database. Note also that the validation criteria are more complex.

Validate TC Parameters

In addition to simple limits on the sample interval, the Error High Level cannot exceed 100, the Error Low Level cannot go below 30, and all the levels have to be correct relative to each other.
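
In other words, the validation amounts to something like this sketch; the two hard limits come from the description above, while the parameter names, the sample-interval range and the exact ordering rule are my assumptions:

def validate_tc_parameters(sample_interval, error_high, alarm_high, alarm_low, error_low):
    checks = [
        500 < sample_interval < 2500,                      # simple limits on the sample interval
        error_high <= 100,                                 # Error High Level cannot exceed 100
        error_low >= 30,                                   # Error Low Level cannot go below 30
        error_low < alarm_low < alarm_high < error_high,   # levels correct relative to each other
    ]
    return all(checks)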

Testing

With the last of the basic interface code written and in place, we need to look at how to test it. To aid in that effort, I created five test VIs – one for each event. The point of these VIs is simply to exercise the associated event so we can observe and validate the code’s response. For instance, here’s the one for reading the graph data:

Test Read Graph Data

It incorporates two separate execution paths because it has two things that it has to be doing in parallel: sending the event (the top path) and waiting for a response (the bottom path). Driving both paths is the output from a support VI from the notifier library (not_Generic Named Notifier.lvlib:Generate Notifier Name.vi). Its job is to generate a unique notifier name based on the current time and a 4-digit random number. Once the upper path has fired the event, its work is done. The bottom path displays the raw notifier response and graphs of the data that is transferred. Next, the test VI for reading the graph image sports similar logic, but the processing of the response data is more complex.

Test Read Graph Image

Here, the response notifier name is also used to form the name for a temporary png file that the code uses to store the image data from the event response. As soon as the file is written, the code reads it back in as a png image and passes it to a subVI that writes it to a 2D picture control on the VI’s front panel. Finally, the three test VIs for the control operations are so similar, I’ll only show one of them.

Test Resizing Data Buffers

This exemplar is for resizing the data buffers. The only response processing is that it displays the raw variant response data.

To use these VIs, launch the testbed application and run the test VIs manually one at a time. For the VIs that set operating parameters, observe that entering invalid data generates the standard error message. Likewise, when you enter a valid setting, look for the correct response in both the program’s behavior and the data stored in the database. For the VIs testing the read functions, run them and observe that the data they display matches what the selected plugin shows on the application’s GUI.

Testbed Application – Release 18
Toolbox – Release 15

The Big Tease

In this post, we have successfully implemented a remote access/control capability. However, we don’t as of yet have any way of accessing that capability from outside LabVIEW. Next time, we start correcting that matter by creating a TCP/IP interface into the logic we just created. After that introduction, there will be posts covering .NET, ActiveX and maybe even WebSockets – we’ll see how it goes.

Until Next Time…
Mike…

A Tree Grows in Brooklyn LabVIEW

This time out, I want to start exploring a user interface device that in my opinion is dramatically under-utilized. I am talking about the so-called tree control. This structure solves a number of interface challenges that might otherwise be intractable. For example, the preferred approach to displaying large amounts of data is to avoid generating large tabular blocks of data, opting instead to display these datasets on graphs. However, there can be situations where those large tabular blocks of data are exactly what the customer wants. What a tree control can do is display this data using a hierarchical structure that makes it easier for the user to find and read the specific data they need. A good example of this sort of usage is Windows Explorer. Can you imagine how long it would take you to find anything if all Windows provided was an alphabetical list of all the files on your multi-gigabyte hard drive?

Alternately, a tree control can provide a way of hierarchically organizing interface options. For instance, to select the screen to display in the testbed application we have been building, the program currently uses a simple pop-up menu containing a list of the available screens. This technique works well if you have a limited number of screens, but does not scale well.

We will structure our evaluation of tree controls around two applications that demonstrate its usage both as a presentation device for large datasets and as a control interface. Starting that discussion, this week we will look at how to display a large amount of data (all the files on your PC). Then in the following post we will explore its usefulness for controlling the application itself by modifying the testbed application to incorporate it.

Our Current Goal

To demonstrate this control’s ability to organize and display a large amount of tabular data, we are going to consider an example that displays a hierarchical listing of the files on your computer starting with a directory that you specify. The resulting display will represent folders as expandable headings, and for files show their size and modification date.

I picked this application as an example because it provides the opportunity to discuss an interesting concept that I have been wanting to cover for some time (i.e. recursion). Moreover, on a practical level, this application makes it easy to generate a very large set of interesting data – though it isn’t very fast. But more on that in a bit. For now, let’s start by considering what it takes to make this tree grow. Then we can look at the application’s major components.

Becoming a LabVIEW Arborist

Although tree controls and menus occupy different functional niches, their APIs bear certain similarities. For example, they both draw a distinction between what the user sees and the “tags” that are used internally to identify specific items. Likewise, when creating a child item, both APIs use the parent’s tag to establish hierarchical relationships.

A big difference between the two is that a tree control can have multiple columns like a table. In fact, one way of understanding a tree control is as a table that lets you collapse multiple rows into a header row. So in designing for this thing, the first thing we need to do is decide what values are going to represent the “header rows”, and what values the “data rows”. For this, our first excursion into utilizing tree controls, the “header” rows will define the folders – so that is where we go first.

Showing the Folders

The code that we will use to add a new folder to the tree resides in a VI called Process New Directory Entry.slow.vi (the reasons for the “slow” appellation will be explained shortly).

Process New Directory Entry.slow

Because this logic resides in a subVI, the reference to the tree control comes from a control on the VI’s front panel. Next, note that the way you get the row into the control is by using an invoke node that calls the Edit Tree Items:Add Item method. I point out this fact because it tells you something important: all the data we are going to be displaying in the control are properties of the control, not values. Consequently, they will be automatically saved as part of the control whenever you save the VI that contains the control.

Next, let’s consider the inputs to the method. The top-most item is Parent Tag. The assumption is that the method is defining a new child item, so this input defines the parent under which the new child will reside. Therefore, a Parent Tag that is a null string indicates an item with no parent (i.e. a top-level item). The next item down from the top is Child Position and its job is to tell the method where to insert the new child that it is creating. A value of -1, as is used here, tells the method to put the new child after any existing children of the identified parent. In other words, if this code is called multiple times, the children will appear in the control in the order in which they were created.

The next two input items (Left Cell String and Child Text) control what the user will see in the control. You will recall that I said that tree controls are sort of like hierarchical, collapsible tables. In that representation, the left-most cell shows the hierarchical organization through indentation. In addition, by default, every row that has other rows nested beneath it shows a small glyph indicating that the row can be expanded. The other cells in the row are like the additional columns in the table and can hold whatever data you want. When creating entries for directories, the left-most cell will contain the name of the directory, and the remainder of the row will be empty. To implement this functionality, the input path is stripped and the resulting name (the last item in the path) is passed to the Left Cell String input. In addition, an empty string array is written to the Child Text input.

Next, the Child Tag input allows you to specify the value that you want to use to uniquely identify this row when creating children under it, or reading the value of control selections. Now the documentation says that if you don’t wire a string to this input, it will reuse the Left Cell String value as the tag, but you don’t want to depend on this feature. The problem is that tags have to be unique, so to prevent duplication LabVIEW automatically modifies these tags to ensure that they are unique by appending a number at runtime. While it is true that the method returns a string containing the tag that LabVIEW generated, not knowing ahead of time what the tag will be can complicate subsequent operations. To avoid this issue, I like to include logic that will guarantee that the tag value that I write to this input is unique. For this application, if the parent tag is a null string (indicating a top-level item), the code takes the entire path, converts it to a string and uses the result as the child tag. If the parent tag is not null, the code generates the tag by taking the parent tag value and appending to it a slash character and the string that is feeding the child’s Left Cell String input. If the reason for this logic escapes you, don’t worry about it – you’ll see why it’s important shortly.
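
Reduced to a couple of lines of Python, the tag-building rule looks like this (the backslash delimiter matches the Windows-style paths used in this example):

import os

def make_child_tag(parent_tag, item_path):
    if parent_tag == "":
        return str(item_path)             # top-level item: the whole path is unique by definition
    name = os.path.basename(str(item_path))
    return parent_tag + "\\" + name       # child item: parent tag + delimiter + left-cell string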

Finally, the Child Only? input is a flag that, when false, allows other rows to be added hierarchically beneath it.

Showing the Files

With the code handled for creating entries associated with directories, now we need to implement the logic for creating the entries that represent the files inside those directories – which, as you can see below, utilizes the same method as we saw earlier, but plays with the inputs in slightly different ways.

Process New File Entry.slow

Named Process New File Entry.slow.vi, this VI is designed to use the additional columns to provide a little additional information about the file: to wit, its size and last modification date. Therefore, the first thing the code does is call a built-in function (File/Directory Info) that reads the desired information. However, this call raises the potential of an error being generated. When errors are possible you need to spend some time thinking about what you want to have happen when they occur. In this situation, there are three basic responses:

  1. Propagate the Error: With this approach, the error would simply be propagated on through the code and be reported like any other error. This action would ensure that the error would be reported, but would stop the processing of the interface.
  2. Don’t Include this File in the List: By trapping the error and preventing it from being passed on, we can use it to block the display of files for which we can’t retrieve the desired information. This technique would allow the interface processing to run to completion, by simply ignoring the error.
  3. Include the File but not its Data: A variation of Option 2, this approach would still block the error from being propagated. However, it would still create the file’s entry in the table but with dummy data like, “Not Available”, or simply “n/a” for the missing data.

So which of these options is the correct one? This is one of those situations where there is no universally correct answer. Sorting out which option is the correct one for your application is why, as a software engineering professional, you earn the “big bucks”. For the purpose of our demonstration, I picked Option 2.

Next, the New Directory Tag input is the tag that is associated with the folder in which this file resides. Finally, the Child Tag value is calculated by taking the Parent Tag value and appending to it a slash character and the name of the file stripped from the input path.

Pulling it all Together

So those are the two main pieces of code. All we have to do now is combine them into a single process that will process a starting directory to produce a hierarchical listing of its contents. The name of this VI is Process Directory.slow.vi, and this is what its code looks like:

Process Directory.slow

So you can see that the first thing it does is call the subVI we discussed for creating an entry in the tree control for the directory identified in the Starting Path input, using the tag value from the My Parent Tag input. The result is that the folder is added to the tree, and the subVI returns the tag for the new folder item. The next step is to process the folder’s contents, so the code calls the built-in List Folder function to generate lists of the directory’s files and subdirectories.

The array of file names is passed into a loop that repeatedly calls the subVI we discussed earlier that creates entries in the tree control for individual files. The array of subdirectory names drives a loop that first verifies that the first character of the name is not a dollar sign (“$”). Although this check is not technically necessary, it serves to bypass various hidden system directories (like $Recycle Bin) which would generate errors anyway. Assuming that the subdirectory name passes the test, the code calls a subVI that we haven’t looked at before – or have we? If you open this subVI and go to its block diagram, you will see this:

Process Directory.slow

Look familiar? I have not simply duplicated the logic in Process Directory.slow.vi, rather I am using a technique called recursion to allow the VI to call itself. This idea might sound more than a little confusing, but if you think about it, the idea makes a lot of sense. Look at it this way, to correctly process these subdirectories, we need to do the exact same things as we are doing right now to process the parent directory, so why not use the exact same code?
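
Stripped of the LabVIEW specifics, the recursive walk can be sketched in a few lines of Python; the tree object and its add_folder/add_file methods are hypothetical stand-ins for the two subVIs we just built:

import os

def process_directory(tree, starting_path, parent_tag=""):
    # Add an entry for this folder, add its files, then call ourselves for each subdirectory.
    folder_tag = tree.add_folder(parent_tag, starting_path)
    for entry in sorted(os.listdir(starting_path)):
        full_path = os.path.join(starting_path, entry)
        if os.path.isfile(full_path):
            tree.add_file(folder_tag, full_path)
        elif os.path.isdir(full_path) and not entry.startswith("$"):   # skip hidden system folders
            process_directory(tree, full_path, folder_tag)             # the recursive call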

The way it works is that Process Directory.slow.vi is configured in its VI Properties as a shared clone reentrant VI. To review, when LabVIEW runs code utilizing shared clones, it creates a small pool of instances of the VI’s code in memory. When the shared clone VI is actually called, LabVIEW goes to this pool and dynamically calls one of the shared clones that isn’t currently being used. If the pool ever “runs dry” LabVIEW automatically adds more clones to the pool. It is this behavior relative to shared clones that is key to the way LabVIEW implements recursion. In order to see how this recursion operates, let’s consider this very basic top-level VI:

Getting Processing Started.slow

The code first clears any contents that might already exist in the tree control and then makes the first call to Process Directory.slow.vi. When the runtime engine sees that call, it goes to the pool, gets a clone of the VI and starts executing it. An important point to remember is that even though all the clones in the pool were derived from the same VI, they are at this point separate entities. It is as though you manually created several copies of the same VI, except LabVIEW did the copying for you.

When running this first clone, LabVIEW will eventually get to the call that it makes to Process Directory.slow.vi. As before, the runtime engine will go to the pool, get a second clone of the VI and start it executing, and so it will go until execution gets to a directory that only has files in it. In that case, the cloned VI will not get called and that Nth-generation clone will finish its execution. At this point LabVIEW will release the clone back to the pool for future reuse, and return to executing the clone that called the one that just finished. This calling clone may have other subdirectories to process, or it may be done – in which case it will also finish its execution, LabVIEW will release it back to the pool, and continue executing the clone that called it. This process will continue until all the clones have finished their work.

Some Further Points

And that, dear readers, is how the process basically works, but there are a couple important things still to cover. We need to talk about memory consumption, performance and how to interact with this control in your program once you have it populated with data.

Memory Considerations

I mentioned earlier that the information that you enter into a tree control are actually properties of the control – not its data. I also stated that as a result of that fact, said information will automatically be saved as part of the control. As a demonstration of that fact, consider that the very basic top-level VI I just showed you consumes about 14 kbytes on disk. However, as a test I turned the process loose on my PC’s Program Files (x86) directory. After it had finished processing the 14,832 folders(!) and 122,533 files(!!) contained therein, I saved the VI again. At that point, the size of the VI on disk ballooned to 2.6 Mbytes.

The solution is to always remember to delete all items from a tree control when the program using it stops. Although you obviously don’t have to worry about this sort of growth in a compiled application (a standalone application can’t save changes to itself), this convention will help to keep you from inadvertently saving extraneous information during development and artificially expanding the size of your application.

Performance Considerations

The test I did to catalog my PC’s Program Files (x86) directory also highlighted another issue: execution speed. To complete the requested processing took about an hour and a half. Doing the same processing, but minus the tree control operations, took less than a minute, so the vast majority of this time was clearly spent in updating the tree control. But what exactly was it that was taking so long? As it turns out, there are two sources of delay, the first of which is actually pretty easy to control.

The way the code is currently written, the tree control on the front panel updates its appearance after each addition – a problem by the way that is not unique to tree controls. The solution is to tell LabVIEW to stop updating the front panel for a while, and here is how to do it:

Getting Processing Started w-Defer Panel Updates

A VI’s front panel has a property called Defer Panel Updates. When you set this property to true, LabVIEW records all changes to the VI’s front panel, but doesn’t actually update it to reflect those changes. When the property is later set to false, all pending changes are applied to the front panel at once. The additions shown reduce the time to process my entire Program Files (x86) directory by 66% to just 30 minutes – which is much better, but still not great.

To reduce our processing time further, we have to take more drastic measures – starting with a fundamental change in how we add entries for individual files. The issue is that the technique we are using to add entries is very convenient because we are explicitly identifying the parent under which each child is to be placed. Consequently, we have the flexibility to add entries in essentially any order. However, as the total number of entries grows larger we begin to pay a high price for this convenience and flexibility because, under the hood, the control’s logic has to incorporate the ability to insert entries at random into the middle of existing data.

The solution to this problem is to use a different method. This method is called Edit Tree Items:Add Multiple Items to End and, as its name says, it simply appends new items to the end of the current list of entries. Of course for this to work, it means that we have to take responsibility for a lot of stuff that LabVIEW was doing for us, like adding the entries in the correct order and maintaining the indentation to preserve the hierarchical structure. Thankfully, that work isn’t very hard. For instance, here is the code for creating the new directory entry:

Process New Directory Entry

The first thing you will notice is that the invoke node is gone. The method that we will be invoking sports a single input which is an array of clusters representing the tree entries that it will add. The purpose of the logic before us is to assemble the array element that will create the parent folder’s entry in the tree.

Next, note that the information needed to define the entry is slightly different. First, we don’t need to specify a tag for the parent because we are assuming that the nodes are simply going to be added to the display in the order that they occur in the array. However, that simplification raises a problem: how do you maintain the display’s hierarchical structure? The thing to remember is that the hierarchy is defined visually, but also logically, by the indentations in the entries. Therefore the entry definition incorporates a parameter that explicitly defines the number of levels by which the new entry should be indented. Due to the way that we have been building the tags, this value is very easy to calculate. All we have to do is count the number of delimiters (“\”) in the entry’s tag and then subtract the number of delimiters in the starting path. The first part of that calculation occurs in the subVI Calculate Indent Level.vi and the second part is facilitated by a new input parameter, Indent Offset.
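
The calculation really is that small; here it is as a Python sketch, assuming the tag scheme and delimiter described earlier:

def calculate_indent_level(child_tag, indent_offset):
    # Count the delimiters in the entry's tag, then subtract the count for the starting path.
    return child_tag.count("\\") - indent_offset

# The offset is computed once from the starting path, for example:
# indent_offset = str(starting_path).count("\\")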

Make the same adaptations to the routine for adding a new file entry and you get this:

Process New File Entry

Nothing new to see here. The important part is how these two new VIs fit together and to see that we need to look at the recursive VI Process Directory.vi (I have zoomed in on just the part that has changed):

Process Directory

This logic’s core functionality is to build the array of entry definitions that the Edit Tree Items:Add Multiple Items to End method needs to do its work. The first element in this array is the entry for the directory itself, and the following elements define the entries for the files within the directory. Finally, we have to make a small change to the top-level VI as well:

Getting Processing Started

Specifically, we need to calculate the Indent Offset value based on the Starting Path input. But the important question is: does all this really help? With these optimizations in place the processing time for my PC’s Program Files (x86) directory drops to just a hair under 10 minutes. Of course while that improvement is impressive, it might still be too long, but the changes to reduce the processing time further are only necessary if dealing with very large datasets. Plus they really have nothing to do with the tree control itself – so they will have to wait for another time.

Event Handling

The last thing we have left out so far is what happens after the tree control is populated with data. Well, like most controls in LabVIEW, tree controls support a variety of events, including ones that allow event structures to respond when the user selects or double-clicks an item in the control. But this point begs the question: what is the fundamental datatype of a tree control? By default, the datatype of a tree control is a string, and its value is the tag of the currently selected item. Alternatively, if no items are selected, the tree control’s value is a null, or empty, string.

Top Level File Explorah

Because the control’s datatype is a string, you can programmatically clear any selection by writing a null string to a Value property node or a local variable associated with the control. However, note the words “By default…”: like a few other controls (such as listboxes), tree controls can be configured to allow multiple items to be selected at once. In that case, the control’s datatype changes to an array of strings where each element is the tag of a selected item.

The other thing I wanted to point out through this example is the importance of carefully considering how to define tags for items. It may seem obvious, but if you are taking the time to put data into this control, you are probably going to want to use it in the future. It behooves you, therefore, to tag it in such a way as to allow you to quickly identify and parse values. For example, I put together the tags such that they mirror the data’s natural structure – its file path. By mimicking your data’s natural structure you make it easier to locate the specific information that you need.

File Explorer – Release 1
Toolbox – Release 12

The Big Tease

OK, that is enough for now. Next time we will return to our testbed application and look at using tree controls as a control element. With this use case the focus shifts from volume of data, to organization of the GUI to simplify operator interactions.

Until Next Time…
Mike…

Dropping-In on the Testbed

Last time out we started exploring one common application of so-called “drop-in” VIs. The technique is based on the idea of creating VIs that are capable of performing something useful for the VI that is hosting them, but without interacting directly with that VI’s basic logic. The example we considered was manipulating the font and type size used to present textual data.

At the close of that post we had created a basic object-oriented structure that could manipulate the label or caption of any front panel control or indicator. I want to finish this discussion by looking at how to expand that basic implementation to allow it to set the text properties of text contained inside a control or indicator. For that we will return to our testbed application.

A Brief Recap

It has been a while since we have worked with this code, so a brief refresher on what it does is probably in order. The testbed application we will be modifying consists of several processes that run independently of one another. To begin with, there is a background process that oversees the reporting of errors that occur. Handling the user interface duties, a GUI process incorporates a subpanel that can display the front panels of several simulated acquisition and process-control VIs. The whole thing is kicked off by a launcher VI that loads the various processes into memory and starts them executing.

Our goal here will be to add the drop-in VI we created last time to all the user-facing VIs and add classes as necessary to allow it to handle the controls and indicators on those VIs. However, if you don’t already have a tool for editing database contents directly, you should first download a tool called Database .NET (the link is to a zip file, and is at the bottom of the page). The program is a simple utility that lets you examine and edit database data from a number of different DBMS. I don’t know the folks that wrote this, and have no vested interest in the program other than I have used it for years and found it very useful. Note that this program has no installer so it has a very small footprint – it will even run from a USB stick. To “install” the program, simply create a directory for it on your computer and then drag into it the program that is inside the zip archive you downloaded, and installation is complete. The easiest way to invoke it is to set it as the default application for *.mdb files.

  • Note that if you decide to install this utility in a subdirectory of the Program Files (x86) directory, you may have to play around with the folder permissions a bit before it will run. Because the program generates several temporary files when it’s starting up, the user has to have Full Access to the folder in which it is installed.

One other caveat to bear in mind before we dive into the modifications is that these operations cannot override limits on these properties that might exist for other reasons. For example, these techniques will not work on controls that you have defined as strict typedefs. The reason: the strict typedef defines everything about the control’s appearance and the property node will throw an error if you try to change it. Likewise, a System-themed control will let you change the font characteristics, but will complain if you try to change colors.

Making With the Modifications

So where do we start? Well, the first thing we need to do is to make a couple of minor tweaks to Display Font Manager.vi. First, we need to define what happens to the drop-in’s errors. Because it’s important to preserve them, we will save the errors that arise in the drop-in to the same location where errors from the testbed application proper are stored – but without bothering the program’s operator. To accomplish that task, let’s reuse the subVI that the error handling logic uses to store error data.

Drop-in Error Handling

Note that I had to add a case structure because the location where this subVI was originally used only executed if there was an error. So unless we want to have spurious records being posted, we have to add that logic here.

Next, as the code is currently written, the error chain in the drop-in’s logic starts with the Error In control and terminates in the Error Out indicator. Although this arrangement works fine during development and testing, when the time comes to deploy the code, this is not what we want. As I said last time, drop-in VIs should not interact with the host VI and should not inject their own errors into the host’s error stream. Still, it can be useful to be able to use the drop-in’s error IO to establish data dependencies that control when it runs. The solution is for the drop-in to have error clusters, but not have them be connected internally.

Errors - Straight Through

Changing the Testbed

Now that we are ready to install the drop-in, we need to decide where to install it. Examining the code, we see that there are 5 VIs that are user-facing:

  1. The Launcher (testbed.vi)
  2. The Main GUI (Display Data.vi)
  3. The Temperature Controller (Temperature Controller.vi)
  4. Two “Acquisition” VIs (Acquire Ramp Data.vi and Acquire Sine Data.vi)

So the first thing I do is modify each of these VIs by dropping a copy of the drop-in VI on to their block diagram outside the outer-most loop. For example, this is what the modified launcher block diagram looks like:

textbed.vi with drop-in installed

As promised earlier, this is all the modification that the application will need – which means we are ready to start testing.

The First Test

“But wait a minute…” you protest. “…we haven’t configured anything yet. There’s nothing to test!”

Well you’re half right. We have not gone into the database and configured any controls to be modified, but we still have something to test. We still have to verify the drop-in’s default behavior, which by the way, is to do nothing. Yes, you read that right, we have to test that nothing happens. You see, a major aspect of the drop-in concept is that drop-ins don’t do anything unless they are explicitly told to through their configuration. Right now we have installed the drop-in code, but there are no controls configured in the database, so we need to make sure that the main application continues to run as it did before: no side-effects and no errors. In short, the drop-in right now should do nothing, and we need to make sure that it fulfills that requirement.

So launch the top-level VI (testbed.vi) or run the standalone executable. As before, the launcher will show the names of the processes it’s launching and when it finishes the main GUI will open. Again as before, you will be able to switch between screens using the popup menu and the plugins will operate just as they did before. Finally, if you look at the contents of the event table in the database, you will see that no errors have been generated.

It’s All About the Children

Now that we have “nothing” working, we need to finish implementing all the “somethings”. You will recall that when we ended last time I had created a basic implementation of the font manager functionality that could change the label or caption of any type of control. The tricky part, I said, was going to be implementing the subclass, or children, methods that would modify the font of a configured control’s contents. So let’s look at those children.

The String and Digital Subclasses

I chose to start with these two because they are the easiest to understand, and are very much alike. Here’s the child method for handling strings…

String Subclass Method

…and the one for digital numerics…

Digital Subclass Method

In either subclass, the logic starts by calling the parent method (which handles labels and captions) and then extracting from the parent’s class data the reference to the control that will be manipulated. At the same time that is going on, the Font Parameters data is unbundled and the Component to Set value controls what, if anything, happens next. If the selected component is Label or Caption, a case is selected which does nothing but pass through the error cluster. If, however, the selected component is Contents, the associated case casts the basic control reference from the parent class data into the control’s specific control class, and then sets the appropriate properties.
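
If you are more comfortable reading text-based code, here is a loose Python rendering of that parent/child relationship; the class names and properties are placeholders for the LabVIEW classes, not their actual structure:

class FontManager:                        # parent: only knows about labels and captions
    def apply(self, control, component, font_name, font_size):
        if component in ("Label", "Caption"):
            control.label_font = (font_name, font_size)

class StringFontManager(FontManager):     # child: also knows how to set the contents
    def apply(self, control, component, font_name, font_size):
        super().apply(control, component, font_name, font_size)   # parent handles label/caption
        if component == "Contents":
            control.text_font = (font_name, font_size)            # class-specific text property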

The Boolean and Ring Subclasses

The next two I want to consider are, again, similar to each other, but differ from the preceding pair in that they represent control classes that don’t have any readily discernible textual value. Booleans represent logical true and false conditions, while rings are technically numerics whose value doesn’t appear anywhere as a number. In this sort of situation, the idea is to look for text that is not the control’s value but is associated with that value. For example, Boolean controls in LabVIEW can have textual displays that state the control’s condition. These strings are called Boolean Text and are often used to label push buttons or lights…

Boolean Subclass Method

Likewise, the Ring control appears to the user as a pop-up menu, so we can use this code to set the properties of the text that appears in that menu…

Ring Subclass Method

The WaveformChart Subclass

Finally, we need to take the idea of strings that are only associated with data one more step. What about complex controls that can have multiple strings associated with their values? Objects like charts are good examples of what I am talking about. Just to start, there is text associated with the axis tick marks, there is text that forms the axis labels, and there is text in the plot legends.

The most flexible approach would be to figure out how to uniquely identify each of these components; however, we must be careful not to create an API that is so flexible that it is unusable. One solution would be to simply make all the text the same font and size – which is what they are anyway. A look that I prefer, however, is to have the tick mark labels slightly smaller than the axis labels. Here is one way to do that:

WaveformChart Subclass Method

As you can see, the code treats the two axes the same by combining references to them into an array and then passing that array into a loop that manipulates the display parameters. This logic makes the axis labels the size specified in the configuration, but does a bit of math to make the tick mark labels about 10% smaller. This difference might not seem like much, but it works. If this isn’t exactly what you want, that’s OK. The point here is not to present a canonical solution, but to present concepts and ideas that help you find your own way.
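
If it helps to see the arithmetic spelled out, here is a tiny Python sketch of that sizing rule; the scale names and property assignments are placeholders, not real VI Server calls.

font_size = 24                                    # size from the configuration record
for axis in ("X scale", "Y scale"):               # both axes are treated the same
    label_size = font_size                        # axis label gets the configured size
    marker_size = max(1, round(font_size * 0.9))  # "a bit of math": tick labels ~10% smaller
    print(axis, label_size, marker_size)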

Adding Configurations

Now we are ready to add the font definitions to the database. I have created a total of 12 definitions covering 9 different controls and indicators and you can see them all by examining the SQL file in the _repos subdirectory in the project (starting at line 27). However, to give you a taste of what the SQL code for this functionality looks like, here is the SQL for the table holding the font configurations, and the font definition for the string indicator on the front panel of the launcher.

CREATE TABLE ctrl_font_definition (
    id          AUTOINCREMENT PRIMARY KEY,
    owner_name  TEXT(50) WITH COMPRESSION,
    ctrl_name   TEXT(50) WITH COMPRESSION,
    font_name   TEXT(20) WITH COMPRESSION,
    font_size   INTEGER,
    ctrl_comp   TEXT(20) WITH COMPRESSION
  )
;

INSERT INTO ctrl_font_definition
  (owner_name, ctrl_name, font_name, font_size, ctrl_comp)
VALUES
  ('testbed.vi', 'progress', 'Segoe UI', 24, 'Contents')
;

The goal of these initial definitions is to “turn on” the functionality without changing too much. For example, the ‘Segoe UI’ font is the default font that LabVIEW uses on recent versions of the Windows platform. If you are running this code on the Macintosh or Linux (or an older version of Windows), the default font will be different. So on other platforms you may need to modify these definitions before you install them.

Once we have the definitions in the database, let’s try the testbed application again. You might not notice a lot of difference, but that is sort of the point: this initial test simply reproduces the default values. One place where you will notice a difference is if you are running Windows and have the display font scaling set to a non-default value. The text size will now always be the same relative to the size of the window, regardless of how the display setting changes.

From here I would recommend that you play around a bit and manually change the font and size of the various controls to see the effect.

Testbed Application – Release 16
Toolbox – Release 12
Testbed Installer – Release 16

Please note that I have included in this release a built version of the application so you can practice working with the database. The LocalDB.mdb file included with this installer has the table defined for holding the font definitions, but the table is empty. This release has two purposes: One, by adding to and manipulating the data in its database, you can see that you really can modify the visual presentation without changing code. Two, I have started using LabVIEW 2015 and realize that some of you may not have upgraded yet. If this version change is a problem, post a comment and I will send you a version of the code back-saved to LabVIEW 2014.

The Big Tease

One of the things that I like about NI Week is the opportunity to meet friends both new and old. Before a keynote address one morning I was talking to another one of the LabVIEW Champions, Jack Dunaway by name, and the topic of this blog came up. To make a long story short, he suggested a topic that sounded so good, I’m going to get started on it next time.

One good way of showing a lot of data in a small space is what is known as a tree control. It’s valuable because its structure is inherently hierarchical and so can display a lot of data while not taking up a lot of screen real estate. In addition, it can reduce the overwhelm that you sometimes feel when looking at large datasets because, when done well, it allows you to start with a high-level view of the data and gradually drill down to the specific results you want.

If you are working in Windows, there are two such controls available: one that is part of Windows, and one that is native to LabVIEW. So next time: the Native LabVIEW Tree Control. Be there or be square.

Until Next Time…

Mike…

More Than One Kind of Modularity

When learning something that you haven’t done before – like .NET – it’s not uncommon to go through a phase where you look at some of the code you wrote early on and cringe (or at least sigh deeply). The problem is that you are often not only learning a new interface or API, but you are learning how to best use that interface or API. The cause of all the cringing and sighing is that as you learn more, you begin to realize that some of the assumptions and design decisions that you made were misguided, if not flat-out wrong. If you look at the code we put together last time to help us learn about .NET in general, and the NotifyIcon assembly in particular, we see a gold-plated example of just such code. Although it clearly accomplished its original goal of demonstrating how to access .NET functionality and illustrating how the various objects can relate to one another, it is certainly not reusable – or maintainable, or extensible, or any of the other “-ables” that good software needs to be.

In fact, I created the code in that way so this time we can take the lesson one step further to fix those shortcomings, and thus demonstrate how you can go about cleaning up code (your own or inherited) that is making you cringe or sigh. Remember, it is always worth your time to fix bad design. I can’t tell you how many times I have seen people struggling with bad decisions made years before. Rather than taking a bit of time to fix the root cause of their trouble, they continue to waste hours on project after project in order to work around the problem.

Ok, so where do we start?

Clearly this code would benefit from cleaning up and refactoring, but where and how should we start? Well, if you are working on an older code base, the question of where to start will not be a problem. You start where the most pain is. To put it another way, start with the things that cause you the biggest problems on a day-to-day basis.

This point, however, doesn’t mean that you should just sit around and wait for problems to arise. As you are working, always be asking yourself whether what you are doing has limitations, or embodies assumptions that might cause problems in the future.

The next thing to remember is that this work can, and should, be iterative. In other words you don’t have to fix everything at once. Start with the most egregious errors, and address the others as you have the opportunity. For example, if you see the code doing something stupid like using a string as a state variable, you can fix that quickly by replacing the strings with a typedef enumeration. I have even fixed some long-standing bugs in doing this replacement because it resolved places where states were subtly misspelled or contained extraneous spaces.
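
The same idea expressed in text-language terms rather than LabVIEW ones: once the states are members of an enumeration instead of free-form strings, a misspelled state fails loudly instead of silently falling into the default case. (This Python sketch is only an analogy for the typedef enumeration; the state names are made up.)

from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    ACQUIRE = auto()
    SHUTDOWN = auto()

next_state = State.ACQUIRE   # State.AQUIRE would raise an AttributeError immediately,
                             # where the string "Aquire " would just quietly not match a case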

Finally, remember that the biggest payoffs, in terms of long-term benefit, come from improved modularity that corrects basic architectural problems. As we shall see in the following discussion, I include under this broad heading modularity in all its forms: modular functionality, modular logic and modular data.

Revisiting Modular Functionality

Modular functionality is the result of taking small reusable bits of code and encapsulating them in routines that simplify access, standardize the interface or ensure proper usage. There are good examples of all these usages in the modified application code, starting with Create NotifyIcon.vi:

Create NotifyIcon VI

Your first thought might be to wonder why I bothered turning this functionality into a subVI. After all, it’s just one constructor node. Well, yes, that is true, but it’s also true that in order to create that one node you have to remember multiple steps and object names. Even though this subVI appears rather simple, if you consider what it would take to recreate it multiple times in the future, you realize that it actually encapsulates quite a bit of knowledge. Moreover, I want to point out that this knowledge is largely stuff that offers no overwhelming benefit in return for committing it to memory.

Next, let’s consider the question of standardizing interfaces. Our example in this case is a new subVI I created to handle the task of assigning an icon to the interface we are creating. I have named it Set NotifyIcon Icon.vi:

Set NotifyIcon Icon VI

You will remember from our previous discussion that this task involves passing a .NET object encapsulating the icon we wish to use to a property node for the NotifyIcon object. Originally, this property was combined with several others on a single node. A more flexible approach is to break up that functionality and standardize the interfaces of all the subVIs that set NotifyIcon properties so they consist simply of an object reference and the data used to set the property, expressed in a standard LabVIEW datatype – in this case a path input. This approach also clearly simplifies access to the desired functionality.

Finally, there is the matter of ensuring proper usage. A good place to highlight that feature is in the last subVI that the application calls before quitting: Drop NotifyIcon.vi.

Drop NotifyIcon VI

You have probably been warned many times about the necessity of closing references that you open. However, when working with .NET objects, that action by itself is sometimes not sufficient to completely release all the system resources that the assembly had been using. Most of the time, if you don’t completely close out the assembly you may notice memory leaks or errors from attempting to access resources that still think they are busy. With the NotifyIcon assembly, however, you will see a problem that is far more noticeable, and embarrassing. If you don’t call the Dispose method, your program will close and release all the memory it was using, but if you go to the System Tray you’ll still see your icon. In fact, you will be able to open its menu and even make selections – it just doesn’t do anything. Moreover, the only way to make it go away is to restart your computer.

Given the consequences of forgetting to include this method in your shutdown sequence, it is a good idea to make it an integral part of the shutdown code so it can’t be forgotten.

Getting Down with Modular Logic

But as powerful as this technique is, there can still be situations where the basic concept of modularity needs to be expressed in a slightly different way. To see such a situation, let’s look at the structure that results from simply applying the previous form of modularity to the problem of building the menus that go with the icon.

Create ContextMenu VI

Comparing this diagram to the original one from last time, you can see that I have encapsulated the repetitive code that generated the MenuItem objects into dedicated subVIs. By any measure this change is a significant improvement: the code is cleaner, better organized, and far more readable. For example, it is pretty easy to visualize what menu items are on submenus. However, in cases such as this one, the improved readability can be a bit of a double-edged sword. To see what I mean, consider that for the structure of your code to let you visualize your menu organization, said organization must be hard-coded into the structure of the code. Consequently, changes to the menus will, as a matter of course, require modification to the fundamental structure of the code. If the justification for modularity is supposed to include concepts like flexibility and reusability, you just missed the boat.

The solution to this situation is to realize that there is more than one flavor of modularity. In addition to modularizing specific functionality, you can also modularize the logic required to perform complex and changeable tasks (like building menus) that you don’t want to hard code. If this seems like a strange idea to you, consider that computers spend most of their time using their generalized hardware to perform specialized tasks defined by lists of instructions called “programs”. The thing that makes this process work is a generalized bit of software called a “compiler” that turns the programs into data structures that the generalized hardware can use to perform specialized actions.

Carrying forward with this line of reasoning, what we need is a simple way of defining a menu structure that is external to our program, and a “menu compiler” that turns that definition into the MenuItem references that our program needs. So let’s build one…

Creating the Data for Our Menu Compiler

So what should this menu definition look like? Well, to answer that question we need to start with the data required to define a single MenuItem. We see that as a minimum, every item in a menu has to have a name for display to the user, a tag to identify it, and a parent tag that says if the item has a parent item (and if so which item is its parent). In addition, we haven’t really talked about it, but the order of references in an array of menu items defines the order in which the items appear in the menu or submenu – so we need a way to specify its menu position as well. Finally, because in the end the menu will consist of a list (array) of menu item references, it makes sense to express the overall menu definition that we will eventually compile into that array of references as a list (and eventually also an array).

But where should we store this list of menu item definitions? At least part of the answer to this question depends on who you want to be able to modify the menu, and the level of technical expertise that person has. For example, you could store this data in text files as INI keys, or as XML or JSON strings. These files have the advantage of being easy to generate and are readily accessible to anyone who has access to a text editor – of course that is their major disadvantage, as well. Databases on the other hand are more secure, but not as easy to access. For the purposes of this discussion, I’ll store the menu definitions in a JSON file because, when done properly, the whole issue of how to parse the data simply goes away.

To see what I mean, here is a nicely indented JSON file that describes the menu that we have been using for our example NotifyIcon application:

[
	{
		"Menu Order":0,
		"Item Name":"Larry",
		"Item Tag":"Larry",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Moe",
		"Item Tag":"Moe",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"The Other Stooge",
		"Item Tag":"The Other Stooge",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":3,
		"Item Name":"-",
		"Item Tag":"",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":4,
		"Item Name":"Quit",
		"Item Tag":"Quit",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":0,
		"Item Name":"Curley",
		"Item Tag":"Curley",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Shep",
		"Item Tag":"Shep",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"Joe",
		"Item Tag":"Joe",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	}
]

And here is the LabVIEW code that will convert this string into a LabVIEW array (even if it isn’t nicely indented):

Read JSON String

JSON has a lot of advantages over techniques like XML: for starters, it’s easier to read and a lot more efficient. But the real reason I like using JSON is that it is so very convenient.
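
To make the “convenience” claim concrete, here is what the client side of that operation looks like in a text language; the file name menu_definition.json is hypothetical, and in LabVIEW the equivalent is the single JSON-unflatten call shown in the screenshot above.

import json

with open("menu_definition.json") as f:   # hypothetical file name
    menu_items = json.load(f)             # -> list of records, one per menu item

print(menu_items[0]["Item Name"])         # -> Larry
print(len(menu_items))                    # -> 8 items in the example above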

Starting the Compilation

Now that we have our raw menu definition string read into LabVIEW and converted into a datatype that will simplify the next step in the processing, we need to ensure that the data is in the right order. To see why, we need to remember that the final data structure we are building is hierarchical, so the order in which we build it matters. For instance, “The Other Stooge” is a top-level menu item, but it is also a submenu so we can’t build it until we have references to all the menu items that are under it. Likewise, if one of the items under it is a submenu, we can’t build it until all its children are created.

So given the importance of order, we need to be careful how we handle the data because none of the available storage techniques can on their own guarantee proper ordering. The string formats can all be edited manually, and it’s not reasonable to expect people to always type in data in the right order. Even though databases can sort the result of queries, there isn’t enough information in the menu definition to allow it to do so.

The menu definition we created does have a numeric value that specifies the order of items in their respective menus and submenus. We don’t, however, yet have a way of telling the level at which the items reside relative to the overall menu structure. Logically we can see that “Larry” is a top-level menu item, and “Shep” is one level down, but we can’t yet determine that information programmatically. Still, the information we need is present in the data; it just needs to be massaged a bit. Here is the code for that task:

Ordering the Menu Items

As you can see, the process is basically pretty simple. I first rewrite the Item Tag value by appending the original Item Tag value to the colon-delimited list that starts with the Parent Tag. I then count the number of colons in the resulting string, and that is my Menu Level value. The exceptions to this processing are the top-level menu items, which are easy to identify due to their null parent tags. I simply force their Menu Level values to zero and replace the null string with a known value that will make the subsequent processing easier. The real magic, however, occurs after the loop stops. The code first sorts the array in ascending order and then reverses the array. Due to the way the 1D array sort works when operating on arrays of clusters, the array will be sorted first by Menu Level and then by Menu Order – the first two items in the cluster. This sorting, in concert with the array reversal, guarantees that the children of a submenu will always be processed before the submenu item itself.
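
Here is the same ordering step sketched in Python so you can follow the data rather than the wires. It is only an illustration: the field names come from the JSON definition above, “[top-menu]” stands in for whatever known value the real code substitutes for the null parent tag, and deeply nested menus may need the full parent path rather than the plain name.

def order_menu_items(items):
    for item in items:
        if item["Parent Tag"] == "":
            item["Parent Tag"] = "[top-menu]"   # known value for the top level
            item["Menu Level"] = 0
        else:
            # Build the colon-delimited path, e.g. "The Other Stooge:Shep";
            # the number of colons is the item's depth in the menu tree.
            item["Item Tag"] = item["Parent Tag"] + ":" + item["Item Tag"]
            item["Menu Level"] = item["Item Tag"].count(":")
    # Sort ascending by (Menu Level, Menu Order), then reverse, so the children
    # of any submenu always come out of the list before the submenu itself.
    items.sort(key=lambda i: (i["Menu Level"], i["Menu Order"]))
    items.reverse()
    return items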

Some of you may be wondering why we go to all this trouble. After all, couldn’t we just add a value to the menu definition data to hold the Menu Level? Yes, we could, but it’s not a good idea, and here’s why. In some areas of software development (like database development, for instance) the experts put a lot of store in reducing “redundancy” – which they define basically as storing the same piece of information in more than one place. The problem is that if you have redundant information, you have to decide how to respond when the two pieces of information that are supposed to be the same, aren’t. So let’s say we add a field to the menu definition for the menu level. Now we have the same piece of information stored in two different places: it is stored explicitly in the Menu Level value while at the same time it is also stored implicitly in the Parent Tag. If those two values ever disagree, which one do you believe? By deriving the level from the parent tag, we never have to face that question.

Generating the Menu Item “Code”

In order to turn this listing into the MenuItem references we need, we will pass this sorted and ordered array into a loop that will process one element at a time. And here it is:

Compiling the Menu-1

You can see that the loop carries two shift registers. The top SR holds a 1D array of strings that consists of the submenu tags that the loop has encountered so far. The other SR also carries a 1D array but each element in it is a cluster containing an array of MenuItem references associated with the submenu named in the corresponding element of the top SR.

As the screenshot shows, the first thing that happens in the loop is that the code checks to see if the indexed Item Tag is contained in the top SR. If the tag is missing from the array it means that the item is not a submenu, so the code uses its data to create a non-submenu MenuItem. In parallel with that operation, the code is also determining what to do with the reference that is being created by looking to see if the item’s Parent Tag exists in the top SR. If the item’s parent is also missing from the array, the code creates entries for it in both arrays. If the parent’s tag is found in the top SR, it means that one or more of the item’s sibling items has already been processed so code is executed to add the new MenuItem to the array of existing ones:

Compiling the Menu-2

Note that the new reference is added to the top of the array. The reason for this departure from the norm is that, due to the way the sorting works, the menu order is also reversed, and this logic puts the items on each submenu back in their correct order. Note also that during this processing the references associated with the menu items are also accumulated in a separate array that will be used to initialize the callbacks. Because the array indexing operation is conditional, only a MenuItem that is not a submenu will be included in this array.

Generating the Submenu “Code”

If the indexed Item Tag is found in the top SR, the item is a submenu and the MenuItem references needed to create its MenuItem should be in the array of references stored in the bottom SR.

Compiling the Menu-3

So the first thing the code does is delete the tag and its data from the two arrays (since they are no longer needed) and use the data thus obtained to create the submenu’s MenuItem. At the same time, the code is also checking to see if the submenu’s parent exists in the top SR. As before, if the Parent Tag doesn’t exist in the array, the code creates an entry for it, and if it does…

Compiling the Menu-4

…adds the new MenuItem to the existing array – again at the top of the array. By the time this loop finishes, there should be only one element in each array. The only item left in the top SR should be “[top-menu]” and the bottom SR should be holding the references to the top-level menu items. That array of references is in turn used to create the ContextMenu object, which is written to the NotifyIcon object’s ContextMenu property.
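
Pulling the last few screenshots together, here is a hedged, text-language summary of the whole compile loop. It uses plain tags and Python dicts where the real code uses the colon-delimited paths and .NET MenuItem references, but the two “shift registers”, the insert-at-the-top behaviour, and the single surviving “[top-menu]” entry are the same ideas.

def make_item(name, enabled, children=None):
    # Placeholder for the MenuItem constructor wrapper described earlier
    return {"name": name, "enabled": enabled, "children": children or []}

def compile_menu(ordered_items):
    menu_names = ["[top-menu]"]    # top shift register: submenu tags seen so far
    menu_refs = [[]]               # bottom shift register: one item list per entry above
    callback_refs = []             # non-submenu items that will need callbacks

    for item in ordered_items:     # deepest level first, menu order reversed
        tag, parent = item["Item Tag"], item["Parent Tag"]
        if tag in menu_names:
            # Submenu: its children are already built, so collect them and
            # retire its entries from both "shift registers".
            idx = menu_names.index(tag)
            menu_names.pop(idx)
            ref = make_item(item["Item Name"], item["Enabled"], menu_refs.pop(idx))
        else:
            # Ordinary menu item
            ref = make_item(item["Item Name"], item["Enabled"])
            callback_refs.append(ref)

        if parent not in menu_names:
            menu_names.append(parent)   # first of this parent's children we have seen
            menu_refs.append([])
        # Prepend, so the reversed processing order restores the original menu order
        menu_refs[menu_names.index(parent)].insert(0, ref)

    # Whatever is left under "[top-menu]" becomes the ContextMenu's top-level items
    return menu_refs[menu_names.index("[top-menu]")], callback_refs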

What Could Possibly Go Wrong?

At this point, you can run the example code and see an iconic system tray interface that behaves pretty much as it did before, but with a few extra selections. However, we need to have a brief conversation about error checking, and frankly there are two schools of thought on this topic. There is ample opportunity for errors to creep into the menu structure. Something as simple as misspelling a parent tag name could result in an “orphan” menu that would never get displayed – or could end up being the only one that is displayed. So the question is: how much error checking do we really need to do? There are those who think you should spend a lot of time going through the logic looking for and trapping every possible error.

Given that most menus should be rather minimal, and errors are really obvious, I tend to concentrate on the low-hanging fruit. For example, one simple check that will catch a large number of possible errors is to look, at the end of the processing, for more than one menu name left in the top SR – and, on finding an extra one, assert an error that gives the name of the extra menu. You should probably also use this error as an opportunity to abort the application launch, since you could otherwise be left in a situation where you can’t shut down the program because the “Quit” option is missing.
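
In terms of the sketch above, that particular check is only a couple of lines added just before compile_menu returns:

# Anything left in the "top shift register" besides [top-menu] is an orphaned
# submenu -- most likely a misspelled parent tag in the definition file.
orphans = [name for name in menu_names if name != "[top-menu]"]
if orphans:
    raise RuntimeError("Orphaned menu definition(s): " + ", ".join(orphans))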

Something else that you might want to consider is what to do if the external file containing the menu definitions comes up missing. The most obvious solution is to, again, abort the application launch with some sort of appropriate error message. However, depending on the application it might be valuable to provide a hard-coded default menu that doesn’t depend on external files and provides a certain minimum level of functionality. In fact, I once worked on an application where this was an explicit requirement because one of the things that the program allowed the user to do was create custom menus, the structure of which was stored in external files.

Stooge Identifier – Release 2
Toolbox – Release 11

The Big Tease

So what are we going to talk about next time? Well, something that I have seen coming up a lot lately on the user forum is the need to be able to work with very large datasets. Often, this issue arises when someone tries to display the results of a test that ran for several hours (or days!) only to discover that the complete dataset consists of hundreds of thousands of separate datapoints. While LabVIEW can easily deal with datasets of this magnitude, it should be obvious that you need to really bring your memory management “A” game. Next time we will look into how to plot and manage VLDs (Very Large Datasets).

Until Next Time…

Mike…

Creating a Reconfigurable Interface Using Undockable Windows

Something I have always enjoyed doing is creating programs or interfaces that do things you don’t expect LabVIEW to be able to do. Consequently, I thought I would take a couple posts and consider some useful and perhaps surprising interfaces.

The first one I want to look at is actually an idea I got from another LabVIEW Champion – Ben Rayner by name. Some time ago, he posted on the user forum a small proof-of-concept VI for an interface that is so cool I thought I’d polish it up, flesh it out and see what came out of it.

What it does

At first glance, the interface appears similar to the test bed application that we have built on this blog. There is a common display area and by making selections from a pop up menu, you can display screens showing different data.

docking single screen

The difference is that if you click a button, the GUI will “undock” the current screen and turn it into an independent floating window that is no longer accessible from the popup menu.

docking 3 screens

The software allows an operator to undock as many screens as they want at one time. The only limitations are screen space and logic that mandates leaving at least one screen docked. Likewise, if you close one of the floating windows, it will again take up its original position in the selection menu. As is our usual policy, a link to the code is a little further along in this post.

Why This Interface?

I fully plan to get into how the code works, but first we need to consider why you would want to use this interface in the first place. What is the use case that this interface addresses?

Hopefully it should be obvious to anyone who considers the matter for more than a moment or two that user interfaces are always compromises. If you do your due diligence when designing an interface you try to put together in one screen or window the various bits of information that are logically related, or at least will be used together. However, requirements can change, or a particular type of user might need to be able to correlate pieces of data that are on different screens.

You could create a separate screen for just that user, but that solution requires some additional development effort. However, if you create windows that can be undocked to be moved around the screen as desired, users that want to see their data in a particular way may be able to do so without waiting for you to create a custom screen.

So with all that in mind, let’s think about how we might accomplish this sort of GUI.

How it Works in Theory…

The first thing you need to realize as we begin looking at how to create this interface is that if you have been following this blog you already know everything required to make this happen. The only thing missing is an understanding of how to make all the pieces fit together in a slightly different way.

For example, we have talked many times about how to create a basic multiscreen interface using subpanels. Likewise, we know how to make windows float on top of one another by defining their behavior as floating. Now if you think about it, these are the two states that our screens can be in: as a subpanel (when docked) and as a floating window (when undocked).

So our problem is really as simple as deciding how to manage the transition from one state to the other. If you are following the recommendations I have been making, the screens are already written as separate processes that don’t know (or care) how they are being displayed – or even if they are being displayed.

So how do we want this to behave? Well, for the undocking there are a variety of options. LabVIEW supports drag-and-drop so we might be able to do something with that. Alternatively, we could create a pull down menu with an “Undock Current Window” item on it, or even a custom shortcut menu when the user right-clicks on the interface’s front panel. But those are really just different ways of triggering the same logic, so for this demo I’m just going to create a button below the subpanel that is labeled something obscure like, “Click to Undock the Current Display”.

For docking one of the floating windows back in the interface, our options are more limited. But when you have a really good technique available, you only need one. How about this: when you click the window’s close button it gets docked back into the interface. After all, it’s what most users would try anyway.

…and in Practice

Now that we basically understand where we are going, let’s start looking at some code.

Undockable Windows – Release 1
Toolbox – Release 9

The block diagram of the top-level VI is pretty much what a regular reader of this blog would expect. It starts by running a subVI that loads into memory, and starts executing, all the VIs that are going to be available through the subpanel. References to these VIs are stored in a DVR so they can be used later to populate the subpanel itself. This subVI also outputs an array of strings that are the names of the VIs it loaded. The code uses this array to initialize the strings in a ring control, and then programmatically fires the value change event associated with the ring.

initialization for undocking

With this work done, the VI next registers to receive a UDE that we will discuss in a moment, and enters the program’s main event loop. This loop includes a value change event for the ring control that changes the VI that is visible in the subpanel, and two additional events that manage the undocking and docking processes.

Getting Undocked

The first of these new events handles the undocking of windows and, understandably, is a value change event on the undocking button. The event logic goes about its work in two steps that are contained in separate subVIs. The first one is called Float the VI.vi.

undock

Its job is to remove the VI that is being undocked from the subpanel using the Remove VI subpanel control method, and then (after looking up its VI reference in the DVR) open its front panel using the FP:Open VI method. The other half of the operation is performed in Process Undock.vi.

process undock

This VI’s purpose is to update the user interface to reflect the VI’s change of state – which in this case means simply disabling its selection in the ring control, and updating the DVR. This subVI is also responsible for getting a reference to the next available docked window and inserting it into the subpanel. As a by-product of this operation, the subVI also generates a flag that indicates when there is only one docked window remaining. The event handler uses this flag to disable the undock button when all but one of the displays have been undocked.

Docking a Window

Like the previous one, this event handles its duties as a two-step process. But unlike the previous one, it is driven by a UDE from the VI that is being docked back into the main display. As stated before, this action should be triggered when the user closes the VI’s front panel. The plugins accomplish this task by intercepting the Panel Close? event and, instead of closing their front panels, simply firing a UDE (Redock Screen) that tells the GUI to reincorporate the screen back into the subpanel. The first subVI in the event handler is Unfloat the VI.vi.

redock

After looking up the VI’s reference in the DVR, it closes the VI’s front panel and inserts it back into the subpanel. Note that it is not necessary to remove the VI that is already there. The other subVI that the event handler calls is Process Redock.vi.

process redock

Basically reversing the operations that were performed when the window was undocked, this VI removes the window’s name in the ring control from the list of disabled items and updates the status stored in the DVR.
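
If it helps to see the bookkeeping in one place, here is a hedged summary of the state the GUI maintains, with Python standing in for the DVR and for the subpanel, front-panel, and ring-control operations. The comments note where the real methods go; none of the names here are actual LabVIEW API calls.

class DockManager:
    def __init__(self, screen_names):
        self.screens = list(screen_names)   # everything the launcher loaded
        self.floating = set()               # names currently undocked

    def undock(self, name):
        # real code: "Remove VI" on the subpanel, then FP:Open on the screen's VI
        self.floating.add(name)             # ...and disable its ring-control entry
        last_one = len(self.screens) - len(self.floating) <= 1
        return last_one                     # used to disable the undock button

    def redock(self, name):
        # real code: FP:Close on the VI, then insert it back into the subpanel
        self.floating.discard(name)         # ...and re-enable its ring-control entry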

The Big Tease

So there’s a basic implementation of a pretty neat capability, but what now? What sort of enhancements might we reasonably want to make? I have an idea. You’ll notice that when the windows undock, they always open in the middle of the main screen. This is certainly a reasonable approach, but an enhancement that would be sure to make your users happy is to provide some sort of mechanism whereby each window would remember its last position. With that capability in place, a window would reopen in the same place it was when it was last closed.

So when we next get together, let’s do that. It will be a good thing to know how to do in general.

Until Next Time…

Mike…

Building a Web Backend in LabVIEW

When building a web-based application, one of the central goals is often to maximize return on investment (ROI) by creating an interface that is truly cross-platform. One of the consequences of this approach is that if you are using a LabVIEW-based program to do so-called “backend” tasks like data acquisition and hardware control, the interface to that program will likely be hidden from the user. You see the thing is, people like consistent user interfaces and while LabVIEW front panels can do a pretty good job as a conventional application, put that same front panel in a web page and it sticks out like the proverbial sore thumb.

Plus, simply embedding a LabVIEW front panel in a web page breaks one of the basic tenets of good web design by blurring the distinction between content and appearance. Moreover, it makes you — the LabVIEW developer — at least partially responsible for the appearance of a website. Trust me, that is a situation nobody likes. You are angered by what you see as a seemingly endless stream of requests for what to you are “pointless” interface tweaks: “So big deal! The control is 3 pixels too far to the left, what difference does that make?” And for their part, the full-time web developers are rankled by what they experience as a lack of control over an application for which they are ultimately responsible.

If you think about it, though, this situation is just another example of why basic computer science concepts like “low coupling” are, well, basic computer science concepts. Distinguishing between data collection and data presentation is just another form of modularity, and we all agree that modularity is A Very Good Thing, right?

It’s All About the (Data)Base

So if the concept that is really in play here is modularity, then our immediate concern needs to be the structure of the interface between that part of the system which is our backend responsibility, and that which is the “web guy’s” customer facing web-based GUI.

Recalling what we have learned about the overall structure of a web application, we know that the basic way most websites work is that PHP (or some other server-side programming tool) dynamically builds the web pages that people see based on data extracted from a database. Consequently, as a LabVIEW developer trying to make data available to the web, the basic question then is really, “How do I get my data into their database?”

The first option is to approach communications with the website’s database as we would with any other database. Although we have already talked at length about how to access databases in general, there is a problem. While most website admins are wonderful people who will share with you many, many things — direct access to their database is usually not one of those things.

The problem is security. Opening the database up to direct access from your LabVIEW program would create a security hole through which someone not as kind and wholesome as you could wiggle. The solution is to let the web server mediate the data transfer — and provide the security. To explore this technique, we will consider two simple examples: one that inserts data and one that performs a query. In both cases, I will present a LabVIEW VI, and some simple PHP code that it will be accessing.

However, remember that the point of this lesson is the LabVIEW side of the operations. The PHP I present simply exists so you can get a general idea of how the overall process works. I make no claims that the PHP code bears any similarity to code running on your server, or even that it’s very good. In fact, the server you access might not even use PHP, substituting instead some other server-side tool such as Perl. But regardless of what happens on the other end of the wire, the LabVIEW code doesn’t need to change.

As a quick aside, if you don’t have a friendly HTTP server handy and you want to try some of this stuff out, there is a good option available. An organization called Apache Friends produces an all-in-one package that will install a serviceable server on just about any Windows, Linux or Apple computer you may have sitting around not being used. Note that you do not want to actually put a server that you create in this way on the internet. This package is intended for training and experimentation so they give short shrift to things like security.

The following two examples will be working with a simple database table with the following definition:

CREATE TABLE temp_reading (
   sample_dttm  TIMESTAMP PRIMARY KEY,
   temperature  FLOAT
   )
;

Note that although the table has two columns we will only be interacting directly with the temperature column. The sample_dttm column is configured in the DBMS such that if you don’t provide a value, the server will automatically generate one for the current time. I wanted to highlight this feature because the handling of date/time values is one of the most un-standardized parts of the SQL standard. So it will simplify your life considerably if you can get the DBMS to handle the inserting or updating of time data for you.

Writing to the Database

The first (and often the most common) thing a backend application has to do is insert data into the database, but to do that you need to be able to send data to the server along with the URL you want to access. Now, HTTP’s GET method, which we played with a bit last week, can pass parameters, but it has a couple of big limitations. The first is the matter of security.

The way the GET method passes data is by appending it to the URL it is getting. For example, say you have a web page that is expecting two numeric parameters called “x” and “y”. The URL sent to the server would be in the form of:

http://www.mysite.com/mypage.html?x=3.14&y=1953

Consequently the data is visible to even the most casual observer. Ironically though, the biggest issue is not the values that are being sent. The biggest security hole is that it exposes the names of the variables that make the page work. If a person with nefarious intent knows the names of the variables, they can potentially discover a lot about how the website works by simply passing different values and seeing how the page responds.

A second problem with the GET method’s way of passing data is that it limits the amount of data you can send. While the underlying internet standards don’t define a hard ceiling, in practice you can’t count on URLs longer than about 2048 characters working everywhere.

To deal with both of these issues, the HTTP protocol defines a method called POST that works much like GET, but without exposing the internal operation or limiting the size of the data it can pass. Here is an example of a simple LabVIEW VI for writing a temperature reading to the database.

Insert to database with PHP

This code is pretty similar to the other HTTP client functions that we have looked at before, but there are a few differences. To begin with, it is using the POST method we just talked about. Next, it is formatting and passing a parameter to the method. Likewise, after the HTTP call it verifies that the text returned to it is the success message. But where do those values (the parameter name and the success message) come from? As it so happens, I didn’t just pull them out of thin air.
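
For comparison, here is roughly the same client call written in Python with the requests library. The script name write_temp.php and the server name are made up for the example; the parameter name tempData and the “1 Record Added” reply come straight from the PHP program shown next.

import requests

resp = requests.post("http://myserver/write_temp.php",   # hypothetical URL
                     data={"tempData": 23.7})             # travels in the POST body, not the URL
resp.raise_for_status()                                   # network / HTTP-level errors
if resp.text.strip() != "1 Record Added":                 # application-level errors
    raise RuntimeError("Insert failed: " + resp.text)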

Check out the URL the LabVIEW code is calling and you’ll note that it is not a web page, but a PHP file. In fact, what we are doing is asking the server to run a short PHP program for us. And here is the program:

<?php
	if($dbc = mysqli_connect("localhost","myUserID","myPassword","myDatabase")) {
    	// Check connection
		if (mysqli_connect_errno()) {
			// There was a connection error, so bail with the error data
			echo "Failed to connect to MySQL: " . mysqli_connect_error();
		} else {
			// Create the command string...
			$insert = "INSERT INTO temp_reading (temperature) VALUES (" . $_POST["tempData"] . ")" ;
			
			// Execute the insert
			if(!mysqli_query($dbc,$insert)){
				// The insert generated an error so bail with the error data
				die('SQL Error: ' . mysqli_error($dbc));
			}
			
			// Close the connection and return the success message
			mysqli_close($dbc);
			echo "1 Record Added";
		}
	}
?>

Now the first thing that you need to know about PHP code is that whenever you see the keyword echo being used, some text is being sent back to the client that is calling the PHP. The first thing this little program does is try to connect to the database. If the connection fails the code echoes back to the client a message indicating that fact, and telling it why the connection failed.

If the connection is successful, the code next assembles a generic SQL statement to insert a new temperature into the database. Note in particular the phrase $_POST["tempData"]. This statement tells the PHP engine to look through the data that the HTTP method provided and insert into the string it’s building the value associated with the parameter named tempData. This is where we get the name of the parameter we have to use to communicate the new temperature value. Finally, the code executes the insert that it assembled and echoes back to the client either an error, or a success message.

Although this code would work “as is” in many situations, it does have one very large problem. There is an extra communications layer in the process that can hide or create errors. Consider that a good response tells you that the process definitely worked. Likewise receiving one of the PHP generated errors tells you that the process definitely did not work. A network error, however, is ambiguous. You don’t know if the process failed because the transmission to the server was interrupted; or if the process worked but you don’t know about it because the response from the server to you got trashed.

PHP Mediated Database Reads

The other thing that backend applications have to do is fetch information (often configuration data) from the database. To demonstrate the process, we go back to the GET method:

Fetch from database with PHP

As with the first example, the target URL is for a small PHP program, but this time the goal is to read a time-ordered list of all the saved temperatures and return them to the client. However, the data needs to be returned in such a way that it will be readable from any client — not just one written in LabVIEW. In the web development world the first and most common standard is to return datasets like this in an XML string. This standard is very complete and rigorous. Unfortunately, it tends to produce strings that are very large compared to the data payload they are carrying.

Due to the size of XML output, and the complexity of parsing it, a new standard is gaining traction. Called JSON, for JavaScript Object Notation, this standard is really just the way that the JavaScript programming language defines complex data structures. It is also easy for human beings to read and understand, and unlike XML, features a terseness that makes it very easy to generate and parse.

Although there are standard libraries for generating JSON, I have chosen in this PHP program to generate the string myself:

<?php
    if($dbc = mysqli_connect("localhost","myUserID","myPassword","myDatabase")) {

    	// Check connection
		if (mysqli_connect_errno()) {
			echo "Failed to connect to MySQL: " . mysqli_connect_error();
		} else {
			// Clear the output string and build the query
			$json_string = "";
			$query = "SELECT temperature FROM temp_reading ORDER BY sample_dttm ASC";
			
			// Execute the query and store the resulting data set in a variable (called $result)
			$result = mysqli_query($dbc,$query);
			
			// Parse the rows in $result and insert the values into a JSON string
			while($row = mysqli_fetch_array($result)) {
				$json_string = $json_string . $row['temperature'] . ",";
			}
			
			// Replace the last comma with the closing square bracket and send the string
			// to the waiting client
			$json_string = "[" . substr_replace($json_string, "]", -1);
			echo $json_string;
		}
	}
	mysqli_close($dbc);
?>

As in the first PHP program, the code starts by opening a database connection and defining a small chunk of SQL — this time a query. The resulting recordset is inserted into a data structure that the code can access on a row-by-row basis. A while loop processes this data structure, appending the data into a comma-delimited list enclosed in square brackets, which is the JSON notation for an array. This string is echoed to the client, where a built-in LabVIEW function turns it into a 1D array of floats.
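
And here is the matching client-side fetch, again sketched in Python for comparison; read_temps.php is a hypothetical name for the script listed above.

import json
import requests

resp = requests.get("http://myserver/read_temps.php")   # hypothetical URL
temperatures = json.loads(resp.text)                     # "[21.4,22.0,23.7]" -> [21.4, 22.0, 23.7]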

This technique works equally well for most complex datatypes, so you should keep it in mind even if you are not working on a web project. I once had an application where the client wanted to write the GUI in (God help them) C# but do all the DAQ and control in LabVIEW-generated DLLs. In this situation the C# developer had a variety of structs containing configuration data that he wanted to send me. He, likewise, wanted to receive data from me in another struct. The easy solution was to use JSON. He had a standard library that would turn his structs into JSON strings, and LabVIEW would turn those strings directly into LabVIEW clusters. Then on the return trip my LabVIEW data structure (a cluster containing a couple of arrays) could be turned, via JSON, directly into a C# struct. Very, very cool.

The Big Tease

So what is in store for next time? Well something I have been noticing is that people seem to be looking for new ways to let users interact with their applications. With that in mind I thought I might take some time to look at some nonstandard user interfaces.

For my first foray into this area I was thinking about a user interface where there are several screens showing different data, but by clicking a button you can pop the current screen out into a floating window. Of course, when you close the floating window, it will go back to being a screen that you can display in the main GUI. Should be fun.

Until Next Time…

Mike…

Objectifying LabVIEW

I suppose a good place to start this post is with an admission that, in a sense, it is flying a false flag. One way that you could reasonably interpret the title is that in this post I am going to be showing you how to start using objects in LabVIEW. That interpretation is not correct, and the troublesome word is “start”. The fact of the matter is that you can’t use LabVIEW without interacting with objects, and many parts of it (think: VI Server) are overtly object-oriented — even without an obvious class structure. The language is built on an object-oriented foundation and so, in a very real way, has been object-oriented since Version 1.

What I am going to be showing you is how to simplify your work by building your own classes. As I stated in the teaser last time, the starting point for this discussion is the recommendation given in NI’s object-oriented training class that your first attempts at using explicit object-oriented techniques should be small, easy-to-manage subsystems — or put more simply, we need to start with baby steps.

Object-oriented baby steps

OK, so this is the point in the presentation where most presenters haul out some standard theory and moth-eaten descriptions of objects and classes — often lifted wholesale from a book on C++ programming. The problem with this approach is, of course, that we aren’t C++ programmers, and the amount of useful information we can draw from an implementation of object-oriented programming that is so fundamentally flawed is minimal at best. The approach I intend to take instead focuses on key aspects of the technique that are of immediate, practical importance to someone who is working in LabVIEW and wants to take advantage of explicitly implementing object-oriented class structures.

A Quick Glossary

The first thing we need is a vocabulary that will let us talk about the topic at hand.

OOP Clouds

Now be forewarned that some of these definitions may not exactly match what you may read elsewhere, but they are correct for the LabVIEW development environment.

  • Class — An abstract datatype.
    If you think that sounds a lot like the definition of a cluster, you’re right! Due to the way LabVIEW implements object orientation, a class is essentially a very fancy cluster. In fact, when you create a class the first item that LabVIEW inserts into it is a typedef consisting of an empty cluster. Although you don’t have to put anything into the cluster, it provides a place to put data that is private to that class.
  • Object — An instance of a class.
    As with a normal cluster, every instance of a class has its own memory space. Consequently, a class wire is in most ways the same as any other wire in LabVIEW. We are still working in a dataflow environment.
  • Property — A piece of data that tells you something about the object.
    This is why there is a cluster at the heart of the class. You want to put in that cluster information that will describe the object in a way that is meaningful to your application. Because each instance of the class is a separate wire that has its own memory space, the data contained in the cluster describes that particular object.
  • Method — A VI that is associated with a particular class and which does something useful.
    So what do I mean by “…something useful…”? Well, that all depends on the class’ purpose. A class that is responsible for creating a visual interface might have a method that causes an object to draw itself, while a class that manages the interface to data storage would likely have a method to store or retrieve application data.

From this simple list of words we can begin to see the general shape of the arena in which we will be playing. To recap: A class is a kind of wire. An object is a particular wire. A property is data carried in the wire that describes it in a useful way, and methods use the object data to do something you need done.

Dynamic Dispatch

Now that we have a basic vocabulary in place that lets us talk about this stuff, there are a couple of concepts that we need to discuss. I want to start this exploration with the mechanism that LabVIEW uses to call methods. Referred to as dynamic dispatch, this feature is often a source of confusion to developers getting started with object-oriented programming. A good way to come to grips with dynamic dispatch is to compare and contrast it to a feature of LabVIEW with which you may already be familiar: polymorphism.

Polymorphism (from the perspective of the developer using a polymorphic subVI) is the ability of a single function to adapt to whatever datatype is wired to its inputs. For example, the low-level Add node in LabVIEW is polymorphic. Consequently, it can add scalar numerics of all types, as well as arrays of numerics of varied dimensions, clusters of numerics and even arrays of clusters of numerics.

Of course, from the perspective of the developer creating a polymorphic VI the view is much different. This flexibility doesn’t happen on its own. Rather, you have to create all the individual instance VIs that handle the various datatypes. For example, I often want to know if a value at a specific point in the code has changed from the last time this bit of code executed. So I created a polymorphic VI that performs this function. To create this subVI, I had to write variations of the same basic logic for about a half-dozen or so basic datatypes, as well as a version that used the variant datatype to catch everything else.

Dynamic dispatch (which is actually a form of polymorphism) works much the same way, but with a couple significant differences.

  • When the decision is made as to which instance VI is to be executed
    With conventional polymorphism, the decision of which instance VI to call happens as you wire in the subVI. In the case of my polymorphic subVI, as soon as I wire a U32 to the input, LabVIEW automatically selects the U32 version of the code. However, with dynamic dispatch, that decision gets put off until runtime with LabVIEW making the decision based on the datatype present on the wire as the subVI is called. Of course for that to work, you need a different kind of wire. Which brings us to the other point…

  • The criteria for choosing between VIs
    The wires that conventional polymorphism uses to select a VI all have one thing in common — they are all static datatypes. By that I mean that a wire is a U32, or a string or whatever and it can’t change on the fly. By contrast, with dynamic dispatch, the basis for selection is a wire that is an instance of a class, and the datatype of an object can be dynamic. However this variability is not infinite. A given class wire can’t hold just any object because class structure is also hierarchical.

Say you have a class named Geometric Shapes to Draw. You can define other classes (called subclasses) like Circle or Square that are interpreted by LabVIEW as being more specific instances of Geometric Shapes to Draw objects. Due to this hierarchical relationship, a given wire can be typed as a Geometric Shapes to Draw but at runtime really be carrying a Circle or Square. As a result, a dynamic dispatch VI can call different instance VIs based on the datatype at runtime.

However, one big thing that conventional polymorphism does have in common with dynamic dispatch, is that the power doesn’t come for free. You still have to write the method VIs for dynamic dispatch to call.
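
For those who find running code easier to absorb than definitions, here is the Geometric Shapes example reduced to a few lines of Python. The class names follow the example in the text; everything else is illustrative.

class GeometricShapeToDraw:
    def draw(self):                 # the dynamic-dispatch method the parent declares
        raise NotImplementedError

class Circle(GeometricShapeToDraw):
    def draw(self):
        print("drawing a circle")

class Square(GeometricShapeToDraw):
    def draw(self):
        print("drawing a square")

# Both objects travel on a wire typed as the parent class, but at run time the
# object actually present decides which draw() implementation gets called.
for shape in (Circle(), Square()):
    shape.draw()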

Inheritance

Remember a moment ago I referred to class datatypes as being hierarchical? The fancy computer science concept governing the use of hierarchical class structures is called inheritance. The point of this label is to drive home the idea that not only are subclasses logically related to the classes above them in the hierarchy, but these so-called child classes also have access to the properties and methods contained in their parent classes. In other words they can “inherit” or use data and capabilities that belong to their parents.

Handled properly, inheritance can significantly reduce the amount of code that you have to write. Handled poorly, inheritance can turn an otherwise promising project into a veritable train wreck. Which brings up our last point…

Proper Organization

Although organization isn’t really a feature of object-oriented programming, it is nevertheless critical. The simple fact of the matter is that while a disorganized, undisciplined developer might be able to get by when working in conventional LabVIEW, introducing the explicit use of classes can result in utter chaos. The real object-oriented failures that I have seen over the years all shared one thing: a lack of organization, or at best inconsistent organization.

So what sort of organizational things am I talking about? Well it’s a lot of the same stuff that we have talked about before. For a more general discussion of the topic you can check out a post that I wrote very early on titled, Conventional Wisdom. What I want to do right now is highlight some of the points that are particularly important for object-oriented work.

The two main conventions (directory structure and file naming) go together because the point of one is to mirror the other. But rather than simply list some rules, I’ll demonstrate how this works. To start, I will create a directory that is named for the class hierarchy that I will build inside it. So if the point of this class hierarchy is, for example, to update my program’s user interface, I would call the directory something obvious like GUI Update. Inside this directory I would then create the top-level class with the file name GUI Update.lvclass. At this time I will also create a couple of subdirectories (_subVIs and _typedefs) that I know I will be needing. Finally, I have learned over the years that being able to tightly control access to VIs is very important, so I will also create a project library named GUI Update.lvlib and put into it the top-level class and a virtual folder called _subclasses with its access scope set to Private.

So the parent class is set up, but what about the subclasses? I simply repeat the pattern. Let’s say the GUI Update class has subclasses for three types of controls that it will need to update: Boolean, Digital and Cluster. I create subdirectories in the parent directory that are named for the subclass that will go into each, and hierarchically name the three subclasses GUI Update_Boolean.lvclass, GUI Update_Digital.lvclass, and GUI Update_Cluster.lvclass. I am also careful to remember to add the subclass files to the _subclasses virtual folder in the library, edit their icon overlays, and set their inheritance correctly — which is to say, identify their parents. Note that while the hierarchical naming structure doesn’t automatically establish correct inheritance, this convention does make it easier to visualize class relationships in the project file.

And so I go building each layer in my class hierarchy. With each new subclass I continue the same pattern, so if I eventually want to find, say, a subVI associated with the class GUI Update_Digital_Unsigned Word.lvclass, I know I will find it in the directory ../GUI Update/Digital/Unsigned Word/_subVIs.
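
Putting the naming and directory conventions together, the on-disk layout for this (purely illustrative) GUI Update hierarchy ends up looking something like this:

    GUI Update/
        GUI Update.lvclass
        GUI Update.lvlib
        _subVIs/
        _typedefs/
        Boolean/
            GUI Update_Boolean.lvclass
            _subVIs/
            _typedefs/
        Digital/
            GUI Update_Digital.lvclass
            _subVIs/
            _typedefs/
            Unsigned Word/
                GUI Update_Digital_Unsigned Word.lvclass
                _subVIs/
        Cluster/
            GUI Update_Cluster.lvclass
            _subVIs/
            _typedefs/

The file name tells you the ancestry, and the path to the file tells you exactly the same story.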

Having a pattern to which you stick relentlessly — even one as simple as this one — will save you immeasurable amounts of time.

Creating the Blueprint

The next thing I do when creating a class hierarchy (but the last thing I want to talk about right now) is to define how the rest of the application will interface with my new GUI Update class. This is where the access scope we have been so careful to create comes into play. In the top-level class I always create a group of VIs that have their access scope set to public. These interface VIs form the totality of the external interface to the class hierarchy, and so include the functions that define what the application as a whole needs GUI Update to do for it. The logical implications of this interface layer are why I sometimes call this step in the process “Creating the Blueprint”.

In addition to providing a very clean interface, this “blueprint” has another advantage: if you ever need to expand your stable of subclasses, these interface VIs serve as a list of functions you need to support in the new subclass — or at least a list of functions that you should consider implementing. To see what I mean, consider that the scenario we have been discussing is actually drawn from an application I created once. The list of public interface VIs was really very short: There was a method that read a value from a remote device and wrote it to the GUI object, one that looked for control value changes to write them to the remote device, and one that allowed the calling application to set control-specific properties.

Of these, all GUI objects had to implement the first one because even the controls needed to be updated once a second. The reason for this constraint was that the remote device could also be reconfigured from a local interface and the LabVIEW application needed to keep itself up to date. However, the second interface method was only applicable to controls. Finally, the third interface method was implemented very rarely for the few subclasses that needed it.
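
As a hedged Python analogy of that blueprint (the class and method names are my own inventions, chosen to match the three interface VIs just described): every subclass must implement the device-to-GUI update, only controls override the change check, and the property setter has a do-nothing default that most subclasses never touch.

    from abc import ABC, abstractmethod
    from typing import Any, Optional

    class GUIUpdate(ABC):
        """Stand-in for the top-level class's public interface VIs."""

        @abstractmethod
        def update_from_device(self, new_value: Any) -> None:
            """Write a value read from the remote device to the GUI object.
            Every subclass has to implement this."""

        def check_for_user_change(self) -> Optional[Any]:
            """Look for a control value change to send back to the device.
            Only meaningful for controls; indicators keep this default."""
            return None

        def set_properties(self, **properties: Any) -> None:
            """Set control-specific properties; rarely overridden."""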

What’s up next?

We have just about run out of space for this installment, but you may have noticed that something is missing from this post: Any actual LabVIEW code. Next time we will correct that sad situation by considering how to apply these principles to the creation of a class hierarchy that provides a common mechanism for storing and retrieving program data and setup parameters that works the same (from the application’s perspective at least) regardless of whether the program is interfacing with a database or text files.

Until Next Time…

Mike…

Raising the Bar on Dynamic Linking Even Further

Important: Before we get started this week, if any of you have downloaded the code from last week and have had problems with it, please update your working copy with what is currently in SVN. In working on this post, I found some “issues” — including the significant problem that the database that defines everything didn’t get included in the repository release. The current contents of the repository should fix the problems, and I am sorry for any hair loss and/or gnashing of teeth these problems might have caused.

Also, be sure that if you plan to run this code in the development environment, you name the directory where it is located “testbed”.

These days, the most common way of implementing the dynamic calling of VIs in LabVIEW is through the Start Asynchronous Call node. In addition to being efficient, this technique is very convenient in terms of passing data to the VI being called: because it replicates the VI’s connector pane on the node, all you have to do is wire to terminals. However, this convenience comes at a price. For this type of call to work, you have to know ahead of time what the connector pane of the VI being called looks like. This constraint can be a problem because it is not uncommon to want to dynamically call code whose connector pane varies, is irrelevant (because no data is being passed), or is simply unknown. As you should by now be coming to expect, LabVIEW has you covered for those situations as well.

Where There’s a Will, There’s a Way Method

Over the years as LabVIEW developed as a language, its inherent object orientation began to become more obvious. For example, when VI Server was introduced it provided a very structured way of interacting with various objects within LabVIEW, as well as LabVIEW itself. With VI Server you can control where things appear on the front panel, how they look and even how LabVIEW itself operates. Although it didn’t reach its full expression until the Scripting API was released, the potential even existed to create LabVIEW code that wrote LabVIEW code.

We won’t be needing anything that complex, however, to accomplish our goal of dynamically launching a VI where we don’t have advance knowledge of its connector pane. In fact the part of VI Server that we will be looking at here is one of the oldest and most stable — the VI object interface. Just as you can get references to front panel controls, indicators and decorations, you can also get references to VIs as a whole.

Like control references, VI references come in two basic forms: strict and non-strict. To recap, a strict control reference contains added information that allows it to represent a particular instance of the given type of control. For example, a strict cluster control reference knows the structure, or datatype, of a particular cluster. By contrast, a non-strict cluster reference knows the control is a cluster, but can’t directly tell you what the various items are that make up the cluster.

In the same way, strict VI references, like the ones we have been using to dynamically launch VIs, know a great deal about a specific class of VI, including the structure of its connector pane. This is why the Start Asynchronous Call node can show the connector pane of the target VI. However, as stated earlier, this nice feature only works with VIs whose connector panes exactly match the prototype. As you might suspect, the solution to this problem is to use a non-strict VI reference, but that means we need to change our approach to dynamic launching a bit. Instead of using a special node, we’ll use standard VI Server methods to interact with and run VIs.
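
If it helps to see the distinction in text-language terms, here is a rough Python typing analogy (not LabVIEW code, and purely illustrative): a strict reference is like a fully specified callable type, while a non-strict reference only promises that there is something callable on the other end.

    from typing import Any, Callable

    # "Strict" reference: the complete calling interface (think connector pane)
    # is part of the type, so every argument can be checked up front.
    StrictRef = Callable[[str, int], bool]

    # "Non-strict" reference: we know it can be called, but what inputs it
    # takes (if any) is not part of the type.
    NonStrictRef = Callable[..., Any]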

Mix and Match

To see how this discussion applies to our testbed code base, consider that to this point we have used a single technique to launch all the processes associated with the application. Of course, making that approach work required one teeny tiny hack. Remember when we added to the data source processes an input that tells them “who they are”? Well, that modification necessitated a change to the VIs’ connector panes, and because we were launching all the processes the same way, I had to make the same change to all the processes — even those that didn’t need the added input, like the GUI and the exception handler.

So big deal, right? It was only one control, and it only affected 2 VIs. Well maybe in this case it isn’t a huge issue, but what if it weren’t 2 VIs that needed to be changed, but 5 or 6? Or what if all the various processes needed different things to allow them to initialize themselves? Now we have a problem.

The first step to address this situation was actually taken some time ago when the launcher was designed to support more than one launch methodology. You’ll remember that last week, when creating the dynamically launched clones, we didn’t have to modify the launcher because it was written from day one to support reentrant VIs. What we have to do now is expand on this existing ability to mix and match VIs with launch methodologies to include two new options in the Process Type.ctl typedef. Here’s what the code for the first addition looks like:

run method - nonreentrant

As before, we start by opening a reference to the target VI, but this time it’s a non-strict reference. Next, we invoke the Run VI method, which has two inputs. The first input specifies whether we want to wait for the target VI to finish executing before continuing, and we set it to false. The second input is the somewhat obscurely named Auto Dispose Ref, which specifies what to do with the VI reference after the VI is launched. In its default state (false) the calling VI retains the VI reference and is responsible for closing it to remove the target VI from memory. Moreover, if the calling VI retains the reference to the dynamic VI, then when the caller quits, the dynamic VI is also aborted and removed from memory. On the other hand, when this input is set to true, the reference is handed off to the target VI, so the reference doesn’t close until the dynamic VI quits — which is what we want.

The other new launch option is like the first, except that it wires in the option constant that tells Open VI Reference to open a reference to a reentrant clone. Other than that, it works exactly the same.

run method - reentrant

So with these two new launch methodologies created, all we have to do is change the database configuration for the GUI and Exception Handler processes to use the nonreentrant version of the Run VI method, and we are done, right? Well, not quite…

One of the “quirks” of the Run VI method is that although it does start a VI executing, if that VI is configured to open its front panel when run (like our GUI is), the open operation never gets triggered and the front panel stays closed. The result is that the VI will be in memory and running; you just won’t be able to see it.

To compensate for this effect (and the corresponding effect that the front panel won’t automatically close when the VI finishes), we need to add to the GUI a couple of VIs from the toolbox that manage the opening and closing of the GUI’s front panel.

open front panel

That’s the opener there, the last one in line after all the initialization code. This placement is important because it means that nearly all the interface initialization will be completed before the front panel opens. The result is much more professional looking. By the way, this improved appearance is why I rarely use the option to automatically open a VI’s front panel when it is run.

close the front panel

And here is the closer. The input parameter forces the front panel closed in the runtime engine, but allows it to stay open during development — a helpful feature if there has been an error.

Where do we go from here?

So those are the basics of this technique, but there is one more point that needs to be covered. Earlier I talked about flexibility in passing data, so how do you pass data with this API? Well, we ran the VI using a method, so as you would expect, there are other methods that allow you to read or set the values of front panel controls. This is what the interface to the Control Value Set method looks like.

the set control value method

It has two input parameters: a string that is the label of the control you want to manipulate, and a variant that accepts the control’s new data value. Note that because LabVIEW has no way of knowing a priori what the datatype should be, you can get a runtime error here if you pass an incorrect datatype. Obviously, using this method your code can only set one control value at a time, so unless you have only 1 or 2 controls that you know you will need to set, this method will often end up inside a loop like so:

set control value in a loop

…but this brings up an interesting, and perhaps exciting, idea. Where can we get that array of control name and value pairs? Would it not be a simple process to create tables in our database to hold this information? And having done that, would you not have created a system that is supremely (yet simply) reconfigurable? This technique also works well with processes that don’t need any input parameters to be set. The loop for configuring control values passes the VI reference and error cluster through on shift registers and auto-indexes on the array of control name/value pairs. Consequently, if a given VI has no input parameters, the array will be empty and the loop will execute 0 times — effectively bypassing the loop with no added logic needed. By the way, this is an important principle to bear in mind at all times: whenever possible, avoid “special cases” that have to be managed by case structures or other artificial constructs.
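
Here is a minimal Python sketch of that pattern (the vi object and its set_control_value call are hypothetical stand-ins for the Control Value Set method, and the pairs could come straight from a database table): one (name, value) pair per iteration, and if the target needs no inputs the list is empty and the loop simply runs zero times.

    from typing import Any, Iterable, Tuple

    def apply_control_values(vi: Any, pairs: Iterable[Tuple[str, Any]]) -> None:
        """Set each named control to its new value, one pair per iteration.
        An empty 'pairs' collection means zero iterations -- no special case."""
        for name, value in pairs:
            vi.set_control_value(name, value)   # hypothetical per-control setter

    # For example, pairs pulled from a configuration table might look like:
    # [("Process Name", "Data Source"), ("Sample Period (ms)", 1000)]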

More to Come

Over two consecutive posts, we have now covered the major use cases for dynamically linking VIs that will run as separate processes. But there is another large use case that we will look at the next time we get together: How do you dynamically link code that isn’t a separate process, but logically is a subVI?

Testbed Application — Release 12

Toolbox — Release 6

Until next time…

Mike…