If the socket fits, wear it…

One of this blog’s recurring themes is the importance of modularity as an expression of the age-old tactic of “divide and conquer”. What is perhaps new (or at least daunting) to some readers is the idea of spreading tasks across not just separate processes on the same computer, but across multiple networked computers. Of course, if this strategy is to be successful, the key is communications, and to that end we have been examining ways of incorporating remote access capabilities into our testbed application.

Last time out, we implemented the first interface for remote applications to monitor and control our application. That interface took the form of a custom TCP protocol that used packets of JSON data to carry messages over a vanilla TCP connection. I started there because it provides a simplified mechanism for exploring some of the issues concerning basic code structure. Although this interface worked well, and in fact would prove adequate for a wide variety of applications, it did exhibit one big issue. To wit, clients had to be written in a specific way in order to use it. This fact is a problem for many applications because users are growing increasingly reluctant to install special software. They want to know why they should have to load special code to do a job. The way they see it, their PCs (and cell phones for that matter) come with a bunch of networking software preloaded on them – and they have a valid point! Why should they have to install something new?

A complete answer to that question is far beyond the scope of this post, but we can spend a few useful moments considering one small niche of the overall problem, and a standardized solution to that problem. Specifically, how can we leverage some of those networking tools (read: browsers) to support remote access to our testbed application? As we have discussed before, the web environment provides ample tools for creating some really nice interfaces. The real sticking point is how that “really nice” interface can communicate with the testbed application. You may recall that a while back we considered one technique that I characterized as a “drop box” solution. The idea was to take advantage of the database underlying a web application by using it to mediate the communications. In other words, the LabVIEW application writes new data to the database and the web application reads and displays the data from the database – hence the “drop box” appellation.

While we might be able to force-fit this approach into providing a control capability, it would introduce a couple of big problems: First, it would mean that the local application would have to be constantly polling a remote database to see if there have been any changes. Second, it would be really, Really, REALLY slow. We need something faster. We need something more interactive. We need WebSockets.

What are WebSockets?

Simply put, the name WebSockets refers to a message-based protocol that was standardized in 2011 as RFC 6455. The protocol that the standard defines is low-overhead, full-duplex and content agnostic, meaning that it can carry data of any type – even JSON-encoded text data (hint, hint).

An interesting aspect of this protocol is that its default port for establishing a connection is port 80 – the same as the default port for HTTP. While this built-in conflict might be confusing, it actually makes sense. You see, when a client initiates an HTTP connection, the first thing it does is pass the server a number of headers that provide information on the requested connection. One of those headers allows the client to request an upgraded connection. The original purpose of this header was to allow the client to request an upgraded connection with, for example, enhanced security. However, in recent years it has become a mechanism for allowing multiple protocols to listen on the same port.

The way the process works is simple: The client initiates a normal HTTP connection to the server but sets the request headers to indicate that it is requesting a specific non-HTTP protocol. In the case of a request for the WebSockets protocol, the upgrade value is websocket. The server responds to this request with a return code of 101 (Switching Protocols). From that point on, all further communications are made using the WebSockets protocol. It is important to note that while this initial handshake leads some to assume that WebSockets is in some way dependent upon, or rides on top of, the HTTP protocol, such is not the case. Aside from the initial connection handshake, the WebSockets protocol is a distinct process that shares nothing with HTTP. Consequently, while the most common application of the technique might be web-based client-server operation, the WebSockets protocol is equally well-suited for peer-to-peer messaging. The only limitation is that one of the two peers needs to be able to respond correctly to the initial handshake.
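To make that exchange concrete, here is what it looks like on the wire. The request path is whatever URI the client used (the /testbed shown here is just a placeholder), and the key and accept values are the example values given in RFC 6455 itself – a real client generates a random key for each connection:

GET /testbed HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=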

It is also worth understanding why the basic idea of using port 80 for the initial connection is significant. A conversation on Stack Overflow gives a pretty good explanation of several issues, but for me the major advantage of using port 80 is that it avoids IT-induced complications. Many corporate IT departments will lock down ports that they don’t recognize. While there are some that try to lock down port 80, that is much less common. Before continuing on, if you’re interested, you can also find the details of the initial handshake here.

The LabVIEW Connection

Ok, so it sounds like WebSockets could definitely have a place in our communications toolbox, but how are we going to take advantage of it from LabVIEW? The answer to that question lies in the work of LabVIEW CLA Sam Sharp. He has developed a set of “pure G” VIs that allow you to implement either side of the connection. Because these are written in nothing but G, there are no DLLs involved, so they can run equally well on any supported LabVIEW platform. Making the deal even sweeter, he has documented his code, created a tutorial on them, released his VIs for anyone to use, and all the compensation he requests is “…it would be great if you credit me…”. So, Sam, may you have a million click-throughs.

The following discussion is written assuming Sam’s VIs, which I have converted to LabVIEW 2015. One quick note: if you don’t or can’t use the VIPM, you can still use the *.vip file – all you have to do is change the “v” to a “z” (turning it into a *.zip archive) and you are good to go. As a first taste of how these VIs work, let’s look (like we did with the TCP example last time) at an over-simplified example to get a sense of the overall logical flow.

The Simplest WebSockets Server

For our purposes here, the testbed application will be the “server” so our code starts by listening for a connection attempt on the default Port 80. When it receives a connection, a reference to that connection is passed to a VI (DoHandshake.vi) that implements the initial handshake to activate the WebSockets protocol. Note that a key part of this process is the passing of a couple of “magic strings” between the client and server to validate the connection and protocol selection.
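If you’re curious what those “magic strings” involve: the client sends a random base-64 key in its Sec-WebSocket-Key header, and the server proves it understood the request by appending a fixed GUID, hashing the result with SHA-1, and returning it base-64 encoded. Here is that computation sketched in JavaScript (a hypothetical Node.js helper – the testbed does this work in G inside DoHandshake.vi):

// Compute the Sec-WebSocket-Accept value from the client's key
var crypto = require('crypto');
var WS_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';  // fixed GUID from RFC 6455

function acceptKey(clientKey) {
  return crypto.createHash('sha1')
               .update(clientKey + WS_GUID)   // append the GUID to the key...
               .digest('base64');             // ...SHA-1 hash, base-64 encode
}

console.log(acceptKey('dGhlIHNhbXBsZSBub25jZQ=='));
// -> 's3pPLMBiTxaQ9kYGzzhZRbK+xOo=' (the example values from RFC 6455)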

With the handshake completed and both ends of the connection satisfied that the WebSockets protocol is going to be used, the following subVI (Read.vi) reads a data packet from the client that, in our application, represents a data or control request. Next comes the subVI (Write.vi) that writes a response back to the client. Finally the code calls a subVI (Close.vi) that sends a WebSockets command to close the connection, and then closes the TCP connection reference that LabVIEW uses.

Building the Interface

To build this bare logic into something usable, the structure of the server task is essentially identical to that of the TCP process we built last time. In fact, the only differences between the two are the ports to which they listen, and the specific reentrant handlers that they launch in response to a TCP connection. So let’s concentrate on that alternate process. During initialization, the handler calls the subVI that implements the initial handshake.

Handler Initialization

In addition to the connection reference, this routine also outputs a string that is the URI that was used to establish the connection. Although we don’t need it for our application, it could be used to pass additional information to the server. Once initialization is complete, the main event loop starts, but unlike the TCP handler we wrote earlier, it is not based around a state-machine structure.

Main Event Loop

While we could have broken up the process into separate states, the fact that Sam has provided excellent subVIs implementing the read and write functionality makes such a structure feel a bit contrived – or at least it does to me. When the timeout event fires, the code waits 500 msec for the first user data coming from the connection. If the read times out, the loop waits another 500 msec and then tries again. This polling technique is important because it allows other things (like the system shutdown event) to interrupt the process. Likewise, because we are waiting for a response that is, at least potentially, coming from a remote computer, the polling allows us to wait as long as necessary for the response.

When the request data does arrive, the JSON data string is processed by a pair of subVIs that we originally created for the TCP protocol handler. They create the appropriate Remote Access Commands object and pass it on to the dynamic dispatch VI (Process Command.vi) that executes the command and returns the response. The response data is next flattened to a JSON string and written to the connection. Because the current implementation assumes a single request/response cycle per connection, the code closes the WebSockets connection and the TCP connection reference. However, it would be easy to visualize a structure that would not close the connection, but rather repeat one of the data read commands at a timed interval to create a remote “live” interface.

In terms of the errors that can occur during this process, the code has to correctly respond to two specific error codes. First is error code 56, a built-in LabVIEW error that flags a network operation timeout. Because this is the error that is generated if the server hasn’t yet received the client’s request, the code basically ignores it. Second is error code 6066, a WebSockets-specific error that the library uses to flag the situation where the remote client closes a WebSockets connection (the closing handshake itself is defined in RFC 6455). Our code responds by closing the TCP connection reference and stopping the loop.

Testing our Work

Now that we have our new server up and running, we need to be able to test its operation. However, rather than creating another LabVIEW application to act as the test platform, I built it into a web application. The interface consists of a main screen that provides a pop-up menu for selecting what you want to do, and 5 other screens, each of which focuses on a specific control action. As these things go, I guess it’s not a great web application, but it is serviceable enough for our purposes. If you need a great application, talk to Sam Sharp – that’s what his company does.

The HTML and CSS

As I have preached many times before, one of the things that makes web development powerful is the strict “division of labor” between its various components: the HTML defines the content, the CSS specifies how the content should look, JavaScript implements client-side interactivity, and a variety of languages (including JavaScript!) provide server-side programmability. So let’s start with a quick look at the HTML that defines my web interface, and the CSS that makes it look good in spite of me… In order to provide some context for the following discussion, here is what the main screen looks like:

Main Screen

It has a title, a header and a pop-up menu from which you can select what you want to do. As a demonstration of the effect that CSS can have, here’s the part of the HTML that creates the pop-up menu.

<button class="btn btn-default dropdown-toggle" type="button" data-toggle="dropdown">Available Actions<span class="caret"></span></button>
<ul class="dropdown-menu">
  <li><a href="ReadGraphData.html">Read Graph Data</a></li>
  <li><a href="ReadGraphImage.html">Read Graph Image</a></li>
  <li class="divider"></li>
  <li><a href="SetAcquisitionRate.html">Set Acquisition Rate</a></li>
  <li><a href="SetDataBufferDepth.html">Set Data Buffer Depth</a></li>
  <li><a href="SetTCParameters.html">Set TC Parameters</a></li>
</ul>

You’ll notice that the pop-up menu is constructed from two separate elements: a button and an unordered list – normally a set of bullet points – where each item in the list is defined as an anchor with a link to one of the other pages. However, as the picture shows, when this code runs we don’t see a button and a set of bullet points, we see one pop-up menu. How can this be? The magic lies in CSS that dramatically changes the appearance of these elements to give them the look of a menu. Likewise, some custom JavaScript makes the visually transformed elements behave like a menu. What is very cool, however, is that the resources making this transformation possible are part of a standard package, called Twitter Bootstrap, that is free for anyone to use. In a similar vein, let’s look at the page that displays a plot of data acquired from the testbed application:

Graph Screen - Blank

At the top of the screen there’s a small form where the user enters information defining the task to be performed, and a button to initiate the operation that the user is requesting. Below that form is a blank area where the software will draw the graph of the acquired data. Let’s look at two specific bits of HTML, first the code that builds the data entry form…

<form>
  <fieldset class="input-box">
    <legend>View Graph Data</legend>
    <input type="text" class="str-input" id="ipAddr" value="localhost"> <label for="ipAddr">Host</label><br>
    <input type="number" class="num-input" id="portNum" value="80"> <label for="portNum">Port Number</label><br>
    <select id="targetPlugin">
      <option value="Sine Source">Sine Source</option>
      <option value="Ramp Source">Ramp Source</option>
      <option value="Hen House TC">Hen House TC</option>
      <option value="Dog House TC">Dog House TC</option>
      <option value="Out House TC">Out House TC</option>
    </select> <label for="targetPlugin">Select Target for Action</label><br>
    <input type="button" id="just-submit-button" value="Send Command">
  </fieldset>
</form>

…and now the code that defines the graph:

<div id="container" style="min-width: 310px; height: 400px; margin: 0 auto"></div>

But, something seems to be missing. The first snippet will create data-entry fields and a button, but what happens when the button is clicked? Apparently, nothing. Likewise, consider the graphing element. We can see how large the area is to be, but where is the data coming from? And where are the graphing operations? To answer those questions, we need to look elsewhere.

The JavaScript

The power behind much of the web in general – and our application in particular – is the interpreted language JavaScript. In addition to being able to communicate over the network, JavaScript can interact directly with web pages and their underlying structures. For folks who like to split hairs, JavaScript is “object-based” because it does support the concept of objects, but it is not “object-oriented” because it doesn’t explicitly support classes (its inheritance is prototype-based).

More important for what we are going to be doing is that it supports the concept of “callbacks” (read: User Defined Events). In other words, you can tell JavaScript to automatically perform functions when certain events occur. For example, because our JavaScript code is going to be interacting with the web page that loaded it, we need to be sure that the page is fully loaded before that code starts running. In order to accomplish that goal, the JavaScript file associated with the page includes this structure:

$(window).load(function() {   // jQuery: run this once the page has fully loaded
	...  // a lot of stuff goes here
});

This code creates a callback for the .load() event. The parameter passed to .load() is a reference to the function that JavaScript will run when the event fires. As is common in JavaScript, the code declares the function inline, so everything between the opening and closing curly brackets will be executed when the event fires. So after declaring a few variables, the code includes this:

$("#just-submit-button").click(function(){
  //The code here retrieves all of the input data and formats the request.
  target = $("#targetPlugin").val();
  remAddr = $("#ipAddr").val();
  remPort = $("#portNum").val();
  jsonData = '\"Read Graph Data\":' + JSON.stringify({"Target":target}); 

  // the websocket logic
  wc_connect(remAddr, remPort, parseData);
  wc_send(jsonData);
});

So the first thing the code does when the page finishes loading is register another callback, but this one defines what JavaScript will do when the user clicks the button in the form. The first three lines read the values of the form data entry fields, and the fourth assembles that data into the JSON string that will be sent to the server. The last two lines are the interface to the WebSockets logic. The first of these lines establishes the connection to the server, while the other one sends the command. But what about the response? Shouldn’t there be a line with a command like wc_receive? You really should be expecting this by now: Inside the wc_connect command the code registers another callback to handle the response.
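To make that registration visible, here is a sketch of what the connection logic inside websockets.js might look like, written against the browser’s standard WebSocket API. To be clear, the function bodies below are my illustration of the pattern, not the actual contents of that file:

// Hypothetical sketch of wc_connect/wc_send using the built-in WebSocket API
var ws;                  // the connection reference
var sendQueue = [];      // messages queued until the socket finishes opening

function wc_connect(remAddr, remPort, dataParser) {
  ws = new WebSocket('ws://' + remAddr + ':' + remPort);
  ws.onopen = function() {             // flush anything queued before open
    sendQueue.forEach(function(msg) { ws.send(msg); });
    sendQueue = [];
  };
  ws.onmessage = function(event) {     // the "another callback"
    dataParser(event.data);            // hand the raw response to the page's parser
  };
}

function wc_send(msg) {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(msg);
  } else {
    sendQueue.push(msg);               // still connecting; onopen will send it
  }
}

The queueing detail matters because, as the snippet above shows, wc_send() is called immediately after wc_connect(), before the connection has had time to finish opening.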

The event (called onmessage) that is tied to this callback fires when a message is received from the server. The code implementing the callback resides in the file websockets.js (in case you’re curious) and its job is to read the JSON response data packet, check for errors, parse the data and generate the output – the graph. The only question now is, “How does it know how to parse the data and generate the graph?” And the answer is (all together now): “There’s another callback!” See the third parameter of wc_connect, the one named parseData? That value is actually a reference to a function contained in the JavaScript code for this particular page, and is an example of how JavaScript implements a “plugin architecture”. So here is how the data parser for this page starts…

var parseData = function(rawData){
  var plotData = JSON.parse(rawData);
  // trim decimal places
  plotData.forEach(function(element, index, array){
    plotData[index] =  Number(element.toFixed(3));
  });

At this point in the process, the data portion of the response is still a string, so to make processing the data easier, we first parse it to convert it into a JavaScript object – in this case, an array of numbers. Really long numbers. You see, when LabVIEW encodes a number as JSON it includes far more digits of precision than are really needed, so forEach element in the array, I convert the value to a number with 3 decimal places. Here’s the rest of the code:

  // logic for drawing the graph
  $('#container').highcharts({
    title: { text: 'Recent Data', align: 'center' },
    subtitle: { text: 'System: '+remAddr+':'+remPort, align: 'center' },
    xAxis: { title: { text: 'Samples' }, tickInterval: 1 },
    yAxis: { title: { text: 'Amplitude' }, gridLineColor: "#D8D8D8" },
    tooltip: { headerFormat: '<small>Sample: {point.key}</small><br>' },
    series: [{ turboThreshold: 0, name: target, data: plotData, lineWidth: 1, marker:{enabled: false}, color: '#000000' }]
  });
}

This is the code that does the plotting, and as we shall see in a moment, this small amount of code produces a beautiful and highly functional chart that displays the values of individual points in a tooltip when you hover over them with the mouse, and even provides a pop-up menu that allows you to save the plot image in a variety of image formats. This functionality is possible thanks to a plotting library called Highcharts that uses the structure defined in the HTML as a placeholder for what it is going to draw. I have used this library before in demonstrations because in my experience it is stable, easy to use, and very well-documented. I also like the fact that regardless of what kind of plot I am trying to create, they have a demo online that gets me about 95% of the way to my goal. Please note that this library is a commercial product, but they make it available for free for “non-Commercial” applications – and even for commercial usage, the one-time license fee is really pretty reasonable. Finally, even though it doesn’t appear that they actively police their licensing with things like crippled versions or the like, if you are using this on a professional project, pay the people. They have certainly earned their bread.

Testing the Pages

So at last we have our server in place and some test web pages (and supporting code) created. We need to consider how to run the web client. Here you have three options: First, you could just double-click the top-level file in Windows Explorer and Windows will dutifully open the file in your browser and everything will work as it should. Second, if you have access to an existing web server, you can copy the dozen or so files to it and test from there. Third, you could create a small temporary server strictly for testing. If you choose that path, a good option is a server called Express.js. As its name implies, it is written in JavaScript, which means it runs under the Node.js execution engine. You can set one up sufficient to test our current code in about 10 minutes – including the time required to download the code.
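For reference, a server sufficient for these tests can be this small (a sketch assuming Node.js is installed and the express package has been added with npm install express; the port number is arbitrary):

// serve.js - minimal static server for the test pages
var express = require('express');
var app = express();

app.use(express.static(__dirname));   // serve every file in this folder as-is

app.listen(8080, function() {
  console.log('Test server running at http://localhost:8080');
});

Drop the web client files into the same folder as serve.js, run node serve.js, and point your browser at the top-level page.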

The overall test process is similar to what we did to test the custom TCP server last time. The only significant change is the interface. First, test things that should work to make sure they do. Second, test the things that shouldn’t work and make sure they don’t. Here are examples of what you can expect to see on the graphing and image-fetch screens:

Graph Screen

Image Screen

Testbed App – Release 20
Toolbox – Release 17
WebSockets Client – Release 1

Big Tease

So what’s next? We have looked at access via a custom TCP interface and the standard WebSockets interface. How about next time we look at how to embed this connectivity in a C++ program using a DLL?

Until Next Time…
Mike…

Tree-Control Menus: A Case Study in Data Management

The last time we were together, we discussed the first of two common use cases for tree controls: displaying tabular data. This time out, we are going to look at the other major use case: using tree controls as a sort of menu system to control an application’s operation – or at least its GUI.

The Problem We’re Solving

If you look at the testbed application that we have been working on for almost a year, it’s pretty clear that much of the work has been going on “behind the scenes” and not in the GUI. Oh, it is nicely modularized thanks to a structure built around a subpanel interface, but the actual controls are really pretty bare-bones. A good example of the utilitarian, but unsophisticated, structure is the use of a simple pop-up menu to select the screen to view. Right now it works pretty well because there is only a handful of plugin screens from which to choose. However, it doesn’t take much imagination to visualize the mess that would result if there were a dozen, or even hundreds, of screens available. We need better organization.

Fixing the Data

The biggest conceptual difference between our current goal and the one we worked on last time is that in our earlier discussion we were displaying data that already existed outside the application. In other words, my disk has directories that contain files and other directories whether or not I choose to create a program that can read and display the directory’s contents. By contrast, the data we are going to be displaying now only exists within the context of our program, or perhaps within the context of our test environment as a whole. One big consequence of this fact is that we have a lot more freedom to define the data’s structure and presentation.

For instance, when our testbed runs right now, there are two “acquisition” processes and three “temperature controllers”. Let’s say, for the sake of argument, that the controller functions are dispersed geographically, and what we see on the local interface is status data from three remote processes. In such a situation, we can observe that there is no “correct” way of viewing that overall structure. Depending upon who the user is and what they need to do there are (at least) two ways that these systems could be organized.

One user might want to see a top-level breakdown that groups systems based on the function they perform. With this approach to organization, you would have sections for “Data Sources” and “Temperature Controllers”. The individual screens would then be grouped under one or the other of those headings:

By Function

Alternatively, a different user might want to see the network resources grouped primarily by each system’s geographical location, with the functions for each site then grouped together like so:

By Location

However, as I said before, neither view is any more “correct” than the other. Therefore, we need to be able to support either one – and any other structure that our customers request, as well. Although this level of flexibility might seem a tall order, the truth of the matter is that the tree control’s basic operation is very simple, so all we are really talking about is a matter of data management. Moreover, we already have in our hands the tools we need to accomplish the job. I am talking, of course, about our database.

Creating the Data Management Structures

So in defining our data structures, we can start with what we already know: The user needs to be able to select basic menu structures by changing a single value. From this requirement it’s obvious that we’re going to need a table to identify the menu’s basic context. We will then use the values stored in that table to qualify the menu item groupings. Here is the definition for this table, and the three records we are going to insert into it:

CREATE TABLE menu_context (
  id     AUTOINCREMENT PRIMARY KEY,
  label  TEXT(100),
  CONSTRAINT contextlabel_uc UNIQUE(label)
  )
;

INSERT INTO menu_context (id, label) VALUES (0,'NULL');
INSERT INTO menu_context (label) VALUES ('By Function');
INSERT INTO menu_context (label) VALUES ('By Location');

The second table we need to define will hold the records that describe the actual menu entries. Each record defines one line of the tree control’s contents.

CREATE TABLE menu_group (
  id           AUTOINCREMENT PRIMARY KEY,
  context_id   INTEGER NOT NULL,
  item_name    TEXT(100),
  parent_id    INTEGER NOT NULL,
  sort_order   INTEGER,
  CONSTRAINT context_group_fk FOREIGN KEY (context_id) REFERENCES menu_context(id),
  CONSTRAINT self_ref_fk FOREIGN KEY (parent_id) REFERENCES menu_group(id)
  )
;

As is typical, the data for each record incorporates a primary key that uniquely identifies it. Next comes a foreign key value that relates each record to one of the menu context values defined in the menu_context table. The last three fields store the data that controls the entry’s appearance in the tree control. The item_name field contains the text that will appear for the item’s entry in the tree control. The parent_id is the ID key of the item’s parent, with a key value of 0 indicating a top-level item. Note that this value relates to the id field value in the same table. This sort of self-referential relationship is common when creating tables that are, in essence, linked lists. Finally, the sort_order field defines the order in which the menu entries will be added to the tree control. This last field is necessary because we are storing the configuration data in a database – and as you will recall, a DBMS makes no promises about the order of data in query results unless you explicitly include an ORDER BY clause in the query.
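Consequently, the query that fetches one context’s entries in display order will look something like this (a sketch – the exact SQL lives in a VI we will look at shortly, and the context_id value would come from the menu_context table, 1 being By Function, say):

SELECT id, item_name, parent_id
  FROM menu_group
 WHERE context_id = 1
 ORDER BY parent_id, sort_order ASC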

Now that we have a table defining the overall tree control menu structure, we need to be able to insert into that structure the entries that will represent the plugin screens. In order to accomplish that task we need a table that relates data we already have in the database (the contents of the launch_item table) to the specific tree control entries that will be their parents in the tree control. The following table fulfills that task:

CREATE TABLE subpanel_group_xref (
  id               AUTOINCREMENT PRIMARY KEY,
  launch_item_id   INTEGER NOT NULL,
  menu_group_id    INTEGER NOT NULL,
  menu_context_id  INTEGER NOT NULL,
  CONSTRAINT launchid_subpanel_FK FOREIGN KEY (launch_item_id) REFERENCES launch_item(id),
  CONSTRAINT groupid_subpanel_FK FOREIGN KEY (menu_group_id) REFERENCES menu_group(id),
  CONSTRAINT contextid_subpanel_FK FOREIGN KEY (menu_context_id) REFERENCES menu_context(id)
  )
;

This table might seem a strange candidate for implementing this crucial bit of functionality because it doesn’t appear to actually store any data. The table has only 4 fields, and they all seem to hold nothing but integers. The distinction here is that while most of the tables we have considered serve to store data, this table stores relationships – specifically, the 3-way relationship that defines where each plugin will appear in the menu for each menu context. To see how these bits fit together, we need to start considering the LabVIEW code that will read these structures and build the tree control based menus.

Creating the LabVIEW

The basic approach that we will take in creating the entries for the tree control is going to incorporate two distinct phases.

  1. Draw the menu structure
  2. Fill in the entries associated with the plugins

Reading the Data

The VI that is responsible for reading the menu data from the database (Config Data_DB_ADO:Read Tree Menu Structure.vi) has an enumerated input that selects the menu context the code will display. This value drives a subVI (Get Menu Context ID.vi) that reads and buffers the id value associated with the desired menu context.

Read Tree Menu Structure

In one sense, this subVI really isn’t necessary because you could theoretically perform this look-up operation using a so-called “subquery”, but this approach is far less efficient because it forces the DBMS to repeat the look-up with each query. In addition, these values are not going to change, so better to let LabVIEW remember them. To my way of thinking, however, the biggest issue with this approach is that it complicates the query itself. Given that this is the logic that maintainers (who may not be knowledgeable in SQL) are going to see, it’s a good idea to keep the SQL logic as simple as possible. The other thing to notice about the query is that it puts the entries in the correct order for display by incorporating the clause ORDER BY parent_id, sort_order ASC. Finally, you can see that I built this logic inside the generic ADO database subclass of the existing Config Data object structure.

For reading the tree entries associated with the plugins we use this VI, which is similar to the one for reading the main menu structure, but with some important differences.

Read Tree Menu Plugin Entries

The first obvious thing is that the query is much more complex because the primary table being queried is a cross-reference table. Consequently, we have to de-reference the id numbers to derive the data we need to build the menu entries. In learning how this de-referencing works, it’s important to remember that SQL is a language created by a mathematician – specifically a mathematician who specialized in a branch of mathematics called “Set Theory”. His (incredibly optimistic) idea was that if he could create a language based on mathematic principles, he would be able to prove, in the mathematical sense, that the program was correct (read: bug free).

While his grand hope evaporated in the face of the harsh reality that most programming has surprisingly little to do with mathematics (i.e. computing an answer), the set orientation of SQL has survived. For example, when you perform a query, what you are really doing is SELECTing a data subset FROM a larger set of data – which is typically a table. However, sometimes you need to gather data from a still larger set of data that is spread across multiple tables. To do that, you need to temporarily JOIN those tables together into one large virtual dataset based ON some criteria, like matching id numbers. Get the idea?
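Applied to our tables, the idea looks something like this sketch (in JET-flavored SQL; the label column on launch_item is my assumption about that table’s naming):

SELECT launch_item.label, menu_group.item_name
  FROM (subpanel_group_xref
  INNER JOIN menu_group ON subpanel_group_xref.menu_group_id = menu_group.id)
  INNER JOIN launch_item ON subpanel_group_xref.launch_item_id = launch_item.id
 WHERE subpanel_group_xref.menu_context_id = 1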

A not-so-obvious thing about this LabVIEW code is where it is located in the Config Data object structure. Unlike the routine for reading the basic menu structure, this VI is not located in the generic ADO database subclass. Instead it can be found in the JET database subclass, and the reason for this placement lies in the query. Unlike the other query operation which was implemented in generic SQL, there are aspects of this query that utilize JET-specific syntax (specifically, all the parentheses).

Generating the Menu Tags

With the data in hand that defines the tree-control menus, we now need to turn that data into menu entries. The first step in that transformation is to process the raw data we have acquired from the database to generate the tags that are needed to properly organize the tree entries. I won’t take up the room to show the code for this VI (Parse Tree Management Data.vi) because it’s easy to explain what it does – but feel free to check it out in the code. The VI’s primary program structure is a while loop that iterates through the raw tree-definition data, generating the tags and formatting the data to generate the tree items. The loop carries two shift registers: one holds an array of ID numbers that the loop has already processed; the other holds an array of clusters, each element of which contains the four items that we will need to define a menu item (Parent Tag, Child Name, Child Tag and Child Only?).

With each iteration, the loop extracts the top element from the array and tests the Parent ID. If it is zero, the item is a top-level entry, so the code builds its entry and continues with the next iteration. If Parent ID is anything other than 0, the code searches the array of processed IDs to see if the entry’s parent has already been processed. When the search finds the new entry’s Parent ID, the code uses the parent’s tag value to synthesize the new entry’s child tag. When the new entry’s Parent ID is not found, the code adds the entry back onto the bottom of the array of entries to be processed so it can be retried later. Normally, this search should never fail because the ordering in the queries should put the elements in the correct order, but the retry is there just in case. This operation continues until there are no more entries left to process.
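Expressed in JavaScript terms – purely as an illustration of the pass just described; the real logic is the G code in Parse Tree Management Data.vi, and the field names here are mine – the processing looks like this:

// Input rows have the shape {id, itemName, parentId}; output elements mirror
// the cluster described above (plugin rows would set childOnly to true).
function buildMenuItems(rawRows) {
  var tagsById = {};               // IDs already processed -> their tags
  var items = [];
  var queue = rawRows.slice();
  var retries = 0;
  while (queue.length > 0 && retries <= queue.length) {
    var row = queue.shift();
    if (row.parentId === 0) {                          // top-level entry
      tagsById[row.id] = row.itemName;
      items.push({parentTag: '', childName: row.itemName,
                  childTag: row.itemName, childOnly: false});
      retries = 0;
    } else if (tagsById[row.parentId] !== undefined) { // parent already done
      var tag = tagsById[row.parentId] + ':' + row.itemName;
      tagsById[row.id] = tag;
      items.push({parentTag: tagsById[row.parentId], childName: row.itemName,
                  childTag: tag, childOnly: false});
      retries = 0;
    } else {
      queue.push(row);             // parent not processed yet; retry later
      retries++;                   // guard so an orphaned row can't loop forever
    }
  }
  return items;
}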

Finally, in terms of tree control infrastructure, the only thing we have left is to actually insert the entries that we have defined into the tree control. By the time we get to this point in the code we have gotten the definitions in the correct order so all we have left to do is disable front panel updates, clear the tree control, add the new tree entries and re-enable front panel updates. Again, this code is very simple so to save space I will refer you to the last post (https://www.notatamelion.com/2015/09/14/a-tree-grows-in-brooklyn-labview/) for details on the call and how it works.

Integration with the Testbed

To integrate this code with the existing testbed application requires very little work. First off, on the front panel, we remove the existing ring control that we were using to select screens and add the tree control (I’m using the one from the System-themed palette), plus a System-themed enumeration that will allow the operator to switch between the two menu context values. Note that this control could also be defined as a ring with its String[] property populated at run time to show the available options. That implementation would be useful if you want your program either to allow users to dynamically configure the basic structure of the tree control menus, or to provide different options depending on who is using the system.

New Front Panel Controls

On the block diagram, the front panel changes impact the program logic in two places. First, we need to create a new Value Change event to handle the Menu Context control. This event (which is also fired when the GUI initializes itself) is responsible for rebuilding the tree-control menu display.

Menu Context-Value Change Event

The event handler starts by calling a subVI (Get Tree Menu Data.vi) that accepts a Menu Context value as an input and internally calls the two database query VIs we discussed above. After concatenating the arrays that it gets from the two routines, it passes the raw data to the VI (Parse Tree Management Data.vi) I described that converts the raw data into tree control entries. Finally, it returns the array of tree control entries to the event handler, which passes it, along with two references, to the subVI (Draw Menu.vi) that does exactly what its name says. The first of the references is, obviously, a control reference to the tree control. The other is a VI reference to the GUI itself so the subVI can defer and then re-enable front panel updates.

The other block diagram change is to repurpose an existing value change event. The event in question used to handle the ring control that changed screens; while it will still be a value change event, it is now a value change event on the aptly labeled tree control, Tree.

Tree-Value Change Event

The original logic that occupied this space took the string value of the selected ring item and used it to look up the name of the associated screen in an array of strings. The string array consisted of screen labels that were generated when the GUI loaded the subpanel VIs into memory and started them running. The resulting index was then used to index the screen’s VI reference from an array of plugin screen VI references. This VI reference would, in turn, drive the subpanel’s Insert VI method to make that screen visible in the subpanel.

The modified form works basically the same, but with a couple minor differences. Although the tree control’s value is a string, the string is the tag associated with the entry. Since the part we need to perform our search is the last item in the colon-delimited list, the first thing we need to do is strip off everything up to, and including, the lastcolon in the string. Moreover because we want this operation to be efficient as possible – so no looping. A very efficient solution is to use the built-in Match Pattern function with the rather curious-looking pattern shown. To see how it works, consider that a dot (“.”) is a special character matches any character. Next, the asterisk (“*”) is a special character that matches the longest sequence of the token that came just before it. Hence, it will match the longest sequence of any characters. Finally, the colon is not a special character so it will match just a colon. The end result is that the complete pattern will match the longest sequence of characters that are followed by a colon, and it works the same whether there is one colon in the string or a dozen. The string I want will be what is left after the match.

The modified form works basically the same, but with a couple of minor differences. Although the tree control’s value is a string, the string is the tag associated with the entry. Since the part we need to perform our search is the last item in the colon-delimited list, the first thing we need to do is strip off everything up to, and including, the last colon in the string. Moreover, we want this operation to be as efficient as possible – so no looping. A very efficient solution is to use the built-in Match Pattern function with the rather curious-looking pattern shown. To see how it works, consider that a dot (“.”) is a special character that matches any character. Next, the asterisk (“*”) is a special character that matches the longest possible sequence of the token immediately preceding it. Hence, “.*” matches the longest sequence of any characters. Finally, the colon is not a special character, so it matches just a colon. The end result is that the complete pattern matches the longest sequence of characters that is followed by a colon, and it works the same whether there is one colon in the string or a dozen. The string I want is what is left after the match.

Testing the Interface

As always, “the proof of the pudding is in the eating”, so let’s try running the application with its new GUI feature. One minor difference in behavior is that the subpanel now remains empty at startup until the user makes a selection. However, from the time the application starts, the user can see a complete list of all the display screens that are available. In addition, the tree will automatically reconfigure itself when a different context is selected.

Testbed Application – Release 17
Toolbox – Release 14

The Big Tease

At one time when I started doing this work, test systems were surprisingly homogeneous. While it was true that instruments came from many different vendors, the software environment was pretty monolithic. Today, however, things have really changed. Every day it is becoming more common and accepted to have multiple applications running in parallel that were developed using a variety of tools ranging from C++ to Java to C# and F#.

In the past we have talked about creating a LabVIEW-based backend application with the main GUI built using standard web tools such as HTML, CSS and JavaScript (https://www.notatamelion.com/2015/06/08/building-a-web-backend-in-labview/). Over the next few posts I want to consider some of the other ways that your LabVIEW application can work with external applications.

Until Next Time…
Mike…

A Tree Grows in Brooklyn LabVIEW

This time out, I want to start exploring a user interface device that in my opinion is dramatically under-utilized. I am talking about the so-called tree control. This structure solves a number of interface challenges that might otherwise be intractable. For example, the preferred approach to displaying large amounts of data is to avoid generating large tabular blocks of data, opting instead to display these datasets on graphs. However, there can be situations where those large tabular blocks of data are exactly what the customer wants. What a tree control can do is display this data using a hierarchical structure that makes it easier for the user to find and read the specific data they need. A good example of this sort of usage is Windows Explorer. Can you imagine how long it would take you to find anything if all Windows provided was an alphabetical list of all the files on your multi-gigabyte hard drive?

Alternately, a tree control can provide a way of hierarchically organizing interface options. For instance, to select the screen to display in the testbed application we have been building, the program currently uses a simple pop-up menu containing a list of the available screens. This technique works well if you have a limited number of screens, but it does not scale well.

We will structure our evaluation of tree controls around two applications that demonstrate its usage both as a presentation device for large datasets and as a control interface. Starting that discussion, this week we will look at how to display a large amount of data (all the files on your PC). Then in the following post, we will explore its usefulness for controlling the application itself by modifying the testbed application to incorporate it.

Our Current Goal

To demonstrate this control’s ability to organize and display a large amount of tabular data, we are going to consider an example that displays a hierarchical listing of the files on your computer starting with a directory that you specify. The resulting display will represent folders as expandable headings, and for files show their size and modification date.

I picked this application as an example because it provides the opportunity to discuss an interesting concept that I have been wanting to cover for some time (i.e. recursion). Moreover, on a practical level, this application makes it easy to generate a very large set of interesting data – though it isn’t very fast. But more on that in a bit. For now, let’s start by considering what it takes to make this tree grow. Then we can look at the application’s major components.

Becoming a LabVIEW Arborist

Although tree controls and menus occupy different functional niches, their APIs bear certain similarities. For example, they both draw a distinction between what the user sees and the “tags” that are used internally to identify specific items. Likewise, when creating a child item, both APIs use the parent’s tag to establish hierarchical relationships.

A big difference between the two is that a tree control can have multiple columns, like a table. In fact, one way of understanding a tree control is as a table that lets you collapse multiple rows into a header row. So in designing for this control, the first thing we need to do is decide which values are going to represent the “header rows”, and which the “data rows”. For this, our first excursion into utilizing tree controls, the “header” rows will define the folders – so that is where we go first.

Showing the Folders

The code that we will use to add a new folder to the tree resides in a VI called Process New Directory Entry.slow.vi (the reasons for the “slow” appellation will be explained shortly).

Process New Directory Entry.slow

Because this logic resides in a subVI, the reference to the tree control comes from a control on the VI’s front panel. Next, note that the way you get the row into the control is by using an invoke node that calls the Edit Tree Items:Add Item method. I point out this fact because it tells you something important: all the data we are going to be displaying in the control are properties of the control, not values. Consequently, they will be automatically saved as part of the control whenever you save the VI that contains it.

Next, let’s consider the inputs to the method. The top-most item is Parent Tag. The assumption is that the method is defining a new child item, so this input defines the parent under which the new child will reside. Therefore, a Parent Tag that is a null string indicates an item with no parent (i.e. a top-level item). The next item down from the top is Child Position and its job is to tell the method where to insert the new child that it is creating. A value of -1, as is used here, tells the method to put the new child after any existing children of the identified parent. In other words, if this code is called multiple times, the children will appear in the control in the order in which they were created.

The next two input items (Left Cell String and Child Text) control what the user will see in the control. You will recall that I said that tree controls are sort of like hierarchical, collapsible tables. In that representation, the left-most cell shows the hierarchical organization through indentation. In addition, by default, every row that has other rows nested beneath it shows a small glyph indicating that the row can be expanded. The other cells in the row are like the additional columns in the table and can hold whatever data you want. When creating entries for directories, the left-most cell will contain the name of the directory, and the remainder of the row will be empty. To implement this functionality, the last item is stripped from the input path and passed to the Left Cell String input, while an empty string array is written to the Child Text input.

Next, the Child Tag input allows you to specify the value that you want to use to uniquely identify this row when creating children under it, or when reading the value of control selections. Now, the documentation says that if you don’t wire a string to this input, it will reuse the Left Cell String value as the tag, but you don’t want to depend on this feature. The problem is that tags have to be unique, so to prevent duplication, LabVIEW automatically modifies these tags to ensure that they are unique by appending a number at runtime. While it is true that the method returns a string containing the tag that LabVIEW generated, not knowing ahead of time what the tag will be can complicate subsequent operations. To avoid this issue, I like to include logic that will guarantee that the tag value that I write to this input is unique. For this application, if the parent tag is a null string (indicating a top-level item), the code takes the entire path, converts it to a string and uses the result as the child tag. If the parent tag is not null, the code generates the tag by taking the parent tag value and appending to it a slash character and the string that is feeding the child’s Left Cell String input. If the reason for this logic escapes you, don’t worry about it – you’ll see why it’s important shortly.
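Written out as a rule, the tag logic amounts to this (a JavaScript-flavored sketch of the G code just described – the function and parameter names are mine):

function childTag(parentTag, fullPath, leftCellString) {
  // Top-level item: the whole path becomes the tag. Otherwise, extend the
  // parent's tag with a delimiter plus the name shown in the left cell.
  return parentTag === '' ? String(fullPath)
                          : parentTag + '\\' + leftCellString;
}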

Finally, the Child Only? input is a flag that, when false, allows other rows to be added hierarchically beneath it.

Showing the Files

With the code handled for creating entries associated with directories, we now need to implement the logic for creating the entries that represent the files inside those directories – which, as you can see below, utilizes the same method as we saw earlier, but plays with the inputs in slightly different ways.

Process New File Entry.slow

Named Process New File Entry.slow.vi, this VI is designed to use the additional columns to provide a little additional information about the file: to wit, its size and last modification date. Therefore, the first thing the code does is call a built-in function (File/Directory Info) that reads the desired information. However, this call raises the potential of an error being generated. When errors are possible, you need to spend some time thinking about what you want to have happen when they occur. In this situation, there are three basic responses:

  1. Propagate the Error: With this approach, the error would simply be propagated on through the code and be reported like any other error. This action would ensure that the error would be reported, but would stop the processing of the interface.
  2. Don’t Include this File in the List: By trapping the error and preventing it from being passed on, we can use it to block the display of files for which we can’t retrieve the desired information. This technique would allow the interface processing to run to completion by simply ignoring the error.
  3. Include the File but not its Data: A variation of Option 2, this approach would still block the error from being propagated. However, it would still create the file’s entry in the tree, but with dummy data like “Not Available”, or simply “n/a”, for the missing data.

So which of these options is the correct one? This is one of those situations where there is no universally correct answer. Sorting out which option is the correct one for your application is why, as a software engineering professional, you earn the “big bucks”. For the purpose of our demonstration, I picked Option 2.

Next, the New Directory Tag input is the tag that is associated with the folder in which this file resides. Finally, the Child Tag value is calculated by taking the Parent Tag value and appending to it a slash character and the name of the file stripped from the input path.

Pulling it all Together

So those are the two main pieces of code. All we have to do now is combine them into a single process that will process a starting directory to produce a hierarchical listing of its contents. The name of this VI is Process Directory.slow.vi, and this is what its code looks like:

Process Directory.slow

So you can see that the first thing it does is call the subVI we discussed for creating an entry in the tree control for the directory identified in the Starting Path input, using the tag value from the My Parent Tag input. The result is that the folder is added to the tree, and the subVI returns the tag for the new folder item. The next step is to process the folder’s contents, so the code calls the built-in List Folder function to generate lists of the directory’s files and subdirectories.

The array of file names is passed into a loop that repeatedly calls the subVI we discussed earlier that creates entries in the tree control for individual files. The array of subdirectory names drives a loop that first verifies that the first character of the name is not a dollar sign (“$”). Although this check is not technically necessary, it serves to bypass various hidden system directories (like $Recycle.Bin) which would generate errors anyway. Assuming that the subdirectory name passes the test, the code calls a subVI that we haven’t looked at before – or have we? If you open this subVI and go to its block diagram, you will see this:

Process Directory.slow

Look familiar? I have not simply duplicated the logic in Process Directory.slow.vi; rather, I am using a technique called recursion to allow the VI to call itself. This idea might sound more than a little confusing, but if you think about it, it makes a lot of sense. Look at it this way: to correctly process these subdirectories, we need to do the exact same things as we are doing right now to process the parent directory, so why not use the exact same code?
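If the shape of the idea is easier to see in text-based code, here is the same pattern sketched in JavaScript (a Node.js illustration – the entry-creation helpers are hypothetical stand-ins for the subVIs, and the LabVIEW version accomplishes the self-call with the reentrant VI described next):

var fs = require('fs');
var path = require('path');

function processDirectory(startPath, parentTag) {
  var myTag = addDirectoryEntry(startPath, parentTag);   // hypothetical helper
  fs.readdirSync(startPath).forEach(function(name) {
    var fullPath = path.join(startPath, name);
    if (fs.statSync(fullPath).isDirectory()) {
      processDirectory(fullPath, myTag);   // the function calls itself...
    } else {
      addFileEntry(fullPath, myTag);       // ...or records a file (hypothetical helper)
    }
  });
}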

The way it works is that Process Directory.slow.vi is configured in its VI Properties as a shared clone reentrant VI. To review: when LabVIEW runs code utilizing shared clones, it creates a small pool of instances of the VI’s code in memory. When the shared clone VI is actually called, LabVIEW goes to this pool and dynamically calls one of the shared clones that isn’t currently being used. If the pool ever “runs dry”, LabVIEW automatically adds more clones to the pool. It is this behavior relative to shared clones that is key to the way LabVIEW implements recursion. In order to see how this recursion operates, let’s consider this very basic top-level VI:

Getting Processing Started.slow

The code first clears any contents that might already exist in the tree control and then makes the first call to Process Directory.slow.vi. When the runtime engine sees that call, it goes to the pool, gets a clone of the VI and starts executing it. An important point to remember is that even though all the clones in the pool were derived from the same VI, they are at this point separate entities. It is as though you manually created several copies of the same VI, except LabVIEW did the copying for you.

When running this first clone, LabVIEW will eventually get to the call that it makes to Process Directory.slow.vi. As before, the runtime engine will go to the pool, get a second clone of the VI and start it executing, and so it will go until execution gets to a directory that only has files in it. In that case, the cloned VI will not get called and that Nth-generation clone will finish its execution. At this point LabVIEW will release the clone back to the pool for future reuse, and return to executing the clone that called the one that just finished. This calling clone may have other subdirectories to process, or it may be done – in which case it will also finish its execution, LabVIEW will release it back to the pool, and continue executing the clone that called it. This process will continue until all the clones have finished their work.

Some Further Points

And that, dear readers, is how the process basically works, but there are a couple of important things still to cover. We need to talk about memory consumption, performance, and how to interact with this control in your program once you have it populated with data.

Memory Considerations

I mentioned earlier that the information that you enter into a tree control are actually properties of the control – not its data. I also stated that as a result of that fact, said information will automatically be saved as part of the control. As a demonstration of that fact, consider that the very basic top-level VI I just showed you consumes about 14 kbytes on disk. However, as a test I turned the process loose on my PC’s Program Files (x86) directory. After it had finished processing the 14,832 folders(!) and 122,533 files(!!) contained therein, I saved the VI again. At that point, the size of the VI on disk ballooned to 2.6 Mbytes.

The solution is to always remember to delete all items from a tree control when the program using it stops. Although you obviously don’t have to worry about this sort of growth in a compiled application (a standalone application can’t save changes to itself), this convention will help keep you from inadvertently saving extraneous information during development and artificially expanding the size of your application.

Performance Considerations

The test I did to catalog my PC’s Program Files (x86) directory also highlighted another issue: execution speed. To complete the requested processing took about an hour and a half. Doing the same processing, but minus the tree control operations, took less than a minute, so the vast majority of this time was clearly spent updating the tree control. But what exactly was it that was taking so long? As it turns out, there are two sources of delay, the first of which is actually pretty easy to control.

The way the code is currently written, the tree control on the front panel updates its appearance after each addition – a problem, by the way, that is not unique to tree controls. The solution is to tell LabVIEW to stop updating the front panel for a while, and here is how to do it:

Getting Processing Started w-Defer Panel Updates

A VI’s front panel has a property called Defer Panel Updates. When you set this property to true, LabVIEW records all changes to the VI’s front panel, but doesn’t actually update it to reflect those changes. When the property is later set to false, all pending changes are applied to the front panel at once. The addition shown reduces the time to process my entire Program Files (x86) directory by 66% to just 30 minutes – which is much better, but still not great.

To reduce our processing time further, we have to take more drastic measures – starting with a fundamental change in how we add entries for individual files. The technique we have been using to add entries is very convenient because we explicitly identify the parent under which each child is to be placed. Consequently, we have the flexibility to add entries in essentially any order. However, as the total number of entries grows larger, we begin to pay a high price for this convenience and flexibility because, under the hood, the control’s logic has to incorporate the ability to insert entries at random into the middle of existing data.

The solution to this problem is to use a different method. This method is called Edit Tree Items:Add Multiple Items to End and, as its name says, it simply appends new items to the end of the current list of entries. Of course, for this to work we have to take responsibility for a lot of the stuff that LabVIEW was doing for us, like adding entries in the correct order and maintaining the indentation that preserves the hierarchical structure. Thankfully, that work isn’t very hard. For instance, here is the code for creating the new directory entry:

Process New Directory Entry

The first thing you will notice is that the invoke node is gone. The method that we will be invoking sports a single input, which is an array of clusters representing the tree entries it will add. The purpose of the logic before us is to assemble the array element that will create the parent folder’s entry in the tree.

Next, note that the information needed to define the entry is slightly different. First, we don’t need to specify a tag for the parent because we are assuming that the nodes are simply going to be added to the display in the order in which they occur in the array. However, that simplification raises a problem: how do you maintain the display’s hierarchical structure? The thing to remember is that the hierarchy is defined visually, but also logically, by the indentation of the entries. Therefore the entry definition incorporates a parameter that explicitly defines the number of levels by which the new entry should be indented. Due to the way we have been building the tags, this value is very easy to calculate. All we have to do is count the number of delimiters (“\”) in the entry’s tag and then subtract the number of delimiters in the starting path. The first part of that calculation occurs in the subVI Calculate Indent Level.vi and the second part is facilitated by a new input parameter, Indent Offset.
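
In Python terms, that calculation amounts to nothing more than this (a sketch – the tag and offset values shown are illustrative):

def indent_level(item_tag, indent_offset):
    # one level of indentation per path delimiter, less the delimiters
    # that were already present in the starting path
    return item_tag.count('\\') - indent_offset

# e.g. indent_level('C:\\Program Files\\MyApp\\readme.txt', 1) -> 2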

Make the same adaptations to the routine for adding a new file entry and you get this:

Process New File Entry

Nothing new to see here. The important part is how these two new VIs fit together, and to see that we need to look at the recursive VI Process Directory.vi (I have zoomed in on just the part that has changed):

Process Directory

This logic’s core functionality is to build the array of entry definitions that the Edit Tree Items:Add Multiple Items to End method needs to do its work. The first element in this array is the entry for the directory itself, and the following elements define the entries for the files within the directory. Finally, we have to make a small change to the top-level VI as well:

Getting Processing Started

Specifically, we need to calculate the Indent Offset value based on the Starting Path input. But the important question is: does all this really help? With these optimizations in place the processing time for my PC’s Program Files (x86) directory drops to just a hair under 10 minutes. While that improvement is impressive, it might still be too long; however, the changes needed to reduce the processing time further are only necessary when dealing with very large datasets. Plus, they really have nothing to do with the tree control itself – so they will have to wait for another time.

Event Handling

The last thing we have left to cover is what happens after the tree control is populated with data. Like most controls in LabVIEW, tree controls support a variety of events, including ones that allow event structures to respond when the user selects or double-clicks an item in the control. But this begs the question: what is the fundamental datatype of a tree control? By default, the datatype of a tree control is a string, and its value is the tag of the currently selected item. Alternatively, if no item is selected, the tree control’s value is a null, or empty, string.

Top Level File Explorah

Because the control’s datatype is a string, you can programmatically clear any selection by writing a null string to a Value property node or a local variable associated with the control. However, note the words “By default…”. Like a few other controls (such as listboxes), tree controls can be configured to allow multiple items to be selected at once. In that case, the control’s datatype changes to an array of strings where each element is the tag of a selected item.

The other thing I wanted to point out through this example is the importance of carefully considering how to define the tags for items. It may seem obvious, but if you are taking the time to put data into this control, you are probably going to want to use it in the future. It behooves you, therefore, to tag it in such a way as to allow you to quickly identify and parse values. In this case I put together the tags such that they mirror the data’s natural structure – its file path. By mimicking your data’s natural structure you make it easier to locate the specific information that you need.
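
For instance, because each tag here is just a file path, a selection handler can recover everything it needs with a single split. A trivial Python illustration (the tag value shown is hypothetical):

selected_tag = 'C:\\Program Files\\MyApp\\readme.txt'  # value read from the tree control
path_parts = selected_tag.split('\\')                  # the structure falls out of the tag
file_name = path_parts[-1]                             # -> 'readme.txt'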

File Explorer – Release 1
Toolbox – Release 12

The Big Tease

OK, that is enough for now. Next time we will return to our testbed application and look at using tree controls as a control element. With this use case the focus shifts from the volume of data to organizing the GUI to simplify operator interactions.

Until Next Time…
Mike…

Dropping-In on the Testbed

Last time out we started exploring one common application of so-called “drop-in” VIs. The technique is based on the idea of creating VIs that are capable of doing something useful for the VI that is hosting them, but without interacting directly with that VI’s basic logic. The example we considered was manipulating the font and type size used to present textual data.

At the close of that post we had created a basic object-oriented structure that could manipulate the label or caption of any front panel control or indicator. I want to finish this discussion by looking at how to expand that basic implementation to allow it to set the text properties of the text contained inside a control or indicator. For that we will return to our testbed application.

A Brief Recap

It has been a while since we have worked with this code, so a brief refresher on what it does is probably in order. The testbed application we will be modifying consists of several processes that run independently of one another. To begin with, there is a background process that oversees the reporting of errors that occur. Handling the user interface duties, a GUI process incorporates a subpanel that can display the front panels of several simulated acquisition and process-control VIs. The whole thing is kicked off by a launcher VI that loads the various processes into memory and starts them executing.

Our goal here will be to add the drop-in VI we created last time to all the user-facing VIs and add classes as necessary to allow it to handle the controls and indicators on those VIs. However, if you don’t already have a tool for editing database contents directly, you should first download a tool called Database .NET (the link is to a zip file, and is at the bottom of the page). The program is a simple utility that lets you examine and edit database data from a number of different DBMSs. I don’t know the folks that wrote this, and have no vested interest in the program other than that I have used it for years and found it very useful. Note that this program has no installer, so it has a very small footprint – it will even run from a USB stick. To “install” the program, simply create a directory for it on your computer and then drag into it the program that is inside the zip archive you downloaded – installation complete. The easiest way to invoke it is to set it as the default application for *.mdb files.

  • Note that if you decide to install this utility in a subdirectory of the Program Files (x86) directory, you may have to play around with the folder permissions a bit before it will run. Because the program generates several temporary files when it’s starting up, the user has to have Full Access to the folder in which it is installed.

One other caveat to bear in mind before we dive into the modifications is that these operations cannot override limits on these properties that might exist for other reasons. For example, these techniques will not work on controls that you have defined as strict typedefs. The reason: the strict typedef defines everything about the control’s appearance, and the property node will throw an error if you try to change them. Likewise, a System-themed control will let you change the font characteristics, but will complain if you try to change colors.

Making With the Modifications

So where do we start? Well, the first thing we need to do is make a couple of minor tweaks to Display Font Manager.vi. First, we need to define what happens to the drop-in’s errors. Because it’s important to preserve them, we will save the errors that arise in the drop-in to the same location where errors from the testbed application proper are stored – but without bothering the program’s operator. To accomplish that task, let’s reuse the subVI that the error handling logic uses to store error data.

Drop-in Error Handling

Note that I had to add a case structure because the code surrounding the location where this subVI was originally used only executed it if there was an error. So unless we want spurious records being posted, we have to add that logic here.

Next, as the code is currently written, the error chain in the drop-in’s logic starts with the Error In control and terminates in the Error Out indicator. Although this arrangement works fine during development and testing, it is not what we want when the time comes to deploy the code. As I said last time, drop-in VIs should not interact with the host VI, and should not inject their own errors into the host’s error stream. Still, it can be useful to be able to use the drop-in’s error IO to establish data dependencies that control when it runs. The solution is for the drop-in to have error clusters, but not have them be connected internally.

Errors - Straight Through
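
For those who think better in text, here is the same pattern as a minimal Python sketch (the two helper functions are hypothetical stand-ins for the drop-in’s internal logic):

def apply_font_settings():
    ...                                 # placeholder for the real work

def log_error_to_database(exc):
    ...                                 # placeholder for the error logger

def drop_in_font_manager(error_in):
    # The error-in/error-out pair exists only so the host can establish
    # data dependency; it is never wired to the internal logic.
    try:
        apply_font_settings()
    except Exception as exc:
        log_error_to_database(exc)      # errors are recorded, never re-raised
    return error_in                     # passed straight through, untouched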

Changing the Testbed

Now that the drop-in is ready to install, we need to determine where to install it. Examining the code, we see that there are 5 VIs that are user-facing:

  1. The Launcher (testbed.vi)
  2. The Main GUI (Display Data.vi)
  3. The Temperature Controller (Temperature Controller.vi)
  4. Two “Acquisition” VIs (Acquire Ramp Data.vi and Acquire Sine Data.vi)

So the first thing I do is modify each of these VIs by dropping a copy of the drop-in VI onto their block diagrams, outside the outer-most loop. For example, this is what the modified launcher block diagram looks like:

testbed.vi with drop-in installed

As promised earlier, this is all the modification that the application will need – which means we are ready to start testing.

The First Test

“But wait a minute…” you protest. “…we haven’t configured anything yet. There’s nothing to test!”

Well, you’re half right. We have not gone into the database and configured any controls to be modified, but we still have something to test. We still have to verify the drop-in’s default behavior, which, by the way, is to do nothing. Yes, you read that right: we have to test that nothing happens. You see, a major aspect of the drop-in concept is that drop-ins don’t do anything unless they are explicitly told to through their configuration. Right now we have installed the drop-in code, but there are no controls configured in the database, so we need to make sure that the main application continues to run as it did before: no side-effects and no errors. In short, the drop-in right now should do nothing, and we need to make sure that it fulfills that requirement.

So launch the top-level VI (testbed.vi) or run the standalone executable. As before, the launcher will show the names of the processes it’s launching, and when it finishes the main GUI will open. You will be able to switch between screens using the popup menu, and the plugins will operate just as they did before. Finally, if you look at the contents of the event table in the database, you will see that no errors have been generated.

It’s All About the Children

Now that we have “nothing” working, we need to finish implementing all the “somethings”. You will recall that when we ended last time I had created a basic implementation of the font manager functionality that could change the label or caption of any type of control. The tricky part, I said, was going to be implementing the subclass, or child, methods that would modify the font of a configured control’s contents. So let’s look at those children.

The String and Digital Subclasses

I chose to start with these two because they are the easiest to understand, and they are very much alike. Here’s the child method for handling strings…

String Subclass Method

…and the one for digital numerics…

Digital Subclass Method

In either subclass, the logic starts by calling the parent method (which handles labels and captions) and then extracting from the parent’s class data the reference to the control that will be manipulated. While that is going on, the Font Parameters data is unbundled and the Component to Set value controls what, if anything, happens next. If the selected component is Label or Caption, a case is selected which does nothing but pass through the error cluster. If, however, the selected component is Contents, the associated case casts the basic control reference from the parent class data into the control’s specific control class, and then sets the appropriate properties.
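
Expressed as a text-language analogy, the parent/child split looks something like this Python sketch (the class and attribute names are stand-ins for their LabVIEW counterparts, not actual APIs):

class DisplayProperties:
    """Parent: handles the Label and Caption components for any control."""
    def set_control_font(self, ctrl, component, font_name, font_size):
        if component in ('Label', 'Caption'):
            ctrl.label_font = (font_name, font_size)
        # 'Contents' is deliberately ignored here - that is the child's job

class StringProperties(DisplayProperties):
    """Child: adds the Contents case for string controls."""
    def set_control_font(self, ctrl, component, font_name, font_size):
        super().set_control_font(ctrl, component, font_name, font_size)
        if component == 'Contents':
            # the analog of casting to the Str class and setting Text.FontName
            ctrl.text_font = (font_name, font_size)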

The Boolean and Ring Subclasses

The next two I want to consider are, again, similar to each other, but differ from the preceding pair in that they represent control classes that don’t have any readily discernible textual value. Booleans represent logical true and false conditions, while rings are technically numerics, but the number that is their value doesn’t appear anywhere. In this sort of situation, the idea is to look for text that is not the control’s value but is associated with that value. For example, Boolean controls in LabVIEW can have textual displays that state the control’s condition. These strings are called Boolean Text and are often used to label push buttons or lights…

Boolean Subclass Method

Likewise, the Ring control appears to the user as a pop-up menu, so we can use this code to set the text properties of the text that appears in the menu…

Ring Subclass Method

The WaveformChart Subclass

Finally, we need to take the idea of strings that are only associated with data one more step. What about complex controls that can have multiple strings associated with their values? Objects like charts are good examples of what I am talking about. Just to start, there is text associated with the axis tick marks, there is text that forms the axis labels, and there is text in the plot legends.

The most flexible approach would be to figure out how to uniquely identify each of these components; however, we must be careful not to create an API that is so flexible it is unusable. One solution would be to simply make all the text the same font and size – which is what they are anyway. The look I prefer, however, has the tick mark labels slightly smaller than the axis labels. Here is one way to do that:

WaveformChart Subclass Method

As you can see, the code treats the two axes the same by combining references to them into an array and then passing that array into a loop that manipulates the display parameters. This logic makes the axis labels the size specified in the configuration, but does a bit of math to make the tick mark labels about 10% smaller. This difference might not seem like much, but it works. If this isn’t exactly what you want, that’s OK. The point here is not to present a canonical solution, but to present concepts and ideas that help you find your own way.
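The arithmetic itself is trivial; in Python terms it is just the following (the rounding and the minimum of 1 point are my assumptions about sensible behavior):

def chart_label_sizes(configured_size):
    axis_label_size = configured_size
    tick_label_size = max(1, round(configured_size * 0.9))  # about 10% smaller
    return axis_label_size, tick_label_size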

Adding Configurations

Now we are ready to add the font definitions to the database. I have created a total of 12 definitions covering 9 different controls and indicators and you can see them all by examining the SQL file in the _repos subdirectory in the project (starting at line 27). However, to give you a taste of what the SQL code for this functionality looks like, here is the SQL for the table holding the font configurations, and the font definition for the string indicator on the front panel of the launcher.

CREATE TABLE ctrl_font_definition (
    id          AUTOINCREMENT PRIMARY KEY,
    owner_name  TEXT(50) WITH COMPRESSION,
    ctrl_name   TEXT(50) WITH COMPRESSION,
    font_name   TEXT(20) WITH COMPRESSION,
    font_size   INTEGER,
    ctrl_comp   TEXT(20) WITH COMPRESSION
  )
;

INSERT INTO ctrl_font_definition
  (owner_name, ctrl_name, font_name, font_size, ctrl_comp)
VALUES
  ('testbed.vi', 'progress', 'Segoe UI', 24, 'Contents')
;

The goal of these initial definitions is to “turn on” the functionality without changing too much. For example, ‘Segoe UI’ is the default font that LabVIEW uses on recent versions of Windows. If you are running this code on the Macintosh or Linux (or an older version of Windows), the default font will be different, so on those platforms you may need to modify these definitions before you install them.

Once we have the definitions in the database, let’s try the testbed application again. You might not notice a lot of difference – that is sort of the point. This initial test simply reproduces the default values. One place where you will notice a difference is if you are running Windows and have the display font scaling set to a non-default value. The text size will now always be the same relative to the size of the window, regardless of how the display setting changes.

From here I would recommend that you play around a bit and manually change the font and size of the various controls to see the effect.

Testbed Application – Release 16
Toolbox – Release 12
Testbed Installer – Release 16

Please note that I have included in this release a built version of the application so you can practice working with the database. The LocalDB.mdb file included with this installer has the table defined for holding the font definitions, but the table is empty. This release has two purposes: One, by adding to and manipulating the data in its database, you can see that you really can modify the visual presentation without changing code. Two, I have started using LabVIEW 2015 and realize that some of you may not have upgraded yet. If this version change is a problem, post a comment and I will send you a version of the code back-saved to LabVIEW 2014.

The Big Tease

One of the things that I like about NI Week is the opportunity to meet friends both new and old. Before a keynote address one morning I was talking to another one of the LabVIEW Champions, Jack Dunaway by name, and the topic of this blog came up. To make a long story short, he suggested a topic that sounded so good, I’m going to get started on it next time.

One good way of showing a lot of data in a small space is what is known as a tree control. It’s valuable because its structure is inherently hierarchical, so it can display a lot of data while not taking up a lot of screen real estate. In addition, it can reduce the overwhelmed feeling you sometimes get when looking at large datasets because, when done well, it allows you to start with a high-level view of the data and gradually drill down to the specific results you want.

If you are working in Windows, there are two such controls available: one that is part of Windows, and one that is native to LabVIEW. So next time: the Native LabVIEW Tree Control. Be there or be square.

Until Next Time…

Mike…

Drop-Ins Are Always Welcome

One of the key distinctions of web development is that the standards draw a bright line between content and presentation. While LabVIEW doesn’t (so far) have anything as powerful as the facilities that CSS provides, there are things that you can do to take steps in that direction. The basic technique is called creating a “drop-in” VI. These functions derive their name from the fact that they are dropped into an existing VI to change the display characteristics, but without impacting the host VI’s basic functionality.

The Main Characteristics

The first thing we need to do is consider the constraints under which these VIs will need to operate. These constraints will both assist in setting the scope of what we try to accomplish, and inform the engineering decisions we have to make.

No Fraternization

The first requirement that a VI must meet in order to be considered truly “drop-in” capable is that there must be no interaction between its logic and that of the VI into which it is dropped. But if there is to be no interaction with the existing code, how is it supposed to change anything? Given that we are only talking about changing aspects of the data presentation, all we need is a VI Server reference to the calling VI, and that we can get using the low-level Call Chain function.

VI Server Accesses

As you can see, from the VI reference you can get a reference to the VI’s front panel, and from that you can get an array of references for all the objects on the front panel. It is those references that allow you to set such things as the display font and size – which just happen to be the two things we are going to be manipulating for this example.

One potential problem to be aware of is the temptation to use these references to do things that directly affect how the code operates. However, this is a temptation you must resist. Even though it may seem like you “got away with it this time”, sooner or later it will bite you.

To be specific, changing the appearance of data is OK, but changing the data itself is, in general, not. However, there is one exception that, when you think about it, makes a lot of sense: localization. Localization is the process of changing the text of captions or labels so they appear in the language of the user, and not the developer. This operation is acceptable because although you might be changing the value of, for example, a button’s Boolean text, you aren’t changing what the button does. The button will perform the same whether it is marked “OK”, “Si” or “Ja”.

Autonomous Error Handling

The next thing a drop-in has to be able to do is correctly manage errors. But here we have a bit of a conundrum. On the one hand, errors are still important, so you want to know if they occur. On the other hand, you don’t want this added functionality to interrupt the main code because an error occurred while configuring something the user would consider “cosmetic”.

The solution is for the drop-in to have its own separate error reporting mechanism that records errors, but doesn’t inject them into the main VI’s error chain. The error handling library we have in place already has the needed functions for implementing this functionality.

External Configuration Storage

Finally, the drop-in VI needs configuration data that is stored in a central location outside the VI itself – after all, we want this drop-in to be usable in a wide variety of applications and projects. For implementing this storage you have at your disposal all the options you had when creating the main application itself, and as with the main application, the selection of the correct storage location depends on how much of this added capability will be exposed to the user. If you intend to let the user set the values, you can put the settings in an INI file. You just need to make sure that you qualify the data they enter before you try using it. Otherwise you could end up in a situation where they specify a non-existent font, or a text size that is impossibly large or small.

To keep things simple for this test case, we will store the data in the same database that we use to store all the other configuration values. The data that we store in the database will also be (for now) simple. Each record will store the data needed to modify one part of one control, so it will contain a field for the name of the VI, the name of the control, an enumeration for selecting what part of the control is to be set, and finally the font name and size. The enumeration Component to Set will have 3 values: Label, Caption and Contents. Note that to keep things organized and easy to modify or expand, this structure as a whole, and the enumeration in particular, are embodied on the LabVIEW side in type definitions.
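
For reference, here is the shape of one such record sketched in Python (the field names mirror the ctrl_font_definition database table used elsewhere in this series; the classes themselves are illustrative, not part of the LabVIEW code):

from dataclasses import dataclass
from enum import Enum

class ComponentToSet(Enum):      # counterpart of the type-defined enumeration
    LABEL = 'Label'
    CAPTION = 'Caption'
    CONTENTS = 'Contents'

@dataclass
class FontDefinition:            # one record: one part of one control
    owner_name: str              # fully qualified name of the VI
    ctrl_name: str               # label of the control to modify
    font_name: str
    font_size: int
    ctrl_comp: ComponentToSet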

The Plan of Action

So how can we implement this functionality? The literary device of the “omniscient author” has always bothered me, so rather than simply heading off in a direction I chose before I started writing, let’s take a look at a couple of implementation options and see which one will work the best for us. Remember that the only thing more important than coming up with the right answer is knowing how you came up with the right answer.

The “Normal” Way

For our first try, let’s start with the basic logic for getting the control references we saw a moment ago, and add to it a VI that returns the font configuration data for the VI that is being configured. Note that the input to this fetch routine (which gets the data from the application database) is the name of the VI that is calling the drop-in. This name is fully qualified, meaning it contains not just the VI name, but also the names of any library or class of which it might be a member.

Font Manager - Deadend

The output from the database lookup consists of a pair of correlated arrays. Correlated arrays are arrays where the data from a given element in one array correlates to, or goes with, the data from the same-numbered element in the other array. In this case, one array is a list of the control names and the other array is a list of all the font settings that go with the various parts of that control.

The first thing the code does is look to see if there are any font settings defined for the VI by checking to see if one of the arrays is empty. It is only necessary to check one of the arrays since they will always have the same number of elements. If there are font settings defined for the VI, the code takes the array of control references from the VI’s front panel and looks at them one by one to determine whether the label for each particular control or indicator is contained in the array of control names. When this search finds a control that is in the list of control names, the code unbundles the font settings data and uses the Component to Set value to select the frame of a case structure that contains the property node for the specified component’s property.
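
In Python terms, the lookup logic reduces to something like this sketch (the control objects and the apply_component helper are hypothetical):

def apply_component(ctrl, settings):
    ...                                # select and set the property named by
                                       # the settings' Component to Set value

def apply_font_settings(ctrl_refs, ctrl_names, font_settings):
    if not ctrl_names:                 # checking one array suffices: the
        return                         # correlated arrays are equal length
    by_name = dict(zip(ctrl_names, font_settings))
    for ctrl in ctrl_refs:             # every object on the front panel
        settings = by_name.get(ctrl.label)
        if settings is not None:
            apply_component(ctrl, settings)   # the case structure equivalent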

This approach works pretty well for labels and captions because all controls and indicators, regardless of type, can have them. In addition, regardless of whether the control is a string, numeric, cluster or what have you, the properties are always named the same. (The property for manipulating a control’s Caption is shown.)

Unfortunately, things begin to get complicated once you move past the properties that all controls share in common and start changing the font settings for the data contained inside the control – what we are calling the Contents. For example, the property for setting the font of the contents of a string control is called Text.FontName, whereas the property for setting the corresponding information in a digital numeric is called NumText.FontName. Things get even stranger when you start talking about setting the font of the Boolean text in the middle of a button, or worse the lines in a listbox – there each row has to be set individually.

The fundamental problem that this simple approach has is that the settings for controls and indicators are built on object-oriented principles. Labels and Captions are easy because they are common to all controls, but as soon as you start talking about text that is contained inside a control, you have to deal with a specific type, or subclass, of control. Plus, to even get access to the required properties you need to cast the generic Ctl reference to a more specific class like Str (string) or DigNum (digital numeric). We could, of course, simply expand the number of items in the Component to Set enumeration to explicitly call out all the various components that we want to be able to modify. Then in each case we could do something like this:

'fixing' a problem

Because we know that the String Text component is only valid for strings, we could cast the reference to the proper subclass, set the appropriate property, and call it done. If you look at very much code you will see this sort of thing being done all the time. But looking closer in those situations, you will also see all the code that gets put into trying to fix this implementation’s shortcomings. For example, because the subclass selection logic is in essence being driven by the enumeration, and the enumeration value is stored in the database, we have created a situation where the contents of the database need to be kept “in sync” with the controls on the front panels. Hence, if a string control should be changed to a digital numeric (or vice versa), the database will need to be manually updated to track the change. This fact, in turn, means that we will need to add code to the VI to handle the errors that occur when we forget to keep the code and the database in sync.

As bad as that might sound, it is not the worst problem. The real deal-breaker is that every time you want or need to add support for another type of control, or another Component to Set, you will be back here modifying this VI. This ongoing maintenance task pretty much means that reusing this code will be difficult to impossible. Hopefully you can see that thanks to these problems (and these are just the two biggest ones), this “simple” approach built around a single case structure ends up getting very, very messy.

But if the object-oriented structure of controls is getting us into trouble, perhaps a bit more object orientation can get us out of trouble…

Riding a Horse in the Direction it’s Going

When programming, you will often find yourself in a situation where you want to extend a structure in ways that you can’t yet fully see or understand. When confronting that challenge, I often find it helpful to take some time and consider the overall trajectory of the part of the structure I can see, to determine where it’s pointing. Invariably, if you are working with a well-defined structure (as you are here) the best solutions will be found by “riding the horse in the direction it’s already going”.

So what direction is this “horse” already going? Well, the first thing we see is that it is going in the direction of a layered, hierarchical structure. In the VI Server structure that we can see, we observe that the basic control class is not at the top of the hierarchy, but rather in the middle of a much larger structure with multiple layers both above and below it.

Menus

The other thing we can note about the direction of this architectural trendline is that the hierarchy we just saw is organized using object-oriented principles, so the hierarchy is a hierarchy of classes, of datatypes. Hence, each object is distinct and in some way unique, but the objects as a group are also related to one another in useful ways.

Taking these two points together it becomes clear that we should be looking for a solution that is similarly layered and object-oriented. However, LabVIEW doesn’t (yet) have a way to seamlessly extend its internal object hierarchy, so while developing this structure using classes of our own creation, we will need to be careful to keep “on track”.

Moving Forward

The basis for this structure is a class that we will call Display Properties.lvclass. Initially this class will have two public interface VIs: One, Create Display Properties Update Object.vi, does as its name says and creates an object associated with a specific control or indicator. This object will drive what is for now the only other interface VI (Set Control Font.vi), which is created for dynamic dispatch and will serve as the entry point for setting the font and size of text associated with GUI controls and indicators. I am building the class in this way because it is easy to imagine other display properties that we might want to manipulate in the future (e.g. colors, styles, localization, etc.). This is the code I use to dynamically load and create display property update objects:

Create Font Object

In general, it is very similar to code I have presented before to dynamically create objects, but there are a few differences. To begin with, the code does not buffer the object after it is created because unlike the other examples we have looked at over the past weeks, these objects do not need to be persistent. In other words, these objects will be created, used and then discarded.

Next, to simplify their identification, all VI Server classes have properties that return a Class ID number and a Class Name. The code uses the latter value to build the path and class name of the child class being requested.

Finally, after the code builds the path and name of the subclass it wants to use, it checks to see if the class exists and only attempts to load it if the defining lvclass file is found. If the file is missing, the code outputs a parent class object instead (a text-language sketch of this fallback follows the list below). The reason for this difference is twofold:

  1. Without it, if a control class was called that we had not implemented, the code would throw an error. Consequently, in order to prevent those errors I would have to create dozens of empty classes that served no functional purpose – and that is wasteful of both my time and computer resources.
  2. With it, the logic extends what normally happens when a method is not overridden in a subclass to include the case where the subclass hasn’t even been implemented yet: the parent class and – more to the point – the parent methods are invoked.
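
As promised, here is a rough Python equivalent of that load-or-fall-back logic (the module layout and naming convention are assumptions made purely for illustration):

import importlib

class DisplayProperties:
    """Stand-in for the parent Display Properties class."""

def create_update_object(vi_server_class_name):
    # build the expected location of the child class from the Class Name
    module_name = 'display_properties.' + vi_server_class_name.lower()
    try:
        module = importlib.import_module(module_name)
        cls = getattr(module, vi_server_class_name + 'Properties')
    except (ImportError, AttributeError):
        cls = DisplayProperties        # no subclass yet: use the parent, so
                                       # labels and captions still get handled
    return cls()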

Taking Care of Business

The dynamic dispatch VI Set Control Font.vi is obviously the parent method for what will eventually be a family of override methods that will address specific types of controls. But that begs the question: What should go in this VI?

Well, think about it for a moment. In the first possible implementation we looked at, things initially looked promising because changing the font and size of labels and captions was so easy. You’ll remember that the reason they were easy was that all controls and indicators can have them and the properties are always named the same. That sounds like a pretty good description of what we would want in a parent method – so here it is:

Set Font Parent

The structure is pretty simple: the code retrieves the control reference from where it was stored in the class data and passes it into a case structure that has cases for Label and Caption. In addition, it has an empty case that handles the Contents value of Component to Set. This case is empty because that value will be handled during override. So all we have left to do for right now is look at how these VIs fit into the structure we looked at earlier – all we really needed to replace was the case structure…

Font Manager

…and here it is. Nothing much new to see here, so let me just recommend that you take a good look at this code because you probably won’t be seeing it again. Since we will be adding functionality in the context of the class structure we created, we won’t need to revisit this logic any time soon – if ever.

The Big Tease

So with the basic structure in place, all we have to do is start populating the subclasses we need. But that will have to wait for next time when I will also post all the code.

Until Next Time…

Mike…

More Than One Kind of Modularity

When learning something that you haven’t done before – like .NET – it’s not uncommon to go through a phase where you look at some of the code you wrote early on and cringe (or at least sigh deeply). The problem is that you are often not only learning a new interface or API, but learning how to best use that interface or API. The cause of all the cringing and sighing is that as you learn more, you begin to realize that some of the assumptions and design decisions you made were misguided, if not flat-out wrong. If you look at the code we put together last time to help us learn about .NET in general, and the NotifyIcon assembly in particular, we see a gold-plated example of just such code. Although it clearly accomplished its original goal of demonstrating how to access .NET functionality and illustrating how the various objects can relate to one another, it is certainly not reusable – or maintainable, or extensible, or any of the other “-ables” that good software needs to be.

In fact, I created the code in that way so that this time we can take the lesson one step further and fix those shortcomings, thus demonstrating how you can go about cleaning up code (your own or inherited) that is making you cringe or sigh. Remember, it is always worth your time to fix bad design. I can’t tell you how many times I have seen people struggling with bad decisions made years before. Rather than taking a bit of time to fix the root cause of their trouble, they continue to waste hours on project after project in order to work around the problem.

Ok, so where do we start?

Clearly this code would benefit from cleanup and refactoring, but where and how should we start? Well, if you are working on an older code base, the question of where to start will not be a problem. You start where the most pain is. To put it another way, start with the things that cause you the biggest problems on a day-to-day basis.

This point, however, doesn’t mean that you should just sit around and wait for problems to arise. As you are working always be asking yourself if what you are doing has limitations, or embodies assumptions that might cause problems in the future.

The next thing to remember is that this work can, and should, be iterative. In other words, you don’t have to fix everything at once. Start with the most egregious errors, and address the others as you have the opportunity. For example, if you see the code doing something foolish like using a string as a state variable, you can fix that quickly by replacing the strings with a typedef enumeration. I have even fixed some long-standing bugs by doing this replacement because it resolved places where states were subtly misspelled or contained extraneous spaces.
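
The same fix looks like this in Python terms (the state names are invented for illustration):

from enum import Enum, auto

class State(Enum):             # replaces fragile string states
    INITIALIZE = auto()
    ACQUIRE = auto()
    SHUTDOWN = auto()

# With strings, 'Initialize ' (note the trailing space) would silently
# fall through to a default case at run time; with an enumeration, a
# misspelled member name fails immediately and obviously.
current_state = State.INITIALIZE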

Finally, remember that the biggest payoffs, in terms of long-term benefit, come from improved modularity that corrects basic architectural problems. As we shall see in the following discussion, I include under this broad heading modularity in all its forms: modular functionality, modular logic and modular data.

Revisiting Modular Functionality

Modular functionality is the result of taking small reusable bits of code and encapsulating them in routines that simplify access, standardize the interface or ensure proper usage. There are good examples of all these usages in the modified application code, starting with Create NotifyIcon.vi:

Create NotifyIcon VI

Your first thought might be to ask why I bothered turning this functionality into a subVI. After all, it’s just one constructor node. Well, yes, that is true, but it’s also true that in order to create that one node you have to remember multiple steps and object names. Even though this subVI appears rather simple, if you consider what it would take to recreate it multiple times in the future, you realize that it actually encapsulates quite a bit of knowledge. Moreover, I want to point out that this knowledge is largely stuff for which there is no overwhelming benefit to be gained from committing it to memory.

Next, let’s consider the question of standardizing interfaces. Our example in this case is a new subVI I created to handle the task of assigning an icon to the interface we are creating. I have named it Set NotifyIcon Icon.vi:

Set NotifyIcon Icon VI

You will remember from our previous discussion that this task involves passing a .NET object encapsulating the icon we wish to use to a property node for the NotifyIcon object. Originally, this property was combined with several others on a single node. A more flexible approach is to break up that functionality and standardize the interfaces of all the subVIs that set NotifyIcon properties: each simply consists of an object reference and the data to be used to set the property, expressed in a standard LabVIEW datatype – in this case a path input. This approach also clearly simplifies access to the desired functionality.

Finally, there is the matter of ensuring proper usage. A good place to highlight that feature is in the last subVI that the application calls before quitting: Drop NotifyIcon.vi.

Drop NotifyIcon VI

You have probably been warned many times about the necessity of closing references that you open. However, when working with .NET objects, that action by itself is sometimes not sufficient to completely release all the system resources that the assembly had been using. Most of the time, if you don’t completely close out the assembly you may notice memory leaks or errors from attempting to access resources that still think they are busy. However, with the NotifyIcon assembly you will see a problem that is far more noticeable, and embarrassing. If you don’t call the Dispose method your program will close and release all the memory it was using, but if you go to the System Tray you’ll still see your icon. In fact, you will be able to open its menu and even make selections – it just doesn’t do anything. Moreover, the only way to make it go away is to restart your computer.

Given the consequences of forgetting to include this method in your shutdown sequence, it is a good idea to make it an integral part of the code that you can’t forget to include.
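
In text-language terms this is the classic “cleanup in finally” pattern. A runnable Python sketch, with hypothetical wrappers standing in for the LabVIEW subVIs and the .NET reference:

from types import SimpleNamespace

def create_notify_icon():
    # hypothetical wrapper around the .NET NotifyIcon constructor
    return SimpleNamespace(Visible=True, Dispose=lambda: None)

def run_application(icon):
    pass                                   # the main application logic

notify_icon = create_notify_icon()
try:
    run_application(notify_icon)
finally:
    notify_icon.Visible = False            # hide the tray icon first
    notify_icon.Dispose()                  # without Dispose the icon can
                                           # linger until the next reboot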

Getting Down with Modular Logic

But as powerful as this technique is, there can still be situations where the basic concept of modularity needs to be expressed in a slightly different way. To see such a situation, let’s look at the structure that results from simply applying the previous form of modularity to the problem of building the menus that go with the icon.

Create ContextMenu VI

Comparing this diagram to the original one from last time, you can see that I have encapsulated the repetitive code that generated the MenuItem objects into dedicated subVIs. By any measure this change is a significant improvement: the code is cleaner, better organized, and far more readable. For example, it is pretty easy to visualize which menu items are on submenus. However, in cases such as this one, the improved readability can be a bit of a double-edged sword. To see what I mean, consider that for the structure of your code to allow you to visualize your menu organization, said organization must be hard-coded into the structure of the code. Consequently, changes to the menus will, as a matter of course, require modification to the fundamental structure of the code. If the justification for modularity includes concepts like flexibility and reusability, you just missed the boat.

The solution to this situation is to realize that there is more than one flavor of modularity. In addition to modularizing specific functionality, you can also modularize the logic required to perform complex and changeable tasks (like building menus) that you don’t want to hard-code. If this seems like a strange idea to you, consider that computers spend most of their time using their generalized hardware to perform specialized tasks defined by lists of instructions called “programs”. The thing that makes this process work is a generalized bit of software called a “compiler” that turns the programs into data structures that the generalized hardware can use to perform specialized actions.

Carrying forward with this line of reasoning, what we need is a simple way of defining a menu structure that is external to our program, and a “menu compiler” that turns that definition into the MenuItem references that our program needs. So let’s build one…

Creating the Data for Our Menu Compiler

So what should this menu definition look like? Well, to answer that question we need to start with the data required to define a single MenuItem. We see that, as a minimum, every item in a menu has to have a name for display to the user, a tag to identify it, and a parent tag that says whether the item has a parent item (and if so, which item is its parent). In addition, we haven’t really talked about it, but the order of references in an array of menu items defines the order in which the items appear in the menu or submenu – so we need a way to specify each item’s menu position as well. Finally, because in the end the menu will consist of a list (array) of menu item references, it makes sense to express the overall menu definition that we will eventually compile into that array of references as a list (and eventually also an array).

But where should we store this list of menu item definitions? At least part of the answer to this question depends on who you want to be able to modify the menu, and the level of technical expertise that person has. For example, you could store this data in text files as INI keys, or as XML or JSON strings. These files have the advantage of being easy to generate and readily accessible to anyone who has access to a text editor – of course, that is their major disadvantage as well. Databases, on the other hand, are more secure, but not as easy to access. For the purposes of this discussion, I’ll store the menu definitions in a JSON file because, when done properly, the whole issue of how to parse the data simply goes away.

To see what I mean, here is a nicely indented JSON file that describes the menu that we have been using for our example NotifyIcon application:

[
	{
		"Menu Order":0,
		"Item Name":"Larry",
		"Item Tag":"Larry",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Moe",
		"Item Tag":"Moe",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"The Other Stooge",
		"Item Tag":"The Other Stooge",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":3,
		"Item Name":"-",
		"Item Tag":"",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":4,
		"Item Name":"Quit",
		"Item Tag":"Quit",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":0,
		"Item Name":"Curley",
		"Item Tag":"Curley",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Shep",
		"Item Tag":"Shep",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"Joe",
		"Item Tag":"Joe",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	}
]

And here is the LabVIEW code that will convert this string into a LabVIEW array (even if it isn’t nicely indented):

Read JSON String

JSON has a lot of advantages over techniques like XML: for starters, it’s easier to read and a lot more efficient. But the real reason I like using JSON is that it is so very convenient.
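
To see just how convenient, here is the entire parsing job expressed in Python (the file name is hypothetical):

import json

with open('menu_definition.json') as f:
    menu_items = json.load(f)          # a list of dicts, one per menu item

print(menu_items[0]['Item Name'])      # -> 'Larry'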

Starting the Compilation

Now that we have our raw menu definition string read into LabVIEW and converted into a datatype that will simplify the next step in the processing, we need to ensure that the data is in the right order. To see why, we need to remember that the final data structure we are building is hierarchical, so the order in which we build it matters. For instance, “The Other Stooge” is a top-level menu item, but it is also a submenu so we can’t build it until we have references to all the menu items that are under it. Likewise, if one of the items under it is a submenu, we can’t build it until all its children are created.

So given the importance of order, we need to be careful how we handle the data, because none of the available storage techniques can, on their own, guarantee proper ordering. The string formats can all be edited manually, and it’s not reasonable to expect people to always type in data in the right order. And even though databases can sort the results of queries, there isn’t enough information in the menu definition to allow them to do so.

The menu definition we created does have a numeric value that specifies the order of items in their respective menus and submenus. We don’t, however, yet have a way of telling what level an item resides at relative to the overall menu structure. Logically we can see that “Larry” is a top-level menu item, and “Shep” is one level down, but we can’t yet determine that information programmatically. Still, the information we need is present in the data; it just needs to be massaged a bit. Here is the code for that task:

Ordering the Menu Items

As you can see, the process is basically pretty simple. I first rewrite the Item Tag value by appending the original Item Tag value to the colon-delimited list that starts with the Parent Tag. I then count the number of colons in the resulting string, and that is my Menu Level value. The exceptions to this processing are the top-level menu items, which are easy to identify due to their null parent tags. I simply force their Menu Level values to zero and replace the null string with a known value that will make the subsequent processing easier. The real magic, however, occurs after the loop stops. The code first sorts the array in ascending order and then reverses it. Due to the way the 1D array sort works when operating on arrays of clusters, the array will be sorted first by Menu Level and then by Menu Order – the first two items in the cluster. This sorting, in concert with the array reversal, guarantees that the children of a submenu will always be processed before the submenu item itself.
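
Here is the same massaging expressed as a Python sketch. It assumes the single level of nesting present in our example data, and ‘[top-menu]’ is the known parent value I mentioned:

def order_menu_items(items):
    for item in items:
        if item['Parent Tag'] == '':                 # a top-level item
            item['Parent Tag'] = '[top-menu]'
            item['Menu Level'] = 0
        else:                                        # rewrite the tag, then count
            item['Item Tag'] = item['Parent Tag'] + ':' + item['Item Tag']
            item['Menu Level'] = item['Item Tag'].count(':')
    # ascending sort on (level, order), then reverse: the children of a
    # submenu now always precede the submenu item itself
    items.sort(key=lambda i: (i['Menu Level'], i['Menu Order']))
    items.reverse()
    return items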

Some of you may be wondering why we go to all this trouble. After all, couldn’t we just add a value to the menu definition data to hold the Menu Level? Yes, we could, but it’s not a good idea, and here’s why. In some areas of software development (like database development, for instance) the experts put a lot of store in reducing “redundancy” – which they define basically as storing the same piece of information in more than one place. The problem is that if you have redundant information, you have to decide how to respond when the two pieces of information that are supposed to be the same, aren’t. So let’s say we add a field to the menu definition for the menu level. Now we have the same piece of information stored in two different places: it is stored explicitly in the Menu Level value, while at the same time it is also stored implicitly in the Parent Tag.

Generating the Menu Item “Code”

In order to turn this listing into the MenuItem references we need, we will pass this sorted and ordered array into a loop that will process one element at a time. And here it is:

Compiling the Menu-1

You can see that the loop carries two shift registers. The top SR holds a 1D array of strings consisting of the submenu tags that the loop has encountered so far. The other SR also carries a 1D array, but each element in it is a cluster containing an array of MenuItem references associated with the submenu named in the corresponding element of the top SR.

As the screenshot shows, the first thing that happens in the loop is that the code checks to see if the indexed Item Tag is contained in the top SR. If the tag is missing from the array, it means that the item is not a submenu, so the code uses its data to create a non-submenu MenuItem. In parallel with that operation, the code is also determining what to do with the reference that is being created by looking to see if the item’s Parent Tag exists in the top SR. If the item’s parent is also missing from the array, the code creates entries for it in both arrays. If the parent’s tag is found in the top SR, it means that one or more of the item’s sibling items has already been processed, so code is executed to add the new MenuItem to the array of existing ones:

Compiling the Menu-2

Note that the new reference is added to the top of the array. The reason for this departure from the norm is that, due to the way the sorting works, the menu order is also reversed, and this logic puts the items in each submenu back in their correct order. Note also that during this processing, the references associated with the menu items are accumulated in a separate array that will be used to initialize the callbacks. Because the array indexing operation is conditional, only MenuItems that are not submenus will be included in this array.

Generating the Submenu “Code”

If the indexed Item Tag is found in the top SR, the item is a submenu and the MenuItem references needed to create its MenuItem should be in the array of references stored in the bottom SR.

Compiling the Menu-3

So the first thing the code does is delete the tag and its data from the two arrays (since they are no longer needed) and use the data thus obtained to create the submenu’s MenuItem. At the same time, the code is also checking to see if the submenu’s parent exists in the top SR. As before, if the Parent Tag doesn’t exist in the array, the code creates an entry for it, and if it does…

Compiling the Menu-4

…adds the new MenuItem to the existing array – again at the top of the array. By the time this loop finishes, there should be only one element in each array. The only item left in the top SR should be “[top-menu]”, and the bottom SR should be holding the references to the top-level menu items. That array of references is in turn used to create the ContextMenu object, which is written to the NotifyIcon object’s ContextMenu property.
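
Pulling the whole loop together, an equivalent Python sketch looks roughly like this. A dictionary stands in for the pair of correlated shift-register arrays; make_menu_item and make_submenu are hypothetical wrappers around the .NET constructors, and, like the previous sketch, it assumes the single level of nesting in our example data:

def compile_menu(ordered_items, make_menu_item, make_submenu):
    pending = {}     # submenu tag -> list of child MenuItem references
    for item in ordered_items:
        tag, parent = item['Item Tag'], item['Parent Tag']
        if tag in pending:
            # every child of this submenu has already been processed
            menu_item = make_submenu(item['Item Name'], pending.pop(tag))
        else:
            menu_item = make_menu_item(item['Item Name'], item['Enabled'])
        # insert at the front: this undoes the earlier per-menu reversal
        pending.setdefault(parent, []).insert(0, menu_item)
    return pending['[top-menu]']    # the references for the ContextMenu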

What Could Possibly Go Wrong?

At this point, you can run the example code and see an iconic system tray interface that behaves pretty much as it did before, but with a few extra selections. However, we need to have a brief conversation about error checking, and frankly there are two schools of thought on this topic. There is ample opportunity for errors to creep into the menu structure. Something as simple as misspelling a parent tag name could result in an “orphan” menu that would never get displayed – or could end up being the only one that is displayed. So the question is: how much error checking do we really need to do? There are those who think you should spend a lot of time going through the logic looking for and trapping every possible error.

Given that most menus should be rather minimal, and errors of this sort are really obvious, I tend to concentrate on the low-hanging fruit. For example, one simple check that will catch a large number of possible errors is looking to see whether, at the end of the processing, there is more than one menu name left in the top SR – and if there is, asserting an error that gives the name of the extra menu. You should probably also use this error as an opportunity to abort the application launch, since you could be left in a situation where you can’t shut down the program because the “Quit” option is missing.

Something else that you might want to consider is what to do if the external file containing the menu definitions comes up missing. The most obvious solution is to, again, abort the application launch with some sort of appropriate error message. However, depending on the application it might be valuable to provide a hard-coded default menu that doesn’t depend on external files and provides a certain minimum level of functionality. In fact, I once worked on an application where this was an explicit requirement because one of the things that the program allowed the user to do was create custom menus, the structure of which was stored in external files.

Stooge Identifier – Release 2
Toolbox – Release 11

The Big Tease

So what are we going to talk about next time? Well, something that I have seen coming up a lot lately on the user forum is the need to work with very large datasets. Often this issue arises when someone tries to display the results of a test that ran for several hours (or days!) only to discover that the complete dataset consists of hundreds of thousands of separate datapoints. While LabVIEW can easily deal with datasets of this magnitude, it should be obvious that you need to really bring your memory management “A” game. Next time we’ll look into how to plot and manage VLDs (Very Large Datasets).

Until Next Time…

Mike…

A Brief Introduction to .NET in LabVIEW

From the earliest days of LabVIEW, National Instruments has recognized that it needed the ability to incorporate code that was developed in other programming environments. Originally this capability was realized through specialized functions called Code Interface Nodes, or CINs. However, as the underlying operating systems continued to develop, LabVIEW acquired the ability to leverage such things as DLLs, ActiveX controls and .NET assemblies. Unfortunately, while .NET solves many of the problems that earlier efforts to standardize sharable code exhibited, far too many LabVIEW developers feel intimidated by what they see as unmanageable complexity. The truth, however, is that there are many well-written .NET assemblies that are no more difficult to use than VI Server.

As an example of how to use .NET, we’ll look at an object that comes with all current versions of Windows. Called NotifyIcon, it is the mechanism that Windows itself uses to give you access to programs through the part of the taskbar called the System Tray. However, beyond that usage, it is also an interesting example of how to utilize .NET to implement an innovative interface for background tasks.

The Basic Points

Given that the whole point of this lesson is to learn about creating a System Tray interface for your application, a good place to start the discussion is with a basic understanding of how the bits will fit together. To begin with, it is not uncommon, though technically untrue, to hear someone say that their program was, “…running in the system tray…”. Actually, your program will continue to run in the same execution space, with or without this modification. All this .NET assembly does is provide a different way for your users to interact with the program.

But that explanation raises another question: If the .NET code allows me to create the same sort of menu-driven interface that I see other applications using, how do the users’ selections get communicated back to the application that is associated with the menu?

The answer to that question is another reason I wanted to discuss this technique. As we have talked about before, as soon as you have more than one process running, you encounter the need to communicate between processes – often to tell another process that something just happened. In the LabVIEW world we often do this sort of signalling using UDEs. In the broader Windows environment, there is a similar technique that is used in much the same way. This technique is termed a callback, and it can seem a bit mysterious at first, so we’ll dig into it as well.

Creating the Constructor

In the introduction to this post, I likened .NET to VI Server. My point was that while they are in many ways very different, the programming interface for each is exactly the same. You have a reference, and associated with that reference you have properties that describe the referenced object, and methods that tell the object to do something.

To get started, go to the .NET menu under the Connectivity function menu, and select Constructor Node. When you put the resulting node on a block diagram, a second dialog box will open that allows you to browse to the constructor that you want to create. The pop-up at the top of the dialog box has one entry for each .NET assembly installed on your computer – and there will be a bunch. You locate assemblies in this list by name, and the name of the assembly we are interested in is System.Windows.Forms. On your computer there may be more than one assembly with this basic name installed. Pick the one with the highest version (the number in parentheses after the name).

In the Objects portion of the dialog you will now see a list of the objects contained in the assembly. Double click on the plus sign next to System.Windows.Forms and scroll down the list until you find the bullet item NotifyIcon, and select it. In the Constructors section of the dialog you will now see a list of constructors that are available for the selected object. In this case, the default selection (NotifyIcon()) is the one we want so just click the OK button. The resulting constructor node will look like this:

notifyicon constructor

But you may be wondering how you are supposed to know what to select. That is actually pretty easy. You see, Microsoft offers an abundance of example code showing how to use the assemblies, and while they don’t show examples in LabVIEW, they do offer examples in two or three other languages and – this is the important point – the object, property and method names are the same regardless of language. So it’s a simple matter to look at the example code and, even without knowing the language, figure out what needs to be called, and in what order. Moreover, LabVIEW property and invoke nodes will list all the properties and methods associated with each type of object. As an example of the properties associated with the NotifyIcon object, here is a standard LabVIEW property node showing four properties that we will need to set for even a minimal instance of this interface. I will explain the first three; hopefully you will be able to figure out what the fourth one does on your own.

notifyicon property node

Starting at the top is the Text property. Its function is to provide the tray icon with a label that will appear like a tip-strip when the user’s mouse hovers over the icon. To this we can simply wire a string. You’ll understand the meaning of the label in a moment.

Giving the Interface an Icon

Now that we have created our NotifyIcon interface object and given it a label, we need to give it an icon that it can display in the system tray. In our previous screenshot, we see that the NotifyIcon object also has a property called Icon. This property allows you to assign an icon to the interface we are creating. However, if you look at the node’s context help you see that its datatype is not a path name or even a name, but rather an object reference.

context help window

But don’t despair: we just created one object, and we can create another. Drop down another empty .NET constructor, but this time go looking for System.Drawing.Icon and, once you find the listing of possible constructors, pick the one named Icon(String fileName). Here is the node we get…

icon constructor

…complete with a terminal to which I have wired a path that I have converted to a string. In case you missed what we just did, consider that one of the major failings of older techniques, such as making direct function calls to DLLs, was how to handle complex datatypes. The old way of handling them was through the use of a C or C++ struct, but to make this method work you ended up needing to know way too much about how the function worked internally. In addition, for the LabVIEW developer it was difficult, if not impossible, to build these structures in LabVIEW. By contrast, the .NET methodology utilizes object-oriented techniques to encapsulate complex datatypes into simple-to-manipulate objects that accept standard data inputs and hide all the messy details.
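
Incidentally, if you ever want to poke at this same object outside LabVIEW, the pythonnet package exposes the identical object, property and method names from Python on Windows – which nicely illustrates the earlier point about names carrying across languages. Here is a minimal sketch; the icon file name is my invention.

    # Requires Windows and pythonnet ("pip install pythonnet").
    import clr
    clr.AddReference("System.Windows.Forms")
    clr.AddReference("System.Drawing")
    from System.Windows.Forms import NotifyIcon
    from System.Drawing import Icon

    tray = NotifyIcon()                # the default NotifyIcon() constructor
    tray.Text = "Stooge Identifier"    # the tip-strip label from earlier
    tray.Icon = Icon("stooges.ico")    # the Icon(String fileName) constructor
    tray.Visible = True                # make the icon appear in the tray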

Creating a Context Menu

With a label that will provide the users a reminder of what the interface is for, and an icon to visually identify the interface, we now turn to the real heart of the interface: the menu itself. As with the icon, assigning a menu structure consists of writing a reference to a property that describes the object to be associated with that property. In this case, however, the name of the property is ContextMenu, the object for which we need to create a constructor is System.Windows.Forms.ContextMenu, and the name of the constructor is ContextMenu(MenuItem[] menuItems).

context menu constructor

From this syntax we see that in order to initialize our new constructor we will need to create an array of menuItems. You’ve got to admit, this makes sense: our interface needs a menu, and the menu is constructed from an array of menu items. So now we look at how to create the individual menu items that we want on the menu. Here is a complete diagram of the menu I am creating – clearly inspired by a youth spent watching way too many old movies (nyuk, nyuk, nyuk).

menu constructors

Sorry for the small image, but if you click on it, you can zoom in. As you examine this diagram, notice that while there is a single type of menuItem object, there are two different constructors used. The most common one has a single Text initialization value. The NotifyIcon uses that value as the string that will be displayed in the menu. This constructor is used to initialize menu items that do not have any children, or submenus. The other menuItem constructor is used to create a menu item that has other items under it. Consequently, in addition to a Text initialization value, it also has an input that is – wait for it – an array of other menu items. I don’t know if there is a limit to how deeply a menu can be nested, but if that is a concern, you need to be rethinking your interface.
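
To visualize the relationship between the two constructors, here is the same idea expressed as a quick Python sketch, with a hypothetical MenuItem class standing in for the .NET object (the submenu contents beyond Shep are my guesses):

    # Hypothetical stand-in for the .NET menuItem: a leaf takes just its Text
    # value; a submenu takes Text plus an array of child items.
    class MenuItem:
        def __init__(self, text, children=None):
            self.text = text
            self.children = children or []   # empty for leaf items

    menu = [
        MenuItem("Larry"),
        MenuItem("Moe"),
        MenuItem("The Other Stooge", [
            MenuItem("Curly"),
            MenuItem("Shep"),
            MenuItem("Joe"),
        ]),
    ]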

In addition to the initialization values that are defined when the item is created, a menuItem object has a number of other properties that you can set as needed. For instance, items can be enabled and disabled, checked, highlighted and split into multiple columns (to name but a few options). A property that I use, but whose utility might not be readily apparent, is Name. Because it doesn’t appear anywhere in the interface, programmers are pretty much free to use it as they see fit, so I am going to use it as the label that identifies each selection programmatically. Which, by the way, is the next thing we need to look at.

Closing the Event Loop

If we stopped with the code at this point, we would have an interface with a perfectly functional menu system, but which would serve absolutely no useful purpose. To correct that situation we have to “close the loop” by providing a way for the LabVIEW-based code to react in a useful way to the selections that the user makes via the .NET assembly. The first part of that work we have already completed by establishing a naming convention for the menu items. This convention largely guarantees menu items will have a unique name by defining each menu item name as a colon-delimited list of the menu item names in the menu structure above it. For example, “Larry” and “Moe” are top-level menu items so their names are the same as their text values. “Shep” however is in a submenu to the menu item “The Other Stooge” so its name is “The Other Stooge:Shep”.

The other thing we need in order to handle menu selections is to define the callback operations. To simplify this part of the process, I like to create a single callback process that services all the menu selections by converting them into a single LabVIEW event that I can handle as part of the VI’s normal processing. Here is the code that creates the callback for our test application:

callback generator

The way a callback works is that the callback registration node incorporates three terminals. The top terminal accepts an object reference. After you wire it up, the terminal changes into a pop-up menu listing all the callback events that the attached item supports. The one we are interested in is the Click event. The second terminal is a reference to the VI that LabVIEW will execute when the event you selected fires. However, you can’t wire just any VI reference here. For it to be callable from within the .NET environment it has to have a particular set of inputs and a particular connector pane. To help you create a VI with the proper connections, you can right-click on the terminal and select Create Callback VI from the menu. The third terminal on the callback registration node is labelled User Parameters, and it provides the way to pass static application-specific data into the callback event.

There are two important points here: First, as I stated before, the User Parameters data is static. This means that whatever value is passed to the terminal when the callback is registered is from then on essentially treated as a constant. Second, whatever you wire to this terminal modifies the data inputs to the callback VI so if you are going to use this terminal to pass in data, you need to wire it up before you create the callback VI.
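
If you think more readily in text than in wires, the pattern is the same as binding an argument at registration time. A loose Python analogy, with all names hypothetical:

    from functools import partial

    class UserEvent:                   # stand-in for a LabVIEW UDE reference
        def fire(self, data):
            print(f"SysTray Callback UDE fired with: {data}")

    def on_click(item_name: str, ude: UserEvent):
        # Plays the role of the callback VI: identify the item, fire the UDE.
        ude.fire(item_name)

    ude = UserEvent()
    callbacks = {
        # Both values are bound -- made "static" -- at registration time,
        # just like the Control Ref and User Parameters inputs.
        name: partial(on_click, name, ude)
        for name in ("Larry", "Moe", "The Other Stooge:Shep")
    }
    callbacks["The Other Stooge:Shep"]()   # simulate a Click event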

In terms of our specific example, I have an array of the menu items that the main VI will need to handle so I auto-index through this array creating a callback event for each one. In all cases, though, the User Parameter input is populated with a reference to a UDE that I created, so the callbacks can all use the same callback VI. This is what the callback VI looks like on the inside:

callback vi

The Control Ref input (like User Parameter) is a static input, so it contains the reference to the menu item that was passed to the registration node when the callback was created. This reference allows me to read the Name property of the menu item that triggered the callback, and then use that value to fire the SysTray Callback UDE. It’s important, when creating a callback VI, not to include too much functionality. In fact, this is about as much code as I would ever put in one. The problem is that this code is nearly impossible to debug because it does not actually execute in the LabVIEW environment. The best solution is to get the selection into the LabVIEW environment as quickly as possible and deal with any complexity there. Finally, here is how I handle the UDE in the main VI:

systray callback handler

Here you can see another reason why I created the menu item names as I did. Separating the different levels in the menu structure with colons allows the code to easily parse the selection, and simultaneously organizes the logic.
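
In code terms, the parsing amounts to nothing more than a split on the delimiter. A small sketch:

    # Dispatching a colon-delimited menu selection (names from the example).
    def handle_selection(name: str) -> None:
        parts = name.split(":")           # "The Other Stooge:Shep" -> 2 parts
        if len(parts) == 1:
            print(f"Top-level item: {parts[0]}")
        else:
            submenu, item = parts[0], parts[-1]
            print(f"Item {item!r} under submenu {submenu!r}")

    handle_selection("Moe")                    # Top-level item: Moe
    handle_selection("The Other Stooge:Shep")  # Item 'Shep' under submenu...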

Future Enhancements

With the explanations done, we can now try running the VI – which disappears as soon as you start it. However, if you look in the system tray, you’ll see its icon. As you make selections from its menu, you will see factoids appear about the various Stooges. But this program is just the barest of implementations and there is still a lot you can do. For example, you can open a notification balloon to notify the user of something important, or manipulate the menu properties to show checkmarks on selected items or disable selections to which you want to block access.

The most important changes you should make, however, are architectural. For demonstration purposes the implementation I have presented here is rather bare-bones. While the resulting code is good at helping you visualize the relationships between the various objects, it’s not the kind of code you would want to ship to a customer. Rather, you want code that simplifies operation, improves reusability and promotes maintainability.

Stooge Identifier — Release 1

The Big Tease

So you have the basics of a neat interface, and a basic technique for exploring .NET functionality in general. But what is in store for next time? Well, I’m not going to leave you hanging. Specifically, we are going to take a hard look at menu building to see how best to modularize that functionality. Although this might seem a simple task, it’s not as straightforward as it first appears. As with many things in life, there are solutions that sound good – and there are those that are good.

Until Next Time…

Mike…

Helping a Window to Remember

One of the most common, most basic, and most mindless things we do with computers every day is open windows. Launching a program or opening a document is often synonymous (on a practical level at least) with opening a window. As common as this action is, we rarely give any thought to what is going on behind the scenes when we open a window – hence the wisecrack about it being a mindless operation.

However, if we want to make the most of our design efforts, we need to replace this “mindlessness” with “mindfulness” by really thinking about the things that make windows easy and comfortable to use.

Defining a Well Behaved Window

As we begin looking at the behavior of windows, I want to emphasise that I am not talking about user interface design. User interface design deals with the details of what a window does functionally. Rather, what I’m talking about is an examination of the behaviors a window should exhibit, regardless of what happens to be on the screen.

If we think about the window as a kind of frame that supports the interface’s core functionality, we see that one of the big things a window can do is remember things. Over the years, people have developed, and posted on the LabVIEW forums, a variety of toolboxes for storing generic window information like screen customizations, positions and settings. One of these toolsets combined with an event-driven structure can make it easy to significantly pump up the convenience factor of just about any application.

To see how these sorts of tools work, we’re going to enhance our undockable windows application with a simple addition that automatically saves a window’s last position and restores that position the next time the window undocks. Although the basic logic is simple, it provides us with the opportunity to discuss many of the major issues that impact this sort of functionality.

The Data, and Where to Keep it

When considering the data that this sort of functionality requires and uses, the operative word is “convenience”. By that I mean that this data may make using the screen more convenient, but nobody is going to be crying if it gets lost. In fact, a valuable behavior is the ability to basically “reset” all the stored data back to its default values by simply deleting the data from the file that is storing it and letting the application rebuild it as needed.

Likewise the data should be of low “intelligence” value. In other words, we don’t want to include things in this data that could constitute a security risk. However, having said that, we also want to make sure that a well-meaning user doesn’t mess up the program’s operation by manually editing the data. My approach to blocking such edits has three major points:

  • Be careful about what you name things: You want to choose identifiers that are, of course, meaningful. However, you don’t want to use names that call attention to themselves in a way that says, “Hi, I’m a setting you might want to play with…”
  • Use a non-obvious data structure: For example, in our example we don’t save a window’s position as a simple list of four values. The problem is that a user looking at those values might decide to try editing them manually – a simple act that could have some significant side effects. To see why, consider that the way you move a window around the screen is by changing the VI’s WinBounds property. However, this property defines a window’s position by essentially specifying the locations of the window’s upper-left and lower-right corners. Consequently, while it does set the window’s location, it is also specifying the window’s size.
  • Provide a simple way to validate the data: Given that there is no way to know ahead of time what sort of data you might want to store, validation might seem like a huge task, but it’s really not. Remember, when validating the data you don’t have to prove the data values are valid, just that they haven’t changed since the program wrote them.

As we get into how the position saving is implemented, you’ll see how I put these ideas into action, but first we need to look at how we are going to implement the capability from a high-level view.

Our Basic Approach

When adding new or enhanced functionality, you want to do so in a way that requires as few modifications to (and has as little impact on) the existing structure as possible. This ability to easily incorporate new functionality is a large part of the meaning of the term “maintainable”. It is also why it is always good to think about your overall application in terms of functional blocks – specific VIs that do specific things and handle specific situations.

With that point in mind we know we have two basic operations we need to add: one sets the position when we open a VI and one writes a new position when we close it. Of these two operations, the simplest is the one that reads the last saved location and moves the window to that location. It’s simple because there is only one place in the code where that opening takes place, and that is right here:

Read Position Installed

This is the VI Float the VI.vi, and if you compare it to the version that I presented last week you will note that it has one extra subVI, which uses a reference to the VI being opened to look up and set the window position. We’ll look at exactly how it does that in just a moment. The operation that saves a VI’s last open position can also be boiled down to a single subVI but, due to the nature of our application, it will have to be installed in two locations.

Save Position Installed 1

Here’s the first of those locations. It occurs in the subVI Unfloat the VI.vi and it handles the case where the user closes any of the floating windows. Again you’ll notice one added subVI. Using the VI reference supplied to it, the subVI determines the target VI’s new window position and saves it. The other place where this VI occurs is in the event that stops the application.

Save Position Installed 2

Here the event logic checks to see if a window is docked, and if not calls the same subVI to save the window position of the VI associated with the reference.

Digging for Details

Now that we see where the modification fits into the application, let’s look at how the subVIs work – starting with the routine that saves the window position.

Save Window Location

As you can see, I am using the configuration file VIs to store the data in a text file using the INI file structure. However, it’s not the application’s INI file, but rather one that I am creating in the user’s “My Documents” directory. This selection has at least a couple of implications. First, it means that every user who logs into the computer will have their own set of customizations. Second, if the user wants to reset all their customizations back to default, all they have to do is delete or rename that one file.
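
If you wanted to prototype the same storage scheme in a text language, Python’s configparser makes for a concise sketch. The file name, section and key names here are my inventions:

    import configparser
    from pathlib import Path

    # One INI file per user, living in their Documents directory.
    SETTINGS = Path.home() / "Documents" / "window_prefs.ini"

    def save_setting(section: str, key: str, value: str) -> None:
        cfg = configparser.ConfigParser()
        cfg.read(SETTINGS)               # preserve anything already stored
        if not cfg.has_section(section):
            cfg.add_section(section)
        cfg.set(section, key, value)
        with open(SETTINGS, "w") as f:
            cfg.write(f)

    # Deleting (or renaming) the one file resets every customization.
    save_setting("Window Positions", "Temperature Controller", "0A2B3C4D")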

Next, notice that this VI was designed to be usable in two different ways. If the VI reference input carries a valid reference, the code uses that reference to get the data it wants to save. Alternatively, if the VI reference provided is not valid, the true case of the structure (not shown) opens a reference to the VI calling this subVI and saves the data for it.

Finally, let’s look at the subVI that the code uses to convert the window bounds data into the string that will be saved to the custom INI file.

Pack WinBounds

In keeping with the concepts I cited earlier, I obfuscate the datatype by flattening the structure to a string, converting the string into an array of U8s, and finally formatting the array as a string of 2-character hexadecimal values. However, before making that last conversion I provide a mechanism for ensuring data validity: I append a 16-bit CRC. The result is a string that will allow you to detect whether it has been manipulated outside the program.
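
Here is roughly what that pack operation looks like as a Python sketch. struct stands in for LabVIEW’s flatten-to-string, the field order is assumed, and the particular CRC flavor (poly 0x1021, zero init, no final XOR) is my choice for illustration:

    import struct

    def crc16(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
        """Bit-at-a-time CRC-16 (XModem flavor)."""
        crc = init
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
                crc &= 0xFFFF
        return crc

    def pack_winbounds(top: int, left: int, bottom: int, right: int) -> str:
        flat = struct.pack(">4i", top, left, bottom, right)  # "flatten"
        flat += crc16(flat).to_bytes(2, "big")    # append the 16-bit CRC
        return flat.hex().upper()                 # 2-character hex per byte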

Turning now to the VI that retrieves the data, we see logic that reverses what was done in the position save routine. However, there is one added twist: if this VI is being called for the first time, the position of the target VI might not be in the INI file. Consequently, the code needs to be able to recognize that situation and just let the window open in its default position.

Read Location and Position Window

The subVI (below) that converts the string from the INI file back into the LabVIEW data structure defining a window’s bounds generates a structure containing all 0s when it is passed an empty string – which is what you get when you try to read a string value that isn’t in the INI file.

Unpack WinBounds

Also notice how this VI verifies the CRC. It’s commonly believed that in order to verify a CRC you have to split the CRC from the message, calculate a new CRC on the message, and compare the result to the CRC received as part of the message. However, such is not the case. Due to the way the CRC calculation works, if you simply perform a CRC on the entire message, including the original CRC, the result will always be 0 for a valid message and CRC. Hence, the logic only goes to the trouble of splitting the message from the CRC after it has determined that the message is valid. In the case where the second CRC calculation is not zero (indicating a corrupted message), the logic outputs the same invalid data structure that you get from a null input string.

Given these facts, the VI for reading the target VI’s last position only has to look for that known-invalid data structure and, if it finds it, bypass the logic that sets the window’s bounds.
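
Putting the whole read path together as a Python sketch (same assumed CRC flavor as the pack sketch earlier), an empty string and a corrupted string both come back as the known-invalid all-zeros structure:

    import struct

    def crc16(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
        crc = init
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
                crc &= 0xFFFF
        return crc

    INVALID = (0, 0, 0, 0)  # the known-invalid bounds structure

    def unpack_winbounds(packed: str):
        if not packed:              # the key was absent from the INI file
            return INVALID
        try:
            raw = bytes.fromhex(packed)
        except ValueError:          # hand-edited into something non-hex
            return INVALID
        if len(raw) != 18 or crc16(raw) != 0:
            return INVALID          # CRC over message+CRC is 0 if untouched
        return struct.unpack(">4i", raw[:-2])  # split off CRC, "unflatten"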

Undockable Windows – Release 2
Toolbox – Release 10

The Big Tease

So there we have it: a basic framework that you can use to implement a variety of “window memory” functions. But what about next time? I have talked a lot about processes that run in the background. What happens, though, if you want to provide a minimal interface that isn’t always there but can be easily called up when needed? Next time I’m going to show how you can utilize the Windows system tray to house just such an interface. At the same time we’ll look at one of the more interesting backwaters of LabVIEW development – .NET callbacks.

Until Next Time…

Mike…

Creating a Reconfigurable Interface Using Undockable Windows

Something I have always enjoyed doing is creating programs or interfaces that do things you don’t expect LabVIEW to be able to do. Consequently, I thought I would take a couple posts and consider some useful and perhaps surprising interfaces.

The first one I want to look at is actually an idea I got from another LabVIEW Champion – Ben Rayner by name. Some time ago, he posted on the user forum a small proof-of-concept VI for an interface that is so cool I thought I’d polish it up, flesh it out and see what came of it.

What it does

At first glance, the interface appears similar to the testbed application that we have built on this blog. There is a common display area, and by making selections from a pop-up menu you can display screens showing different data.

docking single screen

The difference is that if you click a button, the GUI will “undock” the current screen and turn it into an independent floating window that is no longer accessible from the popup menu.

docking 3 screens

The software allows an operator to undock as many screens as they want at one time. The only limitations are screen space and logic that mandates leaving at least one screen docked. Likewise, if you close one of the floating windows, it will again take up its original position in the selection menu. As is our usual policy, a link to the code is a little further along in this post.

Why This Interface?

I fully plan to get into how the code works, but first we need to consider why you would want to use this interface in the first place. What is the use case that this interface addresses?

Hopefully it should be obvious to anyone who considers the matter for more than a moment or two that user interfaces are always compromises. If you do your due diligence when designing an interface you try to put together in one screen or window the various bits of information that are logically related, or at least will be used together. However, requirements can change, or a particular type of user might need to be able to correlate pieces of data that are on different screens.

You could create a separate screen for just that user, but that solution requires some additional development effort. However, if you create windows that can be undocked to be moved around the screen as desired, users that want to see their data in a particular way may be able to do so without waiting for you to create a custom screen.

So with all that in mind, let’s think about how we might accomplish this sort of GUI.

How it Works in Theory…

The first thing you need to realize as we begin looking at how to create this interface is that, if you have been following this blog, you already know everything required to make this happen. The only thing missing is an understanding of how to make all the pieces fit together in a slightly different way.

For example, we have talked many times about how to create a basic multiscreen interface using subpanels. Likewise, we know how to make windows float on top of one another by defining their behavior as floating. Now if you think about it, these are the two states that our screens can be in: as a subpanel (when docked) and as a floating window (when undocked).

So our problem is really as simple as deciding how to manage the transition from one state to the other. If you are following the recommendations I have been making, the screens are already written as separate processes that don’t know (or care) how they are being displayed – or even if they are being displayed.

So how do we want this to behave? Well, for the undocking there are a variety of options. LabVIEW supports drag-and-drop, so we might be able to do something with that. Alternatively, we could create a pull-down menu with an “Undock Current Window” item on it, or even a custom shortcut menu that appears when the user right-clicks on the interface’s front panel. But those are really just different ways of triggering the same logic, so for this demo I’m just going to create a button below the subpanel that is labeled something obscure like, “Click to Undock the Current Display”.

For docking one of the floating windows back in the interface, our options are more limited. But when you have a really good technique available, you only need one. How about this: when you click the window’s close button it gets docked back into the interface. After all, it’s what most users would try anyway.

…and in Practice

Now that we basically understand where we are going, let’s start looking at some code.

Undockable Windows – Release 1
Toolbox – Release 9

The block diagram of the top-level VI is pretty much what a regular reader of this blog would expect. It starts by running a subVI that loads into memory, and starts executing, all the VIs that are going to be available through the subpanel. References to these VIs are stored in a DVR so they can be used later to populate the subpanel itself. This subVI also outputs an array of strings that are the names of the VIs it loaded. The code uses this array to initialize the strings in a ring control, and then programmatically fires the value change event associated with the ring.

initialization for undocking

With this work done, the VI next registers to receive a UDE that we will discuss in a moment, and enters the program’s main event loop. This loop includes a value change event for the ring control that changes the VI that is visible in the subpanel, and two additional events that manage the undocking and docking processes.

Getting Undocked

The first of these new events handles the undocking of windows and, understandably, is a value change event on the undocking button. The event logic goes about its work in two steps that are contained in separate subVIs. The first one is called Float the VI.vi.

undock

Its job is to remove the VI that is being undocked from the subpanel using the Remove VI subpanel control method, and then (after looking up its VI reference in the DVR) open its front panel using the FP:Open VI method. The other half of the operation is performed in Process Undock.vi.

process undock

This VI’s purpose is to update the user interface to reflect the VI’s change of state – which in this case means simply disabling its selection in the ring control, and updating the DVR. This subVI is also responsible for getting a reference to the next available docked window and inserting it into the subpanel. As a by-product of this operation, the subVI also generates a flag that indicates when there is only one docked window remaining. The event handler uses this flag to disable the undock button when all but one of the displays have been undocked.

Docking a Window

Like the previous one, this event handles its duties as a two-step process. But unlike the previous one, it is driven by a UDE from the VI that is being docked back into the main display. As stated before, this action should be triggered when the user closes the VI’s front panel. The plugins accomplish this task by intercepting the Panel Close? event and, instead of closing their front panels, simply firing a UDE (Redock Screen) that tells the GUI to reincorporate them back into the subpanel. The first subVI in the event handler is Unfloat the VI.vi.

redock

After looking up the VI’s reference in the DVR, it closes the VI’s front panel and inserts it back into the subpanel. Note that it is not necessary to remove the VI that is already there. The other subVI that the event handler calls is Process Redock.vi.

process redock

Basically reversing the operations that were performed when the window was undocked, this VI removes the window’s name from the list of disabled items in the ring control and updates the status stored in the DVR.
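
Abstracted away from the LabVIEW specifics, the bookkeeping these two Process subVIs share is quite small. Here is a Python sketch of the state that would live in the DVR; all the names are mine:

    class DockState:
        """Tracks which plugin screens are floating vs. docked."""
        def __init__(self, screens: list[str]):
            self.screens = screens           # all plugin screen names
            self.undocked: set[str] = set()  # names currently floating

        def undock(self, name: str) -> bool:
            """Mark a screen as floating. Returns True when only one screen
            remains docked (time to disable the undock button)."""
            self.undocked.add(name)
            return len(self.screens) - len(self.undocked) == 1

        def redock(self, name: str) -> None:
            self.undocked.discard(name)      # re-enable it in the ring

        def next_docked(self) -> str:
            # Next available docked screen to show in the subpanel; the
            # one-screen-docked rule guarantees this always finds one.
            return next(s for s in self.screens if s not in self.undocked)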

The Big Tease

So there’s a basic implementation of a pretty neat capability, but what now? What sort of enhancements might we reasonably want to make? I have an idea. You’ll notice that when the windows undock, they always open in the middle of the main screen. This is certainly a reasonable approach, but an enhancement that would be sure to make your users happy is to provide some sort of mechanism whereby each window remembers its last position. With that capability in place, a window would reopen in the same place it was when it was last closed.

So when we next get together, let’s do that. It will be a good thing to know how to do in general.

Until Next Time…

Mike…