Laying the good foundation, with TCP…

In case you are just joining the conversation, we are in the midst of a project to modify the testbed application that we have been slowly assembling over the past year. I would heartily recommend that you take some time and review the past posts.

To this point in our latest expansion project, we have created a remote control interface, embedded it in our testbed and performed some basic testing to verify that the interface works. Our next step is to create the first of several “middleware” processes. I call them middleware because they sort of sit between our application’s basic code and the external applications and users. In future installments we will look at middleware for .NET, ActiveX and WebSockets, but we will start with a more fundamental interface: TCP/IP.

The Roadbed for the Information Highway

Aside from giving me the opportunity to air out some tired metaphors, TCP/IP is a good place to start because it lets us examine the protocol that underlies a host of other connection options.

Just the basics

Although the idea of creating a “server” can have a certain mystique, there really isn’t much to it – at least when you are working with LabVIEW. The underlying assumption of the process is that there is something monitoring the computer’s network interface, waiting for a client application to request a TCP/IP connection. In network parlance, this “something” is called a “listener” because it listens to the Ethernet interface for a connection. However, a given listener isn’t simply listening for any connection attempt; rather, the network standards define the ability to create multiple “ports” on a single interface, and then associate particular ports with particular applications. Thus, when you create the listener you have to tell it what port it is to monitor. In principle, a port number can be any U16 value (0 through 65535), but existing standards specify what sort of traffic is expected on certain port numbers. For example, by default HTTP connections are expected on ports 80 or 8080, port 21 is the default for FTP and LabVIEW by default listens to port 3363. All you have to do is pick a number that isn’t being used for anything else on your computer. To create a listener in LabVIEW, there is a built-in function called TCP Create Listener. It expects a port number, and returns a reference to the listener that it creates – or an error if you pick a port on which some other application is already listening.

Once you have created the listener, you have to tell it to start listening by calling the built-in function TCP Wait On Listener. As its name implies, it waits until a connection is made on the associated port, though you will typically want to specify a timeout. When this function sees and establishes an incoming connection it outputs a new reference specific to that particular connection. A connection handler VI can then use that reference to manage the interactions with that particular remote device or process.

Finally, when you are done with your work, you shut down the server by closing the listener reference (TCP Close Connection), along with any connection references that you have open. Put these three phases together and you come up with something like this.

The Simplest TCP Server

This simple code creates a listener, waits for a connection, services that connection (it reads 4 bytes from it), and then quits. While this code works, it isn’t really very useful. For example, what good is a server that only waits for one connection and then quits? Thankfully, it’s not hard to expand this example. All you have to do is turn it into a mini state-machine.
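For readers who want to poke at the same idea outside of LabVIEW, here is a rough Python analog of that minimal server. The port number and the 4-byte read mirror the LabVIEW example above; everything else is just illustrative.

import socket

PORT = 3363  # LabVIEW's default port, per the discussion above

# Create the "listener" and associate it with the chosen port
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("", PORT))
listener.listen(1)

# Wait for a single connection, service it (read 4 bytes), then quit
connection, remote_address = listener.accept()
data = connection.recv(4)
print("Received:", data, "from", remote_address)

# Close both the connection and the listener
connection.close()
listener.close()

Like the LabVIEW version, this server handles exactly one connection and then exits – which is precisely the limitation the state machine below is meant to remove.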

One Step at a Time

As usual, the state machine is built into the timeout event, so the loop includes a shift register pointing to the next state to be executed, and a second one carrying the delay that the code will impose before going to that state. But before we get into the specifics, here’s a state diagram showing the process’ basic flow.

State Diagram

Execution starts with the Initialize Listener state. Its main job at this point is to create the TCP listener. Next is the aptly named Wait for Connection state. It patiently waits for a connection by looping back to itself with a short timeout. As long as no connection is established, this state will execute over and over again. This series of short waits gives other events (like the one for shutting down the server) a chance to execute.

When a connection is made, the machine transitions to the Spawn Handler state. Since it is critical that the state machine gets back to waiting for a new connection as soon as possible, this state dynamically launches a reentrant connection handler VI and immediately transitions back to the Wait for Connection state.

The state machine continues ping-ponging between these last two states until the server is requested to stop. At that point, the code transitions to the Close Listener state, which disposes of the listener and stops the state machine. So let’s look at some real code to implement these logical states – which, by the way, reside in a new process VI named TCP-IP Server.vi.
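If it helps to see that flow in textual form, here is a purely schematic Python sketch of the four states and the transitions between them. The helper names are invented for the illustration; the real implementation lives in the timeout case of an event-driven LabVIEW loop.

state = "Initialize Listener"
running = True

while running:
    if state == "Initialize Listener":
        listener = create_listener(port)                      # TCP Create Listener
        state = "Wait for Connection"
    elif state == "Wait for Connection":
        connection = wait_on_listener(listener, timeout_ms=5)  # TCP Wait On Listener
        if connection is None:                                 # timed out - keep polling
            state = "Wait for Connection"
        else:
            state = "Spawn Handler"
    elif state == "Spawn Handler":
        launch_connection_handler(connection)                  # reentrant handler clone
        state = "Wait for Connection"
    elif state == "Close Listener":                            # set by the shutdown event
        close_listener(listener)
        running = False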

The Initialize Listener State

This state at present only executes once, and its job is to create the listener that initiates connections with remote clients. The TCP Create Listener node has two inputs, the first of which is the port that the listener will monitor. Although I could have hard-coded this number, I instead chose to derive this value from the application’s Server.Port property. In a standalone executable, the application reads this value from its INI file at start-up, thus making it reconfigurable after the application is deployed. If the server.tcp.port key does not exist in the INI file, the runtime engine defaults to LabVIEW’s official port number, 3363.
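For reference, the corresponding entry in the executable’s INI file would look something like the following. The section name comes from the name of the built executable, so treat this particular snippet as illustrative rather than definitive.

[testbed]
server.tcp.port=3363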

Initialize Listener

When running in the development environment, this value is still reconfigurable, but it is set through the My Computer target’s VI Server settings. To change this value, right-click on My Computer in the project explorer window and select Properties. In the resulting dialog box, select the VI Server Category. At this point, the port number field is visible in the Protocols section of the VI Server page, but it is disabled. To edit this value, check the TCP/IP box to enable the setting, make the desired change and then uncheck the TCP/IP box, and click the OK button. It is critical that you remember to uncheck TCP/IP before leaving this setting. If you don’t, the project will be linked to the specified port and the TCP server in the testbed application will throw an error 60 when it tries to start.

The other input to the TCP Create Listener node is a timeout. However, this isn’t the time that the node will wait to finish creating the listener. We will be testing this code on a single computer and so don’t have to worry about such things as the network going down – even momentarily. In the broader world, though, there are a plethora of opportunities for things to go wrong. For example, the network could go south while a client is in the middle of connecting to our server. This timeout addresses this sort of situation by specifying the amount of time that the listener will wait for the connection to complete, once a connection attempt starts.

The Wait for Connection State

This state waits for connection attempts, and when one comes, completes the connection. Unfortunately, LabVIEW doesn’t support events based on a connection attempt so this operation takes the form of a polling operation where the code checks for a connection attempt, and if there is none, waits a short period of time and then checks again. The short wait period is needed to give the process as a whole the chance to respond to other events that might occur.

Wait for Connection

The code implementing this logic starts with a call to the built-in TCP Wait On Listener node with a very short (5-msec) timeout. If there is no connection attempt pending when the call is made, or an attempt is not received during that 5-msec window, the node terminates with an error code 56. The following subVI (Clear Errors.vi) looks for, and traps, that error code so its occurrence can be used to decide what to do next. If the subVI finds an error 56, the following logic repeats the current state and sets the timeout to 1000 msec. If there is no error, the next state to be executed is Spawn Handler and the timeout is 0.

If there is a successful connection attempt, the TCP Wait On Listener node also outputs a new reference that is unique to that particular connection. This new reference is passed to a shift register that makes it available to the next state.
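For comparison, here is how the same polling pattern might look in Python, where a socket timeout plays the role of LabVIEW’s error 56. The structure and the returned next-state/delay pair are an approximation of the logic described above, not a literal translation.

import socket

def wait_for_connection(listener):
    # Poll with a very short timeout so the surrounding loop stays responsive
    listener.settimeout(0.005)                     # 5-msec window, as in the LabVIEW code
    try:
        connection, _ = listener.accept()
    except socket.timeout:                         # analogous to LabVIEW error 56
        return None, "Wait for Connection", 1000   # repeat this state after ~1000 msec
    return connection, "Spawn Handler", 0          # connection made - spawn a handler now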

The Spawn Handler State

In this state, the code calls a subVI (Launch Connection Handler.vi) that spawns a process to handle the remote connection established in the previous state. This connection handler takes the form of a reentrant VI that accepts two inputs: a reference to a TCP connection and a boolean input that enables debugging operations – which at the current time consist of opening the clone’s front panel when it launches, and closing it when it closes.

Spawn Handler

It is important that the connection handler be a reentrant process for two reasons: First, we want the code to be able to handle more than one connection at a time. Second, the listener needs to get back to listening for another new connection as quickly as possible. We’ll discuss exactly what goes into the connection handler in a bit.

The Close Listener State

Finally, when the process is stopping, this event closes open connections, sets the timeout to -1, and stops the event loop.

Close Listener

But why are there two connections to be closed? Doesn’t the connection handler that gets launched to manage the remote connection handle closing that reference? Normally, yes – but there is a small, but finite, delay between when the remote connection is completed and when the Spawn Handler state starts executing. If the command to stop should occur during that small window of time, the handler will never be launched, and so can’t close that new connection and its associated reference.

Turning States into a Plugin

Now that we have an understanding of the process’ basic operation, we need to wrap a bit more logic around it to turn it into usable code.

Adding Shutdowns and Error Handling

To begin with, if this new process is going to live happily inside the structure we have already defined for testbed application plugins, it is going to need a mechanism to shut itself down when the rest of the application stops. Since that mechanism is already defined, all we have to do is register for the correct event (Stop Application) and add an event handler to give it something to do.

Loop Shutdown

Nothing too surprising here: When the shutdown event fires, the handler sets the next state to be executed to Close Listener and the timeout to 0. Note that it does not actually stop the loop – if it did the last state (which closes the listener reference) would never get the chance to execute. Finally, we also need to provide for error handling…

Add in Error Handling

…but as with the shutdown logic, this enhancement basically consists of adding in existing code. In this case, the application’s standard error reporting VI.

Defining the Protocol

With the new middleware plugin ensconced happily in the testbed framework, we need to create the reentrant connection handler that will handle the network interactions. However, before we can do that we need to define exactly what the communications protocol will look like. In later posts, I will present implementations of a couple of standardized protocols, but for now let’s explore the overall communications process by “rolling our own”.

As a quick aside, you may have noticed that I have been throwing around the word “protocol” a lot lately. Last time, I talked about creating a safe protocol for remote access. Then this time we discussed the TCP protocol, and now I am using the word again to describe the data we will be sending over our TCP connection. A key concept in networking is the idea of layers. We have discussed the TCP protocol for making connections, but that isn’t the whole story. TCP is built on top of a lower-level protocol called IP – which is itself built on even lower-level protocols for handling such things as physical interfaces. In addition, this protocol stack can also extend upwards. For example, VI Server is at least partially built on top of TCP, and we are now going to create our own protocol that will define how we want to communicate over TCP.

This layering may seem confusing, but it offers immense value because each layer is a modular entity that can be swapped out without disrupting everything else. For example, if you swap out the NIC (Network Interface Card) in your computer, the only parts of the stack that need to change are the very lowest levels that interface to the hardware.

The first thing we need to do is define the data that will be passed back and forth over the connection, and how that data will be represented while it is in the TCP communications channel. Taking the more basic decision first, let’s look at how we want to represent the data. Ideally, we want a data representation that is flexible in terms of capability, is rigorous in its data representations and is easy to generate in even relatively primitive languages like C++. The first standard created to fill this niche was XML, a simplified derivative of SGML (the same family of standards that produced HTML). The problem is that while it excels on the first two points, the third is a problem because when used to encode small data structures the same features that make it incredibly flexible and rigorous conspire to make it very verbose. Or to put it another way, for small data structures the data density in an XML document is very low.

Fortunately, there is an alternative that is perfect for what we need to do: JSON. The acronym stands for “JavaScript Object Notation”, and as the name implies, it is the notation originally used to facilitate the passing of data within JavaScript applications. The neat part is that a lot of the JSON concepts map really well to native LabVIEW data structures. For example, in terms of datatypes, you can have strings, numbers and booleans, as well as arrays of those datatypes. When you define a JSON object, you define it as a collection of those basic datatypes – sort of like what we do with clusters in LabVIEW. But (as they say on the infomercials), “Wait, there’s more…” JSON also allows you to include other JSON objects in the definition of a new object, just as LabVIEW lets us embed clusters within clusters. Finally, to put icing on the cake, nearly every programming language on the planet (including LabVIEW) incorporates support for this standard.

To see how this will work, let’s consider the case of the temperature controller parameters. To configure these parameters, the remote application will need to send the following string: (Note: As with JavaScript itself, the presence of “white space” in JSON representations is not significant. I’m showing this “pretty-printed” to make it easier to understand.)

{
    "Target":"Dog House TC", 
    "Data":{
        "Error High Level":100,
        "Warning High Level":90,
        "Warning Low Level":70,
        "Error Low Level":60,
        "Sample Interval":1
    }
}

This string defines a JSON object that contains two items. The first is labeled Target and it holds a string identifying the specific plugin that the message is intended to configure – in this case the Dog House TC. In the same way, the second item is labeled Data, but look at its value! If you think that looks like another JSON object definition, you’d be right. This sub-object has 5 values representing the individual parameters needed to configure a temperature controller. In case you’re wondering, this is what the code looks like that parses this string back into a LabVIEW data structure:

Unflattening JSON

That’s right, all it takes is one built-in function and one typedef cluster. The magic lies in the fact that the string and the cluster represent the exact same logical structure so it is very easy for LabVIEW’s built-in functions to map from one to the other.

The Unflattened JSON Data

The other thing to note is that the Sample Interval value in the cluster has a unit associated with it, in this case milliseconds. The way LabVIEW handles this situation is consistent with how it handles units in general: When converting data to a unitless form (like a JSON value) it expresses the value using the base unit for the type of data that it is. In the example shown, Sample Interval is time, and the base unit for time is seconds, so LabVIEW expresses the 1000 msecs as 1 sec in JSON. Likewise when unflattening the string back to a LabVIEW data structure, the function interprets the input value in the value’s base units as defined in the cluster.

We are about done with what our message will look like, but there are still a couple of things we need to add before we can start shooting our data down a wire. To begin with, we need to remember that Ethernet is a serial protocol, and as such it’s much easier to use if a receiver can know ahead of time how much data to expect. To meet that need, we will prepend a 2-byte binary value that gives the total message length. The other thing we need is some way to tell whether the message arrived intact and without corruption, so we will also append a 2-byte CRC. Moreover, to make the CRC easy for other applications to generate, we will use the standard 16-bit CCITT form of the calculation. So this is what one of our command data packets will look like:

Message Format

In the same way, we can use the same basic structure for response messages. All we have to do is redefine the JSON “payload” as a JSON object with two items: a numeric error code (where 0 = “No Error”), and a string that contains any data that the response needs to return. As you would expect, this string would itself be another JSON-encoded data structure.
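To make the packet format concrete, here is a small Python sketch that builds a command packet using this framing. Be aware that the exact CRC variant (initial value), the byte order of the count and CRC, and whether the count covers the CRC are my assumptions – the released code is the authoritative description of the format.

import json

def crc16_ccitt(data, crc=0xFFFF):
    # Bitwise 16-bit CCITT CRC (polynomial 0x1021); the initial value is an assumption
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def pack_message(payload_dict):
    body = json.dumps(payload_dict, separators=(",", ":")).encode("ascii")
    body += crc16_ccitt(body).to_bytes(2, "big")     # append the 2-byte CRC
    return len(body).to_bytes(2, "big") + body       # prepend the 2-byte count

command = {"Target": "Dog House TC",
           "Data": {"Error High Level": 100, "Warning High Level": 90,
                    "Warning Low Level": 70, "Error Low Level": 60,
                    "Sample Interval": 1}}
packet = pack_message(command)

# A packet built this way has the useful property mentioned below: recomputing the
# CRC over the JSON plus its appended CRC yields zero when nothing has been corrupted.
assert crc16_ccitt(packet[2:]) == 0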

Creating the Connection Handler

We are finally ready to implement the reentrant command handler that manages these messages, and the important part of that job is to ensure that it is fully reentrant. By that I mean that it does little good to make the VI itself reentrant if its major subVIs are not. So what is a “major” subVI? The two things to consider are:

  • How often does the SubVI execute? If the subVI rarely executes or only runs once during initialization, it might not be advantageous to make it reentrant.
  • How long does it take to execute? In the same way, subVIs that implement simple logic and so execute quickly, might not provide a lot of benefit as reentrant code.

As I am wont to do, I defined the handler’s overall structure as a state machine with three states corresponding to the three phases of the response interaction. So the first thing we need to do (and the first state to be executed) is Read Data Packet. Its job is to read an entire message from the new TCP connection, test it for validity and, if valid, pass the command on to the Process Command state.

Read Data Packet

The protocol we have defined calls for each message to start with a 2-byte count, so the state starts by reading two bytes from the interface, casting the resulting binary value to a U16, and then using that number to read the remainder of the message. Then, to validate the message, the code performs a CRC calculation on the entire message, including the CRC at the end. Due to the way the CRC calculation works, if the message and CRC are valid, the result of this calculation will always be 0. Assuming the CRC checks, the code strips the CRC from the end of the string and sends the remaining part of the string to a subVI that converts the JSON object into a LabVIEW object. I chose an object-oriented approach here because it actually simplifies the code (no case structures) and it provides a clear roadmap of what I need to do if I ever decide to add more interface commands in the future. If the CRC does not check, the next state to execute is either Send Response (if no error occurred during the network reads) or Stop Handler (if one did).
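In Python terms, the read side of this state might look roughly like the following, reusing the hypothetical crc16_ccitt helper from the framing sketch above. Again, this is an approximation of the logic just described, not a transcription of the actual VI.

import json

def recv_exact(connection, count):
    # TCP is a stream, so keep reading until the requested number of bytes arrives
    data = b""
    while len(data) < count:
        chunk = connection.recv(count - len(data))
        if not chunk:
            raise ConnectionError("connection closed mid-message")
        data += chunk
    return data

def read_data_packet(connection):
    length = int.from_bytes(recv_exact(connection, 2), "big")  # 2-byte count
    body = recv_exact(connection, length)                      # JSON plus 2-byte CRC
    if crc16_ccitt(body) != 0:                                 # valid packets give 0
        return None                                            # caller formats an error response
    return json.loads(body[:-2])                               # strip the CRC, parse the JSON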

Moving on, the Process Command state calls a dynamic dispatch method (Process Command.vi) that is responsible for interfacing to the rest of the application through the events we defined last time, and for formatting a response to be sent to the caller. The object model for this part of the code has 5 subclasses (one for each command) and the parent class is used as the default for when the JSON command structure does not contain a valid command object. It should surprise no one that the command-processing subclass methods look a lot like the test VIs we created last time to verify the operation of the remote access processor; consequently, I am not going to take the time or space to present them all again here. However, I will highlight the part that makes them different:

Parsing Response for Error or Data

This snippet shows the logic that I use to process the response coming back from the remote access engine in response to the event that reads the graph data. Because the variant returned in the response notifier can be either a text error message, or an array of real data, the first thing the code does is attempt to convert the variant into a string. If this attempt fails and generates an error, we know that the response contains data and so can format it for return to the remote caller. If the variant converts successfully to a string, we know the command failed and can pass an error back to the caller.

At this point, we now have a response ready to send back to the caller, so the state machine transitions to the Send Response state. Here we see the logic that formats and transfers the response to the caller:

Send Response

Since the core of the message is a JSON representation of a response cluster, the code first flattens the cluster to a JSON string. Note, however, that the string it generates contains no extraneous white space, so it will look very different from the JSON example I showed earlier. The logic next calculates the length of the return message and the CRC of the JSON. Those two values are added to the beginning and end, respectively, of the JSON string, and the concatenated result is written back to the TCP connection.

Finally, the Stop Handler state closes the TCP connection and stops the state machine loop, which also stops and removes from memory the reentrant clone that has been running.

Testing the Middleware

Finally, as always, we need to test what we have done, and to do that I have written a small LabVIEW test client program. However, if you know another programming language, feel free to write a short program to implement the transactions that we have defined. The program I created is included as a separate project. The top-level VI opens a window that allows you to select the action you want to perform, the plugin that it should target and (if required) enter the data associated with the action. Because this is a test program, it also incorporates a Boolean control that forces an invalid CRC, so you can test that functionality as well.
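If you would rather exercise the server from outside LabVIEW, a bare-bones Python client built on the helpers sketched earlier (pack_message, recv_exact and crc16_ccitt) might look like this. The address, port and payload are just examples – I am reusing the temperature-controller configuration object from the framing sketch, and the full command vocabulary is defined by the released code, not by this snippet.

import json
import socket

# Assumes pack_message(), recv_exact(), crc16_ccitt() and the "command" dictionary
# from the earlier sketches are available (for example, saved in a small module).
with socket.create_connection(("127.0.0.1", 3363)) as connection:
    connection.sendall(pack_message(command))
    length = int.from_bytes(recv_exact(connection, 2), "big")   # 2-byte count
    body = recv_exact(connection, length)                       # JSON response + CRC
    if crc16_ccitt(body) == 0:
        print("Response:", json.loads(body[:-2]))
    else:
        print("CRC error in response")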

So open both projects and run the testbed application – nothing new here, or so it seems. Now run the simple TCP client; its IP address and port number are already correct for this test scenario. As soon as the client starts, the waveform graph for displaying the plugin graph data appears, so let’s start with that. You should be able to see the data from each of the 5 testbed plugins by selecting the desired target and clicking the Send Command button. You should also be able to see all 5 graph images.

Now try generating some errors. Turn on the Force CRC Error check-box and retry the tests that you just ran successfully. The client’s error cluster should now show a CRC Error. Next turn the Force CRC Error check-box back off and try doing something illegal, like using the Set Acquisition Rate action on one of the temperature controllers. Now you should see an Update Failed error.

Continue trying things out, verifying that the things which should work do, and that the things that shouldn’t work, don’t. If you did the testing associated with the last post, you will notice that there is a lag between sending the command and getting the results, but that is to be expected since you are now running over a network interface. Finally, assuming the network is configured correctly, and the desired ports are open, the client application should be able to work from a computer across the room, across the hall or across the world.

Testbed Application – Release 19
Toolbox – Release 16
Simple TCP Client – Release 1

Big Tease

So what is in store for next time? Well, let’s extend things a bit further and look at a way to access this same basic interface, but this time from a web browser! Should be fun.

Until Next Time…
Mike…

Dropping-In on the Testbed

Last time out we started exploring one common application of so-called “drop-in” VIs. The technique is based on the idea of creating VIs that are capable of performing something useful for the VI that is hosting them, but without interacting directly with that VI’s basic logic. The example we considered was manipulating the font and type size used to present textual data.

At the close of that post we had created a basic object-oriented structure that could manipulate the label or caption of any front panel control or indicator. I want to finish this discussion by looking at how to expand that basic implementation to allow it to set the text properties of text contained inside a control or indicator. For that we will return to our testbed application.

A Brief Recap

It has been a while since we have worked with this code, so a brief refresher on what it does is probably in order. The testbed application we will be modifying consists of several processes that run independently of one another. To begin with, there is a background process that oversees the reporting of errors that occur. Handling the user interface duties, a GUI process incorporates a subpanel that can display the front panels of several simulated acquisition and process-control VIs. The whole thing is kicked off by a launcher VI that loads the various processes into memory and starts them executing.

Our goal here will be to add the drop-in VI we created last time to all the user-facing VIs and add classes as necessary to allow it to handle the controls and indicators on those VIs. However, if you don’t already have a tool for editing database contents directly, you should first download a tool called Database .NET (the link is to a zip file, and is at the bottom of the page). The program is a simple utility that lets you examine and edit database data from a number of different DBMS. I don’t know the folks that wrote this, and have no vested interest in the program other than I have used it for years and found it very useful. Note that this program has no installer so it has a very small footprint – it will even run from a USB stick. To “install” the program, simply create a directory for it on your computer and then drag into it the program that is inside the zip archive you downloaded, and installation is complete. The easiest way to invoke it is to set it as the default application for *.mdb files.

  • Note that if you decide to install this utility in a subdirectory of the Program Files (x86) directory, you may have to play around with the folder permissions a bit before it will run. Because the program generates several temporary files when it’s starting up, the user has to have Full Access to the folder in which it is installed.

One other caveat to bear in mind before we dive into the modifications is that these operations cannot override limits on these properties that might exist for other reasons. For example, these techniques will not work on controls that you have defined as strict typedefs. The reason: the strict typedef defines everything about the control’s appearance, and the property node will throw an error if you try to change any of it. Likewise, a System-themed control will let you change the font characteristics, but will complain if you try to change colors.

Making With the Modifications

So where do we start? Well, the first thing we need to do is make a couple of minor tweaks to Display Font Manager.vi. First, we need to define what happens to the drop-in’s errors. Because it’s important to preserve them, we will save the errors that arise in the drop-in to the same location where errors from the testbed application proper are stored – but without bothering the program’s operator. To accomplish that task, let’s reuse the subVI that the error handling logic uses to store error data.

Drop-in Error Handling

Note that I had to add a case structure because the location where this subVI was originally used only executed if there was an error. So unless we want to have spurious records being posted, we have to add that logic here.

Next, as the code is currently written, the error chain in the drop-in’s logic starts with the Error In control and terminates in the Error Out indicator. Although this arrangement works fine during development and testing, when the time comes to deploy the code, this is not what we want. As I said last time, drop-in VIs should not interact with the host VI and should not inject their own errors into the host’s error stream. Still, it can be useful to be able to use the drop-in’s error IO to establish data dependencies that control when it runs. The solution is for the drop-in to have error clusters, but not have them be connected internally.

Errors - Straight Through

Changing the Testbed

Now that we are ready to install the drop-in, we need to decide where to install it. Examining the code, we see that there are 5 VIs that are user-facing:

  1. The Launcher (testbed.vi)
  2. The Main GUI (Display Data.vi)
  3. The Temperature Controller (Temperature Controller.vi)
  4. Two “Acquisition” VIs (Acquire Ramp Data.vi and Acquire Sine Data.vi)

So the first thing I do is modify each of these VIs by dropping a copy of the drop-in VI on to their block diagram outside the outer-most loop. For example, this is what the modified launcher block diagram looks like:

testbed.vi with drop-in installed

As promised earlier, this is all the modification that the application will need – which means we are ready to start testing.

The First Test

“But wait a minute…” you protest. “…we haven’t configured anything yet. There’s nothing to test!”

Well, you’re half right. We have not gone into the database and configured any controls to be modified, but we still have something to test. We still have to verify the drop-in’s default behavior, which, by the way, is to do nothing. Yes, you read that right: we have to test that nothing happens. You see, a major aspect of the drop-in concept is that drop-ins don’t do anything unless they are explicitly told to through their configuration. Right now we have installed the drop-in code, but there are no controls configured in the database, so we need to make sure that the main application continues to run as it did before: no side-effects and no errors. In short, the drop-in right now should do nothing, and we need to make sure that it fulfills that requirement.

So launch the top-level VI (testbed.vi) or run the standalone executable. As before, the launcher will show the names of the processes it’s launching and when it finishes the main GUI will open. Again as before, you will be able to switch between screens using the popup menu and the plugins will operate just as they did before. Finally, if you look at the contents of the event table in the database, you will see that no errors have been generated.

It’s All About the Children

Now that we have “nothing” working, we need to finish implementing all the “somethings”. You will recall that when we ended last time I had created a basic implementation of the font manager functionality that could change the label or caption of any type of control. The tricky part, I said, was going to be implementing the subclass, or child, methods that would modify the font of a configured control’s contents. So let’s look at those children.

The String and Digital Subclasses

I chose to start with these two because they are the easiest to understand, and are very much alike. Here’s the child method for handling strings…

String Subclass Method

…and the one for digital numerics…

Digital Subclass Method

In either subclass, the logic starts by calling the parent method (which handles labels and captions) and then extracting from the parent’s class data the reference to the control that will be manipulated. While that is going on, the Font Parameters data is unbundled and the Component to Set value controls what, if anything, happens next. If the selected component is Label or Caption, a case is selected which does nothing but pass through the error cluster. If, however, the selected component is Contents, the associated case casts the basic control reference from the parent class data into the control’s specific control class, and then sets the appropriate properties.

The Boolean and Ring Subclasses

The next two I want to consider are, again, similar to each other, but differ from the preceding pair in that they represent control classes that don’t have any readily discernible textual value. Booleans represent logical true and false conditions, while rings are technically numerics, but the number that is their value doesn’t appear anywhere. In this sort of situation, the idea is to look for text that is not the control’s value but is associated with that value. For example, Boolean controls in LabVIEW can have textual displays that state the control’s condition. These strings are called Boolean Text and are often used to label push buttons or lights…

Boolean Subclass Method

Likewise, the Ring control appears to the user as a pop-up menu, so we can use this code to set the text properties of the text that appears in the menu…

Ring Subclass Method

The WaveformChart Subclass

Finally, we need to take the idea of strings that are only associated with data one more step. What about complex controls that can have multiple strings associated with their values? Objects like charts are good examples of what I am talking about. Just to start, there is text associated with the axis tick marks, there is text that forms the axis labels, and there is text in the plot legends.

The most flexible approach would be to figure out how to uniquely identify each of these components; however, we must be careful not to create an API that is so flexible that it is unusable. One solution would be to simply make all the text the same font and size – which is what they are anyway. A look that I prefer, however, is to have the tick mark labels slightly smaller than the axis labels. Here is one way to do that:

WaveformChart Subclass Method

As you can see, the code treats the two axes the same by combining references to them into an array and then passing that array into a loop that manipulates the display parameters. This logic makes the axis labels the size specified in the configuration, but does a bit of math to make the tick mark labels about 10% smaller. This difference might not seem like much, but it works. If this isn’t exactly what you want, that’s OK. The point here is not to present a canonical solution, but to present concepts and ideas that help you find your own way.

Adding Configurations

Now we are ready to add the font definitions to the database. I have created a total of 12 definitions covering 9 different controls and indicators and you can see them all by examining the SQL file in the _repos subdirectory in the project (starting at line 27). However, to give you a taste of what the SQL code for this functionality looks like, here is the SQL for the table holding the font configurations, and the font definition for the string indicator on the front panel of the launcher.

CREATE TABLE ctrl_font_definition (
    id          AUTOINCREMENT PRIMARY KEY,
    owner_name  TEXT(50) WITH COMPRESSION,
    ctrl_name   TEXT(50) WITH COMPRESSION,
    font_name   TEXT(20) WITH COMPRESSION,
    font_size   INTEGER,
    ctrl_comp   TEXT(20) WITH COMPRESSION
  )
;

INSERT INTO ctrl_font_definition
  (owner_name, ctrl_name, font_name, font_size, ctrl_comp)
VALUES
  ('testbed.vi', 'progress', 'Segoe UI', 24, 'Contents')
;

The goal of these initial definitions is to “turn-on” the functionality without changing too much. For example, the ‘Segoe UI’ font is the default font that LabVIEW uses on recent versions of the Windows platform. If you are running this code on the Macintosh or Linux (or an older version of Windows), the default font will be different. So on other platforms you may need to modify these definitions before you install them.

Once we have the definitions in the database, let’s try the testbed application again. You might not notice a lot of difference; that is sort of the point. This initial test is intended to reproduce the default values. One place where you will notice a difference is if you are running Windows and have the font scaling on your display set to a non-default value. The text size will now always be the same relative to the size of the window, regardless of how the display setting changes.

From here I would recommend that you play around a bit and manually change the font and size of the various controls to see the effect.

Testbed Application – Release 16
Toolbox – Release 12
Testbed Installer – Release 16

Please note that I have included in this release a built version of the application so you can practice working with the database. The LocalDB.mdb file included with this installer has the table defined for holding the font definitions, but the table is empty. This release has two purposes: One, by adding to and manipulating the data in its database, you can see that you really can modify the visual presentation without changing code. Two, I have started using LabVIEW 2015 and realize that some of you may not have upgraded yet. If this version change is a problem, post a comment and I will send you a version of the code back-saved to LabVIEW 2014.

The Big Tease

One of the things that I like about NI Week is the opportunity to meet friends both new and old. Before a keynote address one morning I was talking to another one of the LabVIEW Champions, Jack Dunaway by name, and the topic of this blog came up. To make a long story short, he suggested a topic that sounded so good, I’m going to get started on it next time.

One good way of showing a lot of data in a small space is what is known as a tree control. It’s valuable because its structure is inherently hierarchical and so can display a lot of data while not taking up a lot of screen real estate. In addition, it can reduce the sense of overwhelm that you sometimes feel when looking at large datasets because, when done well, it allows you to start with a high-level view of the data and gradually drill down to the specific results you want.

If you are working in Windows, there are two such controls available: one that is part of Windows, and one that is native to LabVIEW. So next time: the Native LabVIEW Tree Control. Be there or be square.

Until Next Time…

Mike…

Drop-Ins Are Always Welcome

One of the key distinctions of web development is that the standards draw a bright line between content and presentation. While LabVIEW doesn’t (so far) have anything as powerful as the facilities that CSS provides, there are things that you can do to take steps in that direction. The basic technique is called creating a “drop-in” VI. These functions derive their name from the fact that they are dropped into an existing VI to change the display characteristics, but without impacting the host VI’s basic functionality.

The Main Characteristics

The first thing we need to do is consider the constraints under which these VIs will need to operate. These constraints will both assist in setting the scope of what we try to accomplish, and inform the engineering decisions we have to make.

No Fraternization

The first requirement that a VI must meet in order to be considered truly “drop-in” capable is that there must be no interaction between its logic and that of the VI into which it is being dropped. But if there is to be no interaction with the existing code, how is it supposed to change anything? Given that we are only talking about changing aspects of the data presentation, all we need is a VI Server reference to the calling VI, and that we can get using the low-level Call Chain function.

VI Server Accesses

As you can see, from the VI reference you can get a reference to the VI’s front panel, and from that you can get an array of references for all the objects on the front panel. It is those references that allow you to set such things as the display font and size – which just happen to be the two things we are going to be manipulating for this example.

One potential problem to be aware of is the temptation to use these references to do things that directly affect how the code operates. However, this is a temptation you must resist. Even though it may seem like you “got away with it this time”, sooner or later it will bite you.

To be specific, changing the appearance of data is OK, but changing the data itself is, in general, not. However, there is one exception that, when you think about it, makes a lot of sense: localization. Localization is the process of changing the text of captions or labels so they appear in the language of the user, and not the developer. This operation is acceptable because although you might be changing the value in, for example, a button’s Boolean text, you aren’t changing what the button does. The button will perform the same whether it is marked “OK”, “Si” or “Ja”.

Autonomous Error Handling

The next thing a drop-in has to be able to do is correctly manage errors. But here we have a bit of a conundrum. On the one hand, errors are still important, so you want to know if they occur. On the other hand, you don’t want this added functionality to interrupt the main code because an error occurred while configuring something the user would consider “cosmetic”.

The solution is for the drop-in to have its own separate error reporting mechanism that records errors, but doesn’t inject them into the main VI’s error chain. The error handling library we have in place already has the needed functions for implementing this functionality.

External Configuration Storage

Finally, the drop-in VI needs configuration data that is stored in a central location outside the VI itself – after all, we want this drop-in to be usable in a wide variety of applications and projects. For implementing this storage you have at your disposal all the options you had when creating the main application itself, and as with the main application, the selection of the correct storage location depends on how much of this added capability will be exposed to the user. If you intend to let the user set the values, you can put the settings in an INI file. You just need to make sure that you qualify the data they enter before you try using it. Otherwise you could end up in a situation where they specify a non-existent font, or a text size that is impossibly large or small.

To keep things simple for this test case, we will store the data in the same database that we use to store all the other configuration values. The data that we store in the database will also be (for now) simple. Each record will store the data needed to modify one part of one control, so it will contain a field for the name of the VI, the name of the control, an enumeration for selecting what part of the control is to be set, and finally the font name and size. The enumeration Component to Set will have 3 values: Label, Caption and Contents. Note that to keep things organized and easy to modify or expand, this structure as a whole, and the enumeration in particular are embodied on the LabVIEW side in type definitions.
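To make the shape of that record concrete, here is how the same structure might be expressed in a text-based language. This is only a Python sketch; the real definition is, of course, a pair of LabVIEW type definitions, and the field names here simply echo the database columns shown later in the post.

from dataclasses import dataclass
from enum import Enum

class ComponentToSet(Enum):
    LABEL = "Label"
    CAPTION = "Caption"
    CONTENTS = "Contents"

@dataclass
class FontDefinition:
    owner_name: str              # fully qualified name of the VI being configured
    ctrl_name: str               # label of the control or indicator to modify
    component: ComponentToSet    # which part of the control the setting applies to
    font_name: str
    font_size: int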

The Plan of Action

So how can we implement this functionality? The literary device of the “omniscient author” has always bothered me, so rather than simply heading off in a direction that I chose before I started writing, let’s take a look at a couple of implementation options and see which one of the two will work the best for us. Remember that the only thing more important than coming up with the right answer is knowing how you came up with it.

The “Normal” Way

For our first try, let’s start with the basic logic for getting the control references we saw a moment ago, and add to it a VI that returns the font configuration data for the VI that is being configured. Note that the input to this fetch routine (which gets the data from the application database) is the name of the VI that is calling the drop-in. This name is fully qualified, meaning it contains not just the VI name, but also the names of any library or class of which it might be a member.

Font Manager - Deadend

The output from the database lookup consists of a pair of correlated arrays. Correlated arrays are arrays where the data from a given element in one array correlates to, or goes with, the data from the same-numbered element in the other array. In this case, one array is a list of the control names and the other array is a list of all the font settings that go with the various parts of that control.

The first thing the code does is look to see if there are any font settings defined for the VI by checking to see whether one of the arrays is empty. It is only necessary to check one of the arrays since they will always have the same number of elements. If there are font settings defined for the VI, the code takes the array of control references from the VI’s front panel and looks at them one by one to determine whether the label for that particular control or indicator is contained in the array of control names. When this search finds a control that is in the list of control names, the code unbundles the font settings data and uses the Component to Set value to select the frame of a case structure that contains the property node for the specified component’s property.

This approach works pretty well for labels and captions because all controls and indicators can, regardless of type, have them. In addition, regardless of whether the control is a string, numeric, cluster or what have you, the properties are always named the same. (The property for manipulating a control’s Caption is shown.)

Unfortunately, things begin to get complicated once you move past the properties that all controls share in common and start changing the font settings for the data contained inside the control – what we are calling the Contents. For example, the property for setting the font of the contents of a string control is called Text.FontName, whereas the property for setting the corresponding information in a digital numeric is called NumText.FontName. Things get even stranger when you start talking about setting the font of the Boolean text in the middle of a button, or worse the lines in a listbox – there each row has to be set individually.

The fundamental problem that this simple approach has is that the settings for controls and indicators are built on object-oriented principles. Labels and Captions are easy because they are common to all controls, but as soon as you start talking about text that is contained inside a control, you have to deal with a specific type, or subclass, of control. Plus to even get access to the required properties you need to cast the generic Ctl reference to a more specific class like a Str (string) or DigNum (digital numeric). We could, of course, simply expand the number of items in the Component to Set enumeration to explicitly call out all the various components that we want to be able modify. Then in each case we could do something like this:

'fixing' a problem

Because we know that the String Text is only valid for strings, we could cast the reference to the proper subclass, set the appropriate property, and call it done. If you look at very much code you will see this sort of thing being done all the time. But looking closer in those situations you will also see all the code that gets put into trying to fix this implementation’s shortcomings. For example, because the subclass selection logic is in essence being driven by the enumeration, and the enumeration value is stored in the database, we have created a situation where the contents of the database need to be kept “in sync” with the controls on the front panels. Hence, if a string control should be changed to a digital numeric (or vice versa), the database will need to be manually updated to track the change. This fact, in turn, means that we will need to add code to the VI to handle the errors that occur when we forget to keep the code and the database in sync.

As bad as that might sound, it is not the worst problem. The real deal-breaker is that every time you want or need to add support for another type of control, or another Component to Set, you will be back here modifying this VI. This ongoing maintenance task pretty much means that reusing this code will be difficult to impossible. Hopefully you can see that thanks to these problems (and these are just the two biggest ones), this “simple” approach built around a single case structure ends up getting very, very messy.

But if the object-oriented structure of controls is getting us into trouble, perhaps a bit more object orientation can get us out of trouble…

Riding a Horse in the Direction it’s Going

When programming you will often find yourself in a situation where you want to extend a structure in a way that you can’t yet fully see or understand. When confronting that challenge, I often find it helpful to take some time and consider the overall trajectory of the part of the structure I can see, to determine where it’s pointing. Invariably, if you are working with a well-defined structure (as you are here) the best solutions will be found by “riding the horse in the direction it’s already going”.

So what direction is this “horse” already going? Well, the first thing we see is that it is going in the direction of a layered, hierarchical structure. In the VI Server structure that we can see, we observe that the basic control class is not at the top of the hierarchy, but rather in the middle of a much larger structure with multiple layers both above and below it.

Menus

The other thing we can note about the direction of this architectural trendline is that the hierarchy we just saw is organized using object-oriented principles, so the hierarchy is a hierarchy of classes, of datatypes. Hence, each object is distinct and in some way unique, but the objects as a group are also related to one another in useful ways.

Taking these two points together it becomes clear that we should be looking for a solution that is similarly layered and object-oriented. However, LabVIEW doesn’t (yet) have a way to seamlessly extend its internal object hierarchy, so while developing this structure using classes of our own creation, we will need to be careful to keep “on track”.

Moving Forward

The basis for this structure is a class that we will call Display Properties.lvclass. Initially this class will have two public interface VIs: One, Create Display Properties Update Object.vi, does as its name says and creates an object associated with a specific control or indicator. This object will drive what is now the only other interface VI (Set Control Font.vi), which is created for dynamic dispatch and will serve as the entry point for setting the font and size of text associated with GUI controls and indicators. I am building the class in this way because it is easy to imagine other display properties that we might want to manipulate in the future (e.g. colors, styles, localization, etc.). This is the code I use to dynamically load and create display property update objects:

Create Font Object

In general, it is very similar to code I have presented before to dynamically create objects, but there are a few differences. To begin with, the code does not buffer the object after it is created because unlike the other examples we have looked at over the past weeks, these objects do not need to be persistent. In other words, these objects will be created, used and then discarded.

Next, to simplify their identification, all VI Server classes have properties that return a Class ID number and a Class Name. The code uses the latter value to build the path and class name of the child class being requested.

Finally, after the code builds the path and name of the subclass it wants to use, it checks to see if the class exists and only attempts to load it if the defining lvclass file is found. If the file is missing, the code outputs a parent class object. The reason for this difference is twofold:

  1. Without it, if a control class was called that we had not implemented, the code would throw an error. Consequently, in order to prevent those errors I would have to create dozens of empty classes that served no functional purpose – and that is wasteful of both my time and computer resources.
  2. With it, the logic extends what normally happens when a method is not overridden in a subclass to include the case where the subclass hasn’t even been implemented yet: the parent class – and, more to the point, the parent methods – are invoked. A sketch of this fallback pattern appears below.
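For those who find it easier to think about this load-or-fall-back pattern in text, here is a rough Python equivalent. The module and class names are invented purely for the illustration; the real code works with lvclass files on disk.

import importlib

def create_update_object(vi_server_class_name, parent_class):
    # Try to locate a subclass named after the VI Server class (e.g. "Str", "DigNum").
    # If no such subclass has been implemented, fall back to the parent class so the
    # default label/caption behaviour still runs without raising an error.
    try:
        module = importlib.import_module("display_properties." + vi_server_class_name.lower())
        subclass = getattr(module, vi_server_class_name)
    except (ImportError, AttributeError):
        subclass = parent_class
    return subclass()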

Taken Care of Business

The dynamic dispatch VI Set Control Font.vi is obviously the parent method for what will eventually be a family of override methods that will address specific types of controls. But that begs the question: What should go in this VI?

Well, think about it for a moment. In the first possible implementation we looked at, things initially looked promising because changing the font and size of labels and captions was so easy. You’ll remember that the reason they were easy was that all controls and indicators can have them and the properties are always named the same. That sounds like a pretty good description of what we would want in a parent method – so here it is:

Set Font Parent

The structure is pretty simple: the code retrieves the control reference from where it was stored in the class data and passes it into a case structure that has cases for Label and Caption. In addition, it has an empty case that handles the Contents value of Component to Set. This case is empty because that value will be handled during override. So all we have left to do right now is look at how these VIs fit into the structure we examined earlier – all we really needed to replace was the case structure…

Font Manager

…and here it is. Nothing much new to see here, so let me just recommend that you take a good look at this code because you probably won’t be seeing it again. Since we will be adding functionality in the context of the class structure we created, we won’t need to revisit this logic any time soon – if ever.

The Big Tease

So with the basic structure in place, all we have to do is start populating the subclasses we need. But that will have to wait for next time when I will also post all the code.

Until Next Time…

Mike…

A Brief Introduction to .NET in LabVIEW

From the earliest days of LabVIEW, National Instruments has recognized that it needed the ability to incorporate code that was developed in other programming environments. Originally this capability was realized through specialized functions called Code Interface Nodes, or CINs. However, as the underlying operating systems continued to develop, LabVIEW acquired the ability to leverage such things as DLLs, ActiveX controls and .NET assemblies. Unfortunately, while .NET solves many of the problems that earlier efforts to standardize sharable code exhibited, far too many LabVIEW developers feel intimidated by what they see as unmanageable complexity. The truth, however, is that there are many well-written .NET assemblies that are no more difficult to use than VI Server.

As an example of how to use .NET, we’ll look at a class that comes with all current versions of Windows. Called NotifyIcon, it is the mechanism that Windows itself uses to give you access to programs through the part of the taskbar called the System Tray. However, beyond that usage, it is also an interesting example of how to utilize .NET to implement an innovative interface for background tasks.

The Basic Points

Given that the whole point of this lesson is to learn about creating a System Tray interface for your application, a good place to start the discussion is with a basic understanding of how the bits will fit together. To begin with, it is not uncommon, though technically untrue, to hear someone say that their program was, “…running in the system tray…”. Actually, your program will continue to run in the same execution space, with or without this modification. All this .NET assembly does is provide a different way for your users to interact with the program.

But that explanation raises another question: If the .NET code allows me to create the same sort of menu-driven interface that I see other applications using, how do the users’ selections get communicated back to the application that is associated with the menu?

The answer to that question is another reason I wanted to discuss this technique. As we have talked about before, as soon as you have more than one process running, you encounter the need to communicate between processes – often to tell another process that something just happened. In the LabVIEW world we often do this sort of signalling using UDEs. In the broader Windows environment, there is a similar technique that is used in much the same way. This technique is termed a callback and can seem a bit mysterious at first, so we’ll dig into it, as well.

Creating the Constructor

In the introduction to this post, I likened .NET to VI Server. My point was that while they are in many ways very different, the programming interface for each follows the same basic pattern. You have a reference, and associated with that reference you have properties that describe the referenced object, and methods that tell the object to do something.

To get started, go to the .NET menu under the Connectivity function menu, and select Constructor Node. When you put the resulting node on a block diagram, a second dialog box will open that allows you to browse to the constructor that you want to create. The pop-up at the top of the dialog box has one entry for each .NET assembly installed on your computer – and there will be a bunch. You locate assemblies in this list by name, and the one we are interested in is System.Windows.Forms. On your computer there may be more than one assembly with this basic name installed. Pick the one with the highest version (the number in parentheses after the name).

In the Objects portion of the dialog you will now see a list of the objects contained in the assembly. Double click on the plus sign next to System.Windows.Forms and scroll down the list until you find the bullet item NotifyIcon, and select it. In the Constructors section of the dialog you will now see a list of constructors that are available for the selected object. In this case, the default selection (NotifyIcon()) is the one we want so just click the OK button. The resulting constructor node will look like this:

notifyicon constructor

But you may be wondering how you are supposed to know what to select. That is actually pretty easy. You see, Microsoft offers an abundance of example code showing how to use the assemblies, and while they don’t show examples in LabVIEW, they do offer examples in 2 or 3 other languages and – this is the important point – the object, property and method names are the same regardless of language, so it’s a simple matter to look at the example code and, even without knowing the language, figure out what needs to be called, and in what order. Moreover, LabVIEW property and invoke nodes will list all the properties and methods associated with each type of object. As an example of the properties associated with the NotifyIcon object, here is a standard LabVIEW property node showing four properties that we will need to set for even a minimal instance of this interface. I will explain the first three; you should be able to figure out what the fourth one does on your own.

notifyicon property node

Starting at the top is the Text property. Its function is to provide the tray icon with a label that will appear like a tip-strip when the user’s mouse hovers over the icon. To this we can simply wire a string. You’ll understand the meaning of the label in a moment.

Giving the Interface an Icon

Now that we have created our NotifyIcon interface object and given it a label, we need to give it an icon that it can display in the system tray. In our previous screenshot, we see that the NotifyIcon object also has a property called Icon. This property allows you to assign an icon to the interface we are creating. However, if you look at the node’s context help you see that its datatype is not a path name or even a name, but rather an object reference.

context help window

But don’t despair, we just created one object and we can create another. Drop down another empty .NET constructor but this time go looking for System.Drawing.Icon and once you find the listing of possible constructors, pick the one named Icon(String fileName). Here is the node we get…

icon constructor

…complete with a terminal to which I have wired a path that I have converted to a string. In case you missed what we just did, consider that one of the major failings of older techniques such as making direct function calls to DLLs was how to handle complex datatypes. The old way of handling it was through the use of a C or C++ struct, but to make this method work you ended up needing to know way too much about how the function worked internally. In addition, for the LabVIEW developer, it was difficult, if not impossible, to build these structures in LabVIEW. By contrast, the .NET methodology utilizes object-oriented techniques to encapsulate complex datatypes into simple-to-manipulate objects that accept standard data inputs and hide all the messy details.
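If it helps to see the same calls spelled out in text form, here is a minimal sketch using Python and the pythonnet package on Windows, targeting the classic .NET Framework build of System.Windows.Forms that this post uses. The label text and the icon path are placeholders, and in LabVIEW you would of course use the constructor and property nodes shown above:

    import clr
    clr.AddReference("System.Windows.Forms")
    clr.AddReference("System.Drawing")
    from System.Windows.Forms import NotifyIcon
    from System.Drawing import Icon

    ni = NotifyIcon()                         # NotifyIcon() -- the default constructor
    ni.Text = "Stooge Identifier"             # the tip-strip style label
    ni.Icon = Icon(r"C:\path\to\app.ico")     # Icon(String fileName) -- placeholder path
    ni.Visible = True                         # the icon only appears in the tray when visible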

Creating a Context Menu

With a label that will provide the users a reminder of what the interface is for, and an icon to visually identify the interface, we now turn to the real heart of the interface: the menu itself. As with the icon, assigning a menu structure consists of writing to a property a reference to the object that is to be associated with it. In this case, however, the name of the property is ContextMenu, the object for which we need to create a constructor is System.Windows.Forms.ContextMenu, and the name of the constructor is ContextMenu(MenuItem[] menuItems).

context menu constructor

From this syntax we see that in order to initialize our new constructor we will need to create an array of menuItems. You have to admit, this makes sense: our interface needs a menu, and the menu is constructed from an array of menu items. So now we look at how to create the individual menu items that we want on the menu. Here is a complete diagram of the menu I am creating – clearly inspired by a youth spent watching way too many old movies (nyuk, nyuk, nyuk).

menu constructors

Sorry for the small image, but if you click on the image, you can zoom in on it. As you examine this diagram notice that while there is a single type of menuItem object, there are two different constructors used. The most common one has a single Text initialization value. The NotifyIcon uses that value as the string that will be displayed in the menu. This constructor is used to initialize menu items that do not have any children, or submenus. The other menuItem constructor is used to create a menu item that has other items under it. Consequently in addition to a Text initialization value, it also has an input that is – wait for it – an array of other menu items. I don’t know if there is a limit to how deeply a menu can be nested, but if that is a concern you need to be rethinking your interface.
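Picking up the ni object from the earlier sketch, the two MenuItem constructors look roughly like this in pythonnet. Again, this is only an illustration of the call pattern; the menu item names come from the example above, and the classic ContextMenu/MenuItem classes belong to the .NET Framework version of System.Windows.Forms:

    from System import Array
    from System.Windows.Forms import ContextMenu, MenuItem

    larry = MenuItem("Larry")                  # leaf items use the Text-only constructor
    moe = MenuItem("Moe")
    shep = MenuItem("Shep")
    larry.Name = "Larry"
    moe.Name = "Moe"
    shep.Name = "The Other Stooge:Shep"        # the colon-delimited naming convention

    # A parent item uses the other constructor: a Text value plus an array of children.
    other = MenuItem("The Other Stooge", Array[MenuItem]([shep]))
    other.Name = "The Other Stooge"

    ni.ContextMenu = ContextMenu(Array[MenuItem]([larry, moe, other]))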

In addition to the initialization values that are defined when the item is created, a menuItem object has a number of other properties that you can set as needed. For instance, they can be enabled and disabled, checked, highlighted and split into multiple columns (to name but a few). A property that I set, but whose utility might not be readily apparent, is Name. Because it doesn’t appear anywhere in the interface, programmers are pretty much free to use it as they see fit, so I am going to use it as the label to identify each selection programmatically. Which, by the way, is the next thing we need to look at.

Closing the Event Loop

If we stopped with the code at this point, we would have an interface with a perfectly functional menu system, but which would serve absolutely no useful purpose. To correct that situation we have to “close the loop” by providing a way for the LabVIEW-based code to react in a useful way to the selections that the user makes via the .NET assembly. The first part of that work we have already completed by establishing a naming convention for the menu items. This convention largely guarantees menu items will have a unique name by defining each menu item name as a colon-delimited list of the menu item names in the menu structure above it. For example, “Larry” and “Moe” are top-level menu items so their names are the same as their text values. “Shep” however is in a submenu to the menu item “The Other Stooge” so its name is “The Other Stooge:Shep”.

The other thing we need in order to handle menu selections is to define the callback operations. To simplify this part of the process, I like to create a single callback process that services all the menu selections by converting them into a single LabVIEW event that I can handle as part of the VI’s normal processing. Here is the code that creates the callback for our test application:

callback generator

The way a callback works is that the callback node incorporates three terminals. The top terminal accepts an object reference. After you wire it up, the terminal changes into a pop-up menu listing all the callback events that the attached item supports. The one we are interested in is the Click event. The second terminal is a reference to the VI that LabVIEW will execute when the event you selected fires. However, you can’t wire just any VI reference here. For it to be callable from within the .NET environment it has to have a particular set of inputs and a particular connector pane. To help you create a VI with the proper connections, you can right-click on the terminal and select Create Callback VI from the menu. The third terminal on the callback registration node is labelled User Parameters and it provides the way to pass static application-specific data into the callback event.

There are two important points here: First, as I stated before, the User Parameters data is static. This means that whatever value is passed to the terminal when the callback is registered is from then on essentially treated as a constant. Second, whatever you wire to this terminal modifies the data inputs to the callback VI so if you are going to use this terminal to pass in data, you need to wire it up before you create the callback VI.
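As a rough text analogue of that registration step, here is how the same pattern might look in pythonnet, continuing the sketch above, with a Python queue standing in for the SysTray Callback UDE. The handler and the queue are stand-ins for the callback VI and the event reference, not the post’s actual code:

    import queue

    systray_events = queue.Queue()        # stand-in for the SysTray Callback UDE

    def on_menu_click(sender, event_args):
        # Keep the callback tiny: just capture which item fired and hand that
        # information off to the "LabVIEW side" for the real processing.
        systray_events.put(sender.Name)

    for item in (larry, moe, shep):
        item.Click += on_menu_click       # register the same handler for every item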

In terms of our specific example, I have an array of the menu items that the main VI will need to handle so I auto-index through this array creating a callback event for each one. In all cases, though, the User Parameter input is populated with a reference to a UDE that I created, so the callbacks can all use the same callback VI. This is what the callback VI looks like on the inside:

callback vi

The Control Ref input (like User Parameter) is a static input so it contains the reference to the menu item that was passed to the registration node when the callback was created. This reference allows me to read the Name property of the menu item that triggered the callback, and then use that value to fire the SysTray Callback UDE. It’s important to remember when creating a callback VI not to include too much functionality. In fact, this is about as much code as I would ever put in one. The problem is that this code is nearly impossible to debug because it does not actually execute in the LabVIEW environment. The best solution is to get the selection into the LabVIEW environment as quickly as possible and deal with any complexity there. Finally, here is how I handle the UDE in the main VI:

systray callback handler

Here you can see another reason why I created the menu item names as I did. Separating the different levels in the menu structure by colons allows the code to easily parse the selection, and simultaneously organizes the logic.
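Here is that parsing idea reduced to a few lines of Python. The show_factoid helper is hypothetical; it simply stands in for whatever the event case actually does with the selection:

    def show_factoid(stooge):
        # Placeholder for the real display logic.
        print(f"Here is an interesting fact about {stooge}...")

    def handle_systray_selection(name):
        # Splitting the Name on colons recovers the menu hierarchy, so the handler
        # can be organized the same way the menu is.
        top, *rest = name.split(":")
        if top in ("Larry", "Moe"):
            show_factoid(top)
        elif top == "The Other Stooge" and rest:
            show_factoid(rest[0])          # e.g. "The Other Stooge:Shep" -> "Shep"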

Future Enhancements

With the explanations done, we can now try running the VI – which disappears as soon as you start it. However, if you look in the system tray, you’ll see its icon. As you make selections from its menu you will see factoids appear about the various Stooges. But this program is just the barest of implementations and there is still a lot you can do. For example, you can open a notification balloon to notify the user of something important, or manipulate the menu properties to show checkmarks on selected items or disable selections to which you want to block access.

The most important changes you should make, however, are architectural. For demonstration purposes the implementation I have presented here is rather bare-bones. While the resulting code is good at helping you visualize the relationships between the various objects, it’s not the kind of code you would want to ship to a customer. Rather, you want code that simplifies operation, improves reusability and promotes maintainability.

Stooge Identifier — Release 1

The Big Tease

So you have the basics of a neat interface, and a basic technique for exploring .NET functionality in general. But what is in store for next time? Well I’m not going to leave you hanging. Specifically, we are going to take a hard look at menu building to see how to best modularize that functionality. Although this might seem a simple task, it’s not as straightforward as it first seems. As with many things in life, there are solutions that sound good – and there are those that are good.

Until Next Time…

Mike…

Finishing the Configuration Management

For the last couple posts we have been looking at how to best utilize object-oriented programming methodologies. In that quest, we have taken as our example the goal of converting the parts of our testbed application that use stored configuration data to a configuration manager based on object classes. In this implementation, the classes represent the various types of potential data repositories.

Maximizing Flexibility by Leveraging Existing Code

When we stopped last time we had created all the basic code infrastructure and all we had to do was construct the dynamic dispatch methods that retrieve the data the application needs. In creating these methods, we have as our guiding principle reducing the amount of code we have to create by reusing as much code as we can. In other words we need to really spend some time thinking about how to structure our code such that it combines maximum reuse with maximum flexibility.

One good way of attaining that ideal is to allow for multiple levels of dynamic dispatch within the same method. Let’s say for the sake of argument that you have a method that will need to be accessible from 10 different subclasses. Furthermore, let’s say that 4 of those subclasses all need to do the same thing, an additional 4 are mostly the same but with a few differences, and the last 2 subclasses require fundamentally different logic to perform the same task. You could implement the logic that is common to the largest group of subclasses (the first 4) in the parent and let the remaining 6 override the parent to define their own solutions. This solution would work, but could potentially result in a lot of duplicated code. It depends on how similar the second group of 4 subclasses is to the first group of 4.

In creating the Initialize New method last week we saw a far better way to optimize the code: For the 4 subclasses that are similar, we could call the parent method in the child (thus taking advantage of that existing code) and then add a little logic to customize the functionality. This solution will work well in many situations, but one area where it will not is in scenarios where the common parts of the code need to pass data to the unique parts.

To address those situations, a very useful solution can be to write the parent method so that the similar-but-unique bits are encapsulated in a subVI that is itself a dynamic dispatch VI. By providing multiple levels of dynamic dispatch, you create a situation that is easy to understand and minimizes duplication of code. In our example, the first 4 subclasses would use the parent implementations of both the dynamic dispatch VI and its subVI. The 4 subclasses that are similar but a bit different would use the parent method but override the dynamic dispatch subVI, and the 2 that are fundamentally different would override the parent method VI itself.
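Since the relationships are easier to see in text than to describe, here is the same three-way split sketched as Python classes. The names and return values are invented purely for the illustration:

    class Parent:
        def fetch_setting(self, key):
            # Common framing shared by 8 of the 10 subclasses...
            raw = self._read_raw(key)      # ...with the unique bits in an overridable helper
            return raw.strip()

        def _read_raw(self, key):
            # Default helper used as-is by the first 4 subclasses.
            return f"parent lookup of {key} "

    class SlightlySpecial(Parent):
        # One of the 4 "mostly the same" subclasses: keeps the parent method,
        # overrides only the helper.
        def _read_raw(self, key):
            return f"special lookup of {key} "

    class CompletelyDifferent(Parent):
        # One of the 2 subclasses needing fundamentally different logic:
        # overrides the whole method.
        def fetch_setting(self, key):
            return f"something else entirely for {key}"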

Getting Down to Business

So let’s start looking at what we need to do to finish our conversion — while remembering that most of this discussion will be about the reorganization and repurposing of existing code. For the most part, I won’t be going into how the basic functionality works since we discussed that code when it was originally introduced.

The basic internal structure of our four “blueprint” VIs will be largely similar. We will call the VI that creates the object we want, followed by a dynamic dispatch method that does what we need done. However, note that there isn’t (and shouldn’t be) a direct one-to-one correlation between the blueprint VI and the underlying method. We still need to be looking for ways to minimize redundant code. Back when we were setting up the testbed application originally we considered the need for unstructured configuration values — like you would usually store in an INI file. In fact, we implemented that capability and used it to store the default sample period. Consequently, one of the three methods we end up implementing will be used in a couple of different places now, and will be reusable in the future for other unstructured configuration data. So, let’s start with that one.

Get Misc Setups

Creating this method follows the same procedure as we have used before, except that this method has two input parameters and an output string. So the finished parent method has a front panel that looks like this:

misc read front panel

Another difference with this method is that if it is called for a subclass that does not override it, the parent VI does not provide any default functionality. To see what I mean, check out the VI’s block diagram. You see that it does nothing. This structure might seem rather pointless, but in reality it can at times be pretty handy. Say you have a method that is really only needed by one subclass. Without dynamic dispatch you would have to put a case structure around the code so the VI is only called in that one particular circumstance. However, with dynamic dispatch, this selection takes place automatically and without cluttering up the code with case structures.

With the parent VI (Read Misc Settings.vi) created and saved, we can now build the two subclass overrides. Starting with the Text version, to read the data from the INI file, we create an override VI in Config Data_Text.lvclass containing code that uses the built-in configuration VIs to fetch the data:

read misc - ini

But wait! This won’t work. You notice that the path to the INI file comes from the class data — which is fine except that we forgot to initialize it. I wanted to highlight this point because it is a very easy and common mistake to make. Initializing the DVR is not the same as initializing the data in the DVR. All we have to do to fix this memory lapse is modify the class’ version of the Initialize New method, like so:

finish initializing the DVR - text

The subVI with the green banner builds the path to the application’s INI file programmatically. By the way, in case you’re wondering about whether the same issue applies to the other override to this method (the one for databases), the answer is “Yes”. However the solution there is a bit more complex, so we’ll deal with it in a moment.

The other thing to notice about the setup reading method’s code is that if the read doesn’t find the desired value in the INI file, it creates it and gives it a default value. Adding this extra bit is often helpful for recovering from situations where a user has gotten in and mucked around with the INI file and deleted something they should not have. In any case, to see this code in action, we need to finish up the code in the Get Default Sample Period.vi “blueprint” VI:

get default sample period

You can now run this VI and get a valid result from the INI file. Of course, if you set the configuration data selector value in the INI file to Jet you will get a zero back. This result is because we haven’t provided an override for that subclass yet. Note that you do not get an error because not overriding a parent method is a perfectly valid thing to do.
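For reference, the read-or-create-a-default behavior in the Text override boils down to the following pattern, shown here with Python’s configparser purely for illustration. The section and key names are placeholders, not the testbed’s actual INI layout:

    import configparser

    def read_or_create(ini_path, section, key, default):
        cfg = configparser.ConfigParser()
        cfg.read(ini_path)
        if not cfg.has_section(section):
            cfg.add_section(section)
        if not cfg.has_option(section, key):
            # The value is missing (perhaps someone "cleaned up" the INI file),
            # so write the default back out before returning it.
            cfg.set(section, key, str(default))
            with open(ini_path, "w") as f:
                cfg.write(f)
        return cfg.get(section, key)

    # Hypothetical usage; the names are stand-ins for the real INI entries.
    period = read_or_create("testbed.ini", "Data Acquisition", "Default Sample Period", 1000)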

Updating the Database Initialization

But we need the database data too, so let’s backtrack to the Initialize New VI in Config Data_DB.lvclass. The class data for this subclass is a string that the code will use in one way or another to connect to the database. However, there are three possible sources for this string:

  • Most ADO interfaces use a rather complex and specialized connection string that can identify logical names, network paths, security parameters and a lot more.
  • Although Jet does use ADO drivers, its connection string is much simpler. In fact, with the exception of the file path, it can be largely treated as a big string constant.
  • SQLite (which we aren’t going to support right now, but which exists in the class hierarchy) doesn’t use any ADO parameters. Keeping with its minimalist approach, all it needs is a file path.

So what are we going to do? Well, if you said, this sounds like a job for dynamic dispatch, you’re right! We need to create a method that will derive the correct string depending on which specific subclass is present. So let’s create a virtual folder and a subdirectory named _protected in Config Data_DB.lvclass with Protected access scope. Inside the virtual folder we will create a new VI using the dynamic dispatch template called Get Connection String.vi, and save it in the subdirectory. In addition to the standard IO the template provides, we need to add a string output:

get connection string

After saving this new parent class method, we can fill in the overrides. However that code is pretty trivial, mostly scavenged from the original code, and has very little to do with OOP. Consequently, I won’t take time here to highlight it, but feel free to check it for yourself. Then with this work completed, we can finish the initialization code.

finish initializing the DVR - jet

Accessing the Database

Although we will be using a Jet database, note that the override for the Read Misc Settings method will reside in the Config Data_DB_ADO.lvclass subclass, not Config Data_DB_ADO_Jet.lvclass. The reason for this placement is that with the exception of the connection string (which we handle during initialization) the query logic is the same for Jet or most other ADO databases. By the way, what would we do if we ever did find a DBMS that needed something different? That’s right, we would just create a subclass for it and override the method in that new subclass.
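The division of labor between those two subclasses is perhaps easier to see in a quick text sketch, in Python and deliberately simplified; in the real code the connection string is derived during initialization and stored in the class data, and the provider string shown is just a typical Jet-style example:

    class ConfigDataDB:
        def _connection_string(self):
            raise NotImplementedError      # each database subclass supplies its own

    class ConfigDataADO(ConfigDataDB):
        # The query logic lives here because it is the same for Jet or most
        # other ADO databases.
        def read_misc_settings(self, key, default):
            conn = self._connection_string()
            return f"SELECT ... using {conn}"      # placeholder for the real query

    class ConfigDataJet(ConfigDataADO):
        # Jet only differs in how it connects: essentially a constant plus a file path.
        def __init__(self, db_path):
            self.db_path = db_path

        def _connection_string(self):
            return f"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={self.db_path};"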

In any case, most of the code we need now was lifted wholesale from the old configuration library:

read misc - jet

And with that we are done with the first of our blueprint VIs. However, this is also the procedure we will follow for the VIs that read the error handling parameters and the startup processes, and that load the machine configurations for each state machine clone. Again, I won’t step through the creation of this code because from the standpoint of this post’s main topic (object-oriented code development) there is nothing in these that is any different from the one we did go through. As a friend of mine used to say, “From here on, it’s just a matter of turnin’ the crank.”

Testing

To make the results of these changes easier to verify, I have made slight differences between the configuration in the INI file, and the configuration in the database. Notice that if you start the application with the INI file set to Text, the application launches with only two TC state machines running and the acquisition sample rate defaults to 500 msec. However, a setting of Jet produces the functionality we have seen before (3 state machines and a 1000 msec default sample rate).

Notice also that you have to make the INI file setting changes before starting the application. The logic could certainly have been written to make the settings reconfigurable on the fly, but it would have added another level of complication, so I will leave that “…as an exercise for the reader…”.

Before tagging this release of the code in SVN, I also went through the “recycled” code and made sure that the VIs were all saved in their proper locations and all the redundant code had been removed. A project window feature that made the latter task much easier was the ability to right-click on a folder and have LabVIEW list all the files with no callers. One thing to be careful about, however, is deleting “unused” files that are in classes. The logic behind this feature can’t identify VIs that are being dynamically linked — which pretty much describes dynamic dispatch VIs.

Finally, before closing out this post, I want to remind you of something important. I make no claim that this code is the best implementation for all possible situations. Although a lot of it is based on software that I developed for deliverable systems, the point of this blog is to provide you with examples, demonstrations, and models that you can then customize and mold to fit your customers’ specific needs. In music, there is the idea of “variations on a theme”. In fact, in a few cases the variations have become more famous than the original. So take the themes I offer here and feel free to deconstruct, rearrange and reassemble them into something that is new and exciting.

Testbed application Release 15
Toolbox Release 7

The Big Tease

What if I told you that LabVIEW incorporates a feature that can improve the quality of your code and reduce errors by helping to validate your math? Hey, this is a no-brainer! Everybody can use help validating their code. But, what would you say if I told you that most people never use this feature? Crazy…

Until Next Time…

Mike…

Objectifying the Testbed

Object-oriented programming as a technique promises a host of benefits, but suffers from the impression that it is in some way an “advanced” topic. In contrast, I feel that OOP is just a logical extension of the concepts that LabVIEW developers use every day. The basic problem has been with the way it has been taught. The various object-oriented frameworks that are overly complex and difficult to learn haven’t helped matters either. These bad “actors” often only serve to hide the inherent elegance of the OOP paradigm and scare off users that could benefit from it.

To help clear away some of the extraneous mystique, I have presented a brief introduction to the topic that provides a foundation sufficient to let us get into OOP by implementing a module for managing program configuration data that provides the calling application with a common interface regardless of how (or where) the data is stored.

Filling a Niche

Most of the work we will be doing will eventually replace the Configuration Management library. Now while this might sound like a major shift, it really is not because (like I repeatedly tell you) the whole point of good design is to make changes and upgrades like this possible. So let’s look at what this upgrade will need to do.

Normally a large part of any upgrade project is defining the requirements, but due to the design work that was put in originally, we already have a pretty good handle on what the new class structure has to do. In terms of surface functionality, we know we have to be able to handle all the same information as before — with, of course, the ability to add more when we want or need that ability.

Designing the Structure

The trick is going to be sorting out what new functionality will be needed under the covers. At the most basic level we need to be able to use either text files or databases to actually store the configuration data, so there we have two subclasses. But we need to consider whether each of those options needs to be broken down further.

On the “text file” side, the data might be coming from a standard INI file, or the code might be using a custom text file format. Custom text configuration files are very common when some of the configuration data is tabular, since it is a pain to store tabular data in an INI file. However, regardless of the format of the contents, the basic mechanism for reading and writing text files remains the same so it probably won’t be valuable to have any subclasses under “text files”.

On the “database” side of things, however, the situation is very different. First of all, in terms of connectivity, you can access most databases through the standard ADO (ActiveX Data Objects, also sometimes called OLE DB) interface. However, “most” is not the same as “all” and one common exception is SQLite. A popular, lightweight data management engine, SQLite can run on a variety of platforms — including some real-time systems. To keep its footprint small, SQLite utilizes a small custom DLL, rather than a large, but standardized, interface. So we need to make provisions for other types of connectivity by creating (for now) two subclasses below “database”: “ado” and “sqlite” — though we won’t be implementing the SQLite functionality right now.

Finally, what about “ado”? Can it be broken down further? Maybe. One of the advantages of ADO is that, for the most part, it does a pretty good job of hiding the differences between one database management system (or DBMS) and the next, but there are some variations it can’t paper over. These differences often relate to the version, or dialect, of SQL the DBMS speaks. However, sometimes differences arise because some DBMSs fundamentally don’t operate the same way. For example, while most DBMSs go to extraordinary lengths to hide exactly where and how the data is actually stored, Jet (the DBMS built into Windows) stores the data in a file you explicitly identify. Hence, while the connection to other DBMSs might be defined in terms of network paths and logical names, with Jet you are connecting to a particular file.

To provide for these sorts of functional nuances, let’s create a subclass below “ado” for “jet” — understanding there could be others in the future.

Configuration Data Classes

This is what the hierarchy looks like so far, all drawn out.
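If you prefer to see the hierarchy as text, here it is reduced to a skeleton of Python classes. The names are flattened for readability and are only stand-ins for the actual class files, which follow the hierarchical file-naming convention:

    class ConfigData:                      # top-level parent
        pass

    class ConfigDataText(ConfigData):      # INI files or custom text formats
        pass

    class ConfigDataDB(ConfigData):        # anything database-backed
        pass

    class ConfigDataADO(ConfigDataDB):     # databases reached through ADO/OLE DB
        pass

    class ConfigDataJet(ConfigDataADO):    # ADO, but the connection is to a specific file
        pass

    class ConfigDataSQLite(ConfigDataDB):  # placeholder; not being implemented right now
        pass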

Doing the Rough Framing

When you are building a house the first tradesmen to show up onsite are the carpenters to do the so-called “rough framing”. This process creates the skeletal form of the final house that is covered with rough exterior plywood. The idea is that later workers will fill in the details and fine tune the construction. And metaphorically speaking, that’s what we have to do now for our configuration data class.

For a class hierarchy, that framing consists of the directory structure and the class files themselves. Using the techniques I gave last time, I first create mirroring directory structures inside the project directory and the project itself…

Configuration Data Classes in Project

Note that I have also created some virtual folders inside the Config Data class which represent actual subdirectories.

  • Interface
    The VIs in this folder will have public access scope. In fact they will be the only VIs in the library that are so scoped. Because these VIs are the only ones that outside callers will be able to call, they alone form the interface between the class hierarchy and the rest of the code.
  • _private
    As the folder title implies, the files that go in here will have private access scope. This assignment means that they will only be accessible from other VIs in the top-level class.
  • _protected
    This is another folder that specifies access scope for its contents. In the case of protected scope, the VIs in this folder will only be accessible from the top-level class, or any of its child classes.

Creating the Interface

With the functional scaffolding in place, we can start filling in the blanks. And the first thing we need to do is create what I referred to as the “blueprint” in the previous post — the VIs that the rest of the application will call. After going through the public VIs in the old library we see that there are really only 4 VIs that the rest of the application uses directly:

  1. Get Default Sample Period.vi
  2. Get Error Handling Parameters.vi
  3. Get Processes to Launch.vi
  4. Load Machine Configuration.vi

Many of the others will still be used, but their presence will be hidden in subclasses. As I create these (for now) empty VIs, I make sure they have the same front panels and connector panes as the ones they will be replacing.

Adding Infrastructure

Getting back to our housebuilding analogy: After the framework is completed and the outside skin is on, the next job is to start installing some of the needed infrastructure, because without electric, water, sewer and perhaps gas connections, our new home is not much better than the cave dwellings that our prehistoric ancestors inhabited — and in some ways is far worse.

What we need to add to our nascent configuration management subsystem is some data handling, but not for our data. The data I’m talking about is the private internal data that the subsystem needs to maintain in order to do its job.

Thinking about what our various bits of code need to do, we see first that there is certain data that will commonly be needed regardless of how you actually end up getting the data. What I’m thinking about here is the name of the operator and a password. Now, some subclasses might need both, while others might need only one or the other, and that’s fine. The important point is that if all subclasses could potentially need at least some of this data, it has to be data that is associated with the top-level class. However, this requirement for global availability causes a problem.

Remember how when we were defining terms in the previous post we said that in the LabVIEW world an object is a wire? LabVIEW wires, and the data contained in them, are by definition not global. If you want data from a wire you need to be connected to it. So how can we create wires that are separate, but which still share at least some of the same data? Well, one very good way of doing it would be to create one central data store that all the wires can access, and as it turns out LabVIEW incorporates a feature that is very efficient and so is perfect for such an implementation. I’m talking about the Data Value Reference, or DVR.

Our approach will be simple. We first create a DVR that is defined to hold the data we need and make a reference to that DVR the class data for our Config Data class. Then to access that data, we create a family of data access VIs to insert data into, or read data from, the DVR.

The first step in this process is to create another virtual folder in the top-level class named _dvr with private access scope. Next, I create in that folder a typedef control named Config Mgr Data.ctl that consists of a cluster containing two strings, one for user name and one for password.

Config Mgr Data

When saving this control I create a subdirectory (called _dvr) to hold it. Likewise, I also create a VI in the same directory (and virtual folder) called Config Mgr DVR.vi that contains this code:

Config Mgr DVR

Two comments about this code. First, it has the basic form of a FGV where the variable value is the DVR reference. Because the case that generates the reference is only run once, this VI will always return a reference to the same DVR no matter how many times it is run. Second, the data for the DVR is the typedef we created. This point is critical. As with UDEs, if you define a DVR reference using a typedef, you can later change the typedef and it won’t break the reference.
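Functionally, the DVR VI behaves like this little Python sketch: one shared store, created on the first call and returned unchanged on every call after that. The threading lock is my addition to keep the sketch safe; the field names mirror the Config Mgr Data typedef:

    import threading

    _config_mgr_data = None                 # plays the role of the FGV's shift register
    _lock = threading.Lock()

    def config_mgr_dvr():
        """Create the shared data store on the first call; always return the same one."""
        global _config_mgr_data
        with _lock:
            if _config_mgr_data is None:
                _config_mgr_data = {"user name": "", "password": ""}
            return _config_mgr_data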

To make this DVR available and inheritable through the class, I copy and paste the reference indicator into the class data cluster, like so.

Config Mgr DVR in Class Data

Then to provide access to the DVR contents, I create protected scope VIs that I store in a virtual folder and subdirectory both named _data access. Here is what the read VI for the user ID parameter looks like…

User ID Read

…and the write VI…

User ID Write

I next realize that the two main subclasses also have some data that will need to be held in common for their subclasses. So I repeat the process I just used to store a file path for the file subclass and the connection string that the db subclass needs. Here’s what the project looks like now.

project with basic infrastructure

Finally, before moving on we are going to need a way to initialize all the logic we have created, and to do that we will take our first foray into the exciting — though sometimes confusing — world of creating dynamic dispatch VIs. The goal is to create a method called Initialize New that causes the DVR in a new instance of a class to automatically initialize itself.

I start by right clicking on the _protected virtual folder in Config Data.lvclass and from the New sub menu selecting the option to create a new VI using the dynamic dispatch template. I leave the front panel of the resulting VI the way it is, but I change the connector pane, edit the icon and add this code to the block diagram.

Initialize top-level dvr

If the DVR in the class data is not valid, I call the DVR VI to get a valid reference and then use that reference to populate the class data. If the DVR reference is valid, the false case (not shown) does nothing. When I save this VI, I put it in a subdirectory named _protected.

To create the subclass versions of this method, I right-click on the subclass name and from the New sub menu select the item to create a new VI for override — which is the technical term for what we are doing. We are overriding the parent functionality with different functionality in the child.

However before we get a new VI, LabVIEW opens a dialog to ask us which parent method we want to override. After double clicking on Initialize New we get our new VI, also called Initialize New. After saving this new VI, (the default location LabVIEW picks is perfect) I modify the code to look like this:

initialize new - child

Most of this logic should be familiar because it is the same as we did for the parent. If the child DVR reference is not valid, we initialize it. Otherwise, we do nothing. But what is that funny looking VI in front of the initialization logic?

Object-oriented methodology recognizes that there are going to be times when a method in a child class will need to do what the parent method does, but perhaps a bit more or maybe do it a bit differently. One solution to this situation would be to simply duplicate all the parent code in the child. However that approach would be wasteful. What object-oriented logic does instead is it allows a child to directly call its parent’s version of the method. So here, the code first calls the parent’s version of the initialization VI and then executes the logic to initialize itself. In the end, both the parent and the child will get initialized.
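In Python terms, the pattern is the familiar call to the parent’s implementation before doing the child’s own work. Here is a hypothetical sketch, not the testbed’s actual code, with simple dictionaries standing in for the DVRs:

    class ConfigData:
        def initialize_new(self):
            # Parent: make sure the shared top-level data store exists.
            if getattr(self, "_dvr", None) is None:
                self._dvr = {"user name": "", "password": ""}

    class ConfigDataText(ConfigData):
        def initialize_new(self):
            super().initialize_new()                 # call the parent's version first...
            if getattr(self, "_file_dvr", None) is None:
                self._file_dvr = {"ini path": ""}    # ...then initialize the child's own data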

Creating Objects

The time is now upon us to start tying this infrastructure together into an organized system. The first thing to sort out is how to create an object of a given type. One way is very similar to what you would do with a conventional datatype. If you wanted to create, say an I32 value, you would drop down a constant and start using it. In the same way, you can also drop down a class constant and wire to it, thus creating an object. The problem with this approach is that when LabVIEW instantiates a class it also loads into memory all the VIs associated with the class. What you can end up with is a situation where all the VIs for all the classes are loaded, even if you will never use some of the classes. The way to get around that problem is to load classes dynamically as you need them. This is the code I use to perform that operation.

create object dynamically

You will notice that the VI has no inputs, save the requisite error cluster. This is because the basic piece of information that specifies the specific class to be created will be loaded from the application INI file. But why the INI file? Isn’t the point of this exercise to get rid of configuration data in that file? Well yes, but there is a bit of a paradox at work here. Simply put, an application can’t go to a database for its setup data until it knows that it is supposed to be looking in a database, and which one. That basic piece of information has to be stored somewhere that will always be there, and on the Windows platform you have exactly two choices: the INI file and the Windows registry. Of the two, the INI file is much safer — you at least don’t have to worry about a user going wild and trashing their whole computer.

We are initially only going to support two options for managing configuration data, so we only need two settings (Text and Jet). The VI that communicates with the INI file reads one of these values from the Configuration Data key in the Data Management section and returns two values based on what it finds there: A relative path to the location of a class file, and the name of the file. Thanks to the naming convention we use, these two values are very closely related.

The remaining code loads the target class into memory and initializes it. In addition, the resulting object is buffered as in a FGV. For the details of how this process works, see the comments in the code. The final thing to notice is that the output from this VI is an indicator of the Config Data class datatype.
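Stripped of the LabVIEW-specific details, the dynamic creation logic amounts to something like the following Python sketch. The section and key names match the ones just described, but the module and class names are stand-ins for the class-file path and name that the real VI derives from its naming convention:

    import configparser
    import importlib

    _buffered_object = None          # buffered like an FGV: created once, then reused

    def create_config_object(ini_path="testbed.ini"):
        global _buffered_object
        if _buffered_object is None:
            cfg = configparser.ConfigParser()
            cfg.read(ini_path)
            flavor = cfg.get("Data Management", "Configuration Data", fallback="Text")
            class_name = {"Text": "ConfigDataText", "Jet": "ConfigDataJet"}.get(flavor, "ConfigDataText")
            module = importlib.import_module("config_data")    # hypothetical module
            obj = getattr(module, class_name)()                 # load and instantiate the class
            obj.initialize_new()                                # then initialize it
            _buffered_object = obj
        return _buffered_object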

Building Out the Remaining Methods

All that’s left now is to implement the code that does what the testbed needs done, however this post is getting long and I have already passed on a lot of information for you to absorb — so we’ll leave that discussion (and the testing!) for next week.

Until Next Time…

Mike…

Objectifying LabVIEW

I suppose a good place to start this post is with an admission that, in a sense, it is flying a false flag. One way that you could reasonably interpret the title is that in this post I am going to be showing you how to start using objects in LabVIEW. That interpretation is not correct, and the troublesome word is “start”. The fact of the matter is that you can’t use LabVIEW without interacting with objects and many parts of it (think: VI Server) are overtly object-oriented — even without an obvious class structure. The language is built on an object-oriented foundation and so, in a very real way, has been object-oriented since Version 1.

What I am going to be showing you is how to simplify your work by building your own classes. As I stated in the teaser last time, the starting point for this discussion is the recommendation given in NI’s object-oriented training class that you should make your first attempts at using explicit object-oriented techniques on small, easy-to-manage subsystems — or put more simply, we need to start with baby steps.

Object-oriented baby steps

OK, so this is the point in the presentation where most presenters haul out some standard theory and moth-eaten descriptions of objects and classes — often lifted wholesale from a book on C++ programming. The problem with this approach is of course that we aren’t C++ programmers, and the amount of useful information we can draw from an implementation of object-oriented programming that is so fundamentally flawed is minimal at best. The approach I intend to take instead focuses on key aspects of the technique that are of immediate, practical importance to someone who is working in LabVIEW and wants to take advantage of explicitly implementing object-oriented class structures.

A Quick Glossary

The first thing we need is a vocabulary that will let us talk about the topic at hand.

OOP Clouds

Now be forewarned that some of these definitions may not exactly match what you may read elsewhere, but they are correct for the LabVIEW development environment.

  • Class — An abstract datatype.
    If you think that sounds a lot like the definition of a cluster, you’re right! Due to the way LabVIEW implements object orientation, a class is essentially a very fancy cluster. In fact, when you create a class the first item that LabVIEW inserts into it is a typedef consisting of an empty cluster. Although you don’t have to put anything into the cluster, it provides a place to put data that is private to that class.
  • Object — An instance of a class.
    As with a normal cluster, every instance of a class has its own memory space. Consequently, a class wire is in most ways the same as any other wire in LabVIEW. We are still working in a dataflow environment.
  • Property — A piece of data that tells you something about the object.
    This is why there is a cluster at the heart of the class. You want to put in that cluster information that will describe the object in a way that is meaningful to your application. Because each instance of the class is a separate wire that has its own memory space, the data contained in the cluster describes that particular object.
  • Method — A VI that is associated with a particular class and which does something useful.
    So what do I mean by, “…something useful…”? Well that all depends on the class’ purpose. A class that is responsible for creating a visual interface might have a method that causes an object to draw itself, while a class that manages the interface to data storage would likely have a method to store or retrieve application data.

From this simple list of words we can begin to see the general shape of the arena in which we will be playing. To recap: A class is a kind of wire. An object is a particular wire. A property is data carried in the wire that describes it in a useful way, and methods use the object data to do something you need done.

Dynamic Dispatch

Now that we have a basic vocabulary in place that lets us talk about this stuff, there are a couple of concepts that we need to discuss. I want to start this exploration with the mechanism that LabVIEW uses to call methods. Referred to as dynamic dispatch, this feature is often a source of confusion to developers getting started with object-oriented programming. A good way to come to grips with dynamic dispatch is to compare and contrast it to a feature of LabVIEW with which you may already be familiar: polymorphism.

Polymorphism (from the perspective of the developer using a polymorphic subVI) is the ability of a single function to adapt to whatever datatype is wired to its inputs. For example, the low-level Add node in LabVIEW is polymorphic. Consequently, it can add scalar numerics of all types, as well as arrays of numerics of varied dimensions, clusters of numerics and even arrays of clusters of numerics.

Of course, from the perspective of the developer creating a polymorphic VI the view is much different. This flexibility doesn’t happen on its own. Rather, you have to create all the individual instance VIs that handle the various datatypes. For example, I often want to know if a value at a specific point in the code has changed from the last time this bit of code executed. So I created a polymorphic VI that performs this function. To create this subVI, I had to write variations of the same basic logic for about a half-dozen or so basic datatypes, as well as a version that used the variant datatype to catch everything else.

Dynamic dispatch (which is actually a form of polymorphism) works much the same way, but with a couple significant differences.

  • When the decision is made as to which instance VI is to be executed
    With conventional polymorphism, the decision of which instance VI to call happens as you wire in the subVI. In the case of my polymorphic subVI, as soon as I wire a U32 to the input, LabVIEW automatically selects the U32 version of the code. However, with dynamic dispatch, that decision gets put off until runtime with LabVIEW making the decision based on the datatype present on the wire as the subVI is called. Of course for that to work, you need a different kind of wire. Which brings us to the other point…

  • The criteria for choosing between VIs
    The wires that conventional polymorphism uses to select a VI all have one thing in common — they are all static datatypes. By that I mean that a wire is a U32, or a string or whatever and it can’t change on the fly. By contrast, with dynamic dispatch, the basis for selection is a wire that is an instance of a class, and the datatype of an object can be dynamic. However this variability is not infinite. A given class wire can’t hold just any object because class structure is also hierarchical.

Say you have a class named Geometric Shapes to Draw. You can define other classes (called subclasses) like Circle or Square that are interpreted by LabVIEW as being more specific instances of Geometric Shapes to Draw objects. Due to this hierarchical relationship, a given wire can be typed as a Geometric Shapes to Draw but at runtime really be carrying a Circle or Square. As a result, a dynamic dispatch VI can call different instance VIs based on the datatype at runtime.
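Here is that Geometric Shapes to Draw example in runnable Python, just to make the runtime selection concrete; the draw strings are obviously placeholders:

    class GeometricShapesToDraw:
        def draw(self):
            raise NotImplementedError

    class Circle(GeometricShapesToDraw):
        def draw(self):
            return "drawing a circle"

    class Square(GeometricShapesToDraw):
        def draw(self):
            return "drawing a square"

    def render(shape):
        # Which draw() runs is decided here, at run time, based on what is
        # actually on the "wire" -- not when this code was written.
        print(shape.draw())

    for shape in (Circle(), Square()):
        render(shape)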

However, one big thing that conventional polymorphism does have in common with dynamic dispatch is that the power doesn’t come for free. You still have to write the method VIs for dynamic dispatch to call.

Inheritance

Remember a moment ago I referred to class datatypes as being hierarchical? The fancy computer science concept governing the use of hierarchical class structures is called inheritance. The point of this label is to drive home the idea that not only are subclasses logically related to the classes above them in the hierarchy, but these so-called child classes also have access to the properties and methods contained in their parent classes. In other words they can “inherit” or use data and capabilities that belong to their parents.

Handled properly, inheritance can significantly reduce the amount of code that you have to write. Handled poorly, inheritance can turn an otherwise promising project into a veritable train wreck. Which brings up our last point…

Proper Organization

Although organization isn’t really a feature of object-oriented programming, it is nevertheless critical. The simple fact of the matter is that while a disorganized, undisciplined developer might be able to get by when working in conventional LabVIEW, introducing the explicit use of classes can result in utter chaos. The real object-oriented failures that I have seen over the years all shared a lack of organization, or at best inconsistent organization.

So what sort of organizational things am I talking about? Well it’s a lot of the same stuff that we have talked about before. For a more general discussion of the topic you can check out a post that I wrote very early on titled, Conventional Wisdom. What I want to do right now is highlight some of the points that are particularly important for object-oriented work.

The two main conventions (directory structure and file naming) go together because the point of one is to mirror the other. But rather than simply list some rules, I’ll demonstrate how this works. To start, I will create a directory that is named for the class hierarchy that I will build inside it. So if the point of this class hierarchy is, for example, to update my program’s user interface, I would call the directory something obvious like GUI Update. Inside this directory I would then create the top-level class with the file name GUI Update.lvclass. At this time I will also create a couple subdirectories (_subVIs and _typedefs) that I know I will undoubtedly be needing. Finally, I have learned over the years that being able to tightly control access to VIs is very important, so I will also create at this time a project library named GUI Update.lvlib and put into it the top-level class and a virtual folder called _subclasses with its access scope set to Private.

So the parent class is set up, but what about the subclasses? I simply repeat the pattern. Let’s say the GUI Update class has subclasses for three types of controls that it will need to update: Boolean, Digital and Cluster. I create subdirectories in the parent directory that are named for the subclass that will go into each, and hierarchically name the three subclasses GUI Update_Boolean.lvclass, GUI Update_Digital.lvclass, and GUI Update_Cluster.lvclass. I am also careful to remember to add the subclass files to the _subclasses virtual folder in the library, edit their icon overlays, and set their inheritance correctly — which is to say, identify their parents. Note that while the hierarchical naming structure doesn’t automatically establish correct inheritance, this convention does make it easier to visualize class relationships in the project file.

And so I go building each layer in my class hierarchy. With each new subclass I continue the same pattern so if I eventually want to find, say a subVI associated with the class GUI Update_Digital_Unsigned Word.lvclass, I know I will find it in the directory ../GUI Update/Digital/Unsigned Word/_subVIs.

Having a pattern to which you stick relentlessly — even one as simple as this one — will save you immeasurable amounts of time.

Creating the Blueprint

The next thing I do when creating a class hierarchy (but the last thing I want to talk about right now) is work out how the rest of the application will interface with my new GUI Update class. This is where the access scope we have been so careful to create comes into play. In the top-level class I always create a group of VIs that have their access scope set to public. These interface VIs form the totality of the external interface to the class hierarchy and so include the functions that define what the application as a whole needs GUI Update to do for it. The logical implications of this interface layer are why I sometimes call this step in the process, “Creating the Blueprint”.

In addition to providing a very clean interface, another advantage of having this “blueprint” is that if you ever need to expand your stable of subclasses, these interface VIs will serve as a list of functions you need to support in the new subclass — or at least a list of functions that you should consider implementing in the new subclass. To see what I mean, consider that the scenario we have been discussing is actually drawn from an application I created once. The list of public interface VIs was really very short: There was a method that read a value from a remote device and wrote it to the GUI object, one that looked for control value changes to write them to the remote device, and one that allowed the calling application to set control specific properties.

Of these, all GUI objects had to implement the first one because even the controls needed to be updated once a second. The reason for this constraint was that the remote device could also be reconfigured from a local interface and the LabVIEW application needed to keep itself up to date. However, the second interface method was only applicable to controls. Finally, the third interface method was implemented very rarely for the few subclasses that needed it.

What’s up next?

We have just about run out of space for this installment, but you may have noticed that something is missing from this post: Any actual LabVIEW code. Next time we will correct that sad situation by considering how to apply these principles to the creation of a class hierarchy that provides a common mechanism for storing and retrieving program data and setup parameters that works the same (from the application’s perspective at least) regardless of whether the program is interfacing with a database or text files.

Until Next Time…

Mike…