Dropping-In on the Testbed

Last time out we started exploring one common application of so-called “drop-in” VIs. The technique is based on the idea of creating VIs that can do something useful for the VI hosting them, but without interacting directly with that VI’s basic logic. The example we considered was manipulating the font and type size used to present textual data.

At the close of that post we had created a basic object-oriented structure that could manipulate the label or caption of any front panel control or indicator. I want to finish this discussion by looking at how to expand that basic implementation to allow it to set the properties of the text contained inside a control or indicator. For that we will return to our testbed application.

A Brief Recap

It has been a while since we have worked with this code, so a brief refresher on what it does is probably in order. The testbed application we will be modifying consists of several processes that run independently of one another. To begin with, there is a background process that oversees the reporting of errors that occur. Handling the user interface duties, a GUI process incorporates a subpanel that can display the front panels of several simulated acquisition and process-control VIs. The whole thing is kicked off by a launcher VI that loads the various processes into memory and starts them executing.

Our goal here will be to add the drop-in VI we created last time to all the user-facing VIs and add classes as necessary to allow it to handle the controls and indicators on those VIs. However, if you don’t already have a tool for editing database contents directly, you should first download a tool called Database .NET (the link is to a zip file, and is at the bottom of the page). The program is a simple utility that lets you examine and edit database data from a number of different DBMSs. I don’t know the folks who wrote this, and have no vested interest in the program other than that I have used it for years and found it very useful. Note that this program has no installer, so it has a very small footprint – it will even run from a USB stick. To “install” the program, simply create a directory for it on your computer and then drag into it the program that is inside the zip archive you downloaded, and installation is complete. The easiest way to invoke it is to set it as the default application for *.mdb files.

  • Note that if you decide to install this utility in a subdirectory of the Program Files (x86) directory, you may have to play around with the folder permissions a bit before it will run. Because the program generates several temporary files when it’s starting up, the user has to have Full Access to the folder in which it is installed.

One other caveat to bear in mind before we dive into the modifications is that these operations cannot override limits on these properties that might exist for other reasons. For example, these techniques will not work on controls that you have defined as strict typedefs. The reason: the strict typedef defines everything about the control’s appearance, and the property node will throw an error if you try to change those properties. Likewise, a System-themed control will let you change the font characteristics, but will complain if you try to change colors.

Making With the Modifications

So where do we start? Well, the first thing we need to do is to make a couple of minor tweaks to the Display Font Manager.vi. First, we need to define what happens to the drop-in’s errors. Because it’s important to preserve them, we will save the errors that arise in the drop-in to the same location where errors from the testbed application proper are stored – but without bothering the program’s operator. To accomplish that task, let’s reuse the subVI that the error handling logic uses to store error data.

Drop-in Error Handling

Note that I had to add a case structure because the location where this subVI was originally used only executed if there was an error. So unless we want to have spurious records being posted, we have to add that logic here.

Next, as the code is currently written, the error chain in the drop-in’s logic starts with the Error In control and terminates in the Error Out indicator. Although this arrangement works fine during development and testing, when the time comes to deploy the code, this is not what we want. As I said last time, drop-in VIs should not interact with the host VI and should not inject their own errors into the host’s error stream. Still, it can be useful to be able to use the drop-in’s error IO to establish data dependencies that control when it runs. The solution is for the drop-in to have error clusters, but not have them be connected internally.

Errors - Straight Through

Changing the Testbed

Now that we are ready to install the drop-in, we need to decide where to install it. Examining the code, we see that there are 5 VIs that are user-facing:

  1. The Launcher (testbed.vi)
  2. The Main GUI (Display Data.vi)
  3. The Temperature Controller (Temperature Controller.vi)
  4. Two “Acquisition” VIs (Acquire Ramp Data.vi and Acquire Sine Data.vi)

So the first thing I do is modify each of these VIs by dropping a copy of the drop-in VI onto its block diagram, outside the outermost loop. For example, this is what the modified launcher block diagram looks like:

testbed.vi with drop-in installed

As promised earlier, this is all the modification that the application will need – which means we are ready to start testing.

The First Test

“But wait a minute…” you protest. “…we haven’t configured anything yet. There’s nothing to test!”

Well, you’re half right. We have not gone into the database and configured any controls to be modified, but we still have something to test. We still have to verify the drop-in’s default behavior, which, by the way, is to do nothing. Yes, you read that right: we have to test that nothing happens. You see, a major aspect of the drop-in concept is that drop-ins don’t do anything unless they are explicitly told to through their configuration. Right now we have installed the drop-in code, but there are no controls configured in the database, so we need to make sure that the main application continues to run as it did before: no side-effects and no errors. In short, the drop-in right now should do nothing, and we need to make sure that it fulfills that requirement.

So launch the top-level VI (testbed.vi) or run the standalone executable. As before, the launcher will show the names of the processes it’s launching and when it finishes the main GUI will open. Again as before, you will be able to switch between screens using the popup menu and the plugins will operate just as they did before. Finally, if you look at the contents of the event table in the database, you will see that no errors have been generated.

It’s All About the Children

Now that we have “nothing” working, we need to finish implementing all the “somethings”. You will recall that when we ended last time I had created a basic implementation of the font manager functionality that could change the label or caption of any type of control. The tricky part, I said, was going to be implementing the subclass, or child, methods that would modify the font of a configured control’s contents. So let’s look at those children.

The String and Digital Subclasses

I chose to start with these two because they are the easiest to understand, and they are very much alike. Here’s the child method for handling strings…

String Subclass Method

…and the one for digital numerics…

Digital Subclass Method

In either subclass, the logic starts by calling the parent method (which handles labels and captions) and then extracting from the parent’s class data the reference to the control that will be manipulated. While that is going on, the Font Parameters data is unbundled and the Component to Set value controls what, if anything, happens next. If the selected component is Label or Caption, a case is selected that does nothing but pass the error cluster through. If, however, the selected component is Contents, the associated case casts the basic control reference from the parent class data to the control’s specific control class, and then sets the appropriate properties.
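Because LabVIEW classes are graphical, they don’t paste well into a blog post, so here is a rough Python analogy of the dispatch pattern just described. The class and attribute names are made up purely for illustration; the real logic lives in the VIs shown above.

class DisplayProperties:
    """Stand-in for the parent class: handles the parts every control has."""
    def __init__(self, control_ref):
        self.control_ref = control_ref   # the VI Server control reference

    def set_control_font(self, font_name, font_size, component):
        if component == "Label":
            part = self.control_ref.label
        elif component == "Caption":
            part = self.control_ref.caption
        else:
            return                       # "Contents" is left for the subclasses
        # labels and captions exist on every control, and their
        # properties are always named the same
        part.font_name = font_name
        part.font_size = font_size


class StringProperties(DisplayProperties):
    """Stand-in for the string child class: knows how to reach the text inside."""
    def set_control_font(self, font_name, font_size, component):
        # first let the parent take care of the label or caption
        super().set_control_font(font_name, font_size, component)
        if component == "Contents":
            # equivalent of casting the generic reference to the String class
            # and writing its Text.FontName and Text.FontSize properties
            self.control_ref.text.font_name = font_name
            self.control_ref.text.font_size = font_size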

The Boolean and Ring Subclasses

The next two I want to consider are, again, similar to each other, but differ from the preceding pair in that they represent control classes that don’t have any readily discernible textual value. Booleans represent logical true and false conditions, while rings are technically numerics, but the number that is their value doesn’t appear anywhere. In this sort of situation, the idea is to look for text that is not the control’s value but is associated with that value. For example, Boolean controls in LabVIEW can have textual displays that state the control’s condition. These strings are called Boolean Text and are often used to label push buttons or lights…

Boolean Subclass Method

Likewise, the Ring control appears to the user as a pop-up menu, so we can use this code to set the text properties of the text that appears in the menu…

Ring Subclass Method

The WaveformChart Subclass

Finally, we need to take the idea of strings that are only associated with data one more step. What about complex controls that can have multiple strings associated with their values? Objects like charts are good examples of what I am talking about. Just to start, there is text associated with the axis tick marks, there is text that forms the axis labels, and there is text in the plot legends.

The most flexible approach would be to figure out how to uniquely identify each of these components; however, we must be careful not to create an API that is so flexible that it is unusable. One solution would be to simply make all the text the same font and size – which is what they are by default anyway. A look that I prefer, however, is to have the tick mark labels slightly smaller than the axis labels. Here is one way to do that:

WaveformChart Subclass Method

As you can see, the code treats the two axes the same by combining references to them into an array and then passing that array into a loop that manipulates the display parameters. This logic makes the axis labels the size specified in the configuration, but does a bit of math to make the tick mark labels about 10% smaller. This difference might not seem like much, but it works. If this isn’t exactly what you want, that’s OK. The point here is not to present a canonical solution, but to present concepts and ideas that help you find your own way.
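The sizing rule in that loop amounts to a little arithmetic along these lines (a sketch of the math only, not of the property writes themselves):

def chart_font_sizes(configured_size):
    """Return (axis_label_size, tick_label_size) for a chart axis."""
    axis_label_size = configured_size                        # size from the configuration
    tick_label_size = max(1, round(configured_size * 0.9))   # about 10% smaller
    return axis_label_size, tick_label_size

# for example, a configured size of 24 gives 24-point axis labels
# and 22-point tick-mark labels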

Adding Configurations

Now we are ready to add the font definitions to the database. I have created a total of 12 definitions covering 9 different controls and indicators and you can see them all by examining the SQL file in the _repos subdirectory in the project (starting at line 27). However, to give you a taste of what the SQL code for this functionality looks like, here is the SQL for the table holding the font configurations, and the font definition for the string indicator on the front panel of the launcher.

CREATE TABLE ctrl_font_definition (
    id          AUTOINCREMENT PRIMARY KEY,
    owner_name  TEXT(50) WITH COMPRESSION,
    ctrl_name   TEXT(50) WITH COMPRESSION,
    font_name   TEXT(20) WITH COMPRESSION,
    font_size   INTEGER,
    ctrl_comp   TEXT(20) WITH COMPRESSION
  )
;

INSERT INTO ctrl_font_definition
  (owner_name, ctrl_name, font_name, font_size, ctrl_comp)
VALUES
  ('testbed.vi', 'progress', 'Segoe UI', 24, 'Contents')
;

The goal of these initial definitions is to “turn on” the functionality without changing too much. For example, the ‘Segoe UI’ font is the default font that LabVIEW uses on recent versions of the Windows platform. If you are running this code on the Macintosh or Linux (or an older version of Windows), the default font will be different. So on other platforms you may need to modify these definitions before you install them.

Once we have the definitions in the database, let’s try the testbed application again. You might not notice a lot of difference; that is sort of the point. This initial test is to reproduce the default values. One place where you will notice a difference is if you are running Windows and you have the display font scaling set to a non-default value. The text size will now always be the same relative to the size of the window, regardless of how the display setting changes.

From here I would recommend that you play around a bit and manually change the font and size of the various controls to see the effect.

Testbed Application – Release 16
Toolbox – Release 12
Testbed Installer – Release 16

Please note that I have included in this release a built version of the application so you can practice working with the database. The LocalDB.mdb file included with this installer has the table defined for holding the font definitions, but the table is empty. This release has two purposes: One, by adding to and manipulating the data in its database, you can see that you really can modify the visual presentation without changing code. Two, I have started using LabVIEW 2015 and realize that some of you may not have upgraded yet. If this version change is a problem, post a comment and I will send you a version of the code back-saved to LabVIEW 2014.

The Big Tease

One of the things that I like about NI Week is the opportunity to meet friends both new and old. Before a keynote address one morning I was talking to another one of the LabVIEW Champions, Jack Dunaway by name, and the topic of this blog came up. To make a long story short, he suggested a topic that sounded so good, I’m going to get started on it next time.

One good way of showing a lot of data in a small space is what is known as a tree control. It’s valuable because its structure is inherently hierarchical and so can display a lot of data while not taking up a lot of screen real estate. In addition, it can reduce the overwhelm that you sometimes feel when looking at large datasets because, when done well, it allows you to start with a high-level view of the data and gradually drill down to the specific results you want.

If you are working in Windows, there are two such controls available: one that is part of Windows, and one that is native to LabVIEW. So next time: the Native LabVIEW Tree Control. Be there or be square.

Until Next Time…

Mike…

Drop-Ins Are Always Welcome

One of the key distinctions of web development is that the standards draw a bright line between content and presentation. While LabVIEW doesn’t (so far) have anything as powerful as the facilities that CSS provides, there are things that you can do to take steps in that direction. The basic technique is called creating a “drop-in” VI. These functions derive their name from the fact that they are dropped into an existing VI to change the display characteristics, but without impacting the host VI’s basic functionality.

The Main Characteristics

The first thing we need to do is consider the constraints under which these VIs will need to operate. These constraints will both help set the scope of what we try to accomplish and inform the engineering decisions we have to make.

No Fraternization

The first requirement that a VI has to meet in order to be considered truly “drop-in” capable is that there must be no interaction between its logic and that of the VI into which it is being dropped. But if there is to be no interaction with the existing code, how is it supposed to change anything? Given that we are only talking about changing aspects of the data presentation, all we need is a VI Server reference to the calling VI, and that we can get using the low-level Call Chain function.

VI Server Accesses

As you can see, from the VI reference you can get a reference to the VI’s front panel, and from that you can get an array of references for all the objects on the front panel. It is those references that allow you to set such things as the display font and size – which just happen to be the two things we are going to be manipulating for this example.

One potential problem to be aware of is the temptation to use these references to do things that directly affect how the code operates. However, this is a temptation you must resist. Even though it may seem like you “got away with it this time”, sooner or later it will bite you.

To be specific, changing the appearance of data is OK, but changing the data itself is, in general, not. However, there is one exception that, when you think about it, makes a lot of sense: localization. Localization is the process of changing the text of captions or labels so they appear in the language of the user, and not the developer. This operation is acceptable because although you might be changing the value in, for example, a button’s Boolean text, you aren’t changing what the button does. The button will behave the same whether it is marked “OK”, “Si” or “Ja”.

Autonomous Error Handling

The next thing a drop-in has to be able to do is correctly manage errors. But here we have a bit of a conundrum. On the one hand, errors are still important, so you want to know if they occur. On the other hand, you don’t want this added functionality to interrupt the main code because an error occurred while configuring something the user would consider “cosmetic”.

The solution is for the drop-in to have its own separate error reporting mechanism that records errors, but doesn’t inject them into the main VI’s error chain. The error handling library we have in place already has the needed functions for implementing this functionality.

External Configuration Storage

Finally, the drop-in VI needs configuration data that is stored in a central location outside the VI itself – after all, we want this drop-in to be usable in a wide variety of applications and projects. For implementing this storage you have at your disposal all the options you had when creating the main application itself, and as with the main application, the selection of the correct storage location depends on how much of this added capability will be exposed to the user. If you intend to let the user set the values, you can put the settings in an INI file. You just need to make sure that you qualify the data they enter before you try using it. Otherwise you could end up in a situation where they specify a non-existent font, or a text size that is impossibly large or small.

To keep things simple for this test case, we will store the data in the same database that we use to store all the other configuration values. The data that we store in the database will also be (for now) simple. Each record will store the data needed to modify one part of one control, so it will contain a field for the name of the VI, the name of the control, an enumeration for selecting what part of the control is to be set, and finally the font name and size. The enumeration Component to Set will have 3 values: Label, Caption and Contents. Note that to keep things organized and easy to modify or expand, this structure as a whole, and the enumeration in particular, are embodied on the LabVIEW side in type definitions.
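Expressed in text form, each configuration record might look something like the sketch below. The field names are mine, chosen to mirror the description above rather than the actual LabVIEW typedef.

from dataclasses import dataclass
from enum import Enum

class ComponentToSet(Enum):
    LABEL = "Label"
    CAPTION = "Caption"
    CONTENTS = "Contents"

@dataclass
class FontDefinition:
    vi_name: str                 # fully qualified name of the host VI
    control_name: str            # label of the control or indicator to modify
    component: ComponentToSet    # which part of the control to set
    font_name: str
    font_size: int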

The Plan of Action

So how can we implement this functionality? The literary device of the “omniscient author” has always bothered me, so rather than simply heading off in a direction that I chose before I started writing, let’s take a look at a couple of implementation options and see which one of the two will work best for us. Remember that the only thing more important than coming up with the right answer is knowing how you came up with the right answer.

The “Normal” Way

For our first try, let’s start with the basic logic for getting the control references we saw a moment ago, and add to it a VI that returns the font configuration data for the VI that is being configured. Note that the input to this fetch routine (which gets the data from the application database) is the name of the VI that is calling the drop-in. This name is fully qualified, meaning it contains not just the VI name, but also the names of any library or class of which it might be a member.

Font Manager - Deadend

The output from the database lookup consists of a pair of correlated arrays. Correlated arrays are arrays where the data from a given element in one array correlates to, or goes with, the data from the same-numbered element in the other array. In this case, one array is a list of the control names and the other array is a list of all the font settings that go with the various parts of that control.

The first thing the code does is look to see if there are any font settings defined for the VI by checking to see whether one of the arrays is empty. It is only necessary to check one of the arrays since they will always have the same number of elements. If there are font settings defined for the VI, the code takes the array of control references from the VI’s front panel and looks at them one by one to determine whether the label for that particular control or indicator is contained in the array of control names. When this search finds a control that is in the list of control names, the code unbundles the font settings data and uses the Component to Set value to select the frame of a case structure that contains the property node for the specified component’s property.
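In text form, the lookup just described boils down to something like this sketch; the control reference objects and their label_text attribute are stand-ins for the VI Server references and properties.

def apply_font_settings(panel_controls, control_names, font_settings):
    """control_names and font_settings are the two correlated arrays:
    element i of one goes with element i of the other."""
    if not control_names:            # no settings defined for this VI, nothing to do
        return
    for ctrl in panel_controls:      # every control on the front panel
        if ctrl.label_text not in control_names:
            continue                 # this control is not configured, skip it
        settings = font_settings[control_names.index(ctrl.label_text)]
        # ...select the property to write based on settings.component,
        # e.g. Label, Caption or Contents...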

This approach works pretty well for labels and captions because all controls and indicators can, regardless of type, have them. In addition, regardless of whether the control is a string, numeric, cluster or what have you, the properties are always named the same. (The property for manipulating a control’s Caption is shown.)

Unfortunately, things begin to get complicated once you move past the properties that all controls share in common and start changing the font settings for the data contained inside the control – what we are calling the Contents. For example, the property for setting the font of the contents of a string control is called Text.FontName, whereas the property for setting the corresponding information in a digital numeric is called NumText.FontName. Things get even stranger when you start talking about setting the font of the Boolean text in the middle of a button, or worse the lines in a listbox – there each row has to be set individually.

The fundamental problem that this simple approach has is that the settings for controls and indicators are built on object-oriented principles. Labels and Captions are easy because they are common to all controls, but as soon as you start talking about text that is contained inside a control, you have to deal with a specific type, or subclass, of control. Plus, to even get access to the required properties you need to cast the generic Ctl reference to a more specific class like a Str (string) or DigNum (digital numeric). We could, of course, simply expand the number of items in the Component to Set enumeration to explicitly call out all the various components that we want to be able to modify. Then in each case we could do something like this:

'fixing' a problem

Because we know that the String Text is only valid for strings, we could cast the reference to the proper subclass, set the appropriate property, and call it done. If you look at very much code you will see this sort of thing being done all the time. But looking closer at those situations, you will also see all the code that gets put into trying to fix this implementation’s shortcomings. For example, because the subclass selection logic is in essence being driven by the enumeration, and the enumeration value is stored in the database, we have created a situation where the contents of the database need to be kept “in sync” with the controls on the front panels. Hence if a string control should be changed to a digital numeric (or vice versa), the database will need to be manually updated to track the change. This fact, in turn, means that we will need to add code to the VI to handle the errors that occur when we forget to keep the code and the database in sync.

As bad as that might sound, it is not the worst problem. The real deal-breaker is that every time you want or need to add support for another type of control, or another Component to Set, you will be back here modifying this VI. This ongoing maintenance task pretty much means that reusing this code will be difficult to impossible. Hopefully you can see that thanks to these problems (and these are just the two biggest ones), this “simple” approach built around a single case structure ends up getting very, very messy.

But if the object-oriented structure of controls is getting us into trouble, perhaps a bit more object orientation can get us out of trouble…

Riding a Horse in the Direction it’s Going

When programming you will often find yourself in a situation where you want to extend a structure that you can see, in a direction that you can’t yet fully see or understand. When confronting that challenge, I often find it helpful to take some time and consider the overall trajectory of the part of the structure I can see, and ask where it’s pointing. Invariably, if you are working with a well-defined structure (as you are here) the best solutions will be found by “riding the horse in the direction it’s already going”.

So what direction is this “horse” already going? Well, the first thing we see is that it is going in the direction of a layered, hierarchical structure. In the VI Server structure that we can see, we observe that the basic control class is not at the top of the hierarchy, but rather in the middle of a much larger structure with multiple layers both above and below it.

Menus

The other thing we can note about the direction of this architectural trendline is that the hierarchy we just saw is organized using object-oriented principles, so the hierarchy is a hierarchy of classes, of datatypes. Hence, each object is distinct and in some way unique, but the objects as a group are also related to one another in useful ways.

Taking these two points together it becomes clear that we should be looking for a solution that is similarly layered and object-oriented. However, LabVIEW doesn’t (yet) have a way to seamlessly extend its internal object hierarchy, so while developing this structure using classes of our own creation, we will need to be careful to keep “on track”.

Moving Forward

The basis for this structure is a class that we will call Display Properties.lvclass. Initially this class will have two public interface VIs: one, Create Display Properties Update Object.vi, does as its name says and creates an object associated with a specific control or indicator. This object will drive what is now the only other interface VI (Set Control Font.vi), which is configured for dynamic dispatch and will serve as the entry point for setting the font and size of text associated with GUI controls and indicators. I am building the class in this way because it is easy to imagine other display properties that we might want to manipulate in the future (e.g. colors, styles, localization, etc.). This is the code I use to dynamically load and create display property update objects:

Create Font Object

In general, it is very similar to code I have presented before to dynamically create objects, but there are a few differences. To begin with, the code does not buffer the object after it is created because, unlike the other examples we have looked at over the past weeks, these objects do not need to be persistent. In other words, these objects will be created, used and then discarded.

Next, to simplify their identification, all VI Server classes have properties that return a Class ID number and a Class Name. The code uses the latter value to build the path and class name of the child class being requested.

Finally, after the code builds the path and name of the subclass it wants to use, it checks to see if the class exists and only attempts to load it if the defining lvclass file is found. If the file is missing, the code outputs a parent class object. The reason for this difference is twofold:

  1. Without it, if a control class was called that we had not implemented, the code would throw an error. Consequently, in order to prevent those errors I would have to create dozens of empty classes that served no functional purpose – and that is wasteful of both my time and computer resources.
  2. With it, the logic extends what normally happens when a method is not overridden in a subclass to include the case where the subclass hasn’t even been implemented yet: the parent class and – more to the point – the parent methods are invoked (see the sketch below).
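In a text language, that “load the child if it exists, otherwise fall back to the parent” logic reads roughly like the sketch below. The load_class helper is a placeholder for the LabVIEW class-loading primitives, and the directory layout is an assumption made for illustration.

from pathlib import Path

class DisplayProperties:
    """Parent class stand-in; it only knows how to handle labels and captions."""
    def __init__(self, control_ref):
        self.control_ref = control_ref

def load_class(lvclass_path, control_ref):
    """Placeholder for dynamically loading the child class from its .lvclass file."""
    ...

def create_display_properties_object(control_ref, class_dir):
    # the VI Server Class Name property tells us which child class to look for
    class_name = control_ref.class_name          # e.g. "String" or "DigitalNumeric"
    child_path = Path(class_dir) / class_name / (class_name + ".lvclass")
    if child_path.exists():
        return load_class(child_path, control_ref)
    # no child class implemented yet: hand back a parent object, so the parent
    # methods run and unimplemented control types are quietly left alone
    return DisplayProperties(control_ref)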

Taking Care of Business

The dynamic dispatch VI Set Control Font.vi is obviously the parent method for what will eventually be a family of override methods that address specific types of controls. But that raises the question: what should go in this VI?

Well, think about it for a moment. In the first possible implementation we looked at, things initially looked promising because changing the font and size of labels and captions was so easy. You’ll remember that the reason they were easy was that all controls and indicators can have them and the properties are always named the same. That sounds like a pretty good description of what we would want in a parent method – so here it is:

Set Font Parent

The structure is pretty simple: the code retrieves the control reference from where it was stored in the class data and passes it into a case structure that has cases for Label and Caption. In addition, it has an empty case that handles the Contents value of Component to Set. This case is empty because that value will be handled during override. So all that is left to do right now is see how these VIs fit into the structure we examined earlier – all we really needed to replace was the case structure…

Font Manager

…and here it is. Nothing much new to see here, so let me just recommend that you take a good look at this code because you probably won’t be seeing it again. Since we will be adding functionality in the context of the class structure we created, we won’t need to revisit this logic any time soon, and maybe ever.

The Big Tease

So with the basic structure in place, all we have to do is start populating the subclasses we need. But that will have to wait for next time when I will also post all the code.

Until Next Time…

Mike…

More Than One Kind of Modularity

When learning something that you haven’t done before – like .NET – it’s not uncommon to go through a phase where you look at some of the code you wrote early on and cringe (or at least sigh deeply). The problem is that you are often not only learning a new interface or API, but also learning how best to use that interface or API. The cause of all the cringing and sighing is that as you learn more, you begin to realize that some of the assumptions and design decisions that you made were misguided, if not flat-out wrong. If you look at the code we put together last time to help us learn about .NET in general, and the NotifyIcon assembly in particular, we see a gold-plated example of just such code. Although it clearly accomplished its original goal of demonstrating how to access .NET functionality and illustrating how the various objects relate to one another, it is certainly not reusable – or maintainable, or extensible, or any of the other “-ables” that good software needs to be.

In fact, I created the code in that way so that this time we can take the lesson one step further to fix those shortcomings, and thus demonstrate how you can go about cleaning up code (your own or inherited) that is making you cringe or sigh. Remember, it is always worth your time to fix bad design. I can’t tell you how many times I have seen people struggling with bad decisions made years before. Rather than taking a bit of time to fix the root cause of their trouble, they continue to waste hours on project after project in order to work around the problem.

Ok, so where do we start?

Clearly this code would benefit from cleaning up and refactoring, but where and how should we start? Well, if you are working on an older code base, the question of where to start will not be a problem. You start where the most pain is. To put it another way, start with the things that cause you the biggest problems on a day-to-day basis.

This point, however, doesn’t mean that you should just sit around and wait for problems to arise. As you are working, always ask yourself whether what you are doing has limitations, or embodies assumptions that might cause problems in the future.

The next thing to remember is that this work can, and should, be iterative. In other words, you don’t have to fix everything at once. Start with the most egregious errors, and address the others as you have the opportunity. For example, if you see the code doing something stupid like using a string as a state variable, you can fix that quickly by replacing the strings with a typedef enumeration. I have even fixed some long-standing bugs by doing this replacement because it resolved places where states were subtly misspelled or contained extraneous spaces.
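The same fix translates to any language: replace the free-form strings with an enumerated type so a misspelled state becomes an error the tooling can catch, rather than a case that silently never matches. A minimal sketch:

from enum import Enum, auto

class AppState(Enum):
    IDLE = auto()
    ACQUIRING = auto()
    SHUTTING_DOWN = auto()

def handle(state: AppState):
    if state is AppState.ACQUIRING:      # "Acquiring " with a stray space can
        ...                              # no longer sneak past this comparison
    elif state is AppState.SHUTTING_DOWN:
        ...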

Finally, remember that the biggest payoffs, in terms of long-term benefit, come from improved modularity that corrects basic architectural problems. As we shall see in the following discussion, I include under this broad heading modularity in all its forms: modular functionality, modular logic and modular data.

Revisiting Modular Functionality

Modular functionality is the result of taking small reusable bits of code and encapsulating them in routines that simplify access, standardize the interface or ensure proper usage. There are good examples of all these usages in the modified application code, starting with Create NotifyIcon.vi:

Create NotifyIcon VI

Your first thought might be to ask why I bothered turning this functionality into a subVI. After all, it’s just one constructor node. Well, yes, that is true, but it’s also true that in order to create that one node you have to remember multiple steps and object names. Even though this subVI appears rather simple, if you consider what it would take to recreate it multiple times in the future, you realize that it actually encapsulates quite a bit of knowledge. Moreover, I want to point out that this knowledge is largely stuff for which there is no overwhelming benefit to be gained from committing it to memory.

Next, let’s consider the question of standardizing interfaces. Our example in this case is a new subVI I created to handle the task of assigning an icon to the interface we are creating. I have named it Set NotifyIcon Icon.vi:

Set NotifyIcon Icon VI

You will remember from our previous discussion that this task involves passing a .NET object encapsulating the icon we wish to use to a property node for the NotifyIcon object. Originally, this property was combined with several others on a single node. A more flexible approach is to break up that functionality and standardize the interfaces for all the subVIs that will be setting NotifyIcon properties so they simply consist of an object reference and the data to be used to set the property, expressed in a standard LabVIEW datatype – in this case a path input. This approach also clearly simplifies access to the desired functionality.

Finally, there is the matter of ensuring proper usage. A good place to highlight that feature is in the last subVI that the application calls before quitting: Drop NotifyIcon.vi.

Drop NotifyIcon VI

You have probably been warned many times about the necessity of closing references that you open. However, when working with .NET objects, that action by itself is sometimes not sufficient to completely release all the system resources that the assembly had been using. Most of the time, if you don’t completely close out the assembly you may notice memory leaks or errors from attempting to access resources that still think they are busy. However, with the NotifyIcon assembly you will see a problem that is far more noticeable, and embarrassing. If you don’t call the Dispose method your program will close and release all the memory it was using, but if you go to the System Tray you’ll still see your icon. In fact, you will be able to open its menu and even make selections – it just doesn’t do anything. Moreover, the only way to make it go away is to restart your computer.

Given the consequences of forgetting to include this method in your shutdown sequence, it is a good idea to make it an integral part of the shutdown code, so it can’t be forgotten.

Getting Down with Modular Logic

But as powerful as this technique is, there can still be situations where the basic concept of modularity needs to be expressed in a slightly different way. To see such a situation, let’s look at the structure that results from simply applying the previous form of modularity to the problem of building the menus that go with the icon.

Create ContextMenu VI

Comparing this diagram to the original one from last time, you can see that I have encapsulated the repetitive code that generated the MenuItem objects into dedicated subVIs. By any measure this change is a significant improvement: the code is cleaner, better organized, and far more readable. For example, it is pretty easy to visualize what menu items are on submenus. However, in cases such as this one, this improved readability can be a bit of a double-edged sword. To see what I mean, consider that for the structure of your code to allow you to visualize your menu organization, said organization must be hard-coded into the structure of the code. Consequently, changes to the menus will, as a matter of course, require modification of the fundamental structure of the code. If the justification for modularity is to include concepts like flexibility and reusability, you just missed the boat.

The solution to this situation is to realize that there is more than one flavor of modularity. In addition to modularizing specific functionality, you can also modularize the logic required to perform complex and changeable tasks (like building menus) that you don’t want to hard-code. If this seems like a strange idea to you, consider that computers spend most of their time using their generalized hardware to perform specialized tasks defined by lists of instructions called “programs”. The thing that makes this process work is a generalized bit of software called a “compiler” that turns the programs into data structures that the generalized hardware can use to perform specialized actions.

Carrying forward with this line of reasoning, what we need is a simple way of defining a menu structure that is external to our program, and a “menu compiler” that turns that definition into the MenuItem references that our program needs. So let’s build one…

Creating the Data for Our Menu Compiler

So what should this menu definition look like? Well, to answer that question we need to start with the data required to define a single MenuItem. We see that as a minimum, every item in a menu has to have a name for display to the user, a tag to identify it, and a parent tag that says whether the item has a parent item (and if so, which item is its parent). In addition, we haven’t really talked about it, but the order of references in an array of menu items defines the order in which the items appear in the menu or submenu – so we need a way to specify each item’s menu position as well. Finally, because the finished menu will consist of a list (array) of menu item references, it makes sense to express the overall menu definition that we will eventually compile into that array of references as a list (and eventually also an array).

But where should we store this list of menu item definitions? At least part of the answer to this question depends on who you want to be able to modify the menu, and the level of technical expertise that person has. For example, you could store this data in text files as INI keys, or as XML or JSON strings. These files have the advantage of being easy to generate and readily accessible to anyone who has access to a text editor – of course, that is their major disadvantage as well. Databases, on the other hand, are more secure, but not as easy to access. For the purposes of this discussion, I’ll store the menu definitions in a JSON file because, when done properly, the whole issue of how to parse the data simply goes away.

To see what I mean, here is a nicely indented JSON file that describes the menu that we have been using for our example NotifyIcon application:

[
	{
		"Menu Order":0,
		"Item Name":"Larry",
		"Item Tag":"Larry",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Moe",
		"Item Tag":"Moe",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"The Other Stooge",
		"Item Tag":"The Other Stooge",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":3,
		"Item Name":"-",
		"Item Tag":"",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":4,
		"Item Name":"Quit",
		"Item Tag":"Quit",
		"Parent Tag":"",
		"Enabled":true
	},{
		"Menu Order":0,
		"Item Name":"Curley",
		"Item Tag":"Curley",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":1,
		"Item Name":"Shep",
		"Item Tag":"Shep",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	},{
		"Menu Order":2,
		"Item Name":"Joe",
		"Item Tag":"Joe",
		"Parent Tag":"The Other Stooge",
		"Enabled":true
	}
]

And here is the LabVIEW code that will convert this string into a LabVIEW array (even if it isn’t nicely indented):

Read JSON String

JSON has a lot of advantages over techniques like XML: for starters, it’s easier to read and a lot more efficient. But the reason I really like using JSON is that it is so very convenient.

Starting the Compilation

Now that we have our raw menu definition string read into LabVIEW and converted into a datatype that will simplify the next step in the processing, we need to ensure that the data is in the right order. To see why, we need to remember that the final data structure we are building is hierarchical, so the order in which we build it matters. For instance, “The Other Stooge” is a top-level menu item, but it is also a submenu so we can’t build it until we have references to all the menu items that are under it. Likewise, if one of the items under it is a submenu, we can’t build it until all its children are created.

So given the importance of order, we need to be careful how we handle the data, because none of the available storage techniques can on their own guarantee proper ordering. The string formats can all be edited manually, and it’s not reasonable to expect people to always type in data in the right order. And even though databases can sort the results of queries, there isn’t enough information in the menu definition to allow them to do so.

The menu definition we created does have a numeric value that specifies the order of items in their respective menus and submenus. We don’t, however, yet have a way of telling the level at which the items reside relative to the overall menu structure. Logically we can see that “Larry” is a top-level menu item, and “Shep” is one level down, but we can’t yet determine that information programmatically. Still, the information we need is present in the data; it just needs to be massaged a bit. Here is the code for that task:

Ordering the Menu Items

As you can see, the process is basically pretty simple. I first rewrite the Item Tag value by adding the original Item Tag value to the colon-delimited list that starts with the Parent Tag. I then count the number of colons in the resulting string, and that is my Menu Level value. The exceptions to this processing are the top-level menu items, which are easy to identify by their null parent tags. I simply force their Menu Level values to zero and replace the null string with a known value that will make the subsequent processing easier. The real magic, however, occurs after the loop stops. The code first sorts the array in ascending order and then reverses the array. Due to the way the 1D array sort works when operating on arrays of clusters, the array will be sorted first by Menu Level and then by Menu Order – the first two items in the cluster. This sorting, in concert with the array reversal, guarantees that the children of a submenu will always be processed before the submenu item itself.
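Here is roughly what that massaging looks like in text form (a sketch using the JSON field names shown above; “[top-menu]” is the known value substituted for the null parent tags):

import json

TOP_MENU = "[top-menu]"       # known value substituted for the null parent tags

def order_menu_items(json_text):
    """Derive each item's menu level from its tag path, then sort so that the
    children of a submenu always come before the submenu item itself."""
    items = json.loads(json_text)
    for item in items:
        if item["Parent Tag"] == "":
            item["Menu Level"] = 0                 # a top-level item
            item["Parent Tag"] = TOP_MENU
        else:
            # rebuild the Item Tag as a colon-delimited path and count the
            # colons to get the item's depth in the menu structure
            item["Item Tag"] = item["Parent Tag"] + ":" + item["Item Tag"]
            item["Menu Level"] = item["Item Tag"].count(":")
    # ascending sort on (Menu Level, Menu Order), then reverse the whole list,
    # which mirrors the 1D array sort and array-reverse on the diagram
    items.sort(key=lambda it: (it["Menu Level"], it["Menu Order"]))
    items.reverse()
    return items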

Some of you may be wondering why we go to all this trouble. After all, couldn’t we just add a value to the menu definition data to hold the Menu Level? Yes, we could, but it’s not a good idea, and here’s why. In some areas of software development (like database development, for instance) the experts put a lot of store in reducing “redundancy” – which they define basically as storing the same piece of information in more than one place. The problem is that if you have redundant information, you have to decide how to respond when the two pieces of information that are supposed to be the same, aren’t. So let’s say we add a field to the menu definition for the menu level. Now we have the same piece of information stored in two different places: It is stored explicitly in the Menu Level value while at the same time it is also stored implicitly in Parent Tag.

Generating the Menu Item “Code”

In order to turn this listing into the MenuItem references we need, we will pass this sorted and ordered array into a loop that will process one element at a time. And here it is:

Compiling the Menu-1

You can see that the loop carries two shift registers. The top SR holds a 1D array of strings that consists of the submenu tags that the loop has encountered so far. The other SR also carries a 1D array but each element in it is a cluster containing an array of MenuItem references associated with the submenu named in the corresponding element of the top SR.

As the screenshot shows, the first thing that happens in the loop is that the code checks to see if the indexed Item Tag is contained in the top SR. If the tag is missing from the array, it means that the item is not a submenu, so the code uses its data to create a non-submenu MenuItem. In parallel with that operation, the code is also determining what to do with the reference that is being created by looking to see if the item’s Parent Tag exists in the top SR. If the item’s parent is also missing from the array, the code creates entries for it in both arrays. If the parent’s tag is found in the top SR, it means that one or more of the item’s sibling items have already been processed, so code is executed to add the new MenuItem to the array of existing ones:

Compiling the Menu-2

Note that the new reference is added to the top of the array. The reason for this departure from the norm is that due to the way the sorting works, the menu order is also reversed, and this logic puts the items on each submenu back in their correct order. Note also that during this processing the references associated with the menu items are also accumulated in a separate array that will be used to initialize the callbacks. Because the array indexing operation is conditional, only a MenuItem that is not a submenu will be included in this array.

Generating the Submenu “Code”

If the indexed Item Tag is found in the top SR, the item is a submenu and the MenuItem references needed to create its MenuItem should be in the array of references stored in the bottom SR.

Compiling the Menu-3

So the first thing the code does is delete the tag and its data from the two arrays (since they are no longer needed) and use the data thus obtained to create the submenu’s MenuItem. At the same time, the code is also checking to see if the submenu’s parent exists in the top SR. As before, if the Parent Tag doesn’t exist in the array, the code creates an entry for it, and if it does…

Compiling the Menu-4

…adds the new MenuItem to the existing array – again at the top of the array. By the time this loop finishes, there should be only one element in each array. The only item left in the top SR should be “[top-menu]”, and the bottom SR should be holding the references to the top-level menu items. That array of references is in turn used to create the ContextMenu object, which is written to the NotifyIcon object’s ContextMenu property.
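Continuing the sketch from above, the whole compilation pass can be expressed in a few lines of text code. The MenuItem class here is a simple stand-in for the .NET MenuItem references, and a dictionary plays the role of the two shift registers:

from dataclasses import dataclass, field

@dataclass
class MenuItem:
    name: str
    children: list = field(default_factory=list)   # empty for ordinary items

def compile_menu(ordered_items, top_menu="[top-menu]"):
    pending = {}          # submenu tag -> child MenuItems collected so far
    callback_items = []   # ordinary (non-submenu) items, for wiring up callbacks
    for item in ordered_items:
        tag = item["Item Tag"]
        if tag in pending:
            # this item is a submenu; its children were already processed
            menu_item = MenuItem(item["Item Name"], children=pending.pop(tag))
        else:
            menu_item = MenuItem(item["Item Name"])
            callback_items.append(menu_item)
        # hand the new reference to its parent, creating the entry if needed;
        # inserting at the front undoes the reversed processing order
        pending.setdefault(item["Parent Tag"], []).insert(0, menu_item)
    # when the definition is consistent only "[top-menu]" is left; anything
    # else is an orphaned submenu (the simple error check discussed below)
    assert list(pending) == [top_menu], "orphan menu(s): " + str(list(pending))
    return pending[top_menu], callback_items

Feeding it the ordered list from the previous sketch yields the five top-level items in their original order, with “The Other Stooge” carrying its three children.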

What Could Possibly Go Wrong?

At this point, you can run the example code and see an iconic system tray interface that behaves pretty much as it did before, but with a few extra selections. However, we need to have a brief conversation about error checking, and frankly there are two schools of thought on this topic. There is ample opportunity for errors to creep into the menu structure. Something as simple as misspelling a parent tag name could result in an “orphan” menu that would never get displayed – or could end up being the only one that is displayed. So the question is: how much error checking do we really need to do? There are those who think you should spend a lot of time going through the logic looking for and trapping every possible error.

Given that most menus should be rather minimal, and errors are really obvious, I tend to concentrate on the low-hanging fruit. For example, one simple check that will catch a large number of possible errors is to look, at the end of the processing, for more than one menu name left in the top SR – and on finding an extra one, assert an error that gives the name of the extra menu. You should probably also use this error as an opportunity to abort the application launch, since you could otherwise be left in a situation where you can’t shut down the program because the “Quit” option is missing.

Something else that you might want to consider is what to do if the external file containing the menu definitions comes up missing. The most obvious solution is to, again, abort the application launch with some sort of appropriate error message. However, depending on the application it might be valuable to provide a hard-coded default menu that doesn’t depend on external files and provides a certain minimum level of functionality. In fact, I once worked on an application where this was an explicit requirement because one of the things that the program allowed the user to do was create custom menus, the structure of which was stored in external files.

Stooge Identifier – Release 2
Toolbox – Release 11

The Big Tease

So what are we going to talk about next time? Well, something that I have seen coming up a lot lately on the user forum is the need to be able to work with very large datasets. Often, this issue arises when someone tries to display the results of a test that ran for several hours (or days!) only to discover that the complete dataset consists of hundreds of thousands of separate datapoints. While LabVIEW can easily deal with datasets of this magnitude, it should be obvious that you need to really bring your memory management “A” game. Next time we will look into how to plot and manage VLDs (Very Large Datasets).

Until Next Time…

Mike…

Conventional Wisdom

The dictionary defines a “convention” as a set or standardized way of doing something. In LabVIEW programming, conventions are an effective organizational tool that provides an avenue for solving many common problems. In fact there are far more opportunities for defining useful conventions than there is room in this post, so to get you thinking we’ll just cover some of the key areas.

Directory Conventions

The topic of directory structure is a good place to start this discussion because this area is often neglected by developers. As you might expect, their primary focus is getting their code running — not where to store the code on disk. After all, one place is as good as any other, right?

Uh, not so much, no… Storing files just anywhere can cause or facilitate a variety of problems. For example, one of the most common is that you can end up with multiple VIs sporting the same file name at various locations on disk. The best case scenario is that your application gets cross-linked to the wrong version of a file. The worst case situation is that your code gets cross-linked to a completely different file that just happens to have the same name. And if you’re really unlucky, the wrong file will have the same connector pane and IO as the one you want, so your code won’t even show up as being broken — it just stops working.

More problems can also arise when you decide to try to reuse the code on a new project or backup your work. If the files are spread all over your disk, how can you be absolutely sure you got everything? And the right version of everything? Of course, I have seen the opposite problem too. A developer puts EVERYTHING in one directory. Now you have a directory containing hundreds of files, so how do you find anything?

The correct answer is to go back to first principles and put a little effort into designing your development environment. If you think about it, there are basically two kinds of code you need to store: code that is specific to a particular project, and code that you are planning on reusing. Likewise, you want to minimize the code “changes” that are associated with a VI having been moved on disk. Consequently, I like to start at the root of a hard drive (it doesn’t matter which one) and create a folder called Workspace.

image

This directory could actually be named anything; the point is that it will be the home of all my LabVIEW code. Inside this directory I create two sub-folders. The Toolbox sub-folder contains all my code that is intended for reuse. To keep it organized, its contents are further broken up into sub-folders by function type. In addition, I make it easy to get to my reusable code by pointing the user.lib menu link for the LabVIEW function palette to this toolbox directory.

The other sub-folder inside my workspace directory is called Projects. It contains sub-folders for the various projects I am working on. Because I sometimes work for multiple clients, I also have my projects segregated by the year that I got the contract. If you work for a single company but support multiple departments, you might want to have separate folders for the various departments.

Please note that this directory structure is one that has worked well for me over the years but it isn’t written in stone. The main thing is that you be consistent so you (and people who in the future might be supporting your code) can find what is needed.

Naming Conventions

The next area where having some set conventions can pay big dividends is the naming of things. If you are a part of a larger organization, this is also an area where you should consider having a meeting to discuss the issue, because it can have long-term effects on the way you and your group operate.

Online you will find a lot of recommendations as to how to name all sorts of things from entire projects to individual controls and indicators. The thing to remember is that some of these suggestions originated years ago and so are no longer appropriate — or necessary. For example, some people still advise you to prefix the name of a VI with the name of the project in which the VI is used. This technique, however, is counter-productive because it complicates the reuse of the VI. In addition, if you are making appropriate use of LabVIEW libraries (*.lvlib) it is unnecessary because the name of the library creates a namespace that, for LabVIEW’s internal use, is prepended to the VI’s file name.

Say, for instance, you have a library called Stop Application.lvlib, and let’s further say that the library references a VI called Generate Event.vi. Now without the library, this VI name would be a problem because it is too non-specific. It is easy to imagine dozens of VIs that could have that same name. But because the VI is associated with a library, as far as LabVIEW is concerned, the name of that VI is Stop Application.lvlib:Generate Event.vi. By the way, you’ll notice that this composite name reads pretty nicely for human beings too. The only remaining issue is that your OS doesn’t know about LabVIEW’s naming conventions, which is why I always put libraries (and the files they reference) in separate directories with the same name as the library, but minus the lvlib part.
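On disk, that convention gives a layout along these lines (using the names from the example above; the extra member VIs are placeholders):

Stop Application\
    Stop Application.lvlib
    Generate Event.vi
    ...other member VIs...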

Another common place where a naming convention can be helpful is in showing fixed relationships that LabVIEW doesn’t otherwise illustrate. A good use case for this type of naming convention would be with LabVIEW classes. Currently (as of LV2014) you can name a class anything you want but LabVIEW doesn’t organize the class lists in the project file to illustrate inheritance relationships. Until NI decides to fill this gaping hole, you can work around it by implementing a naming convention that (when sorted by name) shows the inheritance relationships. Starting with a top-level parent class, like database.lvclass, you build a hierarchical name for the subclasses by prepending the base name and an underscore character to the subclass name. In this case you might have two subclasses: one for ADO connections and one for SQLite (which uses its own dll). So you would create classes with the names database_ado.lvclass and database_sqlite.lvclass. Now within ADO-compatible databases there is a lot of commonality, but there can be differences so to cover those cases you might want to further define subclasses like database_ado_access.lvclass, database_ado_sqlserver.lvclass and database_ado_oracle.lvclass. Obviously this technique has limitations — like it has to be maintained manually, and you can end up with very long file names — but until NI addresses the basic issue, it is better than nothing.

Two other important conventions are that you should never use the same name to refer to two things that are structurally or functionally different. Likewise, you should never use two different names to refer to the same object.

Style Conventions

At NI Week this year, I had an interesting meeting. A young developer came up to me and asked whether he was right in assuming that I had written a particular application. I said I had, but asked how he knew. He replied that he had worked on some code of mine on an earlier contract and recognized my coding style. For him it was an advantage, because when making a modification he knew where to start looking.

Basically, style conventions define a standardized way of addressing certain common situations. For example, I make use of units for floating-point numbers to the greatest degree that I can because it simplifies code. One common situation that illustrates this point is the need to perform math on time values. If you want to generate a time that is exactly 1 day away from the present, you have a few options:

image

This works, but it assumes that the maintainer is going to recognize that 86,400 is the number of seconds in a day.

image

This is also a possibility and, thanks to constant folding, is just as efficient as the previous example. However, it takes up an awful lot of space on the diagram and it essentially hardcodes a 1-day decrement — though that might not be clear at first glance either.

image

Personally, I like this solution because it literally expresses the math involved: Take the current time expressed in days, subtract one day from it and express the result in seconds. Remember that the point of good style is to make it clear what the code is logically doing. Great style makes it clear what you were thinking.
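LabVIEW’s units have no direct equivalent in text languages, but the same readability spectrum shows up anywhere you do timestamp math. A quick Python analogy (not the code in the images above):

import time

SECONDS_PER_DAY = 24 * 60 * 60   # 86,400

# magic number: works, but the reader has to recognize 86,400
timestamp = time.time() - 86400

# spelled out inline: clearer, but clutters every place you need it
timestamp = time.time() - (24 * 60 * 60)

# named constant: states the intent, which is the closest a text language
# gets to letting units carry that meaning for you
timestamp = time.time() - 1 * SECONDS_PER_DAY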

So that’s enough for now to introduce you to the concept of conventions. But be assured, it will come up again.

Until next time…

Mike…