Pay Attention to Units

The title for this post was a common quote from a favorite teacher of mine in high school. Mr. Holt taught chemistry and physics and it seemed like I spent most of my life in his classroom — and I guess I did, because in four years of high school I took 3 years each of chemistry and physics. Ours was a small school in a small Missouri town with a small budget, so we spent a lot of time (especially in physics) working on what would today be called, “mathematical modelling”. Back then, we just called it, “figuring out what would likely happen if we did something that we can’t really try because there isn’t enough money available to actually run the experiment”.

It was in those classes that I began to develop an understanding of, and appreciation for, all the pieces of contextual information that surround numbers. For example, if you have the number 4 floating around in a computer, the most basic thing you can define about that number is what that 4 is counting or quantifying. Is it the mass of something (4 pounds)? Is it the cost of something in England (also, interestingly, 4 pounds)? Or is it a measure of how much your college roommate could drink before passing out (4 beers)?

In my work today, where I sometimes still have budgetary constraints, the need to remember that I have to, “…pay attention to the units…” remains. The difference is that now I have a powerful ally in the form of the facility that LabVIEW provides for assigning units to floating point numbers.

LabVIEW Units Aren’t Smarter Than You Are

Before we get into the meat of the discussion, however, we need to deal with a few concerns and considerations that developers — even some very senior developers — have about using units. While there are a few behaviors that are clearly bugs, there are many more that people think are bugs but really aren’t. We should, of course, not be surprised by this situation. Anything related to floating-point will exhibit similar issues. For example, there used to be claims that LabVIEW didn’t do math correctly because the results of some calculations were different from what you got from Excel. Such claims ignored the fact that at the time Excel did all math in single-precision, while LabVIEW has always used double-precision math.

Similarly, a lot of the upset over “bugs” in units handling comes from an incomplete understanding of the entire problem. The simple truth is that there are a lot of things we do unconsciously when handling values with units, so when we try to automate those operations, things can get messy. For example, we realize that sometimes when we square a number with units we want to square the unit, so meters become meters². But sometimes we just want to square the number. Does it, after all, really make sense to convert volts into volts²? In implementing the units functionality, NI had to deal with this sort of ambiguity in something approaching a systematic manner, so they made some decisions and then documented those decisions. We may disagree with those decisions, but disagreement is not the same thing as a bug.

Another complaint that you sometimes hear is that using units complicates simple math. People will say things like, “…all I want to do is increment a number and LabVIEW won’t let me.” OK, let’s look at that problem. You say you want to add 1 to a value, but 1 what? Even assuming a given type of unit, like length, how is LabVIEW supposed to know what you want to do? Do you want to add 1 meter, 1 furlong, or 1 light-year?

Finally, there can be issues caused by shortcuts we take in our everyday thinking and communications that are (to put it kindly) imprecise. For example, there are two units that can be, and often are, referred to as “PSI”. However, one is a force and one is a pressure, so mathematically they are very different beasts. In the end, a good way of summarizing a lot of this discussion is that using explicit units requires us to be more precise and careful in our thinking — which is always a good thing anyway.

How Units Work

The basic idea behind LabVIEW’s implementation of units is that it draws a sharp distinction between how a number is stored and how it is expressed. The first step in making this distinction is determining what kind of unit the developer selected and then expressing the value they entered in the “base unit” for that kind of unit. For example, lengths are converted to meters, temperatures are converted to kelvin, and times are converted to seconds, and it is this converted value that exists on the wire. Likewise, when this wire encounters an indicator, LabVIEW converts the value on the wire to the units specified in the indicator before display. It is this basic functionality that makes simple math on timestamps possible.

pi time

In this snippet, each data source (the constants) converts the number it contains into seconds using its associated time unit. For example, 3 hours is converted into 10,800 seconds (60 x 60 x 3), 14 minutes is converted into 840 seconds (14 x 60), and the last input is left as-is because it is already in seconds. Because all the numbers are now expressed in the same unit, we can simply add them together and then add the result to the timestamp — which is also expressed in seconds. By the way, this is one of my favorite places to use units because it does away with so many magic numbers like 86,400 (the number of seconds in a day).
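Since a LabVIEW block diagram can only be shown here as a picture, here is a minimal Python sketch of the same arithmetic (the names and the value of the third constant are mine, purely for illustration): every source is converted to the base unit, seconds, before any math happens.

```python
# Sketch of what the units system does behind the scenes for time values:
# convert every source to the base unit (seconds), then do ordinary math.
SECONDS_PER = {"h": 3600, "min": 60, "s": 1}

def to_base(value, unit):
    """Convert a value with a time unit into the base unit, seconds."""
    return value * SECONDS_PER[unit]

timestamp = 1_700_000_000                      # placeholder timestamp, already in seconds
offset = to_base(3, "h") + to_base(14, "min") + to_base(15.9, "s")
print(offset)                                  # 10800 + 840 + 15.9 = 11655.9
print(timestamp + offset)                      # no magic numbers like 86,400 required
```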

In terms of selecting the units to use, we have already seen one technique. On a control or constant you can show the units label and then enter or select the unit you want. To select the units, right-click on the units label and select the Build Unit String… option from the popup menu. The resulting dialog allows you to browse the standard units that LabVIEW supports and select or build the desired units.

However, if you have an existing number to which you want to apply a unit, you need to use a Convert Unit node. It looks a lot like the Expression Node but instead of typing simple math expressions into it, you enter unit strings.

unit conversion nodes

This snippet is one that I sometimes use when specifying timeouts. Note that a Convert Unit node is used both to apply and to remove units.

When applying units, the unit string tells LabVIEW how to interpret the unitless number. It does not, as some suppose, specify the specific unit that value will have. For example, you may tell LabVIEW to interpret a particular number as feet, but internally the value will still be stored (like all lengths) in meters. When removing units, the node’s unit string tells LabVIEW how to express the internal value. In the above snippet, time is always carried as a floating point number of seconds, so the node driving the output causes LabVIEW to express the number of seconds as milliseconds.
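As a rough text-based analogy of the two roles a Convert Unit node plays (the function names below are invented for illustration and are not LabVIEW API): applying a unit interprets a bare number and stores it in the base unit, while removing a unit expresses the stored base value in whatever unit you ask for.

```python
# Analogy of the two Convert Unit nodes in the snippet (names are mine).
def apply_unit_ms(bare_number):
    """Interpret a unitless number as milliseconds; store it as base-unit seconds."""
    return bare_number / 1000.0

def express_in_ms(seconds):
    """Express an internally stored seconds value as milliseconds."""
    return seconds * 1000.0

timeout = apply_unit_ms(500)       # the "wire" now carries 0.5 (seconds)
print(express_in_ms(timeout))      # 500.0 -- what a millisecond timeout input would see
```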

Dealing with Temperatures

Another place I like to use units is with temperatures, but let’s be honest: things can seem to get a bit strange here — or at least they do until you understand what is going on. The root of this apparent strangeness lies in the fact that there are absolute temperatures (like the room is 72° Fahrenheit) and there are relative temperatures (like the temperature went up 5° Fahrenheit in the last hour). Moreover, as we think about and manipulate temperatures we unconsciously, and automatically, keep track of the difference between the two. Unfortunately, when using a computer nothing happens unconsciously or automatically. Things happen because we tell them to.

The first thing NI had to decide when implementing this functionality was what base unit to use for temperature, and they picked kelvin because it is metric and 0 K is as cold as anything can get. Being metric is important because it means that a change of 1 K has the same magnitude as a change of 1° C. By the way, did you notice what your brain just did? It went from thinking about absolute temperatures to thinking about relative temperatures — without thinking about it. This is what computers can’t do. Computers need some sort of indication or hint to tell them how to think about things, and in the context of temperature, this hint needs to provide a means for drawing a distinction between the units used for absolute temperatures and the units used for relative temperatures. To see this issue in action, let’s revisit the previous time example but recast it to use temperature units:

bad temperature calculation

Here we are taking a temperature value of 20° C and incrementing it by 5° C. Unfortunately, when you run this example, the result is 298.15° C. “But,” you protest, “I thought that converting everything to the same units allowed you to perform any math you wanted with impunity. Where did this wild result come from? Surely this is a bug!”

Actually, it’s not. The only “problem” is that LabVIEW did exactly what the programmer told it to do, and not what they wanted it to do. LabVIEW added together two absolute temperature values and came up with an answer they didn’t expect. While it’s true that adding absolute temperatures probably doesn’t make any logical sense — how is LabVIEW supposed to know that? The real error is in the way the developer framed the problem, not in LabVIEW or its handling of units. What the developer obviously intended was to add a relative temperature value to the absolute temperature. To do that, you have to change the units of the increment input.

The way that LabVIEW identifies a relative temperature is to change the unit label from degC or degF, to Cdeg or Fdeg. By the way, now that you have the unit definitions correct, you can freely mix and match temperature scales. For example, you can add a 5° F increment to a 23° C room temperature and LabVIEW will happily give you the right answer (25.7778° C).

good temperature calculation
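To make the arithmetic behind both results explicit, here is the same calculation written out in Python. This is only a sketch of the behavior described above, not of NI’s implementation: everything on the wire is kelvin, and the only difference between the wrong answer and the right one is whether the increment is treated as an absolute temperature or a relative one.

```python
# Everything on the "wire" is kelvin; the indicator converts back to Celsius.
def degC_to_K(t):            # absolute temperature, degC -> K
    return t + 273.15

def K_to_degC(t):            # absolute temperature, K -> degC
    return t - 273.15

def Fdeg_to_Cdeg(dt):        # relative temperature, Fdeg -> Cdeg
    return dt * 5.0 / 9.0

# The "bug": adding two absolute temperatures, exactly as asked.
print(K_to_degC(degC_to_K(20) + degC_to_K(5)))       # 298.15

# The intent: adding a relative increment to an absolute temperature.
print(K_to_degC(degC_to_K(20) + 5))                  # 25.0
print(K_to_degC(degC_to_K(23) + Fdeg_to_Cdeg(5)))    # 25.777... as quoted above
```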

More Calculations

Other math operations can sometimes be confusing as well. For example, we have already discussed why you can’t simply increment or decrement numbers with units, but the multiply and divide operations do not have the same restriction. You can multiply or divide a number that has units by one that doesn’t, with no problems. This apparent inconsistency makes sense when you remember that these two operations, used in this way, simply scale the value that is there, regardless of how it is stored internally. Also, NI has decided that many operations only operate on the numeric value. For example, if you square 5 meters using the square function, you get 25 meters. If you want to operate on the unit as well, you have to use the multiply function, which will generate the unit squared.

But moving beyond these issues, there are many operations that using units will simplify, and even prevent you from making some kinds of logical errors. For instance, dividing a distance by a time will automatically generate a velocity unit (meters/sec by default). In the same way dividing a velocity by time will calculate an acceleration. And these conversions work the other way too: multiply a velocity by time and you get a distance (by default, meters).

The only negative here is that some of the unit labels can look a bit odd at first glance — though they are mathematically correct. For example, meters per second is shown as s^-1 m. However, despite the strangeness of this representation, you can alter the unit displayed by changing either of the base units shown. If you wanted feet per second, the unit string would be s^-1 ft, or miles per hour would be h^-1 mi. By the way, if you want to hide these potentially confusing representations from your users, the units string is a writable property so you could create a popup menu on a control that shows prettier labels.

unit selector

Note that the enumeration making the selection from the array has three values: Miles per Hour, Kilometers per Hour and Meters per Second.

Carrying on further, the unit system understands Ohm’s Law. Divide a voltage by a resistance and you get amps. Multiplying ohms and amps returns volts, and the calculations even know that the reciprocal of a frequency is a time. All in all, this is some very cool stuff that can significantly improve your code — and perhaps prevent you from making some silly mistakes. But as with all things new, ease into it slowly, and thoroughly double-check the results when you try something different.

The Big Tease

There seems to be a rumor running around that I don’t like queues. Well, that is not true. They have a couple of features that make them uniquely qualified for certain kinds of operations. My next couple of posts will demonstrate a really good place to use a queue — a data processing engine for manipulating data that will dynamically resize itself on the fly to provide more or less processing bandwidth.

Until Next Time…

Mike…

Finishing the Configuration Management

For the last couple posts we have been looking at how to best utilize object-oriented programming methodologies. In that quest, we have taken as our example the goal of converting the parts of our testbed application that use stored configuration data to a configuration manager based on object classes. In this implementation, the classes represent the various types of potential data repositories.

Maximizing Flexibility by Leveraging Existing Code

When we stopped last time we had created all the basic code infrastructure and all we had to do was construct the dynamic dispatch methods that retrieve the data the application needs. In creating these methods, we have as our guiding principle reducing the amount of code we have to create by reusing as much code as we can. In other words we need to really spend some time thinking about how to structure our code such that it combines maximum reuse with maximum flexibility.

One good way of attaining that ideal is to allow for multiple levels of dynamic dispatch within the same method. Let’s say for the sake of argument that you have a method that will need to be accessible from 10 different subclasses. Furthermore, let’s say that 4 of those subclasses all need to do the same thing, an additional 4 are mostly the same but with a few differences, and the last 2 subclasses require fundamentally different logic to perform the same task. You could implement the logic that is common to the largest group of subclasses (the first 4) in the parent and let the remaining 6 override the parent to define their own solutions. This solution would work, but could potentially result in a lot of duplicated code. It depends on how similar the second group of 4 subclasses is to the first group of 4.

In creating the Initialize New method last week we saw a far better way to optimize the code: For the 4 subclasses that are similar, we could call the parent method in the child (thus taking advantage of that existing code) and then add a little logic to customize the functionality. This solution will work well in many situations, but one area where it will not is in scenarios where the common parts of the code need to pass data to the unique parts.

To address those situations, a very useful solution can be to write the parent method so that the similar-but-unique bits are encapsulated in a subVI that is itself a dynamic dispatch VI. By providing multiple levels of dynamic dispatch, you create a situation that is easy to understand and minimizes duplication of code. In our example, the first 4 subclasses would use the parent implementations of both the dynamic dispatch VI and its subVI. The 4 subclasses that are similar, but a bit different, would use the parent method but override the dynamic dispatch subVI, and the two that are fundamentally different would override the parent method VI itself.
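Class relationships like this are easier to see in code than in prose, so here is a compact Python sketch of the structure (all names are invented for illustration): the parent method carries the common logic and delegates the similar-but-unique portion to a second overridable routine.

```python
# Sketch of two levels of "dynamic dispatch": an overridable method that
# itself calls an overridable helper. Names here are illustrative only.
class ConfigData:
    def read_settings(self):             # the "method VI"
        raw = self._fetch_raw()          # common logic shared by most subclasses
        return self._parse(raw)          # the "dynamic dispatch subVI"

    def _fetch_raw(self):
        return "common fetch"

    def _parse(self, raw):
        return f"default parse of {raw}"

class MostlySimilar(ConfigData):
    # keeps the parent's read_settings(), overrides only the helper
    def _parse(self, raw):
        return f"customized parse of {raw}"

class FundamentallyDifferent(ConfigData):
    # overrides the whole method because its logic has nothing in common
    def read_settings(self):
        return "completely different logic"
```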

Getting Down to Business

So let’s start looking at what we need to do to finish our conversion — while remembering that most of this discussion will be about the reorganization and repurposing of existing code. For the most part, I won’t be going into how the basic functionality works since we discussed that code when it was originally introduced.

The basic internal structure of our four “blueprint” VIs will be largely similar. We will call the VI that creates the object we want, followed by a dynamic dispatch method that does what we need done. However, note that there isn’t (and shouldn’t be) a direct one-to-one correlation between the blueprint VI and the underlying method. We still need to be looking for ways to minimize redundant code. Back when we were setting up the testbed application originally we considered the need for unstructured configuration values — like you would usually store in an INI file. In fact, we implemented that capability and used it to store the default sample period. Consequently, one of the three methods we end up implementing will be used in a couple of different places now, and will be reusable in the future for other unstructured configuration data. So, let’s start with that one.

Get Misc Setups

Creating this method follows the same procedure as we have used before, except that this method has two input parameters and an output string. So the finished parent method has a front panel that looks like this:

misc read front panel

Another difference with this method is that if it is called for a subclass that does not override it, the parent VI does not provide any default functionality. To see what I mean, check out the VI’s block diagram. You see that it does nothing. This structure might seem rather pointless, but in reality it can at times be pretty handy. Say you have a method that is really only needed by one subclass. Without dynamic dispatch you would have to put a case structure around the code so the VI is only called in that one particular circumstance. However, with dynamic dispatch, this selection takes place automatically and without cluttering up the code with case structures.
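In text-based terms, the equivalent of a do-nothing parent method is simply an empty default implementation that only the interested subclass overrides. A hedged Python sketch, with made-up section and key names:

```python
# The parent supplies an empty default; only the subclass that cares overrides it.
class ConfigData:
    def get_misc_setups(self, section, key):
        pass                              # the parent deliberately does nothing

class ConfigDataText(ConfigData):
    def get_misc_setups(self, section, key):
        # Stand-in for the INI-reading code; only this subclass needs the method.
        return f"value of [{section}]{key} from the INI file"

print(ConfigData().get_misc_setups("Startup", "Processes"))       # None
print(ConfigDataText().get_misc_setups("Startup", "Processes"))   # the override runs
```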

With the parent VI (Read Misc Settings.vi) created and saved, we can now build the two subclass overrides. Starting with the Text version, to read the data from the INI file, we create an override VI in Config Data_Text.lvclass containing code that uses the built-in configuration VIs to fetch the data:

read misc - ini

But wait! This won’t work. You notice that the path to the INI file comes from the class data — which is fine except that we forgot to initialize it. I wanted to highlight this point because it is a very easy, and common, mistake to make. Initializing the DVR is not the same as initializing the data in the DVR. All we have to do to fix this memory lapse is modify the class’ version of the Initialize New method, like so:

finish initializing the DVR - text

The subVI with the green banner builds the path to the application’s INI file programmatically. By the way, in case you’re wondering about whether the same issue applies to the other override to this method (the one for databases), the answer is “Yes”. However the solution there is a bit more complex, so we’ll deal with it in a moment.

The other thing to notice about the setup reading method’s code is that if the read doesn’t find the desired value in the INI file, it creates it and gives it a default value. Adding this extra bit is often helpful for recovering from situations where a user has gotten in and mucked around with the INI file and deleted something they should not have. In any case, to see this code in action, we need to finish up the code in the Get Default Sample Period.vi “blueprint” VI:

get default sample period

You can now run this VI and get a valid result from the INI file. Of course, if you set the configuration data selector value in the INI file to Jet you will get a zero back. That result is because we haven’t provided an override for that subclass yet. Note that you do not get an error, because not overriding a parent method is a perfectly valid thing to do.
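The read-and-create-a-default behavior described above translates directly into text-based code as well. Here is a hedged Python sketch using the standard configparser module; the file, section and key names are illustrative, not the project’s actual ones.

```python
import configparser

def read_with_default(ini_path, section, key, default):
    """Read a value from an INI file; if it is missing, create it with a default."""
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)                               # a missing file is tolerated
    if not cfg.has_section(section):
        cfg.add_section(section)
    if not cfg.has_option(section, key):
        cfg.set(section, key, str(default))          # recover from a deleted key
        with open(ini_path, "w") as f:
            cfg.write(f)
    return cfg.get(section, key)

sample_period = read_with_default("testbed.ini", "Data Management",
                                  "Default Sample Period", 1000)
print(sample_period)
```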

Updating the Database Initialization

But we need the database data too, so let’s backtrack to the Initialize New VI in Config Data_DB.lvclass. The class data for this subclass is a string that the code will use in one way or another to connect to the database. However, there are three possible sources for this string:

  • Most ADO interfaces use a rather complex and specialized connection string that can identify logical names, network paths, security parameters and a lot more.
  • Although Jet does use ADO drivers, its connection string is much simpler. In fact, with the exception of the file path, it can be largely treated as a big string constant.
  • SQLite (which we aren’t going to support right now, but which exists in the class hierarchy) doesn’t use any ADO parameters. Keeping with its minimalist approach, all it needs is a file path.

So what are we going to do? Well, if you said, “This sounds like a job for dynamic dispatch,” you’re right! We need to create a method that will derive the correct string depending on which specific subclass is present. So let’s create a virtual folder and a subdirectory named _protected in Config Data_DB.lvclass with Protected access scope. Inside the virtual folder we will create a new VI using the dynamic dispatch template called Get Connection String.vi, and save it in the subdirectory. In addition to the IO the template provides, we need to add a string output:

get connection string

After saving this new parent class method, we can fill in the overrides. However, that code is pretty trivial, mostly scavenged from the original code, and has very little to do with OOP. Consequently, I won’t take time here to highlight it, but feel free to check it out for yourself. With this work completed, we can finish the initialization code.

finish initializing the DVR - jet
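As a text-based analogy of what those overrides amount to, each subclass simply knows how to build its own connection string. The strings below are typical examples of the three cases, not the exact ones used in the project.

```python
# Each subclass derives its own connection string (illustrative values only).
class ConfigDataDB:
    def get_connection_string(self):
        raise NotImplementedError            # the parent just defines the interface

class ConfigDataDBADO(ConfigDataDB):
    def __init__(self, dsn):
        self.dsn = dsn
    def get_connection_string(self):         # a typical, fuller ADO-style string
        return f"Provider=SQLOLEDB;Data Source={self.dsn};Integrated Security=SSPI;"

class ConfigDataDBADOJet(ConfigDataDBADO):
    def __init__(self, db_path):
        self.db_path = db_path
    def get_connection_string(self):         # mostly a big constant plus a file path
        return f"Provider=Microsoft.Jet.OLEDB.4.0;Data Source={self.db_path};"

class ConfigDataDBSQLite(ConfigDataDB):
    def __init__(self, db_path):
        self.db_path = db_path
    def get_connection_string(self):         # SQLite needs nothing but the path
        return self.db_path
```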

Accessing the Database

Although we will be using a Jet database, note that the override for the Read Misc Settings method will reside in the Config Data_DB_ADO.lvclass subclass, not Config Data_DB_ADO_Jet.lvclass. The reason for this placement is that with the exception of the connection string (which we handle during initialization) the query logic is the same for Jet or most other ADO databases. By the way, what would we do if we ever did find a DBMS that needed something different? That’s right, we would just create a subclass for it and override the method in that new subclass.

In any case, most of the code we need now was lifted wholesale from the old configuration library:

read misc - jet

And with that we are done with the first of our blueprint VIs. However, this is also the procedure we will follow for the VIs that read the error handling parameters and the startup processes, and that load the machine configurations for each state machine clone. Again, I won’t step through the creation of this code because, from the standpoint of this post’s main topic (object-oriented code development), there is nothing in these that is any different from the one we did go through. As a friend of mine used to say, “From here on, it’s just a matter of turnin’ the crank.”

Testing

To make the results of these changes easier to verify, I have made slight differences between the configuration in the INI file, and the configuration in the database. Notice that if you start the application with the INI file set to Text, the application launches with only two TC state machines running and the acquisition sample rate defaults to 500 msec. However, a setting of Jet produces the functionality we have seen before (3 state machines and a 1000 msec default sample rate).

Notice also that you have to make the INI file setting changes before starting the application. The logic could certainly have been written to make the settings reconfigurable on the fly, but it would have added another level of complication, so I will leave that “…as an exercise for the reader…”.

Before tagging this release of the code in SVN, I also went through the “recycled” code and made sure that the VIs were all saved in their proper locations and all the redundant code had been removed. A project window feature that made the latter task much easier was the ability to right-click on a folder and have LabVIEW list all the files with no callers. One thing to be careful about, however, is deleting “unused” files that are in classes. The logic behind this feature can’t identify VIs that are being dynamically linked — which pretty much describes dynamic dispatch VIs.

Finally, before closing out this post, I want to remind you of something important. I make no claim that this code is the best implementation for all possible situations. Although a lot of it is based on software that I developed for deliverable systems, the point of this blog is to provide you with examples, demonstrations, and models that you can then customize and mold to fit your customers’ specific needs. In music, there is the idea of “variations on a theme”. In fact, in a few cases the variations have become more famous than the original. So take the themes I offer here and feel free to deconstruct, rearrange and reassemble them into something that is new and exciting.

Testbed application Release 15
Toolbox Release 7

The Big Tease

What if I told you that LabVIEW incorporates a feature that can improve the quality of your code and reduce errors by helping to validate your math? Hey, this is a no-brainer! Everybody can use help validating their code. But, what would you say if I told you that most people never use this feature? Crazy…

Until Next Time…

Mike…

Objectifying the Testbed

Object-oriented programming as a technique promises a host of benefits, but it suffers from the impression that it is in some way an “advanced” topic. In contrast, I feel that OOP is just a logical extension of the concepts that LabVIEW developers use every day. The basic problem has been with the way it is taught, and the various object-oriented frameworks that are overly complex and difficult to learn haven’t helped matters. These bad “actors” often only serve to hide the inherent elegance of the OOP paradigm and scare off users who could benefit from it.

To help clear away some of the extraneous mystique, I have presented a brief introduction to the topic that provides a foundation sufficient to let us get into OOP by implementing a module for managing program configuration data that provides the calling application with a common interface regardless of how (or where) the data is stored.

Filling a Niche

Most of the work we will be doing will eventually replace the Configuration Management library. Now while this might sound like a major shift, it really is not because (like I repeatedly tell you) the whole point of good design is to make changes and upgrades like this possible. So let’s look at what this upgrade will need to do.

Normally a large part of any upgrade project is defining the requirements, but thanks to the design work that was put in originally, we already have a pretty good handle on what the new class structure has to do. In terms of surface functionality, we know we have to be able to handle all the same information as before — with, of course, the ability to add more when we want or need that ability.

Designing the Structure

The trick is going to be sorting out what new functionality will be needed under the covers. At the most basic level we need to be able to use either text files or databases to actually store the configuration data, so there we have two subclasses. But we need to consider whether each of those options needs to be broken down further.

On the “text file” side, the data might be coming from a standard INI file, or the code might be using a custom text file format. Custom text configuration files are very common when some of the configuration data is tabular, since it is a pain to store tabular data in an INI file. However, regardless of the format of the contents, the basic mechanism for reading and writing text files remains the same, so it probably won’t be valuable to have any subclasses under “text files”.

On the “database” side of things, however, the situation is very different. First of all, in terms of connectivity, you can access most databases through the standard ADO (ActiveX Data Objects, also sometimes called OLE DB) interface. However, “most” is not the same as “all”, and one common exception is SQLite. A popular, lightweight data management engine, SQLite can run on a variety of platforms — including some real-time systems. To keep its footprint small, SQLite utilizes a small custom DLL rather than a large, but standardized, interface. So we need to make provisions for other types of connectivity by creating (for now) two subclasses below “database”: “ado” and “sqlite” — though we won’t be implementing the SQLite functionality right now.

Finally, what about “ado”? Can it be broken down further? Maybe. One of the advantages of ADO is that, for the most part, it does a pretty good job of hiding the differences between one database management system (or DBMS) and the next, but there are some variations it can’t paper over. These differences often relate to the version, or dialect, of SQL the DBMS speaks. Sometimes, however, differences arise because some DBMSs fundamentally don’t operate the same way. For example, while most DBMSs go to extraordinary lengths to hide exactly where and how the data is actually stored, Jet (the DBMS built into Windows) stores the data in a file you explicitly identify. Hence, while the connection to another DBMS might be defined in terms of network paths and logical names, with Jet you are connecting to a particular file.

To provide for these sorts of functional nuances, let’s create a subclass below “ado” for “jet” — understanding there could be others in the future.

Configuration Data Classes

This is what the hierarchy looks like so far, all drawn out.
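Expressed as skeleton classes in a text language (Python here, purely as shorthand for the relationships), the hierarchy amounts to this:

```python
# Skeleton of the class hierarchy; the names map onto the .lvclass files.
class ConfigData: pass                             # Config Data (top-level parent)

class ConfigDataText(ConfigData): pass             # Config Data_Text (INI or custom text)

class ConfigDataDB(ConfigData): pass               # Config Data_DB (databases)
class ConfigDataDBADO(ConfigDataDB): pass          # Config Data_DB_ADO (anything via ADO)
class ConfigDataDBADOJet(ConfigDataDBADO): pass    # Config Data_DB_ADO_Jet (Jet specifics)
class ConfigDataDBSQLite(ConfigDataDB): pass       # placeholder; not implemented yet
```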

Doing the Rough Framing

When you are building a house the first tradesmen to show up onsite are the carpenters to do the so-called “rough framing”. This process creates the skeletal form of the final house that is covered with rough exterior plywood. The idea is that later workers will fill in the details and fine tune the construction. And metaphorically speaking, that’s what we have to do now for our configuration data class.

For a class hierarchy, that framing consists of the directory structure and the class files themselves. Using the techniques I gave last time, I first create mirroring directory structures inside the project directory and the project itself…

Configuration Data Classes in Project

Note that I have also created some virtual folders inside the Config Data class, which represent actual subdirectories.

  • Interface
    The VIs in this folder will have public access scope. In fact they will be the only VIs in the library that are so scoped. Because these VIs are the only ones that outside callers will be able to call, they alone form the interface between the class hierarchy and the rest of the code.
  • _private
    As the folder title implies, the files that go in here will have private access scope. This assignment means that they will only be accessible from other VIs in the top-level class.
  • _protected
    This is another folder that specifies access scope for its contents. In the case of protected scope, the VIs in this folder will only be accessible from the top-level class, or any of its child classes.

Creating the Interface

With the functional scaffolding in place, we can start filling in the blanks. And the first thing we need to do is create what I referred to as the “blueprint” in the previous post — the VIs that the rest of the application will call. After going through the public VIs in the old library we see that there are really only 4 VIs that the rest of the application uses directly:

  1. Get Default Sample Period.vi
  2. Get Error Handling Parameters.vi
  3. Get Processes to Launch.vi
  4. Load Machine Configuration.vi

Many of the others will still be used, but their presence will be hidden in subclasses. As I create these (for now) empty VIs, I make sure they have the same front panels and connector panes as the ones they will be replacing.

Adding Infrastructure

Getting back to our housebuilding analogy: After the framework is completed and the outside skin is on, the next job is to start installing some of the needed infrastructure, because without electric, water, sewer and perhaps gas connections, our new home is not much better than the cave dwellings that our prehistoric ancestors inhabited — and in some ways is far worse.

What we need to add to our nascent configuration management subsystem is some data handling, but not for our data. The data I’m talking about is the private internal data that the subsystem needs to maintain in order to do its job.

Thinking about what our various bits of code need to do, we see first that there is certain data that will commonly be needed regardless of how you actually end up getting the data. What I’m thinking about here is the name of the operator and a password. Now, some subclasses might need both, while others might need only one or the other, and that’s fine. The important point is that if all subclasses could potentially need at least some of this data, it has to be data that is associated with the top-level class. However, this requirement for global availability causes a problem.

Remember how, when we were defining terms in the previous post, we said that in the LabVIEW world an object is a wire? LabVIEW wires, and the data contained in them, are by definition not global. If you want data from a wire you need to be connected to it. So how can we create wires that are separate, but which still share at least some of the same data? Well, one very good way of doing it would be to create one central data store that all the wires can access, and as it turns out LabVIEW incorporates a feature that is very efficient and so is perfect for such an implementation. I’m talking about the Data Value Reference, or DVR.

Our approach will be simple. We first create a DVR that is defined to hold the data we need and make a reference to that DVR the class data for our Config Data class. Then to access that data, we create a family of data access VIs to insert data into, or read data from, the DVR.
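Conceptually, the DVR plus its accessor VIs behave like a small shared data store that every object holds a reference to. A minimal Python analogy, with invented names:

```python
# A DVR is essentially a reference to shared data; the accessor VIs are just
# thin read/write wrappers around that reference. (Illustrative names only.)
class ConfigMgrData:
    def __init__(self):
        self.user_name = ""
        self.password = ""

class ConfigData:
    def __init__(self, shared):
        self._shared = shared            # every object carries the same reference

    def read_user_id(self):              # the "data access" read VI
        return self._shared.user_name

    def write_user_id(self, name):       # the "data access" write VI
        self._shared.user_name = name

store = ConfigMgrData()
a, b = ConfigData(store), ConfigData(store)
a.write_user_id("operator1")
print(b.read_user_id())                  # "operator1" -- separate wires, shared data
```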

The first step in this process is to create another virtual folder in the top-level class named _dvr with private access scope. Next, I create in that folder a typedef control named Config Mgr Data.ctl that consists of a cluster containing two strings, one for user name and one for password.

Config Mgr Data

When saving this control I create a subdirectory (called _dvr) to hold it. Likewise, I also create a VI in the same directory (and virtual folder) called Config Mgr DVR.vi that contains this code:

Config Mgr DVR

Two comments about this code. First, it has the basic form of an FGV where the variable value is the DVR reference. Because the case that generates the reference is only run once, this VI will always return a reference to the same DVR no matter how many times it is run. Second, the data for the DVR is the typedef we created. This point is critical. As with UDEs, if you define a DVR reference using a typedef, you can later change the typedef and it won’t break the reference.
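The FGV pattern described here (create the reference on the first call, return the same one on every call after that) looks like this in a Python sketch that continues the one above:

```python
# Functional-global style: the shared store is created exactly once and the
# same reference is handed back on every subsequent call.
_config_mgr_dvr = None

def config_mgr_dvr():
    global _config_mgr_dvr
    if _config_mgr_dvr is None:                 # the "first call" case
        _config_mgr_dvr = ConfigMgrData()       # ConfigMgrData is from the sketch above
    return _config_mgr_dvr

assert config_mgr_dvr() is config_mgr_dvr()     # every caller sees the same "DVR"
```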

To make this DVR available and inheritable through the class, I copy and paste the reference indicator into the class data cluster, like so.

Config Mgr DVR in Class Data

Then to provide access to the DVR contents, I create protected scope VIs that I store in a virtual folder and subdirectory both named _data access. Here is what the read VI for the user ID parameter looks like…

User ID Read

…and the write VI…

User ID Write

I next realize that the two main subclasses also have some data that will need to be held in common for their subclasses. So I repeat the process I just used to store a file path for the file subclass and the connection string that the db subclass needs. Here’s what the project looks like now.

project with basic infrastructure

Finally, before moving on we are going to need a way to initialize all the logic we have created, and to do that we will take our first foray into the exciting — though sometimes confusing — world of creating dynamic dispatch VIs. The goal is to create a method called Initialize New that causes the DVR in a new instance of a class to automatically initialize itself.

I start by right-clicking on the _protected virtual folder in Config Data.lvclass and from the New submenu selecting the option to create a new VI using the dynamic dispatch template. I leave the front panel of the resulting VI the way it is, but I change the connector pane, edit the icon and add this code to the block diagram.

Initialize top-level dvr

If the DVR in the class data is not valid, I call the DVR VI to get a valid reference and then use that reference to populate the class data. If the DVR reference is valid, the false case (not shown) does nothing. When I save this VI, I put it in a subdirectory named _protected.

To create the subclass versions of this method, I right-click on the subclass name and from the New submenu select the item to create a new VI for override — which is the technical term for what we are doing. We are overriding the parent functionality with different functionality in the child.

However, before we get a new VI, LabVIEW opens a dialog to ask us which parent method we want to override. After double-clicking on Initialize New we get our new VI, also called Initialize New. After saving this new VI (the default location LabVIEW picks is perfect), I modify the code to look like this:

initialize new - child

Most of this logic should be familiar because it is the same as we did for the parent. If the child DVR reference is not valid, we initialize it. Otherwise, we do nothing. But what is that funny looking VI in front of the initialization logic?

Object-oriented methodology recognizes that there are going to be times when a method in a child class will need to do what the parent method does, but perhaps a bit more or maybe do it a bit differently. One solution to this situation would be to simply duplicate all the parent code in the child. However that approach would be wasteful. What object-oriented logic does instead is it allows a child to directly call its parent’s version of the method. So here, the code first calls the parent’s version of the initialization VI and then executes the logic to initialize itself. In the end, both the parent and the child will get initialized.
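In text-based object-oriented languages the same idea is written as an explicit call to the parent’s implementation. Here is a hedged Python sketch of the Initialize New logic, with a simple flag standing in for the “is the DVR valid?” test:

```python
class ConfigData:
    def initialize_new(self):
        if not getattr(self, "_parent_ready", False):   # stand-in for "DVR not valid"
            self._parent_ready = True                   # initialize the parent's data

class ConfigDataText(ConfigData):
    def initialize_new(self):
        super().initialize_new()                        # the "Call Parent Method" step
        if not getattr(self, "_child_ready", False):
            self._child_ready = True                    # then initialize the child's data
```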

Creating Objects

The time is now upon us to start tying this infrastructure together into an organized system. The first thing to sort out is how to create an object of a given type. One way is very similar to what you would do with a conventional datatype. If you wanted to create, say an I32 value, you would drop down a constant and start using it. In the same way, you can also drop down a class constant and wire to it, thus creating an object. The problem with this approach is that when LabVIEW instantiates a class it also loads into memory all the VIs associated with the class. What you can end up with is a situation where all the VIs for all the classes are loaded, even if you will never use some of the classes. The way to get around that problem is to load classes dynamically as you need them. This is the code I use to perform that operation.

create object dynamically

You will notice that the VI has no inputs, save the requisite error cluster. This is because the basic piece of information that specifies the specific class to be created will be loaded from the application INI file. But why the INI file? Isn’t the point of this exercise to get rid of configuration data in that file? Well yes, but there is a bit of a paradox at work here. Simply put, an application can’t look in a database for its setup data until it knows that it is supposed to be using a database in the first place. That basic piece of information has to be stored somewhere that will always be there, and on the Windows platform you have exactly two choices: the INI file and the Windows registry. Of the two, the INI file is much safer — you at least don’t have to worry about a user going wild and trashing their whole computer.

We are initially only going to support two options for managing configuration data, so we only need two settings (Text and Jet). The VI that communicates with the INI file reads one of these values from the Configuration Data key in the Data Management section and returns two values based on what it finds there: A relative path to the location of a class file, and the name of the file. Thanks to the naming convention we use, these two values are very closely related.

The remaining code loads the target class into memory and initializes it. In addition, the resulting object is buffered as in an FGV. For the details of how this process works, see the comments in the code. The final thing to notice is that the output from this VI is an indicator of the Config Data class datatype.
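Reduced to a text-based sketch, the dynamic creation routine is a small factory: read a name from the INI file, map it to a class, create and initialize the object once, and hand back the same object thereafter. The section and key names mirror the ones mentioned above; everything else (the file name, the stand-in classes) is illustrative.

```python
import configparser

class ConfigDataText:                        # minimal stand-ins for the real classes
    def initialize_new(self): pass
class ConfigDataDBADOJet:
    def initialize_new(self): pass

_CLASS_MAP = {"Text": ConfigDataText, "Jet": ConfigDataDBADOJet}
_cached_object = None

def create_config_object(ini_path="application.ini"):
    """Create (once) the configuration object named in the INI file; reuse it after."""
    global _cached_object
    if _cached_object is None:
        cfg = configparser.ConfigParser()
        cfg.read(ini_path)                                    # a missing file is tolerated
        choice = cfg.get("Data Management", "Configuration Data", fallback="Text")
        obj = _CLASS_MAP[choice]()                            # the "load the class" step
        obj.initialize_new()
        _cached_object = obj                                  # buffered, FGV-style
    return _cached_object
```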

Building Out the Remaining Methods

All that’s left now is to implement the code that does what the testbed needs done, however this post is getting long and I have already passed on a lot of information for you to absorb — so we’ll leave that discussion (and the testing!) for next week.

Until Next Time…

Mike…

Objectifying LabVIEW

I suppose a good place to start this post is with an admission that, in a sense, it is flying a false flag. One way that you could reasonably interpret the title is that in this post I am going to be showing you how to start using objects in LabVIEW. That interpretation is not correct, and the troublesome word is “start”. The fact of the matter is that you can’t use LabVIEW without interacting with objects, and many parts of it (think: VI Server) are overtly object-oriented — even without an obvious class structure. The language is built on an object-oriented foundation and so, in a very real way, has been object-oriented since Version 1.

What I am going to be showing you is how to simplify your work by building your own classes. As I stated in the teaser last time, the starting point for this discussion is the recommendation given in NI’s object-oriented training class that you should make your first attempts at explicit object-oriented technique small, easy-to-manage subsystems — or put more simply, we need to start with baby steps.

Object-oriented baby steps

OK, so this is the point in the presentation where most presenters haul out some standard theory and moth-eaten descriptions of objects and classes — often lifted wholesale from a book on C++ programming. The problem with this approach is, of course, that we aren’t C++ programmers, and the amount of useful information we can draw from an implementation of object-oriented programming that is so fundamentally flawed is minimal at best. The approach I intend to take instead focuses on key aspects of the technique that are of immediate, practical importance to someone who is working in LabVIEW and wants to take advantage of explicitly implementing object-oriented class structures.

A Quick Glossary

The first thing we need is a vocabulary that will let us talk about the topic at hand.

OOP Clouds

Now be forewarned that some of these definitions may not exactly match what you may read elsewhere, but they are correct for the LabVIEW development environment.

  • Class — An abstract datatype.
    If you think that sounds a lot like the definition of a cluster, you’re right! Due to the way LabVIEW implements object orientation, a class is essentially a very fancy cluster. In fact, when you create a class the first item that LabVIEW inserts into it is a typedef consisting of an empty cluster. Although you don’t have to put anything into the cluster, it provides a place to put data that is private to that class.
  • Object — An instance of a class.
    As with a normal cluster, every instance of a class has its own memory space. Consequently, a class wire is in most ways the same as any other wire in LabVIEW. We are still working in a dataflow environment.
  • Property — A piece of data that tells you something about the object.
    This is why there is a cluster at the heart of the class. You want to put in that cluster information that will describe the object in a way that is meaningful to your application. Because each instance of the class is a separate wire that has its own memory space, the data contained in the cluster describes that particular object.
  • Method — A VI that is associated with a particular class and which does something useful.
    So what do I mean by, “…something useful…”? Well, that all depends on the class’ purpose. A class that is responsible for creating a visual interface might have a method that causes an object to draw itself, while a class that manages the interface to data storage would likely have a method to store or retrieve application data.

From this simple list of words we can begin to see the general shape of the arena in which we will be playing. To recap: A class is a kind of wire. An object is a particular wire. A property is data carried in the wire that describes it in a useful way, and methods use the object data to do something you need done.

Dynamic Dispatch

Now that we have a basic vocabulary in place that lets us talk about this stuff, there are a couple of concepts that we need to discuss. I want to start this exploration with the mechanism that LabVIEW uses to call methods. Referred to as dynamic dispatch, this feature is often a source of confusion to developers getting started with object-oriented programming. A good way to come to grips with dynamic dispatch is to compare and contrast it with a feature of LabVIEW with which you may already be familiar: polymorphism.

Polymorphism (from the perspective of the developer using a polymorphic subVI) is the ability of a single function to adapt to whatever datatype is wired to its inputs. For example, the low-level Add node in LabVIEW is polymorphic. Consequently, it can add scalar numerics of all types, as well as arrays of numerics of varied dimensions, clusters of numerics and even arrays of clusters of numerics.

Of course, from the perspective of the developer creating a polymorphic VI the view is much different. This flexibility doesn’t happen on its own. Rather, you have to create all the individual instance VIs that handle the various datatypes. For example, I often want to know if a value at a specific point in the code has changed from the last time this bit of code executed. So I created a polymorphic VI that performs this function. To create this subVI, I had to write variations of the same basic logic for about a half-dozen or so basic datatypes, as well as a version that used the variant datatype to catch everything else.

Dynamic dispatch (which is actually a form of polymorphism) works much the same way, but with a couple significant differences.

  • When the decision is made as to which instance VI is to be executed
    With conventional polymorphism, the decision of which instance VI to call happens as you wire in the subVI. In the case of my polymorphic subVI, as soon as I wire a U32 to the input, LabVIEW automatically selects the U32 version of the code. However, with dynamic dispatch, that decision gets put off until runtime with LabVIEW making the decision based on the datatype present on the wire as the subVI is called. Of course for that to work, you need a different kind of wire. Which brings us to the other point…

  • The criteria for choosing between VIs
    The wires that conventional polymorphism uses to select a VI all have one thing in common — they are all static datatypes. By that I mean that a wire is a U32, or a string or whatever and it can’t change on the fly. By contrast, with dynamic dispatch, the basis for selection is a wire that is an instance of a class, and the datatype of an object can be dynamic. However this variability is not infinite. A given class wire can’t hold just any object because class structure is also hierarchical.

Say you have a class named Geometric Shapes to Draw. You can define other classes (called subclasses) like Circle or Square that are interpreted by LabVIEW as being more specific instances of Geometric Shapes to Draw objects. Due to this hierarchical relationship, a given wire can be typed as a Geometric Shapes to Draw but at runtime really be carrying a Circle or Square. As a result, a dynamic dispatch VI can call different instance VIs based on the datatype at runtime.
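Written out in Python purely as an illustration, the same hierarchy makes the runtime selection easy to see:

```python
class GeometricShapeToDraw:
    def draw(self):                      # the dynamic dispatch method
        raise NotImplementedError

class Circle(GeometricShapeToDraw):
    def draw(self):
        return "drawing a circle"

class Square(GeometricShapeToDraw):
    def draw(self):
        return "drawing a square"

def render(shape: GeometricShapeToDraw):
    # The "wire" is typed as the parent class; which draw() actually runs is
    # decided at runtime by what the wire is really carrying.
    print(shape.draw())

render(Circle())    # drawing a circle
render(Square())    # drawing a square
```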

However, one big thing that conventional polymorphism does have in common with dynamic dispatch is that the power doesn’t come for free. You still have to write the method VIs for dynamic dispatch to call.

Inheritance

Remember a moment ago I referred to class datatypes as being hierarchical? The fancy computer science concept governing the use of hierarchical class structures is called inheritance. The point of this label is to drive home the idea that not only are subclasses logically related to the classes above them in the hierarchy, but these so-called child classes also have access to the properties and methods contained in their parent classes. In other words they can “inherit” or use data and capabilities that belong to their parents.

Handled properly, inheritance can significantly reduce the amount of code that you have to write. Handled poorly, inheritance can turn an otherwise promising project into a veritable train wreck. Which brings up our last point…

Proper Organization

Although organization isn’t really a feature of object-oriented programming, it is nevertheless critical. The simple fact of the matter is that while a disorganized, undisciplined developer might be able to get by when working in conventional LabVIEW, introducing the explicit use of classes can result in utter chaos. The real object-oriented failures that I have seen over the years all shared organization that was either lacking or inconsistent.

So what sort of organizational things am I talking about? Well it’s a lot of the same stuff that we have talked about before. For a more general discussion of the topic you can check out a post that I wrote very early on titled, Conventional Wisdom. What I want to do right now is highlight some of the points that are particularly important for object-oriented work.

The two main conventions (directory structure and file naming) go together because the point of one is to mirror the other. But rather than simply list some rules, I’ll demonstrate how this works. To start, I will create a directory that is named for the class hierarchy that I will build inside it. So if the point of this class hierarchy is, for example, to update my program’s user interface, I would call the directory something obvious like GUI Update. Inside this directory I would then create the top-level class with the file name GUI Update.lvclass. At this time I will also create a couple subdirectories (_subVIs and _typedefs) that I know I will undoubtedly be needing. Finally, I have learned over the years that being able to tightly control access to VIs is very important, so I will also create at this time a project library named GUI Update.lvlib and put into it the top-level class and a virtual folder called _subclasses with its access scope set to Private.

So the parent class is set up, but what about the subclasses? I simply repeat the pattern. Let’s say the GUI Update class has subclasses for three types of controls that it will need to update: Boolean, Digital and Cluster. I create subdirectories in the parent directory that are named for the subclass that will go into each, and hierarchically name the three subclasses GUI Update_Boolean.lvclass, GUI Update_Digital.lvclass, and GUI Update_Cluster.lvclass. I am also careful to remember to add the subclass files to the _subclasses virtual folder in the library, edit their icon overlays, and set their inheritance correctly — which is to say, identify their parents. Note that while the hierarchical naming structure doesn’t automatically establish correct inheritance, this convention does make it easier to visualize class relationships in the project file.

And so I go building each layer in my class hierarchy. With each new subclass I continue the same pattern so if I eventually want to find, say a subVI associated with the class GUI Update_Digital_Unsigned Word.lvclass, I know I will find it in the directory ../GUI Update/Digital/Unsigned Word/_subVIs.

Having a pattern to which you stick relentlessly — even one as simple as this one — will save you immeasurable amounts of time.

Creating the Blueprint

The next thing I do when creating a class hierarchy (but the last thing I want to talk about right now) is decide how the rest of the application will interface with my new GUI Update class. This is where the access scope we have been so careful to create comes into play. In the top-level class I always create a group of VIs that have their access scope set to public. These interface VIs form the totality of the external interface to the class hierarchy and so include the functions that define what the application as a whole needs GUI Update to do for it. The logical implications of this interface layer are why I sometimes call this step in the process, “Creating the Blueprint”.

In addition to providing a very clean interface, another advantage of having this “blueprint” is that if you ever need to expand your stable of subclasses, these interface VIs will serve as a list of functions you need to support in the new subclass — or at least a list of functions that you should consider implementing in the new subclass. To see what I mean, consider that the scenario we have been discussing is actually drawn from an application I created once. The list of public interface VIs was really very short: There was a method that read a value from a remote device and wrote it to the GUI object, one that looked for control value changes to write them to the remote device, and one that allowed the calling application to set control specific properties.

Of these, all GUI objects had to implement the first one because even the controls needed to be updated once a second. The reason for this constraint was that the remote device could also be reconfigured from a local interface and the LabVIEW application needed to keep itself up to date. However, the second interface method was only applicable to controls. Finally, the third interface method was implemented very rarely for the few subclasses that needed it.
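As a rough Python sketch (the method names are paraphrased from the description above, not taken from the actual application), the blueprint is just the public face of the hierarchy, with sensible defaults for the methods that only some subclasses care about:

```python
class GUIUpdate:
    # The public "blueprint": the only methods the rest of the application calls.
    def update_from_device(self, value):      # every subclass implements this one
        raise NotImplementedError

    def check_for_user_change(self):          # meaningful only for controls
        return None                           # harmless default for indicators

    def set_control_property(self, name, value):
        pass                                  # implemented by the few that need it
```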

What’s up next?

We have just about run out of space for this installment, but you may have noticed that something is missing from this post: Any actual LabVIEW code. Next time we will correct that sad situation by considering how to apply these principles to the creation of a class hierarchy that provides a common mechanism for storing and retrieving program data and setup parameters that works the same (from the application’s perspective at least) regardless of whether the program is interfacing with a database or text files.

Until Next Time…

Mike…