When calling DLLs we are often concerned mainly with making the calls correctly, and not so much with what happens after we are done using them. However, when developing professional applications we have to be concerned about the entire process. As an example of the kind of thing that can happen, I once built an application that called several proprietary DLLs. Moreover, these DLLs internally made calls to other DLLs that were occasionally updated. Due to this update requirement, my application also needed to be able to run the MSIs that uninstall the old versions of these internal DLLs and install the new versions.
For a while everything worked as it should, but gradually a problem began to appear. If the user updated the internal DLLs immediately after launching the application, all was well. However, if the user did anything that accessed the DLLs first, the MSI operations generated a warning saying that the DLL being changed was in use by my application and that a reboot might be necessary after the update – though in the end a reboot was never actually required. And to repeat an important point: my code never directly accessed the DLLs being changed, so I was left wondering why the MSI installer flagged my application as the problem.
My first response was to go back and double-check all the DLL calls I was making to ensure that I was closing and disposing of all the resources I was using, but the problem persisted. In the end, I discovered that the DLLs my code called weren’t shutting down properly. Consequently, it appeared to Windows that those DLLs – and the DLLs they in turn called – were still in use. As a result, my application got the credit for holding all of them open.
As I researched this problem, I learned that this sort of issue is not at all uncommon in the C/C++ programming world. In fact, since DLL developers typically blew off fixing the root cause, there was even a well-defined workaround for the problem. All I had to do was turn all the routines that accessed the DLLs into standalone executables that I would call from my main program. With this alteration to the logic the problem goes away because when the little executable closes, Windows forcibly cleans up the DLL mess.
Having done this sort of thing before, the basic shape of the process was easy to grasp: create a standalone executable and call it using the built-in LabVIEW System Exec.vi function. The unique part was that all the instances where I had used this technique in the past had one thing in common: they had all been higher-level processes that ran largely on their own with minimal connection to the main application. By contrast, these little executables needed to operate much like a subVI.
Thinking About Interfaces
One of the main differences between a “high-level” executable and a “subVI” executable lies in how we handle the passing of data back and forth between the main application and the executables. When the executable in question is a higher-level process it typically doesn’t require a lot of interaction with the caller, and the interaction that is required is often built on top of a standardized infrastructure that provides well-defined mechanisms for such tasks as the passing of runtime data or the storing of results.
The situation, however, is very different if the executable is a small routine that is launched, does something simple, and then quits. In that situation, it’s hard to justify the overhead of such common techniques as a database interface or a TCP communications channel. Moreover, when we compare the data requirements of a large standalone process to a small “subVI” executable, we see that the inherent nature of the data differs between the two types of executables.
Standalone executable processes primarily need metadata: high-level information that in theory can change, but as a practical matter rarely does. This type of data can be effectively stored in an INI file. I’m thinking here about such things as the path to a database, or the port that a standard external process will be using for communications. The expectation is that with these bits of general data, the process can go and fetch for itself the detailed information that it needs in order to operate.
By contrast, because the “subVI” executable is operating logically as a subroutine, its only interface is the data it receives from the caller. Likewise, the result of its work needs to be returned to that same caller as directly and as easily as possible because you have no idea how an eventual developer will choose to use this “subroutine”.
Getting Data In
Anyone who knows me knows that I am a great proponent of simplifying interfaces through the process of Information Hiding, and while that concept still holds true here, the fact that the subroutine lives in a separate application adds a bit of complexity because its execution context is different from the main application’s. For example, one of the ways that the various VIs contained in a module can hide their inner workings is through shared memory structures like FGVs. However, if some part of that module resides in another executable, that technique is no longer available.
Another issue is that the most secure, least error-prone technique for passing data is using complex data structures that mirror the input data’s inherent structure. Here the challenge is that while we can pass data into an executable using command-line parameters, that interface limits us to passing simple strings. Consequently, we will need to reformat our data, but do so in such a way that it is both easy to parse and doesn’t compromise the data’s integrity.
It appears that we have a conflict in requirements. We know that, in the end, everything has to be expressed as an ASCII string, but to transfer numeric data without compromising its precision we need to send binary information. Fortunately this problem isn’t a new one – in fact, it’s as old as the internet.
Originally, the internet was all about text: no pictures, no sound files, no cat videos – everything was expressed as ASCII text. While it was amazing what people were able to do given that constraint, there soon arose applications that couldn’t be faked: they needed to be able to send real binary data through an ASCII “pipe”. The solution (which, by the way, is still used today) was to encode the binary data as a string using a subset of ASCII. One of the oldest and most common encoding schemes is called Base64, and its name is derived from the way it encodes the data. It starts by stringing together consecutive 3-byte chunks of data into single 24-bit values. Each 24-bit value is then broken up into four 6-bit values. Now, a 6-bit binary number can have 64 distinct values (hence the name). Finally, the technique assigns a printable ASCII character to each of the 6-bit values from 000000 to 111111. For this final mapping, Base64 uses the characters A through Z, a through z, 0 through 9, “+” and “/”. The result is a string that is guaranteed to pass through an ASCII network without being corrupted.
Applying this technique to our application, we find that we can flatten any LabVIEW datatype to a string, and then Base64 encode the flattened string. What we get for this minimal effort is a string that we can easily parse and convert back into LabVIEW data. To keep things simple and error-free when using this technique, you just need to be sure that the LabVIEW data you flatten is defined as a typedef. Here is an example of one simple encoding implementation…
…and the corresponding decoding logic…
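The diagrams above are LabVIEW-specific, but the underlying round trip – flatten the data to a binary string, Base64 encode it, then reverse the process – can be sketched in a text language. Here is a hypothetical Python illustration (the helper names are mine, and `struct` packing stands in for LabVIEW’s Flatten To String):

```python
import base64
import struct

def encode_values(values):
    """Flatten a list of doubles to binary (standing in for LabVIEW's
    Flatten To String), then Base64 encode the result so it can travel
    safely as a command-line parameter."""
    flat = struct.pack(f">{len(values)}d", *values)  # big-endian doubles
    return base64.b64encode(flat).decode("ascii")

def decode_values(text):
    """Reverse the process: Base64 decode, then unflatten back to doubles."""
    flat = base64.b64decode(text)
    return list(struct.unpack(f">{len(flat) // 8}d", flat))

# The round trip preserves full binary precision.
encoded = encode_values([3.14159, -2.5, 1e300])
assert decode_values(encoded) == [3.14159, -2.5, 1e300]
```

Because the encoded string uses only the Base64 alphabet, it contains no spaces or control characters, which is exactly what makes it safe on a command line.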
The Base64 encoding and decoding VIs that I use are in the code linked at the end of the post. But what about the Information Hiding that I mentioned earlier? We can still hide the internal data structures; it just takes one small additional step. Let’s say that you have a module that contains several VIs that all manipulate the same private (hidden) data structure. All you have to do is create a new member of the module that reads the private data and returns a Base64-encoded version of it to the caller. Private data stays private, and yet the calling VI can pass it as a parameter to the external module member.
Note that because a Base64-encoded structure can’t contain spaces, carriage returns or other control characters, you can easily send (nearly) any number of structures using the command line.
Getting Data Out
So it turns out that using command line parameters with appropriate encoding makes getting runtime data into the external executable pretty easy. Now, how do we go about getting an error cluster out? Oh sure, I could implement some sort of communications protocol using TCP, or even just write the output to a file that I could then read from the main application, but the first of those options seems like an awful lot of work for just an error cluster, and the second is just plain clunky. A far more elegant solution is to make use of a feature that has been built into the OS since the days of the DOS command line, and which is the output analog of the command line parameters: Standard Out, or stdout.

stdout is a kind of buffer to which your code can write messages consisting of ASCII strings. If you are running a program that uses stdout from a command prompt, each message appears as it is written. If you are running the program using the LabVIEW System Exec.vi function, the strings are all buffered and returned at once when the function finishes. Happily, there are existing VIs that allow LabVIEW executables to write to stdout. The API consists of three VIs that I have encapsulated into a single library (Stdout Operations.lvlib). Here is a simplified example of how they might be used:
This example reads from the command line parameters a number that specifies how many times the loop should iterate, and includes logic to generate an error if the count is greater than 10. The first of the stdout VIs (Setup for StdOut.vi) opens the output buffer. Here is its front panel:
Describing the internal operation of this VI is way beyond the scope of this article because it involves a lot of Windows minutiae, so let’s just note that it has no inputs but the incoming error cluster. Likewise, although you might be expecting to see some sort of output reference, the only output is the outgoing error cluster. The reason for this stripped-down interface lies in the fact that stdout is built on an internal Windows construct called the console. Because there can be only one console, Windows maintains the structure internally – hence, no references are needed. With the console open, we can start writing to it (Append String to StdOut.vi):
Note that you have very little control over the contents of the stdout buffer. Every time you write to it, the new message is simply appended to the end of what was already there. In our little test program, with every iteration of the loop we append a timestamp and the current loop count. Finally, when we are done with the loop, we need to close the reference, but first there is one more thing to do. We have added to the buffer several strings that are effectively status messages. What about any errors that might have occurred? How do we report those? We’ll look at a better solution in a bit, but for now let’s just unbundle the error message and append it to the stdout output buffer.
Finally, the last VI (Terminate StdOut.vi) closes out the console associated with the stdout functionality. Here’s its front panel – there still isn’t much to see:
Trying out this simple example
In the code for this post, I have included this simple example, so we can try it out. All you have to do is open a Windows command line window, navigate to the directory containing the executable, and run the command:
You should see something like this:
Now try the same command, but with a count of 15, rather than 4. In this case you should see the error message, and not the counts.
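If you’d rather drive the test from code than from a command prompt, the caller side works the same way in any language with a process API. A minimal Python sketch (using a tiny inline child script as a stand-in for the LabVIEW executable, which is an assumption on my part) shows the buffered-stdout behavior:

```python
import subprocess
import sys

# A tiny inline child script stands in for the LabVIEW executable: it
# writes one status line per loop iteration to stdout.
child = "for i in range(4):\n    print('count =', i + 1)"

# Like System Exec.vi, subprocess.run waits for the program to finish
# and returns everything it wrote to stdout as one buffered string.
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
for line in result.stdout.splitlines():
    print(line)
```

Nothing arrives until the child exits – exactly the behavior described above for System Exec.vi.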
Ok, so far, so good. Let’s now consider an example that is a little more complete, and so could be used as a template for a deliverable application.
Building towards a better example
While the simple example is to some extent functional (note that I avoided using the word “works”), there are many places where we can improve its structure. To begin with, the logic for reading and handling the command line parameters is rather ad hoc. Defining some reusable VIs to handle that for you offers two benefits:
- Coding is Simpler: Having VIs that encapsulate (and therefore standardize) this functionality means you don’t have to remember how it all works.
- Coding is Faster: Clearly if you can just drop down a subVI that does what you need, development will go faster than if you have to build the same logic, “yet again”.
In addition, while the basic stdout functions we used in the first example cover the essentials needed to implement the functionality, there are a few places where we can add significant capability by building a bit on that existing foundation. To get started, let’s look at a VI I created to standardize how I deal with command line parameters.
A Reusable Command Line VI
One of the quirks of how LabVIEW returns command line parameters is that it breaks up the command line into an array of strings. The first element of this array is always the name of the application. The remaining elements hold the space-delimited command line parameters. Now, I’m sure there was a really good reason for this approach, but it certainly complicates parsing the parameters because it is much easier to parse a string than an array of strings. Consequently, the first thing I do in my command line parser (Get Command Line Arguments.vi) is delete the first element and then turn the remainder of the array back into a single long string by inserting a space between the elements.
To extract the parameters from the resulting string, I pass it into a loop that indexes through the string and separates it into two new arrays: one of parameter labels and one of parameter values. Technically you don’t need the labels, but I find that using them is beneficial from a usability standpoint. Without labels, the parameters have to be provided in a particular order, and the user has to remember that order. With labels, the parameters can be in any order. Moreover, labels simplify the implementation of default values for parameters that the user doesn’t provide.
Examining the code, you can see that I derive two benefits from parsing the command line as a string. First, I can create a match-string pattern that will find the equal sign regardless of whether there are spaces around it. If I were working with the original array, this simple operation would be much more complex, as I would have to write code to deal with an equal sign at the beginning or end of an element, as well as the case where the equal sign is in an element by itself. A second, related benefit is that working with a string makes it easy for me to deal with parameters that include embedded spaces. All I have to do is look at the first character after the equal sign: if it is a quotation mark, I look for the matching quote, and anything between them is my parameter value. If, however, the first character is anything else, the parameter value only extends as far as the first space.
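To make the parsing rules concrete, here is a rough Python equivalent of the logic just described – find label=value pairs whether or not the equal sign has spaces around it, and honor quotation marks around values with embedded spaces. The function name and regular expression are my own sketch, not the VI’s actual implementation:

```python
import re

def parse_params(command_line):
    """Split 'label=value' pairs out of a single command-line string.
    Spaces are allowed around the equal sign, and values may be quoted
    to allow embedded spaces."""
    labels, values = [], []
    # Match a label, an equal sign (with optional surrounding spaces),
    # then either a quoted value or a run of non-space characters.
    for m in re.finditer(r'(\w+)\s*=\s*("([^"]*)"|\S+)', command_line):
        labels.append(m.group(1))
        # group(3) holds the unquoted contents when quotes were used
        values.append(m.group(3) if m.group(3) is not None else m.group(2))
    return labels, values

labels, values = parse_params('n=4 path = "C:\\My Data" mode=fast')
assert labels == ["n", "path", "mode"]
assert values == ["4", "C:\\My Data", "fast"]
```

Because lookup is by label, the caller can supply the parameters in any order and omit ones with sensible defaults.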
Building on Standard stdout
In our earlier discussion of this functionality, we saw three VIs that implemented three basic operations: opening stdout, writing a string to it, and finally closing it. However, as I have said before, part of your job as a developer is to take such basic operations and combine them in ways that extend the API in directions that enhance, or simplify, its operation. As I considered how stdout might be used to return data, I saw two places where it would be valuable to add a few enhancements.
First, there’s the matter of how to return non-string data. Mirroring our discussion of formatting data for the command line, it should be obvious that we will often need to pass back data that isn’t naturally a string, and just as with that earlier discussion, we need to do it in a way that doesn’t result in the loss of any of its structure or precision. As you might expect, the solution we developed for the input will also work for the output.
My first step in creating this solution was to create a VI (Append Anything to StdOut.vi) that uses the same basic logic as the existing VI for writing to stdout, but which processes the data in a systematic way before transmitting it:
The first thing to note is the data input, which is a variant. Why a variant? Simple: when you place this VI on a block diagram, you can wire anything to a variant. Hence, acting as a kind of “wildcard”, this one VI will be able to handle any kind of LabVIEW data. Once we have our data coerced to a variant, we use logic very similar to what we did for the command line: we flatten it, Base64 encode it and finally add it to the output buffer. To simplify the processing of this data in the caller, I also created a VI that reverses the process by Base64 decoding the string and unflattening it back to a variant.
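For readers more comfortable with text languages, the variant round trip can be approximated in Python, with pickle standing in for LabVIEW’s flatten/unflatten of a variant. This is an analogy of the idea, not the VI’s actual mechanism:

```python
import base64
import pickle

def append_anything(buffer, data):
    """Serialize any Python value (standing in for a LabVIEW variant),
    Base64 encode it, and append it as one line of the output buffer."""
    line = base64.b64encode(pickle.dumps(data)).decode("ascii")
    buffer.append(line)

def read_anything(line):
    """Caller side: Base64 decode and deserialize back to the value."""
    return pickle.loads(base64.b64decode(line))

buf = []
append_anything(buf, {"counts": [1, 2, 3], "ok": True})
assert read_anything(buf[0]) == {"counts": [1, 2, 3], "ok": True}
```

One write function handles every datatype, which is exactly the convenience the variant input provides on the LabVIEW side.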
Finally, before moving on I did one thing more to make life easier and (slightly) less error prone for the other customers that we always have: the developers who will be maintaining or re-purposing our code after we are gone. Specifically, as things are now, we have two VIs for returning data to the calling program: one for string data and one for everything else. Wouldn’t it be nice if a developer could just drop down a VI and have it auto-magically adapt as needed to accommodate the data that was wired to it? LabVIEW actually provides that ability in the form of a “polymorphic” VI.
Unlike a normal VI, a polymorphic VI doesn’t have a block diagram or a front panel. Rather, it just provides a table for linking to a number of normal VIs that are called “instance” VIs. Here is the polymorphic VI I created for the stdout data write function:
Now, there is a lot of detail here that we won’t explore right now, but note two things. First, the table contains a list of two VIs: the version with the string input and the one with the variant input. Second, the checkbox labeled “Allow Polymorphic VI to Adapt to Data Type” is checked. The result is that future developers don’t have to worry about what the data being returned might be. Instead, they simply place the polymorphic VI on their block diagram and LabVIEW selects the appropriate instance of the underlying VIs based on what is wired to the data input. If you wire a string, the string version will be selected. Wire anything else and LabVIEW will link in the variant version:
Givin’ errors some love
The other place where we can benefit from expanding the stdout API is error handling. I’m separating out error data for special attention because even if your executable returns nothing else, it should always return an error cluster to tell the calling application that things worked as intended – or didn’t. Here is the block diagram of the VI I created for returning errors.
Its job is simply to take the error cluster, flatten it, Base64 encode it and write it to stdout. Normally this value will be the last one you write, so I created a complementary VI for the caller that strips off the last line of the data and extracts from it the external app’s error cluster.
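The error-as-last-line convention is easy to model. This hypothetical Python sketch shows both halves – the executable appending its encoded error cluster as the final line, and the caller splitting it back off (the cluster fields are illustrative, not LabVIEW’s exact layout):

```python
import base64
import pickle

def write_error_last(lines, error):
    """Executable side: append the encoded error cluster as the final
    line, then join everything into the stdout payload."""
    lines.append(base64.b64encode(pickle.dumps(error)).decode("ascii"))
    return "\n".join(lines)

def split_error(stdout_text):
    """Caller side: strip off the last line and decode the error
    cluster; everything before it is ordinary output."""
    *data, last = stdout_text.splitlines()
    return data, pickle.loads(base64.b64decode(last))

out = write_error_last(["count = 1", "count = 2"],
                       {"status": True, "code": 5432, "source": "demo"})
data, err = split_error(out)
assert data == ["count = 1", "count = 2"]
assert err["code"] == 5432
```

Because the encoded cluster contains no line breaks, the caller can always find it by taking the last line – no delimiters or length fields needed.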
One nuance that you might want to notice in this code is how I merge the error coming from the external program with the caller’s internal error chain. It might be tempting to simply make the error coming from the external program the output error cluster, like so:
This approach would work fine except for one little problem: the external program isn’t the only place where an error can occur. In particular, an incoming error can cause you to receive a spurious error that is a direct consequence of the earlier failure. One common way to handle this issue is to simply put the code in a big case structure that only executes its contents if there is no error. The problem with this “common” approach is that, when applied universally, it can result in a lot of case structures cluttering up a lot of block diagrams.
By contrast, the Merge Errors node combines error streams while effectively prioritizing the output. As used here, if there is a preexisting error it gets reported, not the returned error cluster. When working with code that can generate custom errors, you always need to be sure that the code reports the first error that occurs, and not a side effect.
A practical example
So let’s now take all the pieces that we have put together, and use them to assemble our practical example of creating external “subVIs”. For this expanded example, let’s create a function that calculates a factorial.
Mathematically, a factorial is like a running total of consecutive numbers, but instead of adding you multiply. For example, the factorial of 4 (represented by the expression 4!) is calculated as 1*2*3*4, resulting in an answer of 24. Clearly, this calculation can generate some very large numbers very fast.
As we design our new external function, we see that we have one input value that we need to provide, and one output – plus several possible error conditions. To begin with, the user could enter an invalid value as the factorial limit. Invalid values include things like zero, a negative number, alpha characters and, of course, no limit at all. Alternatively, the user could enter an input value that causes the answer to overflow the integer representation used for the math, so we need to be able to identify that condition as well.
The executable’s initialization logic has two tasks: set the correct window state (“Hidden”) and obtain the input from the command line parameters. Here is the logic that performs those tasks:
Here we see in action the subVI that I wrote to obtain the command line parameters. To get the required value, the code searches the list of parameter names for the parameter label “n”. It then uses the index thus obtained to index out the parameter value. The resulting string is converted to a number, which in turn drives the selector node of a case structure. Note that due to the way the indexing and numeric conversion work, two of our three bad inputs are guaranteed to generate an error condition.
Here is the case that gets selected for inputs of 0 or less.
For an error condition, the executable will only return one value: an error cluster identifying the problem. This logic generates an error that includes the extracted input value that raised the error.
Doin’ the factorial
The code that calculates the factorial generates a list of integers and then multiplies all the elements. As you can see, I am calculating the answer as an I64.
The reason for this approach lies in the way LabVIEW handles overflow conditions in signed integers, compared to unsigned integers. If you overflow an unsigned integer, the value simply rolls over, with no indication that the overflow occurred. By contrast, if a signed number overflows a little bit, the output will appear as a negative number; and if it overflows a lot it will return a 0 – both of which are easy conditions to trap.
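The overflow trap can be sketched in Python as well; since Python integers don’t roll over, this version compares the running product against the I64 maximum instead of watching for a negative or zero result, but the intent is the same:

```python
I64_MAX = 2**63 - 1  # largest value a signed 64-bit integer can hold

def factorial_i64(n):
    """Compute n! while trapping 64-bit signed overflow, analogous to
    watching an I64 result go negative or to zero in LabVIEW."""
    result = 1
    for i in range(2, n + 1):
        result *= i
        if result > I64_MAX:
            raise OverflowError(f"{n}! does not fit in an I64")
    return result

# 20! is the largest factorial that fits in a signed 64-bit integer.
assert factorial_i64(20) == 2432902008176640000
```

Calling `factorial_i64(21)` raises the overflow error, which mirrors the condition the LabVIEW code traps.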
Terminating the function
When the case structure finishes, the code appends the error cluster to the stdout data, closes the buffer and shuts down the function.
Testing our work…
As always, the last thing we need to do is test the code we have created. For that work, I created a VI (Run Factoral.vi) that expects the executable to be in a subdirectory of its own location.
To try out the code, enter a value for N and click the run arrow. Note that because the VI calling the external function formats the input value, you immediately avoid two possible errors. First, because the input is a number, the user can’t inadvertently send an alpha character. Second, because the input is a U8, the user can’t enter a negative number – though 0 is still a possibility, so give it a try to see the error message.
Toolbox – Release 19
The Big Tease
So what do we want to look into next time? One of the major resources that we have at our disposal as LabVIEW developers is the user forum. It can be an access point for discussing problems and issues with people who have decades of experience. Unfortunately, many people don’t get all the benefit that is available due to the way they ask questions. For example, people will ask overly general questions like:
“Is it possible to save data to disk with LabVIEW?”
Or at the other end of the spectrum, folks will over qualify questions by filling them with too many details:
I have a [[insert here the name of some obscure type of instrument]] and am trying to [[insert here 2 or more paragraphs of very detailed information]]. The problem I’m having is I can’t figure out how to set the instrument’s GPIB address…
In either of these situations it is highly unlikely that the OP (original poster) will get answers to their questions – though I must confess that there have been times when, being in a somewhat uncharitable frame of mind, I have answered the first type of question with:
Yes, yes it can.
The point of this post will be to consider several of the more common errors that people make when asking questions, and show how to get the information that you need.
Until Next Time…