Getting Everything Running

I left you last time stuck on the horns of a dilemma. To wit: It’s not very hard to create LabVIEW applications structured as multiple parallel processes that are for the most part operating independently of one another. However, now you are going to need some way to launch all those independent processes and manage their operation.

These are some of the issues that we’ll deal with this time, but first you may be wondering if this methodology is really worthwhile. Specifically, what benefits are to be garnered? In a word: modularity — on steroids (Ok, that’s three words. So sue me).

The Case for Autonomous Processes

A few posts ago, I dealt in broad terms with the issue of modularity and mentioned that it affects both the design and implementation of an application. Consequently, modularity has implications that can reach far beyond simply the structure of an application. Modularity can also affect such diverse, and apparently unrelated, areas as resource allocation and even staffing.

For example, let’s say you have an application that needs to acquire data, display data, and manage system events that occur. Now let’s say you structure this application such that it has three loops on the same block diagram — or worse, one big loop that tries to do everything. How many developers can effectively work on the application at once? One.

Oh, you might be able to split off a few random tasks like developing lower-level drivers, or prototyping the user interface, but when crunch time comes the work will fall to one person. Heaven help you if that one person gets sick. Or that one person gets hit by a bus. Or that one person gets tired of dealing with the stress caused by being the project critical path incarnate and quits.

And don’t forget that one developer is a best-case scenario. Such projects can easily turn into such a mess that there is no way anyone can “effectively” work on them. If you get thrust into such a situation, and you don’t have a manager that really and truly has your back, the best you can hope for is to survive the encounter. (And, yes. I am speaking from personal experience.)

But let’s say that you decide to use a different modularization. Let’s say that you break the application into three independent, but interacting, processes. Now how many people can effectively work on the project at once? Three. Likewise, at crunch time how many people can assist with integration testing and validation? Three — and each one with a specific area of expertise in the project domain.

Oh, in case you’re wondering, “three” is a worst-case scenario. Depending on how each of those three pieces is structured, the number of possible concurrent operations could go much higher. For instance, the lead developer of the DAQ module might decide to create separate processes for DAQ boards and GPIB instruments; or the GUI developer might identify 4 separate interface screens that will run independently, and so can be developed independently. Now you are up to 7 parallel development tasks, and 7 pairs of hands that can pitch in at crunch time.

So in terms of justification, ‘Nuff said… All we have to do now is figure out where to put the logic that we need.

Making a Splash [Screen]

Did you ever wonder why so many large applications have big splash screens telling you about the program you are launching? Is narcissism really running rampant in the software industry? Or is there a little sleight-of-hand going on here?

Part of the answer to those questions lies in a magic number: 200. That number is important because it is the approximate number of milliseconds that an average user will wait for a program to respond before beginning to wonder if something has gone horribly wrong. Any way you slice it, 200 msec isn’t remotely long enough to get a large program loaded into memory and visible to the user.

So what is a developer to do? As quickly as possible, put up an appealing bit of eye-candy, the implicit (and occasionally explicit) purpose of which is to congratulate the user for having the wisdom and foresight to decide to launch your program. Then, while the user is thus distracted, get everything else loaded and running as quickly as you can.

That is why programs have splash screens. It also points to one of the unique aspects of a splash screen: Unlike most VIs, what you see on the front panel of a splash screen often has little or nothing to do with what the VI is actually doing. Or put another way, splash screens are a practical implementation of the concept, “Ignore the man behind the curtain!”

Designing a Launch Pad

As you should expect from me by now, the first thing to do is consider the basic requirements for this launcher VI and list the essential components:

  1. A list of processes to be launched. As a minimum, this list will provide all the information needed to identify the process VIs. However, it must also be expandable to allow the inclusion of additional runtime parameters.
  2. A loop to process the launch list. This loop will step through the launch list one element at a time and perform whatever operations are needed to get each process running. As a first-pass effort, the launcher must be able to handle reentrant and non-reentrant processes.

Because the loop that does the actual launching is pretty trivial, we’ll concentrate our design effort on a discussion of the data structure needed to represent the launch list. Clearly, the list will need to vary in size, so the basic structure will need to be an array. Next, to properly represent or describe each process to be launched (as well as any runtime data it may require) will probably require a variety of different datatypes, so the structure inside the array will need to be a cluster — and a typedef’d one at that. It is inevitable that eventually this data will need to change.
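Since LabVIEW code is graphical, here is a conceptual sketch of that data structure in Python. The array of typedef’d clusters maps naturally onto a list of dataclass instances; the field names (`vi_path`, `options`) are my own illustrative assumptions, not names from any actual project.

```python
from dataclasses import dataclass, field

# Hypothetical analogue of the typedef'd LabVIEW cluster: one entry
# in the launch list, describing a single process to be started.
@dataclass
class LaunchSpec:
    vi_path: str  # identifies the process VI to load
    # Expansion room for additional runtime parameters, per the
    # requirement that the structure be able to grow over time.
    options: dict = field(default_factory=dict)

# The launch list itself: an "array" of those clusters.
launch_list = [
    LaunchSpec("Acquisition.vi"),
    LaunchSpec("Display.vi", {"screen": 1}),
]
```

Because the cluster is a typedef, adding a field later (say, a priority or an instance name) updates every place the launch list is used, which is exactly why the typedef is non-negotiable.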

To determine what information the cluster will need to contain, let’s look at what we know about the call process that we will be using. Because these processes will be operating independently of one another, the launcher will use the Asynchronous Call-and-Forget method. To make this technique work, we will need the path to the VI so the launcher can load it into memory, and something to identify how each VI should be run. Remember, we want to support both reentrant and non-reentrant processes. While the launch processes that the two types of VIs require are very similar, they are not identical, so at the very least we need to be able to distinguish between them. But what should the datatype of this selector be? Most folks would figure that the selector only has 2 possible states, and so they would make it a Boolean…

…and most folks would be wrong.

There are 3 reasons why a Boolean value is incorrect:

  1. It is inherently confusing. There is no natural or intuitive mapping between possible logical values of this parameter and the true or false states available with a Boolean.
  2. It runs counter to the logical purpose of a Boolean. Boolean controls aren’t for just any value that has two states. They are for values that can only have two states. This selector coincidentally happens to have two states.
  3. It will complicate future expansion. There are other possible situations where the technique for using a given process might vary from the two currently being envisioned. When that happens you’ll have to go back and change the parameter’s fundamental datatype — or do something equally ugly.

The correct solution is, of course, to make the value a typedef enumeration with two values, “Normal” and “Reentrant”.
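To make the idea concrete outside of LabVIEW’s graphical environment, here is a minimal Python sketch of the enum selector and the launch loop’s case structure. The function body is a stand-in for the Asynchronous Call-and-Forget mechanics, and all the names here are illustrative assumptions.

```python
from enum import Enum

# Analogue of the typedef'd enumeration with its two values.
class LaunchMode(Enum):
    NORMAL = "Normal"
    REENTRANT = "Reentrant"

def launch(vi_path: str, mode: LaunchMode) -> str:
    # Stand-in for the launcher loop's case structure: each case
    # performs the slightly different launch sequence its VI type
    # requires (in LabVIEW, opening the reference with or without
    # the reentrant-clone option before starting the async call).
    if mode is LaunchMode.REENTRANT:
        return f"launched reentrant clone of {vi_path}"
    if mode is LaunchMode.NORMAL:
        return f"launched {vi_path}"
    raise ValueError(f"unsupported launch mode: {mode}")
```

Note the payoff for future expansion: supporting a third launch technique later means adding one enum value and one case, rather than ripping out a Boolean parameter everywhere it is used.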

Handling Paths

While we are considering the data details, we also need to talk a bit about the VI path parameter. At first blush, this might seem a straightforward matter — and in the development environment it is. The challenge arises when you decide to build your code into a standalone executable. To understand why this is so, we need to consider a little history.

In LabVIEW versions prior to 2009, the executables you created stored files internally in what was essentially a flat directory structure. Hence, if you wanted to build a path to a VI inside an executable, it was just the path to the executable with the VI name added to the end, like so:

C:\myProgram\myProgram.exe\myProgram.vi

This simple structure made it very easy to build a path to a VI and worked well for many years. However, as the LabVIEW development environment became more sophisticated, this simple structure began to exhibit several negative side-effects. One of the most pernicious of those effects was one that nearly everyone tripped over at least once: The path of a VI in an executable, while simple to build, was different from the path to the same VI in the development environment.

To fix this problem (and many, many, many others) LabVIEW 2009 defined a new internal structure that more closely reflects the directory structure present in the development environment. Thanks to this change, once you know the path to the “project directory” inside the executable, the logic for building a path to a VI is exactly the same regardless of the environment in which it is running. The only trick now is getting to that internal starting point, and that can be a bit involved depending on how you have VIs stored on your computer. Being basically a simple man, when approached with a requirement such as this I prefer an uncomplicated approach — and what could be less complicated than a functional global variable?

Because the launcher VI is always in the project root directory, writing its path (stripped of the VI name, of course) to a FGV provides a fast shortcut for any process or function that needs to build an internal path. Note that this expedient is only needed for LabVIEW files that are located in the executable. For non-LabVIEW files, the Application Directory node provides the same logical starting point for path building.
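A functional global variable is, at heart, a single storage location with a write mode and a read mode. As a rough sketch of the idea in Python (the LabVIEW version would be a VI with an uninitialized shift register; all names here are hypothetical):

```python
import os

# Module-level storage standing in for the FGV's uninitialized
# shift register: written once by the launcher, read by everyone.
_project_root = None

def project_root(launcher_path=None):
    """Write mode: pass the launcher VI's full path to store its
    directory (the VI name is stripped off, as in the post).
    Read mode: call with no argument to get the stored root."""
    global _project_root
    if launcher_path is not None:
        _project_root = os.path.dirname(launcher_path)
    return _project_root

def internal_path(*parts):
    """Build a path to a LabVIEW file relative to the project root,
    which works the same in the IDE and inside an executable."""
    return os.path.join(project_root(), *parts)
```

The launcher calls the write mode once at startup; after that, any process that needs to build an internal path simply calls the read mode, with no need to care whether it is running in the development environment or a built executable.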

Managing Windows

Finally, if you implemented an application with all the features that we have discussed to this point, you would have an application that would acquire data in the producer loop and seamlessly pass it to the consumer loop. Unfortunately, you wouldn’t be able to see what it was doing because there wouldn’t be any windows open.

By default, when you dynamically call a VI the target of the call doesn’t automatically open its front panel. This behavior is fine for the acquisition VI, because we want it to run unseen in the background anyway. However, it also means that you can’t see the data display. To make the display visible we need to tweak a few VI properties for the display VI. To make this sort of setting easy, the Window Appearance portion of the VI properties dialog box provides a couple of predefined window definitions. The one that will get us what we want is the top-most radio button, labeled Top-level application window. This selection sets a number of properties, such as turning off the scrollbars and setting the VI to automatically open its front panel when it runs and close it when it finishes.

Note that the Window Appearance screen also allows you to specify a runtime title for the window — an optional but professional touch to add to your application. This setting defines only a static title; you can also set the title dynamically from the VI using a VI property node.

So that’s about it for now. The next time we get together I’ll introduce how I have applied all the infrastructure we have been discussing to the “testbed” application introduced in the post on UDEs.

Until next time…

Mike…

Module, module, who’s got the module?

One of the foundation stones of software architecture is the idea of modularity, or modular design. The only problem is that while the concepts of modular design are well-known, the term “module” seems to get used at times in rather inexact ways. So what is a module? Depending on who’s speaking it sometimes appears to refer to a VI, sometimes to a library or collection of VIs, and sometimes even a whole program or executable can be called a module. The thing is, all those are legitimate usages of the word — and they are just the beginning.

So how is one to keep straight what is meant? The only way that I have found is to stay alert, and keep straight in your head the conversation’s context. Well-designed software is always modular, but it is also hierarchical. Consequently, the modularity will also be hierarchical. For example, a VI can encapsulate some functionality that you wish to use throughout a project (or across multiple projects) and so can be considered a module. But that VI can also be used as part of a higher-level, more abstract module that uses it to implement some broader system-level functionality.

To see this structure in action, all you have to do is consider any modern software package — including LabVIEW itself. Much of the user experience we think of as “LabVIEW” is actually the result of cooperation between dozens of independent modules. There are modules that form the user interface, modules that implement the compiler, and other modules that manage system resources like licenses, network connections and DAQ IO. Regardless of the level at which the modularity occurs, the benefits are much the same — reduced development costs, improved maintainability, lower coupling and enhanced cohesion. In short, modularity is a Very Good Thing.

However, you may have noticed that this description (like so many others) concentrates on modularization as a part of the implementation process. Consequently, I want to introduce another idea about modules: the use of modularization during the design process. In particular, I want to provide you with a link to a paper by D.L. Parnas that was published the year I graduated high school, that I first read in 1989, and that every software engineer should read at least once a year because it is as relevant today as it was 43 years ago. The paper bears the rather daunting title: On the Criteria To Be Used in Decomposing Systems into Modules.

As the title suggests, the point is not to make the case for modularization — even then it was assumed that modularity was the way to go. The question was how to go about breaking up a system into modules so as to derive the greatest benefit. To this end, Dr. Parnas takes a simple application and examines two different ways that it could be broken down into modules, comparing and contrasting the results.

In a section titled What is Modularization?, Dr. Parnas sets the context for the discussion that follows:

In this context “module” is considered to be a responsibility assignment rather than a subprogram. The modularizations include the design decisions which must be made before the work on independent modules can begin.

In other words, this discussion is primarily about designing, not implementing. Just as the first step in designing a database is generating a data model, so the first step in designing a program is to identify where the functional responsibilities lie. And just as the data model for a database doesn’t translate directly into table structures, so this design modularization will not always translate directly into code. However, when done properly this initial design work serves a far greater good than simply providing you with a blueprint of the code you need, it also teaches you how to think about the problem you are trying to solve.

It can be easy to get tied up in the buzzwords of the work we do and think that they (the buzzwords) will protect us from creating bad code, like talismans or magical words of power. But there is no magic to be found in words like “object-oriented”, “hierarchical structure” or “modular design”. As Parnas shows, it is possible to create very bad code that is modular, hierarchical and object-oriented.

Until next time…

Mike…