Building a Proper LabVIEW State Machine Design Pattern – Pt 2

Last week’s post was rather long because (as is often the case in this work) there was a lot we had to go over before we could start writing code. That article ended with the presentation of a state diagram describing a state machine for a very simple temperature controller. This week we have on our plate the task of turning that state diagram into LabVIEW code that implements the state machine functionality.

The first step in that process is to consider in more detail the temperature control process that the state diagram I presented last week described in broad terms. By the way, as we go through this discussion please remember that the point of this exercise is to learn about state machines — not temperature control techniques. So unless your application is to manage the internal temperature of a hen-house, dog-house or out-house, don’t use this temperature control algorithm anywhere.

How the demo works

The demonstration simulates temperature control for an exothermic process — which is to say, a process that tends to warm or release heat into the environment over time. To control the temperature, the system has two resources at its disposal: an exhaust fan and a cooler. Because the cooler actively puts cool air into the area, it has a very dramatic effect on temperature. The fan, on the other hand, has a much smaller effect because it just reduces the heat through increased ventilation.

When the system starts, the state machine is simply monitoring the area temperature and as long as the temp is below a defined “High Warning Level” it does nothing. When that level is exceeded, the system turns on the fan, as shown by the fan light coming on. In this state, the temperature will continue to rise, but at a slower rate.

Eventually the temperature will exceed the “High Error Level” and when it does, the system will turn on the cooler (it has a light too). The cooler will cause the temperature to start dropping. When the temperature drops below the “Low Warning Level” the fan will turn off. This action will reduce the cooling rate, but not stop it completely. When the temperature reaches the “Low Error Level”, the cooler will turn off and the temperature will start rising again.
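Since LabVIEW code is graphical, I can’t paste the diagram into text, but the fan/cooler decisions described above amount to a pair of hysteresis comparisons. Here is a rough Python sketch of that behavior; the threshold values are invented for illustration, and only their ordering matters:

```python
# Hysteresis logic for the fan and cooler. The threshold values are
# made up for illustration; only their ordering matters.
LOW_ERROR, LOW_WARNING, HIGH_WARNING, HIGH_ERROR = 10.0, 20.0, 30.0, 40.0

def update_outputs(temp, fan_on, cooler_on):
    """Return the new (fan_on, cooler_on) states for one temperature reading."""
    if temp > HIGH_WARNING:
        fan_on = True        # rising past the high warning level starts the fan
    elif temp < LOW_WARNING:
        fan_on = False       # falling below the low warning level stops it
    if temp > HIGH_ERROR:
        cooler_on = True     # rising past the high error level starts the cooler
    elif temp < LOW_ERROR:
        cooler_on = False    # falling below the low error level stops it
    return fan_on, cooler_on
```

Note that inside the band between the warning levels nothing changes state, which is exactly the hysteresis that keeps the outputs from chattering.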

So let’s look at how our state machine will implement that functionality.

“State”-ly Descriptions

As I stated last week, the basic structure is an event structure with most of the state machine functionality built into the timeout case. To get us started with a little perspective, this is what the structure as a whole looks like.

State Machine Overview

Obviously, from the earlier description, this state machine will need some internal data in order to perform its work. Some of the data (the 4 limit values and the sample interval) is stored in the database, while other values are generated dynamically as the state machine executes. Regardless of its source, all of this data is stored in a cluster, and you can see two of the values it contains being unbundled for use. Timeout is the timeout value for the event structure and is initialized to zero. The Mode value is an enumeration with two values: Startup and Run. The Startup case contains the logic that implements startup error checking and reads the initial setup values from the database. When it finishes, it sets Mode to its other value, Run, which is where you will find the bulk of the state machine logic. Note that while I don’t implement it here, this logic could be expanded to provide the ability to do such things as pause the state machine.
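For readers who think better in text, here is a rough Python sketch of that overall structure. The event structure itself has no direct text equivalent, so this only models what happens inside the timeout case; the state names and the placeholder transitions are illustrative, not a transcription of the actual diagram:

```python
from enum import Enum, auto

class Mode(Enum):
    STARTUP = auto()   # one-time error checking, read setup values
    RUN = auto()       # normal state machine operation

class State(Enum):
    INITIALIZATION = auto()
    READ_INPUT = auto()
    DEINITIALIZE = auto()

def timeout_case(sm_data):
    """One pass through the timeout case of the event structure."""
    if sm_data["mode"] is Mode.STARTUP:
        # startup checks and database reads would go here
        sm_data["mode"] = Mode.RUN
        sm_data["state"] = State.INITIALIZATION
    else:
        handler = STATE_HANDLERS[sm_data["state"]]
        sm_data["state"] = handler(sm_data)
    return sm_data

# Placeholder transitions only; the real machine has more states.
STATE_HANDLERS = {
    State.INITIALIZATION: lambda d: State.READ_INPUT,
    State.READ_INPUT:     lambda d: State.DEINITIALIZE,
    State.DEINITIALIZE:   lambda d: State.DEINITIALIZE,
}
```

The first time through, the Startup branch runs once and flips the mode to Run; after that, every timeout executes exactly one state and records the next one.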

The following sections describe the function of each state and show the code that implements it.

Initialization

This state is responsible for getting everything initialized and ready to run.

Initialization State

In a real system, the logic represented here by a single hardware initialization VI would be responsible for initializing the data acquisition, verifying that the system is capable of communicating with the fan and cooler, and reading their operational states. Consequently, this logic might be one subVI, or there might be two or three subVIs. The important point is not to show too much detail in the various states. Use subVIs. Likewise, do not try to expand on the structure by adding multiple initialization states.

Finally, note that while the selection logic for the next state may appear to be a default transition, it isn’t. The little subVI outside the case structure actually creates a two-way branch in the logic. If the incoming error cluster is good, the incoming state transition (in this case, Read Input) is passed through unmodified. However, if an error is present, the state machine will instead branch to the Error state where the problem can be addressed as needed.
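In text form, the behavior of that little subVI amounts to a single conditional. This Python sketch uses hypothetical state names just to show the idea:

```python
from enum import Enum, auto

class State(Enum):          # illustrative state names only
    READ_INPUT = auto()
    ERROR = auto()

def check_error(error_in, proposed):
    """Mimics the transition-checking subVI: pass the proposed next
    state through unmodified unless the incoming error cluster reports
    a problem, in which case branch to the Error state instead."""
    return State.ERROR if error_in else proposed
```

Because every state routes its proposed transition through this one function, error branching is handled uniformly instead of being reimplemented in each state.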

Read Input

This state is responsible for reading the current temperature (referred to as the Process Variable) and updating the state machine data cluster.

Read Input State

In addition to updating the last value, there are a couple other values that it sets. Both of these values relate to how the acquisition delay is implemented in the state machine. The first of these is the Timeout value, and since we want no delay between states, we set this to zero. The other value is the Last Sample Time. It is a timestamp indicating when the reading was made. You’ll see in a bit how these values are used.

This state also updates two front panel indicators, the graph and a troubleshooting value.

Test Fan Limits

This state incorporates a subVI that analyzes the data contained in the state machine data to determine whether or not the fan needs to change state.

Test Fan Limits State

The selector in this state creates a potential 3-way branch. If a threshold has been crossed, the next state will be Set Fan; if it has not, the next state will be Test Cooler Limits; and if an error has occurred, the next state will be Error.

Set Fan

Since the fan can only be on or off, all this state needs to do is reverse its current operating condition.

Set Fan State

In addition to the subVI toggling the fan on or off, the resulting Fan State is unbundled from the state machine data and the value is used to control the fan LED.

Test Cooler Limits

This state determines whether the cooler needs to change state. Like the fan, the cooler can only be on or off.

Test Cooler Limits State

The logic here is very similar to that used to test the fan limits.

Set Cooler

Again, not unlike the corresponding fan control state, this state changes the cooler state and sets the cooler LED as needed.

Set Cooler State

Acquisition TO Wait

This state handles the part of the state machine that is often the Achilles’ heel of an implementation: How do we delay starting the control sequence again without incurring the inefficiency of polling?

Acquisition TO Wait State

The answer is to take advantage of the timeout that event structures already have. The heart of that capability is the timeout calculation VI, shown here:

Calculate Time Delay

Using inputs from the state machine data, the VI adds the Sample Interval to the Last Sample Time. The result is the time when the next sample will need to be taken. From that value, the code subtracts the current time and converts the difference into milliseconds. This value is stored back into the state machine data as the new timeout. This technique is very efficient because it requires no polling.
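For those who prefer text, the arithmetic the VI performs can be sketched in a few lines of Python. The function name and the use of seconds-based timestamps are my own choices for the sketch, not what the VI literally does:

```python
import time

def calculate_timeout_ms(last_sample_time, sample_interval_s, now=None):
    """Milliseconds until the next sample is due, referenced to the
    time of the last measurement rather than to 'now'."""
    if now is None:
        now = time.time()
    next_sample = last_sample_time + sample_interval_s  # when the next reading is due
    remaining = next_sample - now                       # how far away that still is
    return max(0, int(round(remaining * 1000.0)))       # clamp so we never go negative
```

Because the result is always referenced to the Last Sample Time, any time already spent reading data, or servicing an interrupting event, is automatically subtracted from the delay; an event handler just calls the same calculation again before returning.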

But wait, you say. This won’t work if some other event happens before the timeout expires! True. But that is very easy to handle. As an example, here is the modified event handler for the save data button.

Handling Interrupting Events

It looks just as it did before, but with one tiny addition. After reading, formatting and saving the data, the event handler calls the timeout calculation VI again to calculate a new timeout to the intended next sample time.

Error

This state handles errors that occur in the state machine. That being the case, it is the new home for our error VI.

Error State

Deinitialize

Finally, this state provides the way to stop the state machine and the VI running it. To reach this state, the event handler for the UDE that shuts down the application will branch to this state. Because it is the last thing to execute before the VI terminates, you need to be sure that it includes everything needed to bring the system to a safe condition.

Deinitialize State

With those descriptions done, let’s look at how the code works.

The Code Running

When you look at the new Temperature Controller screen you’ll notice that in addition to the graph and the indicators showing the states of the fan and cooler, there are a couple of numbers below the LEDs. The top one is the amount of time elapsed between the last two samples; the bottom one is the delay calculated for the acquisition timeout.

If you watch the program carefully as it’s running, you’ll notice something a bit odd. The elapsed time indicator shows a constant 10 seconds between updates (plus or minus a couple of milliseconds — which is about all you can hope for on a PC). However, the indicator showing the actual delay being applied is never anywhere near 10,000 milliseconds. Moreover, if you switch to one of the other screens and then back, the indicated delay can be considerably less than 10,000 milliseconds, but the elapsed time never budges from 10 seconds. So what gives?

What you are seeing in action is the delay recalculation we talked about earlier. In order to better simulate a real-world system, I put a delay in the read function that pauses between 200 and 250 msec. Consequently, when execution reaches the timeout calculation, we are already about 1/4 of a second into the 10-second delay. The calculation, however, automatically compensates for this delay because the timeout is always referenced to the time of the last measurement. The same thing happens if another event comes between successive data acquisitions.

On Deck

As always, if you have any questions, concerns, gripes or even (gasp!) compliments, post ’em. If not, feel free to use any of this logic as you see fit — and above all, play with the code to see how you might modify it to do similar sorts of things.

Stay tuned. Next week we will take a deeper look at something we have used before, but not really discussed in detail: dynamically calling VIs. I know there are people out there wondering about this, so it should be fun.

Until next time…

Mike…

Building a Proper LabVIEW State Machine Design Pattern – Pt 1

The other day I was looking at the statistics for this site and I noticed that one of the most popular posts with readers was the one I wrote on the producer/consumer design pattern. Given that level of interest, it seemed appropriate to write a bit about another very common and very popular design pattern: the state machine. There’s a good reason for the state machine’s popularity. It is an excellent, often ideal, way to deal with logic that is repetitive, or that branches through many different paths. Although it certainly isn’t the right design pattern for every application, it is a really good technique for translating a stateful process into LabVIEW code.

However, some of the functionality that state machines offer also means they can present development challenges. For example, they are far more demanding in terms of the design process, and consequently far less forgiving of errors in that process. As we have seen before with other topics, even the most basic discussion of how to properly design and use state machines is too big for a single post. Therefore, I will present one post to discuss the concepts and principles involved, and in a second post present a typical implementation.

State Machine Worst Practices

For some reason it seems like there has been a lot of discussion lately about state machine “best practices”. Unfortunately, some of the recommendations are simply not sound from an engineering standpoint. Meanwhile, others fail to take advantage of all that LabVIEW offers because they attempt to mimic the implementation of state machines in primitive (i.e. text-based) languages. Therefore, rather than spinning out yet another “best practices” article, I think it might be interesting to spend a bit of time discussing things to never do.

In describing bad habits to avoid, I think it’s often a good idea to start at the most general level and work our way down to the details. So let’s start with the most important mistake you can make.

1. Use the state machine as the underlying structure of your entire application

State machines are best suited for situations where you need to create a complex, cohesive, and tightly-coupled process. While an application might have processes within it that need to be tightly-coupled, you want the application as a whole to exhibit very low levels of coupling. In fact, some of the computer science literature goes so far as to deprecate the use of state machines altogether, asserting that they are inherently brittle and hard to maintain.

While I won’t go that far, I do recognize that state machines are typically best suited for lower-level processes that rarely, if ever, appear to the user. For example, communications protocols are often described in terms of state machines and are excellent places to apply them. One big exception to this “no user-interface” rule is the “wizard” dialog box. Wizards will often be built around state machines precisely because they have complex interface functionality requirements.

2. Don’t start with a State Diagram

Ok, so you have a process that you feel will benefit from a state machine implementation. You can’t just jump in and start slinging code right and left. Before you start developing a state machine you need to create a State Diagram (also sometimes called a State Transition Diagram) to serve as a road-map of sorts during the development process. If you don’t take the time for this vital step, you are pretty much in the position of a builder who starts work on a large building with no blueprint. To be fair, design patterns exist that are less dependent upon having a completed, thorough design. However, those patterns tend to be very linear in structure, and so are easy to visualize in good dataflow code. By contrast, state machines are very non-linear in their structure, so they can be very difficult to develop and maintain. To keep straight what you are trying to accomplish, state machines need to be laid out carefully and very clearly. The unfortunate truth, however, is that state machines are often used for the exact opposite reason. There is a common myth that state machines require a minimum of design because if you get into trouble, you can always just “add another state”. In fact, I believe that much of the bad advice you will get on state machines finds its basis in this myth.

But even if we buy the idea that state machines require a more thorough design, why insist on State Diagrams? One of the things that design patterns do is foster their own particular way of visualizing solutions to programming problems. For example, I have been very candid about how a producer/consumer design pattern lends itself to thinking about applications as a collection of interacting processes. In the same way, state machines foster a viewpoint where the process being developed is seen as a virtual machine constructed from discrete states and the transitions between those states. Given that approach to problem solving, the state diagram is an ideal design tool because it allows you to visually represent the structure that the states and transitions create.

So what does it take to do a good state-machine design? First you need to understand the process — a huge topic on its own. There are many good books available on the topic, as well as several dedicated web sites. Second, having a suitable drawing program to create State Diagrams can be helpful, and one that I have used for some time is a free program called yEd. However, fancy graphics aren’t absolutely necessary. You can create perfectly acceptable State Diagrams with nothing more than paper, a pencil and a reasonably functional brain. I have even drawn them on a white board during a meeting with a client and saved them by taking a picture with my cell phone.

Moreover, drawing programs aren’t much help if you don’t know what to draw. The most important knowledge you can have is a firm understanding of what a state machine is. This is how Wikipedia defines a state machine:

A finite-state machine (FSM) or finite-state automaton (plural: automata), or simply a state machine, is a mathematical model of computation used to design both computer programs and sequential logic circuits. It is conceived as an abstract machine that can be in one of a finite number of states. The machine is in only one state at a time; the state it is in at any given time is called the current state. It can change from one state to another when initiated by a triggering event or condition; this is called a transition.

An important point to highlight in this description is that a state machine is, at its core, a mathematical model — which itself implies a certain level of rigor in its design and implementation. The other point is that the model consists of a “finite number of states” that the machine moves between on the basis of well-defined events or conditions.

3. Ignore what a “state” is

Other common problems can arise when there is confusion over what constitutes a state. So let’s go back to Wikipedia for one more definition:

A state is a description of the status of a system that is waiting to execute a transition.

A state is, in short, a place where the code does something and then pauses while it waits for something else to happen. Now this “something” can be anything from the expiration of a timer to a response from a piece of equipment indicating that a command has completed successfully (or not). Too often people get this definition confused with that of a subVI. In fact, one very common error is for a developer to treat states as though they were subroutines that they can simply “call” as needed.

4. Use strings for the state variable

The basic structure behind any state machine is that you have a loop that executes one state each time it iterates. To define execution order there is a State Variable that identifies the next state the machine will execute in response to an event or condition change. Although I once saw an interesting object-oriented implementation that used class objects to both identify the next state (via dynamic dispatch) and pass the state machine operational data (in the class data), in most cases there is a much simpler choice of datatype for the State Variable: string or enumeration.

Supposedly there is an ongoing discussion over which of these two datatypes makes the better state variable. The truth is that while I have heard many reasons for using strings as state variables, I have yet to hear a good one. I see two huge problems with string state variables. First, in the name of “flexibility” they foster the no-design ethic we discussed earlier. Think about it this way: if you know so little about the process you are implementing that you can’t even list the states involved, what in the world are you doing starting development? The second problem with state strings is that using them requires the developer to remember the names of all the states, how to spell them, and how to capitalize them — or in a code maintenance situation, remember how somebody else spelled and capitalized them. Besides trying to remember that the guy two cubicles down can never seem to remember that “flexible” is spelled with an “i” and not an “a”, don’t forget that there is a large chunk of the planet that thinks words like “behavior” have a “u” in them…

By the way, not only should the state variable be an enumeration, it should be a typedef enumeration.
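To see the difference in text-language terms, compare what happens when a state name is misspelled. A typedef enum corresponds roughly to a Python Enum; the state names in this sketch are made up:

```python
from enum import Enum, auto

class State(Enum):                 # the typedef enum, in text form
    INITIALIZATION = auto()
    READ_INPUT = auto()
    ERROR = auto()

def dispatch(state):
    """One iteration of a state loop keyed by the enum."""
    handlers = {
        State.INITIALIZATION: lambda: State.READ_INPUT,
        State.READ_INPUT:     lambda: State.READ_INPUT,
        State.ERROR:          lambda: State.ERROR,
    }
    return handlers[state]()
```

A misspelled string such as "Read Imput" would silently fall through to whatever default case the loop provides; a misspelled enum member fails immediately and loudly, which is exactly the behavior you want from a state variable.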

5. Turn checking the UI for events into a state

In the beginning, there were no events in LabVIEW and so state machines had to be built using what was available — a while loop, a shift register to carry the state variable, and a case structure to implement the various states. When the event structure made its debut in LabVIEW 6.1, people began to consider how to integrate the two disparate approaches. Unfortunately, the idea that came to the fore was to create a new state (typically called something like Check UI) that would hold the event structure that handles all the events.

The correct approach is basically to turn that approach inside out and build the state machine inside the event structure — inside the timeout event, to be precise. This technique has a number of advantages. To begin with, it allows the state machine to leverage the event structure as a mechanism for controlling the state machine. Secondly, it provides a very efficient mechanism for building state machines that require user interaction to operate.

Say you have a state machine that is basically a wizard that assists the user in setting up some part of your application. To create this interactivity, a state in the timeout event puts a prompt on the front panel and sets the timeout for the next iteration to -1. When the user makes the required selection or enters the needed data, they click a “Next” button. The value change event handler for the button knows what state the state machine was last in, and so can send the state machine on to its next state by setting the timeout back to 0. Very neat and, thanks to event-driven programming, very efficient.
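Here is a rough Python sketch of that handshake. The data names are invented; in LabVIEW the timeout would live in the state machine data cluster and the event structure itself would do the actual waiting:

```python
# Timeout conventions borrowed from the LabVIEW event structure:
WAIT_FOREVER = -1   # timeout of -1 means wait indefinitely
RUN_NOW = 0         # timeout of 0 fires the timeout case immediately

def prompt_state(sm_data):
    """A wizard state: show a prompt, then park the state machine."""
    sm_data["prompt"] = "Please make a selection"
    sm_data["timeout"] = WAIT_FOREVER   # no more timeouts until someone wakes us
    return sm_data

def next_button_handler(sm_data):
    """Value-change event for the Next button: wake the state machine
    so the timeout case runs again and advances to the next state."""
    sm_data["timeout"] = RUN_NOW
    return sm_data
```

While the timeout is -1, the loop consumes no CPU at all; the button's event handler is the only thing that can set it back to 0 and let the machine advance.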

On the other hand, if you are looking for a way to allow your program to lock-up and irritate your users, putting an event structure inside a state is a dandy technique. The problem is that all you need to stop your application in its tracks is one series of state transitions where the “Check UI” state doesn’t get called often enough, or at all. If someone clicks a button or changes something on the UI while those states are executing, LabVIEW will dutifully lock the front panel until the associated event is serviced — which of course can’t happen because the code can’t get to the event structure that would service it. Depending on how bad the overall code design is and the exact circumstances that caused the problem, this sort of lock-up can last several seconds, or be permanent requiring a restart.

6. Allow default state transitions

A default state transition is when State A always immediately transitions to State B. This sort of design practice is the logical equivalent of a sequence structure, and suffers from all the same problems. If you have two or more “states” with default transitions between them, you in reality have a single state that has been arbitrarily split into multiple pieces — pieces that hide the true structure of what the code is doing, complicates code maintenance and increases the potential for error. Plus, what happens if an error occurs, there’s a shutdown request, or anything else to which the code needs to respond? As with an actual sequence structure, you’re stuck going through the entire process.

7. Use a queue to communicate state transitions

Question: If default transitions are bad, why would anyone want to queue up several of them in a row?
Answer: They are too lazy to figure out exactly what they want to do, so they create a bunch of pieces that they can assemble at runtime — and then call this kind of mess “flexibility”. And even if you do come up with some sort of edge case where you might want to enqueue states, there are better ways of accomplishing it than having a queue drive the entire state machine.

Implementation Preview

So this is about all we have room for in this post. Next Monday I’ll demonstrate what I have been writing about by replacing the random number acquisition process in our testbed application with an updated bit of LabVIEW memorabilia. Many years ago the very first LabVIEW demo I saw was a simple “process control” demo. Basically it had a chart with a line on it that trended upwards until it reached a limit. At that point, an onscreen (black and white!) LED would come on indicating a virtual fan had been turned on and the line would start trending back down. When it hit a lower limit, the LED and the virtual fan would go off and the line would start trending back up again. With that early demonstration in mind, I came up with this State Diagram:

Demo State Machine

When we next get together, we’ll look at how I turn this diagram into a state-machine version of the original demo — but with color LEDs!

Until next time…

Mike…

Advantages of Using Alternative Logic Symbols

There is a feature hidden on the LabVIEW function palettes that implements a very useful bit of functionality, but unfortunately is not being used nearly enough. I am speaking of the Compound Arithmetic node which can be found on either the Numeric or Boolean palette. Despite its name, the node does far more than simple “Arithmetic”. In this post we’ll consider how it can and should be used to help us implement logic operations.

Most folks use it when they need to save space while implementing a logic gate with more than 2 inputs, but there is a lot more functionality hidden inside it. For example, have you ever gotten tied up in the logic of what you feel should be a pretty simple operation, but ended up spending a lot of time figuring it out? Well, this little node can help you define logic using a simple, straightforward process.

Ever looked at a logical operation in a piece of inherited code (or worse, code you wrote yourself six months ago) and sat scratching your head trying to figure out what the original developer was trying to do? Well, this node can help you learn not only what the developer wanted, but what they were thinking.

Augustus De Morgan: The Other Big Name in Binary Logic

I first learned about binary and logic gates while going through electronics school in the United States Air Force. However, when I got out and found my first civilian job, I saw a gate on a schematic that looked something like this:

Alternative OR Gate

I of course knew what an AND gate was, but what was the deal with all the extra bubbles? When I asked an engineer I worked with about the symbol, he explained that it’s just another way of drawing an OR gate. When I asked why the designer didn’t just draw an OR gate, he explained (somewhat irritated) that the designer couldn’t because an OR gate means something different. So I asked one more question, “Well, if it’s the same as an OR gate how can it mean something different?”. The engineer responded (this time, decidedly irritated), “Because one is an OR gate and one is an AND gate!” At this point he stomped off praying under his breath for some unnamed deity to give him strength when dealing with, “stupid technicians”.

OK, so this mystery gate with all the extra bubbles is just another way of drawing an OR gate, but somehow it also managed to mean something different from an OR gate. In other words, it is simultaneously the same-as and different-from an OR gate. At this point, my head began to hurt as I wondered how functions that allowed this sort of confusion could be called “logic”.

Somewhat later I discovered Augustus De Morgan, and specifically De Morgan’s Law which stated (in the linguistic gobbledygook that mathematicians seem to love):

  • The negation of a conjunction is the disjunction of the negations.
  • The negation of a disjunction is the conjunction of the negations.

Uh, right… or in plain English: if you draw the truth tables for an AND gate with all its inputs and output inverted and for an OR gate, you will see that they are in fact identical. Likewise, the opposite is also true: the truth table for an OR gate with all its inputs and output inverted is the same as that for an AND gate. Well, that explained the bit about the modified gate symbol being, “…another way of drawing an OR gate.”
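If you doubt the truth tables, both forms of De Morgan's Law are easy to verify exhaustively, for example in a few lines of Python:

```python
from itertools import product

# Check both De Morgan equivalences over every Boolean input pair.
for a, b in product([False, True], repeat=2):
    # An AND gate with all inputs and its output inverted is an OR gate...
    assert (not ((not a) and (not b))) == (a or b)
    # ...and an OR gate with all inputs and its output inverted is an AND gate.
    assert (not ((not a) or (not b))) == (a and b)
```

Four input combinations per law, and every one checks out, which is all the proof a two-input gate needs.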

But what about the part where they mean different things? Again De Morgan provided the answer, not in the form of another pithily worded law, but in the basic meaning of the word “meaning”. Coming from the background I did, the output of a logic gate was either high or low, on or off, +4.5V or 0V. It never occurred to me that these signals could have a broader meaning — like a low on this input means that something important has happened, or even basic concepts like true and false. This realization made the engineer’s last statement make sense. An OR gate and an AND gate express fundamentally different logical ideas, regardless of how their inputs and output might be inverted.

At its most basic, an OR gate says, I want a given output when any input is in the indicated state. By contrast, the message an AND gate delivers is that I want a given output when all inputs are in the indicated states. So they really do mean different things. Still, while this discussion may have been interesting, and perhaps even intellectually stimulating, if there are no practical applications of the knowledge, we have wasted our time. However, the preceding discussion has several very practical implications. To begin with, it points the way to the possibility of creating code that not only defines a given functionality, but can actually provide insight into what you were thinking as you were designing and implementing the code. Moreover, due to the way LabVIEW implements the Compound Arithmetic node, you can easily build up even complex logic operations through a simple 3-step process. To see how this works, let’s walk through a simple example that shows all the steps.

Setting-up the Example

The example we’ll consider is developing the logic that either allows a while loop to keep running or causes it to stop — depending upon your point of view. We know that, by default, passing a Boolean true to the conditional node causes the loop to stop at the end of the current iteration. Now we can change that behavior by right-clicking on the conditional node — but why bother? Here is a skeleton of the loop that we’ll be controlling.

Basic Boolean Problem

The goal will be to control the operation of the loop where we have two values upon which to make our control decisions: a scaled random number that is measured against a range of 0 to 975, and the number of loop iterations, which is tested to determine whether or not it is equal to 50.

Defining the Output

Our first challenge is to decide what we want to do: let the loop run until something is true, or let the loop run as long as something is not true? Either one would work equally well, but given the specifics of the code you’re working on, one may represent how you are thinking about the task at hand better than the other. You want to use the one that better fits your thinking.

To implement this first decision, place a Compound Arithmetic node on the diagram and, if you are thinking that the loop should run until something happens, leave the output as is; otherwise, right-click on the node’s output terminal and select Invert from the pop-up menu. I’ll invert the output.

Added the logic node

Defining the Core Logic

To this point, we have been talking in generalities about “something” happening or not happening. Now we need to drill down a bit and start defining the “something”. Specifically, do we want the event to be flagged when all the inputs are in particular states or when any of the inputs are in particular states? The answer to this question tells us whether we fundamentally want an AND operation or an OR operation. Looking at the logic, I see the two parameters and I want the loop to continue if they are both valid — an AND operation. Now if you get your Compound Arithmetic node from the Boolean palette, the default operation is OR. On the other hand, if you get the node from the Numeric palette you’ll note that the default operation is ADD. In either case you will need to right-click on the node and select AND from the Change Mode… submenu. While we’re at it, let’s wire up our two conditions.

Correct logic gate selected and it’s all wired up

Note that although the run arrow is no longer broken, we aren’t done yet. We still have one more step.

Defining the Inputs

I said earlier that I wanted the gate to test whether all the inputs are valid. So now we need to look at each of the two inputs and identify what the valid state is for each. The top terminal is wired to the In Range? output from the In Range and Coerce node. Because this output is true when the input number is in range (and therefore valid), we will leave its input as is. However, the bottom input is coming from logic that compares the loop counter to a limit (50). Since that value is a limit, a valid input is one that has not yet reached the limit. Consequently, in this case, a valid number is indicated by the output of the comparison being false. To indicate that fact, right-click on the lower input terminal and select Invert from the pop-up menu. Here is the finished code.

Logic Example All Done

If you run it you’ll notice that the loop counter indicator will show a variety of values, depending on which of the two conditions stopped the loop, but the count will never be greater than 50.
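Rendered as text (Python standing in for the diagram, names hypothetical), the finished gate is an AND node with its lower input and its output both inverted:

```python
LIMIT = 50  # the iteration-count limit

def stop_loop(in_range, count):
    """True when the loop should stop at the end of the iteration."""
    number_valid = in_range                        # top input, used as-is
    count_valid = not (count == LIMIT)             # lower input, inverted
    keep_running = number_valid and count_valid    # the AND operation
    return not keep_running                        # inverted output -> stop terminal
```

`stop_loop(True, 10)` is False (keep running), while `stop_loop(True, 50)` and `stop_loop(False, 10)` are both True.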

Admittedly, at first glance the logic gate looks a little strange, but if you deconstruct it by stepping through the foregoing process backwards you can easily see what it’s doing. More importantly, though, the structure gives you an insight into what I was thinking when I created the logic. Also note that, depending on how you think about what it is you are doing, there are several other ways the gate could be defined that would give the same functionality.
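One such alternative follows directly from De Morgan’s law: an AND with its output inverted is equivalent to an OR with both of its inputs inverted. A quick sketch (Python, names mine) confirms the two forms drive the stop terminal identically:

```python
from itertools import product

def stop_as_and(valid_number, valid_count):
    # AND of the two valid flags, with the output inverted
    return not (valid_number and valid_count)

def stop_as_or(valid_number, valid_count):
    # OR node with both inputs inverted -- De Morgan's equivalent
    return (not valid_number) or (not valid_count)

# Exhaustively check all four input combinations
for a, b in product((False, True), repeat=2):
    assert stop_as_and(a, b) == stop_as_or(a, b)
```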

Until next time…

Mike…

Customers and Other Difficult People

This time I want to take a break from the usual technical “how-to” and talk a bit about another type of “how-to” — dealing with difficult people in a technical environment. Never fear, however, these points are very practical. So here we go:

Customers Lie

OK, I can hear you already: “But, I have always heard that the customer is always right!” Well, in some circumstances that maxim may be immediately applicable, but in the field in which we work it is almost always wrong, unless you substitute one little word. Let’s replace “always” with a more realistic “eventually”. So what do I mean by, “The customer is eventually right”? Let me explain:

When I was very new to this business, I got a phone call one afternoon from a friend who had arranged to do some work for someone but was not going to have the time to fulfill the contract. He assured me that the job was very simple. All I had to do was go in, develop a simple data-logging program for the customer and leave.

The customer was a doctor doing research at Massachusetts General Hospital. She had gotten her hardware all set up by the local NI sales representative, but needed a simple program built to record the data she collected. When I arrived for our appointment she was feeling very rushed and needed the software done very quickly. She said all she needed was for the software to read a new data point once a second and display it in a table on the front panel. This was a task that took 5 or 10 minutes.

When I showed her the result she was delighted. She affirmed that the program would significantly speed up their work because all her grad students had to do now was copy the numbers off the screen and plot them.

“Oh”, says I, “You need to plot the data?”

“Yes”, she replied, “but we don’t have the time…”

Asking for a moment more, I replaced the table with a graph and re-ran the program.

“My students are going to love you!”, she enthused, “With the graphing done, all they will need to do now is trace the image from the screen onto typing paper.”

“Wait a minute”, says I, “You need to print the graph?”

“Well, yes — but we don’t have the time…”

I think you can see where the conversation was going. In total, we did that dance for the better part of an hour and a half, and in the end, she had a program that did everything she needed. The problem was that her attempt to save time actually resulted in the program taking twice as long to develop because she never actually told me what she needed. The reason for this omission was simple: she had already decided that implementing all of what she needed would take too long and cost too much. Unfortunately, that decision was made without any information about me, LabVIEW or what can be done with LabVIEW in a short period of time (…even in Ver 1!). Consequently, she didn’t really tell me what she needed; rather, she told me what she thought I could accomplish.

Over the years I have come to see that this is a common problem. You see, by the time you walk in the door, many customers are like this doctor. They have been through some sort of internal approval process — which always takes longer than expected — so they start off having less time than they thought they had. Moreover, now they have a dollar figure that they can’t go over.

To prevent this sort of situation, I now always begin all new customer contacts with what I call “The Talk”. During this conversation I point out politely, but firmly, that their job during the initial meetings is to tell me everything they need from the proposed application, both now and in the foreseeable future. The most important part of the process is that they are to hold nothing back. It is only after that information is on the table that we begin talking about schedules and budgets. When you know all the facts, the customer will get the best results and you can do your best work.

Never Ask Permission to do What is Needed

This section is actually something I have been wanting to write about for a while, but wasn’t sure about how to get into it. The problem, of course, is that when misunderstood or applied incorrectly, this point can become an open door for ego to come charging in and muddy things up. Like many things in life, the answer lies not in maintaining some sort of “balance” but rather in learning to be comfortable with having two ideas running around in your head at the same time that might seem contradictory:

  1. I have been doing this work a long time and know what needs to be done.
  2. I might be wrong.

For a healthy career (and/or life) these ideas need to be in constant tension because clinging to either one will lead to problems: either arrogance on the one side or fear-induced immobility on the other. So how do you learn to be comfortable with this tension? Well, that is the Big Question that folks have been trying to figure out for millennia. If you have any thoughts on the topic I would be glad to hear them, but in the meantime, here are a couple of things that work for me.

  1. Know your standards:
    There are things on which we feel free to compromise. There are also things for which compromise is not possible, for to compromise on those things would be to compromise on who we are. And then there is, of course, that vast gray area between the two extremes. I won’t try to tell you where the lines should be drawn — I’m just going to remind you that there are lines for you to draw.
  2. Always be learning:
    Learning is a great way of “keeping it real”. I read new things that come out, but I also reread things that I first read decades ago. An interesting phenomenon that I have noticed is that each time I read certain papers I see things that I didn’t see before. It’s sort of like building: every time you learn something you are laying down a stone, and then standing on that stone you can see further than you did before.

Well, that’s all for now. Until next time…

Mike…

Flashback Thursday

Recently, while moving, I came across something in my garage that I thought had been lost years ago. It is a hardcopy of the first thing I ever wrote about LabVIEW for public consumption. The year was 1989 and the event was the first (and I think only) NI User’s Symposium. It was held at the Adam’s Mark hotel in Austin. Reading the paper, I was struck by two things: First, not only have I been using LabVIEW for 29 years, but I have been writing about it for 25 years! Ye Gads!

Second, though I alternately cringed and laughed while reading it, I also spent a lot of time nodding. Although the paper is clearly dated, many of the basic concepts have held up very well over the years.

Consequently, I have scanned it into a PDF and have posted it on the site in the White Paper section. Enjoy.

Until next time…

Mike…

PS: Before anybody asks: No, I am not going to post any pictures of me from 1989…

The Opposite of Simple

It may not have occurred to you, but every application exhibits two forms of complexity. This dual nature of complexity can, at times, make it hard to keep straight what someone means when they start talking about “simplifying” an application.

First there is the complexity that is inherent in what you are trying to do. In that definition, notice that the word “inherent” is the important one. It means that there is no way of reducing that complexity without fundamentally changing the goal of your work.

Inherent Complexity in a Trip

As an example of what I mean, let’s go back to the summer between my 6th and 7th grades in school. That year my Dad had gotten a new job and we were moving to the (then) little town of Lebanon, Missouri. As we were driving to our new home we stopped along the way at a Shell filling station, and I noticed that the oil company was offering a new travel service. If you sent in a card with the departure date, starting point and destination of a trip, they would send you all the required maps and plan the trip for you — for free…

Being the kind of kid I was (and some would say, still am) I snagged one of the postage-paid cards and sent in a request for a trip from our new home in Lebanon, Missouri to Nome, Alaska. Why Nome? Why not Nome? Moreover I stated that the trip was to be made in October. After sending off the card, I thought no more about it until about 3 weeks later when a very large box arrived for me from “Shell Oil Travel Service”. Oh yes, did I mention that I hadn’t told my folks I was sending off the card?

Upon opening the box, I discovered highway maps for several US states and a couple Canadian provinces with my suggested route highlighted in blue — or at least most of my trip was in blue. Also in the box was a very nice letter from a travel planner stating that there weren’t actually any roads connecting Nome to the rest of the world, but if I desired more assistance they would be glad to help me identify a reliable bush pilot and/or dog sled owner who would help me complete the last leg of my journey.

Needless to say, my parents were not amused and, being the good parents they were, they made me write an apology to the planner identified in the letter.

The point is that a trip from Lebanon to Nome in October involves a certain level of complexity and the only way to reduce that complexity is to change the trip. Instead of going to Nome, perhaps I could travel to Seattle, Washington. Or instead of going in October, I could make the trip in June when the ocean wasn’t frozen and I at least could travel the last leg by ship.

Induced Complication

In addition to an application’s inherent complexity, there is also the issue of how complicated I choose to make the implementation of that complexity. Getting back to my “Alaskan Adventure”, if I made the trip in a brand-new 4-wheel-drive vehicle with good snow tires and extra fuel tanks, the journey would be long, but manageable. However, if I tried to make the journey in a beat-up ’53 Nash Rambler with bald tires and an engine that burnt as much oil as it did gasoline, the trip would be much more complicated.

The same thing applies to software. There is a myth floating around the industry that complex requirements require complicated code. Actually, I have found no real correlation between the two. That is why I particularly like the tagline the LabVIEW Champion Christian Altenbach has for his forum posts:

Do more with less code in less time

As you’re designing and developing code you must never forget the difference between inherent complexity and induced complication. You need to remember especially that in that last sentence, you are the inducer. You get to decide how complicated you make your code — don’t use complexity as an excuse for over-complication.

Until next time…

Mike…

Module, module, whose got the module?

One of the foundation stones of software architecture is the idea of modularity, or modular design. The only problem is that while the concepts of modular design are well-known, the term “module” seems to get used at times in rather inexact ways. So what is a module? Depending on who’s speaking it sometimes appears to refer to a VI, sometimes to a library or collection of VIs, and sometimes even a whole program or executable can be called a module. The thing is, all those are legitimate usages of the word — and they are just the beginning.

So how is one to keep straight what is meant? The only way that I have found is to stay alert, and keep straight in your head the conversation’s context. Well-designed software is always modular, but it is also hierarchical. Consequently, the modularity will also be hierarchical. For example, a VI can encapsulate some functionality that you wish to use throughout a project (or across multiple projects), and so can be considered a module. But that VI can also be used as part of a higher-level, more abstract module that uses it to implement some broader system-level functionality.
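As a rough textual analogy (Python functions standing in for VIs; all names hypothetical), a low-level module encapsulates one reusable operation, and a higher-level module composes it into broader system behavior:

```python
def read_temperature():
    """Low-level module: encapsulates how a reading is acquired.
    Callers neither know nor care about the details."""
    return 72.0  # stand-in for real hardware I/O

def temperature_alarm(high_limit):
    """Higher-level module: implements system-level functionality
    (limit checking) by reusing the lower-level module."""
    return read_temperature() > high_limit
```

The same `read_temperature` could serve many higher-level modules without any of them depending on how the reading is actually made.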

To see this structure in action, all you have to do is consider any modern software package — including LabVIEW itself. Much of the user experience we think of as “LabVIEW” is actually the result of cooperation between dozens of independent modules. There are modules that form the user interface, modules that implement the compiler and other modules that manage system resources like licenses, network connections and DAQ IO. Regardless of the level at which the modularity occurs, the benefits are much the same — reduced development costs, improved maintainability, lower coupling and enhanced cohesion. In short, modularity is a Very Good Thing.

However, you may have noticed that this description (like so many others) concentrates on modularization as a part of the implementation process. Consequently, I want to introduce another idea about modules: the use of modularization during the design process. In particular, I want to provide you with a link to a paper by D.L. Parnas that was published the year I graduated high school, that I first read in 1989, and that every software engineer should read at least once a year because it is as relevant today as it was 43 years ago. The paper bears the rather daunting title: On the Criteria To Be Used in Decomposing Systems into Modules.

As the title suggests, the point is not to make the case for modularization — even then it was assumed that modularity was the way to go. The question was how to go about breaking up a system into modules so as to derive the greatest benefit. To this end, Dr Parnas takes a simple application and examines two different ways that it could be broken down into modules, comparing and contrasting the results.

In a section titled “What is Modularization?”, Dr Parnas sets the context for the discussion that follows:

In this context “module” is considered to be a responsibility assignment rather than a subprogram. The modularizations include the design decisions which must be made before the work on independent modules can begin.

In other words, this discussion is primarily about designing, not implementing. Just as the first step in designing a database is generating a data model, so the first step in designing a program is to identify where the functional responsibilities lie. And just as the data model for a database doesn’t translate directly into table structures, so this design modularization will not always translate directly into code. However, when done properly this initial design work serves a far greater good than simply providing you with a blueprint of the code you need, it also teaches you how to think about the problem you are trying to solve.

It can be easy to get tied up in the buzzwords of the work we do and think that they (the buzzwords) will protect us from creating bad code, like talismans or magical words of power. But there is no magic to be found in words like “object-oriented”, “hierarchical structure” or “modular design”. As Parnas shows, it is possible to create very bad code that is modular, hierarchical and object-oriented.

Until next time…

Mike…