Building a Proper LabVIEW State Machine Design Pattern – Pt 2

Last week’s post was rather long because (as is often the case in this work) there was a lot we had to go over before we could start writing code. That article ended with the presentation of a state diagram describing a state machine for a very simple temperature controller. This week we have on our plate the task of turning that state diagram into LabVIEW code that implements the state machine functionality.

The first step in that process is to consider in more detail the temperature control process that the state diagram I presented last week described in broad terms. By the way, as we go through this discussion, please remember that the point of this exercise is to learn about state machines — not temperature control techniques. So unless your application is to manage the internal temperature of a hen-house, dog-house or out-house, don’t use this temperature control algorithm anywhere.

How the demo works

The demonstration simulates temperature control for an exothermic process — which is to say, a process that tends to release heat into the environment over time. To control the temperature, the system has two resources at its disposal: an exhaust fan and a cooler. Because the cooler actively puts cool air into the area, it has a very dramatic effect on temperature. The fan, on the other hand, has a much smaller effect because it just reduces the heat through increased ventilation.

When the system starts, the state machine simply monitors the area temperature, and as long as the temperature stays below a defined “High Warning Level” it does nothing. When that level is exceeded, the system turns on the fan, as shown by the fan light coming on. In this state, the temperature will continue to rise, but at a slower rate.

Eventually the temperature will exceed the “High Error Level” and when it does, the system will turn on the cooler (it has a light too). The cooler will cause the temperature to start dropping. When the temperature drops below the “Low Warning Level” the fan will turn off. This action will reduce the cooling rate, but not stop it completely. When the temperature reaches the “Low Error Level”, the cooler will turn off and the temperature will start rising again.

So let’s look at how our state machine will implement that functionality.

“State”-ly Descriptions

As I stated last week, the basic structure is an event structure with most of the state machine functionality built into the timeout case. To get us started with a little perspective, this is what the structure as a whole looks like.

State Machine Overview

Obviously, from the earlier description, this state machine will need some internal data in order to perform its work. Some of the data (the four limit values and the sample interval) is stored in the database, while the rest is generated dynamically as the state machine executes. Regardless of its source, all of this data is stored in a cluster, and you can see two of the values that it contains being unbundled for use. Timeout is the timeout value for the event structure and is initialized to zero. The Mode value is an enumeration with two values. The Startup case contains the logic that implements startup error checking and reads the initial setup values from the database. When it finishes, it sets Mode to its other value: Run. This is where you will find the bulk of the state machine logic. Note that while I don’t implement it here, this logic could be expanded to provide the ability to do such things as pause the state machine.
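
Since LabVIEW is graphical, I can’t show the diagram itself as text, but here is a rough Python sketch of the shape just described: an event loop whose timeout case holds the state machine, with the Mode value picking between the Startup and Run logic. The state and mode names follow the sections below; the data dictionary, the queue and the helper functions are purely illustrative stand-ins, not a transcription of the actual block diagram.

    import queue
    from enum import Enum, auto

    class Mode(Enum):
        STARTUP = auto()
        RUN = auto()

    class State(Enum):
        INITIALIZATION = auto()
        READ_INPUT = auto()
        TEST_FAN_LIMITS = auto()
        SET_FAN = auto()
        TEST_COOLER_LIMITS = auto()
        SET_COOLER = auto()
        ACQUISITION_TO_WAIT = auto()
        ERROR = auto()
        DEINITIALIZE = auto()

    ui_events = queue.Queue()   # stands in for the event structure's other event sources

    def state_machine():
        data = {"mode": Mode.STARTUP, "state": State.INITIALIZATION,
                "timeout_ms": 0, "running": True}
        while data["running"]:
            try:
                # the "event structure": wait for a UI event, or time out
                event = ui_events.get(timeout=data["timeout_ms"] / 1000.0)
                handle_ui_event(event, data)
            except queue.Empty:
                # the timeout case holds the state machine proper
                if data["mode"] is Mode.STARTUP:
                    startup_checks(data)       # error checking, read setup values
                    data["mode"] = Mode.RUN
                else:
                    run_one_state(data)        # execute the current state, pick the next

    def handle_ui_event(event, data):
        if event == "shutdown":                  # e.g. the application's shutdown UDE fires
            data["state"] = State.DEINITIALIZE   # branch straight to the Deinitialize state

    def startup_checks(data):
        pass                                     # placeholder: startup checks, read setup values

    def run_one_state(data):
        # placeholder for the Run-mode case structure; the real states are described below
        data["running"] = False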

The following sections describe the function of each state and show the code that implements it.

Initialization

This state is responsible for getting everything initialized and ready to run.

Initialization State

In a real system, the logic represented here by a single hardware initialization VI would be responsible for initializing the data acquisition, verifying that the system is capable of communicating with the fan and cooler, and reading their operational states. Consequently, this logic might be one subVI, or there might be two or three. The important point is not to show too much detail in the various states. Use subVIs. Likewise, do not try to expand on the structure by adding multiple initialization states.

Finally, note that while the selection logic for the next state may appear to be a default transition, it isn’t. The little subVI outside the case structure actually creates a two-way branch in the logic. If the incoming error cluster is good, the incoming state transition (in this case, Read Input) is passed through unmodified. However, if an error is present, the state machine will instead branch to the Error state where the problem can be addressed as needed.
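
As a text-language analogy (reusing the State enum from the sketch above), that little branching subVI boils down to something like this; the function name and the error representation are just assumptions for illustration:

    def next_state_or_error(requested_state, error):
        """Pass the requested transition through unless an error has occurred."""
        return State.ERROR if error else requested_state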

Read Input

This state is responsible for reading the current temperature (referred to as the Process Variable) and updating the state machine data cluster.

Read Input State

In addition to updating the last value, this state sets a couple of other values, both of which relate to how the acquisition delay is implemented in the state machine. The first is the Timeout value; since we want no delay between states, we set this to zero. The other is the Last Sample Time, a timestamp indicating when the reading was made. You’ll see in a bit how these values are used.
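
In the same Python stand-in terms as before, the bookkeeping this state performs might look roughly like this (read_temperature is a hypothetical acquisition call, and the dictionary keys are illustrative):

    import time

    def read_input(data):
        data["last_value"] = read_temperature()   # hypothetical acquisition call
        data["last_sample_time"] = time.time()    # the Last Sample Time timestamp
        data["timeout_ms"] = 0                    # no delay before the next state
        data["state"] = State.TEST_FAN_LIMITS     # on to the limit testing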

This state also updates two front panel indicators, the graph and a troubleshooting value.

Test Fan Limits

This state incorporates a subVI that analyzes the data contained in the state machine data to determine whether or not the fan needs to change state.

Test Fan Limits State

The selector in this state creates the potential for a three-way branch. If a threshold has been crossed, the next state will be Set Fan; if it has not, the next state will be Test Cooler Limits; and if an error has occurred, the next state will be Error.

Set Fan

Since the fan can only be on or off, all this state needs to do is reverse its current operating condition.

Set Fan State

In addition to the subVI toggling the fan on or off, the resulting Fan State is unbundled from the state machine data and the value is used to control the fan LED.

Test Cooler Limits

This state determines whether the cooler needs to change state. Like the fan, the cooler can only be on or off.

Test Cooler Limits State

The logic here is very similar to that used to test the fan limits.

Set Cooler

Again, not unlike the corresponding fan control state, this state changes the cooler state and sets the cooler LED as needed.

Set Cooler State

Acquisition TO Wait

This state handles the part of the state machine that is often the Achilles’ heel of an implementation. How do we delay starting the control sequence again without incurring the inefficiency of polling?

Acquisition TO Wait State

The answer is to take advantage of the timeout that event structures already have. The heart of that capability is the timeout calculation VI, shown here:

Calculate Time Delay

Using inputs from the state machine data, the VI adds the Sample Interval to the Last Sample Time. The result is the time when the next sample will need to be taken. From that value, the code subtracts the current time and converts the difference into milliseconds. This value is stored back into the state machine data as the new timeout. This technique is very efficient because it requires no polling.
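
In text-language terms the calculation reduces to a few lines. Here is a hedged Python sketch that assumes the state machine data carries the last sample time as a seconds-based timestamp and the sample interval in seconds:

    import time

    def calculate_timeout_ms(last_sample_time, sample_interval_s):
        """Milliseconds until the next scheduled sample; never less than zero."""
        next_sample_time = last_sample_time + sample_interval_s
        timeout_ms = int(round((next_sample_time - time.time()) * 1000.0))
        return max(timeout_ms, 0)   # already past due? run the next state immediately

Because the next sample time is anchored to the last measurement rather than to “now”, any time spent doing other work is automatically subtracted from the wait.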

But wait, you say. This won’t work if some other event happens before the timeout expires! True. But that is very easy to handle. As an example, here is the modified event handler for the save data button.

Handling Interrupting Events

It looks just as it did before, but with one tiny addition. After reading, formatting and saving the data, the event handler calls the timeout calculation VI again to calculate a new timeout to the intended next sample time.
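
Continuing the sketch above, an interrupting event handler does its own work and then refreshes the timeout the same way (save_data_to_file is a hypothetical stand-in for the handler’s real work):

    def on_save_data_button(data):
        save_data_to_file(data)                   # hypothetical: read, format and save the data
        data["timeout_ms"] = calculate_timeout_ms(data["last_sample_time"],
                                                  data["sample_interval"])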

Error

This state handles errors that occur in the state machine. That being the case, it is the new home for our error VI.

Error State

Deinitialize

Finally, this state provides the way to stop the state machine and the VI running it. To reach it, the event handler for the UDE that shuts down the application branches to this state. Because it is the last thing to execute before the VI terminates, you need to be sure that it includes everything you need to bring the system to a safe condition.

Deinitialize State

With those descriptions done, let’s look at how the code works.

The Code Running

When you look at the new Temperature Controller screen, you’ll notice that in addition to the graph and the indicators showing the states of the fan and cooler, there are a couple of numbers below the LEDs. The top one is the amount of time elapsed between the last two samples; the bottom one is the delay calculated for the acquisition timeout.

If you watch the program carefully as it’s running, you’ll notice something a bit odd. The elapsed time indicator shows a constant 10 seconds between updates (plus or minus a couple of milliseconds — which is about all you can hope for on a PC). However, the indicator showing the actual delay being applied is never anywhere near 10,000 milliseconds. Moreover, if you switch to one of the other screens and then back, the indicated delay can be considerably less than 10,000 milliseconds, but the elapsed time never budges from 10 seconds. So what gives?

What you are seeing in action is the delay recalculation we talked about earlier. In order to better simulate a real-world system, I put a delay in the read function that pauses between 200 and 250 msec. Consequently, when execution reaches the timeout calculation, we are already about a quarter of a second into the 10-second delay. The calculation, however, automatically compensates for this delay because the timeout is always referenced to the time of the last measurement. The same thing happens if another event comes in between successive data acquisitions.

On Deck

As always, if you have any questions, concerns, gripes or even (gasp!) compliments, post ’em. If not, feel free to use any of this logic as you see fit — and above all, play with the code and see how you might modify it to do similar sorts of things.

Stay tuned. Next week we will take a deeper look at something we have used before, but not really discussed in detail: dynamically calling VIs. I know there are people out there wondering about this, so it should be fun.

Until next time…

Mike…

Building a Proper LabVIEW State Machine Design Pattern – Pt 1

The other day I was looking at the statistics for this site and I noticed that one of the most popular posts with readers was the one I wrote on the producer/consumer design pattern. Given that level of interest, it seemed appropriate to write a bit about another very common and very popular design pattern: the state machine. There’s a good reason for the state machine’s popularity. It is an excellent, often ideal, way to deal with logic that is repetitive or branches through many different paths. Although it certainly isn’t the right design pattern for every application, it is a really good technique for translating a stateful process into LabVIEW code.

However, some of the functionality that state machines offer also means they can present development challenges. For example, they are far more demanding in terms of the design process, and consequently far less forgiving of errors in that process. As we have seen before with other topics, even the most basic discussion of how to properly design and use state machines is too big for a single post. Therefore, I will present one post to discuss the concepts and principles involved, and in a second post present a typical implementation.

State Machine Worst Practices

For some reason it seems like there has been a lot of discussion lately about state machine “best practices”. Unfortunately, some of the recommendations are simply not sound from an engineering standpoint. Meanwhile, others fail to take advantage of all that LabVIEW offers because they attempt to mimic the implementation of state machines in primitive (i.e. text-based) languages. Therefore, rather than spinning out yet another “best practices” article, I think it might be interesting to spend a bit of time discussing things to never do.

In describing bad habits to avoid, I think it’s often a good idea to start at the most general level and work our way down to the details. So let’s start with the most important mistake you can make.

1. Use the state machine as the underlying structure of your entire application

State machines are best suited for situations where you need to create a complex, cohesive, and tightly-coupled process. While an application might have processes within it that need to be tightly-coupled, you want the application as a whole to exhibit very low levels of coupling. In fact, much of the newest computer science research deprecates the usage of state machines by asserting that they are inherently brittle and non-maintainable.

While I won’t go that far, I do recognize that state machines are typically best suited for lower-level processes that rarely, if ever, appear to the user. For example, communications protocols are often described in terms of state machines and are excellent places to apply them. One big exception to this “no user-interface” rule is the “wizard” dialog box. Wizards will often be built around state machines precisely because they have complex interface functionality requirements.

2. Don’t start with a State Diagram

Ok, so you have a process that you feel will benefit from a state machine implementation. You can’t just jump in and start slinging code right and left. Before you start developing a state machine you need to create a State Diagram (also sometimes called a State Transition Diagram) to serve as a road map of sorts during the development process. If you don’t take the time for this vital step, you are pretty much in the position of a builder who starts work on a large building with no blueprint. To be fair, design patterns exist that are less dependent upon having a complete, thorough design. However, those patterns tend to be very linear in structure, and so are easy to visualize in good dataflow code. By contrast, state machines are very non-linear in their structure and so can be very difficult to develop and maintain. To keep straight what you are trying to accomplish, state machines need to be laid out carefully and very clearly. The unfortunate truth, however, is that state machines are often used for the exact opposite reason. There is a common myth that state machines require a minimum of design because if you get into trouble, you can always just “add another state”. In fact, I believe that much of the bad advice you will get on state machines finds its basis in this myth.

But even if we buy the idea that state machines require a more thorough design, why insist on State Diagrams? One of the things that design patterns do is foster their own particular way of visualizing solutions to programming problems. For example, I have been very candid about how a producer/consumer design pattern lends itself to thinking about applications as a collection of interacting processes. In the same way, state machines foster a viewpoint where the process being developed is seen as a virtual machine constructed from discrete states and the transitions between those states. Given that approach to problem solving, the state diagram is an ideal design tool because it allows you to visually represent the structure that the states and transitions create.

So what does it take to do a good state-machine design? First you need to understand the process — a huge topic on its own. There are many good books available on the subject, as well as several dedicated web sites. Second, having a suitable drawing program to create State Diagrams can be helpful; one that I have used for some time is a free program called yEd. However, fancy graphics aren’t absolutely necessary. You can create perfectly acceptable State Diagrams with nothing more than paper, a pencil and a reasonably functional brain. I have even drawn them on a whiteboard during a meeting with a client and saved them by taking a picture with my cell phone.

Moreover, drawing programs aren’t much help if you don’t know what to draw. The most important knowledge you can have is a firm understanding of what a state machine is. This is how Wikipedia defines a state machine:

A finite-state machine (FSM) or finite-state automaton (plural: automata), or simply a state machine, is a mathematical model of computation used to design both computer programs and sequential logic circuits. It is conceived as an abstract machine that can be in one of a finite number of states. The machine is in only one state at a time; the state it is in at any given time is called the current state. It can change from one state to another when initiated by a triggering event or condition; this is called a transition.

An important point to highlight in this description is that a state machine is, at its core, a mathematical model — which itself implies a certain level of rigor in its design and implementation. The other point is that the model consists of a “finite number of states” that the machine moves between on the basis of well-defined events or conditions.

3. Ignore what a “state” is

Other common problems can arise when there is confusion over what constitutes a state. So let’s go back to Wikipedia for one more definition:

A state is a description of the status of a system that is waiting to execute a transition.

A state is, in short, a place where the code does something and then pauses while it waits for something else to happen. Now this “something” can be anything from the expiration of a timer to a response from a piece of equipment indicating that a command has completed successfully (or not). Too often people get this definition confused with that of a subVI. In fact, one very common error is for a developer to treat states as though they were subroutines that they can simply “call” as needed.

4. Use strings for the state variable

The basic structure behind any state machine is a loop that executes one state each time it iterates. To define execution order, there is a State Variable that identifies the next state the machine will execute in response to an event or condition change. Although I once saw an interesting object-oriented implementation that used class objects to both identify the next state (via dynamic dispatch) and pass the state machine operational data (in the class data), in most cases there is a much simpler choice of datatype for the State Variable: a string or an enumeration.

Supposedly there is an ongoing discussion over which of these two datatypes makes the better state variable. The truth is that while I have heard many reasons for using strings as state variables, I have yet to hear a good one. I see two huge problems with string state variables. First, in the name of “flexibility” they foster the no-design ethic we discussed earlier. Think about it this way: if you know so little about the process you are implementing that you can’t even list the states involved, what in the world are you doing starting development? The second problem with state strings is that using them requires the developer to remember the names of all the states, and how to spell them, and how to capitalize them — or in a code maintenance situation, remember how somebody else spelled and capitalized them. Besides trying to remember that the guy two cubicles down can never seem to remember that “flexible” is spelled with an “i” and not an “a”, don’t forget that there is a large chunk of the planet that thinks words like “behavior” have a “u” in them…

By the way, not only should the state variable be an enumeration, it should be a typedef enumeration.
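
For what it’s worth, the same advice translates directly to text languages: define the state type once, in one shared place, and let every reference track it. A minimal Python sketch (the state names are just examples):

    from enum import Enum, auto

    class ControllerState(Enum):              # the single, shared "typedef"
        READ_INPUT = auto()
        TEST_FAN_LIMITS = auto()
        SET_FAN = auto()
        ERROR = auto()

    next_state = ControllerState.READ_INPUT   # nothing to misspell or miscapitalize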

5. Turn checking the UI for events into a state

In the beginning, there were no events in LabVIEW, and so state machines had to be built using what was available — a while loop, a shift register to carry the state variable, and a case structure to implement the various states. When events made their debut in Version 6 of LabVIEW, people began to consider how to integrate the two disparate approaches. Unfortunately, the idea that came to the forefront was to create a new state (typically called something like Check UI) that would hold the event structure that handles all the events.

The correct approach is to basically turn that approach inside out and build the state machine inside the event structure — inside the timeout event, to be precise. This technique has a number of advantages. To begin with, it allows you to leverage the event structure as a mechanism for controlling the state machine. Secondly, it provides a very efficient mechanism for building state machines that require user interaction to operate.

Say you have a state machine that is basically a wizard that assists the user in setting up some part of your application. To create this interactivity, states in the timeout event put a prompt on the front panel and set the timeout for the next iteration to -1. When the user makes the required selection or enters the needed data, they click a “Next” button. The value change event handler for the button knows what state the state machine was last in, and so can send the state machine on to its next state by setting the timeout back to 0. Very neat and, thanks to event-driven programming, very efficient.
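
Here is a hedged Python sketch of that interaction, using a blocking queue in place of the event structure: a timeout of None stands in for LabVIEW’s -1 (“wait forever”), and the Next-button handler releases the state machine by setting the timeout back to 0. The prompt and page-sequencing helpers are hypothetical.

    import queue

    events = queue.Queue()                       # stands in for the event structure

    def run_wizard():
        data = {"page": "first_page", "timeout_ms": 0}
        while data["page"] != "done":
            timeout = None if data["timeout_ms"] is None else data["timeout_ms"] / 1000.0
            try:
                event = events.get(timeout=timeout)
            except queue.Empty:
                show_prompt_for(data["page"])    # hypothetical: display this page's prompt
                data["timeout_ms"] = None        # LabVIEW's -1: wait until the user acts
                continue
            if event == "next":                  # the Next button's value change event
                data["page"] = next_page_after(data["page"])   # hypothetical sequencing
                data["timeout_ms"] = 0           # run the timeout case again immediately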

On the other hand, if you are looking for a way to let your program lock up and irritate your users, putting an event structure inside a state is a dandy technique. The problem is that all it takes to stop your application in its tracks is one series of state transitions where the “Check UI” state doesn’t get called often enough, or at all. If someone clicks a button or changes something on the UI while those states are executing, LabVIEW will dutifully lock the front panel until the associated event is serviced — which of course can’t happen because the code can’t get to the event structure that would service it. Depending on how bad the overall code design is and the exact circumstances that caused the problem, this sort of lock-up can last several seconds, or be permanent, requiring a restart.

6. Allow default state transitions

A default state transition is when State A always immediately transitions to State B. This sort of design practice is the logical equivalent of a sequence structure, and suffers from all the same problems. If you have two or more “states” with default transitions between them, you in reality have a single state that has been arbitrarily split into multiple pieces — pieces that hide the true structure of what the code is doing, complicate code maintenance and increase the potential for error. Plus, what happens if an error occurs, there’s a shutdown request, or anything else to which the code needs to respond? As with an actual sequence structure, you’re stuck going through the entire process.

7. Use a queue to communicate state transitions

Question: If default transitions are bad, why would anyone want to queue up several of them in a row?
Answer: They are too lazy to figure out exactly what they want to do, so they create a bunch of pieces that they can assemble at runtime — and then call this kind of mess “flexibility”. And even if you do come up with some sort of edge case where you might want to enqueue states, there are better ways of accomplishing it than having a queue drive the entire state machine.

Implementation Preview

So this is about all we have room for in this post. Next Monday I’ll demonstrate what I have been writing about by replacing the random number acquisition process in our testbed application with an updated bit of LabVIEW memorabilia. Many years ago the very first LabVIEW demo I saw was a simple “process control” demo. Basically it had a chart with a line on it that trended upwards until it reached a limit. At that point, an onscreen (black and white!) LED would come on indicating a virtual fan had been turned on and the line would start trending back down. When it hit a lower limit, the LED and the virtual fan would go off and the line would start trending back up again. With that early demonstration in mind, I came up with this State Diagram:

Demo State Machine

When we next get together, we’ll look at how I turn this diagram into a state-machine version of the original demo — but with color LEDs!

Until next time…

Mike…

Making UDEs Easy

For the very first post on this blog, I presented a modest proposal for an alternative implementation of the LabVIEW version of the producer/consumer design pattern. I also said that we would be back to talk about it a bit more — and here we are!

The point of the original post was to present the modified design pattern in a form similar to that used for the version of the pattern that ships with LabVIEW. The problem is that while it demonstrates the interaction between the two loops, the structure cannot be expanded very far before you start running into obstacles. For example, it’s not uncommon to have the producer and consumer loops in separate VIs. Likewise, as with a person I recently dealt with on the forums, you might want to have more than one producer loop passing data to the same consumer. In either case, explicitly passing around a reference complicates matters because you have to come up with ways for all the separate processes to get the references they need.

The way around this conundrum lies in the concept of information hiding and the related process of encapsulation.

Moving On

The idea behind information hiding is that you want to hide from a function any information that it doesn’t need to do its job. Hiding information in this sense makes code more robust because what a routine doesn’t know about, it can’t break. Encapsulation is an excellent way of implementing information hiding.

In the case of our design pattern, the information that is complicating things is the detailed logic of how the user event is implemented, and the resulting event reference. What we need is a way to hide the event logic, while retaining the functionality. We can accomplish this goal by encapsulating the data passing logic in a set of VIs that hide the messy details about how they do their job.

Standardizing User-Defined Events

The remainder of this post will present a technique that I have used for several years to implement UDEs. The code is not particularly difficult to build, but if you are a registered subscriber, the code can be downloaded from the site’s Subversion SCC server.

The first thing we need to do is think a bit and come up with a list of things that a calling program would legitimately need to do with an event — and it’s actually a pretty short list.

  1. Register to Receive the Event
  2. Generate the Event
  3. Destroy the Event When the Process Using it Stops

This list tells us what VIs the calling VI will need. However, there are a couple more objects that those VIs will need internally. One is a VI that will generate and store the event reference; the other is a type definition defining the event data.

Finally, if we are going to be creating 4 VIs and a typedef for each event in a project, we are going to need some way of keeping things organized. So let’s define a few conventions for ourselves.

Convention Number 1

To make it easy to identify what function a given event VI performs, let’s standardize the names. Thus, any VI that creates an event registration will be called Register for Event.vi. The other two event interface VIs will, likewise, have simple, descriptive names: Generate Event.vi and Destroy Event.vi. Finally, the VI that gets the event reference for the interface VIs shall be called Get Event Reference.vi, and the typedef that defines the event data will be Event Data.ctl.

But doesn’t LabVIEW require unique VI names? Yes, you are quite right. LabVIEW does indeed require unique VI names. So you can’t have a dozen VIs all named Generate Event.vi. Thus we define:

Convention Number 2

All 5 files associated with an event shall belong to a LabVIEW library that is named the same as the event. This action solves the VI name problem because LabVIEW creates a fully-qualified VI name by concatenating the library name and the VI file name. For example, the VI that generates the Pass Data event would have the name:
Pass Data.lvlib:Generate Event.vi
while the VI that generates the Stop Application event would be:
Stop Application.lvlib:Generate Event.vi

The result also reads pretty nicely. However, it still doesn’t help the OS, which will not allow two files with the same name to coexist in the same directory. So we need:

Convention Number 3

The event library file, as well as the 5 files associated with the event, will reside in a directory with the same name as the event — but without the lvlib file extension. Hence Pass Data.lvlib, and the 5 files associated with it would reside in the Pass Data directory, while Stop Application.lvlib and its 5 files would be found in the directory Stop Application.
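
Put all three conventions together and the Pass Data event ends up on disk looking something like this (Stop Application gets an identical directory of its own):

    Pass Data\
        Pass Data.lvlib
        Get Event Reference.vi
        Register for Event.vi
        Generate Event.vi
        Destroy Event.vi
        Event Data.ctl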

So do you have to follow these conventions? No, of course not, but as conventions go, they make a lot of sense logically. So why not just use them and save your creative energies for other things…

The UDE Files

So now that we have places for our event VIs to be saved, and we don’t have to worry about what to name them, what do the VIs themselves look like? As I mentioned before, you can grab a working copy from our Subversion SCC server. The repository resides at:

http://svn.NotaTameLion.com/blogProject/ude_templates

To get started, you can simply click on the link and the Subversion web client will let you grab copies of the necessary files. You’ll notice that when you get to the SCC directory, it only has two files in it: UDE.zip and readme.pdf. The reason for the zip file is that I am going to be using the files inside the archive as templates and don’t want to get them accidentally linked to a project. The readme file explains how to use the templates, and you should go through that material on your own. What I want to cover here is how the templates work.

Get Event Reference.vi

This VI’s purpose is to create, and store for reuse, a reference to the event we are creating. Given that description, you shouldn’t be too surprised to see that it is built around the structure of a functional global variable, or FGV. However, instead of using an input from the caller to determine whether it needs to create a reference, it tests the reference in the shift register and, if it is invalid, creates one. If the reference is valid, it passes out the one already in the shift register.
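
As a rough text-language analogue of that behaviour, think of a lazily created, cached reference. In the Python sketch below a queue merely stands in for the user event reference; the create-and-cache logic is the point, not the object being stored.

    import queue

    _event_reference = None

    def get_event_reference():
        global _event_reference
        if _event_reference is None:             # the "invalid reference?" test
            _event_reference = queue.Queue()     # stand-in for creating the user event
        return _event_reference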

If you consider the constant that defines the event datatype, you will observe two things. First, you’ll see that it is a scalar variant. For events that essentially operate like triggers and so don’t have any real data to pass, this configuration works fine. Second, there is a little black rectangle in the corner of the constant indicating that it is a typedef (Event Data.ctl). This designation is important because it significantly simplifies code modification.

If the constant were not a typedef, the datatype of the event reference would be a scalar variant and any change to it would mean that the output indicator would have to be recreated. However, with the constant as a typedef, the datatype of the event is the type definition. Consequently you can modify the typedef any way you want and every place the event reference is used will automatically track the change.

Register for Event.vi

This VI is used wherever a VI needs to be able to respond to the associated event. Due to the way events operate, multiple VIs can, and often will, register to receive the same event. As you look at the template block diagram, however, you’ll notice that something is missing: the registration output. The reason for this omission lies in how LabVIEW names events.

When LabVIEW creates an event reference, it naturally needs to generate a name for the event. This name is used in event structures to identify the particular event handler that will be responding to an event. To obtain the name that it will use, LabVIEW looks for a label associated with the event reference wire. In this case, the event reference is coming from a subVI, so LabVIEW uses the label of the subVI indicator as the event name. Unfortunately, if the name of this indicator changes after the registration reference indicator is created, the name change does not get propagated. Consequently, this indicator can only be created after you have renamed the output of the Get Event Reference.vi subVI to the name that you wish the event to have.

The event naming process doesn’t stop with the event reference name. The label given to the registration reference can also become important. If you bundle multiple references together before connecting them to an event structure’s input dynamic event terminal, the registration indicator is added to the front of the event name. This fact has two implications:

  1. You should keep the labels short
  2. You can use these labels to further identify the event

You could, for example, identify events that are just used as triggers with the label trig. Alternatively, you could use this prefix to identify the subsystem that is generating the event, like daq or gui.

Generate Event.vi

The logic of this template is pretty straightforward. The only noteworthy thing about it is that the event data (right now a simple variant) is a control on the front panel. I coded it this way to save a couple of steps if I need to modify it for an event that passes data. Changing the typedef will modify the front panel control, so all I have to do is attach it to a terminal on the connector pane.

Destroy Event.vi

Again, this is very simple code. Note that this VI only destroys the event reference. If it has been registered somewhere, that registration will need to be destroyed separately.
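
Continuing the Python stand-in from the Get Event Reference sketch, the three interface VIs reduce to thin wrappers around that cached reference; this is only an analogy for their roles, not a transcription of their diagrams.

    def register_for_event():
        return get_event_reference()              # in LabVIEW this returns an event registration

    def generate_event(event_data=None):
        get_event_reference().put(event_data)     # fire the event, carrying the (variant) data

    def destroy_event():
        global _event_reference
        _event_reference = None                   # drops the reference only, not any registrations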

Putting it all Together

So how would all this fit into our design pattern? The instructions in the readme file give the step-by-step procedure, but here is the result.

image

As explained in the instructions, I intend to use this example as a testbed of sorts to demonstrate some things, so I also modified the event data to be a numeric, and changed the display into a chart. You’ll also notice that the wire carrying the event reference is no longer needed. With the two loops thus disconnected from each other logically, it would be child’s play to restructure this logic to have multiple producer loops, or to have the producer loop(s) and the consumer loop(s) in separate VIs.

By the way, there’s no reason you can’t have multiple consumer loops too. You might find a situation where, for example, the data is acquired once a second, but the consumer loop takes a second and a half to do the necessary processing. The solution? Multiple consumer loops.

However, there is still one teeny-weensy problem. If you now have an application that consists of several parallel processes running in memory, how do you get them all launched in the proper order?

Until next time…

Mike…