Managing Distributed Data in Visual FoxPro

Josip Zohil, Koper, Slovenia, March 2013

 

When you distribute resources and work, you have to synchronize them; Visual FoxPro (VFP) has a built-in mechanism you can use for distribution management. Asynchronously running processes can help you manipulate data efficiently. In this article we present some problems in asynchronous data management.

1.   Introduction

 

A distributed computer system consists of multiple software components that reside on multiple computers connected by a local network but run as a single system. VFP mostly runs on a distributed system as a client-server system. The functions are separated: a client is a program that uses services provided by another program, called the server. The client mediates between the user and the server and manages data; the server manages files. There is also a mechanism that synchronizes the clients' and the server's data. "Visual FoxPro buffers portions of tables in memory on your workstation. SET REFRESH can specify how often data that is buffered locally on your workstation is refreshed" (from the VFP Help).

 

The spread of high-speed broadband networks and the continual increase in computing power have changed the way programmers create and manage software: they solve a large problem by giving (in parallel) small parts of the problem to many computers (network resources) and then combining the partial solutions into a solution for the whole problem. VFP can run in a distributed and parallel way with multiple copies of the same data. You can manipulate data in parallel, with multiple data copies, and solve many of the "up to date" problems of distributed programming. Few data manipulation systems offer such diverse tools. For programmers, this is added complexity and an added set of hard-to-debug programming errors. In this article we present some examples of this parallel data manipulation and emphasize the difference between the sequential and the parallel method of solving problems in this field.

2.   VFP distribution modes and examples

In this article I shall show you how to extend a VFP application by adding other VFP processes to it, running them in parallel and asynchronously, and some problems you have to resolve. I shall focus on extending parallel data manipulation in VFP.

In a VFP client application you manipulate data that has a copy on the server (disk and/or cache) and in the client's cache. The client data are:

1) Synchronized with the nodes (server and clients),

2) Not synchronized with the nodes – buffer mode without refresh,

3) Synchronized with the nodes at time intervals. (See, for example, 1001 Things You Wanted to Know About Visual FoxPro.)

Note: To synchronize means to keep multiple copies of data in sync across multiple caches (distributed resources). With a simple VFP command, for example "use employee", you get data into the client cache, and moreover this data is automatically synchronized with the other copies of this data on other clients. A VFP "service" replicates and synchronizes data across nodes. The SET REFRESH command "determines whether to and how frequently to update ... or to refresh local memory buffers with changes from other users on the network" (from the VFP Help).

The last mode is a mix of the first (the refresh interval is very small) and the second (the interval is large). In all three modes we use similar VFP commands, but we are solving different data distribution problems.

 

*Program r1w

application.Visible=.t.

CLOSE DATABASES all

SET EXCLUSIVE off

Set refresh to 1,-1

Use employee    && at least one record and a field amount

BROWSE TITLE "R1: without refresh, amount 10"

Replace amount with amount + 10

 

*Program r2w

application.Visible=.t.

Set excl off

Close database all

Set refresh to 1,-1

Use employee

BROWSE TITLE "R2: without refresh, amount 20"

Replace amount with amount + 20

 

Distributed data, multiple copies of stale data and refresh interval

*Program r1

application.Visible=.t.

CLOSE DATABASES all

SET EXCLUSIVE off

Set refresh to 6,6   && or 18,18 

Use employee

CURSORSETPROP("Buffering" ,3)  && optimistic row buffering

BROWSE TITLE "R1: refresh 6, amount 10"

Replace amount with amount + 10

TABLEUPDATE(.t.)

 

*Program r2

application.Visible=.t.

Close database all

Set refresh to 6,6

SET MULTILOCKS on

Use employee

CURSORSETPROP("Buffering" ,3)

Lamount=amount

BROWSE TITLE "R2: refresh 6, amount 20"

If amount=lamount

Replace amount with amount + 20

Else

Messagebox("Cannot update. Value has changed.")

endif

TABLEUPDATE(.t.)

 

We shall study four relatively similar programs: two in buffer mode (r1, r2) and two without VFP buffering (r1w, r2w). The difference within each pair is the amount they add. We shall compile them, create their EXEs and run the first pair, r1w and r2w, sequentially and in parallel. The same goes for the pair r1 and r2.

The four programs r1, r2, r1w and r2w are similar: they update the field amount in the table employee, which has at least one record and a field amount. We expect the behavior (results) of a program executed serially or in parallel to be only a function of its inputs (independent of the method used). The operations of each individual process have to appear in the order specified by the programmer. The critical parts of the programs executed in parallel are those that mutate shared state, in our examples a record in a VFP table.

3.   Distributed, synchronized and locked data

3.1.        Serial execution

 

We create two projects, r1w and r2w, each with a single program (r1w and r2w respectively), and compile them.

If you run the pair (r1w.exe, r2w.exe) from Windows Explorer (WE) serially (one after the other: when the first ends, start the second) and as a single user, the programs execute in the expected manner with the expected results: to the start amount of 5 they add 10 and 20, and the result is 35. In this case (serial execution, single user) there is no shared state. The pairs (r1, r2) and (r1w, r2w) generate different results when you run each pair in parallel.

3.2.        Parallel execution

 

3.2.1.   Example with concurrent programs, locked and synchronized

From the local computer we run the applications r1w.exe and r2w.exe using WE. Independent of the order in which we run them, they generate the expected result, the amount 35. The time difference between their starts does not matter either. The two programs execute partly in parallel and partly serially. Behind the scenes the VFP locking mechanism serializes the two programs using optimistic locking (first come, first write). The coordination between the two processes is performed by an automatic VFP lock.

The two programs running in parallel may conflict on the statement "Replace amount with amount ...": both try to get access to the employee record and write the new amount into it. The possible conflict interval is very small (the duration of the first write). If one client attempts to write first, the VFP engine automatically locks the record and blocks (for a moment) the second (parallel) process competing for the write. The first process writes the amount and unlocks the record, and then the second process can start writing. After that, the two programs continue their flow.
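To make this automatic behaviour visible, here is a minimal sketch of the same write with the lock made explicit; SET REPROCESS, RLOCK() and UNLOCK are standard VFP commands, and the five-second retry limit is only an illustrative choice:

* Sketch: the same write with the lock made explicit (VFP does this for you on Replace).
CLOSE DATABASES all
SET EXCLUSIVE off
SET REPROCESS TO 5 SECONDS        && how long to retry a lock before giving up
USE employee
IF RLOCK()                        && lock the current record, as the engine does before writing
   Replace amount with amount + 10
   UNLOCK                         && release the record so the other process can write
ELSE
   MESSAGEBOX("Could not lock the record.")
ENDIF
USE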

 

3.2.2.   Locking and random execution

We can formalize the locking conflict: denote with t1 the time the first program attempts to write and with t2 the second. t1d and t2d denote the durations of their writes. The two processes are in conflict in the time intervals t1 <= t2 <= t1+t1d or t2 <= t1 <= t2+t2d. These intervals are very small; their upper bound is max(t1d, t2d) (approximately the duration of one write).

In this section we remove the BROWSE command from the two programs, so they run without blocking on the browse window. When you test the programs, you have to answer the question: when should you start r1w and r2w to reproduce a concurrent write, that is, so that the two programs attempt to write at the same time? Let t10 and t20 be their start times. The intervals t1-t10 and t2-t20 are random variables. The operating system does not guarantee that these intervals are always the same. You have only statistical knowledge of them; for example, the programs will update in approximately one second.

We are playing with very small time intervals (time differences). On a single computer it is relatively difficult to get an observable time difference when you repeatedly execute the same program. In a network environment you can run the programs concurrently on slow/fast computers and slow/fast networks, and the differences in execution are observable and can produce observable differences (errors) in the results. In the distributed world you normally have to program against these time differences and assume they affect the results (see the next examples). In sequential programming you can ignore this cause.

 

3.2.3.   Synchronized data copies

We can manually distribute work. We run the programs r1w and r2w on two clients. They generate at least three copies of the data: the data on the server (on the disk and in the cache) and one copy for each client. Behind the scenes, VFP synchronizes the data we see on the clients with each other and with the server's copy. When a client modifies a value, the value is updated on the server and on the clients. To the programmer it appears that he or she is working with one copy and programming against that single data table.

When we run a client process, we get a local, in-memory copy of the data. We program against this local copy: we replace the amount, and VFP automatically (behind the scenes) updates the server and refreshes the clients' copies. All this work can be done with only two VFP commands: use employee and replace amount with amount + 10.

But attention: a message sent from one process to another arrives in finite but unbounded time, so we can only guess when data modified on one client becomes visible to the other clients. A table in a VFP database application with multiple clients (EXEs) can have a copy in every client's memory. In such a case you get data distribution for free. VFP synchronizes these copies; you can look at it as a process that coordinates the distributed data in the background.

This distribution is very efficient for smaller tables and less efficient for larger ones, as we have to transport large amounts of data (and indexes) from the server to the clients. Another solution would be to distribute only the smaller tables (indexes, keys) to the clients (nodes) and to distribute the larger tables differently.

Note: Nodes (clients, workstations, peers) of a distributed system can also take on server functions. For example, on node1 (network address \\node1\c\my) you run the query "Select * from employee where key=1 into table c:\my\test". From the application on the computer node2 you can use the data in the table test: use \\node1\c\my\test. In this case the computer node1 is your server.
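As a minimal sketch of this note (the share \\node1\c\my and the field key are taken from the example above):

* On node1: produce a small result table in a shared folder.
SELECT * FROM employee WHERE key = 1 INTO TABLE c:\my\test

* On node2: consume it over the network share; node1 acts as the server here.
USE \\node1\c\my\test SHARED
BROWSE
USE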

The two programs r1w and r2w write correctly and produce the expected results using VFP automatic locking. This automatism helps us, as we can write parallel programs with the same methods as sequential ones. But if we don't respect the distribution rules, we can create erroneous and hard-to-debug programs.

 

3.2.4.   Serial, parallel methods and debugging

In VFP you can't always program using serial techniques; a programmer also has to manage parallelism. We change the program r1w into a version (r1we) that is correct serially but erroneous in parallel:

 

*Program r1we

application.Visible=.t.

SET MULTILOCKS on

CLOSE DATABASES all

SET EXCLUSIVE off

Set refresh to 1,-1

Use employee && the time window from here till the last statement is critical

Lcam=amount   && changed

BROWSE TITLE "R1w: simulating a long running computation, amount 20"

Replace amount with lcam + 10   && changed

 

If you run the programs (r1we and r2w) serially in the command window, you get the correct result: the start amount of 5 is replaced with the final amount 35. Is this also true for parallel execution?

Compile the project r1we and run r1we.exe on a remote client. On the local computer (server) run the program r2w and exit from the browse window. Then exit from the browse window on the client computer as well (respect this order). The result (15) of this parallel experiment is not equal to the serial one!

In the new version of the program, r1we, we have a critical time window between the statements lcam=amount and replace. The table employee is not locked while the program "blocks" at the browse command, so another parallel program (r2w) can reach the record and possibly change its amount. In the critical interval the program r2w changes the amount to 25; after that the program r1we updates the amount to 15 (lost update). We have lost a write.

We can reduce the chance that the error happens (for example, by removing the BROWSE from the two programs), but we don't resolve the problem. By reducing the length of the interval in which the error can happen, we make the problem more insidious and harder to debug. In the case of program r1we (without the browse) the length of this interval is very small (one read), so the chance of a lost update is also very small.

 

3.2.5.   Multiple solutions

The reason this new version, r1we, is erroneous is the non-atomic update: we made the "update" in two steps with a time "window" (interval) between them. A better (but not the best) approach would be to lock the employee record after the statement "use employee" and to unlock it after the update. In this solution the record remains locked for a relatively long time, and the overall application can become unresponsive because of too "expensive" locks. The first version of the program, r1w (without lcam=amount), is technically a correct solution.

The best solution would be to avoid the replace command and use insert. Replace is a two-step task: read and write. If there is an error in the read, there is also an error in the result. Insert is basically only a write operation, not directly affected by a read, so it is better. Replace consumes less space, as insert needs a new record (space consuming), but in parallel systems you can resolve this by adding resources (space is not as critical as is assumed in the RDBMS world).
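A minimal sketch of this append-only idea, assuming a hypothetical log table amounts with the fields key and amount: instead of overwriting one shared record, each process only adds its own change, and the current value is computed on demand.

CLOSE DATABASES all
SET EXCLUSIVE off
INSERT INTO amounts (key, amount) VALUES (1, 10)   && each process appends its own change only
* Compute the current value when it is needed:
SELECT SUM(amount) AS total FROM amounts WHERE key = 1 INTO CURSOR curTotal
? curTotal.total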

In the parallel world we have to minimize writes, especially replaces and deletes. The conflict we have mentioned derives from two concurrent replaces; if we avoid replaces in our programs, there are fewer conflicts. In RDBMS theory you normalize data, and instead of one table you have two or more normalized tables (and potentially two or more replaces). Two replaces mean two blocks, and more: with multiple table copies the VFP engine has to synchronize the changed data on multiple clients (synchronization means writing, so we get extra blocking). If you are only adding records to the table (and not replacing or deleting them), you localize the changes (at the end of the table). You can partition the table into two parts (data partitioning): the immutable part, which you can cache, and the mutable part. Immutable tables or parts are easy to cache and/or distribute (replicate). You don't need locks on immutable data; you can navigate it without locking. This is not easy to do in VFP. We shall study this problem in another article.

 

3.2.6.   VFP multi-process execution

VFP distributes the data to the clients (nodes), and there (on the distributed nodes) we process it: VFP brings the data close to the distributed executors (the clients' processors and RAM). We can also interpret the distribution this way: we run one task (program) on the current computer and manually dispatch the execution of the other program (EXE) to another one. We run the two programs in parallel: we distribute the execution to two computers. We can do this on one computer as well: on one, two, four or more processors we can run two, three or more VFP EXEs in parallel. We do not run all of them from the same application or VFP command window; in that case they would run as one serial process. We run all the programs from WE, our manual dispatcher. It does not give us much automation in managing the VFP processes, so we have to look for something better.

We don't directly manipulate threads, so we are not speaking of multithreading and/or multitasking. (See, for example, C. Hsia: More Multithread Capabilities: Interthread Synchronization, Error Checking.)

Normally, we distribute work to make better use of free resources (processors, RAM, disks ...).

 

3.2.7.   Database locking and file system messages

Each VFP EXE runs in its own memory space: you have no public or similar variables to pass values (or functions) between processes. Their communication possibilities are limited. The processes can communicate using data tables: for example, one process writes a "message" into a DBF (or other) file, and the other processes consume this message.

In our examples, coordination between the parallel VFP EXEs is highly automated: when concurrent writes to a shared resource occur, the VFP locking system resolves them. But some VFP queries generate other types of results (for example cursors, tables, SDF files, XML ...). In these cases you have to pass results from the parallel process that generates them (producer, service) to the ones that use them (consumers). Processes running in parallel also require coordination; we need a communication protocol between parallel VFP processes.
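A sketch of this producer/consumer pattern over a shared table; the table messages and its fields msgtype and payload are hypothetical names used only for illustration:

* Producer side: announce that a result is ready.
INSERT INTO messages (msgtype, payload) VALUES ("RESULT_READY", "c:\temp\result.dbf")

* Consumer side: poll the shared table until the message appears.
USE messages SHARED
LOCATE FOR msgtype = "RESULT_READY"
DO WHILE !FOUND()
   DOEVENTS                                    && yield while waiting
   LOCATE FOR msgtype = "RESULT_READY"
ENDDO
? "Result available in", ALLTRIM(messages.payload)
USE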

4.   Concurrency and buffer mode

In this section we repeat the experiments from the previous one, but with the pair of programs (r1, r2) running in VFP buffer mode. In this mode we study new distribution problems and new solutions.

After creating the two projects r1.pjx and r2.pjx, each with a single program (r1 and r2), we compile them into r1.exe and r2.exe. If you run the two programs from the VFP command window one after the other (serial execution), you get the same result: it does not depend on the order of their execution.

4.1.        Stale data and VFP buffer mode

Working in buffer mode also means manipulating stale data. In a single-user, serial application stale data are not problematic. In the parallel world, for example, one user manipulates his copy of a six-seconds-stale amount of 5 while at the same time the other user updates the amount on the server to 15. The amount 5 that the first user holds has become "very stale" and incorrect. In some way we have to manage this very stale data and write only consistent values into the table.

The VFP SET REFRESH command has three distinct states:

- Always refresh the buffered/cached data copy (set refresh to 1,-1). For example, when somebody changes a field value and leaves the record (unlocks it), your local cached value is refreshed immediately (-1), and the value in the browse window within 1 second.

- Never refresh the buffered data copy (set refresh to 0,0),

- The programmer defines a refresh interval. For example, set refresh to 6,6 refreshes the local copy every 6 seconds.
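Expressed as commands, the three states look like this (the values mirror the ones used in the programs above, and the comments paraphrase the descriptions in the list):

* The three SET REFRESH states used in this article.
SET REFRESH TO 1, -1   && always refresh: browse window within 1 second, local cache immediately
SET REFRESH TO 0, 0    && never refresh the buffered data copy
SET REFRESH TO 6, 6    && programmer-defined interval: refresh the local copy every 6 seconds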

 

The programmer has to write programs against three VFP modes:

1) Synchronized data without buffering (fresh data in cache),

2) Refreshed buffers, synchronized at a time interval (relatively stale data in the cache, refreshed with a certain frequency),

3) Buffering without refresh, not synchronized (stale data in the cache).

In all three modes a programmer uses similar VFP commands, except in the critical (conflicting) regions, where he or she has to use the proper programming methods.

4.2.        Random time intervals

When we run the applications r1.exe and r2.exe in parallel, the result also depends on the moment we start them. We have to resolve three problems here:

A) Global start. We are interested in the order in which the two processes start; sometimes we use a computer clock to record it. If we start the two EXEs from two computers, we have to decide what the global time of the two computers is. They execute very fast, and very small time differences can influence the result. The correctness of computer clocks depends on temperature, humidity, etc.; we cannot assume they are correct (equal). For a moment, suppose we can get a "global" time for the two computers and denote their start times by t10 and t20. t10<t20 means we start r1.exe before r2.exe.

B) Random execution time. As described in the preceding section, we cannot exactly predict the execution time of either of the two parallel programs.

C) The program entry interval (its initialization) is also a random variable. Let us explain this interval with an example. Suppose that you lock the table at the beginning of the program:

 

*it.prg

use employee

do while not Flock()   && busy-wait until the table lock is obtained

enddo

 

You start this program at time t10, and the process locks the table t1s seconds later, at time t10+t1s. The table employee is not "protected" in the interval from t10 to t10+t1s. Suppose you start this program first, because by design it has to make the first update. After that you run in parallel a similar EXE with a very small difference td in the start time, t20 = t10+td, and with a lock delay t2s approximately (statistically) equal to t1s. Because t1s and t2s are random variables, it can happen that t20+t2s < t10+t1s: the program you start last can update before the first one. Contrary to the design, it can use the table first. As you see, it is difficult to protect data manipulation using only the VFP locking engine; we need another tool.

In the concurrent world we cannot program against the programs' start times, but only against the times when we protect them (in our example, the time a program locks the table). I am not trying to scare you; I am only presenting the problem and telling you that these problems are different from those of the sequential world and hard to resolve with tools from that environment alone: we need tools that resolve them in the most programmer-friendly way.
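A small sketch of this point: we can at least measure how long the table stays unprotected between the program entry (t10) and the moment the lock is obtained (t1s). Variable names follow the text above.

* Sketch: measure the unprotected interval between program start (t10) and the lock (t1s).
t10 = SECONDS()              && program entry time
USE employee SHARED
DO WHILE !FLOCK()            && busy-wait until the table lock is obtained
   DOEVENTS
ENDDO
t1s = SECONDS() - t10        && the table is protected only from this moment on
? "Unprotected for", t1s, "seconds after the start"
UNLOCK
USE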

4.3.        Two parallel VFP processes with buffered tables

To test whether r1.exe and r2.exe, running in parallel, write the values in the correct order, we run the following tests:

1) Serial execution without considering the 6-second buffer refresh interval: wrong result.

Start r1.exe at time 0 and about a second later r2.exe. Then close the browse window of the first program and after that the second browse window. The final amount is 25, which is wrong. The first program (r1) replaces the start amount of 5 with 15, and the second replaces its (stale) start amount of 5 with 25, the final result. If the programs r1.exe and r2.exe had serialized properly, the correct amount after these updates would be 35. The one responsible for this lost update is the programmer.

 

2) Reverse program order, serial execution without considering the 6-second buffer refresh interval: wrong result.

Start r2.exe at time 0 and about a second later r1.exe. Then close the browse window of the first program (r2.exe) and after that the second browse window. The final amount is 15, which is wrong. The first program replaces the start amount of 5 with 25, and the second replaces its start amount of 5 with 15, the final result.

The result of the two programs running in parallel depends on the length of the refresh interval and on the order of execution. It is wrong.

 

3) Strictly serial execution, considering the buffer refresh interval: correct result.

Start r1.exe at time 0 and about a second later close its browse window. After that, start the second process (r2.exe) and close its browse window. The result is 35, which is correct. The first program replaces the start amount of 5 with 15, and the second replaces its start amount of 15 with 35, the final result.

 

4) Strictly serial execution with a smaller buffer refresh interval: correct result.

In the two programs, replace the SET REFRESH command with set refresh to 1,1. Remove the BROWSE command and compile the new programs. Start r1.exe at time 0. After a second, start the second process (r2.exe). The result is 35, which is correct. The first program replaces the start amount of 5 with 15, and the second replaces its refreshed start amount of 15 with 35, the final result.

When we shorten the refresh interval, the problem approaches the one we had with r1we in the preceding section. We reduce the chance of errors, but any error that does occur becomes more insidious. Making the refresh interval smaller does not help us:

- We remove the added value that VFP buffer mode offers to the programmer,

- We increase the debugging complexity,

- The system still has bugs.

Let us take another direction. The error derives from the stale read: the update statement uses stale data, and we cannot base our write on this value. If we take the table out of buffer mode, we put it back into the environment of the preceding section; its amount is synchronized and it updates as we expect. In the two programs (r1, r2) we insert a new command, CURSORSETPROP("Buffering", 1), before the replace command, and we remove the tableupdate command at the end of the two programs. With the modified programs we repeat the experiments from this section: the updates are correct.
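The modified r1 could look roughly like this (a sketch of the change just described; r2 is changed in the same way, and switching the buffering back to 1 assumes there are no pending buffered changes at that point):

*Program r1m - sketch of the modification described above
application.Visible=.t.
CLOSE DATABASES all
SET EXCLUSIVE off
Set refresh to 6,6
Use employee
CURSORSETPROP("Buffering", 3)     && optimistic row buffering while the user browses
BROWSE TITLE "R1m: refresh 6, amount 10"
CURSORSETPROP("Buffering", 1)     && back to non-buffered mode (assumes no pending buffered change)
Replace amount with amount + 10   && the replace now works on synchronized data; no TABLEUPDATE() needed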

Attention! The CURSORSETPROP("Buffering", 1) setting is a kind of global variable for the table, so it is not good programming practice to change it the way we did.

We also present the results of another solution. What really matters is the order in which we write the data and their values. We change the programs to insert new records instead of updating the existing one. After the inserts we have this data in the table:

RecNo   Key   Amount     AutoincrementNo
1       1     5          7
2       1     15 or 20   52   (the amount depends on the order the programs execute)
3       1     20 or 15   54   (the amount depends on the order the programs execute)

From this data we can calculate the result when we need it. From the table we can also deduce the update order. This data is free of the stale (incorrect) initially read amount.
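A sketch of how the result and the write order can be read back from these rows; the field name autono stands for the autoincrement column shown above and is only illustrative:

* Sketch: read back the inserted rows in write order using the autoincrement column.
SELECT key, amount, autono ;
   FROM employee ;
   WHERE key = 1 ;
   ORDER BY autono ;
   INTO CURSOR curHistory
* The row with the highest autono is the last write; because nothing was overwritten,
* the result can be recomputed from these rows whenever it is needed.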

4.4.        Protect data with message passing

Asynchronous processes can start at the same time (with a very small difference between their start times). Sometimes you have to order their execution. For example, you serially execute two programs:

do prog1

do prog2.

They execute in the order you start them. When you run them asynchronously, prog2 can execute its statements before prog1. One solution is not to parallelize them, but then you lose free computer resources. We are going to solve the problem using asynchronous execution, the file system and message passing.

Let us modify the programs r1w and r2w. We have:

 

*Program r1s

application.Visible=.f.

CLOSE DATABASES all

SET EXCLUSIVE off

Use employee

Replace amount with amount + 25

=FCLOSE(FCREATE('message.txt'))   && send the message: create the file and release the handle

Quit

 

*Program r2s

application.Visible=.f.

Set excl off

Close database all

Use employee

*block the execution till the message arrives
Do while not file('message.txt')
   DOEVENTS
Enddo

ERASE "message.txt"

Replace amount with amount - 20

=FCLOSE(FCREATE('endmessage.txt'))   && send the "done" message and release the file handle

quit

 

Our goal is to start the two programs r1s and r2s as soon as possible. There is a chance that r2s starts first. r1s has to increase the amount by 25 before the second program subtracts 20. We have to order their writes: r2s has to write after r1s.

The program r1s replaces the record's amount, writes a message into the file system (creates a file) and stops. r2s starts and then blocks, waiting for a message (the creation of the file message.txt). After receiving the message, it updates the amount, announces that it is done (creates the message "endmessage.txt") and stops.

We create two projects, r1s with the program r1s and r2s with the program r2s, and compile them. We erase the files message.txt and endmessage.txt if they exist in the default folder. From WE we start r2s.exe and after that r1s.exe. They generate the correct result, 10 (5+25-20). They execute in the background, asynchronously, so we get execution control back before they have finished their work. Asynchronous processes have the advantage of not interrupting the program flow (they do not freeze the screen or block the process that launches them). You obtain execution control, but you cannot be sure the execution has finished and the result is ready to use (unless you do checks).

VFP processes have limited communication capabilities. From the parallel-world perspective this is their added value: programmers can control them more easily (they have limited side effects). You can impose your own communication protocol on them. Many such processes (an army of them) can run in parallel and asynchronously; you have to control their interaction (communication).

It is also possible to create COM objects and their communication protocols using a proper interface. COM's rich communication capabilities can help you, but they can also create confusion; sometimes they cause more hassle than benefit.

An isolated, single-process VFP EXE is a good candidate to experiment with. Errors occurring in a VFP process will not affect other processes in the system; they may, however, cause errors in the database. The programmer can manage these errors using the VFP locking and transaction services.

5.   Reverse launcher

Until now we have launched the programs from WE. Let us try a more automated launcher of asynchronous processes, as in the following program:

 

*launcher

ERASE "message.txt"

ERASE "endmessage.txt"

USE employee

BROWSE

?"Start amount:",amount

use

LOCAL oShell

oShell = NewObject('_shellexecute','_environ.vcx')

?"Launched in reverse order."

oShell.ShellExecute("r2s.exe", "C:\moj\vfpcom\distr", "open")

?"First prosess, r2s"

oShell.ShellExecute("r1s.exe", "C:\moj\vfpcom\distr", "open")

?"Second proces, r1s"

Do while not file('endmessage.txt')   && wait for the results
   DOEVENTS
enddo

ERASE "endmessage.txt"

USE employee   && not locked; we rely on endmessage.txt to know r2s has finished its update

?"Final amount:", amount

 

The "launcher" program uses the VFP class '_shellexecute' (the class library '_environ.vcx' has to be in the VFP path). It first launches r2s.exe; after starting, the process r2s waits in the background. Then the launcher starts r1s.exe, which replaces the amount with amount + 25. The "file created" message is passed to r2s.exe, and after receiving it, r2s.exe replaces the amount with amount - 20. The result is 10 (5+25-20).

Using the "launcher" program, we arrange the start order of r1s and r2s (r2s starts first and r1s second), and with the help of a message we obtain the correct update order (reverse execution). In this example we intentionally arrange the program order. In an asynchronous environment, because of random execution times, you can launch asynchronous processes in the correct order, but the OS scheduler can rearrange them; you have to protect their execution order.

The problem is that the launcher program may not be able to determine whether the two background processes have finished executing. We use a second message, "endmessage.txt", to notify the launcher process that it can use the results.
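A variation of the waiting loop in the launcher, sketched with a time limit so the launcher cannot hang forever if a background process crashes (the 30-second limit is an illustrative choice):

* Sketch: wait for the end message, but only for a limited time.
lnStart = SECONDS()
DO WHILE !FILE("endmessage.txt") AND SECONDS() - lnStart < 30
   DOEVENTS
ENDDO
IF FILE("endmessage.txt")
   ERASE "endmessage.txt"
   USE employee
   ? "Final amount:", amount
   USE
ELSE
   ? "The background processes did not report completion in time."
ENDIF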

In this article we have presented two protection systems: locking and message passing. Locking can solve many problems in sequential programming (it serializes concurrent write and read operations). Locking is less adequate for serializing entire processes (too large a time window, and possible deadlocks). Message passing can manage entire process executions. In asynchronous programming we have to protect processes and systems: parallelize and serialize them.

6.   Supervisor

You can create a supervisor to launch the VFP processes, watch them and, if necessary, stop them. Programs running in the background can crash (have errors). You need a transaction-like mechanism that tells you about possible errors in process execution (messages from the background) and gives you (or the system) a chance to recover.

In case of errors in a background process, VFP emits a message, but you cannot catch it. For example, suppose you are lucky and catch the background message (the error) from the program r2s. As you are not sure exactly which phase (state) the program r1s is in, you do not know how to recover from this complicated situation. You have to design your programs to avoid such pathologies.
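One pathology-avoiding pattern is to let the background process report its own failure through the file system, in the same way it reports success; a sketch for r1s, with illustrative file names:

* Sketch: a background process reports its own failure through the file system,
* so the launching (supervising) process can detect it.
TRY
   USE employee
   Replace amount with amount + 25
   =FCLOSE(FCREATE("done_r1s.txt"))          && success message
CATCH TO oErr
   lnHandle = FCREATE("error_r1s.txt")       && failure message with the error text
   =FWRITE(lnHandle, TRANSFORM(oErr.ErrorNo) + ": " + oErr.Message)
   =FCLOSE(lnHandle)
ENDTRY
QUIT

The supervising process then only has to test FILE("error_r1s.txt") to learn that r1s failed, instead of trying to catch a message box it cannot reach.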

One way is to coordinate your processes with a supervisor (an old solution: two harnessed oxen and their supervisor). It runs in parallel with your programs, is simple and stable, and can help you manage your processes.

In VFP there is a kind of supervisor, its locking system. It has a limited mission: in the background it supervises the write/read operations and coordinates them. The other VFP "supervisor" is the synchronization mechanism: it dispatches data to the clients/nodes, watches the changes and syncs data between nodes. You can look at it as a messaging system. Programmers have no tools inside VFP to write this kind of supervisor; in their programs they only use these services.

In VFP it is difficult to create smarter supervisors; maybe it is better to use them from outside, for example from F# or Erlang.

7.   Conclusion

VFP distributes data automatically (and it is easy to manage). On multiple VFP clients you have multiple data copies, and VFP synchronizes this distributed (parallel) data. You can add parallel and asynchronous processes to a VFP application. The more you distribute work and data, the more precious the VFP built-in synchronization of local processes becomes.

It is difficult to manage these asynchronous processes efficiently with serial methods. In this article we have described some problems with asynchronous processes on the VFP client side. We have run multiple VFP EXEs in parallel (on one or more nodes). In production environments you need tools and components that give your application the ability to manage this parallel and asynchronous work in a more automated way. Basically, you need:

- A tool that launches the EXEs and possibly supervises their execution (termination, cancellation, errors),

- A protocol to efficiently coordinate processes and protect data.

In the next article, using Erlang, I shall show you how to add more distribution to VFP.