In this document we describe Linda, PVM, and Glenda. We define the Glenda operations and illustrate their use with examples. We describe how to use the C Glenda preprocessor and how to prepare a Makefile to simplify using the preprocessor. We describe how to install the Glenda software and use it on a system. Finally, we describe some of our future plans for Glenda.
When using the Linda model and agenda parallelism, worker processes are all equally capable and retrieve tasks from an agenda until all work is done. One master process starts slave processes using the eval function. The master then places tasks in the agenda. Each slave consists of a loop that retrieves tasks from the agenda and sends their results back to the master process. When all the needed results have been received, the master process sends ``poison pills'' to the slave processes to terminate them.
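A minimal sketch of this pattern, written with the Glenda operations defined later in this document, appears below. The tuple names (``task'', ``result''), the worker program name ``worker'', the poison-pill value of -1, and the function work() are illustrative assumptions, not part of the Glenda distribution.

Master:

    int i, id, answer;
    int nworkers = 4, ntasks = 100;         /* illustrative sizes */
    gl_mytid();                             /* enroll in Glenda */
    for ( i = 0; i < nworkers; i++ )
        gl_spawn ( "worker" );              /* start the slave processes */
    for ( i = 0; i < ntasks; i++ )
        gl_out ( "task", i );               /* place tasks in the agenda */
    for ( i = 0; i < ntasks; i++ )
        gl_in ( "result", ? id, ? answer ); /* collect the results */
    for ( i = 0; i < nworkers; i++ )
        gl_out ( "task", -1 );              /* poison pills terminate the workers */
    gl_exit();

Worker:

    int id, answer;
    gl_mytid();
    for ( ; ; ) {
        gl_in ( "task", ? id );
        if ( id == -1 ) break;              /* poison pill: stop working */
        answer = work ( id );               /* work() is a placeholder */
        gl_out ( "result", id, answer );
    }
    gl_exit();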
PVM is a collection of functions that, like Linda, allow the user to make use of a multiprocessor system. Its key features are:
There are a few noticeable differences from Linda. First of all, there is no eval function. Instead, we join Glenda by calling gl_mytid and use gl_spawn to start subprocesses. gl_mytid returns a PVM task id number and enrolls the calling process in PVM. gl_spawn also returns a PVM task id number for the spawned process. Also, every tuple has a character string as its first component. The functions outto and into were added to make use of PVM's multicast capability. Outto can be used along with a PVM task identifier to send a tuple directly to a task. Into must then be used to retrieve a tuple sent using outto. To exit PVM and the tuple server, the function gl_exit must be used. Finally, structures, unions, and typedefs are not supported by the Glenda operations.
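The function gl_out is used to output a tuple to tuple space. Calls of the following forms might be written (a sketch; the identifiers i, k, value, x, len, and j are placeholders):

    gl_out ( "data", i, k, value );     /* first form: value is a scalar or an array      */
    gl_out ( "row", i, x:len );         /* second form: the length of x is given as len   */
    gl_out ( "column", j, x[j]:len );   /* third form: x[j] from a two-dimensional array  */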
In the first form, value can be a scalar or an array; if it is an array, the length is implicit.
In the second form, the length to output for x is given explicitly; x could be an array or a pointer.
In the third form, we must also specify the length to output, since the preprocessor only keeps track of lengths for single-dimension arrays.
The functions gl_in and gl_inp are used to retrieve a tuple from tuple space. The function gl_in waits for a match before returning a tuple, while gl_inp returns 1 if a matching tuple was available and 0 if not, without waiting. When a tuple is matched, that tuple is removed from tuple space.
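Matching calls of the following forms might be written (a sketch with the same placeholder identifiers; gl_inp accepts the same argument forms as gl_in):

    gl_in ( "data", i, k, ? value );    /* first form */
    gl_in ( "row", i, ? x:len );        /* second form: len receives the item count  */
    gl_in ( "column", j, ? x[j]:len );  /* third form: x is a two-dimensional array  */
    status = gl_inp ( "data", i, k, ? value );  /* 1 if a tuple matched, 0 otherwise, without waiting */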
The first form matches on ``data'', i, and k, returning data for value. Value can be a scalar or an array; if it is an array, the length is implicit.
The second form matches on ``row'' and i, returning data for the array x and the number of items in len.
In the third form, assuming that x is a two-dimensional array, the call matches on ``column'' and j, returning data into x[j]. The length is ignored.
The functions gl_rd and gl_rdp are used to read a tuple from tuple space. The function gl_rd waits for a match before returning a tuple, while gl_rdp returns 1 if a matching tuple was available and 0 if not, without waiting. Unlike gl_in and gl_inp, gl_rd and gl_rdp only read a tuple; they do not remove it from tuple space.
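The read-only calls take the same forms (again a sketch with placeholder identifiers; gl_rdp accepts the same argument forms as gl_rd):

    gl_rd ( "data", i, k, ? value );    /* first form  */
    gl_rd ( "row", i, ? x:len );        /* second form */
    gl_rd ( "column", j, ? x[j]:len );  /* third form  */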
The first form matches on ``data'', i, and k, returning data for value. Value can be a scalar or an array; if it is an array, the length is implicit.
The second form matches on ``row'' and i, returning data for the array x and the number of items in len.
In the third form, assuming that x is a two-dimensional array, the call matches on ``column'' and j, returning data into x[j]. The length is ignored.
The functions gl_outto and gl_into make use of PVM 3.x's multicast capability. The function gl_outto outputs a tuple directly to a process using its PVM task id number, bypassing the tuple server. The function gl_into is used to receive a tuple sent using gl_outto.
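A pair of calls of the following form might be written (a sketch; placing tid as the first argument of gl_outto, and the identifiers i, k, and value, are assumptions):

    gl_outto ( tid, "data", i, k, value );  /* sender: deliver the tuple directly to task tid */
    gl_into ( "data", i, k, ? value );      /* receiver: accept the tuple sent with gl_outto  */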
In this example, the PVM task id number tid is used to output a tuple containing ``data'', i, k, and value. Value can be a scalar or an array. The corresponding gl_into matches on ``data'', i, and k, returning the data for value.
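A multicast version might look like the call below; writing the task id array and its length as tid:len follows Glenda's array-length notation, but is an assumption here:

    gl_outto ( tid:len, "data", i, k, value );  /* multicast to every task listed in tid */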
This example is the same as the one above, but instead of sending the tuple to only one process, we are sending the tuple to all of the processes whose PVM task id numbers are in the array tid with length len. The length is only needed when tid is an array.
If you received the uuencoded version, decode it by typing
> uudecode glenda.tar.Z.uue
then decompress glenda.tar.Z by typing
> uncompress glenda.tar.Z
and, in the directory where you wish to place Glenda, type
> tar -xf glenda.tar
If you received the ``.shar'' version, you will have received about 6 email messages which should be saved into separate files. Each file should be edited to remove the header lines as instructed within the files. Then each file should be used as input for ``sh'' as in:
> sh glenda.shar.1
> sh glenda.shar.2
...
Either of the distribution methods will create the Glenda directories for you within the current directory. The directories created are as follows. The top-level directory is named glenda. It contains the Glenda source code, a Makefile, and several subdirectories:
To build Glenda you can use ``make ARCH=SUN4'', substituting your architecture for SUN4. You might prefer to edit the Makefile in this directory and those in ts and examples to prevent compiling for the wrong architecture.
The Makefiles expect PVM_ROOT to be defined as the root directory of your PVM 3.x installation and expect to be able to write to the $PVM_ROOT/bin/$ARCH directory.
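For example, assuming a csh-style shell and that PVM 3.x is installed under /usr/local/pvm3 (the path is an assumption; adjust it for your site), the build might look like:

> setenv PVM_ROOT /usr/local/pvm3
> cd glenda
> make ARCH=SUN4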
SGI machines will require the option ``-lsun'' when linking in order to link in the XDR routines. This must be changed in the ts/Makefile.
This version of the tuple server and support code is written for PVM 3.x and would need a little effort to connect to PVM 2.4. We have a file named pvmold.c which is nearly complete for providing the PVM 3.x function calls using the PVM 2.4 library. The main missing piece is that the tuple server and gl_user.c use pvm_tasks to determine which task is the tuple server, and pvmold.c does not provide a version of pvm_tasks.
To run the C Glenda preprocessor on a source file, type

cgpp filename.cg

The source code file name must end in ``.cg'' in order to be valid. To make it easier to compile programs that use Glenda, the following sample Makefile will help. Make sure that you have a directory named ``bin'' in your home directory and that ARCH is set to the proper architecture type.
ARCH = RS6K
PVMBIN = $(PVM_ROOT)/bin/$(ARCH)
PVMINCLUDE = -I$(PVM_ROOT)/include -I../include
CC = c89
CFLAGS = -g $(PVMINCLUDE)
PVMLIB = $(PVM_ROOT)/lib/$(ARCH)
LIB = -L$(PVMLIB) -lpvm3 -lm
USER = ../gts/gluser.o

# This part converts a ``.cg'' file to a ``.o'' file.
# The default .SUFFIXES parameter had to be changed to
# accomplish this.
# The -mv command can be removed at your convenience.
.SUFFIXES:
.SUFFIXES: .o .cg .c .f .y .l .s

.cg.o:
	-cgpp $*.cg
	-$(CC) $(CFLAGS) -c $*.c
	-mv $*.c $*.x

# Place each master file and its corresponding slave file here.
all: $(PVMBIN)/a $(PVMBIN)/b

$(PVMBIN)/a: a.o
	$(CC) -o $(PVMBIN)/a a.o $(USER) $(LIB)
	chmod go+rx $(PVMBIN)/a

$(PVMBIN)/b: b.o
	$(CC) -o $(PVMBIN)/b b.o $(USER) $(LIB)
	chmod go+rx $(PVMBIN)/b

clean:
	rm -f *.o
****Important****
Do not invoke the tuple server without invoking PVM first. Do not invoke your master process without invoking the tuple server first.
A typical sequence of commands is as follows:
> pvmd pvmhosts &
> gts
> master_filename
a.cg

#include <stdio.h>
#include <glenda.h>

main(argc,argv)
int argc;
char *argv[];
{
    int my_tid, a;
    int Size, N, i;
    int *Data;
    int j, Kids;
    int kid, step;

    my_tid = gl_mytid();

    if ( argc > 1 ) Size = atoi(argv[1]);
    else Size = 100000;
    if ( argc > 2 ) N = atoi(argv[2]);
    else N = 10;
    if ( argc > 3 ) Kids = atoi(argv[3]);
    else Kids = 10;

    gl_out ( "Size", Size );
    gl_out ( "N", N );

    for ( i = 0; i < Kids; i++ ) {
        gl_spawn ( "b" );
        gl_out ( "Kid", i );
    }

    Data = (int *) malloc ( Size * sizeof(int) );

    for ( j = 0; j < N; j++ ) {
        printf("Step %d of %d\n", j+1, N );
        for ( i = 0; i < Kids; i++ )
            gl_out ( "data", i, Data:Size );
        for ( i = 0; i < Kids; i++ ) {
            gl_in ( "OK", ? kid, ? step );
            printf("Got OK from %d for step %d\n",kid,step+1);
        }
    }

    gl_in ( "Size", Size );
    gl_in ( "N", N );
    gl_exit();
}

b.cg

#include <stdio.h>
#include <glenda.h>

main(argc,argv)
int argc;
char *argv[];
{
    int my_tid, a;
    int Size, N, i;
    int *Data;
    int k;

    my_tid = gl_mytid();

    gl_rd ( "Size", ? Size );
    gl_rd ( "N", ? N );
    gl_in ( "Kid", ? k );
    fprintf(stderr,"Kid %d, Size %d, N %d\n",k,Size,N);

    Data = (int *) malloc ( Size * sizeof(int) );

    for ( i = 0; i < N; i++ ) {
        gl_in ( "data", k, ? Data:Size );
        gl_out ( "OK", k, i );
    }

    gl_exit();
}

These files, and other example files, are located in the directory /glenda/examples/. The file ``mm.c'' is a matrix multiplication program using PVM function calls. This file was acquired from the newsgroup comp.parallel.pvm and was created by Josef Fritscher, Technical University of Vienna. The files ``mmgl.cg'' and ``mmgl_worker.cg'' are Glenda versions of ``mm.c'' and its worker program ``mmworker.c''. The files ``mmto.cg'' and ``mmto_worker.cg'' are also Glenda versions of ``mm.c'' and ``mmworker.c'', but make use of gl_outto and gl_into.
To obtain a copy of the Glenda software, e-mail your request to seyfarth@whale.st.usm.edu and a copy will be sent to you as soon as possible. In the future, an anonymous ftp server may be set up to better facilitate distribution of the software.
Please send bug reports to seyfarth@whale.st.usm.edu and we will try to help. It would be helpful to describe your virtual machine configuration (hardware, PVM version), include a short segment of code illustrating the problem, and describe how it fails.
Good luck with the software.
1 Linda is a registered trademark of Scientific Computing Associates.