Step 3: MPI (Message Passing Interface) and the Cell SDK
MPI is one of the most popular open standards for clustering/supercomputing on commodity hardware. While MPI runs on many platforms, we assume that you now have Linux running on your system. To enable it as a cluster 'node' you need to add the necessary communications layers, and then MPI itself. We illustrate the 'yum' installer in this section, but you can install these packages by whatever method you are most comfortable with. Finally, we include a link to the IBM Cell SDK (currently 3.0); compiling your programs with its compiler helps you get the best performance out of your MPI grid.
Step 3a - Installing SSH: SSH provides the secure network communication that MPI uses to reach the other nodes. We use the 'yum' installer to first install the server component.
- Type "yum install openssh-server"
- If prompted about download size, type "Y".
- If prompted about a GPG key for this package, type "Y".
- If all went well, the install process should say "Complete!"
- Once installed, you need to start this server process (sshd). You can start the server by typing "service sshd start"
- Finally, we need to install the ssh client components by typing "yum install openssh-clients".
- This process should complete without prompt, and ssh should be fully installed.
- Testing the connection: By default, SSH requires a password between nodes, so a one-time password prompt from each node indicates a successful connection. Test by typing "ssh root@IPADDR" (where IPADDR is the IP address of that node). To enable passwordless communication please see http://www.opensce.org/doc/sce1_7/enable_rsh_ssh
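A rough sketch of one common way to set up passwordless logins (it assumes the default OpenSSH file locations): generate a key pair on the 'Master' node, then copy the public key to each of the other nodes.

    ssh-keygen -t rsa              (accept the default file location and leave the passphrase empty)
    ssh-copy-id root@IPADDR        (repeat once for each node in the cluster)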
Step 3b - Installing NFS: NFS is used to share files, such as your compiled MPI binaries, between nodes. We again use 'yum'.
- To install NFS, type "yum install nfs-utils"
- If prompted about the download size, type "Y" and hit enter.
- This process should complete without prompt, and NFS should be fully installed.
- Testing the connection: Type "mount -t nfs YourMachineIP:/exported/path /mnt/nfs" (where "YourMachineIP" is the serving node's IP address and "/exported/path" is the directory exported on that node). For example: "mount -t nfs 192.168.1.2:/mnt/nfs /mnt/nfs"
- If properly configured, you can then type "cd /mnt/nfs/" and access the share. NFS should then be configured correctly.
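Note: for the mount above to succeed, the serving node must actually export a directory over NFS. A rough sketch of the server-side setup (the path and export options are examples only, not taken from this guide):

    mkdir -p /mnt/nfs                                         (create the directory to be shared)
    echo "/mnt/nfs *(rw,sync,no_root_squash)" >> /etc/exports
    service nfs start                                         (start the NFS server)
    exportfs -ra                                              (re-export everything listed in /etc/exports)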
Step 3c - Installing OpenMPI: With SSH and NFS in place, we can now install MPI itself.
- To install the OpenMPI components, type "yum install openmpi openmpi-devel openmpi-libs"
- When prompted about the download size, type "Y", hit enter.
- Configuration: The next step is to configure MPI to run on your network of nodes. This only needs to be done on one 'Master' node machine. For this node, the main configuration file is "/etc/openmpi-default-hostfile". You may need to use the "touch" command to create this file. Type "touch /etc/openmpi-default-hostfile"
- You need to edit this file (try the nano editor: "nano /etc/openmpi-default-hostfile"), adding one line per node with that node's IP address and its number of 'slots'. The number of slots is the number of cores available for processing on that node. A sample entry is shown after this list (in this case with only one node entered).
- You should now have MPI properly configured, and a cluster enabled!
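For reference, a hostfile entry takes the form "IP-address slots=N". The entry below is only an example; substitute your own nodes' IP addresses, one line per node. On a PS3, slots is typically 2, because the Cell's PPE presents two hardware threads to Linux.

    192.168.1.2 slots=2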
- Download the source code file from http://www.ps3cluster.org/distros/Pi.c, and place this file into the openmpi folder on your 'Master' node machine.
- Compile this program using the "mpicc" command. Specifically, type "mpicc -o Pi Pi.c". The resulting MPI-linked binary executable will be called "Pi". (A rough sketch of what such a program looks like appears after this list.)
- To run this program locally, type "mpirun -np 2 Pi". The number 2 here denotes the number of processes (slots) to be utilized by the run.
- Now place the "Pi" binary on each machine within its NFS share, and re-run mpirun with a higher number of slots, for instance "mpirun -np 16 Pi". View the result.
- Note that by running the "top" command on each node you can see whether that node is being utilized by the program's run. This will help you confirm that work is actually being spread across your network.
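For reference, below is a minimal sketch of what an MPI Pi estimator looks like. It is not the exact source of the Pi.c linked above, just an illustration of the same idea: each rank integrates its share of 4/(1+x^2) over [0,1], and the partial sums are combined on rank 0 with MPI_Reduce. It compiles and runs exactly as described above ("mpicc -o Pi Pi.c", then "mpirun -np 2 Pi").

    /* Pi.c (sketch) - estimate Pi by midpoint integration of 4/(1+x^2) on [0,1] */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        long i, n = 10000000;                  /* number of intervals */
        double h, x, local_sum = 0.0, pi = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes (slots) */

        h = 1.0 / (double)n;
        for (i = rank; i < n; i += size) {     /* each rank takes every size-th interval */
            x = h * ((double)i + 0.5);
            local_sum += 4.0 / (1.0 + x * x);
        }
        local_sum *= h;

        /* combine the partial sums on rank 0 */
        MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Estimated Pi = %.12f\n", pi);

        MPI_Finalize();
        return 0;
    }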
Final Step, the Cell SDK: You can download the Cell SDK from IBM HERE, and you can find a quick install guide (from GNURadio) HERE. Note: you shouldn't need the 'sudo' command if you are already logged in as root.
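Once the SDK is installed, one way to have mpicc use the Cell toolchain is OpenMPI's OMPI_CC environment variable, which overrides the C compiler called by the wrapper. A sketch, assuming the SDK's PPU compiler (ppu-gcc) is installed and on your PATH:

    OMPI_CC=ppu-gcc mpicc -O3 -o Pi Pi.c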
More Info? -> Resources Page