Table of contents:
1. Hardware requirements
2. Installation and configuration of frontend node
3. Installation of compute node
4. Start computing
5. Getting cluster console
1. Minimum Hardware Requirements:
1.1 Frontend Node:
• Disk Capacity: 20 GB
• Memory Capacity: 512 MB (i386) and 1 GB (x86_64)
• Ethernet: 2 physical ports (e.g., "eth0" and "eth1")
1.2 Compute Node:
• Disk Capacity: 20 GB
• Memory Capacity: 512 MB
• Ethernet: 1 physical port (e.g., "eth0")
2. Installation and Configuration of the Frontend:
This section describes how to install your Rocks cluster frontend.
The minimum requirement to bring up a frontend is to have the following rolls:
• Kernel/Boot Roll CD
• Base Roll CD
• HPC Roll CD
• Web Server Roll CD
• OS Roll CD - Disk 1
• OS Roll CD - Disk 2
OR
You can use “area51+base+bio+condor+ganglia+hpc+java+kernel+kvm+os+perl+python+service-pack+sge+web-server+zfs-linux-6.1.x86_64.disk1.iso” for the installation.
Download link:
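If a checksum is published alongside the download, it is worth verifying the ISO before attaching it to the VM. A standard check, using the filename listed above, would be:
# md5sum area51+base+bio+condor+ganglia+hpc+java+kernel+kvm+os+perl+python+service-pack+sge+web-server+zfs-linux-6.1.x86_64.disk1.iso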
Steps:
2.1. Boot the frontend system from the ISO. At the boot prompt, for a frontend (master) node simply type “build” and press Enter.
The installer will start loading packages and the installation will begin.
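For reference, the interaction at the boot prompt is just the single word build; the exact prompt text can vary between Rocks releases, so take this only as an illustration:
boot: build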
2.2. After the packages are loaded from the ISO, the first configuration screen appears.
Here you can give a hostname of your choice; I am leaving it at the default.
Click on “CD/DVD-based rolls” to select the rolls.
2.3. You will see the list of available rolls.
2.4. Now select the rolls you want to use and click “Submit”.
2.5. The next screen will show the selected rolls on the left.
Click Next, and click Next again on the screen that follows.
2.6. Then you’ll see the Cluster Information screen; fill in the fields as needed and click Next.
2.7. The private cluster network configuration screen allows you to set up the networking parameters for the Ethernet network that connects the frontend to the compute nodes.
Here you can give the IP addresses for both Ethernet interfaces. First give the IP address for ‘eth1’ and click Next; this IP will be used for the public network.
2.8. Now give the IP address for ‘eth0’; this IP will be used for the private network.
2.9. Configure the Gateway and DNS entries:
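As an illustration for steps 2.7 through 2.9, one consistent set of example values might look like this. The addresses below are placeholders, not recommendations; substitute values that match your own network (10.1.1.1 with netmask 255.255.0.0 is commonly used as the Rocks default for the private side):
eth1 (public):  192.168.1.100 / 255.255.255.0
eth0 (private): 10.1.1.1 / 255.255.0.0
Gateway:        192.168.1.1
DNS server:     8.8.8.8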
2.10. Set the root password.
2.11. Configure the time settings.
2.12. The disk partitioning screen allows you to select automatic or manual partitioning.
I am selecting automatic partitioning.
Now the installation will start.
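For reference, automatic partitioning on a Rocks 6.x frontend typically creates roughly a 16 GB / partition, a 4 GB /var partition, a 1 GB swap partition, and puts the remainder of the disk in /export (also reachable as /state/partition1); check the Rocks users guide for your release if you need the exact layout.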
3. Installing your Compute Nodes:
3.1. Log in to the frontend node as root.
3.2. Run a program which captures compute node DHCP requests and puts their information into the Rocks MySQL database:
# insert-ethers
If you get an error message at this point, wait for a while and run the command again. insert-ethers presents a list of appliance types.
Take the default selection, Compute, and hit ’Ok’.
3.3. insert-ethers will now wait and listen for DHCP requests from new compute nodes.
3.4. Now boot a compute-node VM with the ISO attached and with PXE (network boot) enabled.
The boot order should be: CD/DVD, hard disk, then PXE.
Boot the VM. When the client VM boots, it will start pulling the installation files over PXE.
Its MAC address will be captured by the frontend, and insert-ethers will show the newly discovered node.
Note: ‘insert-ethers’ has discovered a compute node. The "( )" next to compute-0-0 indicates the node has not yet requested a kickstart file.
You will see this type of output for each compute node that is successfully identified by insert-ethers. An asterisk “(*)” means that the node has requested its kickstart file.
3.5. When node installation is complete, you can press F8 to quit insert-ethers on the frontend.
3.6. After the installation completes, the node will reboot and come up as an installed compute node.
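At this point you can optionally confirm from the frontend that the node has been registered and is reachable. A quick check, assuming the default name compute-0-0 that insert-ethers assigns to the first node:
# rocks list host
# ssh compute-0-0 hostname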
4. Start Computing:
4.1. Run an MPI program on the cluster:
If you don’t have a user account on the cluster, create one for yourself, and propagate the information to the compute nodes with:
# useradd username
# passwd username
# rocks sync users
• Create a file in your home directory named machines, and put two entries in it, such as:
compute-0-0
compute-0-1
• Now launch the job from the frontend (ssh-agent and ssh-add let mpirun ssh to the compute nodes without prompting for your key passphrase):
# ssh-agent $SHELL
# ssh-add
# /opt/mpich/gnu/bin/mpirun -np 2 -machinefile machines ./hello
The hello.c MPI program:
----------------------------------------------------------------------------------------------------------------
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[]) {
    int numprocs, rank, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);                           /* start the MPI runtime */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);         /* total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);             /* rank of this process */
    MPI_Get_processor_name(processor_name, &namelen); /* hostname of the node running this process */
    printf("Process %d on %s out of %d\n", rank, processor_name, numprocs);
    MPI_Finalize();                                   /* shut down MPI */
    return 0;
}
----------------------------------------------------------------------------------------------------------------
4.2. You need to compile the hello.c program first.
Run the command:
# mpicc hello.c -o hello
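Note that mpicc should come from the same MPI installation as the mpirun you use to run the program. If a plain mpicc is not on your PATH, the compiler wrapper for the MPICH tree used earlier would presumably live alongside mpirun, i.e. something like:
# /opt/mpich/gnu/bin/mpicc hello.c -o hello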
Note: Now, to execute the MPI program successfully across the cluster, you need to keep the following things in mind:
1. The MPI program should be run as a non-root user.
2. After creating a new user, you have to sync the user to all of your nodes. To do that, run the command:
# rocks sync users
3. The same executable program must be present on all the nodes in the non-root user’s home directory.
4. You can run the program:
# mpirun -np 4 -H compute-0-0 ./hello
OR
# mpirun -np 4 -H compute-0-0,compute-0-1 ./hello
OR
# mpirun -np 4 -machinefile machines ./hello
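If everything is set up correctly, each of these commands prints one line per process, coming from the printf in hello.c. With -np 4 the output would look something like the lines below; the host names here are only examples and depend on which nodes the processes land on, and the ordering of the lines is not guaranteed:
Process 0 on compute-0-0.local out of 4
Process 1 on compute-0-0.local out of 4
Process 2 on compute-0-1.local out of 4
Process 3 on compute-0-1.local out of 4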
5. To get the GUI of the cluster, open the cluster console in your browser:
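On a stock Rocks frontend, the monitoring pages are normally served by the frontend’s own web server, so an address of the form http://<frontend-public-IP>/ganglia, opened from a machine on the public network, should bring up the Ganglia console (the ganglia roll is included in the ISO used above). Here <frontend-public-IP> is a placeholder for the public address you assigned to eth1 in step 2.7.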
MPI program download link.
Rocks Cluster document