High Performance Computing
Many problems interesting to scientists and engineers simply cannot be solved on PCs: they may require too much memory or disk space, take too long to run, or both. Simply put, HPC is about solving big problems on big machines. More verbosely, high performance computing is the name given to problems requiring, and systems providing, computing power beyond what is available in solutions aimed at typical consumers.
Supercomputers are constructed from smaller, off-the-shelf components (nodes) that are similar to standard desktop computers. These individual computers are linked together by a high-speed "fabric" of connections (the interconnection network, or interconnect). Taking advantage of many nodes at once requires software to control communication and resource allocation. On our side, Torque and Moab are used to manage the machines and submitted jobs. Your programs should use the MPI communication library in order to utilize the machines efficiently.
Useful For: Users interested in running parallel programs, running with large memory requirements, or running for long periods of time should consider this option.
Not Useful For: Users wanting to run a large batch of small, single processor jobs. Please see the Condor documentation for suggestions.
Condor (High Throughput Computing)
Condor is a batch system for queuing, scheduling, and prioritizing compute-intensive jobs. It is developed by the Condor team at the University of Wisconsin. In a nutshell, Condor matches user-submitted jobs to available compute resources. These resources could be desktop machines or owner-based clusters. As long as a machine has the required resources, the job will be sent to that machine and run.
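As a sketch of how jobs are described to Condor, a submit description file lists the executable, its inputs, and how many copies to queue (the program name and file names below are hypothetical examples, not from this documentation):

```
# Hypothetical Condor submit description file.
# Queues 100 copies of the program "analyze", one per input file;
# $(Process) expands to 0..99 for the successive jobs.
universe   = vanilla
executable = analyze
arguments  = input.$(Process).dat
output     = analyze.$(Process).out
error      = analyze.$(Process).err
log        = analyze.log
queue 100
```

This pattern, many independent single-processor jobs fanned out over whatever machines are free, is exactly the large-batch workload Condor is designed for.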
Useful For: Users wanting to run a large batch of small, single processor jobs. These jobs may be short (minutes) or long (days/weeks).
Not Useful For: Users interested in running parallel programs or jobs with large memory requirements. Please see the High Performance Computing documentation for suggestions.