CS-EE 481 Spring 2004
Founder's Day, 2004, University of Portland School of Engineering
Oregon Chub Beowulf Cluster
Authors: A.J. Supinski, Billy Sword
Advisors: Dr. Rylander, Dr. Lillevik
Industry Representative: Mr. Noah Van Dresser, Intel Corp.
Agenda
• Introduction A.J. Supinski
• Background A.J. Supinski
• Methods A.J. Supinski
• Results Billy Sword
• Conclusions Billy Sword
• Demonstration Billy Sword
Introduction
Thanks to Dr. Rylander and Mr. Noah Van Dresser for all of their help.
Overview:
• Enable non-homogeneous clusters via Genetic Algorithms (GAs).
• Increase cost efficiency for those who run non-homogeneous clusters.
Background
A Beowulf cluster is a set of commodity computers whose software makes them act like one supercomputer.
Process scheduling is an NP-Complete problem. GAs have been shown to produce good (non-optimum) solutions to NP-Complete problems in polynomial time.
Our project measures the speedup from replacing the existing cluster process scheduling with GA scheduling.
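As a sketch of the idea (our own illustration, not the project's code): a GA can evolve task-to-node assignments for a heterogeneous cluster, scoring each assignment by its makespan. The node speeds, task costs, and GA parameters below are hypothetical.

```python
import random

def makespan(assignment, task_cost, node_speed):
    """Finish time of the slowest node under a task->node assignment."""
    load = [0.0] * len(node_speed)
    for task, node in enumerate(assignment):
        load[node] += task_cost[task] / node_speed[node]
    return max(load)

def ga_schedule(task_cost, node_speed, pop_size=30, generations=200, seed=0):
    """Evolve task->node assignments; lower makespan = fitter."""
    rng = random.Random(seed)
    n_tasks, n_nodes = len(task_cost), len(node_speed)
    # Random initial population of assignments.
    pop = [[rng.randrange(n_nodes) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, task_cost, node_speed))
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(n_tasks)         # point mutation
            child[i] = rng.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: makespan(a, task_cost, node_speed))
```

Because fitness is just the predicted makespan, the same skeleton works whether the nodes are identical or not, which is what makes GAs attractive for non-homogeneous clusters.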
Methods
The Plan:
Build a non-homogeneous Beowulf cluster, benchmark it, then modify it to use GA scheduling and run the benchmarks again.
Hold the environment constant by using the same hardware and almost entirely the same software for both tests.
Results are limited by the 4-PC cluster size.
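The before-and-after comparison implies a repeatable timing harness; a minimal serial sketch (function names are ours, not the project's):

```python
import time

def bench(fn, *args, repeats=3):
    """Run fn(*args) repeats times; return the best wall-clock time in
    seconds, which damps one-off OS scheduling noise."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best
```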
Methods
The Action:
• Build the hardware: a 4-PC cluster.
• RedHat Linux 9.0.
• OSCAR (Open Source Cluster Application Resources).
• PXE boot kernel issues.
• MPICH (a portable implementation of the Message Passing Interface).
Results
Working 4-PC Beowulf cluster running MPICH.
Working GA prototype, but not yet integrated into MPICH.
MPICH software architecture challenges.
Life
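The Life benchmark is Conway's Game of Life. One generation can be sketched serially as below (our illustration; the cluster version distributed the grid over subnodes via MPICH):

```python
def life_step(grid):
    """One Game of Life generation on a 2-D list of 0/1 cells
    (dead cells beyond the border, no wraparound)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbors in the 3x3 block, excluding the cell.
            n = sum(grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))
                    if (rr, cc) != (r, c))
            # A cell lives with exactly 3 neighbors, or 2 if already alive.
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt
```

Each cell depends only on its 3 x 3 neighborhood, so the grid splits cleanly into strips per subnode, with only the boundary rows exchanged between nodes each generation.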
Benchmarking #1
[Chart: run time (sec) vs. number of subnodes (1 to 3) for 200 x 200, 300 x 300, and 400 x 400 Life, and for Pi using 10 million intervals; times range from 0 to 2.5 sec.]
Mastermind
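The deck does not show the Mastermind code; the "colors x columns" parameters on the next slide suggest a brute-force solver over that search space, whose core is a peg-scoring routine like this sketch (names and structure are ours):

```python
from collections import Counter

def score(secret, guess):
    """Return (black, white) Mastermind pegs: black = right color in the
    right column, white = right color in the wrong column."""
    black = sum(s == g for s, g in zip(secret, guess))
    # Colors shared between the codes, regardless of position.
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black
```

With c colors and k columns the candidate space is c^k codes, so the 10, 20, and 25 colors x 10 columns cases grow sharply, which is what makes this a useful stress benchmark.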
Benchmarking #2
[Chart: run time (sec) vs. number of subnodes (1 to 3) for Mastermind with 10, 20, and 25 colors x 10 columns; times range from 0 to 25 sec.]
Fractal Image
Conclusions
GA processor scheduling remains a viable idea.
Modifying MPICH would require substantial work; this could be a future Senior Design project starting from where we ended.
We have a working Beowulf cluster and good instructions for future attempts.
Demonstration
Questions?