Dell High Performance Visualization Cluster

Dell White Paper
By Li Ou
Massively Scale-Out Systems Group
Dell Product Group - Enterprise
October 2007
Contents

Introduction
Technical Challenges and Innovative Solutions
Architecture
    Hardware Architecture
    Software Architecture
Software Configuration and User Environment
    Graphics Card Driver and OpenGL Library
    Distributed Multihead X
        Configuration
        Starting DMX and the Window Manager
    Chromium
        Building
        Configuration
        Deployment and Launching Applications
    Paraview
        Building and Deployment
        Running Paraview
Hardware Configuration
Automated Deployment Process
    Software Components
    Script for Installing the Master Node
    Configuration of the Render Nodes
Appendix A: Script of dell-viz-master at master node
Appendix B: .bashrc file at master node
Appendix C: /opt/viz/module/module-cr file at master node
Appendix D: A Script to configure render node with Platform OCS (extend-compute.xml)
Special Notices
Introduction

Dell's visualization system, the Dell High Performance Visualization Cluster, is a cluster of high performance workstations that utilizes the aggregate performance of data processors, graphics accelerators, and network technologies. The visualization cluster gives scientists and engineers the visualization power to study and simulate large scale problems, such as earth simulation, and very high resolution problems, such as molecular bonding. This document explains the visualization cluster reference architecture, its components, the hardware and software configuration, and the installation process.
Technical Challenges and Innovative Solutions

Supercomputers and high-performance computing (HPC) clusters enable demanding software, such as real-time simulation, animation, virtual reality, and scientific visualization applications, to generate high-resolution data sets at sizes that were not previously feasible. However, efficiently visualizing these large, dynamic data sets, especially those with high-resolution display requirements, can be a significant challenge. For complex data sets or high-resolution images, the visualization process is highly compute intensive, and applications requiring rapid turnaround and interactive human perception place additional demands on processing power. State-of-the-art hardware can significantly enhance visualization performance, but a single machine is limited by its processor performance and available memory. If very high resolution is required, the visualization task can simply be too large for one machine to handle.

A visualization cluster is a scalable visualization solution that brings the power of parallel processing to bear on many demanding visualization challenges. A visualization cluster utilizes the aggregate performance of commodity graphics accelerators, data processors, and network technologies in clusters of HPC workstations, and integrates the latest generations of these components into Dell's high performance clustering architecture. The key benefits of this solution include the following:

• Cost-effectiveness: Commodity graphics hardware and workstations remain far less expensive than high-end parallel graphics computers, and some PC graphics accelerators provide performance comparable to or greater than that of high-end graphics hardware.
• Scalability: As long as the network is not saturated, the aggregate hardware capacity of a visualization cluster grows linearly with the number of HPC workstations.
• Flexibility: The performance of commodity hardware has been increasing rapidly, and its development cycles are typically much shorter than those of custom-designed, high-end parallel hardware. In addition, open hardware interfaces, such as PCI Express (PCIe), and open software interfaces, such as the Open Graphics Library (OpenGL), allow organizations to easily take advantage of new hardware to increase cluster performance.
• Deployment and manageability: The visualization cluster architecture described in this paper builds on Dell's standard high performance clustering architecture, and shares the deployment and management flexibility of that successful architecture.
Architecture

Visualization applications require resources that a pure compute cluster does not provide. The visualization cluster architecture extends the Dell High Performance Computing (HPC) cluster hardware to include visualization capabilities, and builds a software stack based on Platform OCS cluster software and open source software.
Hardware Architecture

Dell HPC is a solution package with high performance Dell PowerEdge servers, high-speed interconnects, and optional storage devices. A visualization cluster adds visualization capability to this HPC architecture using Dell Precision workstations equipped with powerful PCIe x16 NVIDIA graphics cards, whose video outputs drive large immersive displays. Figure 1 illustrates the makeup of a visualization cluster with integrated visualization components:
Figure 1: Visualization Cluster Hardware Components
The compute nodes, I/O nodes, storage, cluster master node, ITA node, and cluster interconnect are the major components of a standard Dell High Performance Cluster. The render nodes, high resolution display, and visualization application master node are the new components that a visualization cluster adds to the HPC architecture to provide visualization capabilities. Typically, the visualization application master node and the cluster master node share the same physical machine, although in some systems the two can be separated.

Dedicated compute nodes enable highly compute-intensive programs, such as real-time simulations and scientific applications, to generate large scale, high-resolution data sets for visualization. I/O nodes and storage devices form a high performance, high availability I/O platform, allowing parallel access to large data sets from the visualization nodes and compute nodes. The cluster master node and ITA node provide a cluster management framework for all components, both the traditional compute cluster components and the newly added visualization components.

Industry standard graphics workstations with standard OpenGL 3D graphics cards serve as render nodes and run open source visualization software. With the support of powerful graphics cards and parallel rendering programs, a series of render nodes converts an abstract description of a scene (a data set) into an image, transfers the rendered output to the high-resolution display devices, and synchronizes multi-tile displays. Visualization middleware runs on the visualization application master node to coordinate the visualization efforts of the worker components; it communicates with the visualization nodes over the cluster network, sending control information and OpenGL commands.

The Dell Precision Workstation 690 is recommended for the active visualization components of a visualization cluster, such as the render nodes and the visualization application master node. The Precision Workstation 690 is a high-performance, scalable workstation that supports dual Intel Xeon processors; quad-channel FBD-DDR2 memory (up to 64 GB); four SAS/SATA ports, three SATA ports, and one ATA-100 port; three x4 PCIe slots, two 64-bit PCI-X slots, and one PCI slot. To maximize graphics processing capability, a Precision Workstation 690 supports dual NVIDIA Quadro FX graphics cards through its built-in x16 PCIe slots (up to 250 W each).

The cluster interconnect supports data transfer among all the components. It carries file I/O and traditional cluster computing communications, as well as visualization traffic such as pixel data and OpenGL drawing commands. A high-speed, low-latency interconnect, such as InfiniBand, can be used to speed the transfer of pre-rendered data sets and image data among visualization nodes.

The visualization cluster architecture enables display selection from a wide range of configurations, from single displays to tiled display walls and immersive CAVE environments. A visualization cluster aggregates the resolution of single screens to output images of tens to hundreds of megapixels, satisfying the resolution requirements of various visualization applications.
Software Architecture

The visualization cluster software is layered on top of the Dell HPC software package: Platform OCS with Red Hat Enterprise Linux. Figure 2 illustrates the visualization cluster software stack.
Figure 2: Visualization Cluster Software Stack

Each workstation in a visualization cluster requires the three bottom layers: the graphics card, the driver, and the X Window System and OpenGL. Platform OCS software builds a clustering platform on multiple visualization workstations and compute servers. Chromium and DMX create another layer that provides parallel rendering by utilizing the rendering resources of the individual workstations. Paraview enables high performance visualization on the cluster platform with distributed data processing capabilities.

Chromium is an open source software stack for parallel rendering on clusters of workstations. It is designed to improve three aspects of graphics scalability:

• Data scalability: Chromium can process increasingly larger data sets by distributing workloads across larger clusters.
• Rendering performance: Chromium can scale out rendering performance by aggregating commodity graphics hardware.
• Display performance: Chromium helps systems output large, high-resolution graphics, such as those for a tiled display.

DMX allows a single X server to run across a cluster of systems so that the X display, or desktop, can be presented across many physical displays. For example, a cluster of 12 workstations running one X server can provide the image for a large tiled display in a 4 x 3 screen configuration. Working with DMX, Chromium can render data sets and output large images to a unified X server, which controls multiple graphics cards connected to the physical displays and allows logical windows to cross the physical boundaries of the individual displays. In a visualization cluster, Chromium plus DMX is mainly used to provide parallel rendering with resolution scaling on multi-tile output. This parallel rendering layer is transparent
to applications, offering an easy way to deploy traditional OpenGL applications in visualization cluster environments, particularly applications requiring very high resolution on large tiled displays.

Paraview is a popular open source application for visualizing data sets of varying size. It supports distributed computation models to process very large data sets on a visualization cluster. Paraview also allows users to easily manipulate raw data with various built-in visualization algorithms, and then output high resolution results to large tiled displays through a parallel rendering module. Paraview can optimize data partitioning across the cluster of visualization nodes and parallelize data processing with the MPI interfaces provided by clustering software such as Platform OCS.
Software Configuration and User Environment

The following subsections outline the software that is configured on a visualization cluster to add visualization capabilities. Some of the packages come with Platform OCS, while others are open source software that must be downloaded and built.
Graphics Card Driver and OpenGL Library

Platform OCS installs the operating system and most drivers, and properly configures the visualization workstations as cluster nodes, but it does not provide the dedicated driver for the NVIDIA graphics card. To fully utilize the visualization capability and hardware acceleration of the graphics card, an updated driver should be downloaded from the Dell Support site and installed on all workstations equipped with NVIDIA graphics cards. The driver includes the OpenGL library, and the driver installation automatically configures the X Window System.
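The driver package can be installed from the command line on each workstation. A minimal sketch, assuming the RPM package name used by the deployment scripts in the appendices of this paper (substitute the file actually downloaded for your card):

# Install the Dell-packaged NVIDIA driver; --force and --nodeps match the
# options used by the automated deployment scripts later in this paper.
rpm -i --force --nodeps dell-nvidia-8756-5dkms.x86_64.rpm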
Distributed Multihead X

Distributed Multihead X (DMX) is included in the basic rolls of the Platform OCS packages, so no custom installation is necessary, but some configuration steps are required for correct functionality.
Configuration

DMX uses a configuration file on the master node to determine which render nodes to connect to, as well as their geometry in the tiled display. The monitor bezels should be accounted for in this geometry when visualization applications span the tiled display. An example configuration file for a 4x3 tiled display is shown in Figure 3; the format of the file is explained in the Xdmx man page. In this example, each screen contributes 2560x1600 of resolution to the tiled display, and each tile's origin is offset by 2760x1800 rather than 2560x1600, leaving 200 pixels in each direction to account for the bezels. c0-0 to c0-11 are the Linux machine names of the 12 render nodes.

The underlying network protocol is transparent to DMX. An IP over InfiniBand protocol can be configured for the render nodes, using the high speed, low latency InfiniBand interconnect to speed up rendering performance. DMX requires that the X servers of the master node and the render nodes share the same font configuration; otherwise, the -ignorebadfontpaths parameter must be used when starting the DMX server. A shared font server may be configured for all X servers. Detailed information about font configuration can be found in the Linux documentation and is outside the scope of this paper.
/opt/viz/wall.conf:
virtual wall1 {
    display c0-0:0 2560x1600@0x0;
    display c0-1:0 2560x1600@2760x0;
    display c0-2:0 2560x1600@5520x0;
    display c0-3:0 2560x1600@8280x0;
    display c0-4:0 2560x1600@0x1800;
    display c0-5:0 2560x1600@2760x1800;
    display c0-6:0 2560x1600@5520x1800;
    display c0-7:0 2560x1600@8280x1800;
    display c0-8:0 2560x1600@0x3600;
    display c0-9:0 2560x1600@2760x3600;
    display c0-10:0 2560x1600@5520x3600;
    display c0-11:0 2560x1600@8280x3600;
}
Figure 3: Example DMX Configuration File for 4x3 Tiled Displays
Starting DMX and the Window Manager

When starting a DMX server, a window manager must be specified to create a GUI desktop environment. GNOME and KDE are two popular window managers. A startup script for DMX with KDE is shown in Figure 4.

/opt/viz/start-dmx-wall:
export XINITRC="/opt/viz/Xdmx.xinitrc.kde"
xinit -- /usr/X11R6/bin/Xdmx :1 +xinerama -configfile /opt/viz/wall.conf -config wall1 -input c0-0:0.0 -ac -norender -maxscreens 12
/opt/viz/Xdmx.xinitrc.kde:
startkde
Figure 4: DMX Startup Script
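Once the DMX server is up, the unified desktop is exposed as display :1 on the master node. As a quick sanity check (a hypothetical example, run from another shell on the master node), any X client pointed at that display should appear on the tiled wall:

# Open an xterm on the DMX display to confirm the unified desktop spans the wall.
DISPLAY=:1 xterm &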
Chromium

Chromium is open source software and must be downloaded from http://sourceforge.net/project/showfiles.php?group_id=16529 and manually installed. Detailed information about building and using Chromium is provided in its documentation. The following subsections outline the basic steps to use Chromium on a visualization cluster.
Building

Chromium can be built as a 64-bit or a 32-bit version. Most applications work well with the 64-bit version of Chromium, but some still require the 32-bit version. Both versions can be built on a single machine, but the source code must be extracted to different directories. The options.mk file must be modified to enable DMX support and to specify the version to be built. The necessary modifications for the 64-bit and 32-bit versions are shown in Figure 5 and Figure 6, respectively. In this example, the 64-bit version is extracted to /opt/chromium/cr-1.9, and the 32-bit version to /opt/chromium/cr-1.9-32.
/opt/chromium/cr-1.9/options.mk:
USE_DMX=1
DMX_INCDIR=/usr/X11R6/include/X11/extensions
DMX_LIBDIR=/usr/X11R6/lib64
Figure 5: Modifications to the Chromium options.mk File (64-bit version)
/opt/chromium/cr-1.9-32/options.mk:
FORCE_32BIT_ABI=1
USE_DMX=1
DMX_INCDIR=/usr/X11R6/include/X11/extensions
DMX_LIBDIR=/usr/X11R6/lib
Figure 6: Modifications to the Chromium options.mk File (32-bit version)
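After options.mk is edited, the build itself is driven by make from the top of the source tree. A minimal sketch, assuming the source tarball is named cr-1.9.tar.gz (the actual file name depends on the release downloaded):

# Extract and build the 64-bit version; repeat in /opt/chromium/cr-1.9-32
# (with FORCE_32BIT_ABI=1 set as in Figure 6) for the 32-bit version.
cd /opt/chromium
tar xzf cr-1.9.tar.gz
cd cr-1.9
# edit options.mk as shown in Figure 5, then:
make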
Configuration

With DMX support, Chromium can be configured to launch automatically when an OpenGL application is started, without the user needing to know how the render nodes are configured. To accomplish this, the following tasks must be completed before launching an application.
First, applications must link to the libcrfaker library provided by Chromium instead of the standard OpenGL library. libcrfaker.so is a runtime library that supports the OpenGL application programming interfaces (APIs) and simply dispatches OpenGL calls into the following processing chain for parallel rendering. To accomplish this, a symbolic link is created from libGL.so to libcrfaker.so in /opt/chromium/cr-1.9/lib/Linux, and that library directory is added to LD_LIBRARY_PATH (a sketch of these steps appears at the end of this subsection).

Second, a Chromium configuration script must be written. Chromium uses Python scripts to increase system flexibility, and users can write a script to define the desired system behavior. The Chromium source tree already includes an example Python script for running through DMX, located at /opt/chromium/cr-1.9/mothership/configs/autodmx.conf. Two modifications are needed for the visualization cluster. First, the installation directory of Chromium must be specified in the file. Second, the original script uses rsh to start crservers on the remote render nodes, while the default remote command on the visualization cluster is ssh, so the rsh commands must be changed to ssh. These modifications to the autodmx.conf script are illustrated in Figure 7.

/opt/chromium/cr-1.9/mothership/configs/autodmx.conf:
crdir = "/opt/chromium/cr-1.9"

if AUTOSTART:
    servernode.AutoStart( ["/usr/bin/ssh", host,
        "/bin/sh -c 'DISPLAY=:1.0 LD_LIBRARY_PATH=%s %s/crserver -mothership %s:%d'" %
        (crlibdir, crbindir, localHostname, mothershipPort) ] )
Figure 7: Modifications to the Chromium Script File autodmx.conf

Finally, a configuration file used to automatically launch autodmx.conf must be created. When an OpenGL application starts, libcrfaker.so looks for this configuration file to determine the next action; essentially, the file tells Chromium where to find the autodmx.conf script. The contents of this file are shown in Figure 8. The user environment variable CR_CONFIG_PATH must be set to the path of this file.
/opt/viz/crconfigs:
/opt/chromium/cr-1.9/mothership/configs/autodmx.conf %m %p
Figure 8: Configuration File to Automatically Launch autodmx.conf
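A minimal sketch of the library setup described above, run on the master node (the paths follow the examples in this section):

# Point libGL.so at Chromium's interposer library so OpenGL applications
# load libcrfaker.so instead of the system OpenGL library.
cd /opt/chromium/cr-1.9/lib/Linux
ln -s libcrfaker.so libGL.so

# Make the faked library visible, and tell Chromium where to find the
# configuration file of Figure 8.
export LD_LIBRARY_PATH=/opt/chromium/cr-1.9/lib/Linux:$LD_LIBRARY_PATH
export CR_CONFIG_PATH=/opt/viz/crconfigs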
Deployment and Launching Applications

To deploy Chromium, copy the configured /opt/chromium directory from the master node to /opt/chromium on all render nodes. Ensure that the symbolic link from libGL.so to libcrfaker.so exists only on the master node, not on the render nodes. An OpenGL application can then be launched from the master node; as long as the configuration is correct, Chromium starts automatically, and the parallel rendering is transparent to the traditional OpenGL application. Before launching applications, ensure that the DMX server has been started successfully using the script in Figure 4.
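A hypothetical deployment and launch sequence, assuming the 12 render nodes c0-0 through c0-11 used throughout this paper and the standard glxgears demo as a stand-in for a real OpenGL application:

# Copy the configured Chromium tree to each render node, then remove the
# libGL.so link there so the render nodes keep their native OpenGL drivers.
for i in $(seq 0 11); do
    scp -r /opt/chromium c0-$i:/opt/
    ssh c0-$i "rm -f /opt/chromium/cr-1.9/lib/Linux/libGL.so"
done

# With the DMX server running (Figure 4) and the environment set as in the
# previous subsection, launching an OpenGL application starts Chromium
# automatically and renders across the tiled display.
glxgears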
Paraview

Paraview is optional open source visualization software; it must be downloaded from http://www.paraview.org/New/download.html and manually installed. Binaries are available from the Paraview website, but the default build only supports visualization on a single node. It is possible to output Paraview's visualization results to tiled displays using Chromium and DMX, but to fully utilize distributed computation models and process very large data sets on a visualization cluster, the Paraview source code should be downloaded and compiled with the MPI facility provided by the Platform OCS packages.
Building and Deployment

Paraview uses the CMake tool to configure compilation parameters. For the latest step-by-step instructions on building and using Paraview, refer to the Paraview manuals. In the CMake interface, three parameters must be modified to use the MPI package; they are summarized in Figure 9. OpenMPI is used as an example, but any available MPI package may be selected. Once the Paraview build is configured, the software can be compiled and installed with the command make && make install.
MPI_INCLUDE_PATH   /opt/openmpi/1.1.4/include
MPI_LIBRARY        /opt/openmpi/1.1.4/lib/libmpi.so
VTK_USE_MPI        ON
Figure 9: Summary of Modifications in the CMake Interface for the Paraview Build
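A minimal build sketch, assuming the Paraview source tree has been extracted to a hypothetical directory /opt/src/paraview:

# Configure in a separate build directory; set the three parameters from
# Figure 9 in the ccmake interface, then compile and install.
mkdir -p /opt/src/paraview-build
cd /opt/src/paraview-build
ccmake /opt/src/paraview    # set MPI_INCLUDE_PATH, MPI_LIBRARY, VTK_USE_MPI
make && make install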
After Paraview is installed, six executable files are created under the /usr/local/bin directory: pvclient, pvserver, pvdataserver, pvrenderserver, paraview, and pvbatch. To deploy Paraview on a visualization cluster, copy those files from the master node to the /usr/local/bin directories of all visualization nodes.
Running Paraview

Paraview can run in either a standalone model or a distributed computation model. To run standalone, type paraview on a command line on the master node. To use Paraview with Chromium for parallel rendering on a tiled display, configure the environment as described in the previous sections before starting Paraview.

Running Paraview in a distributed computation model takes advantage of parallel data processing on a visualization cluster. Figure 10 shows a simple script to start Paraview in this model. The script first uses OpenMPI to start pvservers on all visualization nodes, then starts a pvclient on the master node to connect to a pvserver instance. This script uses the TCP/IP protocol, but OpenMPI can alternatively be configured to use an InfiniBand network.
/opt/viz/start-paraview:
mpirun -np 12 -hostfile /opt/viz/hosts -mca btl tcp pvserver --use-offscreen-rendering &
sleep 2
pvclient -sh=c0-0
Figure 10: Script to Start Paraview in a Distributed Computation Model
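The script references the host file /opt/viz/hosts, which is not reproduced in this paper; a plausible version simply lists the 12 render nodes, one per line:

/opt/viz/hosts:
c0-0
c0-1
c0-2
c0-3
c0-4
c0-5
c0-6
c0-7
c0-8
c0-9
c0-10
c0-11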
In a distributed computation model, Paraview natively supports tiled displays; it is only necessary to add two command-line parameters when starting the pvservers. Figure 11 shows a simple script to start Paraview with a 4x3 tiled display.
/opt/viz/start-paraview-wall:
mpirun -np 12 -hostfile /opt/viz/hosts -mca btl tcp pvserver --use-offscreen-rendering -tdx=4 -tdy=3 &
sleep 2
pvclient -sh=c0-0
Figure 11: Script to Start Paraview with 4x3 Tiled Displays
Hardware Configuration

The following hardware components were used to build the visualization cluster described in this document:

• Visualization workstation: Dell Precision 690
• Graphics card: NVIDIA Quadro FX 4500
• Ethernet switch: Dell PowerConnect 5324
• InfiniBand switch: Cisco SFS 7000D
• InfiniBand HCA: Cisco SFS-HCA-320-A1
• Computing server: Dell PowerEdge 1950
• Tiled displays: Dell 3007WFP flat panel monitors
Automated Deployment Process

The configured visualization software, including the graphics card driver, DMX, Chromium, and Paraview, can be deployed manually from the master node to all visualization workstations by following the steps in the previous sections. However, an automated deployment and redeployment methodology is recommended for a cluster with a considerable number of workstations and servers. Platform OCS provides a simple mechanism to install and distribute customized packages and scripts in a cluster. A package containing all of the software introduced in the previous sections, together with the corresponding installation scripts, is designed to automate the deployment process of a visualization cluster with Platform OCS.
Software Components

The package includes the following components, assuming that all of them have been built and configured as described in the previous sections:

• dell-nvidia-8756-5dkms.x86_64.rpm: The graphics card driver downloaded from the Dell Support site.
• chromium.tar.gz: A tar file of the Chromium directory configured as described in the previous sections.
• libGLU.so.1.3: The GLU library file, copied from /usr/X11R6/lib64.
• libglut.so.3.7.1: The GLUT library file, compiled from the source code downloaded from http://www.opengl.org/resources/libraries/glut/.
• paraview.bin.tar.gz: A tar file of the six executable Paraview files under /usr/local/bin, described in the previous section.
Script for Installing the Master Node

The file dell-viz-master is an executable installation script for the master node of a visualization cluster. The annotated script, along with two supporting files it uses, is shown in Appendices A, B, and C. It is assumed that the dell-viz-master file and the other software are under the directory /opt/viz.
Configuration of the Render Nodes

The following steps configure the render nodes of a visualization cluster:

• Copy all of the files listed above to /export/home/install/contrib/4.4.0/x86_64/SRPMS on the master node.
• Edit the file extend-compute.xml under /export/home/install/site-profiles/4.4.0/nodes and add the script shown in Appendix D to the file.
• From the directory /export/home/install, run the command rocks-dist dist to rebuild the images.
• Install Platform OCS on the workstations.
Appendix A: Script of dell-viz-master at master node

ARCH=x86_64
log=/root/log/dell-viz.log
package=dell-nvidia-8756-5dkms.x86_64.rpm
chrom=chromium.tar.gz
paraview=paraview.bin.tar.gz

# install nVidia drivers
echo -e "\nInstall Dell nVidia Drivers " >> $log
/bin/rpm -i --force --nodeps $package >> $log 2>&1

# install glut lib
/bin/cp libglut.so.3.7.1 /usr/X11R6/lib64
/bin/ln -s /usr/X11R6/lib64/libglut.so.3.7.1 /usr/lib64/libglut.so.3

# install chromium under /opt
/bin/cp $chrom /opt
cd /opt
echo -e "\nInstall chromium in /opt " >> $log
/bin/tar xzf $chrom >> $log 2>&1
/bin/rm -f $chrom

# install Paraview files
/bin/cp $paraview /usr/local/bin
cd /usr/local/bin
echo -e "\nInstall paraview in /usr/local/bin " >> $log
/bin/tar xzf $paraview >> $log 2>&1
/bin/rm -f $paraview

# enable font server
chkconfig xfs on
service xfs restart >> $log 2>&1

# disable firewall so the ports used by Chromium are open
chkconfig iptables off
service iptables stop >> $log 2>&1

# modify inittab so the default runlevel starts the X Window System
ex - /etc/inittab << script
/:initdefault:/d
i
id:5:initdefault:
.
wq!
script

# copy .bashrc
/bin/cp -f .bashrc /root/.bashrc
Appendix B: .bashrc file at master node

# .bashrc

# User specific aliases and functions
alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

module load hpc/openmpi 2>/dev/null

PATH=$PATH:/opt/viz
export PATH
MODULEPATH=$MODULEPATH:/opt/viz/module
export MODULEPATH
Appendix C: /opt/viz/module/module-cr file at master node

#%Module1.0####################################################################
##
## Chromium modulefile
##
proc ModulesHelp { } {
    global version
    puts stderr "\n\tChromium module"
    puts stderr "\t****************************************************"
    puts stderr "\n\t This module sets up the following environment"
    puts stderr "\t variables for Chromium:"
    puts stderr "\t     PATH"
    puts stderr "\t     LD_LIBRARY_PATH"
    puts stderr "\t****************************************************\n"
}

module-whatis "Set up environment for Chromium"

# for Tcl script use only
set version "3.1.6"

prepend-path PATH /opt/chromium/cr-1.9/bin/Linux
prepend-path LD_LIBRARY_PATH /opt/chromium/cr-1.9/lib/Linux
setenv CRMOTHERSHIP cluster
setenv CR_CONFIG_PATH /opt/viz/crconfigs
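With the MODULEPATH extension from Appendix B in place, this modulefile can be loaded from a shell to set up the Chromium environment; a hypothetical usage example:

module load module-cr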
Appendix D: A Script to configure render node with Platform OCS (extend-compute.xml)

<?xml version="1.0" standalone="no"?>
<kickstart>
<description/>
<changelog/>
<post>
<!-- the following is the code added to extend-compute.xml -->
ARCH=x86_64
log=/root/log/dell-viz.log
package=dell-nvidia-8756-5dkms.x86_64.rpm
chrom=chromium.tar.gz
paraview=paraview.bin.tar.gz

<!-- install nVidia drivers -->
wget -nv http://<var name="Kickstart_PrivateAddress"/>/install/rocks-dist/lan/$ARCH/SRPMS/$package >> $log 2>&1
echo -e "\nInstall Dell nVidia Drivers " >> $log
/bin/rpm -i --force --nodeps $package >> $log 2>&1
/bin/rm -f $package

<!-- install Chromium -->
cd /opt
wget -nv http://<var name="Kickstart_PrivateAddress"/>/install/rocks-dist/lan/$ARCH/SRPMS/$chrom >> $log 2>&1
echo -e "\nInstall chromium in /opt " >> $log
/bin/tar xzf $chrom >> $log 2>&1
/bin/rm -f $chrom
/bin/rm /opt/chromium/cr-1.9/lib/Linux/libGL.so.1
/bin/rm /opt/chromium/cr-1.9-32/lib/Linux/libGL.so.1

<!-- install Paraview files -->
cd /usr/local/bin
wget -nv http://<var name="Kickstart_PrivateAddress"/>/install/rocks-dist/lan/$ARCH/SRPMS/$paraview >> $log 2>&1
echo -e "\nInstall paraview in /usr/local/bin " >> $log
/bin/tar xzf $paraview >> $log 2>&1
/bin/rm -f $paraview

<!-- install glut and GLU libs -->
wget -nv http://<var name="Kickstart_PrivateAddress"/>/install/rocks-dist/lan/$ARCH/SRPMS/libglut.so.3.7.1 >> $log 2>&1
wget -nv http://<var name="Kickstart_PrivateAddress"/>/install/rocks-dist/lan/$ARCH/SRPMS/libGLU.so.1.3 >> $log 2>&1
/bin/mv libglut.so.3.7.1 /usr/X11R6/lib64
/bin/mv libGLU.so.1.3 /usr/X11R6/lib64
/bin/ln -s /usr/X11R6/lib64/libglut.so.3.7.1 /usr/lib64/libglut.so.3
/bin/ln -s /usr/X11R6/lib64/libglut.so.3.7.1 /usr/lib64/libglut.so
/bin/ln -s /usr/X11R6/lib64/libGLU.so.1.3 /usr/lib64/libGLU.so.1
/bin/ln -s /usr/X11R6/lib64/libGLU.so.1.3 /usr/lib64/libGLU.so

<!-- enable font server -->
chkconfig xfs on

<!-- disable firewall -->
chkconfig iptables off

<!-- add necessary environment variables -->
echo -e "\nexport LD_LIBRARY_PATH=\"/opt/chromium/cr-1.9/lib/Linux:\$LD_LIBRARY_PATH\" " >> /root/.bashrc
echo -e "export PATH=\"/opt/chromium/cr-1.9/bin/Linux:\$PATH\" " >> /root/.bashrc echo -e "export CRMOTHERSHIP=\"<var name="Kickstart_PrivateAddress"/>\" " >> /root/.bashrc < !—modify the inittab thus the default startup level is x window -- > ex - /etc/inittab << script /:initdefault:/d i id:5:initdefault: . wq! Script </post> < !-- the new code end here -- > </kickstart>
Special Notices:
Cisco is a registered trademark of Cisco Systems, Inc.
Ethernet is a trademark of Xerox Corporation.
GNOME is a trademark of the GNOME Foundation.
InfiniBand is a registered trademark and service mark of the InfiniBand Trade Association.
Intel is a registered trademark and Xeon is a trademark of Intel Corporation.
KDE is a trademark of KDE e.V.
Linux is a registered trademark of Linus Torvalds.
OpenGL is a trademark of Silicon Graphics, Inc.
ParaView and VTK are registered trademarks of Kitware, Inc. in the United States.
PCI-X is a registered trademark and PCIe is a trademark of the PCI-SIG.
Platform OCS is a registered trademark of Platform Computing Corporation.
SAS is a registered trademark of SAS Institute Inc. in the USA and other countries.
Serial ATA is a trademark of the Serial ATA Trade Association.
Xinerama and X Window System are trademarks of The Open Group.
Other company, product, and service names may be trademarks or service marks of others.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND. © Dell Inc. 2007. All rights reserved. Dell, PowerEdge, PowerVault, and the Dell logo are trademarks of Dell Inc. Other trademarks and trade names are the property of their respective owners and Dell disclaims proprietary interest in the marks and names of others.