
Distributed ANSYS Guide

ANSYS Release 9.0

002114
November 2004

ANSYS, Inc. is a UL registered ISO 9001:2000 company.

ANSYS, Inc.
Southpointe
275 Technology Drive
Canonsburg, PA
[email protected]
http://www.ansys.com
(T) 724-746-3304
(F) 724-514-9494

Copyright and Trademark Information

Copyright 2004 SAS IP, Inc. All rights reserved. Unauthorized use, distribution or duplication is prohibited.

ANSYS, DesignSpace, CFX, DesignModeler, DesignXplorer, ANSYS Workbench environment, AI*Environment, CADOE and any and all ANSYS, Inc. product names referenced on any media, manual or the like, are registered trademarks or trademarks of subsidiaries of ANSYS, Inc. located in the United States or other countries. ICEM CFD is a trademark licensed by ANSYS, Inc. All other trademarks and registered trademarks are the property of their respective owners.

    ANSYS, Inc. is a UL registered ISO 9001: 2000 Company.

ANSYS, Inc. products may contain U.S. Patent No. 6,055,541.

Microsoft, Windows, Windows 2000 and Windows XP are registered trademarks of Microsoft Corporation. Inventor and Mechanical Desktop are registered trademarks of Autodesk, Inc. SolidWorks is a registered trademark of SolidWorks Corporation. Pro/ENGINEER is a registered trademark of Parametric Technology Corporation. Unigraphics, Solid Edge and Parasolid are registered trademarks of Electronic Data Systems Corporation (EDS). ACIS and ACIS Geometric Modeler are registered trademarks of Spatial Technology, Inc.

    FLEXlm License Manager is a trademark of Macrovision Corporation.

This ANSYS, Inc. software product and program documentation are ANSYS Confidential Information and are furnished by ANSYS, Inc. under an ANSYS software license agreement that contains provisions concerning non-disclosure, copying, length and nature of use, warranties, disclaimers and remedies, and other provisions. The Program and Documentation may be used or copied only in accordance with the terms of that license agreement.

    See the ANSYS, Inc. online documentation or the ANSYS, Inc. documentation CD for the complete Legal Notice.

If this is a copy of a document published by and reproduced with the permission of ANSYS, Inc., it might not reflect the organization or physical appearance of the original. ANSYS, Inc. is not liable for any errors or omissions introduced by the copying process. Such errors are the responsibility of the party providing the copy.

Table of Contents

1. Overview of Distributed ANSYS
2. Configuring Distributed ANSYS
   2.1. Prerequisites for Running Distributed ANSYS or the Distributed Solvers
        2.1.1. MPI Software
        2.1.2. Using MPICH
               2.1.2.1. Configuration for Windows Systems Running MPICH
               2.1.2.2. Configuration for UNIX/Linux Systems Running MPICH
   2.2. Installing Distributed ANSYS
   2.3. Setting Up the Environment for Distributed ANSYS or the Distributed Solvers
        2.3.1. Using the mpitest Program
               2.3.1.1. Running a Local Test
               2.3.1.2. Running a Distributed Test
        2.3.2. Other Considerations
   2.4. Starting Distributed ANSYS
        2.4.1. Starting Distributed ANSYS via the Launcher
        2.4.2. Starting Distributed ANSYS via Command Line
3. Running Distributed ANSYS
   3.1. Advantages of Using Distributed ANSYS
   3.2. Supported Analysis Types
   3.3. Supported Features
   3.4. Running a Distributed Analysis
   3.5. Understanding the Working Principles and Behavior of Distributed ANSYS
   3.6. An Example Distributed ANSYS Analysis (Command Method)

List of Tables

1.1. Parallel Solvers Available in Distributed ANSYS (PPFA License Required)
1.2. Parallel Solvers Available in Shared-Memory ANSYS with a PPFA License
1.3. Parallel Solvers Available in Shared-Memory ANSYS Without a PPFA License
2.1. Platforms and MPI Software


Chapter 1: Overview of Distributed ANSYS

Solving a large model with millions of DOFs can require many CPU hours. To decrease processing time, ANSYS offers several options for distributing model-solving power using multiple processors.

You can run the entire model using Distributed ANSYS. With Distributed ANSYS, the entire /SOLUTION phase runs in parallel, including the stiffness matrix generation, linear equation solving, and results calculations. With multiple processors, you can see significant speedup in the time it takes to run your analysis.

    You can also run just the equation solver step itself in a parallel manner under two primary configurations: shared-memory architecture and distributed-memory architecture.

This document discusses the first option, running Distributed ANSYS. For more information on running the distributed solvers in shared-memory ANSYS, see Chapter 14, Improving ANSYS Performance and Parallel Performance for ANSYS, of the ANSYS Advanced Analysis Techniques Guide. Both Distributed ANSYS and the distributed solvers running under shared-memory ANSYS require Parallel Performance for ANSYS (PPFA) licenses.

The following tables show how the various solvers, element formulation, and results calculation behave under Distributed ANSYS and shared-memory ANSYS. You can run Distributed ANSYS in either distributed parallel mode (across multiple machines) or in shared-memory parallel mode (using multiple processors on a single machine). However, while both modes will see a speedup in CPU time, only distributed parallel mode will allow you to take advantage of increased memory availability across the multiple machines.

It is important to fully understand the terms distributed-memory parallel and shared-memory parallel as they relate to the physical hardware. The terms Distributed ANSYS and shared-memory ANSYS refer to our software offerings, which run on the following hardware configurations:

Shared Memory: In a shared memory environment, a single shared memory address space is accessible by all processors. Therefore each CPU shares the memory with the others. A common example of a shared memory system would be an IA-32 bit Windows machine with two processors. Both processors share the same main memory space through a common bus architecture.

Distributed Memory: In a distributed memory environment, each CPU or computing node has its own memory address space which is not shared by other CPUs or computing nodes. Communication between machines is by MPI (Message Passing Interface) software on the network. A common example of a distributed memory system would be any collection of desktop workstations linked by a network. When the collection of linked processors is a dedicated computing engine for a given period of time and is not used during that time for everyday tasks such as email or browsing, it is called a cluster.

Mixed Memory: Mixed memory indicates that the cluster is using a combination of both shared and distributed memory. A common example of a mixed memory system would be a cluster of IA-64 bit CPUs with two CPUs in each physical box, sharing that memory, but with a number of these units connected to each other by a network. The AMG solver does not support this as it is a shared memory only solver. However, the DPCG, DJCG and DDS solvers all support both shared memory and distributed memory by treating the shared memory as if it were distributed memory.

Table 1.1 Parallel Solvers Available in Distributed ANSYS (PPFA License Required)

Solvers/Feature                             Shared-Memory   Distributed-Memory   Mixed-Memory
                                            Hardware        Hardware             Hardware
DPCG/PCG                                    Y               Y                    Y
DJCG/JCG                                    Y               Y                    Y
Distributed sparse                          Y               Y                    Y
AMG                                         --              --                   --
Sparse                                      --*             --*                  --*
DDS                                         --              --                   --
ICCG                                        --              --                   --
Element formulation, results calculation    Y               Y                    Y

*Runs in shared-memory parallel mode on the local machine only; element formulation and results calculation will still run in distributed-memory parallel mode.

Table 1.2 Parallel Solvers Available in Shared-Memory ANSYS with a PPFA License

Solvers/Feature                             Shared-Memory   Distributed-Memory   Mixed-Memory
                                            Hardware        Hardware             Hardware
DPCG                                        Y               Y                    Y
DJCG                                        Y               Y                    Y
Distributed sparse                          --              --                   --
PCG                                         Y               --                   --
JCG                                         Y               --                   --
AMG                                         Y               --                   --
Sparse                                      Y               --                   --
DDS                                         Y               Y                    Y
ICCG                                        Y               --                   --
Element formulation, results calculation    Y               --                   --

Table 1.3 Parallel Solvers Available in Shared-Memory ANSYS Without a PPFA License

Solvers/Feature                             Shared-Memory   Distributed-Memory   Mixed-Memory
                                            Hardware*       Hardware             Hardware
DPCG                                        --              --                   --
DJCG                                        --              --                   --
Distributed sparse                          --              --                   --
PCG                                         Y               --                   --
JCG                                         Y               --                   --
AMG                                         --              --                   --
Sparse                                      Y               --                   --
DDS                                         --              --                   --
ICCG                                        Y               --                   --
Element formulation, results calculation    Y               --                   --

*Via /CONFIG,NPROC,n.

In ANSYS, the solution time is typically dominated by three parts: the time spent to create the element matrices and form the global matrices or global systems of equations, the time to solve the linear system of equations, and the time spent post-processing the solution (i.e., calculating derived quantities such as stress and strain). The distributed solvers (DDS, DPCG, DJCG) that currently exist in shared-memory ANSYS can significantly decrease the time spent to solve the linear system of equations. However, when using these distributed solvers, the time spent creating the system of equations and the time spent calculating the derived quantities from the solution to the system of equations is not reduced.

Shared-memory architecture (/CONFIG,NPROC) runs a solution over multiple processors on a single multiprocessor machine. When using shared-memory ANSYS, you can reduce each of the three main parts of the overall solution time by using multiple processors. However, this architecture is limited by the memory bandwidth; you typically see very little reduction in solution time beyond two to four processors.

The distributed-memory architecture of Distributed ANSYS runs a solution over multiple processors on a single machine or on multiple machines. It decomposes large problems into smaller domains, transfers the domains to each processor, solves each domain, and creates a complete solution to the model. Because each of the three main parts of the overall solution time are running in parallel, the whole model solution time is significantly reduced. The memory required is also distributed over multiple systems. This memory-distribution method allows you to solve very large problems on a cluster of machines with limited memory.

Distributed ANSYS works by launching ANSYS on multiple machines. The machine that ANSYS is launched on is called the master machine and the other machines are called the slave machines. (If you launch Distributed ANSYS using multiple processors on the same machine, you would use the same terminology when referring to the processors, e.g., the master and slave processors.) All pre-processing and post-processing commands are executed only on the master machine. Only the SOLVE command and any necessary supporting commands (e.g., /SOLU, FINISH, /EOF, /EXIT, etc.) are sent to the slave machines to be processed.

Files generated by Distributed ANSYS are named JobnameN.ext, where N is the process number. The master process is always 0, and the slave processes are 1, 2, etc. When the solution is complete and you issue the FINISH command in /SOLU, Distributed ANSYS combines all JobnameN.rst files into a single Jobname.rst file, located on the master machine.
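As a purely hypothetical illustration of this naming convention (the jobname "beam" and the three-process run are assumptions, not values from this guide), the master machine's working directory might contain files such as:

beam0.rst   beam0.out   beam0.err    <- master process (N = 0)
beam1.rst   beam1.out                <- slave process 1
beam2.rst   beam2.out                <- slave process 2
beam.rst                             <- merged result file created after FINISH in /SOLU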

The remaining chapters explain how to configure your environment to run Distributed ANSYS, how to run a Distributed ANSYS analysis, and what features and analysis types are supported in Distributed ANSYS. You should read these chapters carefully and fully understand the process before attempting to run a distributed analysis. The proper configuration of your environment and the installation and configuration of the appropriate MPI software are critical to successfully running a distributed analysis.


Chapter 2: Configuring Distributed ANSYS

2.1. Prerequisites for Running Distributed ANSYS or the Distributed Solvers

Your system must meet the following requirements to run either Distributed ANSYS or the distributed solvers (DDS, DPCG, DJCG) under shared-memory ANSYS.

    Homogeneous network: All machines must be the same type and OS level.

You must be able to remotely log in to all machines, and all machines in the cluster must have identical working directory structures. Working directories should not be NFS mounted to all machines. Do not use mapped drive letters on Windows.

For Distributed ANSYS, all machines in the cluster must have ANSYS installed. For distributed solvers under shared-memory ANSYS, you can install just the distributed solvers on the slave machines. The master machine must have a full ANSYS installation.

    You will need at minimum one PPFA license for each job.

All machines must have the same version of MPI software installed and running. You can run the system's native MPI (typically installed with the OS), or MPICH (installed with an ANSYS installation, where supported). The table below shows the native MPI software and version level for each platform.

If you plan to use only the AMG solver in shared-memory ANSYS, MPI software is not required. It is required only for the distributed solvers. If you are using only the AMG solver, skip the rest of this document and continue with Using the Parallel Performance for ANSYS Add-On in the ANSYS Advanced Analysis Techniques Guide.

    2.1.1. MPI Software

The MPI software you use depends on the platform. The following table lists the type of MPI software required for each platform. For questions regarding the installation and configuration of the MPI software, please contact your MPI vendor.

The distributed solvers running under shared-memory ANSYS run on all of the platforms listed in the table below. Distributed ANSYS runs on the following platforms:

HP PA8000 / HP Itanium2 (native MPI only; no MPICH)
SGI (native MPI and MPICH)
Intel IA-32 Linux (MPI/Pro and MPICH)
Intel IA-64 Linux (MPICH only; no native MPI)

Table 2.1 Platforms and MPI Software

HP AlphaServer / Tru64 UNIX V5.1
    MPI software: HP MPI Version 2.0 for Tru64
    More information: http://h21007.www2.hp.com/dspp/tech/tech_TechSoftwareDetailPage_IDX/1,1703,5840,00.html

HP PA8000 64-bit / HP-UX 11.0 (64-bit)
    MPI software: HP MPI B.01.08.03.00
    More information: http://h21007.www2.hp.com/dspp/tech/tech_TechSoftwareDetailPage_IDX/1,1703,3438,00.html

HP Itanium2 IA64 / HP-UX 11.22
    MPI software: HP MPI 02.00.00.00
    More information: http://h21007.www2.hp.com/dspp/tech/tech_TechSoftwareDetailPage_IDX/1,1703,3438,00.html

IBM AIX64 64-bit / AIX 5.1 Update 5
    MPI software: POE Version 3.2.0.14
    More information: http://www.austin.ibm.com/support/sp/sp_secure/readme/pssp3.1/poe.html [1]

SGI 64-bit / IRIX64 6.5.23m
    MPI software: MPI 4.3 (MPT 1.8) with array services 3.5
    More information: http://www.sgi.com/software/mpt/overview.html

Sun UltraSPARC 64-bit / Solaris 8, UltraSPARC III and IV 64-bit / Solaris 8
    MPI software: HPC ClusterTools 5.0
    More information: http://www.sun.com/hpc/communitysource/

Intel IA-32 Linux / RedHat AS 2.1, Kernel 2.4.9
    MPI software: MPI/Pro 1.6.5
    More information: http://www.mpi-softtech.com

Intel IA-64 Linux / RedHat AS 2.1, Kernel 2.4.18 and AMD Opteron 64-bit Linux / SuSE Kernel 2.4.21
    MPI software: MPICH-1.2.5
    More information: See MPICH discussion, below.

Fujitsu SPARC64 IV / Solaris 8
    MPI software: Parallelnavi 2.1
    More information: Contact your Fujitsu vendor

Intel IA-32 bit / Windows XP Home or Professional (Build 2600) Version 5.1, Windows 2000 Version 5.0 (Build 2195)
    MPI software: MPI/Pro 1.6.5
    More information: http://www.mpi-softtech.com

[1] Not downloadable -- this is an informational file only.

    2.1.2. Using MPICH

As an alternative to using the native versions of MPI listed above, you can also use MPICH. MPICH is installed automatically when you install ANSYS on UNIX platforms. MPICH is included on the installation media on Windows platforms; see Configuration for Windows Systems Running MPICH for details.

If you are running on a 64-bit Linux platform (Intel IA-64 or AMD Opteron), you must use MPICH. We do not support a native MPI version for either of these platforms. We do not support an MPICH version for the HP, Sun, and IBM platforms.

    2.1.2.1. Configuration for Windows Systems Running MPICH

As an alternative to MPI/Pro, you can use MPICH-1.2.5. MPICH is also included on the ANSYS media under \MPICH\Setup.exe. Follow the instructions below to install it on your system. You will need to install MPICH on all systems to be used in the distributed run. See http://www-unix.mcs.anl.gov/mpi/mpich/ for more information.


1. Insert the CD in your CD drive. Choose Start > Run and type D:\MPICH\Setup.exe (replace D with the letter of your CD drive).

    2. Exit all running Windows programs and click Next.

    3. The Software License Agreement screen appears. Read the agreement, and if you accept, click Yes.

4. Choose the destination for installation. To choose the default installation location, click Next. To change the default installation location, click Browse to navigate to the desired location, and click Next.

    5. On the Select Components screen, select the default options, then click Next.

6. On the Start Copying Files screen, verify that the information is correct and click Next to continue the installation.

    7. The Setup Complete dialog box appears. Click Finish.

    8. Run C:\Program Files\MPICH\mpd\bin\MPIRegister.exe and enter your login and password.

If you are using MPICH and running the distributed solvers in shared-memory ANSYS, you will need to use an alternate ANSYS script and executable when using the distributed solvers. For MPICH, use the ansddsmpich script and the ansddsmpich.exe executable. See the ANSYS Advanced Analysis Techniques Guide for more information.

    2.1.2.2. Configuration for UNIX/Linux Systems Running MPICH

As an alternative to MPI/Pro, you can use MPICH-1.2.5. MPICH is also included on the ANSYS media under /MPICH/Setup.exe and is installed automatically when you install ANSYS. See http://www-unix.mcs.anl.gov/mpi/mpich/ for more information.

    2.2. Installing Distributed ANSYS

Install ANSYS following the instructions in the Installation and Configuration Guide for your platform. Be sure to complete the installation, including all required post-installation procedures.

2.3. Setting Up the Environment for Distributed ANSYS or the Distributed Solvers

After you've ensured that your cluster meets the prerequisites and you have ANSYS and the correct versions of MPI installed, you need to configure your distributed environment using the following procedure. This procedure applies to both Distributed ANSYS (on supported platforms) and to the distributed solvers running under shared-memory ANSYS.

1. Obtain the machine name for each machine on the cluster. You will need this name to set up the Configure Cluster option of the ANS_ADMIN utility in Step 3.

Windows: Right-click on My Computer, left-click on Properties, and select the Network Identification or Computer Name tab. The full computer name will be listed. Note the name of each machine (not including the domain).

UNIX/Linux: Type hostname on each machine in the cluster. Note the name of each machine. You will need this name to set up the .rhosts file, as well as for the ANS_ADMIN utility.


2. (UNIX/Linux only) Set up the .rhosts file on each machine. The .rhosts file lists all machines in the cluster. The machines should be listed using their complete system name, as taken from uname. For example, an .rhosts file for a two-machine cluster might look like this:

golinux1.ansys.com jqd
golinux2 jqd

    Change/verify .rhosts file permissions on all machines by issuing:

    chmod 600 .rhosts

Verify communication between machines via rsh (e.g., rsh golinux2 ls). You should not be prompted for a password. If you are, check the .rhosts permissions and machine names for correctness. (A small loop that checks every machine at once is sketched at the end of this step.)

If you plan to run the distributed solvers on one machine with multiple processors in a shared memory environment, you need to have the MPI software installed, but you do not need the .rhosts file.
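The following minimal sh sketch checks every machine at once; the hostnames are the example names used above and are assumptions to be replaced with your own cluster's names:

for host in golinux1 golinux2
do
    # ask each machine to run a trivial command; a failure usually points to an .rhosts problem
    rsh $host ls > /dev/null && echo "$host: rsh OK" || echo "$host: rsh FAILED"
done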

3. Configure the hosts90.ans file. Use the ANS_ADMIN utility to configure this file. You can manually modify the file later, but we strongly recommend that you use ANS_ADMIN to create this file initially to ensure that you establish the correct format.

Windows: Start > Programs > ANSYS 9.0 > Utilities > ANS_ADMIN

    UNIX/Linux:

    /ansys_inc/v90/ansys/bin/ans_admin90

Choose Configuration options, and then click Configure Cluster. Choose the hosts90.ans file to be configured and click OK. Then enter the system name (from Step 1) in the Machine hostname field and click Add. On the next dialog box, enter the system type in the Machine type drop-down, and the number of processors in the Max number of jobs field for each machine in the cluster. The working directory field also requires an entry, but this entry is not used by Distributed ANSYS or the distributed solvers. The remaining fields do not require entries.

The hosts90.ans file should be located in your current working directory, your home directory, or the apdl directory.

4. For running Distributed ANSYS with MPICH: The ANSYS90_DIR and the dynamic load library path (e.g., LD_LIBRARY_PATH) must be set by the appropriate shell startup script in order to run Distributed ANSYS with MPICH. Use the following scripts (supplied with ANSYS) to configure the distributed environment correctly for MPICH.

    For csh or tcsh shells, add the following line to your .cshrc, .tcshrc, or equivalent shell startup file:

    source /ansys_inc/v90/ansys/bin/confdismpich90.csh

    For sh or bash shells, add the following line to your .login, .profile, or equivalent shell startup file:

    . /ansys_inc/v90/ansys/bin/confdismpich90.sh

As a test, rsh into all machines in the cluster (including the master) and verify that the ANSYS90_DIR and the LD_LIBRARY_PATH are set correctly. For example:

    rsh master1 env | grep ANSYS90_DIR

    The output should read:

    ANSYS90_DIR=/ansys_inc/v90/ansys

    and


rsh master1 env | grep LD_LIBRARY

    The output should read:

LD_LIBRARY_PATH=/ansys_inc/v90/ansys/lib/:/ansys_inc/v90/ansys/syslib/:/ansys_inc/v90/commonfiles/Tcl/lib/

Note Adding these confdismpich90 lines to your shell startup file will place the required ANSYS load library path settings in front of any existing system load library settings and will likely affect other applications, including native MPI. If you have problems running other applications after including these scripts, you will need to comment out these lines to run the other applications.

    5. On UNIX/Linux systems, you can also set the following environment variables:

    ANSYS_RSH - This is the remote shell command to use in place of the default rsh.

ANSYS_NETWORK_START - This is the time, in seconds, to wait before timing out on the start-up of the client (default is 15 seconds).

ANSYS_NETWORK_COMM - This is the time to wait, in seconds, before timing out while communicating with the client machine (default is 5 seconds).
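For example, the following sh/bash sketch sets these variables in a shell startup file (csh or tcsh users would use setenv instead); the specific values and the remsh path are illustrative assumptions only, not recommendations from this guide:

export ANSYS_RSH=/usr/bin/remsh      # alternate remote shell command (assumed path)
export ANSYS_NETWORK_START=30        # allow 30 seconds for client start-up
export ANSYS_NETWORK_COMM=10         # allow 10 seconds for client communication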

On IBM systems:

LIBPATH - On IBM, if POE is installed in a directory other than the default (/usr/lpp/ppe.poe), you must supply the installed directory path via the LIBPATH environment variable:

    export LIBPATH=nondefault-directory-path/lib

    On SGI systems:

On SGI, in some cases, the default settings for environment variables MPI_MSGS_PER_PROC and MPI_REQUEST_MAX may be too low and may need to be increased. See the MPI documentation for SGI for more information on settings for these and other environment variables.

On SGI, when you install the SGI MPI software, you must also install the array 3.2 software (available from the Message Passing Toolkit 1.3 distribution). The array daemon must be running on each system you plan to use for the distributed solvers. Update the /usr/lib/arrayd.conf file to list each system on which you plan to run the distributed solvers. The local hostname of the machine must be listed first in this file.

    To verify that these environment variables are set correctly on each machine, run:

    rsh machine1 env

    On Windows systems only (for running the distributed solvers):

If running MPICH on Windows: Add C:\Program Files\MPICH\mpd\bin to the PATH environment variable on all Windows machines (assuming MPICH was installed on the C:\ drive). This directory must be in your path for distributed processing to work correctly.


2.3.1. Using the mpitest Program

The mpitest program is a ping program to verify that MPI is set up correctly. The mpitest program should start without errors. If it does not, check your paths, machines file, .rhosts file, and permissions; correct any errors, and rerun.

When running the mpitest programs, you must use an even number of nodes. The following examples use two nodes. To change the number of nodes, edit the scripts.

    2.3.1.1. Running a Local Test

    On Windows:

    mpirun -np 2 "C:\Program Files\ANSYS Inc\V90\ANSYS\bin\platform\mpitest.exe"

    On UNIX:

    /ansys_inc/v90/ansys/bin/mpitest90

    Note If you are using MPICH, run mpitestmpich instead of mpitest.

    2.3.1.2. Running a Distributed Test

    On Windows running MPI/Pro:

    1. Create a file named machines in your local/home directory. Open the machines file in an editor.

2. Add your master and slave machines in your cluster. For example, in this cluster of two machines, the master machine is gowindows1. List the machine name separately for each processor (CPU) on that machine. For example, if gowindows1 has four processors and gowindows2 has two, the machines file would look like this:

gowindows1
gowindows1
gowindows1
gowindows1
gowindows2
gowindows2

    Note You can also simply list the number of processors on the same line: gowindows1 4.

    3. From a command prompt, navigate to your working directory. Run the following:

    mpirun -np x -mf machines "C:\Program Files\ANSYS Inc\V90\ANSYS\bin\platform\mpitest.exe"

    where x is the number of processors in your machines file (6 in this example).

On Windows running MPICH:

    1. Create a file named machines in your local/home directory. Open the machines file in an editor.

2. Add your master and slave machines in your cluster. For example, in this cluster of two machines, the master machine is gowindows1. List the machine name separately for each processor (CPU) on that machine. For example, if gowindows1 has four processors and gowindows2 has two, the machines file would look like this:


gowindows1
gowindows1
gowindows1
gowindows1
gowindows2
gowindows2

    Note You can also simply list the number of processors on the same line: gowindows1 4.

    3. From a command prompt, navigate to your working directory. Run the following:

    mpirun -np x -machinefile machines "C:\Program Files\ANSYS Inc\V90\ANSYS\bin\platform\mpitest.exe"

    where x is the number of processors in your machines file (6 in this example).

    On Linux running MPICH:

Note These instructions are typically done one time by the system administrator. Individual users may not have the necessary privileges to complete all of these steps.

1. Edit the machines file. Navigate to the /ansys_inc/v90/ansys/mpich//share subdirectory. Open the file machines.LINUX in an editor.

2. In the machines.LINUX file, change machine1 to the master machine in your Linux cluster. For example, in our cluster of two machines, the master machine is golinux1. List the machine name separately for each processor (CPU) on that machine. For example, if golinux1 has four processors and golinux2 has two, the machines.LINUX file would look like this:

golinux1
golinux1
golinux1
golinux1
golinux2
golinux2

    Delete any other machines listed.

Note If you are running an SMP box, you will simply list the number of processors on the same line: golinux1 4.

3. Edit the mpitestmpich90 script to read np = x where x is the number of processors in your machines.LINUX file.

    4. Navigate to your working directory. Run the following:

    /ansys_inc/v90/ansys/bin/mpitestmpich90

    On Linux running MPI/Pro:

    1. Edit the machines file in the /etc subdirectory. Open the machines file in an editor.

2. In the machines file, change machine1 to the master machine in your Linux cluster. For example, in our cluster of two machines, the master machine is golinux1. List the machine name separately for each processor (CPU) on that machine. For example, if golinux1 has four processors and golinux2 has two, the machines file would look like this:


golinux1
golinux1
golinux1
golinux1
golinux2
golinux2

    Delete any other machines listed.

    3. Edit the mpitest90 script to read np = x where x is the number of processors in your machines file.

    4. Navigate to your working directory. Run the following:

    /ansys_inc/v90/ansys/bin/mpitest90

On UNIX machines running native MPI:

The process is the same as described for running MPICH on Linux machines (above), but you will need to contact your MPI vendor to find out where the appropriate machines file resides and how to edit it. Once you have properly edited the machines file following your vendor's instructions, edit mpitest90 to read np = x where x is the number of processors in your machines file.

    2.3.2. Other Considerations

    Other factors can also affect your distributed analysis.

Hardware: Low-end hardware, such as cables, can reduce the speed improvements you see in a distributed analysis. We typically recommend that you use a network cable with a communication speed of 160 megabits/second (20 megabytes/second) or higher.

A single 64-bit machine with multiple processors running under shared memory typically works as well as a cluster (multiple machines).

PCG considerations: Review the following guidelines if you will be using the PCG or DPCG solver. Note that these are not steadfast rules, but rather recommendations to help you get started.

    The master machine needs more memory than the slave machines.

    Deploy 64-bit platforms such as Linux on Itanium or AMD chips.

Use the /3GB switch on 32-bit Windows systems.

As a broad guideline, use the following formula to get a general idea of memory usage for the DPCG solver. In this formula, n is the total number of CPU processors used, and MDOF is million degrees of freedom.

    Master machine (Machine 0): MDOF(maximum) = Machine(0) Memory(GB) / (0.1 + 1.0/No. of machines)

Slave machines (Machines 1 - n): MDOF(maximum) = Machine(1...n) Memory(GB) * n

For example, if you have a master machine that is a 32-bit Windows machine with 2.2 GB available RAM, using the /3GB switch, and a total of four machines in the cluster, you could solve a problem up to 6.3 MDOF:

    MDOF = 2.2 GB / (0.1 + 1 / 4) = 6.3 MDOF

    In this scenario, the slave machines must have 6.3 / 4 or 1.575 GB of available memory.
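The same guideline arithmetic can be scripted as a quick sizing check. The following sh/awk sketch is only an illustration of the formula above, not an official sizing tool; the input values are the ones from the example:

master_mem_gb=2.2     # available RAM on the master machine (GB)
n_machines=4          # total number of machines in the cluster
awk -v mem="$master_mem_gb" -v n="$n_machines" 'BEGIN {
    mdof = mem / (0.1 + 1.0 / n)            # master-machine limit from the formula above
    printf "Maximum problem size: %.1f MDOF\n", mdof
    printf "Memory needed on each slave: %.2f GB\n", mdof / n
    # note: the guide rounds to 6.3 MDOF before dividing, which gives 1.575 GB per slave
}'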


2.4. Starting Distributed ANSYS

After you've completed the configuration steps, you can start Distributed ANSYS via the launcher or via the command line. We recommend that you use the ANSYS launcher to ensure the correct settings. Both methods are explained here.

    2.4.1. Starting Distributed ANSYS via the Launcher

    Use the following procedure to start Distributed ANSYS via the launcher.

    1. Open the ANSYS launcher:

    UNIX:

    launcher90

2. Select the correct environment and license on the Launch tab. Select the Parallel Performance for ANSYS add-on.

    3. Go to the Solver Setup tab. Select Run Distributed ANSYS.

    Specify the MPI type to be used for this distributed run. MPI types include:

    MPI Native

    MPICH

    MPICH_SH (Shared-memory Linux machines)

You must also specify either local machine or multiple hosts. If local machine, specify the number of processors on that machine. If multiple hosts, select the machines you want to use from the list of available hosts. The list of available hosts is populated from the hosts90.ans file. Click on the machines you want to use and click Add to move them to the Selected Hosts list to use them for this run. You can also add or remove a host, but be aware that adding or removing a host from here will modify only this run; the hosts90.ans file will not be updated with any new information from this dialog box.

    4. Click Run to launch ANSYS.

You can view the actual mpirun command line that the launcher issues by setting the ANS_SEE_RUN_COMMAND environment variable to 1. Setting this environment variable is useful for troubleshooting.
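For example (sh/bash syntax; csh users would use setenv), you might set the variable before opening the launcher:

export ANS_SEE_RUN_COMMAND=1    # echo the mpirun command line that the launcher builds
launcher90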

    2.4.2. Starting Distributed ANSYS via Command Line

    You can also start Distributed ANSYS via the command line using the following procedures.

Local Host If you are running Distributed ANSYS locally (i.e., running across multiple processors on a single local machine), you need to specify the number of processors:

    For native MPI or MPI/Pro:

    ansys90 -pp -dis -np n

    For MPICH:

    ansys90 -pp -mpi mpich -dis -np n

    where n is the number of processors.


For example, if you run a job in batch mode on a local host using four processors and MPI, with an input file named input1 and an output file named output1, the launch command would be:

    ansys90 -pp -dis -np 4 -b -i input1 -o output1

Multiple Hosts If you are running Distributed ANSYS across multiple hosts, you need to specify the number of processors on each machine:

    For native MPI or MPI/Pro:

    ansys90 -pp -dis -machines machine1:np:machine2:np:machine3:np

    For MPICH:

    ansys90 -pp -mpi mpich -dis -machines machine1:np:machine2:np:machine3:np

where machine1 (or 2 or 3) is the name of the machine and np is the number of processors you want to use on the corresponding machine.

For example, if you run a job in batch mode using two machines (one with four processors and one with two processors) and MPI, with an input file named input1 and an output file named output1, the launch command would be:

    ansys90 -pp -dis -b -machines machine1:4:machine2:2 -i input1 -o output1

For MPICH, you can also use just the -np n option to run across multiple hosts. To use this option, you will need to modify the default machines.LINUX file, located in /ansys_inc/v90/ansys/mpich//share. The format is one hostname per line, with either hostname or hostname:n, where n is the number of processors in an SMP. The hostname should be the same as the result from the command hostname.

    By default, the machines.LINUX file is set up with only one machine:

machine1
machine1
machine1
machine1

    To run multiple machines, you need to modify this file to list the additional machines:

machine1
machine1
machine2
machine2
machine3
machine3

The order in which machines are listed in this file is the order in which the work will be distributed. For example, if you wanted to run Distributed ANSYS across one processor of each machine before being distributed to the second processor of any of the machines, you might want to order them as follows:

machine1
machine2
machine3
machine1
machine2
machine3

Note Do not run Distributed ANSYS in the background via the command line (i.e., do not append an ampersand (&) to the command line).


Chapter 3: Running Distributed ANSYS

    3.1. Advantages of Using Distributed ANSYS

In Distributed ANSYS, the entire /SOLUTION phase runs in parallel, including the stiffness matrix generation, linear equation solving, and results calculations. As a result, it is scalable from 2 to 16 processors, with up to 8X speedup when using a sufficient number of processors.

You can run one job in Distributed ANSYS on a single machine with multiple processors or on multiple machines with one or more processors in each machine.

The processor you use to launch the run will do preprocessing in Distributed ANSYS, as well as any postprocessing. During /SOLUTION, each processor creates its own JOBNAME.RST or .RTH file. These files are then automatically merged for postprocessing.

    3.2. Supported Analysis Types

    The following analysis types are supported by Distributed ANSYS:

Static linear or nonlinear analyses for single field structural problems (DOFs: UX, UY, UZ, ROTX, ROTY, ROTZ, WARP) and

    Single field thermal analyses (DOF: TEMP)

    SHELL131 and SHELL132 are not supported.

    Full transient analyses for single field structural and single field thermal analysis

    Spectrum analyses, cyclic symmetry analyses, and modal analyses are not supported.

FLOTRAN, low- and high-frequency electromagnetics, and coupled-field analyses (including multifield, ROM, and FSI) are not supported.

    3.3. Supported Features

    The following nonlinearities are supported by Distributed ANSYS:

    Large deformations (NLGEOM,on)

    Line search (LNSRCH,on)

    Auto time stepping (AUTOTS,on)

    Solution controls

    Nonlinear material properties specified by the TB command

    Gasket elements and pre-tension elements

    U/P formulations introduced by the 18n elements and TARGE169 - CONTA178

Contact nonlinearity (TARGE169 - CONTA178, CONTAC52), with the following restrictions for the CONTA17x elements:

    KEYOPT(1) = 0, 2 only

    KEYOPT(10) = 0, 1, 2 only

KEYOPT(11) = 0 only

    All other KEYOPTS are supported as documented in the element descriptions.

    The following ANSYS features are not supported by Distributed ANSYS:

    p-Elements

    Superelements/automatic substructuring

    Element morphing

    Arc-length method (ARCLEN)

    Inertia relief (IRLF)

    Prestress effects (PSTRES)

    Initial conditions (IC)

    Initial stress (ISTRESS,ISFILE)

    Nonlinear diagnostic tool (NLHIST)

    Partial solution (PSOLVE)

    Fast thermal solution option (THOPT)

    The radiosity surface elements (SURF251, SURF252)

Optimization and probabilistic design are not supported under Distributed ANSYS. Restarts are not supported. The PGR file is not supported.

    3.4. Running a Distributed Analysis

The following steps assume that you have set up your distributed environment and launched ANSYS following the steps in Chapter 2, Configuring Distributed ANSYS. After you have your distributed environment correctly configured, you run a distributed analysis in Distributed ANSYS the same way you do in shared-memory ANSYS.

    1. Set up your analysis (geometry, loads, etc.) as you normally would.

    2. Specify your solution output using OUTRES. We recommend that you not use OUTPR.

3. Set your analysis options by running the Solution Controls dialog: Main Menu> Solution> Analysis Type> Sol'n Controls and click on the Sol'n Options tab. Choose the solver you want to use and click OK. You can use one of the following solvers:

Distributed sparse (EQSLV,dsparse): This solver performs factorization of the matrix and back/forward substitution in distributed parallel mode, and has demonstrated the best performance, with speed improvements of 6 - 8X using 12-16 processors. However, you cannot use this solver for nonsymmetric matrices or u-P formulations.

Sparse (EQSLV,sparse): Can be run sequentially or in shared memory parallel mode (/CONFIG,NPROC,n). This solver is the default and will generally solve most models. However, your particular model may be better suited to one of the other solvers (such as the PCG solver), depending on the particular configuration. Read the solver descriptions carefully when selecting a solver. If you have sufficient memory in the system, you may want to try the distributed sparse solver using EQSLV,dsparse.

    PCG (EQSLV,pcg)

JCG (EQSLV,jcg) (not available from the Solution Controls GUI)


Choosing the PCG or JCG solver when you're running Distributed ANSYS will automatically run the distributed version of these solvers.

    Other solvers (ICCG, frontal, etc.) will not work in a distributed environment.

4. Solve the analysis.
Command(s): SOLVE
GUI: Main Menu> Solution> Solve> Current LS

5. After the solution completes, specify the set of results to be read from the results file. Note that a SET command is required as not all solution data is in the database.

Command(s): SET
GUI: Main Menu> General Postproc> Read Results

    6. Postprocess your results as you would for any analysis.

    Notes on Running Distributed ANSYS:

    Only the master machine reads the config.ans file.

    Distributed ANSYS ignores the /CONFIG,noeldb command.

3.5. Understanding the Working Principles and Behavior of Distributed ANSYS

The fundamental difference between Distributed ANSYS and shared-memory ANSYS is that n number of ANSYS jobs will be running at the same time (where n is the total number of CPU processors used) for one model. These n jobs are not aware of each other's existence unless the individual CPU processors are communicating (sending messages). Distributed ANSYS is the method by which the CPU processors communicate with each other in the right location and at the appropriate time.

Before entering the /SOLUTION phase, Distributed ANSYS automatically decomposes the problem into n CPU domains so that each CPU processor works on only a portion of the model. When the existing /SOLUTION phase ends (e.g., FINISH is issued), Distributed ANSYS works on the entire model again (i.e., it behaves like shared-memory ANSYS). However, at this time, the database name is Jobname0.DB (not Jobname.DB).

    Following is a summary of behavioral differences between Distributed ANSYS and shared-memory ANSYS.

Jobname Conventions: The master processor will create Jobname0 and the slaves Jobname1 through Jobnamen. When you issue a SAVE (or PASAVE, CDWRITE, or LSWRITE) to default files, Distributed ANSYS will save all of these to Jobname0 with the appropriate extension. When you RESUME (or PARESU, CDREAD, or LSREAD) from the default file, ANSYS reads first from Jobname0.EXT. If Jobname0.EXT is not found, it will then try to read Jobname.EXT. All actions in PREP7 or POST while in Distributed ANSYS will work on Jobname0.EXT by default. If a non-default file is specified, then Distributed ANSYS behaves the same as shared-memory ANSYS.

Post-Processing Files and Filenames: During the /SOLUTION phase, Distributed ANSYS creates Jobnamen.RST or Jobnamen.RTH files in each CPU processor (you may see these working files in the directory). At the successful completion of the run, Distributed ANSYS automatically merges the Jobnamen.RST (or .RTH) files into a single file called Jobname.RST (or .RTH). From that point, postprocessing is the same as in shared-memory ANSYS.

Use of APDL: In pre- and post-processing, APDL works the same in Distributed ANSYS as in shared-memory ANSYS. However, in /SOLUTION, Distributed ANSYS does not support certain *GET items. In general, Distributed ANSYS supports global solution *GET results such as total displacements and reaction forces. It does not support element level results specified by ESEL, ESOL, and ETABLE labels. Unsupported items will return a *GET value of zero.

Error Handling and Job Hanging: When an error occurs in one of the CPU processors during the Distributed ANSYS execution, the processor sends an error message to all other CPU processors so that the entire run exits gracefully. However, if an error message fails to send, the job may hang, and you will need to manually kill all the processes. You can remove each Jobnamen.ERR file if you suspect that one of the jobs hung. This action should rarely be required.

Batch and Interactive Mode: You can launch Distributed ANSYS in either interactive or batch mode on the master processor. However, the slave processor is always in batch mode. The slave process cannot read the START90.ANS or STOP90.ANS files. The master process sends all /CONFIGURE,LABEL commands to the slave processors as needed.

Postprocessing with Database File and SET Commands: Shared-memory ANSYS can postprocess using the Jobname.DB file (if the solution results were saved), as well as using the Jobname.RST file. Distributed ANSYS, however, can only postprocess using the Jobname.RST file and cannot use the Jobname.DB file as no solution results are written to the database. You will need to issue a SET command before postprocessing.

Print Output (OUTPR Command): In Distributed ANSYS, the OUTPR command prints NSOL and RSOL in the same manner as in shared-memory ANSYS. However, for other items such as ESOL, Distributed ANSYS prints only the element solution on the CPU domain of the master processor. Therefore, OUTPR,ESOL has incomplete information and is not recommended. Also, the order of elements is different from that of shared-memory ANSYS due to domain decomposition. A direct one-to-one element comparison with shared-memory ANSYS will be different if using OUTPR.

ASCII Job Output Files: When a Distributed ANSYS job is executed, the output for the master processor is written to the screen by default. If you specified an output file via the launcher or the -o option, the output is written to that file. Distributed ANSYS automatically outputs the ASCII files from each slave processor to Jobnamen.OUT. Normally these slave output files have little value because all of the job information is on the master processor (Jobname0.OUT). The same principle also applies to the other ANSYS ASCII files such as Jobname0.ERR, Jobname0.MNTR, etc. However, if the job hangs, you may be able to determine the cause by studying the contents of the slave output files.

Shared Memory Sparse Solver in Distributed ANSYS: You can use /CONFIG,NPROC,n to activate shared-memory parallel behavior for the shared-memory sparse solver in Distributed ANSYS (EQSLV,sparse). The distributed sparse solver is more scalable than the shared-memory sparse solver. However, the distributed sparse solver uses more memory than the shared-memory sparse solver and is less robust. For very difficult problems, we recommend that you try using the shared-memory sparse solver because you can still achieve scalable performance in element formulation and results calculations, which often arise in medium-size nonlinear problems.

Large Number of CE/CP and Contact Elements: In all PPFA products (both shared-memory ANSYS and Distributed ANSYS), the program can handle a large number of coupling and constraint equations (CE/CP) and contact elements. However, specifying too many of these items can force Distributed ANSYS to communicate more among each CPU processor, resulting in longer elapsed time to complete a distributed parallel job. You should reduce the number of CE/CP if possible and make potential contact pairs in a smaller region to achieve non-deteriorated performance.

    3.6. An Example Distributed ANSYS Analysis (Command Method)

    The following input listing demonstrates a simple analysis run under Distributed ANSYS.

/title, A simple example to run Distributed ANSYS
/com, Use eqslv,pcg or dsparse, or sparse to toggle solvers.
/com, Launch the job by "ansys90 -np 2 -pp -dis -i input -o output".
/com, This file is "input", np is # of processors == 2.
/com, For this input deck, np can be set to 2, 3 or 4, since it only
/com, has 4 elements.
/prep7
mp,ex,1,10
mp,dense,1,1.0
mp,nuxy,1,0.3
n,1,
n,2,1
n,3,1,1
n,4,0,1
n,5,2,0
n,6,2,1
n,7,1,2
n,8,0,2
n,9,2,2
et,1,182
e,1,2,3,4
e,2,5,6,3
e,4,3,7,8
e,3,6,9,7
d,4,all
d,6,all
f,3,fy,-1
f,3,fx,-10
cp,2,all,1,8
cp,20,all,2,7
cp,200,all,5,9
fini

/solution
acel,1,1,1
eqslv,pcg,,,       ! use DPCG solver in the Distributed ANSYS run
!eqslv,dsp         ! use Distributed sparse solver
!eqslv,sparse      ! use default shared memory version of sparse solver
outres,all
solve
fini

/post1
prnsol,            ! print nodal solution of the whole model
fini
