EGEE-OSG Interoperation Infrastructure Report

Preface

This booklet summarizes the techniques for federating a single set of local resources simultaneously with grid infrastructures based on different grid middleware, so that resource providers can maximize the utilization of their local resources and users can maximize the resources available to them. We hope it will serve as a valuable reference as production-level grid infrastructures continue to be built and expanded.

We thank everyone who helped bring this booklet to completion.

December 1, 2009

Authors: Beob Kyun Kim (KISTI)
         Jin Seung Yu (KISTI)
         Hee Jun Yoon (KISTI)
         Seok Myun Kwon (KISTI)
         Christophe Bonnaud (KISTI)
         Hyung Woo Park (KISTI)
         Haeng Jin Jang (KISTI)
Table of Contents

Glossary
Ⅰ. Overview
  1. Grid Interoperability
Ⅱ. KISTI Production-Level Grid Infrastructures
  1. EGEE
  2. OSG
  3. KISTI Resources
Ⅲ. Technologies for EGEE-OSG Interoperation
  1. GlideinWMS
  2. gLexec
Ⅳ. Building the EGEE-OSG Interoperable Infrastructure
  1. OSG-CE Installation
  2. GUMS Server Installation
  3. gLexec Configuration
Appendix: config.ini
List of Figures

<Figure 1> Partner Countries of the EGEE-III Project
<Figure 2> EGEE Project Phases
<Figure 3> EGEE Workload in 2007
<Figure 4> Users and resource distribution in 2007 for EGEE
<Figure 5> EGEE Project Activities
<Figure 6> OSG Map: 85 resources connected, centered on North America
<Figure 7> OSG usage statistics: hours used per VO
<Figure 8> OSG usage statistics: hours used per resource
<Figure 9> Grid middleware deployment of the KISTI resources federated with EGEE
<Figure 10> The role of GlideinWMS from the user's perspective
<Figure 11> The pilot paradigm of GlideinWMS
<Figure 12> GlideinWMS
<Figure 13> Main components of GlideinWMS
<Figure 14> gLexec in a Condor pool
<Figure 15> Proxy management with gLexec
<Figure 16> Deployment of the EGEE-OSG interoperable grid middleware
Glossary
□ AFS : Andrew File System
□ ALICE : A Large Ion Collider Experiment
□ AliEn : ALICE Environment
□ API : Application Programming Interface
□ BDII : Berkeley Database Information Index
□ CASTOR : CERN Advanced STORage manager
□ CE : Computing Element
□ CERN : European Laboratory for Particle Physics
□ ClassAd : Classified advertisement (Condor)
□ CLI : Command Line Interface
□ CNAF : INFN's National Center for Telematics and Informatics
□ CREAM : Computing Resource Execution And Management
□ dcap : dCache Access Protocol
□ DIT : Directory Information Tree (LDAP)
□ DLI : Data Location Interface
□ DN : Distinguished Name
□ EDG : European DataGrid
□ EDT : European DataTAG
□ EGEE : Enabling Grids for E-sciencE
□ ESM : Experiment Software Manager
□ FCR : Freedom of Choice for Resources
□ FNAL : Fermi National Accelerator Laboratory
□ FTS : File Transfer Service
□ GFAL : Grid File Access Library
□ GG : Grid Gate (aka gatekeeper)
□ GGF : Global Grid Forum (now called OGF)
□ GGUS : Global Grid User Support
□ GIIS : Grid Index Information Server
□ GLUE : Grid Laboratory for a Uniform Environment
□ GMA : Grid Monitoring Architecture
□ GOC : Grid Operations Centre
□ GRAM : Grid Resource Allocation Manager
□ GRIS : Grid Resource Information Service
□ GSI : Grid Security Infrastructure
□ gsidcap : GSI-enabled version of the dCache Access Protocol
□ gsirfio : GSI-enabled version of the Remote File Input/Output protocol
□ GUI : Graphical User Interface
□ GUID : Grid Unique ID
□ HSM : Hierarchical Storage Manager
□ ICE : Interface to CREAM Environment
□ ID : Identifier
□ INFN : Istituto Nazionale di Fisica Nucleare
□ IS : Information Service
□ JDL : Job Description Language
□ kdcap : Kerberos-enabled version of the dCache Access Protocol
□ LAN : Local Area Network
□ LB : Logging and Bookkeeping Service
□ LDAP : Lightweight Directory Access Protocol
□ LFC : LCG File Catalogue
□ LFN : Logical File Name
□ LHC : Large Hadron Collider
□ LCG : LHC Computing Grid
□ LRC : Local Replica Catalogue
□ LRMS : Local Resource Management System
□ LSF : Load Sharing Facility
□ MDS : Monitoring and Discovery Service
□ MPI : Message Passing Interface
□ MSS : Mass Storage System
□ NS : Network Server
□ OGF : Open Grid Forum (formerly called GGF)
□ OS : Operating System
□ OSG : Open Science Grid
□ PBS : Portable Batch System
□ PFN : Physical File name
□ PID : Process IDentifier
□ POOL : Pool of Persistent Objects for LHC
□ PPS : Pre-Production Service
□ RAL : Rutherford Appleton Laboratory
□ RB : Resource Broker
□ RFIO : Remote File Input/Output
□ R-GMA : Relational Grid Monitoring Architecture
□ RLI : Replica Location Index
□ RLS : Replica Location Service
□ RM : Replica Manager
□ RMC : Replica Metadata Catalogue
□ RMS : Replica Management System
□ ROC : Regional Operations Centre
□ ROS : Replica Optimization Service
□ SAGA : Simple API for Grid Applications
□ SAM : Service Availability Monitoring
□ SASL : Simple Authentication and Security Layer (LDAP)
□ SE : Storage Element
□ SFN : Site File Name
□ SMP : Symmetric Multi Processor
□ SN : Subject Name
□ SRM : Storage Resource Manager
□ SURL : Storage URL
□ TURL : Transport URL
□ UI : User Interface
□ URI : Uniform Resource Identifier
□ URL : Uniform Resource Locator
□ UUID : Universal Unique ID
□ VDT : Virtual Data Toolkit
□ VO : Virtual Organization
□ WLCG : Worldwide LHC Computing Grid
□ WMS : Workload Management System
□ WN : Worker Node
□ WPn : Work Package #n
Ⅰ. Overview
1. Grid Interoperability
□ Most grid resources today are dedicated to a single grid infrastructure such as EGEE or OSG
○ Low utilization of local resources
○ Difficulty supporting application research that relies on a grid infrastructure the site does not participate in
□ Grid Interoperability Solutions
○ From the user's perspective: SAGA (Simple API for Grid Applications)
– A standard, developed around the OGF, that enables job submission to grid infrastructures built on different grid middleware
– Standardized areas
․ Files
․ Local Files
․ Job Submission and Management
․ Streaming communication between processes
– Topics covered by the core API, independent of any particular grid operation
․ Tasks
․ Sessions
․ Security
○ From the resource provider's perspective: configure local resources so that they can be federated simultaneously with grid infrastructures built on different middleware
– Existing core technologies
․ Condor-based GlideinWMS
․ gLexec
Ⅱ. KISTI Production-Level Grid Infrastructures
1. EGEE
□ WLCG/EGEE status
○ EGEE is the largest grid infrastructure in the world and supports research in a wide range of fields (figures as of July 2008):
– 120 participating institutions
– 259 sites in 52 countries
– Number of CPUs available to users (24x7): ~72,000
– Storage available: ~20 PB disk + tape MSS
– VOs: 274 (including some 130 official VOs)
– Users: 14,000 (including some 7,500 registered users)
– More than 150,000 jobs per day
○ With EU support, EGEE-III started in May 2008, following EGEE-I and EGEE-II, and is scheduled to run until April 2010.
□ EGEE activities are divided as follows.
<Figure 5> EGEE Project Activities (share of effort): NA1 2%, NA2 5%, NA3 8%, NA4 20%, NA5 1%, SA1 48%, SA2 2%, SA3 9%, JRA1 5%
2. OSG
□ A consortium of US science research communities, national laboratories, the LHC experiments, education communities, and computer scientists and engineers
□ Connects universities and laboratories into a national-scale cyberinfrastructure of distributed computing and storage resources
□ Main research areas
○ HEP: the ATLAS and CMS experiments at the LHC + CDF / DØ at FNAL
○ LIGO experiment
○ Biochemistry (protein structure, folding)
○ Climate modeling
○ Many other fields
□ Funding sources: DoE (Department of Energy) and NSF (National Science Foundation)
□ OSG map as of summer 2009
<Figure 6> OSG Map: 85 resources connected, centered on North America
□ OSG from the user's perspective
○ A network of distributed CPU and disk resources
– Institutions and VOs (Virtual Organizations) contribute their local resources and federate them with OSG
․ Fermigrid is a representative example of a campus grid connected to OSG
․ About 50,000 CPUs and 10 PB of disk are currently federated
– Additional experiments beyond the contractual commitments are also supported
– Common software is used to access the grid resources
․ VDT (Virtual Data Toolkit) + OSG-specific configuration
․ Includes software supplied by external providers
– GOC (Grid Operations Center)
․ Monitoring, problem tracking, and user support
○ Usage statistics
<Figure 7> OSG usage statistics: hours used per VO
<Figure 8> OSG usage statistics: hours used per resource
3. KISTI Resources
□ The KISTI resources were expanded from those originally built for the ALICE Tier-2 center, which started in 2007
○ They are federated with EGEE by default, using gLite as the base grid middleware
<Figure 9> Grid middleware deployment of the KISTI resources federated with EGEE
□ This report describes the technical background and the procedure for federating the KISTI resources, which are already federated with EGEE as shown in Figure 9, with OSG at the same time.
Ⅲ. Technologies for EGEE-OSG Interoperation
1. GlideinWMS
□ Overview
○ Purpose: provide a simple way to access grid resources
○ A kind of Workload Management System that runs on top of Condor and is based on glideins (a Glidein-based WMS)
□ GlideinWMS on the grid
○ Adds a layer on top of the network of grid sites
○ Hides the differences in configuration and deployed technology between sites
○ Presents the grid to the user as one single pool
<Figure 10> The role of GlideinWMS from the user's perspective
□ Pilot jobs with GlideinWMS
○ Pilot jobs are used to build the upper layer that unifies the grid sites
<Figure 11> The pilot paradigm of GlideinWMS
○ The user does not submit the actual job
– A pilot job is sent instead, and the pilot job then works to satisfy the user's requests
○ What a pilot job does after arriving on a grid compute node
– Validates the grid resource (resource performance, network, applications, etc.)
– Sets up the environment
– Downloads (pulls) the user's actual job from the WMS
○ Hides the heterogeneity of the resources that make up the grid
– The user sees only a single computing pool
□ Condor glideins
○ http://www.cs.wisc.edu/condor/
○ Condor is fundamentally based on a distributed architecture
○ A Condor glidein can be defined as a grid job that starts the ordinary Condor daemons
– In other words, Condor glideins are pilot jobs implemented with Condor
<Figure 12> GlideinWMS
□ GlideinWMS architecture
<Figure 13> Main components of GlideinWMS
○ Composed of six logical components
– A Condor central manager (collector + negotiator)
– One or more Condor submit machines
– A glideinWMS collector
– One or more VO frontends
– One or more glidein factories
– The glideins
2. gLexec
□ From a security standpoint it is best to run Condor's startd under an ordinary user account
○ Problems that can arise in that case
– The startd cannot change the UID on its own when starting a user's job
– If the same UID is reused, the daemon can be hijacked by a malicious user
□ gLexec
<Figure 14> gLexec in a Condor pool
○ A grid version of suexec
○ A Condor interface was developed to make use of gLexec
– gLexec provides a way to change the UID based on a given user's proxy
– Even when jobs run on the same node, giving each job a different UID protects it from other users
– The startd itself is also protected from the users' jobs
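○ As a rough illustration, a caller invokes gLexec as follows (the install path and proxy locations below are placeholders; the two GLEXEC_* environment variables are how the caller points gLexec at the payload user's credentials)

# sketch: running a command under the UID mapped from a user proxy
export GLEXEC_CLIENT_CERT=/tmp/x509up_u40001   # payload user's proxy, used for the mapping
export GLEXEC_SOURCE_PROXY=/tmp/x509up_u40001  # proxy copied into the target account
/opt/glite/sbin/glexec /usr/bin/id             # 'id' runs under the mapped UID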
□ Proxy management with gLexec
○ A single identity is used to submit all glidein jobs
– It must carry the pilot role (see the X.509 standard)
○ Condor ships the user's proxy inside the pilot job sent to the grid compute node
– The user also uses this proxy when accessing other resources
○ Proxy lifetime and renewal are not managed by glideinWMS
– The user manages them directly
– That is, each user must track the lifetime of their own proxy and renew it themselves
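– In practice the user keeps the proxy alive with the standard VOMS client tools, for example (the VO name here is illustrative):

voms-proxy-init --voms alice --valid 24:00   # create/renew a VOMS proxy valid for 24 hours
voms-proxy-info --timeleft                   # remaining lifetime, in seconds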
<Figure 15> Proxy management with gLexec
Ⅳ. Building the EGEE-OSG Interoperable Infrastructure
□ The following describes the process by which the KISTI EGEE infrastructure was built to be simultaneously federated with OSG.
1. OSG-CE Installation
□ Installing pacman
○ pacman must be installed first in order to install the OSG VDT
○ The VDT can also be installed component by component, but pacman is the easiest method, and upgrades after the initial installation can also be performed through pacman
cd /opt
wget http://atlas.bu.edu/~youssef/pacman/sample_cache/tarballs/pacman-latest.tar.gz
tar xvfz pacman-latest.tar.gz
cd pacman-3.29
. setup.sh
○ setup.sh in the pacman directory must be read in with the source command; it contains the environment settings needed while installing through pacman
□ Create the directories for the OSG-CE installation
## create osg directory
mkdir /opt/osg-1.2
## create data directory
mkdir /opt/data
## create app directory with correct permissions
mkdir -p /opt/app/etc
chmod 1777 /opt/app/etc
## create osg wn client directory
mkdir /opt/wn-1.2
## create grid security directory
mkdir /etc/grid-security
mkdir /etc/grid-security/http
□ A host certificate for the CE host and an http service certificate must be obtained from a CA.
○ This procedure is identical to the certificate installation performed when building the EGEE infrastructure.
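○ A minimal sketch of the two requests using the standard Globus tooling shipped with the VDT (the exact procedure depends on your CA; the hostname below is this site's CE):

grid-cert-request -host ce03.sdfarm.kr                 # host certificate request
grid-cert-request -service http -host ce03.sdfarm.kr   # http service certificate request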
□ Installing the CE software
[root@ce03 opt]# cd /opt/osg-1.2
[root@ce03 osg-1.2]# pacman -allow trust-all-caches -get http://software.grid.iu.edu/osg-1.2:ce
Beginning VDT prerequisite checking script vdt-common/vdt-prereq-check...
All prerequisite checks are satisfied.
========== IMPORTANT ==========
Most of the software installed by the VDT *will not work* until you install
certificates. To complete your CA certificate installation, see the notes
in the post-install/README file.
INFO: The Globus-Base-Info-Server package is not supported on this platform
INFO: The Globus-Base-Info-Client package is not supported on this platform
Pacman Installation of OSG-1.2.0 Complete
## managed fork installation
pacman -allow trust-all-caches -get http://software.grid.iu.edu/osg-1.2:ManagedFork

## jobmanager setup package installation
pacman -allow trust-all-caches -get http://software.grid.iu.edu/osg-1.2:Globus-Condor-Setup
pacman -allow trust-all-caches -get http://software.grid.iu.edu/osg-1.2:Globus-PBS-Setup
○ setup.sh in the installed CE software directory must be read in with the source command.
cd /opt/osg-1.2
source setup.sh
□ LRMS (Local Resource Management System) configuration
○ KISTI uses Torque as its LRMS.
* Download, compile, and install
$ wget http://www.clusterresources.com/downloads/torque/torque-2.3.6.tar.gz
$ tar zxvf torque-2.3.6.tar.gz
$ cd torque-2.3.6
$ ./configure
$ make
$ make install
* Modify torque server information
$ cat /var/spool/torque/server_name
ce02.sdfarm.kr
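* If the Torque server is being brought up from scratch, a minimal initialization
  looks roughly like this (the queue name "batch" is illustrative):
$ pbs_server -t create
$ qmgr -c "create queue batch queue_type=execution"
$ qmgr -c "set queue batch enabled=true"
$ qmgr -c "set queue batch started=true"
$ qmgr -c "set server default_queue=batch"
$ qmgr -c "set server scheduling=true"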
□ Job Manager installation
* For the Globus-PBS-Setup package, the PBS binaries should be in $PATH prior to installation.
* If they are not, run "pacman -remove OSG:Globus-PBS-Setup" first, install PBS, and then do the following:
$ pacman -get OSG:Globus-PBS-Setup
□ Installing the CA Certificate Updater
○ Change the configuration so that cacerts_url in the file below has the value shown
– File:
$VDT_LOCATION/vdt/etc/vdt-update-certs.conf
– Updated setting:
cacerts_url = http://software.grid.iu.edu/pacman/cadist/ca-certs-version
○ Run the commands below so that the certificates are downloaded:
$ source (or 'dot' execute) $VDT_LOCATION/vdt-questions.sh
$ $VDT_LOCATION/vdt/sbin/vdt-setup-ca-certificates
vdt-update-certs
Log file: /opt/osg-1.0.3/vdt/var/log/vdt-update-certs.log
Updates from: http://software.grid.iu.edu/pacman/cadist/ca-certs-version
Will update CA certificates from version unknown to version 1.7.
Update successful.
– By default, the certificates are installed in the directory below.
/opt/osg-1.0.3/globus/share/certificates-1.7
– Verify that vdt-update-certs is enabled.
․ With this enabled, the certificates are downloaded automatically.
․ Use vdt-control for this (must be the root user):
$ vdt-control --enable vdt-update-certs
$ vdt-control --on vdt-update-certs
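– To confirm that the updater was registered, the same vdt-control tool can list it:
$ vdt-control --list | grep vdt-update-certs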
□ Installing Globus-Base-WS-Essentials
○ The Globus web services run under the daemon user account.
○ Create the container key and certificate, then set their ownership.
$ cd /etc/grid-security/
$ cp hostkey.pem containerkey.pem
$ cp hostcert.pem containercert.pem
$ chown daemon: containerkey.pem containercert.pem
□ Installing Globus-Base-WSGRAM-Server
○ GRAM runs under the 'daemon' account
(it previously ran under the 'root' account)
○ Give the 'daemon' account sudo privileges.
$ cat /etc/sudoers
...
## ------------------------------------------------------------------------
## for OSG
## by kyun : 2009/07/02
Runas_Alias GLOBUSUSERS = ALL, !root
daemon ALL=(GLOBUSUSERS) \
NOPASSWD: /opt/osg-1.0.4/globus/libexec/globus-gridmap-and-execute \
-g /etc/grid-security/grid-mapfile \
/opt/osg-1.0.4/globus/libexec/globus-job-manager-script.pl *
daemon ALL=(GLOBUSUSERS) \
NOPASSWD: /opt/osg-1.0.4/globus/libexec/globus-gridmap-and-execute \
-g /etc/grid-security/grid-mapfile \
/opt/osg-1.0.4/globus/libexec/globus-gram-local-proxy-tool *
## ------------------------------------------------------------------------
...
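○ After editing /etc/sudoers, it is worth validating the syntax with the standard sudo check:
$ visudo -c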
□ PRIMA Callout configuration
○ To support other OSG-based experiments in the future, install the full version, which uses PRIMA.
○ To activate it, copy the gsi-authz.conf and prima-authz.conf files from the post-install directory to the /etc/grid-security directory.
$ cp /opt/osg-1.0.4/post-install/gsi-authz.conf /etc/grid-security/
$ cp /opt/osg-1.0.4/post-install/prima-authz.conf /etc/grid-security/
$ ls -l /etc/grid-security
total 24
-rw-r--r-- 1 root root 94 Jul 2 16:07 gsi-authz.conf
-rw-r--r-- 1 root root 1513 Jul 1 12:51 hostcert.pem
-rw-r--r-- 1 root root 631 Jul 1 12:51 hostcert_request.pem
-rw------- 1 root root 887 Jul 1 12:51 hostkey.pem
-rw-r--r-- 1 root root 1298 Jul 2 16:07 prima-authz.conf
drwxr-xr-x 2 root root 4096 Jul 1 12:41 vomsdir
□ Creating local accounts and groups for the supported VOs
○ The home directory of each local account must be shared with the worker nodes, so an appropriate location must be chosen.
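○ A minimal sketch for one supported VO (the VO name, UID/GID, and shared home path below are illustrative; repeat per VO or per pool account):

groupadd -g 2001 alice
useradd -u 2001 -g alice -d /home-osg/alice/alice001 -m -c "ALICE VO pool account" alice001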
□ GUMS-Client configuration
○ GUMS server information
$ cat /opt/osg-1.0.4/gums/config/gums-client.properties
gums.location=https://gums.sdfarm.kr:8443/gums/services/GUMSAdmin
gums.authz=https://gums.sdfarm.kr:8443/gums/services/GUMSAuthorizationServicePort
○ Enable and run gums-host-cron
– The grid-mapfile is then regenerated automatically on a regular schedule
$ vdt-control --enable gums-host-cron
$ vdt-control --on gums-host-cron
○ Updating osg-user-vo-map.txt
$ $VDT_LOCATION/gums/scripts/gums-host-cron --gumsdebug
□ Updating MonALISA information
[root@ce03 osg-1.0.4]# ./vdt/setup/configure_monalisa --prompt
To configure MonaLisa you need to specify a few parameters.
Please answer the following questions:
Please specify user account to run MonaLisa daemons as: [daemon]
This is the name you will be seen by the world, so please choose
a name that represents you. Make sure this name is unique in the
MonaLisa environment.
Please specify the farm name: [ce03.sdfarm.kr]
Your Monitor Group name is important to group your site correctly
in the global site list. OSG users should enter "OSG"
Please enter your monitor group name: [OSG]
Please enter your contact name (your full name): [Beob Kyun Kim]
Contact email (your email): [[email protected]]
City (server's location): [Daejeon]
Country: [Republic of Korea (South Korea)]
You can find some approximate values for your geographic location from:
http://geotags.com/
or you can search your location on Google
For USA: LAT is about 29 (South) ... 48 (North)
LONG is about -123 (West coast) ... -71 (East coast)
Location latitude ( -90 (S) .. 90 (N) ): [36.666] 36.366
Location longitude ( -180 (W) .. 180 (E) ): [127.366]
Will you connect to a Ganglia instance (y/n): [y] n
Do you want to run OSG_VO_Modules (y/n): [y]
Please specify GLOBUS location: [/opt/osg-1.0.4/globus]
Please specify CONDOR location: []
Please specify the path to condor_config: []
Please specify PBS location: [] /var/spool/torque
Please specify LSF location: []
Please specify FBS location: []
Please specify SGE location: []
Do you want to enable the MonALISA auto-update feature (y/n): [n] y
□ CEMon-Server configuration
○ Install the consumer on the CE host
$ $VDT_LOCATION/vdt/setup/configure_cemon --consumer https://ce03.sdfarm.kr:1981
□ CE configuration
○ Complete the basic configuration with the vdt-post-install command.
[root@ce03 osg-1.2]# vdt-post-install
Starting...
Configuring PRIMA... Done.
Configuring EDG-Make-Gridmap... Done.
Configuring PRIMA-GT4... Done.
Completed all configuration.
○ Set up the CA certificates.
[root@ce03 osg-1.2]# vdt-ca-manage setupca --location local --url osg
Setting CA Certificates for VDT installation at '/opt/osg-1.2'
Setup completed successfully.
○ Edit the config.ini file to match your site policy.
– Reference: https://twiki.grid.iu.edu/bin/view/ReleaseDocumentation/ComputingElementHandsOn#Modify_config_ini
○ Create the required directories.
[root@ce03 osg-1.2]# mkdir /opt/app
[root@ce03 osg-1.2]# mkdir /opt/app/etc
[root@ce03 osg-1.2]# mkdir /opt/data
○ Verify the configuration.
[root@ce03 osg-1.2]# configure-osg -v
Using /opt/osg-1.2/osg/etc/config.ini for configuration information
Configuration verified successfully
○ Create the rsv user account.
groupadd -g *** osgrsv
useradd -u *** -g osgrsv -d /home-osg/core/rsv -m -c "OSG RSV Core Service" rsvuser
○ Configure the system with the configure-osg command.
[root@ce03 osg-1.2]# configure-osg -c
Using /opt/osg-1.2/osg/etc/config.ini for configuration information
running 'vdt-register-service --name condor --enable'... ok
running 'vdt-register-service --name mysql5 --enable'... ok
running 'vdt-register-service --name gsiftp --enable'... ok
running 'vdt-register-service --name gratia-condor --enable'... ok
INFO: Attempting to install RSV consumers.
INFO: Attempting to install RSV probes on appropriate URI(s).
INFO: Creating .sub files for RSV probes of type OSG-Local-Monitor
for URI: ce03.sdfarm.kr (host: ce03.sdfarm.kr)
INFO: Re-using metrics config file for ce03.sdfarm.kr
/opt/osg-1.2/osg-rsv/config/ce03.sdfarm.kr_metrics.conf
Existing settings like on/off and metric intervals will be re-used.
Any new metrics found in probe set will be added with their default settings.
INFO: Creating .sub files for RSV probes of type OSG-CE
for URI: ce03.sdfarm.kr (host: ce03.sdfarm.kr)
INFO: Re-using metrics config file for ce03.sdfarm.kr
/opt/osg-1.2/osg-rsv/config/ce03.sdfarm.kr_metrics.conf
Existing settings like on/off and metric intervals will be re-used.
Any new metrics found in probe set will be added with their default settings.
INFO: Creating .sub files for RSV probes of type OSG-GridFTP
for URI: ce03.sdfarm.kr (host: ce03.sdfarm.kr)
INFO: Re-using metrics config file for ce03.sdfarm.kr
/opt/osg-1.2/osg-rsv/config/ce03.sdfarm.kr_metrics.conf
Existing settings like on/off and metric intervals will be re-used.
Any new metrics found in probe set will be added with their default settings.
INFO: Attempting to configure Apache to serve local RSV probe output
Apache appears to have RSV output directory options already.
Apache is configured for use with the current RSV installation already.
Apache setup properly to serve results from current RSV installation.
Restart Apache for changes to take effect.
Enabling the Apache service using vdt-control ...
Pages can be viewed at https://HOSTNAME:8443/rsv
running 'vdt-register-service --name condor-cron --enable'... ok
running 'vdt-register-service --name fetch-crl --enable'... ok
CRLs exist, skipping fetch-crl invocation
running 'vdt-register-service --name vdt-update-certs --enable'... ok
running 'vdt-register-service --name gums-host-cron --enable'... ok
running 'vdt-register-service --name edg-mkgridmap --disable'... ok
PRIMA for GT4 web services has been enabled.
Modifications to the /etc/sudoers file are still required.
You will need to restart the /etc/init.d/globus-ws container
to effect the changes.
Running /opt/osg-1.2/gums/scripts/gums-host-cron, this process make take
some time to query vo and gums servers
running 'vdt-register-service --name vdt-rotate-logs --enable'... ok
running 'vdt-register-service --name apache --enable'... ok
running 'vdt-register-service --name globus-gatekeeper --enable'... ok
The following consumer subscription has been installed:
HOST: https://osg-ress-1.fnal.gov:8443/ig/services/CEInfoCollector
TOPIC: OSG_CE
DIALECT: OLD_CLASSAD
running 'vdt-register-service --name tomcat-55 --enable'... ok
The following consumer subscription has been installed:
HOST: http://is1.grid.iu.edu:14001
TOPIC: OSG_CE
DIALECT: RAW
running 'vdt-register-service --name tomcat-55 --enable'... ok
The following consumer subscription has been installed:
HOST: http://is2.grid.iu.edu:14001
TOPIC: OSG_CE
DIALECT: RAW
running 'vdt-register-service --name tomcat-55 --enable'... ok
Configure-osg completed successfully
○ Activate the configured daemons and services with the vdt-control command.
[root@ce03 osg-1.2]# vdt-control --on --force
enabling cron service fetch-crl... ok
enabling cron service vdt-rotate-logs... ok
enabling cron service vdt-update-certs... ok
enabling inetd service globus-gatekeeper... ok
removed non-VDT entry for port 2119 from /etc/services (see
logs/vdt-control.log)
removed non-VDT entry for port 2119 from /etc/services (see
logs/vdt-control.log)
enabling inetd service gsiftp... ok
removed non-VDT entry for service gsiftp from /etc/services (see
logs/vdt-control.log)
removed non-VDT entry for service gsiftp from /etc/services (see
logs/vdt-control.log)
enabling init service mysql5... ok
enabling init service globus-ws... ok
enabling cron service gums-host-cron... ok
skipping init service 'MLD' -- marked as disabled
enabling init service condor-cron... ok
enabling init service apache... ok
enabling init service tomcat-55... ok
enabling init service condor... ok
enabling cron service gratia-condor... ok
skipping cron service 'edg-mkgridmap' -- marked as disabled
enabling init service osg-rsv... ok
○ Edit the /etc/xinetd.d/globus-gatekeeper file and add the environment variable below.
env = GLOBUS_TCP_PORT_RANGE=40000,50000
– This range can be tuned to the size of the local site; for small and medium sites (1,000 cores or fewer), limiting it to about 5,000 ports causes no operational problems.
○ Firewall configuration
[root@ce03 osg-1.2]# cat /etc/sysconfig/iptables
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 40000:50000 -j ACCEPT
# Monalisa, grabs 3 ports from the following range
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 9000:9010 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p udp -m udp --dport 9000 -j ACCEPT
# GRAM
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2119 -j ACCEPT
# Gridftp
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2811 -j ACCEPT
# Optional Services
# RLS Server
#-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 39281 -j ACCEPT
# MyProxy
#-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 7512 -j ACCEPT
# GSISSH/SSH
#-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 22 -j ACCEPT
# MDS
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2135 -j ACCEPT
# GIIS
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2136 -j ACCEPT
# GUMS/VOMS
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 8443 -j ACCEPT
# WebServices
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 9443 -j ACCEPT
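○ After editing /etc/sysconfig/iptables, reload the rules with the RHEL init script so the changes take effect:

[root@ce03 osg-1.2]# service iptables restart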
□ Installing the worker node software
○ This software is shared across the backend worker nodes.
○ The existing worker nodes already have gLite-WN installed, but it is in a different directory and some common libraries are shared.
○ Neither gLite-WN nor OSG-WN runs any daemons on the worker node, so there is no need to worry about port conflicts.
[root@ce03 opt]# cd /opt/wn-1.2
[root@ce03 opt]# pacman -allow trust-all-caches -get http://software.grid.iu.edu/osg-1.2:wn-client
Beginning VDT prerequisite checking script vdt-common/vdt-prereq-check...
All prerequisite checks are satisfied.
========== IMPORTANT ==========
Most of the software installed by the VDT *will not work* until you install
certificates. To complete your CA certificate installation, see the notes
in the post-install/README file.
INFO: The Globus-Base-Info-Client package is not supported on this platform
The OSG Worker Node Client package OSG version 1.2.0 has been installed.
2. GUMS Server Installation
□ pacman installation and environment setup (same as for the CE)
□ Installing GUMS
mkdir vdt-2.0.0
cd vdt-2.0.0
pacman -get http://software.grid.iu.edu/osg-1.2:gums
Do you want to add [http://software.grid.iu.edu/osg-1.2] to [trusted.caches]? (y/n/yall): y
Do you want to add [http://vdt.cs.wisc.edu/vdt_200_cache] to [trusted.caches]? (y/n/yall): y
Beginning VDT prerequisite checking script vdt-common/vdt-prereq-check..
□ vdt-post-install
○ Install the host certificate and key
– Same as for the CE
** put host cert&key and http cert&key into /etc/grid-security and
/etc/grid-security/http
** change ownership of /etc/grid-security/http & its files.
> chown -R daemon.daemon /etc/grid-security/http
○ Run vdt-post-install
. setup.sh
vdt-post-install
vdt-ca-manage setupca --location local --url osg
ln -s /usr/local/vdt-2.0.0/globus/TRUSTED_CA /etc/grid-security/certificates
vdt-control --enable fetch-crl vdt-rotate-logs mysql5 apache tomcat-55 vdt-update-certs
○ Check the service list
[root@gums config]# vdt-control --list
Service | Type | Desired State
------------------------+--------+--------------
fetch-crl | cron | enable
vdt-rotate-logs | cron | enable
vdt-update-certs | cron | enable
apache | init | enable
tomcat-55 | init | enable
mysql5 | init | enable
gums-host-cron | cron | do not enable
○ Start the services
vdt-control --on
○ Adding a GUMS admin
cd tomcat/v55/webapps/gums/WEB-INF/scripts
./gums-add-mysql-admin "YOUR DN"
– Check via the gums.config file that the change was applied
> ./gums-add-mysql-admin "your DN"
./gums-add-mysql-admin "/C=KR/O=KISTI/O=GRID/O=KISTI/CN=84035421 Beob Kyum Kim"
WARNING: You must have created the database before running this script!
Adding the following DN to the local database:
Certificate DN for administrator: "/C=KR/O=KISTI/O=GRID/O=KISTI/CN=84035421 Beob Kyum Kim"
Is this correct? (Enter 'yes' to proceed)
yes
Adding the admin:
Enter the root mysql password
○ Verify access privileges through the web interface
https://gums.sdfarm.kr:8443/gums/index.jsp
○ Apply the default OSG configuration
./gums-create-config --osg-template
[root@gums scripts]# ./gums-create-config --osg-template
Downloading OSG GUMS template...
2009-08-13 14:29:11 URL:http://software.grid.iu.edu/pacman/tarballs/vo-version/gums.template [48689/48689] -> "/tmp/gums.template" [1]
Searching for MySQL username in current configuration...
found MySQL user "gums"
Searching for MySQL password in current configuration...
found MySQL password "QXTUTGPGLIX"
Searching for MySQL server in current configuration...
found MySQL server "gums.sdfarm.kr:49152"
will use domain name "sdfarm.kr" in hostToGroupMapping
WARNING: gums.config already present. Would you like to overwrite it?
(Enter 'yes' to overwrite)
yes
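○ Since GUMS runs inside Tomcat, restarting Tomcat with vdt-control is a reasonable way to make it pick up the regenerated gums.config:

vdt-control --off tomcat-55
vdt-control --on tomcat-55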
3. gLexec Configuration
□ See the following site:
https://twiki.grid.iu.edu/bin/view/ReleaseDocumentation/GlexecInstall
□ Create the 'glexec' local user account
○ Create about four times as many group IDs as there are job slots on each worker node
– The same accounts can be reused on the other worker nodes
– For an 8-core machine, create 32 group accounts
[root@wn1011 ~]# cat ./create-glexec-account
#!/bin/bash
echo "========================================================================="
echo " Creating gLexec Accounts "
echo "-------------------------------------------------------------------------"
echo " -- Creating gLexec gids "
for i in {0..31}
do
if [ $i -lt 10 ]
then
echo " - group glexec0$i ( gid : 6500$i )"
groupadd -g 6500$i glexec0$i
else
echo " - group glexec$i ( gid : 650$i )"
groupadd -g 650$i glexec$i
fi
done
echo " -- Creating gLexec user account nologin "
echo " user \"glexec\" ( uid : 999 ) "
useradd -u 999 -c "gLexec user" -s /sbin/nologin glexec
echo "========================================================================="
[root@wn1011 ~]# ./create-glexec-account
===========================================================================
Creating gLexec Accounts
---------------------------------------------------------------------------
-- Creating gLexec gids
- group glexec00 ( gid : 65000 )
- group glexec01 ( gid : 65001 )
- group glexec02 ( gid : 65002 )
- group glexec03 ( gid : 65003 )
- group glexec04 ( gid : 65004 )
- group glexec05 ( gid : 65005 )
- group glexec06 ( gid : 65006 )
- group glexec07 ( gid : 65007 )
- group glexec08 ( gid : 65008 )
- group glexec09 ( gid : 65009 )
- group glexec10 ( gid : 65010 )
- group glexec11 ( gid : 65011 )
- group glexec12 ( gid : 65012 )
- group glexec13 ( gid : 65013 )
- group glexec14 ( gid : 65014 )
- group glexec15 ( gid : 65015 )
- group glexec16 ( gid : 65016 )
- group glexec17 ( gid : 65017 )
- group glexec18 ( gid : 65018 )
- group glexec19 ( gid : 65019 )
- group glexec20 ( gid : 65020 )
- group glexec21 ( gid : 65021 )
- group glexec22 ( gid : 65022 )
- group glexec23 ( gid : 65023 )
- group glexec24 ( gid : 65024 )
- group glexec25 ( gid : 65025 )
- group glexec26 ( gid : 65026 )
- group glexec27 ( gid : 65027 )
- group glexec28 ( gid : 65028 )
- group glexec29 ( gid : 65029 )
- group glexec30 ( gid : 65030 )
- group glexec31 ( gid : 65031 )
-- Creating gLexec user account nologin
user "glexec" ( uid : 999 )
===========================================================================
□ Installing glexec with pacman
source /opt/pacman-3.29/setup.sh
cd /opt/gLexec
pacman -allow trust-all-caches -get http://software.grid.iu.edu/osg-1.2:Glexec
<Figure 16> Deployment of the EGEE-OSG interoperable grid middleware
Appendix config.ini
;===================================================================
; IMPORTANT
;===================================================================
;
;
; You can get documentation on the syntax of this file at:
;
https://twiki.grid.iu.edu/twiki/bin/view/Integration/ITB090/ConfigurationFileFormat
; You can get documentation on the options for each section at:
;
https://twiki.grid.iu.edu/twiki/bin/view/Integration/ITB090/ConfigurationFileHelp
;
[DEFAULT]
; Use this section to define variables that will be used in other sections
; For example, if you define a variable called dcache_root here
; you can use it in the gip section as %(dcache_root)s (e.g.
; my_vo_1_dir = %(dcache_root)s/my_vo_1
; my_vo_2_dir = %(dcache_root)s/my_vo_2
; Defaults, please don't modify these variables
unavailable = UNAVAILABLE
default = UNAVAILABLE
; Name these variables disable and enable rather than disabled and enabled
; to avoid infinite recursions
disable = False
enable = True
; You can modify the following and use them
;localhost = my.host.name
localhost = ce03.sdfarm.kr
;admin_email = [email protected]
admin_email = [email protected]
osg_location = UNAVAILABLE
;===================================================================
; Site Information
;===================================================================
[Site Information]
; The group option indicates the group that the OSG site should be listed in,
; for production sites this should be OSG, for vtb or itb testing it should be
; OSG-ITB
;
; YOU WILL NEED TO CHANGE THIS
;group = OSG-ITB
group = OSG
; The host_name setting should give the host name of the CE that is being
; configured, this setting must be a valid dns name that resolves
;
; YOU WILL NEED TO CHANGE THIS
host_name = %(localhost)s
; The site_name setting should give the registered OSG site name (e.g. OSG_ITB)
;
; YOU WILL NEED TO CHANGE THIS
;site_name = %(unavailable)s
site_name = KISTI-NSDC
; The sponsor setting should list the sponsors for your cluster, if your cluster
; has multiple sponsors, you can separate them using commas or specify the
; percentage using the following format 'osg, atlas, cms' or
; 'osg:10, atlas:45, cms:45'
;
; YOU WILL NEED TO CHANGE THIS
sponsor = %(unavailable)s
; The site_policy setting should give an url that lists your site's usage
; policy
site_policy = %(unavailable)s
; The contact setting should give the name of the admin/technical contact
; for the cluster
;
; YOU WILL NEED TO CHANGE THIS
;contact = %(unavailable)s
contact = [email protected]
; The email setting should give the email address for the technical contact
; for the cluster
;
; YOU WILL NEED TO CHANGE THIS
email = %(admin_email)s
; The city setting should give the city that the cluster is located in
;
; YOU WILL NEED TO CHANGE THIS
;city = %(unavailable)s
city = Daejeon
; The country setting should give the country that the cluster is located in
;
; YOU WILL NEED TO CHANGE THIS
;country = %(unavailable)s
country = "South Korea"
; The longitude setting should give the longitude for the cluster's location
; if you are in the US, this should be negative
; accepted values are between -180 and 180
;
; YOU WILL NEED TO CHANGE THIS
;longitude = %(unavailable)s
longitude = 127.366
; The latitude setting should give the latitude for the cluster's location
; accepted values are between -90 and 90
;
; YOU WILL NEED TO CHANGE THIS
;latitude = %(unavailable)s
latitude = 36.366
;===================================================================
; For the following job manager sections (LSF, SGE, PBS, Condor)
; you should delete the sections corresponding to job managers that
; you are not using. E.g. if you are just using LSF on your site,
; you can delete
;===================================================================
;===================================================================
; PBS
;===================================================================
[PBS]
; This section has settings for configuring your CE for a PBS job manager
; The enabled setting indicates whether you want your CE to use a PBS job
; manager
; valid answers are True or False
; enabled = %(disable)s
enabled = %(enable)s
; The home setting should give the location of the pbs install directory
;home = %(unavailable)s
home = /var/spool/torque
; The pbs_location setting should give the location of pbs install directory
; This should be the same as the home setting above
pbs_location = %(home)s
; The job_contact setting should give the contact string for the jobmanager
; on this CE (e.g. host.name/jobmanager-pbs)
job_contact = %(localhost)s/jobmanager-pbs
; The util_contact should give the contact string for the default jobmanager
; on this CE (e.g. host.name/jobmanager)
util_contact = %(localhost)s/jobmanager
; The wsgram setting should be set to True or False depending on whether you
; wish to enable wsgram on this CE
wsgram = %(disable)s
;===================================================================
; Condor
;===================================================================
[Condor]
; This section has settings for configuring your CE for a Condor job manager
; The enabled setting indicates whether you want your CE to use a Condor job
; manager
; valid answers are True or False
;enabled = %(disable)s
enabled = False
; The home setting should give the location of the condor install directory
home = %(unavailable)s
;home = /opt/osg-1.2/condor
; The condor_location setting should give the location of condor install directory
; This should be the same as the home setting above
condor_location = %(home)s
; The condor_config setting should give the location of the condor config file,
; This is typically etc/condor_config within the condor install directory
condor_config = %(unavailable)s
; The job_contact setting should give the contact string for the jobmanager
; on this CE (e.g. host.name/jobmanager-condor)
job_contact = %(localhost)s/jobmanager-condor
; The util_contact should give the contact string for the default jobmanager
; on this CE (e.g. host.name/jobmanager)
util_contact = %(localhost)s/jobmanager
; The wsgram setting should be set to True or False depending on whether you
; wish to enable wsgram on this CE
;wsgram = %(disable)s
wsgram = %(enable)s
;===================================================================
; SGE
;===================================================================
[SGE]
; This section has settings for configuring your CE for a SGE job manager
; The enabled setting indicates whether you want your CE to use a SGE job
; manager
; valid answers are True or False
enabled = %(disable)s
; The home setting should give the location of the sge install directory
home = %(unavailable)s
; The sge_location setting should give the location of sge install directory
; This should be the same as the home setting above
sge_location = %(home)s
; The sge_root setting should give the location of sge install directory
; This should be the same as the home setting above
sge_root = %(home)s
; The job_contact setting should give the contact string for the jobmanager
; on this CE (e.g. host.name/jobmanager-sge)
job_contact = %(localhost)s/jobmanager-sge
; The util_contact should give the contact string for the default jobmanager
; on this CE (e.g. host.name/jobmanager)
util_contact = %(localhost)s/jobmanager
; The wsgram setting should be set to True or False depending on whether you
; wish to enable wsgram on this CE
wsgram = %(disable)s
;===================================================================
; LSF
;===================================================================
[LSF]
; This section has settings for configuring your CE for a LSF job manager
; The enabled setting indicates whether you want your CE to use a LSF job
; manager
; valid answers are True or False
enabled = %(disable)s
; The home setting should give the location of the lsf install directory
home = %(unavailable)s
; The lsf_location setting should give the location of lsf install directory
; This should be the same as the home setting above
lsf_location = %(home)s
; The job_contact setting should give the contact string for the jobmanager
; on this CE (e.g. host.name/jobmanager-lsf)
job_contact = %(localhost)s/jobmanager-lsf
; The util_contact should give the contact string for the default jobmanager
; on this CE (e.g. host.name/jobmanager)
util_contact = %(localhost)s/jobmanager
; The wsgram setting should be set to True or False depending on whether you
; wish to enable wsgram on this CE
wsgram = %(disable)s
;===================================================================
; Managed Fork
;===================================================================
[Managed Fork]
; The enabled setting indicates whether managed fork is in use on the system
; or not. You should set this to True or False
;enabled = %(disable)s
;enabled = %(enable)s
enabled = True
; The condor_location setting should give the location of condor install directory
; This should be the same as the home setting above
;condor_location = %(default)s
condor_location = /opt/osg-1.2/condor
; The condor_config setting should give the location of the condor config file,
; This is typically etc/condor_config within the condor install directory
condor_config = %(default)s
;===================================================================
; Misc Services
;===================================================================
[Misc Services]
; If you have glexec installed on your worker nodes, enter the location
; of the glexec binary in this setting
glexec_location = %(unavailable)s
; If you wish to use the ca certificate update service, set this setting to True,
; otherwise keep this at false
; Please note that as of OSG 1.0, you have to use the ca cert updater or the rpm
; updates, pacman can not update the ca certs
;use_cert_updater = %(disable)s
;use_cert_updater = %(enable)s
use_cert_updater = True
;This setting should be set to the host used for gums host.
; If your site is not using a gums host, you can set this to %(unavailable)s
;gums_host = %(unavailable)s
gums_host = gums.sdfarm.kr
; This setting should be set to one of the following, gridmap, prima, xacml
; to indicate whether gridmap files, prima callouts, or prima callouts with xacml
; should be used
;authorization_method = %(unavailable)s
authorization_method = prima
; This setting indicates whether the osg index page generation will be run,
; by default this is not run
enable_webpage_creation = %(disable)s
;===================================================================
; RSV
;===================================================================
[RSV]
; The enable option indicates whether rsv should be enabled or disabled. It should
; be set to True or False
; enabled = %(disable)s
;enabled = %(enable)s
enabled = True
; The rsv_user option gives the user that the rsv service should use. It must
; be a valid unix user account
;
; If rsv is enabled, and this is blank or set to unavailable it will default to
; rsvuser
; rsv_user = %(default)s
;rsv_user = %(rsvuser)s
rsv_user = rsv
; The enable_ce_probes option enables or disables the RSV CE probes. If you enable this,
; you should also set the ce_hosts option as well.
;
; Set this to true or false.
; enable_ce_probes = %(disable)s
enable_ce_probes = %(enable)s
; The ce_hosts options lists the FQDN of the CEs that the RSV CE probes should check.
; This should be a list of FQDNs separated by a comma (e.g. my.host,my.host2,my.host3)
;
; This must be set if the enable_ce_probes option is enabled. If this is set to
; UNAVAILABLE or left blank, then it will default to the hostname setting for this CE
; ce_hosts = %(default)s
ce_hosts = %(localhost)s
; The enable_gridftp_probes option enables or disables the RSV gridftp probes. If
; you enable this, you must also set the ce_hosts or gridftp_hosts option as well.
;
; Set this to True or False.
; enable_gridftp_probes = %(disable)s
enable_gridftp_probes = %(enable)s
; The gridftp_hosts options lists the FQDN of the gridftp servers that the RSV CE
; probes should check. This should be a list of FQDNs separated by a comma
; (e.g. my.host,my.host2,my.host3)
;
; This or ce_hosts must be set if the enable_gridftp_probes option is enabled. If
; this is set to UNAVAILABLE or left blank, then it will default to the hostname
; setting for this CE
; gridftp_hosts = %(default)s
gridftp_hosts = %(localhost)s
; The gridftp_dir options gives the directory on the gridftp servers that the
; RSV CE probes should try to write and read from.
;
; This should be set if the enable_gridftp_probes option is enabled. It will default
; to /tmp if left blank or set to UNAVAILABLE
gridftp_dir = %(default)s
; The enable_gums_probes option enables or disables the RSV gums probes. If
; you enable this, you must also set the ce_hosts or gums_hosts option as well.
;
; Set this to True or False.
enable_gums_probes = %(disable)s
; The gums_hosts options lists the FQDN of the CE that uses GUMS server that the
; RSV GUMS probes should check. This should be a list of FQDNs separated by a
; comma (e.g. my.host,my.host2,my.host3)
;
; This or ce_hosts should be set if the enable_gums_probes option is enabled. If
; this is set to UNAVAILABLE or left blank, then it will default to the hostname
; setting for this CE
gums_hosts = %(default)s
; The enable_srm_probes option enables or disables the RSV srm probes. If
; you enable this, you must also set the srm_hosts option as well.
;
; Set this to True or False.
enable_srm_probes = %(disable)s
; The srm_hosts options lists the FQDN of the srm servers that the
; RSV SRM probes should check. This should be a list of FQDNs separated
; by a comma (e.g. my.host,my.host2,my.host3). You can specify the port
; on a host using host:port (e.g. localhost:8443 ).
;
; This or _hosts must be set if the enable_srm_probes option is enabled. If
; this is set to UNAVAILABLE or left blank, then it will default to the hostname
; setting for this CE
srm_hosts = %(default)s
; The srm_dir options gives the directory on the srm servers that the
; RSV SRM probes should try to write and read from.
;
; This must be set if the enable_srm_probes option is enabled.
srm_dir = %(unavailable)s
; This option gives the webservice path that SRM probes need to use along with the
; host: port. For dcache installations, this should work if left blank or left out.
; However Bestman-xrootd SEs normally use srm/v2/server as web service path, and so
; Bestman-xrootd admins will have to pass this option with the appropriate value
; (for example: "srm/v2/server") for the SRM probes to work on their SE.
srm_webservice_path = %(unavailable)s
; The use_service_cert option indicates whether to use a service
; certificate with rsv
;
; NOTE: This can't be used if you specify multiple CEs or GUMS hosts
; use_service_cert = %(disable)s
use_service_cert = %(enable)s
; You'll need to set this if you have enabled the use_service_cert.
; This should point to the public key file (pem) for your service
; certificate
;
; If this is left blank or set to UNAVAILABLE and the use_service_cert
; setting is enabled, it will default to /etc/grid-security/rsvcert.pem
;rsv_cert_file = %(default)s
;rsv_cert_file = %(/etc/grid-security/rsvcert.pem)s
rsv_cert_file = /etc/grid-security/rsvcert.pem
; You'll need to set this if you have enabled the use_service_cert.
; This should point to the private key file (pem) for your service
; certificate
;
; If this is left blank or set to UNAVAILABLE and the use_service_cert
; setting is enabled, it will default to /etc/grid-security/rsvkey.pem
;rsv_key_file = %(default)s
;rsv_key_file = %("/etc/grid-security/rsvkey.pem")s
rsv_key_file = /etc/grid-security/rsvkey.pem
; You'll need to set this if you have enabled the use_service_cert. This
; should point to the location of the rsv proxy file.
;
; If this is left blank or set to UNAVAILABLE and the use_service_cert
; setting is enabled, it will default to /tmp/rsvproxy
rsv_proxy_out_file = %(default)s
; If you don't use a service certificate for rsv, you will need to specify a
; proxy file that RSV should use in the proxy_file setting.
; This needs to be set if use_service_cert is disabled
proxy_file = %(unavailable)s
; This option will enable RSV record uploading to central RSV collector at the GOC
;
; Set this to True or False
enable_gratia = %(disable)s
; The print_local_time option indicates whether rsv should use local times instead of
; GMT times in the local web pages produced (NOTE: records uploaded to central RSV
; collector will still have UTC timestamps)
;
; Set this to True or False
print_local_time = %(disable)s
; The setup_rsv_nagios option indicates whether rsv should try to connect to a local
; nagios instance and report information to it as well
;
; Set this to True or False
setup_rsv_nagios = %(disable)s
; The rsv_nagios_conf_file option indicates the location of the rsv nagios
; file to use for configuration details. This is optional
;
rsv_nagios_conf_file = %(default)s
; The setup_for_apache option indicates whether rsv should try to create a webpage
; that can be used to view the status of the rsv tests. Enabling this is
; highly encouraged.
;
; Set this to True or False
; setup_for_apache = %(disable)s
setup_for_apache = %(enable)s
;===================================================================
; Storage
;===================================================================
[Storage]
;
; Several of these values are constrained and need to be set in a way
; that is consistent with one of the OSG storage models
;
; Please refer to the OSG release documentation for an indepth explanation
; of the various storage models and the requirements for them
; If you have a SE available for your cluster and wish to make it available
; to incoming jobs, set se_available to True, otherwise set it to False
;se_available = %(disable)s
se_available = False
; If you indicated that you have an se available at your cluster, set default_se to
; the hostname of this SE, otherwise set default_se to UNAVAILABLE
default_se = %(unavailable)s
; The grid_dir setting should point to the directory which holds the files
; from the OSG worker node package, it should be visible on all of the computer
; nodes (read access is required, worker nodes don't need to be able to write)
;
; YOU WILL NEED TO CHANGE THIS
;grid_dir = %(unavailable)s
grid_dir = /opt/wn-client
; The app_dir setting should point to the directory which contains the VO
; specific applications, this should be visible on both the CE and worker nodes
; but only the CE needs to have write access to this directory
;
; YOU WILL NEED TO CHANGE THIS
;app_dir = %(unavailable)s
app_dir = /opt/app
; The data_dir setting should point to a directory that can be used to store
; and stage data in and out of the cluster. This directory should be readable
; and writable on both the CE and worker nodes
;
; YOU WILL NEED TO CHANGE THIS
;data_dir = %(unavailable)s
data_dir = /opt/data
; The worker_node_temp directory should point to a directory that can be used
; as scratch space on compute nodes, it should allow read and write access on the
; worker nodes but can be local to each worker node
;
; YOU WILL NEED TO CHANGE THIS
;worker_node_temp = %(unavailable)s
worker_node_temp = /tmp
; The site_read setting should be the location or url to a directory that can
; be read to stage in data, this is an url if you are using a SE
;
; YOU WILL NEED TO CHANGE THIS
;site_read = %(unavailable)s
site_read = /opt/data
; The site_write setting should be the location or url to a directory that can
; be write to stage out data, this is an url if you are using a SE
;
; YOU WILL NEED TO CHANGE THIS
;site_write = %(unavailable)s
site_write = /opt/data
;===================================================================
; Monalisa
;===================================================================
[MonaLisa]
; Set the enabled setting to True if you have monalisa installed and wish to
; use it, otherwise set it to False
enabled = %(disable)s
; If you want monalisa to use its vo modules, set the use_vo_modules setting
; to true, otherwise set this to False
use_vo_modules = %(enable)s
; The ganglia_support setting should be enabled if you are using ganglia on
; your cluster and you wish monalisa to use it as well
ganglia_support = %(disable)s
; If you've enabled ganglia support, you should enter the hostname of the
; ganglia server in the ganglia_host option
ganglia_host = %(unavailable)s
; If you've enabled ganglia support, you should enter the port that ganglia
; is running on
ganglia_port = %(default)s
; Set this to the monitor group that monalisa will report, by default this
; will be set to the group set in the Site Information section
monitor_group = %(default)s
; This setting should be set to Y or N depending on whether you
; want monalisa to autoupdate itself, by default this is set to N
auto_update = %(enable)s
; This setting should be set to the user account that monalisa will
; run under, by default this is set to daemon
user = %(default)s
;===================================================================
; Squid
;===================================================================
[Squid]
; Set the enabled setting to True if you have squid installed and wish to
; use it, otherwise set it to False
enabled = %(disable)s
; If you are using squid, specify the location of the squid server in the
; location setting, this can be a path if squid is installed on the same
; server as the CE or it can be a hostname
location = %(unavailable)s
; If you are using squid, use the policy setting to indicate which cache
; replacement policy squid is using
policy = %(unavailable)s
; If you are using squid, use the cache_size setting to indicate the
; size of the disk cache that squid is using
; If you are using squid, use the memory_size setting to indicate the
; size of the memory cache that squid is using
;===================================================================
; GIP
;===================================================================
[GIP]
; ========= These settings must be changed ==============
;; This setting indicates the batch system that GIP should query
;; and advertise
;; This should be the name of the batch system in lowercase
;batch = %(unavailable)s
;batch = condor
batch = pbs
;; Options include: pbs, lsf, sge, or condor
; ========= These settings can be left as is for the standard install ========
;; This setting indicates whether GIP should advertise a gsiftp server
;; in addition to a srm server, if you don't have a srm server, this should
;; be enabled
;; Valid options are True or False
advertise_gsiftp = %(enable)s
;; This should be the hostname of the gsiftp server that gip will advertise
gsiftp_host = %(localhost)s
;; This setting indicates whether GIP should query the gums server.
;; Valid options are True or False
advertise_gums = %(disable)s
;
; NOTE ABOUT PREVIOUS GIP OPTIONS:
; There used to be many more options in the GIP section, mostly involving the
; configuration of the site's subclusters and storage elements. These options
; have been moved into separate sections - one section per subcluster or SE.
; However, backward compatibility with the older format has been retained, and
;
;===================================================================
; Subclusters
;===================================================================
; For each subcluster, add a new subcluster section.
; Each subcluster name must be unique for the entire grid, so make sure to not
; pick anything generic like "MAIN". Each subcluster section must start with
; the words "Subcluster", and cannot be named "CHANGEME".
; There should be one subcluster section per set of homogeneous nodes in the
; cluster.
; This data is used for our statistics collections in the OSG, so it's important
; to keep it up to date. This is important for WLCG sites as it will be used
; to determine your progress toward your MoU commitments!
; If you have many similar subclusters, then feel free to collapse them into
; larger, approximately-correct groups.
; See example below:
;[Subcluster CHANGEME]
[Subcluster Main]
; should be the name of the subcluster
;name = SUBCLUSTER_NAME
name = Main
; number of homogeneous nodes in the subcluster
;node_count = NUMBER_OF_NODE
node_count = 38
; Megabytes of RAM per node.
; ram_mb = MB_OF_RAM
; by kyun
; below is for test. after this test it should be changed to the real size of
; memory (16*1024)
; ram_mb = MB_OF_RAM
ram_mb = 16384
; CPU model, as taken from /proc/cpuinfo. Please, no abbreviations!
; cpu_model = CPU_MODEL_FROM_/proc/cpuinfo
cpu_model = "Intel(R) Xeon(R) CPU X5450 @ 3.00GHz"
; Should be something like:
; cpu_model = Dual-Core AMD Opteron(tm) Processor 2216
; Vendor's name -- AMD or Intel?
;cpu_vendor = VENDOR_AMD_OR_INTEL
cpu_vendor = Intel
; Approximate speed, in MHZ, of the chips
; cpu_speed_mhz = CLOCK_SPEED_MHZ
cpu_speed_mhz = 3000
; Must be an integer. Example: cpu_speed_mhz = 2400
; Platform; x86_64 or i686
;cpu_platform = x86_64_OR_i686
cpu_platform = x86_64
; Number of CPUs (physical chips) per node
; cpus_per_node = #_PHYSICAL_CHIPS_PER_NODE
cpus_per_node = 2
; Number of cores per node.
; cores_per_node = #_CORES_PER_NODE
cores_per_node = 8
; For a dual-socket quad-core, you would put cpus_per_node=2 and
; cores_per_node=8
; Set to true or false depending on inbound connectivity. That is, external
; hosts can contact the worker nodes in this subcluster based on their hostname.
inbound_network = FALSE
; Set to true or false depending on outbound connectivity. Set to true if the
; worker nodes in this subcluster can communicate with the external internet.
outbound_network = TRUE
; Non-mandatory attributes
; The amount of swap per host in MB
; swap_mb = 4000
; The per-core SpecInt 2000 score. This is usually computed for you.
; SI00 = 2000
; The per-core SpecFloat 2000 score. This is usually computed for you
; SF00 = 2000
; Here's a full example. Remember, globally unique names!
; [Subcluster Dell Nodes UNL]
; name = Dell Nodes UNL
; node_count = 53
; ram_mb = 4110
; swap_mb = 4000
; cpu_model = Dual-Core AMD Opteron(tm) Processor 2216
; cpu_vendor = AMD
; cpu_speed_mhz = 2400
; cpus_per_node = 2
; cores_per_node = 4
; inbound_network = FALSE
; outbound_network = TRUE
;===================================================================
; SE
;===================================================================
; For each storage element, add a new SE section.
; Each SE name must be unique for the entire grid, so make sure to not
; pick anything generic like "MAIN". Each SE section must start with
; the words "SE", and cannot be named "CHANGEME".
; There are two main configuration types; one for dCache, one for BestMan
; Don't forget to change the section name! One section per SE at the site.
[SE CHANGEME]
; The first part of this section shows options which are mandatory for all SEs.
; dCache and BestMan-specific portions are shown afterward.
; Set to False to turn off this SE
;enabled = True
enabled = False
; Name of the SE; set to be the same as the OIM registered name
name = SE_CHANGEME
; The endpoint of the SE. It MUST have the hostname, port, and the server
; location (/srm/v2/server in this case). It MUST NOT have the ?SFN= string.
srm_endpoint = httpg://srm.example.com:8443/srm/v2/server
; dCache endpoint template: httpg://srm.example.com:8443/srm/managerv2
; How to collect data; the most generic implementation is called "static"
provider_implementation = static
; WLCG sites with a SE *must* use bestman, dcache, or dcache19
; Implementation and version of your SRM SE; usually dcache or bestman
implementation = bestman
; Version refers to the SE version, not the SRM version.
version = 2.2.1.foo
; dCache example: version = 1.9.1
; Default paths for all of your VOs; VONAME is replaced with the VO's name.
default_path = /mnt/bestman/home/VONAME
; Set a specific path for VOs which don't use the default path.
; Comma-separated list of VO:PATH pairs. Not required.
; vo_dirs=cms:/mnt/bestman/cms, dzero:/mnt/bestman2/atlas
; For BestMan-based SEs, uncomment and fill in the following.
; provider_implementation = bestman
; implementation = bestman
; Set to TRUE if the bestman provider can use 'df' on the directory referenced
; above to get the freespace information. If set to false, it probably won't
; detect the correct info.
; use_df = True
; For dCache-based SEs, uncomment and fill in the following
; How to collect data; set to 'dcache' for dcache 1.8 (additional config req'd
; for this case), 'dcache19' for dcache 1.9, or 'static' for default values.
; provider_implementation = dcache19
; implementation = dcache
; If you use the dcache provider, see
; http://twiki.grid.iu.edu/bin/view/InformationServices/DcacheGip
; If you use the dcache19 provider, you must fill in the location of your
; dCache's information provider:
; infoprovider_endpoint = http://dcache.example.com:2288/info
; SE implementation name; leave as 'dcache'
; Here are working configs for BestMan and dCache
; [SE dCache]
; name = T2_Nebraska_Storage
; srm_endpoint = httpg://srm.unl.edu:8443/srm/managerv2
; provider_implementation = static
; implementation = dcache
; version = 1.8.0-15p6
; default_path = /pnfs/unl.edu/data4/VONAME
; [SE Hadoop]
; name = T2_Nebraska_Hadoop
; srm_endpoint = httpg://dcache07.unl.edu:8443/srm/v2/server
; provider_implementation = bestman
; implementation = bestman
; version = 2.2.1.2.e1
; default_path = /user/VONAME
;===================================================================
; Install Locations
;===================================================================
[Install Locations]
; The osg option is used to give the location of the directory where the
; osg ce software is installed
osg = %(osg_location)s
; The globus option is used to give the location of the directory where the
; globus software is installed, it is the globus subdirectory of the osg
; install location normally
globus = %(osg)s/globus
; This is the location of the file that contains the gridftp logs, it is usually
; the globus/var/log/gridftp.log file in the osg install directory
gridftp_log = %(globus)s/var/log/gridftp.log