
Page 1: Infrastructure for the LHCb RTTC

Artur Barczyk
CERN/PH
RTTC meeting, 26.10.04

Page 2: Background

Proposed setup for RTTC (Beat, 29.10.04):

[Diagram of the proposed setup: the ECS and a controls switch connected to disk servers, SFCs, data switches and farm nodes.]

Page 3: Background

Existing equipment in 157:
- 46 compute nodes
- 4 SFCs:
  - 1 dual Xeon (32-bit architecture)
  - 2 dual Opteron (64-bit architecture)
  - 1 dual Itanium (64-bit architecture)
- 1 ECS server (Windows)
- 1 NFS server (Linux)
- 3 24-port GbE switches
- 1 48-port FE switch (farm connectivity for controls)
- 2 complete sub-farms with 23 nodes each (although aging, so no speed record to be expected... but it is planned to buy 23 dual-CPU farm nodes)

All hosts (incl. switches) are on the LHCb private network.

Page 4: Private Network

A private network is:
- An IP network using a private address range
- Private = administered within the organisation, i.e. the LHCb Online team in this case
- Not directly connected to the internet; access is via a gateway

The reserved private address ranges are (RFC 1918; see the quick check below):
- Class A: 10.0.0.0 / 8 (16 M hosts)
- Class B: 172.16.0.0 / 12 (1 M hosts)
- Class C: 192.168.0.0 / 16 (64 k hosts)

In general, all hosts are accessible via the gateway:
- Some boxes, in particular the servers, can be accessed from the CERN network as usual (Network Address Translation (NAT) on the gateway machine, transparent to the user)
- The gateway also functions as a firewall, so we need to identify the services required from outside and open the corresponding ports (e.g. AFS, DNS etc.)
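As a quick illustration (not on the original slide), the RFC 1918 ranges and the host counts quoted above can be checked with Python's ipaddress module:

    import ipaddress

    # RFC 1918 private address ranges quoted above
    for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
        net = ipaddress.ip_network(cidr)
        # subtract the network and broadcast addresses to get usable hosts
        print(f"{cidr:<18} {net.num_addresses - 2:>10,} usable hosts")

    # 10.0.0.0/8         16,777,214 usable hosts (~16 M)
    # 172.16.0.0/12       1,048,574 usable hosts (~1 M)
    # 192.168.0.0/16         65,534 usable hosts (~64 k)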

Page 5: Why bother?

- Future: the readout network will be a private network, as will the controls network etc.
- Present: the DAQ test bed in 157 is running out of CERN IP numbers ("our" segment has 127 possible addresses, 101 of which are already used up)
- This is a good opportunity to switch over and test the functionality before/during the Trigger Challenge

[Diagram: the LHCb Point 8 network (controls, storage, workstations) connected through a gateway to the CERN/IT network.]

Page 6: Control interfaces

- The setup in 157 uses class A private numbers
- Subnet 10.1.0.0/16 is used for the control interfaces
- The 3rd octet distinguishes between (a short sketch follows below the diagram):
  - Farm nodes ( 10.1.N.0 / 24 )
  - SFCs ( 10.1.100.0 / 24 )
  - Servers ( 10.1.101.0 / 24 )

[Diagram: the gateway LBTBGW, with 137.138.137.239 on the CERN network and 10.254.254.254 / 8 on the DAQ private network, connects to the farm nodes, SRCs, SRVs and SFCs. Addressing examples: 10.1.N.0 / 24, e.g. 10.1.2.7 for PC 7 in farm 2; 10.1.100.0 / 24, e.g. 10.1.100.5 for pclbtbsfc05; 10.1.101.0 / 24, e.g. 10.1.101.2 for pclbtbsrv02; 10.1.102.0 / 24, e.g. 10.1.102.12 for pclbtbsrc12.]
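A small sketch of the addressing convention (my own illustration; the helper name control_ip is hypothetical, not from the slides):

    import ipaddress

    CONTROL_NET = ipaddress.ip_network("10.1.0.0/16")  # control-interface subnet

    def control_ip(kind: str, farm: int, host: int) -> ipaddress.IPv4Address:
        """Build a control-interface address from the third-octet convention.

        kind: "farm", "sfc", "srv" or "src"; farm is only used for kind == "farm".
        """
        third = {"sfc": 100, "srv": 101, "src": 102}.get(kind, farm)
        addr = ipaddress.IPv4Address(f"10.1.{third}.{host}")
        assert addr in CONTROL_NET
        return addr

    # Examples matching the diagram:
    print(control_ip("farm", 2, 7))    # 10.1.2.7    -> PC 7 in farm 2
    print(control_ip("sfc", 0, 5))     # 10.1.100.5  -> pclbtbsfc05
    print(control_ip("srv", 0, 2))     # 10.1.101.2  -> pclbtbsrv02
    print(control_ip("src", 0, 12))    # 10.1.102.12 -> pclbtbsrc12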

Page 7: User access

Generally through the gateway (lbtbgw), in two steps:
  pclhcb114> ssh lbfarmer@lbtbgw
  lbtbgw> ssh lbfarmer@farm0001

The firewall is currently open only for:
- ssh
- IP-time
- DNS
- AFS

AFS can be accessed:
- as usual on the directly NATed boxes (servers, SFCs)
- via dynamic NAT from all other boxes (farm nodes)

This means that only the host in question can start a connection, and that only a limited number of hosts can access AFS at the same time; it is meant for e.g. system upgrades.

Other services will be allowed to pass the gateway once they are identified as needed. In principle, the RTTC traffic should stay local within our domain.

Page 8: Data interfaces

- Subnet 10.2.0.0/16 is used for the data interfaces
- The 3rd octet distinguishes between (a short sketch follows below the diagram):
  - Data source N ( 10.2.N.0 / 24 )
  - SFC M ( 10.2.10M.0 / 24 )
  - Farm K node ( 10.2.20K.0 / 24 )
- Note: no gateway!

[Diagram: data-path example with Source 1, SFC 5 and Farm 1, node 15; interface addresses shown: 10.2.1.1, 10.2.1.3, 10.2.105.1, 10.2.105.2, 10.2.201.15.]
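A complementary sketch (again my own illustration; classify_data_ip is a hypothetical helper) that maps a data-interface address back to its role using the third octet:

    import ipaddress

    def classify_data_ip(addr: str) -> str:
        """Classify a 10.2.0.0/16 data-interface address by its third octet."""
        ip = ipaddress.IPv4Address(addr)
        if ip not in ipaddress.ip_network("10.2.0.0/16"):
            return "not a data-interface address"
        third, fourth = ip.packed[2], ip.packed[3]
        if third >= 200:
            return f"farm {third - 200}, node {fourth}"
        if third >= 100:
            return f"SFC {third - 100}, interface {fourth}"
        return f"data source {third}, interface {fourth}"

    # Examples matching the diagram:
    print(classify_data_ip("10.2.1.1"))      # data source 1, interface 1
    print(classify_data_ip("10.2.105.1"))    # SFC 5, interface 1
    print(classify_data_ip("10.2.201.15"))   # farm 1, node 15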

Page 9: Status/Outlook

- The setup has recently started running on the private network
- So far it has been used for switch testing and SFC benchmarking
- We still have to gain experience with running behind a firewall:
  - Identify the outside services that are needed
  - Install whatever is missing/useful
  - Other operational details, e.g. ssh tunnelling, security/OS updates etc.
- Hardware installations:
  - 1-2 disk servers for the RTTC data
  - 23 state-of-the-art farm nodes