
Page 1: ATLAS HLT/DAQ

ATLAS HLT/DAQ
Valerio Vercesi (INFN Pavia), on behalf of all people working
Referee Marzo 2006

Page 2:

S. Falciano (Roma1): HLT Commissioning Coordinator
A. Negri (Irvine, Pavia): Event Filter Dataflow Coordinator
A. Nisati (Roma1): TDAQ Institute Board chair and PESA Muon Slice Coordinator
F. Parodi (Genova): PESA b-tagging Coordinator
V. Vercesi (Pavia): Deputy HLT leader and PESA (Physics and Event Selection Architecture) Coordinator

Italian activities:
• Level-1 barrel muon trigger (Napoli, Roma1, Roma2)
• Level-2 muon trigger (Pisa, Roma1)
• Level-2 pixel trigger (Genova)
• Event Filter Dataflow (LNF, Pavia)
• Selection software steering (Genova)
• Event Filter muons (Lecce, Napoli, Pavia, Roma1)
• DAQ (LNF, Pavia, Roma1)
• DCS (Napoli, Roma1, Roma2)
• Monitoring (Cosenza, Napoli, Pavia, Pisa)
• Pre-series commissioning and exploitation (everybody)

Page 3: ATLAS Trigger & DAQ

[Diagram: the three-level trigger chain. LVL1 is hardware-based (FPGA, ASIC) and uses Calo/Muon data at coarse granularity; it reduces the 40 MHz bunch-crossing rate to ~100 kHz within a ~2.5 µs latency, while data wait in pipeline memories and flow through the Read-Out Drivers (RODs) into the Read-Out Subsystems hosting the Read-Out Buffers (ROBs). LVL2 is software (specialised algorithms): it uses the LVL1 Regions of Interest (RoIs), accesses all sub-detectors at full granularity, with emphasis on early rejection, and reduces the rate to ~3 kHz with ~10 ms latency. Accepted events pass through the Event Builder cluster to the Event Filter (EF) farm, which runs offline-like algorithms, possibly seeded by the LVL2 result, works with the full event and full calibration/alignment information, and reduces the rate to ~200 Hz within ~1 s, writing to local storage at ~300 MB/s. LVL2 and the EF together form the High Level Trigger.]
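The rejection factors implied by the rates above can be sanity-checked with a short script (rates taken from this slide):

```python
# Rejection factors implied by the trigger rates quoted above:
# 40 MHz collisions -> LVL1 ~100 kHz -> LVL2 ~3 kHz -> EF ~200 Hz.
levels = [("collisions", 40e6), ("LVL1", 100e3), ("LVL2", 3e3), ("EF", 200.0)]

for (_, r_in), (name, r_out) in zip(levels, levels[1:]):
    print(f"{name}: rejection factor ~ {r_in / r_out:.0f}")

overall = levels[0][1] / levels[-1][1]
print(f"overall: ~ 1 event in {overall:.0f} kept")
```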

Page 4: TDAQ Networks and Processing

[Diagram: data from the ATLAS detector (UX15) flow over dedicated links to the Read-Out Drivers (RODs, VME, USA15), then over 1600 Read-Out Links into the Read-Out Subsystems (ROSs, ~150 PCs). Data of events accepted by the first-level trigger are pushed at ≤ 100 kHz, as 1600 fragments of ~1 kByte each; the first-level trigger also feeds the RoI Builder with its Regions of Interest, and Timing Trigger Control (TTC) links distribute timing. In SDX1, dual(quad)-CPU nodes connected by Gigabit Ethernet and network switches form the LVL2 farm (~500 nodes, steered by the LVL2 Supervisor, with a pROS storing the LVL2 output), the Event Builder (~100 Sub-Farm Inputs, SFIs, coordinated by the DataFlow Manager) and the Event Filter (EF, ~1600 nodes). Event data are pulled via event data requests, requested event data and delete commands: partial events at ≤ 100 kHz, full events at ~3 kHz. ~30 Sub-Farm Outputs (SFOs) write to local storage at an event rate of ~200 Hz.]

Page 5: Pre-series system in ATLAS Point-1

8 racks (10% of the final dataflow, 2% of the EF); surface: SDX1, underground: USA15.

• One Switch rack (TDAQ rack): 128-port GEth for L2+EB
• One ROS rack (TC rack + horizontal cooling): 12 ROS, 48 ROBINs
• One full L2 rack (TDAQ rack): 30 HLT PCs
• Partial Supervisor rack (TDAQ rack): 3 HE PCs
• Partial EFIO rack (TDAQ rack): 10 HE PCs (6 SFI, 2 SFO, 2 DFM)
• Partial EF rack (TDAQ rack): 12 HLT PCs
• Partial ONLINE rack (TDAQ rack): 4 HLT PCs (monitoring), 2 LE PCs (control), 2 Central File Servers
• RoIB rack (TC rack + horizontal cooling): 50% of the RoIB

• ROS, L2, EFIO and EF racks: one Local File Server, one or more Local Switches
• Machine park: dual Opteron and Xeon nodes, uniprocessor ROS nodes
• Operating system: net-booted and diskless nodes, running SLC3

Page 6: Commissioning and exploitation

A fully functional, small-scale version of the complete HLT/DAQ, equivalent to a detector's "module 0".

Purpose and scope of the pre-series system:

Pre-commissioning phase:
• To validate the complete, integrated HLT/DAQ functionality
• To validate the infrastructure needed by HLT/DAQ at Point-1

Commissioning phase:
• To validate a component (e.g. a ROS) or a deliverable (e.g. a Level-2 rack) prior to its installation and commissioning

TDAQ post-commissioning development system:
• Validate new components (e.g. their functionality when integrated into a fully functional system)
• Validate new software elements or software releases before moving them to the experiment

Page 7: Pre-series tests at Point 1

Used an integrated software release ("installation image") with offline release 10.0.6, Event Format version 2.4, TDAQ release 01-02-00, HLT release 02-01-01.

First time the e/γ and μ selections were run in a combined menu, with muon, calorimeter and inner detector algorithms.

Example Level-2 setup: 8 ROS emulators with preloaded data; data with Level-1 simulation: di-jets (17 GeV), single e (25 GeV), single μ (100 GeV). The dataflow applications are instrumented to measure execution times, network access times and transferred data sizes. Used recently with up to 20 Level-2 processors, each running up to 4 applications.

[Plots: "LVL2 Farm Load Balancing", LVL2 decision rate (Hz) vs. CPU index; and throughput (Hz) vs. number of L2PU nodes, for 1-4 L2PU applications per node (1-20 L2PU nodes running mufast at Point 1 on 20k muon events). Running several applications per node gives a factor 1.9 improvement with respect to one application per node.]
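The factor-1.9 gain from running two L2PU applications per node corresponds to a parallel efficiency that can be estimated directly (a sketch; the 1.9 figure is the one quoted on this slide):

```python
# Parallel efficiency of running N L2PU applications on one node,
# given the measured throughput gain over a single application.
def parallel_efficiency(speedup: float, n_apps: int) -> float:
    return speedup / n_apps

# quoted result: factor 1.9 improvement with 2 applications per node
eff = parallel_efficiency(1.9, 2)
print(f"{eff:.0%}")  # 95%
```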

Page 8: LVL2 tests

[Plot: fraction of events passing LVL2 as a function of the decision latency (ms), shown for the prefiltered mu, jet and e samples.]

Data File | LVL2 Rate (Hz) | LVL2 Latency (ms) | Processing Time (ms) | RoI Coll. (ms) | DAQ Time Fraction | Data Rate (MB/s) | Data Size (bytes) | #Reqs/Event | Data/Req (bytes)
mu | 293.1 | 3.41 | 2.78 | 0.62 | 0.19 | 0.084 | 287 | 1.3 | 223
jet | 280.3 | 3.57 | 3.26 | 0.28 | 0.09 | 0.781 | 2785 | 1.2 | 2283
e | 58.2 | 17.18 | 15.48 | 1.66 | 0.10 | 0.921 | 15820 | 7.4 | 2147

(Prefiltered samples.)
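The per-stream numbers in the table are mutually consistent: the data rate is just the LVL2 rate times the event data size. A quick cross-check with the values copied from the table:

```python
# Cross-check of the LVL2 test table: data rate = LVL2 rate x data size.
# Values copied from the table above.
table = {
    "mu":  dict(rate_hz=293.1, size_b=287,   quoted_mb_s=0.084),
    "jet": dict(rate_hz=280.3, size_b=2785,  quoted_mb_s=0.781),
    "e":   dict(rate_hz=58.2,  size_b=15820, quoted_mb_s=0.921),
}
for name, r in table.items():
    mb_s = r["rate_hz"] * r["size_b"] / 1e6
    assert abs(mb_s - r["quoted_mb_s"]) < 0.005, name
    print(f"{name}: {mb_s:.3f} MB/s (table: {r['quoted_mb_s']} MB/s)")
```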

Page 9: Event Filter infrastructure

Main features of the EF infrastructure software:
• Complete decoupling between data flow (EFD) and data processing (PTs): safe data handling
• Maximum exploitation of SMP architectures
• Flexible and fully configurable design

[Diagram: implementation example of an EF SubFarm node. On each node the EFD takes events from an SFI input, sorts them to the processing tasks (PT #1, PT #2, ... via PTIO, plus external PTs), with a calibration path and multiple outputs to SFOs (standard, calibration, debug) and a trash stream. The EF sits downstream of the Event Builder network, which is fed by LVL1 (Muon/Calo/Inner), the LVL2 RoIs and the ROBs/RODs, and writes to storage at ~300 MB/s.]

Page 10: EF tests

Checks and studies on the infrastructure part:
• Optimisation of the communication protocol between EF and SFI/SFO: improved performance for small (calibration) events and for remote farms
• Additional functionality added

Integration and validation of the selection algorithms:
• Algorithms derived from the offline code, but run under different operating conditions, e.g. adaptation of the job options to the online environment, concurrent access to the DB
• The muon slice has been integrated and validated; other slices are being validated

Timing test: EF-only, 9 EFDs with 2 PTs each, TrigMoore algorithm, 1 MySQL server (CERN site). All 9 nodes connect to MySQL simultaneously; all 18 PTs open not 1 but 3 connections to CDI (3×18 = 54, fast scaling). Load times: 6.9 ± 0.2 s for the geometry, 0.1 ± 0.03 s for AMDCsimRecAthena, 0.06 ± 0.03 s for the magnetic field. DB caching was used.
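The connection count and the dominant start-up cost quoted above follow directly from the test configuration (numbers from this slide):

```python
# DB connection arithmetic for the EF timing test described above:
# 9 EFD nodes x 2 PTs each, and each PT opens 3 connections to CDI.
nodes, pts_per_node, conns_per_pt = 9, 2, 3
pts = nodes * pts_per_node
connections = pts * conns_per_pt
print(pts, "PTs,", connections, "connections")  # 18 PTs, 54 connections

# dominant start-up cost per PT (seconds, from the quoted load times)
startup = 6.9 + 0.1 + 0.06
print(f"~{startup:.1f} s per PT")  # ~7.1 s per PT
```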

Page 11: Software Installation Image

Software repositories (project builds, ~6.5 GByte of software): TDAQ, Offline, HLT, TDAQ Common
• Example partitions / data files
• Test suites
• Setup / installation scripts

Originally developed for the Large Scale Test 2005. Contains, in one file, a consistent set of all the software needed to run a standalone HLT setup. Completely tested before deployment by PESA, HLT and DAQ specialists. Used for the first exploitation of the pre-series. Useful for outside-CERN installations and new test-bed setups.

The P1 installation procedure is presently being worked out; future images will be snapshots of the P1 installation.
https://twiki.cern.ch/twiki/bin/view/Atlas/HltImage

Page 12: HLT Core Software

Work plan defined for the design review 2005 (https://uimon.cern.ch/twiki/bin/view/Atlas/HLTReviewsPage).

HLT compliant with trigger operation:
• Integration with the most recent TDAQ software
• Cycling through the TDAQ state machine (start/stop/reinitialize/…)
• HLT trigger configuration from database
• Use of the conditions DB in the HLT
• Integration with online services for error reporting and system monitoring

Many of these issues have a direct impact on the selection algorithms; functionality needs to be available early in the core software to give time to the algorithm developers.

System performance optimisation: instrumentation for the measurement of network transfer times, data volumes and ROS access patterns (complementary to work in the PESA group).

For commissioning and readout tests: basic fault tolerance, stability. Move to the new compiler gcc 3.4.4 and operating system SLC 4.

Page 13: Trigger Configuration Data Base

[Diagram: the TriggerDB is populated via the TriggerTool and DB population scripts (by shift crew, offline users, experts) and read through an R/O interface by the Configuration System and compilers, for both online and offline running.]

TriggerTool:
• GUI for DB population
• menu changes for experts (HLT and LVL1)

TriggerDB:
• stores all the information needed to configure the trigger: LVL1 menu, HLT menu, HLT algorithm parameters, HLT release information
• versions identified with a key in the Configuration and Conditions DB

Retrieval of information for running: get the information via a key, either as XML/JobOption files or by direct DB read-out, for both online and offline running. LVL1 + HLT as an integrated system.

http://indico.cern.ch/getFile.py/access?contribId=72&sessionId=2&resId=7&materialId=slides&confId=048
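The key-based retrieval described above can be illustrated with a minimal sketch (SQLite stands in for the real database back end; the table and column names below are hypothetical, not the real TriggerDB schema):

```python
import sqlite3

# Minimal sketch of key-based trigger-configuration retrieval.
# Schema is illustrative only, not the real TriggerDB schema.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE trigger_config (
    config_key INTEGER PRIMARY KEY,
    lvl1_menu  TEXT,
    hlt_menu   TEXT)""")
db.execute("INSERT INTO trigger_config VALUES (42, 'LVL1 menu v1', 'HLT menu v1')")

# A run records only the key (e.g. in the conditions DB); online and
# offline running both resolve the full configuration from it.
config_key = 42
lvl1, hlt = db.execute(
    "SELECT lvl1_menu, hlt_menu FROM trigger_config WHERE config_key = ?",
    (config_key,)).fetchone()
print(lvl1, "/", hlt)
```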

Page 14: Example: Data Base Schema

[Diagram: database schema linking the LVL1 and HLT parts: trigger menu, algorithms/jobOptions, software release. Keys are stored in the conditions DB to retrieve the information (online and offline).]

An early prototype of the HLT part has already been run on a 6-node system with the muon selection algorithm.

Page 15: Global Monitoring Scheme

[Diagram: events sampled from the dataflow (ROD, ROS, Event Builder) are distributed via the Event Monitoring Service to GNAM (with detector-specific plug-ins), to Athena (detector-specific Athena algorithms, AthenaMonitoring) and to event displays. Histograms are published to the Online Histogramming Service, merged by the Gatherer and displayed with OHP; an Analysis Framework and the Monitoring Data Storage complete the scheme.]

Page 16: GNAM Monitoring

Principle: decouple and hide the common actions from the monitoring algorithms.

[Diagram: the GNAM core loads user libraries; events arrive from the dataflow via the Event Monitoring Service, histograms are published to the On-line Histogramming Service, and commands come from the Presenter/Viewer/Checker.]

GNAM core (common actions):
• synchronisation with the DAQ
• event sampling
• decoding of the detector-independent part
• publication and saving of the histograms
• command handling (update, reset, rebin)
• tools for the algorithms (circular buffer, histogram flags, histogram metadata, ...)

Monitoring algorithms (dynamic libraries loaded at run-time):
• detector-dependent decoding
• histogram booking and filling
• handling of specific commands
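The decoupling principle above (a core that owns the common actions, with detector-specific algorithms loaded as plug-ins) can be sketched as follows; all class and method names here are illustrative, not the real GNAM C++ API:

```python
# Sketch of the GNAM idea: the core handles the common actions (event
# distribution, histogram publication, command handling) and delegates
# detector-dependent decoding/filling to plug-ins loaded at run time.
class GnamCore:
    def __init__(self):
        self.plugins = []
        self.histograms = {}     # stands in for the Online Histogramming Service

    def load(self, plugin):
        self.plugins.append(plugin)

    def on_event(self, event):   # common action: pass sampled events around
        for p in self.plugins:
            p.fill(event, self.histograms)

    def on_command(self, cmd):   # common commands: update / reset / rebin
        if cmd == "reset":
            for bins in self.histograms.values():
                bins.clear()

class TileMonitor:               # hypothetical detector-specific plug-in
    def fill(self, event, histograms):
        histograms.setdefault("tile/adc", []).append(event["adc"])

core = GnamCore()
core.load(TileMonitor())
core.on_event({"adc": 512})
core.on_event({"adc": 740})
print(len(core.histograms["tile/adc"]))  # 2
core.on_command("reset")
print(len(core.histograms["tile/adc"]))  # 0
```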

Page 17: Online Histogram Presenter (OHP)

Interactive presenter developed in close connection with the GNAM monitoring; however, it can display histograms published on the OHS by any producer. Designed to be used both in expert mode (a browser of all the histograms on the OHS) and in shifter mode (a histogram presenter showing only predefined sets of histograms in configured tabs). Completely interactive with the GNAM core (rebin, reset, …).

Completely redesigned after the CTB experience, to minimise network traffic and achieve a scalability appropriate for the whole of ATLAS. A very useful collaboration with Computer Science students has been established.

[Screenshot: the browser part; preconfigured sets of histograms in tabs; commands to the core (rebinning, reset, ...).]

Page 18: Monitoring: commissioning

A GNAM-based system for on-line detector monitoring/analysis/validation has been developed:
• production of histograms displayed with the On-line Histogram Presenter (OHP)
• on-line event display (in collaboration with Saclay)

In use for commissioning since September 2005. Under development: retrieval of the detector configuration from the DB, automatic checks and alarm generation. Used by Tile and MDT; interest expressed by others.

Page 19: ROD Crate DAQ

RCD is used as the interface to the RODs for control, configuration, monitoring and data readout (via VME).

The RCD developments have had essentially two phases:

ReadoutApplication (i.e. the application that constitutes the ROD Crate DAQ, the ROS and the Data Driven Event Builder) substantially modified to accommodate all the detector requests and be ready with all the functionality needed for commissioning:
• standardised access to the Information Service and Online Histogramming
• possibility of accessing data in response to interrupts
• simplified construction of the classes for module control and acquisition
• definition and implementation of a data-driven event builder
• libraries for the standardised handling of error conditions

Detector support for commissioning: new development needed to guarantee, through a simple common interface to RAL/CORAL, that access to the configuration database is thread-safe (initialisation phase).

Page 20: RCD activities

• Detector-specific part of the ROD Crate DAQ for MDT and RPC
• Databases: cabling database (a lot of work!), configuration database, and the online/monitoring interfaces to them
• Detector Control System (DCS): the whole RPC DCS and the HV/LV control of the MDTs is Italian
• Muon Sector 13: combined MDT-Tile runs triggered by scintillators; synchronisation studies

Page 21: MDT online calibration

The required precision for the t0 and r-t autocalibration needs inclusive muon rates of 0.33 kHz. Not suitable for EF calibration streams; needs partial event building and streaming (under study). Already possible using the LVL2 infrastructure with some modifications.

[Diagram: ~25 multi-threaded L2PUs push calibration fragments through a memory queue/dequeue to ~20 Local Servers (~480 kB/s each), which forward them (TCP/IP, UDP, etc.) to a Calibration Server (~9.6 MB/s total) feeding the Gatherer, the calibration farm and a disk server.]
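The bandwidth figures in the diagram are consistent with each other (a sketch using only the numbers quoted on this slide):

```python
# MDT calibration stream bandwidth check: ~20 local servers at
# ~480 kB/s each should add up to the ~9.6 MB/s quoted into the
# calibration server.
local_servers = 20
per_server_kb_s = 480.0
total_mb_s = local_servers * per_server_kb_s / 1000.0
print(total_mb_s)  # 9.6
```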

Page 22: Routing calibration data

Page 23: SDX1 – TDAQ Room @ P1

A total of 99 racks can be placed in SDX1:
• Lower level: 49 (LVL2, EB, …)
• Upper level: 50 (EF)

Page 24: ROS Overview

[Diagram: the same dataflow architecture as on page 4: the ATLAS detector and first-level trigger feed the Read-Out Drivers (RODs, UX15, dedicated links, VME); data of events accepted by the first-level trigger pass over 1600 Read-Out Links into ~150 ROS PCs in USA15; Timing Trigger Control (TTC) and the RoI Builder with its Regions of Interest complete the readout side. In SDX1, 10-Gigabit Ethernet and network switches connect the LVL2 Supervisor, the LVL2 farm (~500 dual-CPU nodes), the pROS, the DataFlow Manager, the Event Builder Sub-Farm Inputs (~100 SFIs) and the Event Filter (~1600 nodes); event data requests, requested event data and delete commands flow between the two; ~30 Sub-Farm Outputs (SFOs) write to local storage at an event rate of ~200 Hz.]

• In total ~150 ROS PCs will have to be installed
• Each ROS PC will be equipped with 3 or 4 ROBIN cards and one 4-port Gbit Ethernet NIC

[Photos: a ROBIN card; ROS PCs in USA15.]
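The ROS sizing numbers hang together: ~150 PCs with 3-4 ROBIN cards each must terminate the ~1600 Read-Out Links, which works out to roughly 3 links per ROBIN (the links-per-ROBIN figure is derived here, not stated on the slide):

```python
# ROS sizing check: ~1600 Read-Out Links over ~150 ROS PCs.
ros_pcs = 150
links = 1600
links_per_pc = links / ros_pcs
print(f"{links_per_pc:.1f} links per ROS PC")  # 10.7

# with 3 or 4 ROBINs per PC, that is ~3 links per ROBIN card
for robins in (3, 4):
    print(f"{robins} ROBINs -> {links_per_pc / robins:.1f} links/ROBIN")
```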

Page 25: Hardware Procurement

ROS PCs:
• 1st batch (50 PCs): ordered and received
• 2nd batch (60 PCs): ordered, delivery scheduled for May
• Remaining ROS PCs + spares: will be ordered later

ROBINs:
• German production (350 cards): ordered and received (~20 cards did not pass the production test and still need to be repaired)
• UK production (350 cards): ordered, delivery scheduled for March

4-port NICs (Silicom):
• Ordered, delivery scheduled for May

Page 26: Current Status of ROS Racks in USA15

[Table: status (installed, power & network cables, commissioned at ROS level, commissioned ROD - ROS) of the ROS racks, each holding a control switch and ROS PCs. Liquid Argon racks Y.09-16.A2, Y.08-16.A2, Y.07-16.A2, Y.06-16.A2, Y.05-16.A2, Y.04-16.A2: three of the six have power and network cables in place, one is commissioned at ROS level, none is yet commissioned ROD - ROS. TileCal rack Y.09-14.A1: yes / yes / 50%.]

Page 27: PESA

• PESA Core SW is responsible for the implementation of the Steering and Control (built around standard Athena components)
• PESA Algorithms develops HLT software using realistic data access and handling: specialised LVL2 and Event Filter algorithms, adapted for on-line, deployed in HLT testbeds
• PESA Validation and Performance evaluates the algorithms on data samples to extract efficiencies, rates, rejection factors and physics coverage

Stems from the original structure, laid out in parallel with the organisation of the Combined Performance working groups, in "vertical slices" (LVL1+LVL2+EF): electrons and photons, muons, jets/taus/ETmiss, b-jet tagging, B-physics.

Page 28: HLT Reconstruction Algorithms

HLT feature-extraction algorithms are available for each slice.

Calorimeter algorithms:
• LVL2 and EF algorithms ready for e/γ; τ implementation ready at LVL2
• The offline tool adapted to the EF is ready for JetCone

Muon algorithms:
• LVL2 and EF algorithms are available for the barrel region; work has started on extending the LVL2 algorithm to the endcap
• ID-to-muon track matching tools are available at LVL2 and EF
• Muon isolation studies using the calorimeters are being performed

ID tracking:
• Tracking with Si data is ready at LVL2 and EF; more approaches are studied in parallel
• Tools available for both track extension to the TRT and stand-alone TRT reconstruction; emphasis on providing a robust tool for commissioning and early running

Page 29: Selections: e/γ

Selection | Eff (%) | Rate
L1 | 95.5 | 4.7 kHz
L2 Calo | 94.9 | 890 Hz
L2 ID | 91.0 | 280 Hz
L2 Match | 89.7 | 98 Hz
EF Calo | 87.6 | 65 Hz
EF ID | 81.8 | 35 Hz
EF Match | 81.0 | 35 Hz

Cluster composition: W→eν 21%, Z→ee 5%, direct photons or quark bremsstrahlung 5%, e from b, c decays 37%, rest 32%.

Rate and efficiency studies performed for the main physics triggers: e25i, 2e15i, e60, γ60, 2γ20i. Results for 11.0.4 are perfectly in agreement with the Rome results. Tools have been developed to optimise the selections. In the future, results will be provided as efficiency vs. rejection curves, giving a continuous set of working points: essential for trigger bandwidth optimisation.
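From the cumulative numbers in the table above one can read off the marginal efficiency and rate reduction of each selection step:

```python
# Marginal efficiency and rate reduction per selection step, from the
# cumulative efficiencies (%) and rates (Hz) in the e/gamma table above.
steps = [("L1", 95.5, 4700.0), ("L2 Calo", 94.9, 890.0),
         ("L2 ID", 91.0, 280.0), ("L2 Match", 89.7, 98.0),
         ("EF Calo", 87.6, 65.0), ("EF ID", 81.8, 35.0),
         ("EF Match", 81.0, 35.0)]

for (_, e0, r0), (name, e1, r1) in zip(steps, steps[1:]):
    print(f"{name}: step efficiency {e1 / e0:.1%}, rate reduced x{r0 / r1:.1f}")
```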

Page 30: Selections: LVL2 μ

Implemented curvature radius instead of sagitta:
• more suitable for the endcap, recovers efficiency in the barrel
• same algorithm across ±2.4 in η
• endcap extension in progress

Combined reconstruction (μComb) with the ID: refines the μFast pT by means of ID data, giving a sharper 6 GeV threshold.

Resolution: new LUTs for the radius; slightly worse than 10.0.3; the resolution is OK for standard sectors.

Turn-on curves: 11.0.3 comparable with 10.0.3; worse efficiency in the feet region (special sectors).

Page 31: LVL2 cosmics

[Event display, Monte Carlo (/castor/cern.ch/user/m/muonprod/cosmics/cosmics.dig.atlas-dc3-02._0004.pool.root): a muon track from the surface crossing the barrel muon stations (BIS, BML, BMS, BOS), with MDT hits (shown as station centres in X-Y) and RPC hits (pairs of phi/eta strips); X-Y and Z-R (bending plane) views; straight-line extrapolation from y = +98.3 m.]

The MDT and RPC hits are there and look fine. The conversion of RDOs to coordinates seems fine too. Next steps: MuFast modifications.

Page 32: Selections: EF

Studies on single-muon selections have been performed for two scenarios: a 6 GeV threshold at 10^33 cm^-2 s^-1 luminosity and 20 GeV at 10^34 cm^-2 s^-1. Cuts are defined so that a 95% efficiency is achieved at the threshold values. Layout Q (barrel only); MuId Combined used at the EF; the MuComb rate reduction at LVL2 is still to be included. Fake rates are expected to be ~1% (~12%) of the total rate for safety factor ×1 (×5) with this threshold (seeded mode).

Turn-on curves:
• lower values of the efficiency plateau
• less sharp curves near the thresholds
• more points are needed for a better curve definition

Muon sources (rates at L = 10^34):

Source | No backgr. | s.f. ×1 | s.f. ×5
π/K | 54 Hz | 54 Hz | 48 Hz
b | 77 Hz | 77 Hz | 68 Hz
c | 30 Hz | 30 Hz | 26 Hz
W | 22 Hz | 22 Hz | 19 Hz
t | negligible | negligible | negligible
Total | ~185 Hz | ~190 Hz | ~180 Hz
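The "Total" row of the table is close to the sum of the individual sources; a quick check for the no-background column:

```python
# Sum of the single-muon source rates (no-background column of the
# table above; the t contribution is negligible).
rates_hz = {"pi/K": 54, "b": 77, "c": 30, "W": 22}
total = sum(rates_hz.values())
print(total, "Hz vs quoted ~185 Hz")  # 183 Hz vs quoted ~185 Hz
```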

Page 33: Jets/Taus/ETmiss

• The LVL2 calo algorithm for taus was recently separated from egamma
• Ongoing performance studies for the selection strategies on the variables
• At present only EM calibration for the cluster energies: need for a tau calibration (also for the EF; H1-style as in the offline mode?)
• A first implementation of EF "seeded" TrigTauRec is already working, making use of offline tools
• Once the selection strategies are defined, physics trigger-aware analyses (studying the effect of the hadronic tau trigger) can be performed

Three different strategies (concerning the data preparation) are being considered for ETmiss:
• read out the calorimeter and unpack the cells (the unpacking time may dominate)
• read out the calorimeter and get Ex/Ey as calculated in the ROD (faster, but … resolution?)
• read out the TriggerTowers from the LVL1 Preprocessor

Ongoing work to define and study a general strategy for pre-scales, in particular for jet objects.

Page 34: Jet triggers and prescales

Page 35: Selections: b-tagging

Two classes of tagging variables can be used: track variables (x_T) and collective (vertex) variables (x_V). The weight of each RoI is computed using the likelihood-ratio method, where S_sig and S_bkg are the probability densities for signal (b-jets) and background:
• W_T: transverse (d0/σ_d0) and longitudinal (z0) impact parameters
• W_V: secondary vertex energy and mass (statistical approach)

Recent work to combine SimpleVertex (1-dim fit) and VKalVrt (offline algorithm adapted to LVL2).

[Plot: tagging performance for impact parameters alone, impact parameters + probabilistic vertex, and impact parameters + VKalVrt/SimpleVertex combined.]
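The likelihood-ratio weight referred to above conventionally combines the per-variable signal/background density ratios; a toy sketch, where the Gaussian densities are illustrative only, not the real reference distributions of the LVL2 b-tagging:

```python
import math

# Toy likelihood-ratio tag: combine variables x_i into
# W = prod_i S_sig(x_i) / S_bkg(x_i), mapped to X = W / (1 + W) in [0, 1].
def likelihood_weight(values, pdf_sig, pdf_bkg):
    w = 1.0
    for x in values:
        w *= pdf_sig(x) / pdf_bkg(x)
    return w / (1.0 + w)

def gauss(mean):
    return lambda x: math.exp(-0.5 * (x - mean) ** 2) / math.sqrt(2 * math.pi)

pdf_sig = gauss(2.0)   # b-jets: larger impact-parameter significance (toy)
pdf_bkg = gauss(0.0)   # light jets: centred at zero (toy)

print(round(likelihood_weight([2.5, 1.8], pdf_sig, pdf_bkg), 2))   # signal-like
print(round(likelihood_weight([0.1, -0.3], pdf_sig, pdf_bkg), 2))  # background-like
```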

Page 36: RoI-based B-physics

Aim: use the calorimeter to identify regions of the event containing B-decay products: EM RoIs for e and gamma, jet RoIs for hadronic B decays. Keep the multiplicity low (1-2) to minimise data transfer and CPU, whilst maximising the efficiency for the events used in physics studies.

The effect of different thresholds (EM & HAD) and of the jet RoI size on this multiplicity was studied using Rome data (1×10^33) with the new TTL LVL1 simulation and pile-up. The requirement on the multiplicity implies an ET threshold of ~2 GeV for the LVL1 EM RoI.

[Plot: LVL1 EM RoI multiplicity vs. ET cut (LVL1 threshold energy in GeV), for tower thresholds of 500 MeV (default), 750 MeV and 1000 MeV.]

Page 37: Trigger-aware physics analysis

Analyses which use trigger information as a "pre-processor" to correctly evaluate efficiencies, physics reach, etc. This requires:
• the trigger decision in the AOD + Tag DB
• more detailed trigger-related information in the ESD/AOD
• the ability to re-run the hypothesis part of the event selection on AOD
• contributions by the physics groups to selection tuning

Steering: supports serialising simple objects to include in the LVL2 result; EF result in progress.

Muon slice: L1 RoI seeding, L2 MuFast seeding, EF TrigMoore are working. MuonFeature serialised and passed via the L2 result; TrigMoore output individually persistent via POOL, so already in the AOD but without HLT navigation.

Electron slice: TrigInDetTrack and TrigElectron persistable (HLT Serializer and POOL); EMShowerMinimal + CaloCluster are not compatible with the serializer, so cannot be used for the moment. Suitable electron hypotheses are being developed, special ones within the constraints of the demo.

Page 38: Trigger & Physics Weeks

To proceed with the implementation of the guidelines of the ATLAS Operation Model, it has recently been proposed to hold few-day trigger workshops in 2006, where the full experiment gets together to discuss trigger issues.

It was also considered good to couple these meetings to the physics workshops organised for some time by Physics Coordination, in order to further strengthen the links between trigger, (combined) detector performance and physics.

The aim of these weeks is to bring together trigger, detector performance and physics studies, and to expose trigger issues and strategy to a broad ATLAS audience.

Defined dates: 20-23 March 2006, 29 May - 1 June 2006, 30 October - 2 November 2006.

Page 39: 2006 PESA Milestones

• LVL1/HLT AODs fully available in Rel 12 for trigger-aware analyses - Apr 06. Very preliminary AOD information available in Rel 11; a detailed description of the Rel 12 deliverables has been prepared by Simon.
• HLT algorithm reviews complete - Jun 06. A detailed review of the ID LVL2 algorithms has already taken place; focus on system performance and implementation; results fed back into Rel 13.
• Online tests of the selection slices with preloaded mixed files and large menus - Sep 06. First production version of the trigger configuration.
• Selection software ready for the cosmic run - Oct 06. Already in PPT; its meaning needs to be refined.
• Blind test of the HLT selection - Dec 06. In discussion with physics coordination: take a sample of representative events from initial ATLAS output and run the full menu.

Page 40: ATLAS HLT/DAQ Valerio Vercesi on behalf of all people working Referee Marzo 2006

Referee Marzo 2006 Valerio Vercesi - INFN Pavia 40

PESA PlanningPESA Planning Several interactions with PESA Slice coordinators and with Algorithms developers Try and bring together something to help reinforcing the content of proposed milestones and

monitoring the development process Only gone through first iteration until now… Try always to describe the work in a “task oriented” fashion, to help identifying weak areas as well as

facilitate the job assignment Attempt to build a full PESA planning (Excel) starting from this information to monitor progress and

allow for updates, suggestions, improvements Clearly more details on near-future objectives than on far-away ones http://agenda.cern.ch/askArchive.php?base=agenda&categ=a057236&id=a057236s1t0/schedule

PESA Planning

| Task | Comments | Expected | PPT Workpackage |
|---|---|---|---|
| LVL1 Trigger | | | |
| Definition of EDM | Done? | Dec 05 | |
| …………………… | …………………… | ……… | ……… |
| Slices | | | |
| e/gamma implementation in common framework | RTT, ESD, Root Analysis Framework | February 2006 | DH-W101 |
| Develop tools for automatic optimisations of e/gamma selections | Scanning of parameter space, Minuit fitting; neural net, multivariate method being developed | March 2006 | DH-W101 |
| Check trigger selection w.r.t. offline selection for electrons/photons | Need new evaluations from offline groups | March 2006 | DH-W101 |
| Establish set of pre-scaled e-triggers using Rome datasets | Photons as well | February 2006 | DH-W101 |
| First evaluation of trigger efficiencies from data | For electrons, photons and muons | March 2006 | DH-W101 |
| Strategies for ETmiss calculations | | March 2006 | |
| Revised Steering Configuration | | | DH-W110 |
| Prototype LVL2 Hypothesis algorithm for all | Examples to be further developed in validation | February 2006 | DH-W147 |
| Provide documentation and examples to physics community | For all selections | March 2006 | |

Milestone April 2006: LVL1/HLT AODs completely available in version 12 for trigger-aware analyses


Accounting

INFN contribution to the pre-series
- 140 KCHF (ROS racks, monitoring, operations, switches, file server), fully spent within 2005
- For this and for the rest, VV receives a copy of all invoices

CORE contribution 2005-2006
- Online Computing System: 45+135 KCHF (monitoring, operations)
  - 45 KCHF sent to CERN in May 2005; four file servers already purchased
- Read-Out System: 275+275 KCHF (ROS racks)
  - The CERN tender for the first tranche (50 ROS) was completed with considerable delay; the remaining part is being delivered (60 ROS in May)
  - About 200 KCHF charged to INFN so far (on Roma1)
- LVL2 processors, Event Building, Event Filter processors: 65+50+170 KCHF
  - Detailed specifications being finalized (especially for the HLT processors)
  - A market survey carried out by CERN-IT may possibly be used
  - Studies also under way to evaluate the latest technologies (Moore's law failures…)
- Infrastructure: 80 KCHF (cables, racks, cooling,…)


Cost Profile (KCHF)

|                     | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | Total |
|---------------------|------|------|------|------|------|------|-------|
| Pre-series          |  140 |    0 |    0 |    0 |    0 |    0 |   140 |
| Detector R/O        |    0 |  275 |  275 |    0 |    0 |    0 |   550 |
| LVL2 Proc           |    0 |    0 |   65 |  195 |  230 |  160 |   650 |
| Event Builder       |    0 |    0 |   50 |   50 |  110 |   70 |   280 |
| Event Filter        |    0 |    0 |  170 |  180 |  570 |  380 |  1300 |
| Online              |    0 |   45 |  135 |    0 |    0 |    0 |   180 |
| Infrastructure      |    0 |    0 |   80 |   80 |   20 |   20 |   200 |
| INFN Total          |  140 |  320 |  775 |  505 |  930 |  630 |  3300 |
| TDR Total           | 1048 | 3357 | 4087 | 4544 | 7522 | 4543 | 25101 |
| INFN Percentage (%) | 13.4 |  9.5 | 19.0 | 11.1 | 12.4 | 13.9 |  13.1 |
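The cost-profile figures are internally consistent: each row's yearly amounts sum to its Total, the yearly INFN totals are the column sums, and the percentage row is the INFN total over the TDR total. A quick sanity check, with all numbers copied from the table above:

```python
# Consistency check of the cost-profile table (all figures in KCHF,
# copied from the presentation; nothing here is computed from new data).
rows = {
    "Pre-series":     [140, 0, 0, 0, 0, 0],
    "Detector R/O":   [0, 275, 275, 0, 0, 0],
    "LVL2 Proc":      [0, 0, 65, 195, 230, 160],
    "Event Builder":  [0, 0, 50, 50, 110, 70],
    "Event Filter":   [0, 0, 170, 180, 570, 380],
    "Online":         [0, 45, 135, 0, 0, 0],
    "Infrastructure": [0, 0, 80, 80, 20, 20],
}
tdr_total = [1048, 3357, 4087, 4544, 7522, 4543]

# Yearly INFN totals are the column sums over all items.
infn_by_year = [sum(col) for col in zip(*rows.values())]

# INFN share per year: 100 * INFN / TDR, rounded to one decimal.
percentages = [round(100 * i / t, 1) for i, t in zip(infn_by_year, tdr_total)]
overall = round(100 * sum(infn_by_year) / sum(tdr_total), 1)
print(infn_by_year, percentages, overall)
```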


INFN Milestones

30/06/2005 - TDAQ - Installation, test and use of the "pre-series" (~10% TDAQ slice)
- "Fully" achieved in October: delays accumulated mostly on component procurement
- We propose to mark it 100% and change the "matching date"

24/12/2005 - TDAQ - Installation and test of the Pixel, LAr, Tile, Muon ROSs (interfacing to the ROD Crate and integration into the DAQ)
- Strong dependence on the ROS delivery date (slow tender, etc.)
- No problem "in principle": the work programme is clear, and the pre-series experience is directly transferable
- We propose to mark it 50% at the foreseen date

30/04/2006 - Completion of the pre-series tests and definition of the functionality to support TDAQ commissioning

31/08/2006 - Commissioning of the detector ROS slices using the pre-series functionality (module-0 of the final system)

31/12/2006 - Integrated data taking of the detectors in the pit with cosmic rays


Goal of Early Commissioning (Near ATLAS Calibration Workshop, Strba, Nov. 2004)

Prepare for unexpected events…


…and the unexpected is just around the corner…