
Page 1

Summary Computing and DAQ

Walter F.J. Müller, GSI, Darmstadt

5th CBM Collaboration Meeting, GSI, March 9-12, 2005

Page 2

Computing and DAQ Session

Thursday 14:00 – 17:00 – Theory Seminar Room

Handle 20 PB a year

CBM Grid – first steps

Controls, not an afterthought this time

Network & processing

p-p, p-A – > 10^8 int/sec

Page 3

Computing and DAQ Session

Page 4

Data rates

• Data rates into HLPS
  • Open charm: 10 kHz * 168 kbyte = 1.7 Gbyte/sec
  • Low-mass di-lepton pairs: 25 kHz * 84 kbyte = 2.1 Gbyte/sec
• Data volume per year – no HLPS action: 10 Pbyte/year
  • ALICE = 10 Pbyte/year: 25% raw, 25% reconstructed, 50% simulated

slide from D. Rohrich
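A quick back-of-the-envelope check of the quoted input rates (a sketch in Python; 1 Gbyte = 10^9 byte is assumed here):

```python
# Check of the HLPS input rates quoted above (1 Gbyte = 1e9 byte assumed).
channels = {
    "open charm":               (10_000, 168_000),  # 10 kHz, 168 kbyte/event
    "low-mass di-lepton pairs": (25_000,  84_000),  # 25 kHz,  84 kbyte/event
}
for name, (rate_hz, size_byte) in channels.items():
    print(f"{name}: {rate_hz * size_byte / 1e9:.1f} Gbyte/sec")
# -> open charm: 1.7 Gbyte/sec
# -> low-mass di-lepton pairs: 2.1 Gbyte/sec
```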

Page 5

Processing concept

• HLPS' tasks
  • Event reconstruction with offline quality
  • Sharpen open charm selection criteria – reduce event rate further
  • Create compressed ESDs
  • Create AODs
• No offline re-processing
  • Same amount of CPU time needed for unpacking and dissemination of data as for reconstruction
  • RAW -> ESD: never
  • ESD -> ESD': only exceptionally

slide from D. Rohrich

Page 6

Data Compression Scenarios

• Loss-less data compression
  – Run-length encoding (standard technique)
  – Entropy coder (Huffman)
  – Lempel-Ziv
• Lossy data compression
  – Compress 10-bit ADC into 8-bit ADC using a logarithmic transfer function (standard technique)
  – Vector quantization
  – Data modeling

Perform all of the above wherever possible

slide from D. Rohrich
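A minimal sketch of the lossy 10-bit to 8-bit compression mentioned above, assuming a log1p-shaped transfer function (the actual curve is not specified on the slide):

```python
import math

def adc_compress(x: int, in_max: int = 1023, out_max: int = 255) -> int:
    """Map a 10-bit ADC value to 8 bits with a logarithmic transfer
    function: fine resolution at small amplitudes, coarse at large ones."""
    return round(out_max * math.log1p(x) / math.log1p(in_max))

def adc_expand(y: int, in_max: int = 1023, out_max: int = 255) -> int:
    """Approximate inverse; the quantization error grows with amplitude."""
    return round(math.expm1(y * math.log1p(in_max) / out_max))

for x in (0, 5, 100, 1023):
    y = adc_compress(x)
    print(x, "->", y, "->", adc_expand(y))
# small codes survive almost exactly; large ones lose their low bits
```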

Page 7

Offline and online issues

• Requirements to software– offline code = online code

• Emphasis on– Run-time performance

– Clear interfaces

– Fault tolerance and error recovery

– Alignment

– Calibration

– ”Prussian” programming

slide from D. Rohrich

Page 8

Storage concept

Main challenge of processing heavy-ion data: logistics

• No archival of raw data
• Storage of ESDs
  – Advanced compression techniques: 10–20%
  – Only one pass
• Multiple versions of AODs

slide from D. Rohrich
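If the 10–20% refers to the compressed-ESD size relative to the 10 Pbyte/year raw-equivalent from the earlier data-rates slide (an assumption; the slide does not say so explicitly), the archive volume follows directly:

```python
# Implied archive volume, assuming "10-20%" is relative to the
# 10 Pbyte/year raw-equivalent quoted on the "Data rates" slide.
raw_equivalent_pb = 10.0
lo, hi = 0.10 * raw_equivalent_pb, 0.20 * raw_equivalent_pb
print(f"archived ESDs: {lo:.0f}-{hi:.0f} Pbyte/year")   # -> 1-2 Pbyte/year
```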

Page 9

Dubna educational and scientific network – Dubna-Grid Project (2004)

More than 1000 CPUs

Laboratory of Information Technologies, JINR; University "Dubna"; Directorate of the programme for development of the science city Dubna; University of Chicago, USA; University of Lund, Sweden

Creation of a Grid testbed on the basis of resources of Dubna scientific and educational establishments, in particular JINR laboratories, International University "Dubna", secondary schools, and other organizations

slide from V. Ivanov

Page 10

Summary (middlewares)

● LCG-2: GSI and Dubna
  - pro: large distribution, support
  - contra: difficult to set up, no distributed analysis
● AliEn: GSI, Dubna, Bergen
  - pro: in production since 2001
  - contra: uncertain future, no support
● Globus 2: GSI, Dubna, Bergen?
  - pro/contra: simple but functioning (no RB, no FC, no support)
● gLite/GT4: new on the market
  - pro/contra: nobody has production experience yet (gLite)

slide from K. Schwarz

Page 11

CBM Grid – Status

• CBM VO server set up
• First certificate in work
• Use for MC transport production this summer
• Initial participants: Bergen, Dubna, GSI, ITEP
• Initial middleware: AliEn (available on all 4 sites, a good workhorse)

Page 12

ECS (Experiment Control System)

• Definition of the functionality of ECS and DCS
• Draft of the URD (user requirements document)
• Constitute an ECS working group

Page 13

FEE – DAQ Interface

Three logical interfaces on the FEE:

• Hit data (out only)
• Clock and time (in only)
• Control (bidirectional)

[Diagram: several FEE boards in the cave feed a concentrator or read-out controller in the shack]

3 specs: Time, DAQ, DCS – first drafts ready for the fall 2005 CBM TB meeting

Diversity inevitable – common interfaces indispensable
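The three logical interfaces can be written down compactly as data; the mapping to the three draft specs below is illustrative, not taken from the slide:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeeInterface:
    name: str        # logical interface, as listed on the slide
    direction: str   # seen from the FEE: "out", "in", or "bidirectional"
    spec: str        # draft spec it presumably belongs to (assumed mapping)

FEE_INTERFACES = (
    FeeInterface("hit data",       "out",           "DAQ"),
    FeeInterface("clock and time", "in",            "Time"),
    FeeInterface("control",        "bidirectional", "DCS"),
)

for itf in FEE_INTERFACES:
    print(f"{itf.name}: {itf.direction} ({itf.spec} spec)")
```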

Page 14

DAQ BNet – currently investigated structure

[Diagram: n × n switches with n * (n - 1) / 2 bidirectional connections between them; each switch has n - 1 ports toward the CNet side and n - 1 ports toward the PNet side; DD/ED nodes on CNet and PNet, a DD/HC active buffer on the HCNet, TG/BC attached to the BNet controller]

Legend: H: histogrammer, TG: event tagger, HC: histogram collector, BC: scheduler, DD: data dispatcher, ED: event dispatcher

n = 4 : 16x16

slide from H. Essel
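The scaling rules from the diagram in a few lines (the n = 4 -> 16x16 figure is from the slide; treating the fabric size as n^2 for other n is an extrapolation):

```python
# BNet scaling: n*(n-1)/2 bidirectional inter-switch links, n-1 ports
# per switch on the CNet side and on the PNet side (per the slide).
def bnet_size(n: int):
    links = n * (n - 1) // 2   # bidirectional switch-to-switch connections
    ports_per_side = n - 1     # CNet-facing and PNet-facing ports
    fabric = n * n             # end-node configuration; n=4 -> 16x16
    return links, ports_per_side, fabric

for n in (2, 4, 8):
    links, ports, fabric = bnet_size(n)
    print(f"n={n}: {links} links, {ports} ports/side, {fabric}x{fabric}")
```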

Page 15

DAQ BNet – simulation with SystemC

Modules:

• event generator
• data dispatcher (sender)
• histogram collector
• tag generator
• BNet controller (scheduler)
• event dispatcher (receiver)
• transmitter (data rate, latency)
• switches (buffer capacity, max. # of packages queued, 4K)

Running with 10 switches and 100 end nodes.

Simulation takes 1.5 × 10^5 times longer than the simulated time.

Various statistics (traffic, network load, etc.)

slide from H. Essel
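What the quoted slowdown factor means in wall-clock terms (a simple conversion, not from the slide):

```python
# Wall-clock cost of the SystemC model at the quoted slowdown factor.
SLOWDOWN = 1.5e5   # simulation runs ~1.5 * 10^5 times slower than real time

for simulated_s in (1e-6, 1e-3, 1.0):   # 1 us, 1 ms, 1 s of DAQ time
    print(f"{simulated_s:g} s simulated -> "
          f"{simulated_s * SLOWDOWN:g} s wall clock")
# e.g. one millisecond of simulated DAQ time costs about 150 s of CPU time
```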

Page 16

DAQ BNet – some statistics examples

[Plots of traffic statistics; single buffers excluded]

slide from H. Essel

Page 17

DAQ BNet – topics for investigation

• Event shaping

• Separate meta data transfer system

• Addressing/routing schemes

• Broadcast

• Synchronization

• Determinism

• Fault tolerance

• Real test bed

slide from H. Essel

Page 18

Overview of Processing Architecture

Processing resources:

• Hardware processors – L1/FPGA
• Software processors – L1/CPU
• Active buffers
• Sub-farm network – PNet

Joachim Gläß, Univ. Mannheim, Institute of Computer Engineering

slide from J. Gläß

Page 19

Architecture of R&D Prototype

• Communication via backplane
  – 4 boards, all-to-all
  – different lengths of traces
  – up to 10 Gbit/s serial
  – => FR4 / Rogers

[Board diagram: XC2VPX20 FPGA with SFP connector, 2 x ZBT, 2 x DDR, and a PPC with Flash, RS232, and Ethernet; zeroXT 10GB SMT connector; microcontroller running Linux]

• FPGA with MGTs
  – up to 10 Gbit/s serial
  – => XC2VPX20 (8 x MGT)
  – => XC2VPX70 (20 x MGT)
• Externals
  – 2 x ZBT SRAM
  – 2 x DDR SDRAM
  – for PPC: Flash, Ethernet, …
• Initialization and control
  – standalone board/system
  – microcontroller running Linux

Joachim Gläß, Univ. Mannheim, Institute of Computer Engineering

slide from J. Gläß

Page 20

Conclusion

• R&D prototype to learn:
  – physical layer of communication
    • 2.5 Gbit/s up to 10 Gbit/s
    • chip-to-chip
    • board-to-board (-> connectors, backplane)
    • PCB layout, impedances
    • PCB material (FR4, Rogers, …)
  – next step: communication protocols
    • more resources needed => XC2VPX70? Virtex4? (availability?)
  – external memories
    • fast controllers for ZBT and DDR RAM
    • PCB layout, termination, …

Joachim Gläß, Univ. Mannheim, Institute of Computer Engineering

slide from J. Gläß

Page 21

DAQ Challenge

Incredibly small (unknown) cross-section:

p p → p p ϒ (m_ϒ ≈ 9.5 GeV) at √s = 13.4 GeV, i.e. 90 GeV beam energy

Q = 13.4 − 9.5 − 1.0 − 1.0 = 1.9 GeV (near threshold)
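A fixed-target kinematics cross-check of these numbers (a sketch; the proton mass is rounded to 0.94 GeV):

```python
from math import sqrt

m_p, E_lab = 0.94, 90.0               # GeV; beam energy as quoted above
s = 2 * m_p**2 + 2 * m_p * E_lab      # invariant mass squared, fixed target
print(f"sqrt(s) = {sqrt(s):.1f} GeV") # ~13.1 GeV, close to the quoted 13.4
Q = 13.4 - 9.5 - 1.0 - 1.0            # sqrt(s) minus the final-state masses
print(f"Q = {Q:.1f} GeV")             # 1.9 GeV: production near threshold
```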

What is the theoretical limit for the hardware and DAQ?

How can one improve the sensitivity by clever algorithms?

More questions than answers.

Page 22

Algorithms

Performance of L1 feature extraction algorithms is critical in CBM:

• STS tracking + vertex reconstruction
• TRD tracking and PID

Look for algorithms which allow a massively parallel implementation:

• Hough transform tracker – needs lots of bit-level operations, well suited for FPGAs
• Cellular automaton tracker

Co-develop tracking detectors and tracking algorithms: L1 tracking is necessarily speed-optimized (> 10^9 tracks/sec) → possibly more detector granularity and redundancy needed.

Aim for CBM: validate the final hardware design with at least 2 trackers suitable for L1.
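A toy illustration of why the Hough transform parallelizes well: every hit votes independently into a (slope, intercept) accumulator, and track candidates are cells with many votes (a Python/NumPy sketch; a real L1 tracker would use the detector geometry and fixed-point arithmetic on an FPGA):

```python
import numpy as np

def hough_tracks(hits, m_bins, b_bins, threshold):
    """Vote each hit (x, y) into (slope, intercept) cells of y = m*x + b;
    cells with at least `threshold` votes are track candidates."""
    acc = np.zeros((len(m_bins) - 1, len(b_bins) - 1), dtype=np.int32)
    m_centers = 0.5 * (m_bins[:-1] + m_bins[1:])
    for x, y in hits:                        # votes are independent per hit,
        for i, m in enumerate(m_centers):    # hence easy to parallelize
            j = np.searchsorted(b_bins, y - m * x) - 1
            if 0 <= j < acc.shape[1]:
                acc[i, j] += 1
    return np.argwhere(acc >= threshold)     # candidate (m, b) cells

# one straight toy track with slope 0.45 and intercept 1.1
hits = [(x, 0.45 * x + 1.1) for x in range(8)]
m_bins = np.linspace(-2.0, 2.0, 41)          # slope bins of width 0.1
b_bins = np.linspace(-5.0, 5.0, 41)          # intercept bins of width 0.25
print(hough_tracks(hits, m_bins, b_bins, threshold=6))  # -> one cell found
```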