Appliances, Clusters, Microcode: Trends in Post-production Infrastructure

Post on 07-Jan-2016


Appliances, Clusters, Microcode

Trends in Post-production Infrastructure

Tom Burns

Technicolor Creative Services

Disclaimer

The views expressed herein are exclusively those of the presenter, and are not indicative of any Thomson / Technicolor official position, nor an endorsement of any current or future technology development or strategy.

Appliances

• Turnkey workstations
> Bosch FGS-4000, Quantel Paintbox, Ampex ADO

• Proprietary circuit boards
> Expensive & time-consuming to improve

• Custom software
> Steep learning curve for developers

Clusters

• High Performance Computing
> Fast, low-latency network
> Shared storage
> All nodes work on the same task

• 3D Render farm
> “Embarrassingly parallel”, i.e. one frame per CPU
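The one-frame-per-CPU scheduling above can be sketched in a few lines of Python; the node names are invented for illustration, and a real farm would dispatch each frame to a render process on that node:

```python
def assign_frames(first, last, nodes):
    # Round-robin each frame to a node: frames are independent
    # ("embarrassingly parallel"), so any free node can take any frame
    # without coordinating with the others.
    jobs = {n: [] for n in nodes}
    for i, frame in enumerate(range(first, last + 1)):
        jobs[nodes[i % len(nodes)]].append(frame)
    return jobs

print(assign_frames(1, 6, ["node-a", "node-b", "node-c"]))
# {'node-a': [1, 4], 'node-b': [2, 5], 'node-c': [3, 6]}
```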

Microcode

Technology Migration over time

“Software running on general-purpose computers will outlast custom hardware every time” – but it might take years to catch up

• How can we predict which innovations are likely to succeed?

• Business processes (VFX == “pipeline”, Post == “workflow”) move up the Stack

• Appliances move down the Stack

• HW & SW solutions (once paid off) become Appliances

• Software evolves much more quickly than either of these

• Confusion between continuous and discontinuous innovation is the cause of many technology product failures

Up and Down the Technology Stack

VFX Technology Stack

VFX Render Farm

Location render farming

Merging infrastructure and workflow == pipelining

• Compression
> JPEG-2000
> H.264

• Transcoding
> Software VBR
> Multi-pass

• Audio QC
> Automated, file-based
> Faster than real-time

• Deliverables
> Multiple simultaneous file-based renders
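Multi-pass VBR transcoding can be sketched as two ffmpeg invocations; `-pass 1` / `-pass 2` are real ffmpeg options, while the codec, file names, and bitrate here are illustrative defaults, not a facility standard:

```python
def two_pass_commands(src, dst, vbitrate="5M"):
    # Build the two ffmpeg command lines for multi-pass VBR encoding.
    # Pass 1 analyzes the source and writes a stats log; pass 2 reads
    # that log to spend bits where the picture actually needs them.
    common = ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-b:v", vbitrate]
    first = common + ["-pass", "1", "-an", "-f", "null", "/dev/null"]
    second = common + ["-pass", "2", dst]
    return first, second
```

A job runner would execute the first command to completion, then the second; multiple deliverables can run such pairs simultaneously on different cluster nodes.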

Migrate bottlenecked processes to GPU

SIMD – Single Instruction, Multiple Data

> Shared memory instead of message-passing architecture

> Memory accesses are expensive

> Pack code and data tightly, since computation is cheap
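A rough CPU-side illustration of the SIMD idea, using NumPy rather than an actual GPU: one expression applies the same instruction to every element of a packed, contiguous array, instead of touching memory element by element in a Python loop.

```python
import numpy as np

def apply_gain(pixels, gain, offset):
    # Single instruction, multiple data: one vectorized expression
    # applies the same gain/offset to every sample in the array.
    return pixels * gain + offset

frame = np.arange(8, dtype=np.float32)  # toy "frame" of 8 samples
print(apply_gain(frame, 2.0, 1.0))
```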

RapidMind development tools

> AMD, Intel Multi-core x86 CPUs

> ATI/AMD FireStream 9250 GPU

> NVIDIA Tesla 10P

> Cell Broadband Engine, PS3

• Packing scalar data into RGBA in texture memory suits GPU architecture very well

GPU De-Bayering = Data-level Parallelism
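A minimal sketch of why de-Bayering is data-parallel: a naive half-resolution demosaic of an RGGB mosaic in NumPy, where every 2×2 Bayer cell undergoes exactly the same arithmetic. On a GPU this maps to one thread per cell; real de-Bayer filters use more sophisticated interpolation.

```python
import numpy as np

def debayer_half(raw):
    # Naive half-resolution demosaic of an RGGB Bayer mosaic.
    # Strided slices pick out each color site; every 2x2 cell gets
    # identical arithmetic, i.e. pure data-level parallelism.
    r = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b = raw[1::2, 1::2]
    g = (g1 + g2) / 2.0  # average the two green sites per cell
    return np.dstack([r, g, b])

raw = np.arange(16, dtype=np.float32).reshape(4, 4)
print(debayer_half(raw).shape)  # (2, 2, 3)
```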

Object intelligence migrates from blocks to files

> A file system contains a certain amount of intelligence in the file itself, in the form of:

• the filename + numerical extension
• directory placement (pathname)
• attributes (e.g. atime, mtime)

> Post uses SAN, VFX uses NAS
• Performance and cost
• Intelligence “built in” at the object level allows more flexibility in the pipeline without changing the infrastructure
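The kind of intelligence a pipeline can read straight out of the file object might look like the following; the show/shot/element path convention and the `frame.ext` naming scheme here are illustrative assumptions, not a fixed standard:

```python
import re
from pathlib import PurePosixPath

# Matches names like "comp_v002.0101.dpx": element, frame number, extension.
FRAME_RE = re.compile(r"^(?P<element>.+)\.(?P<frame>\d+)\.(?P<ext>\w+)$")

def describe(path):
    # Recover pipeline metadata purely from the file object itself:
    # its name (element + numerical frame extension) and its
    # directory placement (the shot is the parent directory).
    p = PurePosixPath(path)
    m = FRAME_RE.match(p.name)
    return {
        "shot": p.parent.name,           # directory placement (pathname)
        "element": m.group("element"),   # filename stem
        "frame": int(m.group("frame")),  # numerical extension
        "ext": m.group("ext"),
    }

print(describe("/show/sq01/sh010/comp_v002.0101.dpx"))
```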

Enterprise Service Bus Goals:

• Remap the fixed stack into a flexible pipeline

• Plan exit costs of HW as well as entry costs

• Adapt to changing business cycles

Designing an Enterprise Service Bus for Post

• Virtualize dedicated h/w processes on clusters

> Profile and provide GPU support for bottlenecks

• Re-factor pipeline for different projects

> Swap out software to adapt to business cycles

• Service Orchestration

> The “Project Coordinator” moves up the stack

> QC, Delivery, Audit, Monitoring

• Scale to distributed bus

> Decentralized – smart endpoints

> Post facility becomes hub of networked community
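A minimal in-process sketch of the decoupling a service bus provides; the topic name and the QC/delivery endpoints are invented for illustration, and a production bus would be distributed across machines:

```python
from collections import defaultdict

class ServiceBus:
    # Producers publish events by topic; decoupled smart endpoints
    # (QC, delivery, monitoring) subscribe. Swapping a service for a
    # new business cycle means re-subscribing, not rewiring the pipeline.
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:
            handler(payload)

bus = ServiceBus()
log = []
bus.subscribe("render.done", lambda p: log.append(("qc", p)))
bus.subscribe("render.done", lambda p: log.append(("delivery", p)))
bus.publish("render.done", "sh010_comp_v002")
print(log)  # both endpoints saw the same event
```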
