(MED304) The Future of Rendering: A Complete VFX Studio in the AWS Cloud | AWS re:Invent 2014
DESCRIPTION
Today's studios and visual effects companies require massive computing power and large amounts of storage to produce high-end digital scenes and videos. Maintaining the infrastructure required for these jobs is expensive and operationally difficult, and demand fluctuates from day to day. Geographically diverse workforces add further complexity to data and content transfer. The low-cost, utility computing model and unique virtualization capabilities offered by AWS are well suited to addressing these challenges. In this session, you will learn how to build and deploy a studio-quality, scalable Arnold render farm on AWS with reusable templates. We'll also demonstrate how to run Maya and Deadline remotely with AWS AppStream, and use them to edit scenes and coordinate render jobs entirely in the cloud.
TRANSCRIPT
November 13th, 2014 – Las Vegas, NV
Usman Shakeel, Gerald Tiu, and Matt Yanchyshyn
STARRING
THE BACKDROP
EPISODE I
What about the Cloud?
• Infinite
• Globally distributed
• Studio in a template
• Flexibility
• Turnaround time
• Secure
– Storage: Lustre, Avere, iSCSI, …
– Renderers: Arnold, Mental Ray, V-Ray, Mantra, RenderMan, …
– Render queue managers: Deadline, Tractor, Backburner, …
– Content creation: Maya, 3ds Max, Houdini, Nuke, Cinema 4D, …
– License managers: FlexLM, Pixar License Manager, …
Amazon AppStream
AWS Direct Connect
AMI + AWS CloudFormation Template (Automated Deployment)
[Architecture diagram: the render pipeline. On-premises storage and GPU-based modeling clients ("dumb" client nodes) connect to a render farm in AWS, with Lustre render nodes and storage servers acting as a cache in front of Amazon S3. Content lives in an Amazon S3 bucket, archived to Amazon Glacier; render nodes read from the bucket in parallel using range GETs and periodically write back using multipart uploads, spreading traffic across each instance NIC as the farm scales.]

Automated deployment and Scale
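The parallel-read and parallel-write pattern above comes down to splitting a single object into byte ranges. Here is a minimal sketch, assuming a made-up object size and chunk size, of how a client might compute the ranges for parallel `Range` GETs; the actual transfer calls (a GET with a `Range: bytes=start-end` header per worker, or a multipart upload with one `upload_part` per range) are left out.

```python
# Sketch: split an S3 object into byte ranges for parallel Range GETs
# (the same layout works as the part list for a multipart upload).
# Pure arithmetic; no AWS calls are made here.

def byte_ranges(object_size, part_size):
    """Yield (start, end) pairs, inclusive, as used in 'Range: bytes=start-end'."""
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + part_size, object_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# Example: a 1 GiB scene file fetched in 64 MiB chunks.
GIB = 1024 ** 3
MIB = 1024 ** 2
parts = byte_ranges(1 * GIB, 64 * MIB)
# Each worker issues one GET with its own Range header; the results are
# stitched back together in order.
```

Each range is independent, which is what lets the farm keep every instance NIC busy at once.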
[Diagram: AWS CloudFormation deploys a Lustre file system inside an Amazon Virtual Private Cloud. Lustre storage nodes scale horizontally to add more storage; Lustre metadata nodes scale horizontally to add more performance. Metadata nodes are hydrated at launch from an Amazon S3 bucket with content, and every rendering/modeling processing node runs a Lustre client.]
The Resting Place: FS on demand → terminate
[Diagram: at the end of a job the file system is reduced to its resting artifacts (an Amazon Glacier archive, an Amazon S3 bucket, Amazon EBS snapshots) plus the AMI and AWS CloudFormation template needed to recreate and scale it on demand.]
Create a Render Project (storage nodes + metadata nodes):
• Select the Amazon S3 bucket path to hydrate the Lustre file system
• Pre-hydrate from Amazon S3 and sync back on an ongoing basis
• Add or remove storage or metadata nodes by calling stack updates
• Delete all the resources attached to the Lustre FS by calling delete stack at the end of the job
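Adding or removing nodes by calling stack updates maps to a CloudFormation update that only changes count parameters. A hypothetical sketch follows; the parameter names `StorageNodeCount` and `MetadataNodeCount`, the stack name, and the template URL are illustrative assumptions, not taken from the talk's actual templates.

```python
# Hypothetical helper: build the keyword arguments for CloudFormation
# create_stack / update_stack calls that scale a Lustre stack by changing
# node-count parameters. All names here are illustrative.

def lustre_stack_kwargs(stack_name, template_url, storage_nodes, metadata_nodes):
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        "Parameters": [
            {"ParameterKey": "StorageNodeCount", "ParameterValue": str(storage_nodes)},
            {"ParameterKey": "MetadataNodeCount", "ParameterValue": str(metadata_nodes)},
        ],
    }

kwargs = lustre_stack_kwargs("render-lustre", "https://example.com/lustre.template", 8, 2)
# cloudformation.create_stack(**kwargs)    # initial deployment
# cloudformation.update_stack(**kwargs)    # scale by changing the counts
# cloudformation.delete_stack(StackName="render-lustre")  # end of job
```

The same three calls cover the whole lifecycle the slide describes: create, resize, delete.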
AWS Cloud ↔ On-Premises NAS
[Diagram: a fleet of virtual storage servers running in the cloud fronts an on-premises NAS for clients over 10 Gb AWS Direct Connect: automated, scalable, and backed quickly.]
Disposable Storage, Disposable Render Farm: on demand → terminate
[Diagram: compositing, effects, and rendering nodes run against scalable Lustre and virtual storage servers (cache), coordinated by a render farm manager and a license server, with content in an Amazon S3 bucket.]
Create a Render Project:
• Select AWS CloudFormation templates to generate an on-demand render farm
• Scale the render farm appropriately based on the producer's wallet
• Delete all the resources attached to the render farm by calling delete stack at the end of the render queue
create_role_and_profile
create_autoscalegroup(instance_type, security_groups, min_size, max_size)
Node bootstrap, pulled from S3:
– <s3 bucket>/Deadline/qt.repo
– mount -v -t lustre -o rw 10.1.150.219:/scratch /mnt
– <s3 bucket>/Deadline/DeadlineClient-7.0.0.39-linux-x64-installer.run
– ./DeadlineClient-7.0.0.39-linux-x64-installer.run --licenseserver @<location>
– <s3 bucket>/Autodesk_Maya_2015_SP5_English_Linux.tgz
– rpm -i Maya2015_64-2015.0-733.x86_64.rpm
– <s3 bucket>/Arnold/arnoldRenderer.xml
– cp arnoldRenderer.xml /usr/maya2015-x64/bin/rendererDesc/arnoldRenderer.xml
create_autoscalegroup(instance_type, security_groups, min_size, max_size, use_spot)
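The `create_autoscalegroup` helper presumably wraps the EC2 Auto Scaling API. A sketch of what its parameters could translate to, with illustrative names and a made-up AMI ID; when `use_spot` is set, the launch configuration carries a `SpotPrice` bid so render nodes come from the Spot market.

```python
# Illustrative expansion of create_autoscalegroup(...): build the kwargs for
# an Auto Scaling launch configuration and group. The AMI ID and the default
# Spot bid are placeholders, not values from the talk.

def render_farm_asg(name, instance_type, security_groups,
                    min_size, max_size, use_spot=False, spot_price="0.50"):
    launch_config = {
        "LaunchConfigurationName": name + "-lc",
        "ImageId": "ami-12345678",        # the pre-baked render-node AMI
        "InstanceType": instance_type,
        "SecurityGroups": security_groups,
    }
    if use_spot:
        launch_config["SpotPrice"] = spot_price  # bid; nodes come from Spot
    group = {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config["LaunchConfigurationName"],
        "MinSize": min_size,
        "MaxSize": max_size,
    }
    return launch_config, group

lc, grp = render_farm_asg("render-farm", "c3.8xlarge", ["render-sg"],
                          0, 100, use_spot=True)
# autoscaling.create_launch_configuration(**lc)
# autoscaling.create_auto_scaling_group(**grp)
```

A min size of 0 lets the whole farm scale to nothing when the render queue is empty, which is the "disposable" part of the pitch.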
Amazon AppStream: DirectX and OpenGL rendering on GPU instances, streamed with YUV444 encoding.
Disposable Storage, Disposable Graphics Workstation: on demand → terminate
[Diagram: the same project stack (Amazon S3 bucket, scalable Lustre, virtual storage servers (cache), compositing/effects/rendering nodes, render farm manager, licensing server) with Amazon AppStream workstations attached.]
Create a Render Project:
• Launch Amazon AppStream with modeling software installed
• Connect to geographically diverse graphic artists
• Delete all the Amazon AppStream resources after the graphic artists are finished
Disposable Render Farm: performance
[Diagram: storage nodes, metadata nodes, virtual storage servers (cache), and render nodes, connected to Amazon S3 and Amazon EBS (GP2, 3000 IOPS at 256K) over AWS Direct Connect.]
• Virtual storage server cache: <No. of Storage Servers> × NIC throughput; each virtual storage server can push up to the NIC capacity of the instance it runs on
• Lustre: <No. of Lustre nodes> × NIC throughput; each node can saturate its link to EBS
• Amazon S3 scales up to saturating each node's NIC: N × 10 Gbps
• Up to 5 GB/s per storage node and up to 64 PB; up to 35K/s create and 100K/s stat metadata operations
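These figures compose with simple arithmetic: a 10 Gbps NIC tops out around 1.25 GB/s, and 3000 IOPS at a 256K I/O size comes to roughly 750 MiB/s per GP2 volume. A small sanity-check sketch, using the slide's numbers with a made-up node count:

```python
# Back-of-envelope throughput math from the slide's figures.

GBPS_TO_BYTES = 1e9 / 8  # 10 Gbps NIC ~ 1.25e9 bytes/sec

def aggregate_nic_throughput(num_nodes, nic_gbps=10):
    """Fleet throughput scales as <number of nodes> x NIC throughput."""
    return num_nodes * nic_gbps * GBPS_TO_BYTES  # bytes/sec

def gp2_volume_throughput(iops=3000, io_size=256 * 1024):
    """EBS GP2 as quoted: 3000 IOPS at a 256K I/O size."""
    return iops * io_size  # bytes/sec

# Four storage servers with 10 Gbps NICs:
fleet = aggregate_nic_throughput(4)   # 5e9 bytes/sec, i.e. 5 GB/s
per_volume = gp2_volume_throughput()  # 786,432,000 bytes/sec, ~750 MiB/s
```

The point of the math is that the farm's ceiling is the NIC, not the storage: both S3 and striped EBS can be scaled horizontally until each node's network link is the bottleneck.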
SEASON FINALE
On to a bigger project with a fancy VFX studio in the Cloud …
http://bit.ly/awsevals