Scaling an Extreme Temporary Event Network at Burning Man

TRANSCRIPT
[Photos: Terry Ratcliff; Reuters/Jim Urquhart (Black Rock City, NV); duncan.co]

[Network diagram: "2014 Burning Man IP Network" by Team I.T.S. (Backbone, Camera Girl, Cat, Domo, Huckleberry, Little Meat, MattStep, Mushroom, Prof. Fox, PornStar, Ralf, Reset, Sawdust, Spank Me, Taz, Whiskey Devil, and Wild Card); generated [email protected], 2014-10-02 18:10. The map shows the full playa topology: gateway routers (gw-noc, gw-depot, gw-light, gw-quad4, gw-esd, gw-box, gw-power, gw-heavy), licensed and unlicensed point-to-point radio links with their channels and frequencies, OSPF path costs (primary 100, secondary 150, tertiary 400), access points, and per-department VLANs (NOC LAN, Media Mecca, Ranger HQ, DMV, Playa Info, ESD, Commissary, and others). Legend: Unlicensed / Licensed / UTP Ethernet link types, OSPF weight, radio frequency, High Desert Internet transit.]
Where We Started
• The network worked, but it wasn't easy
– Large L2 bridged architecture, minimal L3 segmentation, multiple NAT layers
– Two distinct "business units"
– Manual configuration, "tribal knowledge"
– Numerous single points of failure
Where We Went
• Needed to operate as a unified team
– Consistent support experience, improved RF spectral efficiency, coordinated IP allocations
• Standardized COTS equipment
– "CCIE off the street" factor, escalation path
• Standardized service offerings
– Org department handoffs always wired GigE, as aggregated "islands" or a single demarc
– Participant camps supply very prescriptive equipment, "self-install" provisioning
Where We Went
• Route, always
– No L2 segments past a single device
– OSPF everywhere: core backbone & "islands"
– Segment where possible, even over WiFi
• Automation
– Initially covering all routers & switches
– Target goal: cover any device with a config or supplemental service (DNS, monitoring)
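The "route, always" design (OSPF on every device, no stretched L2, backup paths expressed as link costs) might look roughly like the following on a camp "island" router. This is an illustrative IOS-style sketch, not the team's actual configuration; the interface names, addresses, and area layout are assumptions.

```
! Hypothetical camp-edge router: L3 to the camp, no bridged segments upstream
interface GigabitEthernet0/0
 description Uplink to backbone (point-to-point radio)
 ip address 192.0.2.1 255.255.255.252
 ip ospf network point-to-point
 ip ospf cost 100          ! primary path; backup links would carry 150 / 400
!
router ospf 1
 passive-interface default               ! don't form adjacencies toward camps
 no passive-interface GigabitEthernet0/0
 network 192.0.2.0 0.0.0.3 area 0
 network 198.51.100.0 0.0.0.255 area 0   ! camp LAN, advertised not bridged
```

With every link routed, a failed radio shot simply reconverges onto the higher-cost path instead of taking a bridged island down with it.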
Automation!
• Held a bakeoff (mid-2011 evaluation)
– Homegrown YAML config templates
– Prototyped NCG (see NANOG 49 tutorial "Automating Network Configuration")
• NCG won (3 yrs ago)
– Open source, vendor agnostic
– Steep initial curve, then very easy to embrace
– Principal developer already a team member
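The "homegrown YAML config templates" option from the bakeoff boils down to expanding per-device data through text templates. A minimal dependency-free sketch (using a plain dict in place of a parsed YAML file, with made-up hostnames and fields) looks like this:

```python
from string import Template

# Hypothetical per-device record, as it might be parsed from a YAML file
device = {
    "hostname": "gw-quad4",
    "loopback": "162.212.145.249",
    "vlans": [(104, "BMIR"), (120, "Ranger HQ")],
}

BASE = Template(
    "hostname $hostname\n"
    "interface lo0\n"
    " ip address $loopback/32\n"
)
VLAN = Template(
    "interface br0.$vid\n"
    " description v$vid - $name\n"
)

def render(dev):
    """Expand one device record into a flat config blob."""
    out = [BASE.substitute(hostname=dev["hostname"], loopback=dev["loopback"])]
    for vid, name in dev["vlans"]:
        out.append(VLAN.substitute(vid=vid, name=name))
    return "".join(out)

print(render(device))
```

NCG generalizes the same idea: one structured description of the network, many rendered device configs, without hand-editing each box.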
Actual git example
Summary Overview
• {Automation} data modeling isn't easy
– Imagine all your inputs & outputs (device configs, DNS, monitoring, billing, etc.)
• Single source of truth
– Git, a wiki, fancy IPAM: choose what fits your organization's workflow, stress level, & budget
• Start at L8 and L1, meet in the middle
– People + physical layer = organic processes
– End goal is to be efficient, not to become a SW dev
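The "single source of truth" point can be sketched concretely: one device record drives every derived output, so DNS, monitoring, and configs can never disagree. The schema and domain below are invented for illustration.

```python
# One record per device drives every output (config, DNS, monitoring).
# Field names and the domain are illustrative, not the team's actual schema.
DEVICES = [
    {"name": "gw-noc",   "lo0": "162.212.145.252"},
    {"name": "gw-depot", "lo0": "162.212.145.254"},
]

def dns_zone(devices, domain="example.net"):
    """Forward A records derived from the same source of truth."""
    return [f"{d['name']}.{domain}. IN A {d['lo0']}" for d in devices]

def monitor_targets(devices):
    """Ping-check target list for monitoring, from the same input."""
    return [d["lo0"] for d in devices]

for record in dns_zone(DEVICES):
    print(record)
```

Whether the truth lives in Git, a wiki, or an IPAM matters less than having exactly one copy of it feeding every generator.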
Matt [email protected]
More... unconference?