Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure
Scott Chevalier, Jennifer M. Schopf
International Networks, Indiana University
Bloomington, IN 47408
{schevali, jmschopf}@indiana.edu

Kenneth Miller
Telecommunications & Networking Services, The Pennsylvania State University
University Park, PA 16802

Jason Zurawski
Energy Sciences Network, Lawrence Berkeley National Laboratory
Berkeley, CA 94720
Abstract—Today's science collaborations depend on reliable, high performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets.
The perfSONAR [11] framework facilitates network measurement, making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends.
This work explores low cost alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. We present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.
I. INTRODUCTION
Networks are essential to modern research and education [28]. Distance education requires stable network performance to facilitate audio and video. Research innovation relies on error-free bulk data movement and plentiful bandwidth.
Almost all current research collaborations depend on reliable, error-free networks in order to be successful. In fact, the average network user is generally unaware of the specifics of why a network experience may not go smoothly [30], but can detect deviation from their expectations [34].
Initiatives such as the National Science Foundation's Campus Cyberinfrastructure program [18] have brought new focus to the state of R&E networks in the United States. These programs have collectively invested $82 million via 170 awards in 46 states and territories [36]. The goal of these programs is to upgrade and rethink network architectures via the seminal work on the Science DMZ [23], as well as to encourage network monitoring using tools such as the perfSONAR Monitoring Framework [26] to better gauge network performance, locate problems, and bring them to faster resolution.
Ensuring end-to-end performance, i.e. performance as observed from the point of view of a network user, is complex even with intelligent tools due to the complexity of the path [32]. The environment features many layers [38] and different administrative domains, which can complicate the path and reduce overall performance. Debugging problems of this nature is equally challenging, and requires knowledge of a myriad of components: applications, communications protocols, computational hardware, and network hardware, to name several broad areas of focus.
Visibility into performance characteristics is crucial. In keeping with the theme of Metcalfe's Law, a monitoring infrastructure becomes more useful as the number of deployed instances grows [33]. However, potential deployments need to keep the costs associated with long-term operation low in order to be feasible for network operators. There is a need to ensure that the initial cost and long-term maintenance requirements of network measurement equipment remain low, and that the usability of the resulting framework remains high.
The difficulties in deploying and using network monitoring software are being addressed by the perfSONAR project, which has invested considerable resources into simplifying the task of software deployment. Early incarnations required building dedicated machines with a customized operating system. Recent improvements [12] now facilitate deployment via a series of software "bundles", one of which is specifically targeted toward use on "low cost" hardware offerings. The rationale is simple: if the software is easy to deploy and maintain on inexpensive resources, the number of these resources will grow and benefit the original deployment site as well as the community at large. Responding to community feedback, the project is addressing the desire for operation on devices with a price point of around $200 [21].
However, simplifying the software is only one part of the deployment issue. For large scale deployments with many test instances, we must ensure that the low cost resources used in this environment offer observations that are free of self-inflicted error and are designed to be free of internal bottlenecks. Additionally, the resources must be capable of continuous operation for a number of years; otherwise the investment, no matter how small it may be, will be wasted. We investigated whether single board technologies, also referred to as Small Form Factor (SFF) machines, employing the Micro, Mini, Nano, or Pico ITX motherboard technology, might be capable of being used for network measurement.

This manuscript has been authored by an author at Lawrence Berkeley National Laboratory under Contract No. DE-AC02-05CH11231 with the U.S. Department of Energy. The U.S. Government retains, and the publisher, by accepting the article for publication, acknowledges, that the U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes.
This paper presents several options with an evaluation to better understand the choices of available hardware for network performance measurement activities. Starting with a selection of hardware offerings, we compare cost, performance, maintenance, and overall usability when deploying a network measurement infrastructure. We describe our comprehensive study of perfSONAR operation on these devices in several pragmatic environments. We conclude with some preliminary guidance on purchasing and maintaining a deployment of inexpensive testing resources.
The rest of the paper proceeds as follows. Section II discusses similar measurement projects and how they relate to this experimentation. Section III covers network measurement preliminaries, and Section IV discusses possible deployment strategies. Section V describes our experiment plan as part of the SC15 SCinet [14]. Section VI offers commentary on the observed results after the deployment was tested within the SCinet [15] infrastructure. Section VII discusses the experience and outlines future work by the perfSONAR project and similar community efforts.
II. RELATED WORK
Deployment of network testing resources is a well-researched topic. In [20], the authors perform a comprehensive review of available technologies, many of which are targeted toward a broad deployment of dedicated devices. These projects share a common goal: to better understand network traffic from the point of view of an end user, as well as to locate and fix architectural bottlenecks. Many of these solutions are inexpensive, meeting at least some of the criteria that drove our current work. Some of these solutions are designed to be black boxes, with little programmatic interaction or insight into the underlying reasons for why the results of a test are as good, or as poor, as reported.
Many small node solutions are geared toward the home user, and are not designed for (or capable of) handling the higher speeds and requirements of the Research and Education (R&E) network infrastructure, which is our focus area. Few of these measure network throughput, which is required for our work; instead they focus on traceroutes or ICMP measurement data. These related projects include:
• BISmark [35] is a platform to perform measurements of Internet Service Provider (ISP) performance and traffic inside home networks. The device functions as a traditional broadband router, performing its normal functions in addition to periodic network performance measurements of throughput and latency.
• RIPE Atlas [19], [27] is a global network of probes that measure Internet connectivity and reachability, and is primarily deployed by home users to provide an understanding of the state of the commercial Internet (not R&E networks) in real time.
• NetBeez [6] is a product designed for network managers primarily interested in early fault detection and quick troubleshooting of networks, chiefly in a LAN, not a WAN, environment. Via broad deployment of small network monitoring agents at each office, it is possible to quickly detect and fix network and application issues at that scale.
• CAIDA deploys and maintains a globally distributed measurement infrastructure called Archipelago (Ark) [25], based on the Raspberry Pi platform [13]. This infrastructure is formed by distributing hardware measurement devices with geographical and topological diversity, but does not collect throughput data due to limitations in the hardware.
Each of these approaches offers an inexpensive platform, easy integration with a proprietary central management system, and the ability to collect a variety of measurements. An unfortunate downside is the lack of ability to federate instances that span different domains of control or to easily share results for visualization and analysis.
perfSONAR is designed to provide federated coverage of paths using common network tools that are accessible via a common API. It can help to establish end-to-end usage expectations. There are thousands of perfSONAR instances deployed worldwide, many of which are available for open testing of key measures of network performance. This global infrastructure helps to identify and isolate problems as they happen, making the role of supporting network users easier for engineering teams, and increasing productivity when utilizing network resources.
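As a sketch of what "accessible via a common API" can look like in practice, the following builds a query against a perfSONAR measurement archive (esmond) REST interface. The host name, path, and parameter names here are illustrative assumptions, not a definitive recipe for any particular deployment:

```python
# Hypothetical sketch: constructing a query URL for recent throughput
# results from a perfSONAR measurement archive. Host name, REST path,
# and parameter names are assumptions for illustration only.
from urllib.parse import urlencode

def archive_query_url(host, event_type="throughput", time_range=86400):
    """Return a query URL for one event type over a trailing time window."""
    params = {"event-type": event_type, "time-range": time_range}
    return f"https://{host}/esmond/perfsonar/archive/?{urlencode(params)}"

url = archive_query_url("ps.example.edu")
print(url)
```

Because every federated instance answers the same style of query, results from nodes in different administrative domains can be collected and compared by the same client code.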
It is desirable to adopt a solution that integrates easily with this global framework, knowing that we can use it to address local and remote performance observations. The perfSONAR approach differs from other frameworks because:
• Each probe is individually owned and operated;
• Federation of resources, within and between domains, is available by default;
• "Open" testing and data access policies may be set by the local sites;
• The software is designed to work on commodity hardware;
• There are several broad possibilities for use: end-users, network operators, and application developers.
By adopting perfSONAR as the measurement software, it is also possible to integrate into other real-time debugging frameworks such as OnTimeDetect [22], Pythia [29], or UNIS [24].
There is ongoing work examining the use of very small nodes, such as the Raspberry Pi [13] or BeagleBone [1], with perfSONAR distributions installed on them [37]. However, we focused on links that needed to be tested at close to 1 Gigabit per second (Gbps), which is beyond the capability of this type of hardware.
III. BACKGROUND AND METHODOLOGY
Fig. 1. Layers of Performance. [Diagram: two end hosts, each shown as a stack of Hardware, Operating System, and Application (iperf) layers, connected through their respective LAN networks across a WAN.]
To accurately measure the performance of the underlying network infrastructure, it is crucial to remove imperfections caused by the measurement device. This is traditionally done by ensuring that the measurement infrastructure is performing at peak efficiency in terms of both hardware and software. It is possible to compensate for experimental error that the test infrastructure introduces into the resulting measurement if enough information is available, although this adds significant complexity to the environment.
As part of the measurement functionality of perfSONAR, the tools can estimate an error range for latency and throughput. These calculations are based on the complete end-to-end picture, as shown in Fig. 1, and are not indicative of any one component. It is challenging to know precisely which factor on the end-to-end path (the end hosts, the software, the intermediate network devices, the protocols, etc.) is causing any form of error, but the error can be used as a confidence interval when evaluating the final result.
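As an illustration of how repeated observations become an error range, the following minimal sketch derives a mean and an approximate 95% interval from hypothetical throughput samples. The sample values are invented for illustration; perfSONAR's own estimate reflects the full end-to-end path, not this simplified calculation:

```python
# Minimal sketch: turn repeated throughput samples into a mean plus an
# error range. The sample values below are hypothetical.
from statistics import mean, stdev
from math import sqrt

samples_mbps = [941, 936, 944, 929, 939]  # hypothetical repeated test runs

m = mean(samples_mbps)
# Standard error of the mean; ~95% interval under a normal assumption.
se = stdev(samples_mbps) / sqrt(len(samples_mbps))
low, high = m - 1.96 * se, m + 1.96 * se
print(f"throughput = {m:.1f} Mbps, 95% interval ({low:.1f}, {high:.1f})")
```

A wide interval here flags an unstable measurement host or path segment even when the mean looks reasonable.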
Care has been taken to optimize the perfSONAR software platform at both the operating system and application layers to ensure that these are always operating at peak efficiency. Tools will always give more accurate measurements of network behavior if they are not bottlenecked by the measurement devices themselves, either hardware or software. Since the software product is designed to run on commodity hardware, the initial hardware choices have a large impact on the resulting measurement. In practice, the performance of the hardware is a function of its design and cost characteristics. We consider three classes of hardware: traditional servers, virtualized environments, and low cost hardware.
A. Server Class Hardware
In the world of computing, a "server" is often distinguished as a device that is capable of providing service to multiple clients simultaneously. The hardware used is often more powerful and reliable than standard personal computers, and thus capable of more intense activities over a longer period of time. Modern servers feature an architecture that can support one or multiple processors, with fast clock speeds, on a motherboard that can support communication with peripherals. The main memory is measured in Gigabytes and can support the needs of the operating system and concurrent service requests. Network interfaces may range in capacity from 1 to 100 Gbps.
The bottlenecks of this computing architecture occur in four common places:
• Processor Speed and Availability: A single stream of TCP can only be bound to a single processor core. The performance achieved with a software measurement tool will always be limited by the performance of the CPU. A system may have many other tasks to perform simultaneously in a multi-threaded environment; thus it is highly beneficial to have a CPU with a high clock speed and multiple cores available for system operation.
• Contention for the system bus: The system bus handles all communication between peripherals. If other devices are using this limited resource during a measurement, the background noise can impart additional error.
• Improper tuning of the Network Interface Card (NIC): Modern NICs feature on-board processors that can attempt to offload the task of network communication away from the main CPU. It is important to know the performance characteristics of the NIC, and how it will interact with the system as a whole.
• Memory Speed and Availability: Most network test tools are "memory" based testers, meaning they create, store, send, and receive directly from the main memory of a system. If the memory is slow in relation to the CPU, bus, or NIC, it will become a bottleneck in testing.
In most cases server class hardware is able to perform at or near "line rate", i.e. the maximum throughput given protocol overhead, due to the nature of the components. Tuning of the operating system and components can result in moderate gains over a standard configuration.
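The "line rate" ceiling can be made concrete with a short worked example for 1 Gigabit Ethernet at the standard 1500-byte MTU, counting Ethernet framing plus IPv4/TCP headers (with the common TCP timestamp option) as protocol overhead:

```python
# Worked example: maximum TCP goodput on 1 Gigabit Ethernet once
# per-packet protocol overhead is subtracted. Standard 1500-byte MTU.
LINK_BPS = 1_000_000_000
MTU = 1500
ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header+FCS, preamble, inter-frame gap
IP_TCP_HEADERS = 20 + 20 + 12    # IPv4, TCP, typical TCP timestamp option

payload = MTU - IP_TCP_HEADERS   # 1448 bytes of application data per packet
wire = MTU + ETH_OVERHEAD        # 1538 bytes occupied on the wire
goodput = LINK_BPS * payload / wire
print(f"max TCP goodput = {goodput / 1e6:.0f} Mbps")  # ~941 Mbps
```

This ~941 Mbps figure is the practical target a well-behaved 1 Gbps tester should approach; results well below it point at a host-side bottleneck rather than the link.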
B. Virtual Hardware
In computing, the act of "virtualization" refers to creating a virtual (rather than physical) version of computing components. This approach is not new: it has proliferated in computing since the 1960s, when large shared resources (i.e. mainframe computers) were divided up to support multiple users. Modern virtualization focuses on delivering clonable environments that are identical to a physical resource, emulating a complete hardware and software stack through the dedication of a small number of physical resources.
However, when used in a network measurement environment, virtualization can strongly affect the accuracy and stability of measurements, particularly those that are sensitive to environmental considerations on a host or operating system. perfSONAR was designed to "level the playing field" when it comes to network measurements by removing host performance from the equation as much as possible. The use of virtualized environments can introduce unforeseen complications into the act of measurement.
Fig. 2. Virtual Layers of Performance. [Diagram: as in Fig. 1, but the Application layer on each host is a hypervisor supporting multiple virtual machines, each adding its own Virtual HW, Virtual OS, and Application layers to the path.]
As shown in Fig. 2, additional layers are added to the end-to-end measurement. These additional layers impart several challenges:
• Time Keeping: Some virtualization environments implement clock management as a function of the hypervisor (i.e. the software that manages the environment) and the virtual machine communication channel, rather than using a stabilizing daemon such as NTP [31]. This can mean time skips forward, or backward, and is generally unpredictable for measurement use.
• Data Path: Network packets are timed from when they leave the application on one end until arrival at the application on the other end. The additional virtual layers add latency, queuing, and potentially packet reordering or transmission errors, to the calculation.
• Resource Management: Virtual machines share the physical hardware with other resources, and may be removed ("swapped") from physical hardware to allow other virtual machines to run as needed. This swapping of virtual resources can impart additional measurement error that may only be seen if a long-term view of the data is established and matched against virtualized environment performance.
These challenges show up in the resulting measurement and can often be difficult to fully account for, even with the availability of error estimation. For these reasons, virtual measurement platforms are not encouraged for the type of deployments we use to gain a full understanding of network performance.
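The time-keeping hazard can be checked for directly. A minimal sketch, assuming a POSIX-style host: it compares the adjustable wall clock against the monotonic clock, which a hypervisor or NTP step would not move, so a persistent change in their difference reveals a clock step that would corrupt latency measurements:

```python
# Sketch: detect a wall-clock step by watching the offset between the
# wall clock (adjustable) and the monotonic clock (never steps).
import time

def clock_offset():
    return time.time() - time.monotonic()

baseline = clock_offset()
time.sleep(0.1)                    # measurement interval (shortened here)
drift = clock_offset() - baseline  # ~0 on a stable host
stepped = abs(drift) > 0.5         # flag any jump larger than 500 ms
print(f"drift={drift:.6f}s stepped={stepped}")
```

On a stable physical host the drift stays near zero; in the problematic virtualized environments described above it can jump by whole seconds between samples.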
C. Low Cost Hardware
Thus far, two extremes have been presented: dedicated hardware capable of line rate performance that comes with a large investment in hardware and maintenance, and virtualized shared resources that are inexpensive to deploy and maintain but do not deliver on performance goals.
A third option is the use of single-board computers. These are not a new innovation; the first appeared at the dawn of personal computing [17]. Small Form Factor (SFF) personal computers, e.g. those smaller than Micro-ATX, became popular in the latter part of the 2000s. Utilizing commodity processors found in consumer electronics, such as cell phones, it was possible to construct small, cost-effective devices for common computing tasks such as serving media files or controlling stand-alone hardware and software tasks. Coupled with the release of Linux distributions compiled specifically for these computing architectures, these devices have proliferated [1], [3], [13].
Indeed, the SFF market has grown quickly, which is both a blessing and a curse. Currently it is possible to purchase a turn-key device for less than $200 that promises 1 Gbps network speeds and computing power similar to a PC. These devices offer dedicated resources, a step better than virtualization, but may feature some of the same types of bottlenecks as their larger server-class relatives, if not orders of magnitude worse. In particular, the shared system bus, single core processor with a slower clock speed, and limited memory footprint are all reasons for concern when it comes to network measurement.
There are many available options in this space given the current size and growth pattern, so it is not feasible to examine all of them. However, in the next section we detail guidance on the broad requirements that will lead to accurate and reliable network measurement activities and discuss several options.
IV. DEPLOYMENT SCENARIOS
When planning the deployment of a measurement framework, the most important factor is positioning measurement equipment along highly utilized paths. This principle holds true for deployments that span a continent, a region, or a campus. Having measurement equipment along critical junctions makes it more useful for ensuring performance or during a debugging exercise.
As an example of a continental scale network, the Energy Sciences Network (ESnet) [2] is a high-performance, unclassified network built to support scientific research. ESnet provides services to more than 40 Department of Energy (DoE) research sites, including the entire National Laboratory system, its supercomputing facilities, and its major scientific instruments. ESnet maintains measurement equipment that supports throughput and latency tests at ESnet points of presence (PoPs) as well as near the site network boundary of many DoE facilities, for a total of nearly 60 locations. The ESnet measurement resources are high-end servers. Each of the 60 measurement resources is connected to the ESnet network via a top-of-rack switch. The throughput measurement equipment is also connected via 10 Gbps fiber connections to the hub/site router at each PoP. The initial capital expenditure for measurement hardware totaled approximately $300,000. That number does not include ongoing support or refresh expenses.
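These figures frame the economics this paper explores. A small back-of-the-envelope comparison, using the approximately $200 SFF price point discussed earlier:

```python
# Back-of-the-envelope: ESnet's ~60 server-class measurement hosts cost
# roughly $300,000 in initial capital, versus a ~$200 SFF device per site.
server_total, server_sites = 300_000, 60
sff_unit = 200

per_site = server_total / server_sites  # $5,000 per server-class location
ratio = per_site / sff_unit             # server sites cost ~25x an SFF node
print(f"${per_site:,.0f} per server site, {ratio:.0f}x an SFF node")
```

Even before ongoing support and refresh costs, the per-site gap explains the interest in supplementing, not replacing, server-class testers with SFF nodes.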
Not all networks span an entire continent. KENET [5], the Kenya Education Network, is Kenya's national research and education network. KENET connects facilities throughout the country to the leading Internet exchanges. Working with partners International Networks at Indiana University (IN@IU) [4] and the Network Startup Resource Center (NSRC) [7], KENET researched and deployed components of an 11-node measurement infrastructure using a low-cost server solution, as shown in Fig. 3. The measurement equipment was designed at a lower price point, costing around $10,000, than an over-provisioned server, while remaining more stable than an SFF device.
Fig. 3. KENET Network
Networks that span a small physical area, e.g. a single campus or those joined via metro area connectivity, can also benefit from having a smaller scale monitoring infrastructure in place. At the Pennsylvania State University [16], the Real-Time Measurement (RTM) service continuously monitors and measures the University Enterprise Network (UEN) to identify issues and enhance performance [9]. Within each of the 23 campus locations there are numerous network PoPs. Each campus is home to a dedicated full scale server used to measure parameters that characterize network performance back to the main campus.
As a part of an NSF Campus Cyberinfrastructure award, the campus is working toward core network upgrades, resilient paths, and a Science DMZ, as shown in Fig. 4. The updated design features additional SFF devices at key intersections. These additional measurement resources will enable testing along any section of the path associated with end-to-end performance across the multi-campus system. This deployment highlights using the right resource in the right setting: full scale servers for the heavily used connections, and smaller, cheaper nodes to pull out additional information, with more flexibility, wherever needed. The SFF nodes may also be used to provide extra information on demand as part of the network debugging process.
Fig. 4. Penn State Network
V. QUANTIFICATION AND EXPERIMENTATION
To better understand the feasibility of deploying SFF measurement equipment in a network measurement infrastructure, an experiment was devised to validate several varieties during operation of a large scale network that mixed both enterprise and research traffic profiles. A set of criteria was established to compare the relative merits of each platform when running perfSONAR measurement components as they operated for approximately a week under typical working conditions.
A. Environment
The 27th annual International Conference for High Performance Computing, Networking, Storage, and Analysis (SC) was held in Austin, TX in November of 2015. SCinet is the conference's dedicated high performance research network. It is the fastest and most powerful network in the world, built by volunteer expert engineers from industry, academia, and government for the duration of SC: just over one week. SCinet network traffic peaked at more than 500 Gbps, supporting high-performance research demos, wireless traffic for over 6000 simultaneous wireless clients daily, and meeting room connections over 89 miles of optical fiber. This environment, pictured in Fig. 5, was chosen as a crucible for SFF testing due to the magnitude of the measurement challenge, along with the at-scale qualities of the network.
A total of 18 SFF resources were targeted for deployment locations within the SCinet infrastructure. These locations were chosen to cover several use cases:
• Near the demarcation of the SCinet network within the conference center;
• Within the core infrastructure, where all traffic traverses upon entry, egress, or transit; and
• Near key locations of congestion for the wireless and wired client connections.
We selected 4 types of SFF hardware to evaluate for deployment, all with a price point below $200. Note we did not include Raspberry Pi or equivalent equipment in this experimentation, as a design criterion was that each node would be able to test close to 1 Gbps of throughput. The four evaluated technologies were:
• 3 BRIX by GigaByte, model GB-BXBT-2807;
• 9 LIVA by ECS, model batMINI¹;
• 3 NUC by Intel, model NUC5CPYH; and
• 3 ZBOX by ZOTAC, model CI320 nano.
Each instance was configured to mimic a traditional perfSONAR testing resource built on top of a supported Linux platform. Of the initial 18 machines, 6 of the devices experienced issues during transit (likely due to poorly connected components coming loose from jostling) or during operation. Due to this, 12 machines were deployed and organized into a single "mesh" where each resource tested against all other nodes to provide a full set of measurements.
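A full mesh over the deployed nodes can be enumerated directly; a small sketch with hypothetical node names shows how the test load grows quadratically with mesh size:

```python
# Sketch: enumerate the directed (source, destination) test pairs in a
# full mesh of 12 nodes. Node names are hypothetical.
from itertools import permutations

nodes = [f"sff-{i:02d}" for i in range(1, 13)]
pairs = list(permutations(nodes, 2))  # every node tests to every other

print(len(pairs))  # 12 * 11 = 132 directed measurements per test cycle
```

The quadratic growth is why scheduling matters: the original 18-node plan would have required 306 directed tests per cycle, each needing exclusive use of the path to avoid contending throughput runs.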
B. Evaluation criteria
To fully evaluate the SFF performance, we considered several factors related to the physical qualities of the devices, their overall performance, and their ability to be easily integrated and maintained over a period of time. The following factors were evaluated for each:
• Usability
– Unit Cost: The total cost to purchase all components (case, power supply, board, processor, memory, storage, and basic peripherals such as networking). Some devices were sold "as is"; others required purchasing additional components.
– Operating System Support: The operating systems that are known to work for the hardware platform, along with any abnormalities with device support.
– Hardware Capabilities: The number of cores, along with the clock speed, of the Central Processing Unit; the amount of memory available; and the capacity of the NIC.
– Power Delivery: The mechanism for power delivery: external brick or enclosed in the device.
– Ease of Installation: A subjective evaluation of the process to assemble (if required), install, and configure each tester.
– Ease of Operation: A subjective evaluation of the process to use and maintain each tester.
• Performance
– Observed Throughput: Observed average throughput versus the maximum interface capacity.
– NTP Synchronization: Ability to measure time accurately and precisely.
– Device Stability: Does not introduce jitter or other systematic error into measurements.
These factors are not a panacea, but provide a useful metric for the feasibility of SFF as measurement infrastructure on deployments large and small.

¹This model is no longer in production, and has been replaced by newer X and X2 models.
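The "Device Stability" criterion can be quantified. One simple approach, a sketch of this paper's idea rather than the exact method used at SC15, and similar in spirit to interarrival jitter, averages the deviation between consecutive latency samples:

```python
# Sketch: jitter as the mean deviation between consecutive latency
# samples. Both sample series below are hypothetical.
def jitter(latencies_ms):
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

stable = [10.1, 10.2, 10.1, 10.3, 10.2]  # well-behaved tester
noisy = [10.1, 14.8, 9.2, 15.5, 10.0]    # tester adding systematic error
print(jitter(stable), jitter(noisy))
```

A device whose jitter dwarfs the path's own variation is reporting on itself, not the network, which is exactly the self-inflicted error this evaluation screens for.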
C. Usability results
Table I presents the results of the usability survey. Factors are rated between 1 and 3 stars, with 3 being the best. Several trends emerged during testing:
• The LIVA devices experienced the greatest number of issues during testing. They were only able to support a single operating system, which limited functionality. Driver support for peripherals, including an issue observed with SELinux, remained a problem during testing. Additionally, the LIVA required a larger external power source (delivered via a brick, although 5V USB is a possibility) and required that a keyboard and monitor be present at bootup, i.e. "headless" operation was not possible as with the other testers.
• The BRIX and NUC both required the use of a larger external power source (that caused plug blocking), but were stable and straightforward to assemble.
• The ZBOX had a standard grounded power cable andrequired no assembly.
D. Performance results
Table II presents the results of the performance survey. Factors are rated between 1 and 3 stars, with 3 being the best. Data analysis revealed several trends. Overall these devices, when operating and reporting data, showed fairly similar results and achieved acceptable throughput in aggregate. One BRIX and one NUC device reported lower average throughput. Investigation at the time of collection could not find a fault in the configuration of the device or network; thus the reason why these two individual machines reported lower performance remains unsolved.
As previously noted, there were many instances of mechanical and software failure for the tested devices. These included:
• Several LIVA devices were damaged during transit (in a padded bag for shipment) to the event and could not be …
Fig. 5. The SCinet network environment at the Austin Convention Center. [Floor-plan diagram covering the main floor (ExHalls 1-5, Ballrooms ABC, MRs 1-3), the 2nd floor (Meeting Rooms 4-10, Austin Suite, Admin), and the 3rd floor (Meeting Rooms 11-19, Ballrooms D/E/F/G).]
Ballroom D
Ballroom G
Ballroom F
Ballroom E19 A19 B 18 C18 D
11 A 11 B
12 A 12 B
13 A 13 B
14
Kitchens
17 A17 B18 A18 B
15
16 A
16 B
perfSONAR SFF NodePlacements marked w/
FPS
xxxxx
xxxxx
xxxx
xxxx
FreightElev.
ElevRotunda
RR
RR
RR RR
RR
RR
SER LL25 Located In Admin AirHandler Area
SER H113SouthElectric
L102IDF
SER M102 Located In Shop AirHandler Area
SER H203
SER A207
SER BM05 IncludesMain Floor And
Austin Suite Areas
SER HM02 IncludesMain Floor Areas
North Electric SER A106 Includes Mezzanine Areas
SER L102 Located In Central Elec
SER N Bllrm EFG, PreF, Lev2 Ofcs
SER U Bllrm D, PreF
SER Q PreF and Lev2 Ofcs
SER R MRs 13 - 18
SER V MRs 17-19, Kitchens
SER T MRs 11-14,18-19
H203IDF
HM02IDF
Austin Suite
West Pantry
Cedar RmMR 8C
CypressMR 8B
LiveOak MR 8A
Pine Rm MR 9A
Pecan Rm MR 9B
Hickory Rm MR 9C
Ash RoomMR 10A
Mulberry Rm MR 10B
Mezz 6
Mezz 7
Trinity
FSROfcs
Storage
Mezz 11
Mezz 10
Mezz 9
Mezz 8
BM05
Wes
t Mez
zani
ne R
oom
s
A/V Storage
STORAGE
Elev
South Mezzanine Rooms
Palazzo
Water RmMR 7 Lavaca Rm
MR 6BColorado Rm MR 6A
Pantry
BrazosMR 5C
SanJacinto MR 5B
South
TrinityMR 5A
NechesMR 4C
RedRiverMR 4B
SabineMR 4A
Audio Control
Mezz 5MIS
Offices
Mezz 2 Mezz 1AVW Ofcs
Mezz 12 Mezz 13 Mezz 14 Mezz 15 Mezz 16
EXHIBIT HALL 2
EXHIBIT HALL 3
EXHIBIT HALL 4
STORAGE
BALLROOM A
BALLROOM PREFUNCTION AREA
ROTUNDAPALAZZO
EXHIBIT HALL 1
BALLROOM B
BALLROOM C
EXHIBIT HALL 5
SERVICE YARD
Loading Dock
Load
ing
Doc
k
SHOPS
STORAGE
Kitchen AreasAramark Offices
PrepAreasServ
ice C
orridor
Waller Creek Terrrace
Elev.Projection Room
AUSTIN SUITE IDF located @ Mezz BM05
PayPhone532-2435 HC
98-101 150-153 154-15734-37 48-49F
38-41 42-45 102-105 106-109 158-161 162-165 166-16920-21F 36-37F 50-51F 52-53F
50-53 54-5746-49 110-113 114-117 170-175 174-17724-25F 38-39F 54-55F22-23F
58-61 62-6526-27F
118-121 122-12540-41F
178-181 182-18556-57F
186-18958-59F
28-29F70-73
42-43F130-133 190-193
60-61F194-19766-69 126-129
74-7730-31F
134-137 138-14144-45F
198-20162-63F
206-20964-65F
202-20578-81
218-22166-67F
214-217210-213142-14546-47F
146-14934-35F
86-8982-85 90-9332-33F
222-22568-69F94-97
33-36 69-7225-26F73-76 137-140
41-42F141-144105-108
17-18F37-40 41-44
27-28F77-80 81-84
33-34F109-112 145-148 149-152
47-48F113-116
45-4819-20F49-52 85-88
29-30F89-92 117-120
35-36F121-124 153-156
43-44F157-160 161-164
21-22F53-56 57-60
31-32F93-96 97-100
37-38F125-128 129-132
45-46F165-168 169-172
61-64 23-24F65-68 101-104 39-40F
133-136 173-176
1-2F1-4 49-52
25-26F101-10497-100 153-156
9-123-4F5-8 57-60
15-16F53-56 109-112
27-28F105-108 161-164
39-40F157-160
13-165-6F
17-2017-18F65-6861-64 113-116
29-30F117-120
41-42F169-172165-168 181-184
7-8F21-24
19-20F69-7225-28 73-76243-246
31-32F121-124 125-128
43-44F173-176 177-180
45-46F185-188
29-329-10F33-36 77-80
21-22F81-84 129-132
33-34F133-136 189-192
47-48F193-196
11-12F37-40
23-24F85-8841-44 89-92 201-204
49-50F197-200141-144
35-36F137-140
93-96 205-208145-148 37-38F149-152
13-14F45-48
54F215-216
53F213-214
52F211-212
51F209-210
55F217-218
69-72C19-20F
73-74C75-78C21-22F
65-68C17-18F
61-64C
15,16F57-60C
55-56C 41-44C
45-48C11-12F
13-14F51-54C 49-50C
9-10F35-38C
39-40C
25-28C
7-8F29-32C
33-34C
15-16C
5-6F21-24C
13-167-8F
17-209-10F
9-125-6F
5-83-4F
1-2F1-4
11-12F21-24
13-14F25-28
15-16F29-32
5-6F13-16
1-2F1-4
2F7-8
3F9-103F
5-6
4F226-227
4F11-12
3-4
1-2
1F5-6
5F7-8
6F9-10
7F11-12
8-9F13-15
12-13F19-21
14-15F22-23
16F24-25
17F26-27 18-19F
30-33
28-29
9-10
1-2F5-8
3-4F1-4
7-8
3-6 1-21-2F
15-169-10F
13-147-8F
11-125-6F
5F13-14
4F11-12
6-7F15-18
3F9-10
56F219-220
57F221-222
58F223-224
WCT 1-6 WCT 7-12
225-228C59-60F229-232C61-62F
233-236C63-64F237-240C65-66F
241-242C
10-11F16-18
27-28F87-90C
29-30F91-94C
95-96C31-32F
25-26F83-86C
23-24F75-78C
73,74C
79-82C 19-20F
71,72C
17-18F59-62C
55-58C13-14F
67-70C21-22F
63-66C
49-50C 43-44C
45-48C
9-10F33-36C51-54C
15-16F
39-42C11-12F
37-38C
29-32C
21-24C7-8F
19-20C5-6F
15-18C
11-12C
9-10C
1-4C25-28C 3-4F
5-8C1-2F
13-14C
15-16F
37-38C
31-32C13-14F
23-26C11-12F
21-22C
19-20C 17-18C
27-30C33-36C
27-28C
19-20C
23-24C
31-32C
25-26C
10-11F29-30C
8-9F21-22C
17-20C
3-4F11-14C
1-2F5-8C
9-10C 1-4C
14-15F37-40C
12-13F33-36C
A,B,C,D
CATV 12 CATV 11
CATV 13CATV 15
CATV 16
CATV 14
CATV 7 CATV 8 CATV 9 CATV 10
CATV 3
CATV 1 CATV 2
CATV 4
CATV 6 CATV 5
Rotunda Elev
Palazzo Elev
N.Trinity Elevs
4th StElevs
Mr 3Lone Star Room
Vending
Trinity Elev
Trinity StBox Office
NorthElec
Central Elec
Freight Elev
N.Trinity Elevs
4th StElevs
Mr 1Freedom
IndependenceMr 2
SouthElec
FHConc
Freight Elev
StaffElev
4th StElevs
N.Trinity Elevs
Sp244L/1130
WallPhoneH2/84
Sp421 H2/121
WallPhoneA2/96
Fig. 5. SC15 Showfloor
installed. Others were configured, but ceased to respond after a number of days. Investigation found that they had not booted properly due to the lack of a keyboard and monitor (i.e., “headless” operation was not functional).
• A BRIX device was configured before the event, but would not respond to credentials upon deployment. Memory corruption during transit was assumed.
• Two BRIX devices reported significant TCP packet loss when testing to one another, but little to no TCP packet loss when testing to all other nodes. Investigation at the time could not determine whether this was related to the path or to the configuration.
• Two ZBOX devices were never able to test to each other, but tested to all other devices without issue. Investigation was inconclusive regarding a cause, although configuration could not be ruled out.
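Several of the failures above were only noticed because a node silently stopped answering. A minimal liveness sweep over the management interfaces would have surfaced hung or unbooted units sooner. The sketch below is illustrative only; the node names and addresses are hypothetical, and it checks simple TCP reachability (e.g., to the SSH port) rather than any perfSONAR-specific service:

```python
import socket

def node_reachable(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    A refused or timed-out connection suggests the node has hung or
    never booted, as several of the headless SFF units did.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical inventory; replace with real management addresses.
NODES = {"brix-01": "10.0.0.11", "zbox-01": "10.0.0.12"}

def survey(nodes):
    # Map each node name to its current reachability status.
    return {name: node_reachable(addr) for name, addr in nodes.items()}
```

Run periodically (e.g., from cron), a report from `survey()` flags exactly the "configured but ceased to respond" condition described above.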
VI. UNDERSTANDING THE RESULTS
After evaluating the results in Section V, we identified several key findings regarding this hardware. Findings common to all devices are:
• Each of the small nodes may be inherently more fragile and cheaply constructed than a server, although the low price may outweigh the reduced durability.
• Hardware and operating system interaction remains challenging, due to rapid changes in design and support.
• Prices for the SFF devices the authors tested fluctuated widely in the weeks before and after purchase.
Model availability continues to expand, and there will always be newer, smaller, and faster devices on the market. This testing is intended to assist in selecting models, or even to help determine whether SFF nodes are the right choice for an organization.
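One way to make that selection concrete is to encode each candidate's attributes and filter on hard requirements. The sketch below is a hypothetical filter, not part of the study's methodology; the figures are the "as tested" costs and rounded throughput averages reported in Tables I and II, and the headless flag reflects the LIVA's boot limitation:

```python
# Hypothetical selection filter over the devices evaluated in this work.
DEVICES = [
    {"model": "BRIX", "cost": 200, "cores": 2, "mbps": 900, "headless": True},
    {"model": "LIVA", "cost": 160, "cores": 2, "mbps": 800, "headless": False},
    {"model": "NUC",  "cost": 200, "cores": 2, "mbps": 900, "headless": True},
    {"model": "ZBOX", "cost": 200, "cores": 4, "mbps": 900, "headless": True},
]

def candidates(devices, budget, min_mbps, require_headless=True):
    """Keep devices that meet the budget, throughput floor, and headless
    requirement, ordered by core count (a proxy for headroom), then cost."""
    keep = [d for d in devices
            if d["cost"] <= budget
            and d["mbps"] >= min_mbps
            and (d["headless"] or not require_headless)]
    return sorted(keep, key=lambda d: (-d["cores"], d["cost"]))
```

With a 900 Mbps floor, this filter drops the LIVA and ranks the quad-core ZBOX first, matching the qualitative conclusion of this section.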
Model | Cost                       | OS                          | Number of Cores | Power                 | Install            | Operation
BRIX  | $99 base; < $200 as tested | CentOS & Debian             | Dual-core       | Power brick or 5V USB | Requires tools     | As expected
LIVA  | $160                       | Debian only; driver issues  | Dual-core       | Power brick           | Snap together      | Boot issues; hardware failures
NUC   | $130 base; < $200 as tested | CentOS & Debian            | Dual-core       | Power brick           | Requires tools     | As expected
ZBOX  | $125 base; < $200 as tested | CentOS & Debian            | Quad-core       | Grounded cord         | Tool-free assembly | As expected

TABLE I: USABILITY RESULTS
Model | Throughput                                             | NTP                      | Stability
BRIX  | > 900 Mbps average; one unit reported 725 Mbps average | No NTP sync issues found | Two units reported significant TCP packet loss
LIVA  | > 800 Mbps average; found to be processor limited      | No NTP sync issues found | Several units inexplicably halted during operation or showed damage due to poor manufacturing
NUC   | > 900 Mbps average; one unit reported 845 Mbps average | No NTP sync issues found | No errors reported
ZBOX  | > 900 Mbps average                                     | No NTP sync issues found | No errors reported

TABLE II: PERFORMANCE RESULTS
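The per-unit averages in Table II come from aggregating many individual throughput tests and flagging units whose average falls well below the fleet norm. A minimal sketch of that aggregation, with hypothetical sample data (the paper does not publish its raw per-test measurements):

```python
from statistics import mean

def summarize(samples_by_unit, floor_mbps=850):
    """Average each unit's throughput samples (Mbps) and flag units whose
    average falls below a floor, like the BRIX unit averaging 725 Mbps."""
    report = {}
    for unit, samples in samples_by_unit.items():
        avg = mean(samples)
        report[unit] = {"avg_mbps": round(avg, 1), "flagged": avg < floor_mbps}
    return report

# Hypothetical samples from repeated throughput tests between node pairs.
samples = {
    "brix-01": [915, 925, 905],
    "brix-02": [720, 730, 725],  # the kind of outlier seen in Table II
}
```

The floor value is a tunable assumption; in practice it would be chosen relative to the link speed and the fleet-wide average.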
A. BRIX
The BRIX by Gigabyte was found to be a solid performer in the tests. The device supports both the CentOS and Debian operating systems. Different processor options are available, both Intel Celeron dual-core parts at either 1.58 GHz or 2.16 GHz. As tested, the unit retails for $99, and the required additional components brought the total to around $200. Despite the performance observations for certain units, this was found to be a good middle-of-the-road tester.
B. LIVA
The LIVA by ECS was the smallest, least expensive (at around $160), and least feature-rich device tested. The hardware was found to support only the Debian operating system, and in particular only one variant that included the specific driver required for its flash memory. Build quality was questionable, given the number of units that did not boot upon arrival or failed during operation. The inability to operate without a keyboard and monitor was considered a major operational flaw, and the underpowered processor limited its usefulness as a throughput tester. The device does have a low power draw of 15 W, which makes it a candidate for Power over Ethernet (PoE). It is recommended only for deployments that require an extremely low price point and need limited test capability.
C. NUC
The NUC by Intel was also found to be a solid performer in the tests, and a good value for the money. Like the BRIX described in Section VI-A, the NUC supported both the CentOS and Debian operating systems; it retails for $130, but with additional components the cost is around $200. Different processor options are available, both Intel Celeron dual-core parts at either 1.6 GHz or 2.16 GHz. The positive aspects of this tester were that it came from a well-known manufacturer and did not experience any unexplained mechanical failures.
D. ZBOX
The ZBOX by Zotac performed best across the board and featured the most conveniences, including several USB ports and no assembly required. Different processor options are available, both Intel Celeron quad-core parts at either 1.8 GHz or 2.16 GHz; this processor delivered the highest stability and measurement performance of all the testers. The device supported both the CentOS and Debian operating systems and retails for a base price of $125; including the required additional components raises the total to just under $200.
VII. CONCLUSION & FUTURE WORK
With the growing dependence on high performance networks to support collaborative research, there is a need for extended network monitoring to ensure good performance. Historically, this has been challenging due to both software deployment and hardware cost issues. The perfSONAR suite of tools offers a solution to the former problem, and in this paper we address possible approaches to the latter.
Our experience with Small Form Factor (SFF) technology emphasizes the need to consider many factors when selecting a test environment, including but not limited to cost, deployability, management, and measurement accuracy. Different settings will require emphasis on different aspects, but all of them affect the end goal: usefulness in debugging problems and ensuring performance.
In the pragmatic setting of SCinet, we evaluated SFFs including the BRIX, LIVA, NUC, and ZBOX, while acknowledging that the offerings in this space are numerous. We found that although the LIVA was slightly cheaper than the others, its need for a keyboard and monitor at boot time was very limiting during the actual deployments, and it supported only a single variant of Debian. LIVA machines also had numerous failures during the week. The BRIX units were more reliable, but two nodes had significant stability issues over the week. The NUC had some operational issues as well as variable throughput results. Overall, the ZBOX, with its stability and quad-core design (as well as tool-free assembly), seemed best suited for our environment.
Going forward, there are many additional hardware offerings currently available, and prices change even more rapidly. This evaluation is only the first of many, and other community efforts [8], [10] continue to test state-of-the-art technology offerings to answer questions about network performance.
VIII. ACKNOWLEDGMENTS
This material is based upon work supported by the National Science Foundation under grant no. 0962973.
The authors are grateful for the assistance provided by the perfSONAR collaboration. Their support and suggestions greatly enhanced this experimentation.
IX. DISCLAIMER
This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor the Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or the Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or the Regents of the University of California.
REFERENCES
[1] BeagleBoard. http://beagleboard.org/.
[2] ESnet - The Energy Sciences Network. http://www.es.net/about/.
[3] Intel NUC. http://www.intel.com/content/www/us/en/nuc/overview.html.
[4] International Networks at IU. http://internationalnetworking.iu.edu/.
[5] KENET - Kenya Education Network. https://www.kenet.or.ke/.
[6] NetBeez. https://netbeez.net/.
[7] Network Startup Resource Center. https://www.nsrc.org/.
[8] NTAC Performance Working Group. http://www.internet2.edu/communities-groups/advanced-networking-groups/performance-working-group/.
[9] Pennsylvania State University's Science DMZ Research Network. http://rn.psu.edu/design/.
[10] PERFCLUB - A perfSONAR User Group. http://perfclub.org/.
[11] perfSONAR. http://www.perfsonar.net/.
[12] perfSONAR 3.5 Release. http://www.perfsonar.net/release-notes/version-3-5-1/.
[13] Raspberry PI. https://www.raspberrypi.org/.
[14] SC: The International Conference for High Performance Computing, Networking, Storage, and Analysis.
[15] SCinet: The Fastest Network Connecting the Fastest Computers. http://sc15.supercomputing.org/scinet/.
[16] The Pennsylvania State University. http://www.psu.edu/.
[17] Build a Dyna-Micro 8080 Computer. Radio-Electronics, pages 33–36, May 1976.
[18] Campus Cyberinfrastructure - Data, Networking, and Innovation Program (CC*DNI). https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=504748, 2015.
[19] Vaibhav Bajpai, Steffie Jacob Eravuchira, and Jürgen Schönwälder. Lessons Learned From Using the RIPE Atlas Platform for Measurement Research. SIGCOMM Comput. Commun. Rev., 45(3):35–42, July 2015.
[20] Vaibhav Bajpai and Jürgen Schönwälder. A Survey on Internet Performance Measurement Platforms and Related Standardization Efforts. IEEE Communications Surveys and Tutorials, 17(3):1313–1341, 2015.
[21] E. Boyd, L. Fowler, and B. Tierney. perfSONAR: The Road to 100k Nodes. 2015 Internet2 Global Summit, perfSONAR: Meeting the Community's Needs, 2015.
[22] Prasad Calyam, Jialu Pu, Weiping Mandrawa, and Ashok Krishnamurthy. OnTimeDetect: Dynamic Network Anomaly Notification in perfSONAR Deployments. In MASCOTS 2010, 18th Annual IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, Miami, Florida, USA, August 17-19, 2010, pages 328–337, 2010.
[23] E. Dart, L. Rotman, B. Tierney, M. Hester, and J. Zurawski. The Science DMZ: A Network Design Pattern for Data-Intensive Science. In IEEE/ACM Annual SuperComputing Conference (SC13), Denver, CO, USA, 2013.
[24] Ahmed El-Hassany, Ezra Kissel, Dan Gunter, and D. Martin Swany. Design and Implementation of a Unified Network Information Service. In IEEE SCC, pages 224–231. IEEE Computer Society, 2013.
[25] M. Fomenkov and K. Claffy. Internet Measurement Data Management Challenges. In Workshop on Research Data Lifecycle Management, Princeton, NJ, July 2011.
[26] A. Hanemann, J. Boote, E. Boyd, J. Durand, L. Kudarimoti, R. Lapacz, M. Swany, S. Trocha, and J. Zurawski. perfSONAR: A Service-Oriented Architecture for Multi-Domain Network Monitoring. In International Conference on Service Oriented Computing (ICSOC 2005), Amsterdam, The Netherlands, 2005.
[27] Thomas Holterbach, Cristel Pelsser, Randy Bush, and Laurent Vanbever. Quantifying Interference Between Measurements on the RIPE Atlas Platform. In Proceedings of the 2015 ACM Conference on Internet Measurement Conference, IMC '15, pages 437–443, New York, NY, USA, 2015. ACM.
[28] W. Johnston, E. Chaniotakis, E. Dart, C. Guok, J. Metzger, and B. Tierney. The Evolution of Research and Education Networks and their Essential Role in Modern Science. Technical Report LBNL-2885E, Lawrence Berkeley National Laboratory, 2010.
[29] Partha Kanuparthy, Danny H. Lee, Warren Matthews, Constantine Dovrolis, and Sajjad Zarifzadeh. Pythia: Detection, Localization, and Diagnosis of Performance Problems. IEEE Communications Magazine, 51(11):55–62, 2013.
[30] Matt Mathis, John Heffner, and Raghu Reddy. Web100: Extended TCP Instrumentation for Research, Education and Diagnosis. SIGCOMM Comput. Commun. Rev., 33(3):69–79, July 2003.
[31] D. L. Mills. Internet Time Synchronization: The Network Time Protocol, 1989.
[32] J. H. Saltzer, D. P. Reed, and D. D. Clark. End-to-end Arguments in System Design. ACM Trans. Comput. Syst., 2(4):277–288, November 1984.
[33] Carl Shapiro and Hal R. Varian. Information Rules: A Strategic Guide to the Network Economy. Harvard Business School Press, Boston, MA, USA, 2000.
[34] Leigh Shevchik. The Bandwidth Dilemma: Exceeding Low Expectations. https://blog.newrelic.com/2012/01/30/the-bandwidth-dilemma-exceeding-low-expectations/, 2012.
[35] Srikanth Sundaresan, Sam Burnett, Nick Feamster, and Walter De Donato. BISmark: A Testbed for Deploying Measurements and Applications in Broadband Access Networks. In USENIX Annual Technical Conference. USENIX, June 2014.
[36] Kevin Thompson. CC*DNI and a Few Other Updates. http://www.thequilt.net/wp-content/uploads/KThompson_Quilt_Feb2016.pdf, 2016.
[37] Alan Whinery. UH SWARM: Dense perfSONAR Deployment With Small, Inexpensive Devices. 2015 Internet2 Global Summit, perfSONAR: Meeting the Community's Needs, 2015.
[38] H. Zimmermann. Innovations in Internetworking, chapter OSI Reference Model - The ISO Model of Architecture for Open Systems Interconnection, pages 2–9. Artech House, Inc., Norwood, MA, USA, 1988.