

ICTON 2013 Tu.D4.2

978-1-4799-0683-3/13/$31.00 ©2013 IEEE

Flow Controlled Scalable Optical Packet Switch for Low Latency Flat Data Center Network

Nicola Calabretta, Stefano Di Lucente, Jun Luo, Abhinav Rohit, Kevin Williams, and Harm Dorren
COBRA Research Institute, Eindhoven University of Technology
P.O. Box 512, 5600 MB Eindhoven, The Netherlands
e-mail: [email protected]

ABSTRACT
Bandwidth-hungry internet services like cloud computing, social networking and video sharing generate large volumes of packetized traffic within data centers (DCs). The inter-cluster communication bottleneck of the tree topology of current DCs causes fragmented pools of servers and high latency; this can be mitigated by employing a large port count (100s) optical packet switch (OPS) to flatten the DC network topology. However, the reconfiguration time of several OPS architectures with centralized control is port count dependent: scaling the port count increases the latency, and thus the buffers needed for storing the packets in a flow-controlled operation. We present numerical and experimental results that validate the operation of a flow-controlled optical packet switch cross-connect with distributed control and nanosecond packet switching/retransmission. Real-time operation of a random packet traffic generator with variable load, FIFO queue packet storage with buffer managers for packet retransmission, contention resolution and fast switch reconfiguration control has been implemented using an FPGA.
Keywords: optical packet switching, optical signal processing, label processor, in-band labels, optical switch, data center network, high performance computers.

1. INTRODUCTION
Emerging bandwidth-hungry internet services like cloud computing, social networking and video sharing generate large volumes of packetized traffic within data centers (DCs) [1]. Several research projects [2] attempt to eliminate the inter-cluster communication bottleneck of the tree topology of current DCs, which causes fragmented pools of servers and high latency, by employing a large port count (100s) optical packet switch (OPS) to flatten the DC network topology. The reconfiguration time of OPS architectures with centralized control [2] is port count dependent: scaling the port count increases the latency, and thus the buffers needed for storing the packets in a flow-controlled operation. As a result, the large port count OPS required to implement a flat data center network has not yet been realized.

In [4] we numerically investigated a novel modular WDM OPS architecture with highly distributed control for a flat inter-cluster data center network. Flow control with packet retransmission was considered when packet contentions occur. It turns out that the highly distributed control of the WDM OPS architecture allows parallel processing within 25 ns regardless of the port count. As a result, a small electronic buffer at each cluster is sufficient to provide inter-cluster communication with sub-microsecond latency, high throughput, and low packet loss for typical traffic loads [4]. Although an experimental demonstration of a WDM OPS architecture based on 1×4 optical switch modules employing an FPGA-based switch controller that performs label processing, contention resolution, and switch control was presented in [7], the operation and the performance of the WDM OPS architecture including flow control and packet retransmission have never been experimentally assessed.

In this work we experimentally investigate the performance of the WDM OPS with flow control and packet retransmission. The flow control functionality is implemented by buffering the packet labels instead of the packet payloads; packet payload retransmission is emulated in the experimental set-up. Real-time operation of a random packet traffic generator with variable load, FIFO queue packet storage with buffer managers for packet retransmission, contention resolution and fast switch reconfiguration control have been implemented using an FPGA. Experimental results show that a buffer capacity of only 16 packets guarantees a packet loss lower than 10^-5 in the system for input loads up to 0.5 and a fixed, slotted packet length of 1500 B.

2. SYSTEM OPERATION
Figure 1(a) shows the system under investigation. We study a time-slotted system, which operates as follows. In every time slot, 1500 B optical packet payloads on four distinct WDM channels (ch1, ch2, ch3 and ch4) are generated by the packet payload generator. An FPGA implements four traffic generators to create four labels (one for each channel) with uniformly distributed destinations, a 2^20-1 PRBS, and variable input load. The input load determines the probability of having a new packet in a given time slot; for example, an input load of 0.5 indicates that in each time slot there is a 50% probability of an incoming packet. Each label is stored in the FIFO queue (unless the queue is full, in which case the label is dropped and counted as lost). Each buffer manager transmits the stored label and simultaneously provides a gate signal (label and payload have the same length) to the optical gate of the corresponding channel for generating the payload. In this way each label is associated with a single packet payload. At the OPS node, the electrical label, which determines the packet destination, is processed by the FPGA label processor while the optical payload is transparently forwarded by the OPS to the destination port. If multiple packets contend for the same destination port in the same time slot, the contention solver selects (using a round-robin algorithm) only one packet, while the other ones are dropped and thus retransmitted. The contention solver reconfigures the optical module and sends a positive acknowledgement message (ACK) to the appropriate buffer managers for each packet that is successfully delivered. Each buffer manager removes the label from the queue in case of ACK; otherwise it retransmits the label and the payload by controlling the optical gates, emulating packet retransmission.
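
To make this time-slotted operation concrete, the following Python sketch models one reading of the scheme described above, assuming Bernoulli packet arrivals with probability equal to the input load, uniformly distributed destinations, a 16-label FIFO per input, and a per-output round-robin contention solver with ACK-based retransmission. All identifiers (simulate, BUFFER_CAPACITY, etc.) are illustrative assumptions and are not taken from the paper or the FPGA implementation.

import random

NUM_CHANNELS = 4        # ch1..ch4
NUM_OUTPUTS = 4         # output ports of the 1x4 OPS
BUFFER_CAPACITY = 16    # labels stored per buffer manager

def simulate(load, num_slots, seed=0):
    """Return per-channel (lost, generated) counts and the ch1 queue history."""
    rng = random.Random(seed)
    queues = [[] for _ in range(NUM_CHANNELS)]   # FIFO label queues (destination port per label)
    rr_pointer = [0] * NUM_OUTPUTS               # round-robin state of the contention solver
    lost = [0] * NUM_CHANNELS
    generated = [0] * NUM_CHANNELS
    ch1_history = []

    for _ in range(num_slots):
        # Traffic generators: with probability `load` a new label with a random
        # destination arrives on each channel; it is dropped (and counted as
        # lost) if the FIFO queue is full.
        for ch in range(NUM_CHANNELS):
            if rng.random() < load:
                generated[ch] += 1
                if len(queues[ch]) < BUFFER_CAPACITY:
                    queues[ch].append(rng.randrange(NUM_OUTPUTS))
                else:
                    lost[ch] += 1

        # Each buffer manager transmits its head-of-line label (and gates the payload).
        head = [q[0] if q else None for q in queues]

        # Contention solver: per output port, grant one contender (round robin);
        # the winner receives an ACK and its label leaves the queue, the losers
        # keep their labels queued and retransmit in a later time slot.
        for out in range(NUM_OUTPUTS):
            contenders = [ch for ch in range(NUM_CHANNELS) if head[ch] == out]
            if not contenders:
                continue
            start = rr_pointer[out]
            winner = min(contenders, key=lambda ch: (ch - start) % NUM_CHANNELS)
            rr_pointer[out] = (winner + 1) % NUM_CHANNELS
            queues[winner].pop(0)   # ACK received: remove label from the queue

        ch1_history.append(len(queues[0]))   # served packet not counted in the queue

    return lost, generated, ch1_history

In this model, as in the paper, a "retransmission" simply means that a losing label stays at the head of its queue and competes again in a later time slot.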

Figure 1. Experimental set-up (a) and photograph of the integrated 1x4 optical cross-connect (b).

3. EXPERIMENTAL SETUP AND RESULTS
The experimental set-up employed to demonstrate the WDM OPS architecture with flow control is shown in Fig. 1. We consider WDM packets at 40 Gb/s OOK NRZ (ch1 = 1548.1 nm, ch2 = 1551.4 nm, ch3 = 1554.5 nm, and ch4 = 1557.7 nm) with 300 ns payloads separated by a 30 ns guard time. Labels are generated and processed in the FPGA-based system controller according to the RF-tone labeling technique implemented in [6]. The label consists of two bits encoding the 4 addresses of the OPS output ports. The integrated 1×4 optical cross-connect module [7] shown in Fig. 1(b) integrates a broadcast stage (BS) based on a 1×4 power splitter and four wavelength selective switches (WSSs), each consisting of a cyclic AWG, four SOA-based optical gates and a 4×1 combiner. Each WSS selects only one channel at a time according to the control signals generated by the contention solver. More details about the operation of the switch are reported in [5].
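
The broadcast-and-select control can be illustrated with a small, hypothetical sketch (not the device driver): decode each two-bit label into an output port and, given the contention-solver grants, enable at most one SOA gate per WSS. The function names and the example values below are assumptions for illustration only.

from typing import Dict, List, Optional

NUM_CHANNELS = 4   # WDM input channels ch1..ch4
NUM_OUTPUTS = 4    # OPS output ports

def decode_label(bits: str) -> int:
    """Two label bits address one of the 4 OPS output ports, e.g. '11' -> port index 3."""
    return int(bits, 2)

def wss_gate_matrix(granted: Dict[int, Optional[int]]) -> List[List[bool]]:
    """granted[out] = winning channel index for output port `out`, or None.
    Returns gates[out][ch]: True if the SOA gate of channel `ch` inside the WSS
    of output port `out` is switched on (at most one True per output)."""
    return [[granted.get(out) == ch for ch in range(NUM_CHANNELS)]
            for out in range(NUM_OUTPUTS)]

# Example: ch1 and ch3 both carry label '11' (output port 4); the contention
# solver grants ch1, so only the ch1 gate of the WSS at output 4 is enabled.
dests = {0: decode_label("11"), 2: decode_label("11")}
granted = {out: None for out in range(NUM_OUTPUTS)}
granted[dests[0]] = 0
print(wss_gate_matrix(granted))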

Figure 2(a) shows the ch1 buffer queue time evolution in the first 150 time slots when the input traffic load of the 4 WDM channels is equal to 1, 0.7 and 0.5. Equivalent results, not reported, are obtained for the other 3 channel buffer queues, because a round-robin algorithm is employed in the contention solver. We considered a buffer capacity of 16 packets (labels). A packet is lost when the buffer is full and a new label is generated by the traffic generator. Notice that the served packet is not considered in the queue. When the load is 1, the buffer rapidly fills up and there are no free time slots to drain the buffer queue. When the load is 0.7, the buffer queue grows with the contentions and decreases when there is an empty time slot that can be used to successfully retransmit the label. When the load is 0.5, the buffer is never full in the time slots studied, suggesting that a buffer capacity of 16 is sufficient to avoid packet losses in the system. Figure 2(b) shows the packet loss performance of the system considering 10^9 time slots. Once again the figure refers to ch1; the equivalent results of the other channels are omitted. The curve shows that the system has a packet loss lower than 10^-5 for input loads up to 0.5.
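
Purely as an illustration of this trend (not a reproduction of the measured curves), the hypothetical simulate() sketch given after Section 2 can be swept over the input load; with far fewer time slots than the 10^9 used in the experiment, only the qualitative behaviour of the 16-packet buffer should be expected to match:

# Usage of the hypothetical simulate() sketch above; illustrative only.
for load in (0.3, 0.5, 0.7, 1.0):
    lost, generated, ch1_hist = simulate(load, num_slots=200_000, seed=1)
    loss_ratio = sum(lost) / max(1, sum(generated))
    print(f"load={load:.1f}  estimated loss={loss_ratio:.2e}  max ch1 queue={max(ch1_hist)}")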

In order to visualize and record the control signals and the packet time traces during the experiment, the FPGA-based system controller is set as follows. The traffic generator associated with ch1 provides a load equal to 1, so that in every time slot there is a new incoming packet on this channel. The traffic generators associated with ch2, ch3 and ch4 are programmed to provide an input load equal to 0.3. All the traffic generators are programmed to independently assign the labels according to a 14-time-slot periodic pattern; thus every 14 time slots the 4 different label patterns are repeated. Moreover, for this measurement the contention resolution algorithm is based on a fixed priority: packets on ch1 have the highest priority, then packets on ch2, and so on.
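
In terms of the simulation sketch after Section 2, this measurement only changes the winner selection from round robin to a fixed-priority rule; a minimal, hypothetical version:

from typing import List

def fixed_priority_select(contenders: List[int]) -> int:
    """Fixed-priority contention rule: the lowest channel index wins
    (ch1 beats ch2, ch2 beats ch3, and so on)."""
    return min(contenders)

assert fixed_priority_select([2, 0, 3]) == 0   # ch1 wins over ch3 and ch4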


Figure 2(c) shows an example of the two-bit label (in this case associated with the packets of ch1). A detailed description of this labeling and label processing technique can be found in [6]. The labels associated with the other channels are omitted for the sake of space; instead, the destinations of the optical WDM packets at the OPS input are highlighted in Fig. 2(d). Figure 2(d) shows the packet time traces associated with the periodic label pattern assigned to each WDM channel; they are reported to show the initial packet sequence on each WDM channel. The optical gate control signals generated by the buffer managers are shown in Fig. 2(e), while the generated optical WDM packets are shown in Fig. 2(f). Packets destined to output port 4 of the integrated 1×4 optical module are highlighted in this figure in order to illustrate the retransmission process. The load increment due to the retransmissions is evident: for example, ch3 (input load = 0.3) has a total load equal to 1 due to multiple retransmissions. Figure 2(g) shows the control signals generated by the contention solver that drive the SOAs of the WSS associated with output 4 of the integrated device. Figure 2(h) shows the time traces of the selected packets. Comparing Fig. 2(f), (g) and (h), it is evident that the contention solver performs as expected, giving higher priority to ch1, then ch2 and so on. BER curves in back-to-back, after switching and after wavelength conversion, taken while continuously switching one of the 4 WDM channels at a time, have already been reported in [7].

Figure 2: ch1 buffer queue (a) and ch1 packet loss (b); label bits of ch1 (c); input packets (d); load and retransmission control signals (e); input packets considering retransmissions (f); output 4 control signals (g); and output 4 packets (h).

4. CONCLUSIONS
In this paper we experimentally demonstrate the implementation of an OPS node for data center applications with packet flow control. We emulate packet retransmission and packet buffering, and control the OPS node, by using an FPGA-based system controller. We show that, with the studied OPS architecture with highly distributed control, an input buffer capacity of only 16 packets per input is sufficient to guarantee packet losses lower than 10^-5 for input loads up to 0.5.

REFERENCES
[1] S. Sakr et al., "A survey on large scale data management approaches in cloud environments," IEEE Com. Sur. & Tut., pp. 311-336, 2011.


[2] C. Kachris et al., "A survey on optical interconnects for data centers," IEEE Com. Sur. & Tut., pp. 1-16, 2012.

[3] L. A. Barroso et al., “The datacenter as a computer: An introduction to the design of warehouse-scale machines,” Synthesis Lectures on Computer Architectures 4(1), pp. 1-118, 2009.

[4] S. Di Lucente et al., “Scaling low-latency optical packet switches to a thousand ports,” JOCN, vol. 4, no. 9, 2012.

[5] S. Di Lucente et al., "FPGA-controlled integrated optical cross-connect module for high port-density optical packet switch," in Proc. ECOC 2012, Amsterdam, 2012.

[6] J. Luo et al., “Optical RF tone in-band labeling for large-scale and low-latency optical packet switches,” JLT, vol. 30, no. 16, 2012.

[7] A. Rohit et al., “Multi-path routing in a monolithically integrated 4x4 broadcast and select WDM cross-connect,” in Proc. ECOC 2011, Geneva, 2011.