
A new angle on light-sheet microscopy and real-time image processing

Bálint Balázs

Pázmány Péter Catholic University

Faculty of Information Technology and Bionics
Roska Tamás Doctoral School of Sciences and Technology

Advisor: Balázs Rózsa, M.D., Ph.D.

A thesis submitted for the degree of Doctor of Philosophy

2018

DOI:10.15774/PPKE.ITK.2018.005


Abstract

Light-sheet fluorescence microscopy, also called single plane illumination microscopy, has repeatedly proven its usefulness for long-term imaging of embryonic development. This thesis tackles two challenges of light-sheet microscopy: high-resolution isotropic imaging of delicate, light-sensitive samples, and real-time image processing and compression of light-sheet microscopy images.

A symmetric light-sheet microscope is presented, featuring two high numerical aperture objectives arranged at 120°. Both objectives are capable of illuminating the sample with a tilted light-sheet and detecting the fluorescence signal. This configuration allows for multi-view, isotropic imaging of delicate samples where rotation is not possible. The optical properties of the microscope are characterized, and its imaging capabilities are demonstrated on Drosophila melanogaster embryos and mouse zygotes.

To address the big-data problem of light-sheet microscopy, a real-time, GPU-based image processing pipeline is presented. Alongside its capability of performing commonly required preprocessing tasks, such as fusion of opposing views immediately during image acquisition, it also contains a novel, high-speed image compression method. This algorithm is suitable for both lossless and noise-dependent lossy image compression, the latter allowing for a significantly increased compression ratio without affecting the results of any further analysis. A detailed performance analysis of the different compression modes is presented for various biological samples and imaging modalities.


Contents

Abstract ii

List of Figures vi

List of Tables vii

List of Abbreviations viii

Introduction 1

1 Live imaging in three dimensions 3
1.1 Wide-field fluorescence microscopy 3
1.2 Point scanning methods 12
1.3 Light-sheet microscopy 16
1.4 Light-sheet imaging of mammalian development 23

2 Dual Mouse-SPIM 30
2.1 Microscope design concept 31
2.2 Optical layout 34
2.3 Optical alignment 39
2.4 Control unit 43
2.5 Validating and characterizing the microscope 46
2.6 Discussion 55

3 Image compression 57
3.1 Basics of information theory 57
3.2 Entropy coding 58
3.3 Decorrelation 62

4 GPU-accelerated image processing and compression 66
4.1 Challenges in data handling for light-sheet microscopy 66
4.2 Real-time preprocessing pipeline 67


4.3 B3D image compression 76
4.4 Discussion 88

5 Conclusions 90
5.1 New scientific results 90
5.2 Application of the results 92

Appendix 94
A Bill of materials 94
B Supplementary Tables 96
C Light collection efficiency of an objective 98
D 3D model of Dual Mouse-SPIM 99

References 101

Acknowledgements 113


List of Figures

1.1 Tradeoffs in fluorescence microscopy for live imaging 4
1.2 Excitation and emission spectrum of enhanced green fluorescent protein (EGFP) 4
1.3 Wide-field fluorescence microscope 6
1.4 Axial cross section of the PSF and OTF of a wide-field microscope 8
1.5 Airy pattern 9
1.6 Optical path length differences in the Gibson-Lanni PSF model 11
1.7 Resolution of a wide-field microscope 12
1.8 Basic optical components of a confocal laser scanning and confocal-theta microscope 14
1.9 Axial cross section of the PSF and OTF of a confocal laser scanning microscope 15
1.10 Basic concept of single-plane illumination microscopy 16
1.11 Different optical arrangements for light-sheet microscopy 17
1.12 Basic optical components of a SPIM 19
1.13 Light-sheet dimensions 21
1.14 DSLM illumination 22
1.15 Inverted light-sheet microscope for multiple early mouse embryo imaging 25
1.16 Imaging mouse post-implantation development 26
1.17 Imaging adult mouse brain with light-sheet microscopy 28

2.1 Dual view concept with high NA objectives 31
2.2 Lateral and axial resolution of a multi-view optical system 33
2.3 Dual Mouse-SPIM optical layout 35
2.4 The core unit of the microscope 36
2.5 Illumination branch splitting unit 38
2.6 Detection branch switching unit 40
2.7 Microscope control hardware and software architecture 45
2.8 Digital and analog control signals 45


2.9 Completed Dual Mouse-SPIM 47
2.10 Stability measurements of view switcher unit 49
2.11 Illumination beam profile 50
2.12 Simulated and measured PSF of Dual Mouse-SPIM 52
2.13 Combined PSF of 2 views 52
2.14 Maximum intensity projection of a Drosophila melanogaster embryo recording 54
2.15 Multi-view recording of a mouse zygote 54

3.1 Building the binary Huffman tree 61
3.2 Context for pixel prediction 64

4.1 Experiment sizes and data rate of different imaging modalities 67
4.2 Real-time image processing pipeline for multiview light-sheet microscopy 67
4.3 Operating principle of MuVi-SPIM 68
4.4 Multiview fusion methods for light-sheet microscopy 70
4.5 Classes of LVCUDA library 71
4.6 Applying mask to remove background 72
4.7 Comparison of 3D and 2D fusion of beads 73
4.8 GPU fused images of a Drosophila melanogaster embryo 75
4.9 Image contrast depending on z position for raw and fused stacks 75
4.10 B3D algorithm schematics 78
4.11 Options for noise-dependent lossy compression 80
4.12 Theoretical and measured increase in image noise for WNL compression 81
4.13 Compression performance 84
4.14 Image quality of a WNL compressed dataset 84
4.15 Compression error compared to image noise 85
4.16 Influence of noise-dependent lossy compression on 3D nucleus segmentation 85
4.17 Influence of noise-dependent lossy compression on 3D membrane segmentation 86
4.18 Influence of noise-dependent lossy compression on single-molecule localization 86
4.19 Change in localization error only depends on selected quantization step 87

C1 Light collection efficiency of an objective 98
D1 3D model of Dual Mouse-SPIM 100


List of Tables

3.1 Examples of a random binary code (#1) and a prefix-free binary code (#2) 59
3.2 Letters to be Huffman coded and their probabilities 60
3.3 Huffman code table 62

B1 Data sizes in microscopy 96
B2 Lossless compression performance 96
B3 Datasets used for benchmarking compression performance 97


List of Abbreviations

2D two-dimensional.
3D three-dimensional.

BFP back focal plane.

CCD charge coupled device.
CLSM confocal laser scanning microscopy.
CMOS complementary metal oxide semiconductor.
CPU central processing unit.

DCT discrete cosine transform.
DFT discrete Fourier transform.
DPCM differential pulse code modulation.
DSLM digitally scanned light-sheet microscopy.
dSTORM direct stochastic optical reconstruction microscopy.
DWT discrete wavelet transform.

eCSD electronic confocal slit detection.
EM-CCD electron multiplying CCD.

FEP fluorinated ethylene propylene.
FN field number.
FOV field of view.
FPGA field programmable gate array.
fps frames per second.
FWHM full width at half maximum.

GFP green fluorescent protein.
GPU graphics processing unit.

HEVC high efficiency video codec.

ICM inner cell mass.


JPEG joint photographic experts group.

LSFM light-sheet fluorescence microscopy.

MIAM multi-imaging axis microscopy.
MuVi-SPIM Multiview SPIM.

NA numerical aperture.

OPFOS orthogonal plane fluorescent optical sectioning.
OTF optical transfer function.

PEEK polyether ether ketone.
PSF point spread function.

RESOLFT reversible saturable optical fluorescence transitions.
ROI region of interest.

sCMOS scientific CMOS.
SMLM single molecule localization microscopy.
SNR signal-to-noise ratio.
SPIM single-plane illumination microscopy.
STED stimulated emission depletion.

TE trophectoderm.
TLSM thin light-sheet microscopy.

WNL within-noise-level.

YFP yellow fluorescent protein.


Introduction

Imaging techniques are among the most extensively used tools in medical and biological research. The reason for this is simple: visualizing something invisible to the naked eye is an extremely powerful way to gain insight into its inner workings. Our brain has evolved to receive and process a multitude of signals from various sensors, and arguably the most powerful of these is vision.

As a branch of optics, microscopy (from ancient Greek mikros, “small”, and skopein, “to see”) is based on observing the interactions of light with an object of interest, such as a cell. To make these interactions visible, the optics of the microscope magnifies the image of the sample, which can then be recorded on a suitable device. For the first microscopes in the 17th century, this was just an eye at the end of the ocular, and the recording was a drawing of the observed image [1].

Microscopy is a truly multidisciplinary field: even in its simplest form, using just a single lens, the principles of physics are applied to gain a deeper understanding of biology and nature. Today, microscopy encompasses most of the natural sciences and builds on various technological advancements. While physics and biology remain the main focus, the principles of chemistry (fluorescent molecules), engineering (automation), and computer science (image analysis) are all integrated in a modern microscopy environment.

Light-sheet fluorescence microscopy (LSFM), also called single-plane illumination microscopy (SPIM), is a relatively new addition to the arsenal of tools that comprise light microscopy methods, and is especially suitable for live imaging of biological samples, from within cells to entire embryos, over extended periods of time [2–5]. It is also easily adapted to the sample, allowing a large variety of specimens to be imaged, from entire organs [6] to the subcellular processes occurring inside cultured cells [7]. Due to its ability to bridge large scales in space and time, light-sheet microscopy can provide an unprecedented amount of information on biological processes. Despite its indubitable benefits, operating such a microscope can pose serious infrastructural challenges, as a single overnight experiment can generate tens of terabytes of data.
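The scale of this data problem follows from a back-of-the-envelope calculation. The camera resolution, frame rate, and experiment duration below are illustrative assumptions, not measurements from this thesis:

```python
# Illustrative estimate of light-sheet data rates (assumed camera specs,
# not figures from this thesis).
frame_px = 2048 * 2048      # assumed sCMOS sensor resolution
bytes_per_px = 2            # 16-bit pixels
fps = 50                    # assumed sustained acquisition rate

rate = frame_px * bytes_per_px * fps    # bytes per second
overnight = rate * 14 * 3600            # a hypothetical 14-hour experiment

print(f"{rate / 1e6:.0f} MB/s, {overnight / 1e12:.1f} TB overnight")
```

Even these modest assumptions land in the "tens of terabytes per night" regime quoted above, which is what motivates the compression work in Chapter 4.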

This work tackles two challenges in light-sheet microscopy: high-resolution live imaging of delicate samples, such as mouse embryos, and real-time image processing and compression of large light-sheet datasets. Before discussing the work in detail, Chapter 1


and Chapter 3 will give a short introduction to the concepts this thesis builds on. The basics of fluorescence microscopy, light-sheet microscopy, information theory, and image compression will be covered.

Chapter 2 is devoted to presenting a new light-sheet microscope, the Dual Mouse-SPIM, designed for isotropic imaging of light-sensitive specimens. This microscope is based on two identical objectives positioned at 120° as opposed to the conventional 90° orientation used in light-sheet microscopy, and it offers dual-view detection through both lenses. We discuss the benefits of this arrangement and the design principles and optical layout of the microscope. After characterizing the optical properties of the microscope, we demonstrate its imaging capabilities on various samples.

In Chapter 4 we present a GPU-based real-time image processing pipeline designed to efficiently handle large amounts of microscopy data. As 3D imaging is gaining more and more traction, image datasets are generated at a faster pace than ever. We present a real-time preprocessing solution for multiview light-sheet microscopy, and a new image compression algorithm that significantly reduces data size by taking image noise into account. The theory behind these methods will be discussed before demonstrating their capabilities and evaluating their performance on multiple biological samples.

Finally, Chapter 5 will give a summary of the new results presented in this thesis and discuss potential future applications.


Chapter 1

Live imaging in three dimensions

Live imaging is indispensable for understanding the processes at the interface of cell and developmental biology. In an ideal setting, the ultimate microscope would be able to record a continuous, three-dimensional (3D), multicolor dataset of any biological process of interest at the highest possible resolution. Due to several limitations in physics and biology this is not possible, and a compromise is therefore necessary. The diffractive nature of light, the lifetime of fluorescent probes, and the photosensitivity of biological specimens all require microscopy to be adapted so that the question at hand may be answered.

In order to acquire useful data, one has to choose a tradeoff between spatial and temporal resolution and signal contrast, while making sure the biology is not affected by the imaging process itself [8]. This challenge can be illustrated by a pyramid where each corner represents one of these criteria (Figure 1.1), while a point inside the pyramid corresponds to the imaging conditions. As soon as we try to optimize one condition, i.e., we move the point closer to one of the corners, it will move further from all the others due to the limited photon budget. In order to make a fundamental difference and move the corners of the pyramid closer together, a change in microscope design is necessary.

1.1 Wide-field fluorescence microscopy

Fluorescence microscopy [9, 10], as a subset of light microscopy, is one of the few methods that allow subcellular imaging of live specimens with specific labeling. The first use of the term fluorescence is credited to George Gabriel Stokes [11], and it refers to the phenomenon of light emission following the absorption of light or other electromagnetic radiation. As the name fluorescence microscopy suggests, this method collects fluorescent light from the specimens. Since biological tissues are usually not fluorescent, except for some autofluorescence mostly at shorter wavelengths, fluorescent dyes or proteins have to be introduced to the system in order to collect the necessary information. The advantage of this is that the signal of the labeled structures will be of very high


Figure 1.1: Tradeoffs in fluorescence microscopy for live imaging. Also called the “pyramid of frustration”. When optimizing the imaging conditions (red dot), a tradeoff has to be made between resolution, contrast, and imaging speed, while avoiding photodamage. One can only be improved at the expense of the others due to the limited photon budget of the fluorescent molecules. Adapted from [8].

Figure 1.2: Excitation and emission spectrum of enhanced green fluorescent protein (EGFP). Excitation spectrum in blue, emission spectrum in yellow. The separation between the two spectra is due to the Stokes shift, which is 19 nm for EGFP. Emission and excitation light can be separated by a long-pass filter at 500 nm. Data from [12].

ratio compared to the background.

A fluorescent molecule is capable of absorbing photons in a given wavelength range (its excitation spectrum) and temporarily storing their energy by having an electron in a higher energy state, i.e., in an excited state. This excited state, however, is not stable, and the electron quickly returns to the ground state while emitting a photon. The energies of the absorbed and emitted photons are not the same: energy loss occurs due to internal relaxation events, and the emitted photon has lower energy than the absorbed photon. This phenomenon is called the Stokes shift, or red shift, and can be exploited in microscopy to drastically increase the signal-to-noise ratio by filtering out the illumination light (Figure 1.2).
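The energy loss behind the Stokes shift can be made concrete with the photon energy relation E = hc/λ. A short sketch using the EGFP peak wavelengths (488 nm excitation, 507 nm emission, consistent with the 19 nm shift quoted for EGFP; physical constants rounded):

```python
# Photon energy E = h*c/lambda at the EGFP excitation and emission peaks.
h = 6.626e-34   # Planck constant (J*s), rounded
c = 2.998e8     # speed of light (m/s), rounded

def photon_energy_eV(wavelength_nm):
    """Photon energy in electronvolts for a wavelength given in nm."""
    return h * c / (wavelength_nm * 1e-9) / 1.602e-19  # J -> eV

e_ex = photon_energy_eV(488)   # absorbed photon
e_em = photon_energy_eV(507)   # emitted photon
print(f"excitation {e_ex:.3f} eV, emission {e_em:.3f} eV, "
      f"loss {e_ex - e_em:.3f} eV to internal relaxation")
```

The emitted photon carries roughly 0.1 eV less energy than the absorbed one; this is the energy dissipated internally before emission.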


1.1.1 Fluorescent proteins

Traditionally, synthetic fluorescent dyes were used to label certain structures in the specimens. Some of these directly bind to their target, while others can be used when conjugated to an antibody specific to the structure or protein of interest. A requirement of these methods is that the fluorescent label has to be added to the sample from an external source, and, in many cases, this also necessitates sample preparation techniques incompatible with live imaging, such as fixation [13].

The discovery of fluorescent proteins has revolutionized fluorescence microscopy. Since these molecules are proteins, they can be produced directly by the organism if the proper genetic modifications are performed. Even though this was a hurdle at the time of the discovery of the green fluorescent protein (GFP) [14], genetic engineering techniques have evolved since then [15]: not only has its gene been successfully integrated in the genome of a multitude of organisms [16–18], but many variants have also been engineered by introducing mutations to increase fluorescence intensity and to change the fluorescence spectrum to allow multicolor imaging [18–21]. The usefulness and impact of these proteins are so profound that in 2008 the Nobel Prize in chemistry was awarded to Osamu Shimomura, Martin Chalfie, and Roger Tsien “for the discovery and development of the green fluorescent protein, GFP” [22].

1.1.2 Wide-field image formation

By imaging fluorescently labelled specimens, a wide-field fluorescence microscope has the capability of discriminating illumination light from emitted fluorescent light due to the Stokes shift described in the previous section. The microscope’s operating principle is depicted in Figure 1.3.

Light from a source, typically a mercury lamp, is focused on the back focal plane of the objective to create even illumination of the sample. Before entering the objective, the light is filtered, so only the wavelengths that correspond to the excitation properties of the observed fluorophores are transmitted. Since the same objective is used for both illumination and detection, a dichroic mirror is utilized to decouple the illumination and detection paths. The emitted light is filtered again to make sure any reflected and scattered light from the illumination source is blocked, increasing the signal-to-noise ratio. Finally, light is focused by a tube lens to create a magnified image on the camera sensor.

This type of arrangement is called infinity-corrected optics, since the back focal point of the objective is at “infinity”, meaning that the light from a point source exiting the back aperture is parallel. This is achieved by placing the sample exactly at the focal point of the objective. Infinity-corrected optics have the advantage that they allow placing various additional optical elements in the infinity space (i.e., the space between the objective and the tube lens) without affecting the image quality. In this example such elements


Figure 1.3: Wide-field fluorescence microscope. The light source is focused on the back focal plane of the objective to provide an even illumination to the sample. Emitted photons are collected by the objective, and are separated from the illumination light by a dichroic mirror. Inset: Light collection of an objective lens. α: light collection half-angle; f: focal length; r: radius of aperture.

are the dichroic mirror and the emission filter.

The combination of the objective and the tube lens determines the optical magnification of the system, which is the ratio of the focal lengths of these lenses:

M = f_TL / f_OBJ. (1.1)

The final field of view (FOV) of the microscope depends on the magnification, the size of the imaging sensor (D), and the objective field number (FN, specified by the manufacturer; the diameter of the view field in the image plane):

FOV = min(D, FN) / M. (1.2)
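Equations 1.1 and 1.2 can be checked with representative numbers; the focal lengths and sensor size below are illustrative, not the specifications of the Dual Mouse-SPIM:

```python
# Magnification and field of view of an infinity-corrected microscope
# (Equations 1.1 and 1.2); all numbers are illustrative.
f_tl = 200.0    # tube lens focal length (mm)
f_obj = 8.0     # objective focal length (mm), i.e. a 25x objective
M = f_tl / f_obj                 # Equation 1.1

D = 13.3        # assumed sCMOS sensor diagonal (mm)
FN = 26.5       # assumed objective field number (mm)
fov = min(D, FN) / M             # Equation 1.2, in mm

print(f"M = {M:.0f}x, FOV = {fov * 1000:.0f} um")
```

Here the sensor diagonal, not the field number, is the limiting aperture, so the field of view is simply D divided by the magnification.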

Apart from the magnification, the most important property of the objective is the half-angle of the light acceptance cone, α (Figure 1.3, inset). This not only determines the amount of collected light, but also the achievable resolution of the system (see Section 1.1.3). This angle depends on the size of the lens relative to its focal length. In other


words, it depends on the aperture of the lens, which is why the expression numerical aperture (NA) is more commonly used to express this property of the objective:

NA = n · sinα, (1.3)

where n is the refractive index of the medium, which describes the light propagation speed in the medium relative to the speed of light in vacuum. For vacuum and air n = 1, for water n_H2O = 1.33, and for the commonly used optical glass BK7 n_BK7 = 1.52.

For small angles α, the following approximation holds: sinα ≈ tanα ≈ α. Thus, the numerical aperture can also be expressed as the ratio of the radius of the lens and the focal length:

NA ≈ n · r/f, when α ≪ 1. (1.4)
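The quality of the paraxial approximation in Equation 1.4 is easy to check numerically. A sketch comparing the exact and approximate forms, with an illustrative lens radius and focal length:

```python
import math

# Exact NA (Equation 1.3) vs. the paraxial approximation (Equation 1.4)
# for an illustrative lens in air (n = 1).
n = 1.0
r = 2.0      # assumed lens radius (mm)
f = 20.0     # assumed focal length (mm)

alpha = math.atan(r / f)            # light collection half-angle
na_exact = n * math.sin(alpha)      # Equation 1.3
na_approx = n * r / f               # Equation 1.4, valid when alpha << 1

print(na_exact, na_approx)
```

For this small half-angle (about 5.7°) the two values agree to better than one percent; for the high-NA objectives discussed later the approximation breaks down and Equation 1.3 must be used.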

1.1.3 Resolution of a wide-field microscope

The resolution of an optical system depends on the size of the smallest distinguishable feature on the image. A practical way of quantifying this is by measuring the smallest resolved distance, i.e., the minimum distance between two point-like objects at which the two objects can still be distinguished. This mainly depends on two factors: the NA of the objective, and the pixel size of the imaging sensor.

Even if the imaging sensor had infinitely fine resolution, it would not be possible to reach arbitrarily high resolution due to the wave nature of light and the diffraction effects that occur at the aperture of the objective. This means that, depending on the wavelength of the light, any point source will have a finite size on the image: it will be spread out, limiting the resolution. The shape of this image is called the point spread function, or PSF (Figure 1.4), as this function describes the behavior of the optical system when imaging a point-like source. This property of lenses was already discovered by Abbe in 1873 [23], when he constructed his famous formula:

δ = λ / (2 · NA), (1.5)

where δ is the smallest distance between two distinguishable features.

Another representation of the optical performance is the optical transfer function, or OTF (Figure 1.4), which is the Fourier transform of the PSF:

OTF = F(PSF). (1.6)

As this function operates in frequency space, it describes how the different spatial frequencies are affected by the system. The resolution can also be defined as the maximum of the support of the OTF, since this describes the highest frequency that is still


Figure 1.4: Axial cross section of the PSF and OTF of a wide-field microscope. Simulated PSF (left) and OTF (right) for a wide-field microscope with a water immersion objective (n = 1.33). NA = 1.1, λ = 510 nm. Intensity has been normalized relative to the maximum, and is visualized with different colors (see colorbar). For better visualization, the logarithm of the intensity is displayed for the PSF.

transmitted by the optical system. Any pattern with a higher frequency will be lost, and thus lies beyond the resolution limit. For circularly symmetric PSFs, the OTF will have real values. However, if this is not the case, the Fourier transform also introduces complex components.
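Equation 1.6 translates directly into a discrete Fourier transform. A minimal sketch using a synthetic Gaussian as a stand-in for a real PSF (a real PSF would come from a model such as the Born-Wolf model described below), which also illustrates the real-valuedness of the OTF for a symmetric PSF:

```python
import numpy as np

# OTF as the Fourier transform of the PSF (Equation 1.6), using a
# synthetic Gaussian stand-in for a real PSF.
x = np.arange(-128, 128)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))
psf /= psf.sum()                     # normalize total intensity to 1

# ifftshift moves the PSF center to the array origin before the FFT.
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))

# For this circularly symmetric PSF the OTF is real up to numerical
# error, and its DC component equals 1 for a normalized PSF.
print(np.abs(otf).max(), np.abs(otf.imag).max())
```

The shift bookkeeping matters: without centering the PSF at the origin first, the OTF picks up a spurious phase ramp and the imaginary part is no longer negligible.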

Abbe’s formula can be derived from the scalar theory of diffraction using a paraxial approximation (Fraunhofer diffraction, [24]). It is useful to define the following optical coordinates instead of the commonly used Cartesian coordinates x, y, and z:

v = (2πnr/λ0) · sinα,  u = (8πnz/λ0) · sin²(α/2), (1.7)

where r = √(x² + y²) is the distance from the optical axis, and α is the light collection angle as shown in Figure 1.3. In this system the intensity of the electric field in the focus of a lens is [25]:

H(u, v) = C0 · |∫₀¹ J0(vρ) · e^(−i·uρ²/2) · ρ dρ|², (1.8)

where C0 is a normalization constant, ρ = r/max(r) is the normalized distance from the optical axis, and J0 is the zeroth-order Bessel function of the first kind. This is also called the Born-Wolf PSF model, named after the original authors [24].
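The Born-Wolf integral of Equation 1.8 is straightforward to evaluate numerically. A sketch using simple midpoint quadrature, with the normalization constant chosen as C0 = 4 so that the focal intensity H(0, 0) = 1:

```python
import numpy as np
from scipy.special import j0

def born_wolf(u, v, n_steps=4000):
    """Born-Wolf PSF intensity H(u, v) of Equation 1.8, with the
    normalization constant C0 = 4 so that H(0, 0) = 1."""
    drho = 1.0 / n_steps
    rho = (np.arange(n_steps) + 0.5) * drho        # midpoint quadrature
    integrand = j0(v * rho) * np.exp(-0.5j * u * rho**2) * rho
    return 4.0 * np.abs(np.sum(integrand) * drho) ** 2

print(born_wolf(0.0, 0.0))    # focal intensity, 1 by construction
print(born_wolf(0.0, 3.83))   # near the first lateral minimum, ~0
```

Evaluating on a grid of (u, v) values reproduces axial cross sections like the simulated PSF of Figure 1.4.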

To determine the lateral resolution of the system, let’s substitute u = 0 as the axial optical coordinate, and evaluate Equation 1.8, which gives the intensity distribution in the focal plane:

H(0, v) = C0·|∫₀¹ J0(vρ)·ρ dρ|² = (2J1(v)/v)²,   (1.9)


Figure 1.5: Airy pattern. Airy pattern calculated in Matlab based on Equation 1.9.

where J1 is the first-order Bessel function of the first kind. This equation describes the famous Airy pattern (Figure 1.5), which will be the shape of the PSF in the focal plane. The width of this pattern defines the smallest resolvable distance, and although there are multiple definitions for this, the most commonly accepted is the Rayleigh criterion [24, 26]. It defines the resolution as the distance between the central peak and the first local minimum. As this lies at v = 3.83, the resolution can be expressed by substituting this value into Equation 1.7 and solving it for r:

δxy = r(v = 3.83) = 3.83·λ0/(2πn·sinα) ≈ 0.61·λ0/NA,   (1.10)

which is equivalent to Abbe’s original formula (Equation 1.5). The only difference is thescaling factor which is due to the slightly different interpretations of the width of theAiry disk as mentioned earlier.

Similarly, to calculate the intensity distribution along the axial direction, let’s substitute v = 0 into Equation 1.8:

H(u, 0) = C0·(sin(u/4)/(u/4))².   (1.11)

For this expression the first minimum lies at u = 4π. Converting back to Cartesian coordinates, the axial resolution can be expressed as:

δz = 2nλ0/NA².   (1.12)

So far we only considered a single, point-like emitter. As the intensity function describes how an optical system “spreads out” the image of a point, it is also called the Point Spread Function (PSF, Figure 1.4). In a more realistic scenario, however, the emitters are neither point-like nor single. Effectively, for every emitter the PSF is imaged onto the sensor, and the superposition of these creates the final image. In mathematical terms, this can be expressed as a convolution between the underlying fluorophore distribution of the object (O) and the PSF (H):

I(u, v) = O(u, v) ∗H(u, v). (1.13)

The effective result of this kind of diffraction-limited image formation is a blurred image with a finite resolution of δxy in the lateral direction and δz in the axial direction.
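The convolution picture of Equation 1.13 can be illustrated with a minimal sketch. The 2-D grid, the two-emitter object, and the Gaussian stand-in for the PSF are all illustrative assumptions (a real wide-field PSF would be the Airy pattern of Equation 1.9):

```python
# Diffraction-limited image formation as a convolution (Eq. 1.13):
# the recorded image is the object's fluorophore distribution blurred by the PSF.
import numpy as np
from scipy.signal import fftconvolve

# Object O: two point emitters on a 64x64 grid
obj = np.zeros((64, 64))
obj[32, 28] = obj[32, 36] = 1.0

# Stand-in PSF H: normalized 2-D Gaussian, sigma = 2 px
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

# I = O * H
image = fftconvolve(obj, psf, mode="same")

# A normalized PSF preserves total intensity; the point sources are spread out
print(image.sum())        # ~2.0
print(image.max() < 1.0)  # True: no pixel retains the original peak value
```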

The PSF is further affected by the illumination pattern as well. Since the number of emitted fluorescence photons is roughly proportional to the illumination intensity, the illumination pattern has an effect on the overall PSF of the system, which can be expressed as:

Hsys = Hill ·Hdet, (1.14)

where Hill is the point spread function of the illumination, and Hdet is the point spread function of the detection.

1.1.4 Simulating the point spread function

To estimate the performance of a microscope, it is useful to simulate its point spread function. Usually this entails the numerical evaluation of the diffraction integral near the focal point. The most commonly used model for the PSF is the Born-Wolf model, which we already introduced in Equation 1.8. The only parameters of this model are the wavelength of the light (λ), the numerical aperture (NA), and the refractive index of the immersion medium (n). This model, however, only accounts for an ideal image-forming system without any aberrations.
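The Born-Wolf integral of Equation 1.8 can be evaluated directly by quadrature. A minimal Python sketch, not a substitute for the dedicated tools discussed below; the normalization C0 is chosen here so that H(0, 0) = 1:

```python
# Direct numerical evaluation of the Born-Wolf diffraction integral (Eq. 1.8).
import numpy as np
from scipy.integrate import quad
from scipy.special import j0


def born_wolf(u, v):
    """|C0 * int_0^1 J0(v*rho) * exp(-i*u*rho^2/2) * rho drho|^2, normalized
    so that the on-axis, in-focus value is 1."""
    re, _ = quad(lambda rho: j0(v * rho) * np.cos(u * rho**2 / 2) * rho, 0, 1)
    im, _ = quad(lambda rho: j0(v * rho) * np.sin(u * rho**2 / 2) * rho, 0, 1)
    # The bare integral equals 1/2 at u = v = 0, so divide by (1/2)^2
    return (re**2 + im**2) / 0.25


print(born_wolf(0, 0))       # 1.0 at the focal point
print(born_wolf(0, 3.8317))  # ~0: first minimum of the Airy pattern
```

In the focal plane (u = 0) this reproduces Equation 1.9, since ∫₀¹ J0(vρ)ρ dρ = J1(v)/v.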

For some experiments it is necessary to use the objectives in a slightly different environment than the original design conditions. Changes in the medium refractive index or coverslip thickness can lead to spherical aberrations, which are not accounted for in the Born-Wolf PSF model. The Gibson-Lanni model [27] gives a more general approach: it accounts for any differences between the experimental conditions and the design parameters of the objective, and thus it can simulate any possible aberrations that may arise. The additional adjustable parameters are the thicknesses (t) and refractive indices (n) of the immersion medium (i), coverslip (g), and sample (s) (Figure 1.6).

Figure 1.6: Optical path length differences in the Gibson-Lanni PSF model. The optical path length difference is given by OPD = [ABCD] − [PQRS], where [ABCD] is the optical length of the experimental path (red), and [PQRS] is the optical length of the design path (blue). Adapted from [28].

Based on the parameters of the system, p = (NA, n, t), where n = (ni, n*i, ng, n*g, ns) represents the refractive indices and t = (ti, t*i, tg, t*g, ts) represents the medium thicknesses, the optical path difference can be calculated as:

OPD(ρ, z; zp, p) = (z + t*i)·√(ni² − (NA·ρ)²) + zp·√(ns² − (NA·ρ)²) − t*i·√((n*i)² − (NA·ρ)²) + tg·√(ng² − (NA·ρ)²) − t*g·√((n*g)² − (NA·ρ)²),   (1.15)

where ρ = r/max(r) is the normalized distance from the optical axis, z is the axial coordinate of the focal plane, and zp is the axial location of the point source in the specimen layer. Then, the intensity near the focus can be expressed as:

H(u, v) = C·|∫₀¹ J0(vρ)·e^(i·W(ρ,z;zp,p))·ρ dρ|²,   (1.16)

where the phase term W(ρ, z; zp, p) = (2π/λ)·OPD(ρ, z; zp, p) and C is a constant complex amplitude.

Multiple software packages offer numerical evaluations of Equation 1.8 and Equation 1.16. The ones extensively used in this thesis are the PSF Generator plugin [29] for Fiji [30], and MicroscPSF [28], which is implemented in Matlab. PSF Generator supports both PSF models, while MicroscPSF supports the Gibson-Lanni model. The latter implementation has the advantage of being much faster due to the Bessel series approximation of the integral term; however, it only calculates the axial cross-section of the PSF. In contrast, PSF Generator evaluates the integral for any specified 3D volume around the focal point.


Figure 1.7: Resolution of a wide-field microscope. Axial (blue) and lateral (red) resolutions of a wide-field microscope are shown with respect to the numerical aperture (NA), together with their axial/lateral ratio. Resolutions are calculated with λ = 510 nm, the emission maximum of GFP, and n = 1.33, the refractive index of water, for water-dipping objectives.

1.2 Point scanning methods

In most cases, a wide-field microscope is used to image a layer of cultured cells or a sectioned sample, thus axial resolution is not a concern. Imaging live specimens, however, is not so straightforward, as these samples are usually much thicker than a typical section. For these samples 3-dimensional (3D) imaging is highly beneficial, which necessitates the use of optical sectioning instead of physical sectioning to be able to discriminate the features at different depths.

Due to the design of the wide-field microscope, any photons emitted from outside the focal plane will also be detected by the sensor; however, as these do not originate from the focus, only a blur will be visible. This blur potentially degrades image quality and signal-to-noise ratio to such an extent that imaging thick samples becomes very difficult, if not impossible, in a wide-field microscope.

Evaluating Equations 1.10 and 1.12 for a range of possible numerical apertures reveals the significant differences in lateral and axial resolution for any objective (Figure 1.7). Especially for low NAs this can be significant, a factor of ∼20 difference. For higher (>0.8) NAs the axial resolution increases faster than the lateral; however, they will only be equal when α = 180°. This means that isotropic resolution with a single lens is only possible if the lens collects all light emitted from the sample, which seems hardly possible, and would be highly impractical. For commonly used high-NA objectives the lateral to axial ratio will be around 3–6.
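The trend of Figure 1.7 can be reproduced directly from Equations 1.10 and 1.12. A sketch with the same λ and n as the figure; note that the axial/lateral ratio, 2n/(0.61·NA), is independent of the wavelength:

```python
# Lateral (Eq. 1.10) and axial (Eq. 1.12) resolution versus NA for a
# water-dipping objective (n = 1.33, lambda = 510 nm).
n, lam = 1.33, 510e-9


def lateral(na):
    return 0.61 * lam / na


def axial(na):
    return 2 * n * lam / na**2


for na in (0.3, 0.8, 1.1):
    ratio = axial(na) / lateral(na)
    print(f"NA = {na}: axial/lateral ratio = {ratio:.1f}")
```

For NA = 0.2 the ratio exceeds 20, matching the low-NA regime described above, and it drops to about 4 at NA = 1.1.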


Instead of using a single lens to achieve isotropic resolution, it is more practical to image the sample from multiple directions to complement the missing information from different views. When rotating the sample by 90°, for example, the lateral direction of the second view will correspond to the axial direction of the first view. If rotation is not possible, using multiple objectives can also achieve similar results, such as in the case of Multi-Imaging Axis Microscopy (MIAM) [31, 32]. This microscope consisted of 4 identical objectives arranged in a tetrahedral fashion to collect as much light as possible from multiple directions and provide isotropic 3D resolution, albeit at the expense of extremely difficult sample handling, since the sample was surrounded by objectives from all directions.

1.2.1 Confocal laser scanning microscopy

Confocal laser scanning microscopy (CLSM) [33, 34] addresses most of the problems of wide-field microscopy we mentioned in the previous section. It is capable of optical sectioning by rejecting out-of-focus light, which makes it a true 3D imaging technique. Furthermore, the light rejection also massively reduces out-of-focus background and increases contrast.

This is achieved by two significant modifications compared to the wide-field optical path. To be able to reject the out-of-focus light, an adjustable pinhole is placed at the focus of the tube lens. Light rays originating from the focal point meet here, and are able to pass through the pinhole. However, out-of-focus light converges either before or after the aperture, and thus the aperture blocks these rays. To maximize the fluorescence readout efficiency for the single focal point, a photomultiplier tube is used instead of an area sensor (Figure 1.8a).

As only a small focal volume is detected at a time, the illumination light is also focused here by coupling an expanded laser beam through the back aperture of the objective. This not only increases illumination efficiency (since other, not detected points are not illuminated), but has the added benefit of increasing the resolution as well. This is due to the combined effect of the illumination and detection PSFs, as described in Equation 1.14 (Figure 1.9). For Gaussian-like PSFs, the final resolution (along a single direction) can be calculated in the following way:

1/δsys² = 1/δill² + 1/δdet²,   (1.17)

where δill and δdet are the resolutions for the illumination and detection, respectively. Since the same objective is used for both illumination and detection, and the difference


Figure 1.8: Basic optical components of a confocal laser scanning and confocal-theta microscope. Both types of microscopes use confocal image detection, which means that a pinhole is used to exclude light coming from out-of-focus points. Light intensity is measured by a photomultiplier for every point in the region of interest. The final image is generated on a computer using the positions and recorded intensity values. A regular confocal microscope (a) uses the same objective for illumination and detection, while a confocal-theta microscope (b) uses a second objective that is rotated by θ around the focus. In this case, θ = 90°.

in wavelength is almost negligible, δill = δdet = δ, the final system resolution will be:

δsys = (1/√2)·δ.   (1.18)

This means that the distinguishable features in a confocal microscope are ∼0.7 times smaller than in a wide-field microscope using the same objective.
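Equations 1.17 and 1.18 amount to a one-line calculation; a small sketch, where the 283 nm wide-field input is only an illustrative value (roughly Equation 1.10 at λ = 510 nm, NA = 1.1):

```python
# Combining illumination and detection resolutions (Eq. 1.17); for equal
# Gaussian-like PSFs the system resolution improves by 1/sqrt(2) (Eq. 1.18).
import math


def system_resolution(d_ill, d_det):
    """Solve 1/d_sys^2 = 1/d_ill^2 + 1/d_det^2 for d_sys."""
    return 1.0 / math.sqrt(1.0 / d_ill**2 + 1.0 / d_det**2)


d = 283e-9                          # illustrative wide-field lateral resolution
print(system_resolution(d, d))      # d / sqrt(2), ~200 nm
print(system_resolution(d, d) / d)  # ~0.707, the confocal improvement factor
```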

Because of the different detection method in a confocal microscope, direct image formation on an area sensor is not possible, since at any given time only a single point is interrogated in the sample. Instead, it is necessary to move the illumination and detection point in synchrony (or, in a simpler albeit slower solution, to move the sample) to scan the entire field of view. The image is later computationally reconstructed by recording the fluorescence intensity of every point in the field of view and displaying these values as a raster image.


Figure 1.9: Axial cross section of the PSF and OTF of a confocal laser scanning microscope. Simulated PSF and OTF for a laser scanning confocal microscope with a water immersion objective (n = 1.33), NA = 1.1, λ = 510 nm. Intensity has been normalized relative to the maximum and is visualized with different colors (see colorbar). For better visualization, the logarithm of the intensity is displayed for the PSF.

1.2.2 Variants of confocal microscopy

Although confocal microscopy already has 3D capabilities, its axial resolution is still limited compared to the lateral, since it uses only one objective. An alternative realization of the confocal microscope, the confocal theta microscope [35], introduces a second objective to the system that is used to illuminate the sample (Figure 1.8b). Since this decouples the illumination and detection, using a dichroic mirror is no longer necessary. The second objective is rotated by θ around the focus; this is where the name of this setup originates from.

As in the case of standard confocal microscopy, the system PSF is improved by the illumination pattern. Here, however, the axial direction of the detection coincides with the lateral direction of the illumination, which results in a dramatic improvement of axial resolution compared to standard confocal microscopy. Lateral resolution will also be increased, but to a smaller extent, resulting in an almost isotropic PSF and nearly equal axial and lateral resolutions. Although this is a big improvement to confocal microscopy in terms of resolution, this technique did not reach widespread adoption, as it complicates sample handling while still suffering from two drawbacks of confocal microscopy that limit its live imaging capabilities, namely phototoxicity and imaging speed.

One improvement to address these drawbacks is the use of a spinning disk with a specific pattern of holes (also called a Nipkow disk) to generate multiple confocal spots at the same time [36]. If these spots are far enough from each other, confocal rejection of out-of-focus light can still occur. As the disk is spinning, the hole pattern sweeps the entire field of view, eventually covering all points [37]. The image is recorded by an area detector, such as a CCD or EM-CCD, which speeds up image acquisition [38].


Figure 1.10: Basic concept of single-plane illumination microscopy. The sample is illuminated from the side by laser light shaped into a light-sheet (blue). This illuminates the focal plane of the detection lens, which collects light in a wide-field mode (yellow). The image is recorded, and the sample is translated through the light-sheet to acquire an entire 3D stack.

1.3 Light-sheet microscopy

A selective-plane illumination microscope (SPIM) uses a light-sheet to illuminate only a thin section of the sample (Figure 1.10). This illumination plane is perpendicular to the imaging axis of the detection objective and coincides with the focal plane. This way, only the section in focus is illuminated, thus providing a much better signal-to-noise ratio. In conventional wide-field fluorescence microscopy, where the whole specimen is illuminated, out-of-focus light contributes a significant background. With selective-plane illumination, this problem is intrinsically solved, and it also provides true optical sectioning capability. This makes SPIM especially suitable for 3D imaging.

The main principle behind single-plane illumination microscopy, that is, illuminating the sample from the side by a thin light-sheet, dates back to the early 20th century, when Siedentopf and Zsigmondy first described the ultramicroscope [39]. This microscope used sunlight as an illumination source, guided through a precision slit to generate a thin light-sheet. This allowed Zsigmondy to visualize gold nanoparticles floating in and out of the light-sheet by detecting the light scattered from the particles. Since these particles were much smaller than the wavelength of the light, the device was called an ultramicroscope. His studies with colloids and the development of the ultramicroscope led Zsigmondy to win the Nobel Prize in 1925.

After Zsigmondy, this method was forgotten until rediscovered in the 1990s, when Voie et al. constructed their Orthogonal-plane Fluorescent Optical Sectioning (OPFOS) microscope [40]. They used it to image a fixed, optically cleared and fluorescently labelled guinea pig cochlea. In order to acquire a 3D dataset, the sample was illuminated from the side with a light-sheet generated by a cylindrical lens, then rotated around the center axis


Figure 1.11: Different optical arrangements for light-sheet microscopy. (a) Original SPIM design with a single lens for detection and illumination [43]. (b) Upright SPIM to allow for easier sample mounting, such as using a petri dish (iSPIM [44, 45, J1]). (c) Inverted SPIM, where the objectives are below the sample, which is held by a thin foil [J2]. (d) Dual-view version of the upright configuration, where both objectives can be used for illumination and detection (diSPIM [46]). (e) Multidirectional SPIM (mSPIM) for even illumination of the sample with two objectives for illumination [47]. (f) Multi-view SPIM with two illumination and two detection objectives for in toto imaging of whole embryos (MuVi-SPIM [48], SimView [49], Four-lens SPIM [50]). (g) A combination of (d) and (f), using four identical objectives that can all illuminate and detect in a sequential manner, to achieve isotropic resolution without sample rotation (IsoView [51]).

to obtain multiple views. Although they only reached a lateral resolution of around 10 µm and an axial resolution of 26 µm, this method allowed them to generate a 3D reconstruction of the guinea pig cochlea [41].

Later, in 2002, Fuchs et al. developed Thin Light-Sheet Microscopy (TLSM) [42] and used this technique to investigate microbial life in seawater samples without disturbing their natural environment (by, e.g., placing them on a coverslip). Their light-sheet was similar to the one utilized in OPFOS, being 23 µm thin and providing a 1 mm × 1 mm field of view.

Despite these early efforts, the method did not gain larger momentum. The real breakthrough in light-sheet imaging happened at the European Molecular Biology Laboratory (EMBL) in 2004, where Huisken et al. [43] combined the advantages of endogenous fluorescent proteins and the optical sectioning capability of light-sheet illumination to image Medaka fish embryos, and the complete embryonic development of a Drosophila melanogaster embryo. They called this Selective-Plane Illumination Microscopy (SPIM), and it quickly became popular for investigating questions in developmental biology.

Since then, light-sheet-based imaging has gained more and more popularity, as it can be adapted and applied to a wide variety of problems. Although sample mounting can be challenging because of the objective arrangement, this can also be an advantage, since


new microscopes can be designed with the sample in mind [3, J3] (Figure 1.11). This made it possible to adapt the technique for numerous other specimens, such as zebrafish larvae [52], C. elegans embryos [45], mouse brain [6], and even mouse embryos [J2, 53, 54].

As many of these specimens require very different conditions and mounting techniques, these microscopes have been adapted to best accommodate them. An upright objective arrangement (Figure 1.11b), for example, allows imaging samples on a coverslip, while its inverted version is well suited for mouse embryos, where a foil separates the samples from the immersion medium (Figure 1.11c). A modified version of the upright arrangement allows for multi-view imaging using both objectives for illumination and detection in a sequential manner (Figure 1.11d) [55].

To achieve a more even illumination in larger samples, two objectives can be used from opposing directions to generate two light-sheets (Figure 1.11e) [47]. This arrangement can further be complemented by a second detection objective to achieve parallelized multi-view imaging (Figure 1.11f) [48–50]. For ultimate speed, 4 identical objectives can be used to achieve almost instantaneous views from 4 different directions by using all objectives for illumination and detection (Figure 1.11g) [51].

Furthermore, because of the wide-field detection scheme, it is possible to combine SPIM with many superresolution techniques, such as single molecule localization [56], STED [57], RESOLFT [J1], or structured illumination [7, 58, 59].

Since illumination and detection for light-sheet microscopy are decoupled, two independent optical paths are implemented.

The detection unit of a SPIM is largely equivalent to the detection unit of a wide-field microscope without the dichroic mirror (Figure 1.12). The most important components are the objective together with the tube lens, a filter wheel, and a sensor, typically a charge-coupled device (CCD) or scientific complementary metal–oxide–semiconductor (sCMOS) camera.

The resolution of a light-sheet microscope mainly depends on the detection objective. Since imaging biological specimens usually requires a water-based solution, the objectives also need to be directly submerged in the medium to minimize spherical aberrations. As the refractive index of water (n = 1.33) is greater than the refractive index of air, these objectives tend to have a higher NA, resulting in higher resolution. As the illumination is decoupled in this system, the light-sheet thickness also has an influence on the axial resolution. Finally, the resolution also depends on the pixel size of the sensor, which determines the spatial sampling rate of the image.

Although image quality and resolution greatly depend on the detection optics, the real strength of light-sheet microscopy is the inherent optical sectioning, which is due to the specially aligned illumination pattern that confines light to the vicinity of the


Figure 1.12: Basic optical components of a SPIM. A dedicated illumination objective is used to generate the light-sheet, which is an astigmatic Gaussian beam focused along one direction. Astigmatism is introduced by placing a cylindrical lens focusing on the back focal plane of the objective. Detection is performed at a right angle with a second, detection objective. Scattered laser light is filtered out, and a tube lens forms the image on an area sensor, such as an sCMOS camera.

detection focal plane.

There are two commonly used options to generate a light-sheet: either by using a cylindrical lens to illuminate the whole field of view with a static light-sheet, as in the original SPIM concept [43]; or by quickly scanning a thin laser beam through the focal plane, resulting in a virtual light-sheet. The latter method is called Digitally Scanned Light-sheet Microscopy (DSLM) [52].

1.3.1 Static light-sheet illumination

For a static light-sheet, the normally circular Gaussian laser beam needs to be shaped in an astigmatic manner, i.e., either expanded or squeezed along one direction, to shape it into a sheet instead of a beam. This effect can be achieved by using a cylindrical lens,


which, as the name suggests, has a curvature in one direction but is flat in the other, thus focusing a circular beam to a sheet (Figure 1.12).

However, to achieve light-sheets that are sufficiently thin for optical sectioning, one would need to use a cylindrical lens with a very short focal length, and these are hardly accessible in well-corrected formats. For this reason, it is more common to use a longer focal length cylindrical lens in conjunction with a microscope objective, which is well corrected for chromatic and spherical aberrations [60]. This way, the light-sheet length, thickness and width can be adjusted for the specific imaging task.

Light-sheet dimensions

The shape of the illumination light determines the optical sectioning capability and the field of view of the microscope, so it is important to be able to quantify these measures. The most commonly used illumination source is a laser beam coupled to a single-mode fiber, thus its properties can be described by Gaussian beam optics.

For paraxial waves, i.e., waves with nearly parallel wavefront normals, the general wave equation can be approximated with the paraxial Helmholtz equation [61]:

∇T²U + i·2k·(∂U/∂z) = 0,   (1.19)

where ∇T² = ∂²/∂x² + ∂²/∂y², U(r⃗) is the wave function, k = 2π/λ is the wavenumber, and z is the direction of light propagation.

A simple solution to this differential equation is the Gaussian beam:

U(r, z) = A0·(W0/W(z))·e^(−r²/W²(z))·e^(−i·φ(r,z)),   (1.20)

where A0 is the amplitude of the wave, W0 is the radius of the beam waist (the thinnest location on the beam), r = √(x² + y²) is the distance from the center of the beam, W(z) is the radius of the beam at distance z from the waist, and φ(r, z) is the combined phase part of the wave function. Furthermore:

W(z) = W0·√(1 + (z/zR)²),   (1.21)

where the parameter zR is called the Rayleigh range, and is defined the following way:

zR = πW0²/λ.   (1.22)

Apart from the circular Gaussian beam, the elliptical Gaussian beam is also an eigenfunction of the Helmholtz equation (Equation 1.19), which describes the beam shape after a


Figure 1.13: Light-sheet dimensions. (a) The light-sheet, with the field of view indicated. Since the light-sheet intensity is uneven, the field of view has to be confined to a smaller region. (b) The width and thickness of the field of view depend on the Rayleigh length of the beam (zR,y) and the beam waist (W0). (c) The height of the field of view is determined by the Gaussian profile of the astigmatic beam.

cylindrical lens:

U(x, y, z) = A0·√((W0,x/Wx(z − z0,x))·(W0,y/Wy(z − z0,y)))·e^(−x²/Wx²(z − z0,x))·e^(−y²/Wy²(z − z0,y))·e^(−i·φ(x,y,z)).   (1.23)

This beam still has a Gaussian profile along the x and y axes, but the radii (W0,x and W0,y) and the beam waist positions (z0,x and z0,y) are uncoupled, which results in an elliptical and astigmatic beam. The beam width can now be described by two independent equations for the two orthogonal directions:

Wx(z) = W0,x·√(1 + (z/zR,x)²)   and   Wy(z) = W0,y·√(1 + (z/zR,y)²).   (1.24)

Since the beam waist is different along the two axes, the Rayleigh range is also different:

zR,x = πW0,x²/λ   and   zR,y = πW0,y²/λ.   (1.25)

Based on these equations, the light-sheet dimensions and the usable field of view can be specified (Figure 1.13a). The light-sheet thickness will depend on the beam waist W0,y (if we assume the cylindrical lens is focusing along y), and the length of the light-sheet can be defined as twice the Rayleigh range, 2·zR,y (Figure 1.13b). As these are coupled (see Equation 1.25), having a thin light-sheet for better sectioning also means that its length will be relatively short. Fortunately, because of the quadratic relation, to increase the field of view by a factor of two, the light-sheet thickness only needs to increase by a factor of √2.
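The trade-off expressed by Equations 1.22 and 1.25 in a short sketch; the 488 nm excitation wavelength and the 2 µm waist are illustrative assumptions, not values from the thesis:

```python
# Gaussian light-sheet trade-off (Eqs. 1.22 and 1.25): the usable length
# 2*zR grows with the square of the waist, so a sqrt(2)-times thicker sheet
# doubles the field of view.
import math

lam = 488e-9  # illustrative excitation wavelength


def sheet_length(w0):
    """Usable light-sheet length 2*zR = 2*pi*w0^2/lambda."""
    return 2 * math.pi * w0**2 / lam


L1 = sheet_length(2e-6)                 # 2 um waist
L2 = sheet_length(2e-6 * math.sqrt(2))  # sqrt(2)-times thicker waist
print(L1 * 1e6, L2 * 1e6)               # ~51.5 um vs ~103.0 um
```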

Light-sheet height is determined by the intensity profile of the beam along the vertical


Figure 1.14: DSLM illumination. DSLM illuminates a specimen by a circularly-symmetric beam that is scanned over the field of view. This creates a virtual light-sheet, which illuminates a section of the specimen just like in SPIM. The light-sheet in DSLM is uniform over the whole field of view, and its height can be dynamically altered by changing the beam scan range.

axis (Figure 1.13c). Since this is a Gaussian function (see Equation 1.20), only a small part in the middle can be used for imaging, because towards the sides the intensity dramatically drops. When allowing a maximum 20% drop-off in intensity at the edges, the light-sheet height becomes hfov = 2 · 0.472 · W0,x = 0.944 · W0,x.
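The constant 0.472 can be reproduced as √(ln(1/0.8)), which corresponds to treating the Gaussian profile e^(−x²/W0,x²) as the intensity profile; a sketch with the 20% drop-off left as a parameter:

```python
# Usable light-sheet height for a given edge intensity drop-off, assuming
# an intensity profile I(x) ~ exp(-x^2 / W0x^2), which is what the 0.472
# constant (= sqrt(ln(1/0.8))) implies.
import math


def usable_height(w0x, max_dropoff=0.2):
    """Height over which intensity stays above (1 - max_dropoff) of the peak."""
    return 2 * w0x * math.sqrt(math.log(1 / (1 - max_dropoff)))


print(usable_height(1.0))  # ~0.944, matching h_fov = 2 * 0.472 * W0,x
```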

1.3.2 Digitally scanned light-sheet illumination

Although generating a static light-sheet is relatively straightforward with the simple addition of a cylindrical lens to the light path, it has some drawbacks. As already mentioned in the previous section, the light intensity distribution along the field of view is not constant, as the light-sheet is shaped from a Gaussian beam. Furthermore, along the lateral direction of the light-sheet the illumination NA is extremely low, resulting in effectively collimated light. Because of this, shadowing artifacts can deteriorate the image quality [47].


A more flexible way of creating a light-sheet is by scanning a focused beam in the focal plane to generate a virtual light-sheet (digital scanned light-sheet microscopy, DSLM [52]). Although this method might require higher peak intensities, it solves both drawbacks of the cylindrical lens illumination. By scanning the beam, the light-sheet height can be freely chosen, and a homogeneous illumination is provided. Focusing the beam evenly in all directions introduces more angles in the lateral direction as well, which shortens the length of the shadows.

The basic optical layout of a DSLM is shown in Figure 1.14. A galvanometer-controlled mirror that can quickly turn around its axis is used to alter the beam path, which results in an angular sweep of the laser beam. To convert the angular movement to translation, a scan lens is used to generate an intermediate scanning plane. This plane is then imaged onto the specimen by the tube lens and the illumination objective, resulting in a scanned focused beam at the detection focal plane. The detection unit is identical to the wide-field detection scheme, just as for static light-sheet illumination. By scanning the beam at a high frequency, a virtual light-sheet is generated, and the fluorescence signal is captured by a single exposure on the camera, resulting in an evenly illuminated field of view.

1.4 Light-sheet imaging of mammalian development

Live imaging of mammalian embryos is an especially challenging task due to the intrauterine nature of their development. As the embryos are not accessible in their natural environment, it is necessary to replicate the conditions as closely as possible by providing an appropriate medium, temperature, and atmospheric composition. Moreover, these embryos are extremely sensitive to light, which poses a further challenge for microscopy [62]. Illumination with high laser power for an extended time frame can result in bleaching of the fluorophores, which in turn lowers the signal at later time points. Furthermore, any absorbed photon has the potential to modify chemical bonds inside the specimen, which can lead to phototoxic effects, disrupting the proper development of the embryo.

Because of its optical sectioning capabilities combined with the high specificity of fluorescent labels, confocal microscopy has had an immense influence on biological research, and has been the go-to technique behind many discoveries for decades [36, 63, 64]. Imaging live specimens for an extended period of time with confocal microscopy, although possible [65, 66], is not ideal. Due to the use of a single objective, for each voxel imaged, a large portion of the specimen has to be illuminated below and above the focal plane as well. This results in a high dose of radiation on the sample that can be as much as 30–100 times larger than the dose used for the actual imaging [67], depending on the number of planes recorded. Moreover, the pinhole, although it rejects out-of-focus light,


also decreases the detectable signal intensity, and thus can have a negative impact on image contrast [68].

In contrast to confocal microscopy, light-sheet microscopy uses a much more efficient illumination scheme, as only the vicinity of the focal plane is illuminated. To achieve 3D imaging, the sample is translated relative to the light-sheet while snapshots are taken of each plane. The total irradiation in this case is proportional to the thickness of the light-sheet, and does not depend on the number of planes recorded.
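A crude geometric model illustrates the difference: for a sample of depth D recorded in N planes, every confocal plane scan irradiates the full depth, while a light-sheet of thickness t only hits a voxel while the sheet overlaps it, so the dose ratio comes out as D/t regardless of N. This is a simplified sketch with illustrative numbers, not the dosimetry of [67]:

```python
def dose_ratio(depth_um, sheet_um, n_planes):
    """Illumination events per voxel, confocal vs. light-sheet
    (simplified geometric model)."""
    step = depth_um / n_planes
    confocal = n_planes             # every plane exposure irradiates every voxel
    light_sheet = sheet_um / step   # exposures whose sheet overlaps a voxel
    return confocal / light_sheet   # = depth / thickness, independent of N

# 100 um deep sample, ~3 um sheet: roughly 33x lower dose,
# consistent in magnitude with the 30-100x range cited above.
print(dose_ratio(100, 3, 50))
```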

Another benefit of light-sheet microscopy lies in the parallel readout of the fluorescence due to the wide-field detection scheme. Since the whole focal plane is captured at the same time, this can be considerably faster than the point-scanning method of confocal microscopy.

In the next sections we will review the possible strategies for imaging mouse specimens with light-sheet microscopy at different stages of development: pre-implantation and post-implantation embryos, as well as adult mice.

1.4.1 Imaging mammalian pre-implantation development

Pre-implantation is the first phase of mouse embryonic development, starting right after fertilization. The embryo in this phase is still in the oviduct, travelling towards the uterus, where it will implant into the uterine wall. The developmental stage between fertilization and implantation is called the pre-implantation stage. Here the embryo divides, and the first cell fate specifications already begin with the formation of the trophectoderm (TE) and the inner cell mass (ICM) at the blastocyst stage. ICM cells will form the embryo proper, while TE cells will contribute to the formation of the extraembryonic tissues.

During this process the embryo is still self-sufficient, which makes it possible to image this stage in an ex vivo embryo culture by providing the proper conditions [69]. Long-term imaging, however, is extremely challenging due to the very high light sensitivity of the specimens. Imaging these embryos in a confocal microscope leads to incomplete development, even if the imaging frequency is reduced to once every 15 minutes [J2].

Imaging for just a few hours is already enough to investigate important processes, such as cell fate patterning [70]. Other approaches aim to lower the phototoxicity either by using 2-photon illumination, which operates at longer wavelengths [71–73], or by lowering the imaging frequency as a compromise [74]. These approaches, however, either require highly specialized equipment, such as an ultra-short pulsed laser, or compromise on the time resolution.

Light-sheet microscopy, on the other hand, drastically lowers the phototoxic effects by using a much more efficient illumination scheme (see Section 1.3), and thus makes better use of the photon budget. Using this technique, it is possible to image the full pre-implantation development at high spatial and temporal resolution without any negative

Figure 1.15: Inverted light-sheet microscope for multiple early mouse embryo imaging. (i) A sample holder (SH) containing a transparent FEP membrane (M) allows multiple embryo samples (S) to be placed in line for multisample imaging. (ii) Inverted objective orientation with side view of the sample holder. One possible configuration is to use a 10× 0.3 NA illumination objective (IL) and a 100× 1.1 NA detection objective placed at a right angle to the illumination. (iii) Close-up side view of a sample on the FEP membrane with both objectives. Since the FEP membrane is transparent in water, it does not hinder the illumination beam penetrating the sample or the emitted fluorescence reaching the detection objective. (B) Still images of one particular time-lapse experiment, and (C) the corresponding segmented nuclei. The star depicts the polar body. Adapted from Strnad et al. [J2].

impact on the developmental process. Such a microscope was developed by Strnad et al. at EMBL [J2], who used it to understand when exactly the first cell fate specification is decided in the embryonic cells.

As mouse embryo culture is not compatible with the standard agarose-based sample mounting techniques, a completely new approach was taken, which resulted in a microscope designed around the sample. The V-shaped sample holder was built with a bottom window, and it is lined with a thin FEP (fluorinated ethylene propylene) foil that supports the embryos (Figure 1.15A, i). This arrangement allows the use of the standard microdrop embryo culture, while providing proper viewing access for the objectives. As the embryos are relatively small (100 µm) and transparent, a single-illumination, single-detection objective arrangement is sufficient for high quality imaging. A low resolution (NA = 0.3) objective is used to generate the scanned light-sheet, and a high resolution (NA = 1.1) objective detects the fluorescence at 50× magnification (Figure 1.15A, ii). As the foil is curved, it allows unrestricted access to the embryo, while separating the imaging medium from the immersion liquid (Figure 1.15A, iii). Furthermore, its refractive index matches that of water, so optical aberrations are minimized.

Using this setup, Strnad et al. were able to pinpoint the exact timing of the first cell fate decision that leads either to ICM or TE cells. More than 100 embryos expressing

Figure 1.16: Imaging mouse post-implantation development. (A) (i, ii) Mounting technique for E5.5 to E6.5 embryos. A tip-truncated 1 mL syringe holds an acrylic rod, cut and drilled with holes of different sizes in order to best fit the mouse embryo by its Reichert's membrane, leaving the embryo free inside the medium. (iii) Maximum intensity projection of a 13 µm thick slice at 78 µm from the distal end of an E6.5 mouse embryo. The different tissues corresponding to the rudimentary body plan are annotated. Scale bar: 20 µm. (B) For stages ranging between E6.5 and E8.5, mounting using a hollow agarose cylinder has also been proposed successfully. Optimal sizes for the corresponding embryonic stage to be imaged can be produced, so that the embryo can grow with the least hindrance. (C–F) Steps for mounting the mouse embryo inside the agarose cylinder. The inner volume of the cylinder can be filled with the optimal medium, allowing the much larger chamber volume to hold less expensive medium. (G–H) Example images of a 9.8 h time-lapse with the mounting shown in (B), where the expansion of the yolk sac can be observed in the direction of the blue arrows. (I) In order to help multiview light-sheet setups overcome the higher scattering of embryos at this stage, and to allow faster and easier data recording, electronic confocal slit detection allows better quality images to be taken at shorter acquisition times. Scale bar: 20 µm. Adapted from Ichikawa et al. [53], Udan et al. [54] and de Medeiros and Norlin et al. [75].

nuclear (H2B-mCherry) and membrane (mG) markers were imaged for the entire 3 days of pre-implantation development (Figure 1.15B). The image quality was sufficient to segment all nuclei in the embryos (Figure 1.15C) and track them from the 1 to the 64 cell stage, building the complete lineage tree. Based on the lineage trees and the final cell fate assignments, it was determined that by the 16 cell stage the final specification is already decided, while before this it is still random.

1.4.2 Imaging mammalian post-implantation development

After the initial 3 days of pre-implantation, the embryo undergoes the implantation process, during which it is inaccessible to microscopic investigation. Although a new method was recently developed that allows in vitro culturing of embryos embedded in a 3D gel [76], this has not reached wider adoption yet. Hence, developmental


processes during implantation have only been investigated in fixed embryos.

Following the implantation process, at the post-implantation phase, ex vivo embryo

culturing becomes possible again [77, 78], and these embryos can be kept alive for several days in an artificial environment. Especially interesting stages during this period are the late blastocyst (∼E4.5), gastrulation (∼E6.5), and somite formation (∼E8.5). Before live imaging techniques became available, these stages were mostly investigated using in situ visualization techniques to shed light on several developmental processes [79]. Many pathways playing important roles have been identified this way; however, live imaging is still necessary to validate these results and ensure continuity in the same specimen [80].

Light-sheet microscopy is a good choice for imaging these stages, just as in the case of pre-implantation embryos. These embryos, however, present new challenges for sample handling and culturing. Owing to their extreme sensitivity, dissection can be difficult, especially for earlier stages (E4.5). Furthermore, since the embryo keeps growing during development, gel embedding is not an option, as it might constrain proper development. Thus, special handling and mounting techniques had to be developed in order to allow live 3D imaging of these specimens.

Ichikawa et al. [53] designed a custom mounting apparatus manufactured from acrylic in the shape of a rod that fits in a standard tip-truncated 1 mL syringe (Figure 1.16A, i). Several holes of different sizes were drilled in the rod, which can accommodate different sized embryos. The embryos are held by an extraembryonic tissue, the Reichert's membrane (Figure 1.16A, ii). Mounting this way does not disturb the embryo itself: it can freely develop in the culturing medium, while remaining stationary for the purpose of imaging. Using this technique, Ichikawa et al. were able to image through several stages of development, including interkinetic nuclear migration at stages E5.5–6.5 (Figure 1.16A, iii).

A second sample mounting method for light-sheet imaging was developed by Udan et al. [54], who were able to record a full 24 h time-lapse of living embryos, focusing on the gastrulation and yolk sac formation processes (Figure 1.16G–H). Their mounting technique consisted of producing a hollow agarose cylinder that could support the embryo from below without constraining its growth (Figure 1.16B–F).

Another consideration to keep in mind is the growing size of the embryo. As it gets bigger, illumination is less efficient, and scattering can dominate at larger depths. As mentioned earlier (Figure 1.3), this can be alleviated by multi-view imaging: illuminating and detecting from multiple directions. Electronic confocal slit detection can further improve the signal-to-noise ratio by rejecting unwanted scattered light, which allows deeper imaging in large specimens, even up to E7.5 (Figure 1.16I) [75].

Figure 1.17: Imaging adult mouse brain with light-sheet microscopy. (A) Schematics of the ultramicroscope for brain imaging. The specimen is embedded in clearing medium to ensure the necessary imaging depth. Illumination is applied from two sides to achieve even illumination over the whole field of view. The light-sheet is generated by a slit aperture followed by a cylindrical lens. The specimen is imaged from the top using the wide-field detection method. (B) Photograph of the imaging chamber with a mounted cleared specimen and light-sheet illumination. (C) Surface rendering of a whole mouse brain, reconstructed from 550 optical sections. GFP and autofluorescence signal was recorded. Hippocampal pyramidal and granule cell layers are visible in the digital section. Scale bar: 1 mm. Objective: Planapochromat 0.5×. (D) Reconstruction of an excised hippocampus from 410 sections. Note that single cell bodies are visible. Scale bar: 500 µm. Objective: Fluar 2.5×. (E) 3D reconstruction of a smaller region of an excised hippocampus from 132 sections. Scale bar: 200 µm. Objective: Fluar 5×. (F) 3D reconstruction of CA1 pyramidal cells imaged with a higher resolution objective (LD-Plan Neofluar 20× NA 0.4) in a whole hippocampus (430 sections). Dendritic spines are also visible, even though usually a higher NA objective (>1.2) is required to visualize these. Scale bar: 5 µm. Adapted from Dodt et al. [6].

1.4.3 Imaging adult mice

Imaging adult mice is especially interesting for answering neurobiological questions. Since development is over at this stage, the use of an environmental chamber is no longer necessary. The biggest challenge for imaging these samples is their size: they are centimeters in size instead of less than a millimeter as in the embryonic stage. Furthermore, the tissues of adult mice are much more opaque, which severely limits imaging depth. Light-sheet microscopy can already deal with large specimens; however, to achieve (sub)cellular resolution for an entire brain, for example, multiple recordings have to be stitched together after acquisition [81].

Light scattering and absorption depend on the tissue composition and imaging depth. Especially the brain, with a high concentration of lipids in the myelinated fibers, poses a real challenge for imaging. Live imaging is usually performed with 2-photon microscopy, which can penetrate the tissue up to 800 µm deep [82]. Using fixed samples, however, the scattering problem can be eliminated by the use of tissue clearing methods.

Tissue clearing is a process that removes and/or substitutes scattering and absorbing molecules by a chemical process while keeping the tissue structure intact and preserving fluorescence. The most dominant contributors to these effects are the proteins and lipids. Proteins in the cells locally change the refractive index of the tissue, which leads to


scattering, while lipids predominantly absorb the light. Clearing methods tackle these problems by chemically removing lipids and substituting them with certain types of gel, and by immersing the whole sample in a medium with a higher refractive index to match the optical properties of the proteins. Numerous methods have been developed for tissue clearing, such as ScaleA2 [83], 3DISCO [84, 85], SeeDB [86], CLARITY [87], CUBIC [88] and iDISCO [89].

The first combination of optical clearing and light-sheet microscopy for whole brain imaging was performed by Dodt et al. [6] using a custom ultramicroscope consisting of two opposing illumination arms and a single detection arm with an objective from above (Figure 1.17A). The light-sheets were positioned horizontally, and the cleared samples could be placed in a transparent imaging chamber filled with the clearing medium (Figure 1.17B). Imaging was performed from both top and bottom after rotating the sample 180◦. By changing the detection lens, it is possible to adapt the system to different samples: low magnification is capable of imaging the whole brain (Figure 1.17C), while for smaller, dissected parts, such as the hippocampus, higher magnification with higher resolution is more appropriate (Figure 1.17D). With this configuration individual cell-cell contacts can be recognized (Figure 1.17E), and even dendritic spines can be visualized (Figure 1.17F).

Although light-sheet microscopy is highly suitable for imaging cleared specimens, even entire mice [90], brain imaging in live animals is more challenging due to the two-objective setup of a conventional SPIM microscope. Two light-sheet-based methods, however, offer a solution for this: axial plane optical microscopy (APOM) [91] and swept confocally-aligned planar excitation (SCAPE) [92] both use only a single objective to generate the light-sheet and to detect the fluorescence. This is done by rotating the detection plane at an intermediate image (APOM), or by rotating both the light-sheet and the detection plane simultaneously (SCAPE).


Chapter 2

Dual Mouse-SPIM

Unraveling the secrets of mammalian development has been a long-standing challenge for developmental biologists and medical professionals alike. This phase of early life is an incredibly complex and dynamic process spanning large scales in space and time. Subcellular processes at the nanoscale happen in the range of milliseconds or shorter, while whole embryo reorganizations and tissue migration events take place over the course of hours [93]. Resolving these processes presents a true challenge, since to understand the underlying mechanisms, molecular specificity is just as crucial as high spatial and temporal resolution. As we have seen in Chapter 1, live imaging of mouse embryonic development is an especially challenging task, but light-sheet microscopy can offer a good solution owing to its gentle optical sectioning and fast image acquisition. The Mouse-SPIM [J2] introduced earlier (Section 1.4.1) also solves the issue of live imaging with its inverted design: the embryos are held by a foil which not only allows easy sample handling but also acts as a barrier that isolates the embryos from the immersion medium.

Like all microscopes, the Mouse-SPIM had to make a compromise: although its design allows for long-term live imaging, it only has a single detection view, as rotation is not possible with the sample mounting trays. Due to this configuration, its resolution is inherently anisotropic, with an axial-to-lateral resolution ratio of around 3. Although this is sufficient for many applications, such as cell tracking, it can be limiting for detecting subcellular features. One such example is chromosome tracking, which could shed light on the chromosome missegregation mechanisms in early embryonic development that cause many congenital diseases also affecting humans.

In this chapter, we present a novel light-sheet microscope developed to address the above challenges by using two high NA objectives for subcellular isotropic resolution and low-light imaging, while offering multi-view detection without the need to rotate the samples. The microscope is designed to be compatible with live imaging, offering new perspectives in the research of mouse embryonic development. This chapter will describe the design concepts for the microscope, the optical layout, alignment strategies, the

Figure 2.1: Dual view concept with high NA objectives. To achieve multi-view detection while maximizing resolution and light collection efficiency, two high NA objectives are placed in a 120◦ arrangement. The sample (orange) is held from below by a thin FEP foil. To be able to overlap the light-sheet with the focal plane, the light-sheet is tilted by 30◦. The objectives are used in an alternating sequence for illumination and detection.

results of various performance measurements, and its multi-view imaging capabilities.

2.1 Microscope design concept

As the limiting factor for subcellular imaging with the original Mouse-SPIM is the poor axial resolution relative to the lateral, our first aim was to increase the axial resolution, ideally to match the lateral resolution. A common way to reach isotropic resolution is to image a specimen from multiple directions and combine the resulting images by multi-view deconvolution [32, 94, 95]. This has the benefit that the high-resolution information from one view can complement the low axial resolution of the other view, thus providing better resolution in all three directions.

As described in Chapter 1, many SPIM implementations allow for recording multiple views either by rotating the sample, or by surrounding the sample with multiple objectives that are used for detection (Figure 1.11). For our setup, following the sample mounting technique of the original Mouse-SPIM, we wanted to keep the open-top sample mounting possibility, as this had proven to be highly compatible with mouse embryo imaging. To achieve multi-view detection in this configuration, we designed a setup where both objectives can be used for illumination and detection in a sequential manner, inspired by previous symmetrical SPIM designs [46, 96].

To achieve the highest possible resolution from two views, the core of our design is based on the symmetric arrangement of two Nikon CFI75 Apo LWD 25x water-dipping objectives with a numerical aperture of 1.1. Due to the large light collection angle of these objectives, we arrange them at 120◦ instead of the conventional 90◦ used for light-sheet imaging. As the light-sheet still needs to coincide with the imaging focal plane


of the objectives, we tilt the light-sheet by 30◦ (Figure 2.1). Due to the low NA of the light-sheet, this is possible without affecting illumination quality.

This 120◦ arrangement has several benefits when compared to the traditional 90◦ configuration. When placing the objectives at 90◦, the largest possible light collection half-angle for an objective is αmax,90 = 45◦, and the corresponding NA is NAmax,90 = n · sin αmax,90 = 0.94, where n = 1.33 is the refractive index of water. Considering that this is an idealized case, the practically available highest NA is only 0.8. For a 120◦ arrangement, the theoretical maximum is NAmax,120 = n · sin(120◦/2) = 1.15, with a practical maximum NA of 1.1.

Although the resolution won’t be completely isotropic when combining the imagesfrom two 120◦ views (Figure 2.2), as it is for 90◦ views, due to the higher maximum NApossible, the resolution can be higher in the 120◦ case. When simulating the combinedmulti-view PSFs (Figure 2.2), for 0.8 NA objectives in 90◦ the axial and lateral resolutionsare both 317 nm; while for two 1.1 NA objectives in 120◦ the axial resolution will beidentical, 317 nm, and the lateral will be better, 193 nm.

Although this difference in resolution may seem marginal, the 120◦ configuration has another advantage in light collection efficiency. Collecting as much of the fluorescence signal as possible is crucial in live imaging applications due to the limited available photon budget (see Figure 1.1 and [8]). Collecting more light from the sample allows imaging faster with the same contrast, or reducing the illumination power while maintaining the imaging speed. As light collection efficiency depends on the solid angle subtended by the detection lens (see Appendix Section C), a 1.1 NA objective can collect twice as many photons as a 0.8 NA objective, which gives the 120◦ setup a clear edge in low-light imaging.
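The factor of two can be reproduced from the solid-angle formula; a minimal sketch (the derivation itself is in Appendix Section C):

```python
import math

def collection_fraction(na, n=1.333):
    """Fraction of isotropically emitted photons captured by an objective:
    Omega / (4*pi) = (1 - cos(alpha)) / 2, with alpha = arcsin(NA / n)."""
    alpha = math.asin(na / n)
    return (1.0 - math.cos(alpha)) / 2.0

ratio = collection_fraction(1.1) / collection_fraction(0.8)
print(round(ratio, 1))  # 2.2: the 1.1 NA lens collects about twice the light
```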

2.1.1 Light-sheet design

To allow for flexibility in the field of view height, to achieve even illumination and reduced stripes, and to have the potential for confocal line detection, we opted to use the beam-scanning technique to generate a virtual light-sheet. The effective focal length of the Nikon 25x objective, given the 200 mm focal length tube lens, is

fo = ftl / M = 200 mm / 25 = 8 mm, (2.1)

and the back aperture diameter is d = 17.6 mm.

To generate the tilted light-sheet as shown in Figure 2.1, the illumination beam needs to be displaced in the back focal plane by

δ = fo · tan 30◦ = 4.62 mm. (2.2)
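Equations 2.1 and 2.2 amount to two lines of arithmetic:

```python
import math

f_tl = 200.0                 # tube lens focal length (mm)
mag = 25.0                   # objective magnification
f_o = f_tl / mag             # effective focal length, Eq. 2.1 -> 8 mm
delta = f_o * math.tan(math.radians(30.0))  # BFP displacement, Eq. 2.2

print(f_o, round(delta, 2))  # 8.0 4.62
```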

Figure 2.2: Lateral and axial resolution of a multi-view optical system. a) Simulated PSF for a single view. b)–d) Simulated compound PSF of two views aligned at b) 150, c) 120 and d) 90 degrees to each other. e) Axial and lateral resolution of a dual-view setup depending on the rotation angle between the two objectives. Parameters used for the calculations: NA = 1.1, λex = 488 nm, λdet = 510 nm, n = 1.333 for water immersion.
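The angle dependence in panel e) can be illustrated with a toy model: approximate each view's PSF by an anisotropic Gaussian and multiply the two (precisions add under multiplication of Gaussians). This is a sketch with illustrative single-view widths, not the thesis's full PSF simulation:

```python
import numpy as np

def dual_view_widths(angle_deg, sigma_lat=1.0, sigma_ax=3.0):
    """Combined PSF widths for two views whose detection axes are
    separated by angle_deg. Each view is modeled as an anisotropic
    Gaussian; multiplying Gaussians adds their precision (inverse
    covariance) matrices. Returns (best, worst) combined sigma."""
    half = np.deg2rad(angle_deg) / 2.0
    prec_single = np.diag([1.0 / sigma_lat**2, 1.0 / sigma_ax**2])
    total = np.zeros((2, 2))
    for phi in (half, -half):
        c, s = np.cos(phi), np.sin(phi)
        rot = np.array([[c, -s], [s, c]])
        total += rot @ prec_single @ rot.T
    sig = 1.0 / np.sqrt(np.linalg.eigvalsh(total))
    return float(sig.min()), float(sig.max())

best90, worst90 = dual_view_widths(90)    # isotropic, as in panel d)
best120, worst120 = dual_view_widths(120) # mildly anisotropic, panel c)
```

The model reproduces the qualitative behavior: at 90° the combined PSF is isotropic, while at 120° the two widths diverge, with the axial width still far better than a single view's.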

Since the Gaussian beam is not uniform, only a smaller portion of it can be used to maintain an even illumination (Figure 1.13a). Because the size of an early mouse embryo is around 80 µm, we require the length and the height of the light-sheet to be at least 100 µm.

The length and thickness of the light-sheet

As we saw in Section 1.3.1, the length of the light-sheet is determined by the Rayleigh range of the beam in the zy-plane. Since lFOV = 2 · zR = 100 µm,

zR = 50 µm. (2.3)

Since the Rayleigh range and the diameter of the beam waist are coupled, the light-sheet thickness can be calculated after rearranging Equation 1.22:

2 · W0 = 2 · √(zR · λ / π) = 5.57 µm (2.4)

when λ = 488 nm for GFP excitation. As the beam width in these calculations is defined at 1/e² of the peak intensity, we also calculate the more commonly used full width at half maximum (FWHM):

FWHM = W0 · √(2 ln 2) = 3.28 µm. (2.5)


From this, the divergence angle of the beam is

θ0 =λ

πW0,y= 55.74mrad = 3.196◦ (2.6)

This means, the numerical aperture needed to produce this light-sheet is:

NAls = n · sin(θ0) = 0.0743 (2.7)

Since NA = 1.1, the diameter of the back aperture is d = 17.6 mm; because the divergence angle θ0 ≪ 1, we can use the paraxial approximation, and the necessary beam width at the back focal plane in the y direction is

by = d · NAls / NA = 1.19 mm (2.8)

Thus, to generate a light-sheet with appropriate length to cover a whole mouse pre-implantation embryo, the laser beam diameter should be b = 1.19 mm. Larger diameters will result in a more focused beam and a shorter light-sheet, while a smaller-diameter beam will have worse optical sectioning capabilities, but will provide a larger field of view.
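The chain of relations above (Equations 2.3–2.8) can be verified numerically. The short script below recomputes each quantity from the two design inputs, the required light-sheet length and the excitation wavelength; the variable names are illustrative, not taken from the original implementation.

```python
import math

# Design inputs (Section 2.1.1)
l_fov = 100e-6      # required light-sheet length (m)
lam = 488e-9        # excitation wavelength for GFP (m)
n = 1.333           # refractive index of water
na_det = 1.1        # detection objective NA
d_ba = 17.6e-3      # objective back-aperture diameter (m)

# Eq. 2.3: the Rayleigh range is half the usable light-sheet length
z_r = l_fov / 2

# Eq. 2.4: beam-waist diameter (1/e^2) from the Rayleigh range
w0 = math.sqrt(z_r * lam / math.pi)
thickness = 2 * w0

# Eq. 2.5: convert the 1/e^2 radius to FWHM
fwhm = w0 * math.sqrt(2 * math.log(2))

# Eq. 2.6: far-field divergence half-angle of the Gaussian beam
theta0 = lam / (math.pi * w0)

# Eq. 2.7: NA required to produce this light-sheet
na_ls = n * math.sin(theta0)

# Eq. 2.8: paraxial beam width needed at the back focal plane
b_y = d_ba * na_ls / na_det

print(f"2*W0 = {thickness*1e6:.2f} um, FWHM = {fwhm*1e6:.2f} um")
print(f"theta0 = {theta0*1e3:.2f} mrad, NA_ls = {na_ls:.4f}, b_y = {b_y*1e3:.2f} mm")
```

Running this reproduces the values quoted in Equations 2.3–2.8 to the precision given in the text.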

The height of the light-sheet

The height of the light-sheet can be adjusted by changing the beam scanning amplitude with the galvanometric mirror (also referred to as scanner). To scan the entire height of the field of view of hFOV = 270 µm, the scanning angle range at the back focal plane of the objective will need to be θ = tan⁻¹(hFOV/2/fo) = ±0.967°.
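As a quick numerical check of the quoted scan range: the objective focal length fo is not stated explicitly here, so the nominal value for a 25× objective with a 200 mm tube lens, fo = 200 mm/25 = 8 mm, is assumed.

```python
import math

h_fov = 270e-6       # light-sheet height to cover (m)
f_o = 200e-3 / 25    # assumed objective focal length: 8 mm for a 25x objective

# Half the field height, seen from the back focal plane of the objective
theta = math.degrees(math.atan(h_fov / 2 / f_o))
print(f"scan half-angle = +/-{theta:.3f} deg")
```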

2.2 Optical layout

Based on the requirements and other considerations shown in the previous sections, the microscope was designed in three main parts: 1) the core unit (green), 2) illumination branches (blue) and 3) detection branches (yellow, Figure 2.3). The aim when integrating these units was to allow a high level of flexibility with robust operation, while also keeping efficiency in mind. After finalizing the concept, the mechanical layout of the microscope was designed in SolidWorks.

2.2.1 Core unit

As the most important part of the microscope is actually the sample, the design is based around a core consisting of the imaging chamber and the objectives (Figure 2.4). Also part of the core are two mirror blocks placed at the back of the objectives, and three


Figure 2.3: Dual Mouse-SPIM optical layout. The microscope consists of two main parts, the illumination branches (blue) and detection branches (yellow). For both illumination and detection there are two identical paths implemented. The illumination direction can be changed by applying a different offset to the galvanometric mirror, which in turn will direct the beam to the opposite face of the prism mirror. L1 and L2 will then image the scanner on M1. Using L3 as a scan lens, and L4 as a tube lens, the scanned beam is coupled into the objective path by a quad band dichroic mirror. CAM – camera, DM – dichroic mirror, FW – filter wheel, L – lens, M – mirror, O – objective, PM – prism mirror, S – sample


Figure 2.4: The core unit of the microscope. The two objectives (O and O’) are mounted on a solid 15 mm thick aluminium plate. Fitting on the objectives, a custom chamber (Ch) holds the immersion medium for imaging. The mirror block with mirrors M3 and M4 directs the light to 65 mm optical rails. Excitation (Ex.) and emission (Em.) light paths are indicated by the dash-dot line. Due to the symmetric arrangement, the excitation and emission paths can be alternated. The objectives are secured with rings 1–3 (green, see main text for details).

custom-designed rings to hold the objectives in place. The objectives point slightly upwards, enclosing a 60° angle with the horizontal plane and a 120° angle with each other.

Chamber

The chamber serves two purposes: it holds the immersion liquid necessary for imaging, and it keeps the objectives in the 120° position. The objectives are held by their necks, as opposed to the standard mounting method, which is from the back, by the threads. The advantage of this is that any axial movements due to thermal expansion are greatly reduced, thus the focal plane position is more stable even when changing the imaging conditions.

The chamber is machined from a high-performance plastic, polyether ether ketone (PEEK). This material has many beneficial properties: it is food safe, chemically highly inert, and resistant to most solvents used in a biology laboratory. Due to these properties, PEEK is live imaging compatible, even for sensitive samples, as it can also be autoclaved. Compared to other plastics, its mechanical properties are also superior. It has high tensile and compressive strength, comparable to those of aluminium, low thermal expansion and low thermal conductivity. This can be beneficial when implementing temperature control, as thermal loss is reduced.

The objectives are kept in place by two custom-designed rings (Figure 2.4, rings 1 and 2).


The first ring has a cross-sectional shape of a wedge, and sits tightly against both the objective and the wall of the chamber. The second ring can freely slide on the objective, and has threads matching the chamber. When turned in, the threaded ring pushes the wedge ring further in, which in turn presses against the objective and the chamber wall uniformly, thus preventing the objective from moving, and sealing the chamber at the same time. As the wedge ring is made from a soft plastic (Delrin), it will press evenly against the objective, preventing any damage. Given the conical shape of the ring, it will also automatically center the objective, ensuring correct positioning.

To relieve any rotational stresses from the objective, the back of the objective is also supported by the mirror block. This support is not fixed, however: a third ring, made of PEEK, is threaded on the objective, and slides into the opening of the mirror block. This reduces the forces on the objectives, while still allowing for some axial movements that might occur due to thermal expansion.

Mirror blocks

Apart from supporting the objectives from the back, the mirror blocks house two broadband dielectric mirrors (Thorlabs, BBE1-E03 and OptoSigma, TFMS-30C05-4/11) to direct the light to and from the objectives at a standard 65 mm height, compatible with the Owis SYS65 rail system. The combination of two mirrors has two benefits compared to using just one. With a single mirror directly reflecting the light to the back, the entire assembly would need to be much higher to reach the desired 65 mm height, which could result in stability problems. Furthermore, due to the 60° rotation angle of the objective, the image of the objective would also be rotated if using only a single mirror. With two mirrors, the reflection planes can be kept orthogonal to the optical table, which results in a straight image after the mirror block. This is beneficial not only when recording the images, but also when aligning the illumination arm: with two mirrors, only a convenient vertical scanning is required to produce the light-sheet; with a single mirror, the scanning direction would need to be rotated by 60°.

2.2.2 Illumination

The illumination arm of the microscope directs and shapes the laser beam to generate the proper light-sheet dimensions at the sample. As was calculated in Section 2.1.1, a beam diameter of 1.2 mm is ideal for this setup.

The illumination arm has three main roles:

1. it expands the laser beam to the required size;
2. it images the galvanometric scanner to the back focal plane of the objective;
3. it switches the laser light between the two objectives during imaging.


To achieve the desired beam diameter, a 1:2 beam expander (Sill Optics, 112751) is used in the reverse direction. As the output of the laser fiber produces a 3 mm diameter beam, this reduces it to 1.5 mm. As this is close to the required beam diameter, the lenses further along the illumination path do not introduce any magnification.

Switching between the two illumination arms is performed by a custom-designed beam splitter unit (Figure 2.5). Instead of utilizing a 50/50 beam splitter cube and mechanical shutters, we exploit the fact that a galvanometric scanner is needed to generate the light-sheet. As this galvanometric scanner (Cambridge Technology, 6210B) has a relatively large movement range (±20°), it is also suitable for diverting the beam from one illumination arm to the other.

Figure 2.5: Illumination branch splitting unit. To divert the beam to either side, a right angle prism mirror is used in conjunction with a galvanometric scanning mirror. L1 acts as a scan lens, thus the beam is translated on mirror M. Depending on the scanner angle, the beam will be reflected either to the left (a) or to the right (b). L2 and L2’ act as relay lenses, and will image the scanner movement to the corresponding intermediate planes.

Switching the illumination side is done in the following way. As the scanner is positioned at the focus of the first lens (L1, f1 = 75 mm, Edmund Optics, #47-639), the rotational movement results in a linear scanning movement on mirror M and the prism mirror PM (Figure 2.5). Depending on the lateral position of the beam, it will hit either the left or the right leg of the prism (Thorlabs, MRAK25-E02), and will be reflected in either direction. As the galvanometric mirror can be precisely controlled through our custom software, we can set and save the position where the beam is centered on the left lens L2 (f2 = 75 mm, Edmund Optics, #47-639) (Figure 2.5a) and the position where the beam is centered on the right lens L2’ (Figure 2.5b). Lenses L1 and L2 (L2’) form a 1:1 relay system, and image the scanner on mirror M1 (M1’) (Figure 2.3). This way we can use the same scanner to generate the light-sheet for both directions, depending on the initial offset position. This not only allows us to electronically switch between the illumination arms, but also requires only a single galvanometric scanner instead of one for each arm.


Due to the arrangement of the bottom mirror and the prism mirror, the scanning direction will be rotated by 90°. This results in a vertical scanning plane, which is exactly what we need to generate the light-sheet on the sample (see Section 2.2.1). Further following the illumination path, two achromatic lenses L3 and L4 (f3 = f4 = 200 mm) form a 1:1 relay, imaging the scanning axis to the back focal plane (BFP) of the objective.
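The scan-lens relation behind the splitting scheme can be sketched in a few lines: a galvo mirror deflects the optical beam by twice its mechanical rotation, and a lens with the galvo at its focus converts that angle into a translation x ≈ f1 · tan(2θ) in its focal plane, where the prism mirror sits. The offsets used below are illustrative only; the actual switching positions are set and saved in the control software.

```python
import math

F1 = 75e-3  # scan lens focal length (m), Edmund Optics #47-639

def beam_offset(mech_angle_deg: float) -> float:
    """Lateral beam position on the prism mirror for a given galvo
    mechanical angle: optical deflection is twice the mechanical
    rotation, and the scan lens maps angle to translation."""
    optical = 2 * math.radians(mech_angle_deg)
    return F1 * math.tan(optical)

# Illustrative offsets (not measured values): a few degrees of mechanical
# rotation already shift the beam by several millimetres on the prism,
# enough to select its left or right face.
for a in (-3.0, 0.0, 3.0):
    print(f"{a:+.1f} deg -> {beam_offset(a)*1e3:+.2f} mm")
```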

2.2.3 Detection

As the emitted light exits the objective and the mirror block, it is spectrally separated from the illumination laser light by a quad band dichroic mirror (DM, Semrock, Di03-R405/488/561/635-t3-25x36) matching the wavelengths of the laser combiner. The light is then focused by a 400 mm achromatic lens (L5, Edmund Optics, #49-281) onto the camera sensor (Andor Zyla 4.2 sCMOS). Just before the camera, a motorized filter wheel (FW, LEP 96A361) is placed to block any unwanted wavelengths in the emission light. Although this is not in the infinity space, due to the very small angles after the 400 mm tube lens, the maximum axial focal shift is only ∼50 nm, which is negligible compared to the axial resolution of ∼1.1 µm.

Similarly to the common scanner in the illumination path, the two detection arms share the same camera. Although two cameras could also be used, due to the operating principle of the microscope, the two objectives are not used for imaging at the same time. This means a single camera is capable of acquiring all the images. However, the two distinct detection arms need to be merged to be able to use a single detector.

Our solution to this problem is a custom-designed view-switching unit comprised of two broadband dielectric elliptical mirrors (Thorlabs, BBE1-E03) facing opposite directions, mounted on a high precision linear rail (OptoSigma, IPWS-F3090). Depending on the rail position, either the left (Figure 2.6a) or the right (Figure 2.6b) detection path will be reflected upwards, to the camera.

Moving the switcher unit is performed by a small, 10 mm diameter pneumatic cylinder (Airtac, HM-10-040) that is actuated by an electronically switchable 5/2-way solenoid valve (Airtac, M-20-510-HN). This solution offers very fast switching between views, up to 5 Hz, depending on the pressure, and it is extremely simple to control, as only a digital signal is necessary to switch the valve.

2.3 Optical alignment

Precise alignment of the illumination and detection paths is crucial for high quality imaging, and has a pronounced importance for high magnification, high resolution optical systems. Due to the symmetrical setup of the microscope, we will only describe the alignment of one side, as the same procedure is also applicable to the other side.


Figure 2.6: Detection branch switching unit. To be able to image both views on the same camera, a moveable mirror unit is introduced. Depending on the imaging direction, the mirror block is either moved backward (a) or forward (b) to reflect the light up to the camera. Since the movement is parallel to the mirrors’ surfaces, the image position on the sensor is not dependent on the exact position of the mirrors.

References to optical components will be as defined in Figure 2.3.

2.3.1 Alignment of the illumination branches

The two illumination branches start with a common light source, a single-mode fiber coupled to a laser combiner, and they also share a galvanometric mirror that performs the beam scanning to generate the virtual light-sheet. Likewise shared is a scan lens focusing on the galvanometric mirror (GM), and the illumination splitter unit (PM, see Section 2.2.2).

Alignment of the illumination arms is done in three steps. First, the laser beam is aligned on the rail that holds the scanner, lens L1, and the splitter unit PM. This is performed by two kinematic mirrors placed between the fiber output and the galvanometric mirror (not shown on the figure). Using these two mirrors it is possible to freely align the beam along all four degrees of freedom: translation in two orthogonal directions and rotation around two orthogonal axes. Beam alignment on the rail is tested by using two irises at the two ends of the rail; if the beam passes through both of them, we consider it centered and straight on the optical axis.

After the beam is aligned on the first rail, lens L1 and the splitter unit PM are placed in the measured positions to image the galvanometric mirror on mirror M1 using lenses L1 and L2. Correct positioning of the splitter unit along the rail is crucial, since this will affect the lateral position and tilt of the beam exiting the unit. To some extent this can also be compensated by adjusting the two mirrors before the galvanometric mirror, but this should be avoided if possible, as it will also displace the beam from the center of the galvanometric mirror.


After the initial alignment of the illumination arms, when the laser is already coupled into the objective, the fine adjustments are performed based on the image of the beam through the other objective. The beam is visualized by filling the chamber with a 0.1% methylene blue solution. As this solution is fluorescent and can be excited in a very large range, it is well suited to visualize the beam during adjustment.

Adjusting beam position Beam position can be adjusted by either translating the beam in a conjugated image plane (I’), or by rotating the beam in a conjugated back focal plane (BFP’). The setup was designed in a way that BFP’ coincides with mirror M1. This mirror is mounted in a gimbal mirror mount, which allows rotating the mirror exactly around its center; this avoids unwanted translational movements and results in a pure rotation of the beam. Lens L3 is positioned exactly one focal length away from the mirror, thus acting as a scan lens, and transforming the rotational movements to translation. This translation is further imaged and demagnified by the tube lens L4 and the objective O onto the sample.

Adjusting beam tilt Beam tilt can be adjusted by either rotating the beam in an intermediate image plane (I’), or translating it at the back focal plane (BFP). As mirror M2 is relatively far from the back focal plane, adjusting it will mostly result in a translation that rotates the beam in the image plane. This movement, however, will also introduce translations, and has to be compensated by adjusting mirror M1. The light-sheet needs to be tilted by 30° to coincide with the focal plane of the other objective, but this level of adjustment is not possible with M2. In order to allow for a pure rotation of the light-sheet, we mounted the dichroic mirrors on linear stages (OptoSigma, TSDH-251C). By translating the dichroic mirror, the illumination laser beam gets translated at the back focal plane, which results in a pure rotational movement at the sample. Coarse alignment of the light-sheet is performed by adjusting the dichroic position while inspecting the light-sheet through a glass window in the chamber. Precise alignment is done afterwards based on the image of the beam visualized in a fluorescent medium.

Adjusting the scanning-plane angle After the beam is properly aligned, i.e., it is in focus and in the center of the field of view, it is still necessary to check if the scanning direction is parallel to the imaging plane. It is possible that the beam is in focus in the center position, but when moved vertically it drifts out of focus due to a tilted scanning angle. This tilt can be compensated by mirror M1, which is placed at the conjugate back focal plane BFP’. Between lenses L3 and L4 a magnified version of the light-sheet will be visible, and the tilt can be checked by placing an alignment target in the optical path while scanning the beam. By tilting mirror M1 up or down, the scanning pattern not only translates, but also rotates if the mirror surface is not exactly vertical. Since M1 and GM are in conjugated planes, the tilt and offset adjustments can be performed independently. The tilt is first fixed by M1 while inspecting the target, and the beam is re-centered by changing the offset on the galvanometric mirror. Moving the galvanometric mirror will not introduce tilt, since in this case the rotation axis is perpendicular to the reflection plane.

2.3.2 Alignment of the detection branches

Since the detection path is equivalent to a wide-field detection scheme, its alignment is much simpler than that of the illumination branches. The only difference is the detection branch merging unit (see Section 2.2.3), which features two moving mirrors. This, however, does not affect the alignment procedure, since the movement direction is parallel to both mirrors’ surfaces, meaning that the exact position of the mirrors will not affect the image quality, as long as the mirrors are not clipping the image itself. A stability test was performed to confirm the consistent switching performance of the mirror unit before the final alignment took place (see Section 2.5.2).

Positioning the tube lens The position of the tube lens determines the focal plane that is being imaged on the camera sensor. Ideally, the tube lens’s distance from the camera sensor is exactly the tube lens’s focal length, which ensures the best imaging performance. If this distance is not correct, the focal plane will be slightly shifted in the axial direction. Small shifts will not necessarily have a detrimental effect on the image quality, because the light-sheet can also be shifted accordingly. Because of the shifted focal and image planes, however, the magnification of the system will be affected, and will change depending on the amount of defocus. For this reason we aim to position the tube lens as close to the theoretical position as possible.

Our tube lens is a compound, achromatic lens with a center thickness of 12.5 mm and an edge thickness of 11.3 mm. Its effective focal length is 400 mm, which produces a 50× magnified image. The back focal length is 394.33 mm, which we measured from the camera chip, and the lens was positioned at this theoretically optimal position.

Adjusting the correction collar The Nikon 25× objectives used for this setup have a built-in correction ring that can be used to correct spherical aberrations resulting from refractive index differences when imaging samples behind a coverslip. This can also be effectively used to correct for any spherical aberrations occurring from imaging through the FEP foil. Although these aberrations are expected to be extremely low, due to the relatively thin, 50 µm foil thickness and the close matching of refractive indices (nFEP = 1.344, nH2O = 1.333), for optimal, aberration-free image quality they cannot be neglected.

The correction collars are adjusted by inspecting a gel-suspended fluorescent bead specimen with the microscope, where the beads can act as a reporter of the point spread function of the microscope. The alignment can be performed “live” by inspecting the bead image quality for aberrations. By gradually changing the correction collar, the ring artifacts are minimized on out-of-focus beads, and the peak intensity is maximized for in-focus beads. By moving the correction ring, the focal plane is also slightly shifted, which has to be compensated by shifting the light-sheet correspondingly to coincide with the correct imaging plane.

Adjusting the field of view To allow for proper sampling of the image, we use 50× magnification, which, combined with the 6.5 µm pixel pitch of our sCMOS camera, results in a 0.13 µm pixel size. The full field of view recorded by the camera is 2048 × 0.13 µm = 266.24 µm. To ensure the best image quality, we align the center of the objective field of view on the camera sensor, since this region has the best optical properties in terms of numerical aperture, aberration correction and flatness of field.
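These sampling figures follow directly from the optics and the camera geometry, and can be recomputed as below. The 8 mm objective focal length is implied by the 25× objective and the 400 mm tube lens (Section 2.2.3); the λ/(2NA) lateral resolution estimate is an illustrative approximation added here, not a value from the text.

```python
f_tube = 400e-3          # tube lens focal length (m)
f_obj = 200e-3 / 25      # nominal 25x objective focal length (m)
mag = f_tube / f_obj     # 400 mm / 8 mm = 50x total magnification

pixel_pitch = 6.5e-6     # sCMOS pixel pitch (m)
sensor_px = 2048         # sensor width in pixels

pixel_size = pixel_pitch / mag   # sample-space pixel size
fov = sensor_px * pixel_size     # full field of view

# Rough lateral resolution estimate (illustrative): lambda / (2*NA)
lam_det, na = 510e-9, 1.1
lateral_res = lam_det / (2 * na)

print(f"magnification: {mag:.0f}x")
print(f"pixel size: {pixel_size*1e6:.2f} um, FOV: {fov*1e6:.2f} um")
print(f"~{lateral_res / pixel_size:.1f} pixels per resolution element")
```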

Field of view alignment can be performed using mirror M5, just before the detection merging unit. To identify the center region of the field of view, diffuse white light is used to illuminate the entire sample chamber, and it is imaged on the camera. Then, mirror M5 is adjusted until the top edge of the field of view becomes visible, i.e., where the illumination from the chamber is clipped; this edge has a circular shape. Afterwards, by adjusting the mirror in the orthogonal direction, the left–right position of the field of view can be set by centering the visible arc on the camera sensor.

After the horizontal direction is centered, vertical centering is performed. This, however, cannot be done the same way as the horizontal direction, since for that we would have to misalign the already aligned horizontal position. To determine the center, we move the field of view from the topmost position to the bottom. During this process, the number of turns of the adjustment screw is counted (this can be done accurately by using a hex key). After reaching the far end of the field of view, the mirror movement is reversed, and the screw is turned halfway to reach the middle.

2.4 Control unit

The microscope’s control and automation is performed by an in-house designed modular microscope control system developed in LabVIEW [96]. The core of the system is a National Instruments cRIO-9068 embedded system that features an ARM Cortex A9 processor and a Xilinx Zynq 7020 FPGA. Having both chips in the same device is a great advantage, since the main processor can run most of the microscope control software, while the FPGA can generate the necessary output signals in real time and with high precision.


The embedded system is complemented by a high-performance workstation that is used to display the user interface of the microscope, and to record the images of the high-speed sCMOS camera.

2.4.1 Hardware

Various components need to be synchronized with high precision to operate the microscope: a laser combiner to illuminate the sample; a galvanometric scanner to generate the light-sheet; stages to move the samples; a filter wheel to select the imaging wavelengths; and a camera to detect the fluorescence signal. For high speed image acquisition, all of these devices have to be precisely synchronized in the millisecond range, and some even in the microsecond range. Although they require different signals to control them, we can split them into three main categories:

digital input          analog input              serial communication
camera exposure        galvanometric position    filter wheel
laser on/off (×3)      laser intensity (×3)      stages (×2)

All devices are connected to the NI cRIO 9068 embedded system, either to the built-in RS232 serial port, or to the digital and analog outputs implemented by C-series expansion modules (NI 9401, NI 9263, NI 9264). The workstation with the user interface communicates with the embedded system through the network. The only device with a connection to both systems is the camera: the embedded system triggers the image acquisition, and the images are piped to the workstation through a dual CameraLink interface, capable of a sustained 800 MB/s data transfer rate (Figure 2.7).

2.4.2 Software

Being able to precisely control all of the instruments relies not only on the selected hardware, but just as much on the software. Our custom software is developed in LabVIEW, using an object-oriented approach with the Actor Framework. The embedded system is responsible for the low-level hardware control, for keeping track of the state of all devices, for saving the user configurations, and for automating and scheduling the experiments. It also offers a JavaScript Object Notation (JSON) based application programming interface (API) through WebSocket communication. This is mainly used to communicate with the user interface; however, it also offers the possibility of automated control by external software.
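The exact message schema of the JSON API is not given in the text; the snippet below merely illustrates what an external client command over the WebSocket connection might look like. All field names and values here are hypothetical, not taken from the actual LabVIEW implementation.

```python
import json

# Hypothetical command message for the embedded system's JSON API;
# the real schema of the control software is not documented here.
command = {
    "device": "galvo",
    "action": "set_offset",
    "params": {"arm": "left", "offset_deg": -3.0},
}

encoded = json.dumps(command)   # what would be sent over the WebSocket
decoded = json.loads(encoded)   # what the receiving side would parse
print(decoded["device"], decoded["params"]["arm"])
```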

FPGA software

The on-board FPGA is responsible for generating the digital and analog output signals based on the microscope settings (Figure 2.8). To avoid having to calculate all the traces


Figure 2.7: Microscope control hardware and software architecture. The embedded system responsible for the hardware control and the workstation communicate through the network using the WebSocket protocol. The electronic devices of the microscope are either controlled through digital/analog signals, or through serial communication. The camera is also connected to the workstation with a CameraLink connection, which transmits the recorded images.

Figure 2.8: Digital and analog control signals. Example traces for recording a single plane with 50 ms exposure time and 12 ms delay. During the exposure of the camera the scanner has 3 sections: acceleration (acc.), linear, and deceleration (dec.). The laser line is synchronized with the scanner linear section to ensure even illumination.


for a whole stack, the main software only calculates a few key parameters of the traces that are necessary to completely describe them. The FPGA then calculates the signals in real time, and outputs them with microsecond precision.

To describe the traces, we define them as a concatenation of sections. Each section has 3 or 5 parameters, depending on whether they are digital or analog. Both types have three common properties: value, length, and type. The analog sections additionally contain a dValue and a ddValue element describing the velocity and the acceleration of the signal. This allows us to generate piecewise functions made of second-order polynomials.

The type element contains information on which value should be updated for the current section. Setting the lowest bit high will update the value, setting the second bit high will update the dValue, and setting the third bit high will update the ddValue. This feature allows defining smooth transitions between the sections, and also more complex signals, as long as they are periodic in the second derivative.
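To make the section scheme concrete, the sketch below mirrors it in Python: each section carries the value/length/type fields described above (plus dValue and ddValue for analog sections), the type bitmask selects which state variables are loaded, and the generator integrates a second-order polynomial one tick at a time. The field names follow the text, but the implementation itself is an illustration; the real generator runs on the FPGA.

```python
from dataclasses import dataclass

# Update-mask bits of the type element, as described in the text
UPDATE_VALUE, UPDATE_DVALUE, UPDATE_DDVALUE = 0b001, 0b010, 0b100

@dataclass
class Section:
    length: int            # duration of the section in ticks
    type: int              # bitmask: which state variables to load
    value: float = 0.0
    dValue: float = 0.0    # per-tick velocity (analog sections only)
    ddValue: float = 0.0   # per-tick acceleration (analog sections only)

def generate(sections):
    """Evaluate a concatenation of sections into one sample per tick,
    i.e. a piecewise signal made of second-order polynomials."""
    value = dvalue = ddvalue = 0.0
    out = []
    for s in sections:
        # Load only the state variables selected by the type bitmask,
        # so untouched variables carry over smoothly between sections.
        if s.type & UPDATE_VALUE:
            value = s.value
        if s.type & UPDATE_DVALUE:
            dvalue = s.dValue
        if s.type & UPDATE_DDVALUE:
            ddvalue = s.ddValue
        for _ in range(s.length):
            out.append(value)
            dvalue += ddvalue   # integrate acceleration
            value += dvalue     # integrate velocity
    return out

# Illustrative scanner-like ramp (arbitrary units): accelerate,
# scan linearly, then decelerate, as in Figure 2.8.
trace = generate([
    Section(length=5, type=0b111, value=0.0, dValue=0.0, ddValue=0.01),  # acc.
    Section(length=10, type=0b100, ddValue=0.0),                         # linear
    Section(length=5, type=0b100, ddValue=-0.01),                        # dec.
])
print(len(trace), trace[:3])
```

Because only the ddValue is reloaded in the second and third sections, velocity and position carry over continuously, which is exactly the smooth-transition behavior the bitmask enables.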

2.5 Validating and characterizing the microscope

Following the design phase, all custom parts were manufactured by the EMBL mechanical workshop, and the microscope was assembled on a MellesGriot optical table (Figure 2.9 and Figure D1). The microscope was equipped with an Andor Zyla 4.2 sCMOS camera that offers a large field of view of 2048 × 2048 pixels with a pixel pitch of 6.5 µm. It offers a high dynamic range of 1:30,000 and a high frame rate of 100 frames per second (fps), while readout noise is minimal (0.9 e−).

To evaluate the performance of the microscope, we conducted various measurements concerning the stability and the resolution of the system. The methods and results of these measurements are presented in this section.

2.5.1 Methods

Preparation of fluorescent bead samples

For registration and resolution measurements we used TetraSpeck 0.5 µm diameter fluorescently labeled beads (ThermoFisher, T7281). The stock bead solution was thoroughly vortexed and sonicated for 5 min before diluting it 1:100 in distilled water. The diluted bead solution was stored at 4 °C until use. GelRite (Sigma-Aldrich, G1910) gel was prepared in distilled water at 0.8% concentration with 0.1% MgSO4 · 7H2O and kept at 70 °C until use. 50 µl of the diluted bead solution was added with a heated pipette tip to 450 µl of gel solution at 70 °C to prevent polymerization. The gel was thoroughly vortexed, and loaded into glass micropipettes (Brand, 100 µl). The gel was allowed to cool to room temperature and stored in a petri dish under dH2O at 4 °C until use. For imaging, a small piece of gel was extruded from the capillary, cut off, and placed in the sample holder.


DOI:10.15774/PPKE.ITK.2018.005


Figure 2.9: Completed DualMouse-SPIM. (a) Front view of the SolidWorks design. (b) Front view of the microscope. (c) Top view of the microscope. The excitation and emission light paths are depicted with blue and yellow lines respectively. Component numbering corresponds to Figure 2.3. The detection branch merging unit and mirrors M6 and M6' are underneath the camera and filter wheel. BE – beam expander, CAM – camera, DM – dichroic mirror, F – fiber, FW – filter wheel, L – lens, M – mirror, O – objective, PM – prism mirror, S – stage


After positioning the gel at the bottom of the sample holder, the holder was filled with 200 µl dH2O.

Preparing the sample holder

The sample holder was lined with 12.5 µm (PSF measurements) or 50 µm (mouse zygote and Drosophila embryo imaging) thin FEP foil (Lohmann, RD-FEP050A-610). The FEP foils were cut to size and washed with 70% ethanol, followed by a second wash with dH2O. After washing, both surfaces of the foils were chemically activated by a 20 s plasma treatment (PlasmaPrep2, Gala Instrumente). The foils were then stored in a petri dish until further use, separated by lens cleaning tissues (Whatman, 2105-841). Before imaging, the sample holder was cleaned with dish soap, rinsed with tap water, rinsed with 70% ethanol, and rinsed with dH2O. The prepared foils were glued to the inside of the cleaned sample holder with medical grade silicone glue (twinsil speed, picodent). During the 5 min curing process, the foil was pressed against the sample holder by a custom-made press fitting the shape of the sample holder. The final dilution of the stock bead solution in the gel is 1:1000.

Drosophila embryo imaging

Drosophila melanogaster embryos (fly stock AH1) expressing fluorescent nuclear (H2A-mCherry) and centriole (ASL-YFP) markers were collected on apple juice agar plates and dechorionated in 50% bleach solution for 1 min. After rinsing off the bleach with deionized water, the embryos were placed in the sample holder under PBS solution. A small piece of gel containing fluorescent beads was also placed next to the samples to aid in multi-view registration.

Mouse zygote imaging

Fixed mouse zygotes labeled with Alexa Fluor 488 (microtubules) and Alexa Fluor 647 (kinetochores) coupled secondary antibodies were kindly provided by Judith Reichmann. Zygotes were transferred to the sample holder and imaged in PBS. To allow for multi-view reconstruction, a small piece of gel containing fluorescent beads was placed next to the zygote in the sample holder. After imaging the zygote from both views, the beads were also recorded using the same stack definitions. After data acquisition, the multi-view datasets were registered and deconvolved in Fiji [30] using the Multiview Reconstruction Plugin [95, 97]. The deconvolution was based on a simulated PSF generated in Fiji with the PSF Generator plugin [29] using the Born–Wolf PSF model (NAex = 0.1, NAem = 1.1, n = 1.33, λex = 488 nm, λem = 510 nm).


Figure 2.10: Stability measurements of the view switcher unit. A closed aperture was imaged on the camera from both views, every 10 s, for 4 h; panels show the right and left views at three zoom levels. The dots represent the center of each frame, color coded with time. The red ellipse represents the 95% confidence interval of the measured positions based on principal component analysis. Standard deviations for the right view: σ1 = 4.37 px, σ2 = 2.26 px; for the left view: σ1 = 0.07 px, σ2 = 0.04 px.

2.5.2 Results

Stability of the view switcher unit

As the view switcher unit is a custom-designed solution for this microscope, it is necessary to evaluate the effect of the switching on the field of view. Given that this is a moving unit, many mechanical imprecisions can introduce a drift in the final image.

To assess the reproducibility of the movement of the detection branch switching unit, a long-term stability test was conducted. To exclude all other factors (such as sample drift), we imaged the opening of two closed irises that were mounted on the detection optical rail. Image formation was done by two achromatic lenses (f = 75 mm) positioned directly after mirrors M5 and M5'.

The apertures were imaged from both views every 10 seconds, for 4 h, for a total of 1440 images per view and 2880 switches. The center of the aperture was segmented on all images using Matlab, and tracked to assess any drift occurring during the 4 h time lapse (Figure 2.10). To visualize the uncertainty of the aperture's position, we performed principal component analysis (PCA) on the positions and visualized the results by plotting the 95% confidence ellipse on the images. Although the right view had a significant spread, with standard deviations of 4.37 px and 2.26 px along the long and short axes respectively, the left view was extremely stable, with standard deviations of only 0.07 px and 0.04 px along the long and short axes respectively. This result implies that there is no conceptual limitation in reaching sub-pixel reproducibility with the view switching unit, although some optimization is still needed for the right view to reach the same stability as the left view.
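The drift analysis described above (PCA on the tracked aperture centers) can be reproduced with a short script. A sketch in Python with NumPy (the thesis used Matlab; the synthetic positions here are illustrative, not the measured data):

```python
import numpy as np

def drift_pca(positions):
    """PCA of 2D aperture-center positions (N x 2 array).

    Returns per-axis standard deviations (long axis first) and the principal
    axes; the 95% confidence ellipse has semi-axes sqrt(5.991) * sigma
    (chi-square quantile for 2 degrees of freedom).
    """
    centered = positions - positions.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # long axis first
    return np.sqrt(eigvals[order]), eigvecs[:, order]

# Synthetic example: anisotropic jitter, elongated along 45 degrees.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1440, 2)) * [4.0, 0.5]
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
sigmas, axes = drift_pca(pts @ R.T)
print(sigmas)  # approximately [4.0, 0.5]
```

With the actual tracked aperture centers in place of the synthetic points, the two returned standard deviations correspond to the σ1 and σ2 values reported in Figure 2.10.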

Characterizing the illumination profile

Figure 2.11: Illumination beam profile. Average of 500 stationary beam images for both the left (top left) and right (bottom left) objectives. The beam intensity profile along the waist of each beam (normalized intensity vs. position in µm) is plotted in the center column. The right column shows the beam width profile of each beam (1/e², width in µm vs. position in µm). Scale bar, 50 µm.

Since a Gaussian beam is used for illumination, its intensity profile depends on the axial position (Equation 1.20). Although the total intensity at any cross section of the beam is constant, the peak intensity varies due to the diverging beam. To assess any non-uniformities in the illumination pattern, we measured the illumination beam intensity for each view.

To visualize the beam, the chamber was filled with a fluorescent solution (0.1% methylene blue in distilled water). In order to increase the signal-to-noise ratio of these images, a long exposure time of 200 ms was used, and 500 images of each beam were averaged. Background images were acquired by repeating the image acquisition with the laser turned off. The average of the background images was subtracted from the averaged beam images, resulting in the beam intensity profile (Figure 2.11).

To determine the beam waist position, we measured the beam thickness (FWHM) for each column of the averaged images, and located the position of the minimal width.


The beam width at the waist position was 3.70 µm for the left view and 3.58 µm for the right view (Figure 2.11). We also measured the usable length of the beams (Section 1.3.1). The distance between the points where the beam has expanded by a factor of √2 relative to the waist is 95.42 µm for the left view and 90.35 µm for the right view. This corresponds well to the original requirement of a minimal field of view of 100 µm. Depending on the sample and the required level of optical sectioning, the practical field of view can be larger than this if the light-sheet is adjusted, as the camera sensor allows for a field of view of 266 µm.
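The waist search and the √2-expansion usable-length measurement described above can be sketched as follows in Python with NumPy. The beam image, pixel size, and half-maximum convention below are illustrative assumptions; the thesis performed the analysis on averaged, background-subtracted images:

```python
import numpy as np

def column_fwhm(img, px_um):
    """FWHM (in um) of the vertical intensity profile of every image column."""
    widths = np.empty(img.shape[1])
    for c in range(img.shape[1]):
        prof = img[:, c]
        above = np.where(prof >= prof.max() / 2.0)[0]
        widths[c] = (above[-1] - above[0]) * px_um if above.size else 0.0
    return widths

def waist_and_usable_length(img, px_um):
    """Waist width and the length over which width <= sqrt(2) * waist width."""
    w = column_fwhm(img, px_um)
    waist_col = int(np.argmin(w))
    usable = np.where(w <= np.sqrt(2) * w[waist_col])[0]
    return w[waist_col], (usable[-1] - usable[0]) * px_um

# Synthetic Gaussian beam: width grows hyperbolically away from the waist.
px = 0.5                                  # um per pixel (assumed)
y, x = np.mgrid[-50:50, -200:200]
w0 = 4.0                                  # waist radius in pixels (assumed)
wz = w0 * np.sqrt(1 + (x / 60.0) ** 2)    # 60 px Rayleigh range (assumed)
beam = np.exp(-2 * (y / wz) ** 2)
waist_w, usable_len = waist_and_usable_length(beam, px)
```

The discrete half-maximum crossing makes the width estimate slightly coarse; on the real averaged images a Gaussian fit per column gives a smoother width profile.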

Resolution and point spread function measurement

In order to establish an ideal reference PSF, we simulated the theoretical PSF of the microscope with the Gibson-Lanni model [27], using the MicroscPSF Matlab implementation [28] (see also Section 1.1.4). To simulate the reference PSF, we accounted for the slight mismatch in refractive index between the FEP foil (ng = 1.344) and water (ni = 1.33). The working distance of the objective is 2 mm, while the thickness of the foil is 12.5 µm.

It is apparent from the simulations that even though the FEP refractive index is almost identical to the refractive index of water, a slight spherical aberration is still present even in the ideal case (Figure 2.12, left). The resolution, measured as the FWHM of the intensity profile through the lateral and axial cross sections, is 271 nm and 866 nm respectively. The FWHM was measured in Fiji by fitting a Gaussian curve and multiplying the standard deviation of the resulting fit by 2√(2 ln 2).
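The conversion used above follows from the Gaussian profile: FWHM = 2√(2 ln 2) σ ≈ 2.355 σ. A sketch of the same measurement in Python (the profile here is synthetic, and a moment-based σ estimate stands in for the least-squares fit the thesis performed in Fiji):

```python
import numpy as np

FWHM_FACTOR = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.3548

def fwhm_from_profile(x, y):
    """Estimate FWHM of a roughly Gaussian intensity profile.

    Uses moment-based estimates of the center and sigma; for clean profiles
    this agrees closely with a least-squares Gaussian fit.
    """
    w = y - y.min()
    mu = np.sum(x * w) / np.sum(w)
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2) / np.sum(w))
    return FWHM_FACTOR * sigma

# Synthetic lateral profile with sigma chosen so that FWHM = 271 nm.
x = np.linspace(-1000, 1000, 2001)          # position in nm
sigma_true = 271.0 / FWHM_FACTOR
profile = np.exp(-0.5 * (x / sigma_true) ** 2)
print(fwhm_from_profile(x, profile))        # ~271 nm
```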

To experimentally determine the resolution of the microscope and characterize its optical performance, we measured the point spread function using fluorescently labeled beads suspended in 0.8% GelRite (Section 2.5.1). The gel was loaded into glass capillaries and allowed to cool. After the gel solidified, a ∼1 mm piece was cut off and placed in the microscope sample holder. The beads were imaged from both views using the 561 nm laser line with the 561 nm long-pass filter (Semrock, BLP02-561R-25).

From both views, 12 beads were averaged using Fiji [30] and the 3D PSF estimator of the MOSAIC suite [98]. In order to acquire a more accurate PSF, the averaged bead images were deconvolved with the ideal image of the bead (a uniform sphere with a diameter of 500 nm) using the DeconvolutionLab2 Fiji plugin [99]. The results of the deconvolution are shown in Figure 2.12. Similarly to the simulation, we measured the axial and lateral resolutions on the experimental PSFs. For the left view, the measured resolutions were 396 nm in the lateral direction and 1350 nm in the axial direction. For the right view, the lateral resolution was 426 nm and the axial resolution was 1297 nm.

To estimate the achievable multi-view resolution of the system, we combined the two measured PSFs. First, the PSFs were rotated by ±60° to correspond to the objective orientations. The PSF images were then normalized by scaling the maximum to 1, and the two rotated and normalized PSFs were multiplied (Figure 2.13). The resolution of the combined PSF is much improved in all directions: along its shortest axis the FWHM is 314 nm, and along the longest axis the FWHM is 496 nm (Figure 2.13). Both are better than the corresponding resolutions for a single view, especially along the long axis, where the FWHM is 2.67 times smaller. This almost perfectly matches the theoretically expected increase in the axial direction (compare with Section 2.1).

Figure 2.12: Simulated and measured PSF of the Dual Mouse-SPIM. Top row: axial sections of the simulated and measured point spread functions (simulated, left objective, right objective). Middle row: lateral intensity profile and Gaussian fit (FWHM: 271 nm, 396 nm, 426 nm). Bottom row: axial intensity profile and Gaussian fit (FWHM: 866 nm, 1350 nm, 1297 nm). Simulations were performed based on the Gibson-Lanni model. Immersion medium and sample refractive index: 1.330, coverslip (FEP foil) refractive index: 1.344, coverslip distance: 1900 µm, coverslip thickness: 50 µm. Excitation wavelength: λex = 561 nm. Emission wavelength: λem = 600 nm. Scale bar: 1 µm.

Figure 2.13: Combined PSF of the two views. (a) The measured PSFs (Figure 2.12) were rotated to their corresponding orientations and multiplied. (b) Gaussian fit along the short axis of the combined PSF; FWHM = 314 nm. (c) Gaussian fit along the long axis of the combined PSF; FWHM = 496 nm. Scale bar, 1 µm.
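The view-combination step described above (rotate each PSF by ±60°, normalize to a peak of 1, multiply) can be sketched in Python with NumPy. The elongated Gaussians below are synthetic stand-ins for the measured PSFs, generated directly at the tilted orientations; the sigma values are illustrative:

```python
import numpy as np

def elongated_psf(shape, angle_deg, sigma_short, sigma_long):
    """2D Gaussian PSF elongated along an axis tilted by angle_deg."""
    h, w = shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    t = np.deg2rad(angle_deg)
    u = x * np.cos(t) + y * np.sin(t)     # along the long (axial) axis
    v = -x * np.sin(t) + y * np.cos(t)    # across it
    return np.exp(-0.5 * ((u / sigma_long) ** 2 + (v / sigma_short) ** 2))

# Two single-view PSFs whose long (axial) axes cross at 120 degrees,
# as for the two objectives; normalize peaks to 1 and multiply.
psf_left = elongated_psf((65, 65), +60.0, sigma_short=2.0, sigma_long=8.0)
psf_right = elongated_psf((65, 65), -60.0, sigma_short=2.0, sigma_long=8.0)
combined = (psf_left / psf_left.max()) * (psf_right / psf_right.max())

# The combined PSF is far more compact than either single view.
print(combined.sum() / psf_left.sum() < 0.5)
```

In this toy geometry the product is compact in all directions, mirroring the near-isotropic 314 nm / 496 nm combined FWHM measured for the real PSFs.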

Resolution inside a Drosophila melanogaster embryo

Apart from measuring the point spread function and resolution in an ideal bead sample, we also wanted to know the resolution inside a biological sample, as tissues usually introduce some aberrations to the light path. Although these aberrations highly depend on the sample, as a worst case scenario we measured the point spread function inside a Drosophila melanogaster embryo. As this specimen is highly opaque and scattering, it gives a good estimate for the upper bound of the resolution.

Instead of the fluorescent beads, we imaged a Drosophila embryo expressing H2A-mCherry marking the nuclei and ASL-YFP marking the centrioles (Figure 2.14a, and Section 2.5.1). As the centriolar protein ASL is diffraction limited in size [100], its image gives a good estimate for the point spread function inside the specimen.

Similarly to the bead recordings, 15 centriole images were averaged with the 3D PSF tool of the MOSAIC plugin in Fiji (Figure 2.14c). On the averaged image we measured the resolution by fitting a Gaussian function on the lateral and axial intensity profiles (Figure 2.14d,e) and calculating the FWHM of the fitted Gaussian. The size of the averaged centrioles was 654 nm in the lateral and 2739 nm in the axial direction.

Multi-view imaging of a mouse zygote

To demonstrate the multi-view capabilities and improved image quality of the microscope, we performed dual-view imaging of a fixed mouse zygote and combined the images using the Multiview Reconstruction [95] plugin of Fiji (Figure 2.15). The microtubules were stained with Alexa Fluor 488 coupled secondary antibodies, and the zygote was imaged from both views with 0.5 µm inter-plane distance.

Anisotropy in the resolution becomes apparent in the single-view recordings when the stacks are resliced from a different direction than the imaging plane. Even though the native view of each objective (Figure 2.15a,e) shows good contrast and easily recognizable features, when rotated and sliced corresponding to the view of the other objective, the contrast is almost completely gone and the resolution is decreased (Figure 2.15b,d). The fused stack, after the multi-view deconvolution, contains the high resolution information from both views, and thus both directions show good quality and high contrast (Figure 2.15c,f).

Figure 2.14: Maximum intensity projection of a Drosophila melanogaster embryo recording. Maximum intensity projection of a 30 µm thick substack. Orange: nuclei (H2A-mCherry), blue: centrioles (ASL-YFP). 50x magnification. (a) Overview of the embryo. (b) Zoomed in view of the region marked in (a). (c) Average image of 15 centrioles distributed evenly in the embryo. (d) Gaussian fit on the lateral intensity profile of the averaged centrioles; FWHM = 654 nm. (e) Gaussian fit on the axial intensity profile of the averaged centrioles; FWHM = 2739 nm.

Figure 2.15: Multi-view recording of a mouse zygote. Dual-view imaging was performed on a fixed mouse zygote. Microtubules were stained with Alexa Fluor 488 coupled secondary antibodies. Top row: view of the left objective. Bottom row: view of the right objective. (a) Raw image from the left objective. (b) Rotated view of the right objective. (c) Multi-view deconvolution of stacks (a) and (b). (d) Rotated slice from the left objective. (e) Raw image from the right objective. (f) Multi-view deconvolution of stacks (d) and (e). Scale bar: 50 µm. The darker banding artifacts visible on the side view are due to bleaching.

2.6 Discussion

In this chapter we have presented a novel light-sheet microscope designed for subcellular imaging of mouse embryos. The microscope has two identical objectives positioned at a 120° angle, and operates with two 30° tilted light-sheets. Due to the symmetrical design, both objectives can be used for detection as well as illumination, providing multi-view capabilities without sample rotation. The 120° objective configuration allows the use of high numerical aperture objectives (1.1 instead of 0.8), which drastically increases the light collection efficiency compared to the conventional 90° configuration. Due to this improvement, the microscope requires less tradeoff between imaging speed, resolution, contrast, and phototoxicity, as the corners of the pyramid (Figure 1.1) are now moved closer together.

We have characterized the optical properties of the microscope by measuring the light-sheet dimensions (average length: 93 µm, average thickness: 3.6 µm) and by measuring the point spread function. When combining the two views, the resolution is much closer to an ideal isotropic shape, being 496 nm along the long axis and 314 nm along the short axis, as opposed to the 1323 nm and 411 nm measured for a single view. This 2.67-fold increase in axial resolution can potentially make the difference in being able to track all chromosomes in a developing mouse embryo.

The obtained resolution is comparable to that of state-of-the-art lattice light-sheet microscopy (LLSM) [7] (230 nm lateral and 370 nm axial); however, the optical alignment and the operation of our microscope are considerably simpler, as there is no need for a spatial light modulator. As LLSM relies on the self-interference of the illumination beam to obtain the thin sectioning pattern, the practical imaging volume is only 50 × 50 × 50 µm³. In contrast, the Dual Mouse-SPIM is capable of imaging a volume of 260 × 100 × 100 µm³ due to the more robust Gaussian-beam illumination and the two-sided detection. Imaging very fast dynamics from two directions, however, might be challenging, as the detection direction is switched with a mechanical part, which limits the maximum volumetric imaging speed.

To be able to surpass the proof-of-concept state, the microscope still needs some improvements. The detection view switching unit needs to be improved to minimize field of view drift for both views. One possible issue with the current solution is the insufficient damping of the moving mirror unit, which can cause unwanted vibrations when the switching force is high. This is also a possible explanation of why only one direction was unstable, as the movement speed was not equal in both directions. Since the pneumatic cylinder has a single actuator rod, the force on the piston is different depending on the movement direction: on the side where the rod is attached to the piston the surface area is smaller, thus the actuating force is lower at the same air pressure. This could be remedied by applying a more efficient damping system and by making sure that the forces are the same in both directions. For small specimens, if a reduced field of view is sufficient, the entire moving assembly can be replaced by a fixed prism mirror, which reflects the two objective images onto either half of the camera sensor. This has the added benefit of faster imaging, since the switching time between the two imaging branches is only limited by the galvanometric scanner. Another solution could be the addition of a second camera, which would make the view switching completely unnecessary without sacrificing the field of view.

Another necessity for long-term experiments is the implementation of an environmental chamber to provide the appropriate conditions for live mammalian embryo imaging. This involves the integration of a heating element and a gas mixer unit into the mechanical design, complemented by an enclosure that seals off the environment of the imaging chamber. The mechanical design is already underway, and the electronic control will be integrated with the modular control software.

The use of high-NA objectives also keeps the door open for future upgrades, such as the addition of structured illumination [58, 59], or coupling in a high-power pulsed laser for photomanipulation capabilities [101]. Another natural extension of the system would be the addition of a third objective, which would allow for truly isotropic imaging and two-sided sample illumination. Using three objectives would introduce some challenges in sample mounting, as the open-top sample holder design could not be used in the same way. A more practical way would be the use of an FEP tube, in which the embryos could develop in the appropriate medium while sealed from the outside by gel plugs on either side. Of course, such a system would need to be thoroughly tested for live imaging compatibility.

Single-view solutions for isotropic imaging only allow a very limited field of view. For diffraction-limited optical sectioning, a Gaussian beam diverges too fast [102], while other, non-diffractive beams rely on self-interference [7], which breaks down after a few tens of micrometers inside the sample. Our dual-view, high-NA solution allows for near isotropic resolution even for large fields of view, such as for entire mouse zygotes or embryos (120 µm × 120 µm × 120 µm).


Chapter 3

Image compression

Light-sheet microscopy is capable of generating an immense amount of data in a short amount of time. For this reason, not only fast data processing but also data compression plays an important role when evaluating, sharing, and storing the images. In this chapter we will briefly introduce the core concepts of image compression and show some examples as a background for Chapter 4. For a more comprehensive review of data compression fundamentals, the interested reader is referred to the works of Sayood [103], and Salomon and Motta [104].

There are two distinct ways of data compression: lossless and lossy. This is an important difference, and the choice between the two greatly depends on the application itself. For some types of data no loss is acceptable, for example computer programs or text. Here, even a small change could drastically influence the meaning of the original data. For other applications, such as audio and video compression, some loss can be tolerated as long as there is no perceived change or distortion. For scientific applications, mostly lossless compression is used. Even for image data, where lossy compression can significantly increase the compression ratio (original size / compressed size), lossy compression is usually not recommended, as it can alter the measurement results in unpredictable ways [105].

3.1 Basics of information theory

As compression is the art of reducing the size of a message while keeping the same amount of information, it is necessary to briefly discuss the theory behind information and how to quantify it. These concepts were first introduced by Shannon in his highly influential papers [106–108]. As a rigorous mathematical explanation is outside the scope of this work, we refer the interested reader to the aforementioned works.

Informally, information can be quantified as the amount of surprise. A statement, if it has a high probability, carries a low amount of information, while if it has a small probability, there is more information. Consider the following example: "It is July". If it is indeed July, this statement doesn't provide too much information. Continuing with "It is snowing outside" carries more information, as it is unexpected under the circumstances (provided the conversation takes place in the Northern Hemisphere).

The self-information of event A can be defined the following way:

I(A) = log_b(1 / P(A)) = −log_b P(A),     (3.1)

where P(A) is the probability of event A occurring. The base of the logarithm can be chosen freely, but usually the preferred base is b = 2. With this choice, the unit of information is 1 bit, and I(A) represents how many bits are required to store the information contained in A.

The average self-information, also called entropy, of a random variable X can be calculated based on the self-information of the outcomes A_i:

H(X) = Σ_i P(A_i) I(A_i) = −Σ_i P(A_i) log_b P(A_i)     (3.2)

If we consider a data stream S, where each symbol records the outcome of independent, identical random experiments X, then the average information in each symbol will be H(X). As a consequence, as stated by Shannon's source coding theorem [106], the shortest representation of S will require at least N · H(X) bits, where N is the number of symbols in the stream. This defines the theoretical lower bound that any naïve compression algorithm, also called entropy coding, can achieve without losing any information.
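Equations 3.1 and 3.2 and the N·H(X) bound can be checked numerically. A short Python sketch (the symbol stream is an arbitrary illustration):

```python
import math
from collections import Counter

def self_information(p, base=2):
    """I(A) = -log_b P(A): bits needed for an event of probability p (Eq. 3.1)."""
    return -math.log(p, base)

def entropy(stream, base=2):
    """H(X) estimated from symbol frequencies in a stream (Eq. 3.2)."""
    counts = Counter(stream)
    n = len(stream)
    return sum((c / n) * self_information(c / n, base) for c in counts.values())

stream = "aaaabbcd"            # P(a)=0.5, P(b)=0.25, P(c)=P(d)=0.125
h = entropy(stream)            # 0.5*1 + 0.25*2 + 2*0.125*3 = 1.75 bits/symbol
print(h, len(stream) * h)      # source coding bound: 14 bits for this stream
```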

The requirement, however, is that the symbols are independent of each other, which is usually not true for most of the data types we (humans) are interested in. In order to achieve ideal compression, the input needs to be modeled, or transformed, to represent it as a sequence of independent variables. This pattern of modeling and coding [109] is the core of all modern compression algorithms. The following sections will briefly introduce these concepts through two examples: Huffman coding and pixel prediction.

3.2 Entropy coding

Entropy coding algorithms aim to reduce data size and reach the entropy limit for any kind of input based on the probability distribution of the possible symbols. All of these algorithms work in a similar way, as their aim is to represent the symbols of the input data by codewords in a way that minimizes the overall length of the data. Variable length codes, for example, replace the more common symbols with short codewords, while for less frequent symbols they use longer codewords. The ideal length of a codeword for symbol A is actually equal to I(A), as this was the definition of self-information.


Table 3.1: Examples of a random binary code (#1) and a prefix-free binary code (#2). Code #2 is uniquely decodable, while for code #1 it is necessary to introduce boundaries between codewords to be able to distinguish them.

Letter  Code #1  Code #2
a1      0        10
a2      11       11
a3      00       00
a4      10       010
a5      111      011

Prefix-free codes

Fixed-length codes have the convenience that there is no need to indicate the boundaries between them, as they are all the same length (e.g., 8 bits for ASCII codes). For variable length codes, as the name implies, this is not possible, which necessitated the development of various strategies to demarcate the codewords.

One way to indicate the end of a codeword is to use a specific delimiting sequence which is not part of any of the codewords. While this is an obvious solution, it is also wasteful, as these sequences will not contain any information regarding the original source. Prefix-free codes were developed to be able to omit these sequences while still allowing unique decodability, making them very popular. This is achieved by making sure that once a codeword is assigned, no other codeword will start with that sequence. In other words, no codeword is a prefix of another codeword, hence the name of the technique. During decoding, the bits are read until a valid codeword is matched. Since it is guaranteed that no other codeword will start with the same sequence, the decoder can record the symbol corresponding to that codeword, and continue reading the compressed stream.

Let’s take the example in Table 3.1. Five letters are coded in binary code by Code#1 and by Code #2. Code #1 is not a prefix code, and because of this when reading theencoded sequence we can not be sure when we reach the end of a codeword. Decodingthe sequence 0000 for example could be interpreted as 4 letters of a1 or 2 letters of a3.For Code #2 it is clear that the sequence decodes to 2 letters of a3.

Huffman coding

Huffman coding is a prefix-free, optimal code that is widely used in data compression.It was developed by David A. Huffman as a course assignment in the first ever courseon information theory at MIT, and was published shortly afterwards [110]. It is able toachieve optimal compression in a sense that no other prefix-free codes will produce ashorter output. Due to its simplicity combined with a good compression ratio it is still

59

DOI:10.15774/PPKE.ITK.2018.005

Page 69: M2r M;H2 QM HB;?i@b?22i KB+`Qb+QTv M/ `2 H@iBK2 BK ;2 T`Q+ ... · GAahP66A:l_1a kXN *QKTH2i2/.m HJQmb2@aSAJ XXXXXXXXXXXXXXXXXXXXXXXXX 9d kXRyai #BHBivK2 bm`2K2MibQ7pB2rbrBi+?2`mMBi

3.2 Entropy coding

Table 3.2: Letters to be Huffman coded and their probabilities.

Letter Probability Codeworda2 0.4 c(a2)a1 0.2 c(a2)a3 0.2 c(a2)a4 0.1 c(a2)a5 0.1 c(a2)

widely used in many compression algorithms, such as JPEG, bzip2, or DEFLATE (zip) [104].

The Huffman coding procedure is based on two observations regarding optimal and prefix-free codes:

1. For a letter with higher frequency the code should produce shorter codewords, and for letters with lower frequency it should produce longer codewords.

2. In an optimum code, the two least frequent codewords should have the same length.

From these statements, it is trivial to see that the first is correct. If the more frequent letters had longer codewords than the less frequent letters, then the average codeword length (weighted by the probabilities) would be larger than in the opposite case. Thus, more frequent letters must not have longer codewords than less frequent letters.

The second statement might not be so intuitive at first glance, so let’s consider the following situation. Let’s assume that the two least frequent codewords do not have the same length, and the least frequent one is longer. Because this is a prefix code, the second longest codeword is not a prefix of the longest codeword. This means that if we truncate the longest codeword to the same length as the second longest, they will still be distinct codes and uniquely decodable. This way we have a new coding scheme which requires less space on average to code the same sequence as the original code, from which we can conclude that the original code was not optimal. Therefore, for an optimal code, statement 2 must be true.

To construct such a code, the following iterative procedure can be used. Let’s consider an alphabet with five letters A = [a1, a2, a3, a4, a5] with P(a1) = P(a3) = 0.2, P(a2) = 0.4 and P(a4) = P(a5) = 0.1 (Table 3.2). This distribution represents a source with an entropy of 2.122 bits/symbol. Let’s order the letters by probability, and consider the two least frequent. Since the codewords assigned to these should have the same length, they can be assigned


Figure 3.1: Building the binary Huffman tree. The letters are ordered by probability; these will be the final leaves of the tree. To create the branches, at every iteration we join the two nodes with the smallest probability, and create a new common node with the sum of their probabilities. This process is continued until all nodes are joined in a root node with probability 1. Now, if we traverse down the tree to each leaf, the codeword will be defined by its position.

as

c(a4) = α1 ∗ 0
c(a5) = α1 ∗ 1

where c(ai) is the assigned codeword for letter ai and ∗ denotes concatenation. Now we define a new alphabet A′ with only four letters a1, a2, a3, a′4, where a′4 is a merged letter for a4 and a5 with the probability P(a′4) = P(a4) + P(a5) = 0.2. We can continue this process of merging the letters until all of them are merged and we have only one letter left. Since this contains all of the original letters, its probability is 1. We can represent the end result in a binary tree (see Figure 3.1), where the leaves are the letters of the alphabet, the nodes are the merged letters, and the codewords are represented by the path from the root node to each leaf (compare with Table 3.3). The average length of this code is

l = 0.4 × 1 + 0.2 × 2 + 0.2 × 3 + 0.1 × 4 + 0.1 × 4 = 2.2 bits/symbol (3.3)

The efficiency of a code can be measured by comparing the average code length to the entropy of the source. For this example, the difference is 0.078 bits/symbol. As the Huffman code operates in base 2, it will produce a code with zero redundancy when the probabilities are negative powers of 2.
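The merging procedure is straightforward to implement with a priority queue. The following sketch (plain Python; the function name is ours) rebuilds a Huffman code for the alphabet of Table 3.2 and reproduces the 2.2 bits/symbol average length and the 0.078 bits/symbol redundancy quoted above. Note that tie-breaking between equal probabilities may yield a different, but equally optimal, codeword set than Table 3.3.

```python
import heapq
from math import log2

def huffman(probs):
    """Repeatedly merge the two least probable nodes into a tree,
    then read each codeword off the root-to-leaf path."""
    heap = [(p, i, sym) for i, (sym, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    tag = len(heap)  # tie-breaker so tuples never compare subtrees
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)   # least probable node
        p2, _, right = heapq.heappop(heap)  # second least probable node
        heapq.heappush(heap, (p1 + p2, tag, (left, right)))
        tag += 1
    codes = {}
    def walk(node, path):
        if isinstance(node, tuple):
            walk(node[0], path + "0")
            walk(node[1], path + "1")
        else:
            codes[node] = path
    walk(heap[0][2], "")
    return codes

probs = {"a1": 0.2, "a2": 0.4, "a3": 0.2, "a4": 0.1, "a5": 0.1}
codes = huffman(probs)
avg = sum(probs[s] * len(codes[s]) for s in probs)   # 2.2 bits/symbol
entropy = -sum(p * log2(p) for p in probs.values())  # ~2.122 bits/symbol
print(avg, round(avg - entropy, 3))                  # redundancy ~0.078
```

The prefix-free property holds by construction: a symbol sits at a leaf, so no root-to-leaf path can be the beginning of another.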

Limitations of Huffman coding

Although the Huffman code is optimal among variable-length codes, for certain types of data its compression ratio may be low. This limitation lies in the fact that the length of the shortest assignable codeword is one. For certain types of data with very low entropy this will


Table 3.3: Huffman code table.

Letter   Probability   Codeword
a2       0.4           1
a1       0.2           01
a3       0.2           000
a4       0.1           0010
a5       0.1           0011

result in a relatively high redundancy. Arithmetic coding offers an alternative here: contrary to Huffman coding, it does not rely on substituting symbols with codewords, and thus can compress very low entropy data with minimal redundancy. This coding method maps the whole message to a single number with arbitrary precision in the interval of 0 to 1. A more detailed description of arithmetic coding is given in [103].

Another alternative for compressing very low entropy data is run-length encoding. This coding method assumes that the same character appears in the message many times consecutively, and instead of storing each of these individually, it counts the number of identical symbols in a single run, and encodes the run length together with the symbol. As this strategy is only efficient for very low entropy data, it is common practice to combine run-length encoding with Huffman coding to construct a low redundancy coder for a wide range of input data.
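A minimal run-length encoder and decoder pair, as a sketch (illustrative only, not the format of any particular standard):

```python
def rle_encode(seq):
    """Collapse consecutive runs into (symbol, run length) pairs."""
    runs = []
    for s in seq:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([s, 1])       # start a new run
    return [(s, n) for s, n in runs]

def rle_decode(runs):
    """Expand (symbol, run length) pairs back to the original string."""
    return "".join(s * n for s, n in runs)

print(rle_encode("aaaaaabbbbcaa"))  # [('a', 6), ('b', 4), ('c', 1), ('a', 2)]
```

In the combined scheme mentioned above, the (symbol, run length) pairs produced here would then be fed to a Huffman coder.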

3.3 Decorrelation

Since entropy coding doesn’t assume anything about the structure of the data, it is not capable of recognizing and compressing any regular patterns or correlations between consecutive data points. To maximize the efficiency of compression, any correlations should be removed from the data before it is fed to the entropy coder. Here we will briefly discuss some decorrelation strategies for image compression that aim to improve the performance of a subsequent entropy coder.

Transform coding

Very popular methods for decorrelation are transformations that represent the data as a decomposition of orthonormal functions. One of the most common of these transformations is the Fourier transform, which represents a periodic signal as a sum of sine and cosine functions. As most natural images are of continuous tone, the idea behind this is that most of the information will be captured by just a few Fourier coefficients. As a result, compressing the coefficients with an entropy coder should produce a smaller output than encoding the original data.


For practical purposes, instead of the discrete Fourier transform (DFT), the discrete cosine transform (DCT) [111] is usually used in image compression applications. This is very similar to the DFT, the difference being the periodic extension of the signal. In the DCT, the image is mirrored on the sides instead of just repeated, to avoid large jumps at the boundaries that may introduce unwanted high-frequency components in the Fourier transform. If the image is extended in this symmetric way, the Fourier transform will be able to represent the data with only cosine functions, hence the name discrete cosine transform.

One drawback of frequency-based transformations is that they cannot capture any spatial information, as their basis functions are periodic over the whole domain. This can have a negative impact on the image compression ratio. For example, if the image contains a single sharp edge, a high-frequency basis function will need to have a large coefficient. Since this will affect the entire image, other, lower-frequency bases will also need increased coefficients to negate the unwanted effects. To overcome this limitation, the DCT is usually performed on small chunks of the image, such as the 8×8 blocks in JPEG compression. The other option is to use non-periodic basis functions to represent the signal. One good example of this is the use of wavelets, and, based on this, the discrete wavelet transform (DWT) [112, 113].
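The energy-compaction idea can be made concrete with a few lines of NumPy (the helper name is ours, not a library function): the orthonormal DCT-II basis is applied to a smooth, continuous-tone row of pixel values, and nearly all of the signal energy lands in the first two coefficients.

```python
import numpy as np

def dct2_basis(n):
    """Rows of the orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= np.sqrt(1.0 / n)   # DC row scaling
    basis[1:] *= np.sqrt(2.0 / n)  # AC row scaling
    return basis

n = 8
M = dct2_basis(n)
signal = np.linspace(10.0, 17.0, n)     # smooth "image row", no sharp edges
coeffs = M @ signal
energy = coeffs**2
print(energy[:2].sum() / energy.sum())  # close to 1: two coefficients dominate
```

For a row containing a sharp edge the same experiment spreads the energy across many coefficients, which is exactly why JPEG restricts the transform to small blocks.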

As both DCT and DWT operate with floating-point coefficients (although DWT has integer-based variants), the inverse transformation is not guaranteed to exactly reconstruct the original data. This is due to the finite machine precision of floating-point numbers. For this reason, these transformations are generally used in lossy compression algorithms, where the coefficients are quantized based on the required image quality.

Predictive decorrelation

Another family of decorrelation methods is based on predicting the value of each data point (or pixel) depending on the context of the neighboring values. These techniques are based on differential pulse code modulation (DPCM), a decorrelation method originally used for audio compression. Here, before the data stream is entropy coded, a prediction is made for each value, which is equal to the preceding value. The difference from the prediction, called the prediction error or prediction residual (ε), is passed to the entropy coder.

To see why such an algorithm can increase the compression ratio, let’s consider the following sequence:

37 38 39 38 36 37 39 38 40 42 44 46 48

Although there is no obvious pattern in this sequence, there are no sharp jumps between consecutive values, and this can be exploited by using a prediction scheme. Let’s


C B D

A X

Figure 3.2: Context for pixel prediction. X is the current pixel to be encoded and the previously encoded pixels are indicated with a gray background. To allow for decoding, only the already encoded pixels are used for the prediction, typically the nearest neighbors (A, B, C, D).

define the prediction Pred(·) for each element Xk to be equal to the preceding element. The prediction error would be εk = Xk − Pred(Xk) = Xk − Xk−1:

37 1 1 -1 -2 1 2 -1 2 2 2 2 2

As apparent from this new sequence, the number of distinct values is reduced, which means that fewer bits can represent this sequence than the original. When running these two sequences through an entropy coder, the error sequence will be compressed much better due to this property.
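The scheme above, as a minimal sketch (illustrative function names):

```python
def dpcm_encode(xs):
    """Keep the first value, then store differences to the predecessor."""
    return [xs[0]] + [b - a for a, b in zip(xs, xs[1:])]

def dpcm_decode(residuals):
    """Invert the prediction: cumulatively add the residuals back."""
    out = [residuals[0]]
    for eps in residuals[1:]:
        out.append(out[-1] + eps)
    return out

seq = [37, 38, 39, 38, 36, 37, 39, 38, 40, 42, 44, 46, 48]
print(dpcm_encode(seq))  # [37, 1, 1, -1, -2, 1, 2, -1, 2, 2, 2, 2, 2]
```

The round trip is exact, so the decorrelation step itself is lossless; only the subsequent quantization (if any) discards information.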

Using a predictive scheme to remove correlations between neighboring elements is a very good strategy for various applications where the signal is only slowly changing. It has been successfully implemented in audio and image compression algorithms as well, such as lossless JPEG [114], JPEG-LS [115], CALIC [116], SFALIC [117] and FLIC [118], just to name a few.

For image compression, as the datasets are inherently multi-dimensional, the prediction rules can also be multi-dimensional, which can better capture the structure of the data and achieve better decorrelation. For a typical encoding scenario, where the pixels are encoded one by one, the prediction can be based on the already encoded values (gray in Figure 3.2). This causality constraint is necessary in order to be able to decode the image.

Usually the nearest neighbors are used for the prediction, but this is not a necessity; CALIC, for example, uses the second neighbors [116], and the third dimension can also be utilized if the data structure allows it, for example when encoding hyperspectral recordings [119].

The lossless part of the JPEG standard specifies 7 possible rules for the prediction step [114]:

Pred1(X) = A
Pred2(X) = B
Pred3(X) = C
Pred4(X) = A + B − C
Pred5(X) = A + (B − C)/2
Pred6(X) = B + (A − C)/2
Pred7(X) = (A + B)/2

Depending on the patterns of the input image, some predictors can perform better than others. For general use, predictors 4 and 7 are recommended, as these are not direction dependent and usually perform best. Predictor 7 is just the mean of the top and left neighbors, while predictor 4 assumes that the values of pixels A, B, C and X lie on the same plane, and calculates the prediction based on this.

Other algorithms try to improve the prediction by adapting it to the local texture. The LOCO-I algorithm (part of JPEG-LS), for example, uses the median edge detector [115]:

Pred(X) =
  min(A, B)    if C ≥ max(A, B)
  max(A, B)    if C ≤ min(A, B)
  A + B − C    otherwise          (3.4)

This function tries to estimate whether there is an edge close to X. If it detects a vertical edge to the left of X, it picks B for the prediction, whereas if it detects a horizontal edge above X it tends to pick A. If no edge is detected, the prediction is the same as predictor 4 of the lossless JPEG standard.
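A direct transcription of Eq. 3.4 (the argument names follow Figure 3.2):

```python
def med_predict(A, B, C):
    """Median edge detector of LOCO-I / JPEG-LS. A: left neighbor,
    B: top neighbor, C: top-left neighbor of the current pixel X."""
    if C >= max(A, B):
        return min(A, B)
    if C <= min(A, B):
        return max(A, B)
    return A + B - C  # no edge: same as lossless-JPEG predictor 4

print(med_predict(10, 20, 5))   # 20: C below both neighbors
print(med_predict(10, 20, 25))  # 10: C above both neighbors
print(med_predict(10, 20, 12))  # 18: planar prediction A + B - C
```

Equivalently, the function returns the median of the three values A, B and A + B − C, hence the name.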

A useful property of the prediction-based methods is that there is no change of basis as in the transform coding methods, and it is possible to construct bounded lossy compression algorithms. JPEG-LS, for example, defines a near-lossless mode of operation, where the user can select the maximum absolute reconstruction error per pixel. This is achieved by quantizing the prediction residual ε with a quantization step depending on the allowable error. Since the decoding algorithm simply calculates X̂ = Pred(X) + ε, any quantization error of ε is directly reflected in the reconstructed pixel value. Since the quantization error is known, and bounded by the quantization step size, the maximum reconstruction error for each pixel is also bounded. This mode is sometimes used for medical imaging applications, where the preservation of diagnostic quality is required by law, but higher compression ratios are desired than what is possible with lossless compression.
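The residual quantizer can be sketched as follows (a JPEG-LS-style uniform quantizer; δ is the user-selected maximum absolute error, and the function names are ours):

```python
def quantize(eps, delta):
    """Map a residual to a bin index; the bin width 2*delta + 1
    bounds the reconstruction error by delta."""
    sign = 1 if eps >= 0 else -1
    return sign * ((abs(eps) + delta) // (2 * delta + 1))

def dequantize(q, delta):
    """Reconstruct the residual as the representative of its bin."""
    return q * (2 * delta + 1)

for eps in (-7, -1, 0, 3, 11):
    print(eps, dequantize(quantize(eps, 2), 2))  # within +/-2 of eps
```

Setting δ = 0 makes the bin width 1, and the mode degenerates to lossless coding.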


Chapter 4

GPU-accelerated image processing and compression

4.1 Challenges in data handling for light-sheet microscopy

When using any kind of microscopy in research, image processing is a crucial part of the workflow. This is especially true for light-sheet microscopy, since it is capable of imaging the same specimen for multiple days, producing immense amounts of data. A single overnight experiment of Drosophila development (which is a very typical use case for light-sheet microscopy) can produce multiple terabytes of data.

Apart from light-sheet microscopy, many other microscopy modalities also suffer from this problem. Methods such as high-content screening [120–122] and single molecule localization microscopy (SMLM) [123–125] can easily face similar challenges.

Not only are these methods capable of generating data extremely fast, but with the sustained high data rate a single experiment can easily reach multiple terabytes (Figure 4.1). Handling this amount of data can quickly become the bottleneck for many discoveries, which is an increasingly common issue in biological research [126–128].

This chapter will focus on addressing these challenges by presenting a real-time, GPU-based image preprocessing pipeline developed in CUDA [129], consisting of two parts (Figure 4.2). The first part is a fast image fusion method for our workhorse light-sheet microscope, the MuVi-SPIM [48], that enables live fusion of the images arriving from two opposing cameras. The second part of the pipeline, which can also be used in a standalone way, is a real-time image compression library that allows for lossless and noise-dependent lossy compression of the data directly during acquisition, even for high-speed sCMOS cameras.


Figure 4.1: Experiment sizes and data rates of different imaging modalities. Comparison of single-plane illumination microscopy (SPIM, red), high-content screening (light blue), single molecule localization microscopy (SMLM, orange) and confocal microscopy (blue) by typical experiment size and data production rate (see also Table B1 in Appendix B).

[Figure 4.2 diagram: view 1 and view 2 each undergo background subtraction, followed by affine transform, fusion, masking, and compression (lossless or within-noise); the data size shrinks from 7.5 TB to 3.75 TB, 530 GB, and 180 GB along the pipeline.]

Figure 4.2: Real-time image processing pipeline for multiview light-sheet microscopy.

4.2 Real-time preprocessing pipeline

Similarly to the DualMouse-SPIM, our currently used production microscope, the Multiview-SPIM (MuVi-SPIM) [48], also uses multiple imaging directions to improve the image quality. In this case, however, the aim is completeness rather than increased resolution. As the MuVi-SPIM is capable of imaging much larger specimens, such as entire Drosophila embryos, the sample size itself can present some challenges, especially for opaque specimens. As light scattering and absorption impact both the illumination and detection optics, the negative effects for SPIM are more pronounced compared to single-lens systems [130].


Figure 4.3: Operating principle of MuVi-SPIM. (a) The microscope consists of two illumination and two detection arms for simultaneous multiview illumination and detection. (b) The 4 arms meet in the imaging chamber that is filled with the imaging medium and contains the sample. (c) The sample is held by a glass capillary, in a GelRite cylinder. Optical sectioning is achieved by a virtual light-sheet. (d) The light-sheets can be generated from two sides (light-sheet 1 and 2), and detection is also double sided (camera 1 and 2). Adapted from [48].

4.2.1 Multiview SPIM for in toto imaging

MuVi-SPIM provides an elegant solution for multiview imaging. A standard SPIM setup with a single detection and a single illumination lens would rotate the sample to acquire images from multiple directions. MuVi-SPIM, on the other hand, utilizes two opposing objectives for illumination and two opposing objectives for detection (Figure 4.3a). As the sample is held by an aqueous gel inside the imaging chamber, all objectives have an unobstructed view of it from multiple directions (Figure 4.3b,c).

Data acquisition is done in two steps: the sample is illuminated by light-sheet 1, and fluorescence is collected by both detection objectives at the same time. After this, light-sheet 2 is activated and both cameras record the fluorescence again (Figure 4.3d). This process results in 4 datasets, all with partial information due to scattering effects. The 4 views are later fused into a single, high quality dataset (Figure 4.3e). This fusion process is necessary before any further analysis steps can be performed. However, due to the sheer size of the data, it takes a considerable amount of time after the acquisition.

By combining scanned light-sheet illumination [52] with confocal slit detection on the camera chip [131], it is possible to exclude out-of-focus, scattered illumination light. This way it is possible to illuminate simultaneously with both light-sheets, which leaves us with only two views, the views of the two opposing cameras [75].


4.2.2 Image registration

To perform image fusion of multiple views, image registration is necessary first: the coordinate systems of the views have to be properly overlapped. Ideally a single mirroring transformation would be enough to superpose the two camera images; in practice, however, the microscope can never be aligned with such precision. Other types of transformations are also necessary: translation to account for offsets in the field of view; scaling in case of slightly different magnifications; and also shearing if the detection plane is not perfectly perpendicular to the sample movement direction [132]. To combine all of these effects, a full 3D affine transformation is necessary to properly align the two camera images (Figure 4.4a). This transformation can be represented by a matrix multiplication with 12 different parameters:

\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix} =
\begin{pmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ 0 & 0 & 0 & 1 \end{pmatrix}
\times
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} =
\begin{pmatrix} ax + by + cz + d \\ ex + fy + gz + h \\ ix + jy + kz + l \\ 1 \end{pmatrix}

where x, y, z are the coordinates in the original 3D image, x', y', z' are the coordinates in the transformed image, and a, b, ..., l are the affine transformation parameters.

These parameters are traditionally acquired by a bead-based registration algorithm after imaging fluorescent beads from each view of the microscope [97, 133]. The beads are segmented by using a difference of Gaussians filter, and the registration parameters are acquired by matching the segmented bead coordinates in each view. Identifying the corresponding beads is done by a translation- and rotation-invariant local geometric descriptor, which matches the beads based on their relative position to their nearest neighbors. After the matching beads are identified, the affine transformation parameters are calculated by minimizing the global displacement over all pairs of beads.

For the two opposing views of MuVi-SPIM, these parameters depend only on the optical setup itself, and not on the sample or the experiment. Because of this, it is sufficient to determine the transformation parameters only after modifying the microscope (e.g., after realignment).

In our currently used fusion pipeline, after the parameters are determined, the 3D stacks are fused by transforming one of the views to the coordinate system of the other (using the affine transformation parameters from the bead-based registration), and fusing the two stacks by applying a sigmoidal weighted average (Figure 4.4a). The weights are determined so as to exclude the parts of the stacks that have worse image quality and complement these from the other view's high quality regions. When using electronic confocal slit detection (eCSD), weighting is not necessary, as the majority of the scattered light is already rejected in these recordings, and a simple sum of the two stacks gives the


Figure 4.4: Multiview fusion methods for light-sheet microscopy. a) Full 3D stacks are acquired from the opposing views (yellow and blue), which are then registered in 3D space using previously determined affine transformation parameters. The registered stacks are then weighted-averaged to create the final fused stack (green). b) Images from opposing views are directly fused plane by plane. Registration takes place in 2D space, thus reducing computational effort and memory requirements. The registered planes are then weighted-averaged to create the final fused image.

best result regardless of the sample [75].

The fusion process itself can be very resource intensive and can take a considerable amount of time. This is simply due to the size of the 3D stacks: a single dataset is usually between 2 and 4 GB in size. Thus, the memory required to fuse 2 of these stacks is 3 · 4 GB = 12 GB, as the result will also take up the same space. Just reading and writing this amount of information to the hard disk takes a substantial amount of time. When using an SSD drive, for example, with a 500 MB/s read and write speed, the read/write operations alone will take around 12 GB / (500 MB/s) = 24 s.
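As a plain NumPy sketch of the weighted fusion step described above (hypothetical function name; in the actual pipeline this runs as a CUDA kernel on registered planes):

```python
import numpy as np

def fuse_planes(view1, view2, weight=None):
    """Blend two registered camera planes of the same z position.
    With eCSD a plain sum is used; otherwise a weight map (e.g. a
    sigmoid across the detection axis) favors the better view."""
    if weight is None:
        return view1 + view2  # eCSD case: simple sum
    return weight * view1 + (1.0 - weight) * view2

h, w = 4, 6
v1 = np.full((h, w), 100.0)
v2 = np.full((h, w), 200.0)
# sigmoidal weight decaying along x: view 1 dominates on the left
x = np.linspace(-6.0, 6.0, w)
weight = np.broadcast_to(1.0 / (1.0 + np.exp(x)), (h, w))
fused = fuse_planes(v1, v2, weight)
```

Because the weights sum to one everywhere, the fused intensity always stays between the two input intensities, smoothly handing over from one camera to the other.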

4.2.3 2D fusion of opposing views

To speed up the image processing, we take a new approach to fusing the images. As the two objectives of MuVi-SPIM ideally image the same z plane, it should be possible to reduce the alignment problem to a 2D affine transformation:

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} a & b & d \\ e & f & h \\ 0 & 0 & 1 \end{pmatrix}
\times
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} =
\begin{pmatrix} ax + by + d \\ ex + fy + h \\ 1 \end{pmatrix}

The advantage of the simplified transformation is not only the reduced computational complexity, but also the massively reduced memory requirement, as in this case it is sufficient to store 3 planes in memory instead of 3 entire stacks (Figure 4.4b). Performing the 2D direct fusion on the GPU immediately after acquisition has two benefits: first, only the fused images are stored on the hard drive, which directly results in 50% savings in storage space. Second, since the fusion is faster than the camera acquisition speed, the fused image can be displayed for the users instead of the 2 raw images. This greatly improves user experience and speeds up the initial set-up phase of each experiment.

In order to allow for the direct 2D fusion, the microscope needs to be very well aligned


Figure 4.5: Classes of the LVCUDA library. (a) CuArray class, wrapping a 1D or 2D pitched device memory for use in LabVIEW. (b) Texture class, wrapping a CUDA texture object for use in LabVIEW.

to minimize any adverse effects arising from discarding some of the transformation parameters. The requirements for this are the following:

|cz| < σxy ∀z (4.1)

|gz| < σxy ∀z (4.2)

|ix + jy + (k − 1)z + l| < σz ∀x, y, z (4.3)

where σxy is the lateral resolution and σz is the axial resolution of the microscope, which are 277 nm and 1099 nm, respectively (for NA = 1.1).

If these conditions are fulfilled (i.e., the microscope is properly aligned), direct plane-by-plane fusion will not result in any loss of information compared to the full 3D image fusion.

4.2.4 CUDA implementation of direct fusion

Our custom microscope control software (Section 2.4.2) is developed in LabVIEW, and we implemented the pipeline as a combination of a CUDA and a LabVIEW library. The CUDA library implements all the necessary low level functions and exposes them in a dynamically linked library (dll). The LabVIEW library, LVCUDA.lvlib, implements two high level classes: cuArray and Texture (Figure 4.5). These classes interface with the CUDA dll, and make it easy to build a flexible CUDA-based image processing pipeline in LabVIEW.

As the pipeline operates on 2D images, both classes support 1D or 2D arrays in a pitched configuration. CuArray, as the parent class, stores the pointer to the array, as well as the pitch, width and height. LabVIEW natively does not support pointer operations, therefore the pointer is simply cast to a 64 bit unsigned integer internally. As this is a private member of the class, there is no danger of accidental modification.


Figure 4.6: Applying a mask to remove the background. (a) Original image of a Drosophila embryo. (b) Background (blue) selected by thresholding the blurred image.

Being a descendant of the cuArray class, the Texture class adds a further member to store the reference to the CUDA texture object. This class is used to implement the affine transformation, as it supports floating point indexing and automatic linear interpolation.

Since the processing is actually bottlenecked by the data transfer rate between the main memory and GPU memory, we implemented multiple commonly used image preprocessing operations for our pipeline to maximize the utility of the GPU. These functions include background subtraction, background masking and image compression. We will discuss these features in the following sections of the chapter.

4.2.5 Background subtraction and masking

As an additional step of the preprocessing pipeline, background subtraction and background masking were implemented. Background subtraction can improve image quality as it removes any system-specific patterns, while background masking is beneficial for the subsequent compression step.

To perform the background subtraction, dark images are recorded without the laser turned on. Typically 500–2000 images are taken and averaged, to capture the pixel non-uniformities of the camera sensors. As this is camera specific, the step is repeated for all cameras. If necessary, the recordings can be repeated under different conditions, such as different exposure times, or different sensor modes (such as global shutter and rolling shutter). The averaged background images are saved in a custom file format in an HDF5 container [134] together with the camera serial number. The microscope software can read these files and applies the background subtraction to the appropriate cameras during imaging.

Background masking is a useful step to increase the efficiency of image compression. For light-sheet microscopy, where typically an entire embryo is imaged, the boundary of the specimen, and thus the area of interest, is well defined. Anything outside the embryo is just noise which does not contribute any useful information. For image compression, however, these regions are extremely difficult to compress, as due to the


[Figure 4.7 panels: front views with FWHM of 1.277 µm and 1.278 µm, and side views with FWHM of 4.530 µm and 4.511 µm, for the two fusion methods. Scale bar: 1 µm.]

Figure 4.7: Comparison of 3D and 2D fusion of beads. Front and side views of a representative bead after 3D full affine fusion (a, b) and after 2D planewise fusion (c, d). FWHM was measured along the indicated lines, through the center of the bead image. Scale bar: 1 µm.

random nature of noise, almost no reduction is achievable (see Section 3.1). To circumvent this issue, the background pixels outside the specimen can be set to 0, which will greatly increase the compression ratio without compromising any data of interest [135].

The masking is performed in the following way. As the sample has a much higher signal compared to the background, a thresholding operation is sufficient. In order to have a smooth mask, the images are first blurred by a Gaussian kernel, and the thresholding is done on the blurred image. This defines a binary mask with smooth boundaries, which is applied to the original image (Figure 4.6). The threshold can be calculated automatically by the Triangle method [136], or can also be user specified.
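The masking step can be sketched with plain NumPy as follows (a separable Gaussian blur; the automatic Triangle threshold is replaced here by a user-supplied value, and the function names are ours):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1D convolutions along each axis."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")

def mask_background(img, sigma, threshold):
    """Zero out pixels where the blurred image falls below threshold."""
    mask = gaussian_blur(img.astype(np.float64), sigma) > threshold
    return img * mask

img = np.zeros((64, 64))
img[20:44, 20:44] = 1000.0  # bright "embryo" on a dark background
masked = mask_background(img, sigma=2.0, threshold=500.0)
```

Thresholding the blurred image rather than the raw one keeps the mask boundary smooth, so isolated noisy pixels outside the specimen do not punch holes into the mask.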

4.2.6 Methods and results

The image preprocessing pipeline was tested on our MuVi-SPIM microscope [48]. Processing speeds of the CPU and GPU implementations of the pipeline were measured with the profiling tool of LabVIEW, and are an average of 10 runs. The benchmarking computer had 2 Intel Xeon E5620 processors with 8 processing cores running at 2.4 GHz, 24 GB RAM and an Nvidia GeForce GTX 750 graphics card. Test images were 2048×2048 pixels in size at 16 bit depth.

For the background subtraction we recorded 1500 dark images with each camera and averaged them in order to obtain the camera-specific background images. Prior to image acquisition these were uploaded to the GPU memory, and were readily available for the pipeline. The implemented CUDA pipeline with background subtraction, 2D fusion and masking reached a throughput of 135.8 frames per second (fps), while a single-threaded


CPU implementation could only process 7.4 images per second on average.

Following careful alignment of the microscope to meet the previously discussed requirements (Eqs. 4.1–4.3), we imaged fluorescent beads in a gel suspension to obtain the affine transformation parameters. The measured 3D transformation parameters were the following:

[  1.013        2.494×10⁻³    7.182×10⁻³   −21.67 ]
[ −1.697×10⁻³   1.014        −3.192×10⁻⁵   −4.161 ]
[ −9.694×10⁻⁵   1.937×10⁻⁷    1.003        −0.433 ]
[  0            0             0              1    ]

Substituting these values into Eqs. 4.1–4.3 and evaluating over the full range of x, y, z, the maximum contribution of the discarded coefficients for each coordinate will be:

max |cz| = 0.070 µm
max |gz| = 3.11×10⁻⁴ µm
max |ix + jy + (k − 1)z + l| = 0.7789 µm

As these are below the resolution of the microscope, discarding the coefficients and reducing the 3D transformation to a 2D transformation will not have any negative effect on the image quality.
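The bound on the discarded terms can be sketched directly. Since each dropped term is affine in (x, y, z), its extremum lies at a corner of the evaluated volume, so only the corners need checking. The matrix below is the measured one from the text; the coordinate ranges are illustrative assumptions, not the thesis' actual stack extents:

```python
# Measured 3D affine transform, rows [a b c d; e f g h; i j k l; 0 0 0 1]
M = [
    [1.013, 2.494e-3, 7.182e-3, -21.67],
    [-1.697e-3, 1.014, -3.192e-5, -4.161],
    [-9.694e-5, 1.937e-7, 1.003, -0.433],
    [0.0, 0.0, 0.0, 1.0],
]

def max_discarded_terms(M, x_corners, y_corners, z_corners):
    """Worst-case magnitude of the terms dropped when reducing the 3D
    affine transform to a per-plane 2D transform: c*z, g*z, and the
    deviation of the z row from identity."""
    c, g = M[0][2], M[1][2]
    i, j, k, l = M[2]
    max_cz = max(abs(c * z) for z in z_corners)
    max_gz = max(abs(g * z) for z in z_corners)
    max_zrow = max(abs(i * x + j * y + (k - 1.0) * z + l)
                   for x in x_corners for y in y_corners for z in z_corners)
    return max_cz, max_gz, max_zrow
```

If all three bounds stay below the microscope's resolution, the planewise 2D reduction is safe, which is exactly the argument made above.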

To validate our hypotheses, we fused bead stacks that were acquired for the calibration using both the 3D and the 2D transformations. The 2D, planewise fused beads do not exhibit any pathological morphologies compared to the 3D fused beads (Figure 4.7). When measuring the size of the bead images (FWHM) along the lateral and axial directions, the difference is less than 0.5% compared to the 3D fused dataset. This is well below any resolvable features.

We applied the direct fusion method to various biological specimens, such as Drosophila melanogaster embryos, zebrafish larvae, Phallusia mammillata embryos and Volvox algae. As a demonstration, here we show the results of imaging Drosophila embryos expressing the H2Av-mCherry histone marker. The embryos were imaged first without direct fusion enabled, and immediately afterwards with direct fusion enabled (Figure 4.8). The dependence of image quality on the depth of the imaging plane is especially apparent in single-view stacks. Planes closer to the objective give a sharp, high contrast image, while planes deeper inside the embryo are severely degraded due to scattering (Figure 4.9).

Stacks obtained with live fusion enabled show a consistently high image quality throughout the entire stack, independent of the depth (Figure 4.9). This allows us to keep only the already fused data, thus effectively reducing the storage requirement by half, and facilitating further data processing steps.


Figure 4.8: GPU fused images of a Drosophila melanogaster embryo. Two stacks were taken in quick succession, first without fusion, then with live fusion enabled. Fused images are shown in the middle of each subfigure, while the individual camera images are in the bottom insets. The top-left inset depicts the z-position of the shown images. (a) Image from closer to the left camera. (b) Image from the center of the embryo. (c) Image from closer to the right camera.

[Figure 4.9 plot: DCTS image contrast as a function of z position in the stack (µm) for the fused, left and right camera stacks]

Figure 4.9: Image contrast depending on z position for raw and fused stacks. Image contrast was measured as the Shannon entropy of the normalized Discrete Cosine Transform of the sections [137]. Contrast measurements were performed on the images shown in Figure 4.8.


4.3 B3D image compression

The second part of our GPU-based image preprocessing pipeline is a new image compression algorithm that allows for extremely fast image compression to efficiently reduce data sizes already during acquisition. By reducing the data size, not only can the cost of storage be reduced, but the time it takes to transfer the data during the various steps of data processing is also shortened. A fast compression method can also greatly improve 3D data browsing possibilities, as more data can be piped to the rendering software.

Despite its advantages, real-time compression for high-speed microscopy has not been available. This is mostly due to the lack of appropriate compression methods that are suitable for scientific imaging and also offer the high throughput demanded by these applications. While typically used lossless compression methods, such as JPEG2000 [138], offer good compression ratios, they are very slow in processing speed, at least compared to the data rate of a modern microscope (∼1 GB/s, Figure 4.1). High-speed compression methods that can deal with such data rates have been developed for ultra-high-definition 4K and 8K digital cameras, such as the high efficiency video codec (HEVC) [139]. These methods, however, have been optimized for lossy image compression, which is generally not acceptable for scientific data [105], and rarely support compression of high bit depth originals, which is typically the case for modern sCMOS sensors. Although the HEVC recommendation does specify bit depths up to 16 bits, and lossless compression, the open source version, x265, only supports bit depths up to 12 bits [140].

To address these issues, we developed B3D, a GPU-based image compression library capable of high-speed compression of microscopy images during the image acquisition process. By utilizing the massively parallel processing architecture of a GPU, we were not only able to reach a compression and decompression speed of 1 GB/s, but our algorithm also keeps the load off the CPU, making it available for other computing tasks related to operating the microscope itself.

A second feature of our compression library is a novel algorithm for noise-dependent lossy compression of scientific images. Although most lossy compression methods are not suitable for scientific data, our method allows for a deterministic way to control the exact amount of loss. The algorithm was designed such that any modification to a single pixel is proportional to the inherent image noise, accounting for shot noise and camera read noise. Since this noise already introduces some uncertainty to the raw data, if any changes made are smaller than this, the information extractable by further data analysis steps should not be affected, whereas the compression ratio is massively increased.


4.3.1 Compression algorithm

Lossless compression

The algorithm design initially had three main requirements to ensure suitability for a high-speed microscopy environment: 1) low complexity for fast execution, 2) no data dependencies to enable parallel execution, and 3) full reversibility to ensure lossless compression. Since existing image compression standards such as JPEG2000 [138] or JPEG-LS [115] were designed to maximize compression ratio even at the expense of compression speed, these were not suitable for our needs. Furthermore, most of their steps inherently require sequential execution, thus preventing efficient parallelization. Promising efforts have been made to enable parallel compression by simplifying JPEG-LS and removing the limiting steps from the algorithm (such as Golomb parameter adaptation or context modeling) [117, 118]. Although some of these methods could reach a significant speedup with minimal expense in compression ratio, their performance remains insufficient for real-time compression of high-speed microscopy data.

As in the case of other image compression methods, our algorithm has two main components: decorrelation and entropy coding (Figure 4.10a). In this work we focused on implementing a parallel encoder and decoder for the decorrelation part that would also be compatible with the noise-dependent lossy compression, as efficient CUDA-based entropy coders are already available [141]. To decorrelate the data, similarly to JPEG-LS and lossless JPEG [114], a prediction is performed for each pixel by calculating the average of the top and left neighbors (Figure 4.10b). The prediction is subtracted from the real pixel value, resulting in the prediction error term:

ε = X − Pred(X). (4.4)

Since the neighboring pixels have a high chance of correlation, these error terms will typically be very small compared to the raw intensity values. This step is similar to predictor 7 of lossless JPEG (see 3.3).
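A serial reference for this decorrelation step can be sketched in a few lines (pure Python; the actual B3D kernels run per-pixel on the GPU, and the edge handling below — falling back to the single available neighbor — is an assumption consistent with the tiling discussion later in this section):

```python
def _pred(a, b):
    # average of left (a) and top (b) neighbors; edge pixels use the one
    # available neighbor, the corner pixel has no prediction
    if a is not None and b is not None:
        return (a + b) // 2
    if a is not None:
        return a
    if b is not None:
        return b
    return 0

def predict(img):
    """Prediction error terms e = X - Pred(X) (Eq. 4.4)."""
    h, w = len(img), len(img[0])
    return [[img[y][x] - _pred(img[y][x - 1] if x else None,
                               img[y - 1][x] if y else None)
             for x in range(w)] for y in range(h)]

def unpredict(err):
    """Sequential reconstruction; B3D parallelizes this over tiles and
    anti-diagonals instead (Figure 4.10c)."""
    h, w = len(err), len(err[0])
    img = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            img[y][x] = err[y][x] + _pred(img[y][x - 1] if x else None,
                                          img[y - 1][x] if y else None)
    return img
```

With integer floor division in the predictor the round trip is exact, which is what makes the scheme lossless.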

The pixel prediction method is especially well suited for parallel compression, as it can be performed independently for all pixels. For decompression, the top and left neighbors of a pixel need to be reconstructed first, so parallelization needs some adjustments. An obvious choice is to perform the reverse prediction in tiles, where the number of threads will equal the number of tiles. Parallel execution can be further scaled up if the decoding is done in diagonals, starting from the top left corner (Figure 4.10c). With this method the average number of parallel threads will be

Nth = Nt · at/2, (4.5)



Figure 4.10: B3D algorithm schematics. (a) Complete algorithm flowchart depicting the main stages of the compression. (b) Prediction context for pixel X, the next sample to be encoded. Three neighboring pixels are considered: the left (A), top (B) and top left (C) neighbors. (c) Parallelization scheme for decompression and lossy compression. Already processed pixels are in gray, unprocessed pixels are in white, active pixels are in green.

where Nth is the number of threads, Nt is the number of tiles, and at is the tile width and height for a square tile. Although the number of threads can be maximized by making smaller tiles, the compression ratio will be negatively affected. This is because the corner pixel cannot be predicted, and for the edge pixels only one neighbor can be used instead of two. We find that square tiles between 16 and 64 pixels offer a good trade-off between decompression speed and compression ratio.
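The anti-diagonal decoding order and the thread count of Eq. 4.5 can be sketched as follows (a pure-Python illustration; pixels on the same anti-diagonal y + x = d only depend on already-decoded diagonals, so they can be reconstructed concurrently):

```python
def tile_diagonals(a):
    """Decode order inside an a-by-a tile, grouped by anti-diagonal."""
    return [[(y, d - y) for y in range(max(0, d - a + 1), min(a, d + 1))]
            for d in range(2 * a - 1)]

def average_parallel_threads(n_tiles, a):
    """Average number of concurrently active pixels across all tiles.
    Equals n_tiles * a**2 / (2a - 1), i.e. roughly Nt * at / 2 (Eq. 4.5)."""
    diags = tile_diagonals(a)
    avg_active = sum(len(d) for d in diags) / len(diags)
    return n_tiles * avg_active
```

For a 16-pixel tile the average is 256/31 ≈ 8.26 active pixels, close to the at/2 = 8 approximation used in the text.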

The prediction error terms are subsequently run-length encoded and entropy coded by a fast, CUDA-enabled Huffman coder [141, 142] to effectively reduce the data size. The library makes extensive use of the parallel prefix sum, which enables effective parallel execution of both run-length encoding and Huffman coding. For decompression, the Huffman coding and run-length encoding are first reversed, then each pixel value is reconstructed from the neighboring values and the prediction errors. Although this implementation sacrifices a little in compression ratio compared to size-optimized methods, it can surpass a compression speed of 1 GB/s (see Section 4.3.2), which is sufficient even for the dual camera confocal MuVi-SPIM setup [75].
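Behaviorally, the run-length stage amounts to the following (a serial reference in pure Python; the GPU version builds the runs with parallel prefix sums instead of a loop):

```python
def rle_encode(vals):
    """Collapse the value stream into [value, run_length] pairs.
    Prediction errors cluster around 0, so long zero runs are common
    and shrink considerably before entropy coding."""
    out = []
    for v in vals:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decode(pairs):
    """Exact inverse of rle_encode."""
    return [v for v, n in pairs for _ in range(n)]
```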


Within-noise-level compression

Noise-dependent lossy compression relies on the fact that microscopy images naturally contain photon shot noise. To increase the compression ratio, this algorithm allows some differences in the pixel values, as long as they are within a predefined proportion of the inherent noise. A special case of this mode is when all compression errors are smaller than the standard deviation of the noise (σ), which we call within-noise-level (WNL) compression.

Our algorithm is inspired by both the near-lossless mode of JPEG-LS [115], where the maximum compression error can be set to a predefined value, and the square root compression scheme [143, 144], which quantizes its input proportionally to its square root. We combine the two methods to achieve noise-dependent lossy compression while still retaining good image quality and avoiding banding artifacts.

Before the prediction errors are sent to the entropy coder, a quantization can take place that reduces the number of symbols that need to be encoded in order to increase the compression ratio. By changing the quantization step, the maximum absolute reconstruction error can be defined, as when using JPEG-LS. For noise-dependent compression, however, the quantization step should be proportional to the image noise, which can be achieved by the following variance stabilizing transformation introduced by Bernstein et al. [144]:

T = 2√(e + σRN²), (4.6)

where e is the intensity scaled to photoelectrons, and σRN is the standard deviation of the camera read noise. To perform the scaling of digital numbers (DN) to photoelectrons, it is necessary to know the camera offset and conversion parameters: e = (I − offset) · g, where g is the gain in photoelectrons/DN, and I is the original pixel intensity. The transformation assumes Poisson-Gaussian noise, incorporating the photon shot noise and the camera readout noise, which is a good model for most scientific imaging sensors, such as (EM)CCD or sCMOS.
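A direct sketch of the transform of Eq. 4.6 and its inverse (the parameter values in the test are illustrative; in practice `offset`, `gain` and `sigma_rn` come from camera calibration):

```python
import math

def stabilize(I, offset, gain, sigma_rn):
    """Variance stabilizing transform (Eq. 4.6): after scaling DN to
    photoelectrons, the Poisson-Gaussian noise has approximately unit
    variance in the transformed domain, so a fixed quantization step
    there corresponds to a noise-proportional step in the raw data."""
    e = max(I - offset, 0.0) * gain   # digital numbers -> photoelectrons
    return 2.0 * math.sqrt(e + sigma_rn * sigma_rn)

def unstabilize(T, offset, gain, sigma_rn):
    """Inverse transform, mapping back to digital numbers."""
    e = (T / 2.0) ** 2 - sigma_rn * sigma_rn
    return e / gain + offset
```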

To accommodate all imaging needs, two options are available in the noise-dependent lossy mode. In Mode 1, after the stabilizing transform (Equation 4.6), the prediction is performed first, followed by quantization of the prediction errors. This mode reaches a better compression ratio and a higher peak signal-to-noise ratio (PSNR) for the same quantization step compared to Mode 2 (Figure 4.11 a, b), and is recommended for most use cases. For larger quantization steps (q ≥ 3σ), however, Mode 1 might introduce some spatial correlations, because the prediction errors that are being quantized depend on multiple adjacent pixels (Figure 4.11 c, d). Mode 2 excludes this possibility by swapping the quantization and prediction (Figure 4.11 c, d) at the expense of compression ratio. Since no correlations are introduced, Mode 2 is suitable for highly


sensitive image analysis tasks, such as single-molecule localization. In this chapter we use Mode 2 for single-molecule localization data, and Mode 1 for all other datasets.
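The two orderings can be illustrated in 1D (a simplified sketch using a left-neighbor predictor for brevity, whereas B3D predicts from both the top and left neighbors; the closed-loop reconstruction in Mode 1 is what keeps every pixel's error within q/2):

```python
def quantize(v, q):
    # uniform quantizer with step q
    return round(v / q)

def dequantize(n, q):
    return n * q

def mode1_encode(row, q):
    """Mode 1: predict first, then quantize the prediction error.
    The encoder tracks the decoder's reconstruction to stay in sync."""
    out, prev_rec = [], 0.0
    for v in row:
        e = quantize(v - prev_rec, q)
        out.append(e)
        prev_rec = prev_rec + dequantize(e, q)
    return out

def mode1_decode(errs, q):
    out, prev = [], 0.0
    for e in errs:
        prev = prev + dequantize(e, q)
        out.append(prev)
    return out

def mode2_encode(row, q):
    """Mode 2: quantize each pixel independently, then predict; the
    quantization of a pixel never depends on its neighbors, so no
    spatial correlation is introduced."""
    qvals = [quantize(v, q) for v in row]
    return [qvals[i] - (qvals[i - 1] if i else 0) for i in range(len(qvals))]

def mode2_decode(diffs, q):
    out, acc = [], 0
    for d in diffs:
        acc += d
        out.append(dequantize(acc, q))
    return out
```

In both modes the reconstruction error per pixel is bounded by half the quantization step, which is what ties the maximum loss to the noise level after the stabilizing transform.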


Figure 4.11: Options for noise-dependent lossy compression. Comparing Mode 1 (prediction then quantization) and Mode 2 (quantization then prediction) of noise-dependent lossy compression in terms of compression ratio, peak signal-to-noise ratio (PSNR) and spatial correlations introduced to random noise. (a) Compression ratio as a function of the quantization step for Mode 1 and Mode 2. (b) PSNR as a function of the quantization step for Mode 1 and Mode 2. (c, d) Random noise was compressed at various quantization steps both for Mode 1 and Mode 2. Autocorrelation was calculated for the compressed images to see whether the compression introduces any spatial correlation between the pixels. For q=1σ both modes are free of correlation (c, top: compressed images, bottom: autocorrelation); however, for q=4σ Mode 1 exhibits a correlation pattern (d, top left: compressed image, bottom left: autocorrelation) that is not present in Mode 2 (d, top right: compressed image, bottom right: autocorrelation). For more discussion, see Section 4.3.1.

Additional noise introduced by compression

When applying quantization to any kind of data, the quantization error can be modeled as an additional uniform noise with standard deviation of δ/√12, where δ is the quantization step size [145]. During compression the quantization is proportional to the standard deviation of the original image noise, thus the noise on the decompressed image will be:

σout = σin · √(1 + q²/12) (4.7)

For the WNL case, when q = 1σ, this means an increase of 4%, which also coincides with the measured increase of the localization error for single-molecule localization (Figure 4.18a).


Figure 4.12: Theoretical and measured increase in image noise for WNL compression. Validating Equation 4.8, we plot the relative increase in image noise as a function of the mean. Blue line: plot of Equation 4.8. Red line: plot of measured data. To obtain the measurement, we compressed a stack of 512 images consisting of Poisson noise, each with a different mean. After WNL compression the dataset was read in Matlab, and the standard deviation was calculated for each frame. The ratio σout/σin is plotted as a function of the mean.

However, this is not the only source of additional noise we have to consider, because at the decompression step there is a possibility of getting floating point results after applying the inverse of the compression steps, and for most applications these have to be coerced to integer numbers. This effectively adds a second quantization, but now at a constant quantization step of 1. This results in an additional uniform noise with variance of 1/12:

σout = σin · √(1 + q²/12 + 1/12) (4.8)

We verified this by compressing 512 images (1024×1024 pixels) of random Poisson noise with different means (1–512), and calculated the standard deviation for each frame after the decompression step. Finally, we plotted the ratio σout/σin as a function of the mean m (Figure 4.12), and found that it is in very good agreement with the theory outlined above.
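The predicted inflation factors of Eqs. 4.7–4.8 can be tabulated directly:

```python
import math

def noise_inflation(q, integer_coercion=True):
    """Relative std increase sigma_out / sigma_in after quantization with
    step q (in units of the image noise sigma, Eq. 4.7), optionally adding
    the extra 1/12 variance from coercing decompressed values back to
    integers (Eq. 4.8)."""
    var = 1.0 + q * q / 12.0
    if integer_coercion:
        var += 1.0 / 12.0
    return math.sqrt(var)
```

For WNL compression, `noise_inflation(1.0, integer_coercion=False)` ≈ 1.0408, the 4% figure quoted above for Eq. 4.7.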

4.3.2 Evaluation of the compression algorithm

Methods

Compression benchmarking For all presented benchmarks, TIFF and JPEG2000 performance was measured through MATLAB's imwrite and imread functions, while KLB and B3D performance was measured in C++. All benchmarks were run on a computer featuring 32 processing cores (2× Intel Xeon E5-2620 v4), 128 GB RAM and an NVIDIA GeForce GTX 970 graphics processing unit. Read and write measurements were performed in RAM to minimize I/O overhead, and are an average of 5 runs. SMLM datasets were provided by Joran Deschamps (EMBL, Heidelberg), and were originally


published in [146] and [147]. Screening datasets were provided by Jean-Karim Hériché (EMBL, Heidelberg), and were originally published in [148].

Light-sheet imaging Drosophila embryos were imaged in the MuVi-SPIM setup [48] using the electronic confocal slit detection (eCSD) [75]. Embryos were collected on an agar juice plate, and dechorionated in 50% bleach solution for 1 min. The embryos were then mounted in a shortened glass capillary (Brand 100 µl) filled with 0.8% GelRite (Sigma-Aldrich), and pushed out of the capillary to be supported only by the gel.

3D nucleus segmentation 3D nucleus segmentation of Drosophila melanogaster embryos was performed using Ilastik 1.2.0 [149]. The original dataset was compressed at different quantization levels, then upscaled in z to obtain isotropic resolution. To identify the nuclei, we used the pixel classification workflow, and trained it on the uncompressed dataset. This training was then used to segment the compressed datasets as well. Segmentation overlap was calculated in Matlab using the Sørensen–Dice index [150, 151]:

QS = 2|A ∩ B| / (|A| + |B|) (4.9)

where the sets A and B represent the pixels included in two different segmentations.
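As a concrete reference, the overlap score of Eq. 4.9 for two masks represented as sets of pixel coordinates:

```python
def dice_index(a, b):
    """Sorensen-Dice overlap QS = 2|A intersect B| / (|A| + |B|) for two
    binary masks given as sets of pixel coordinates."""
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A score of 1.0 means identical segmentations; the values reported below for the compressed datasets stay close to this bound.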

3D membrane segmentation Raw MuVi-SPIM recordings of Phallusia mammillata embryos expressing the PH-citrine membrane marker were kindly provided by Ulla-Maj Fiuza (EMBL, Heidelberg). Each recording consisted of 4 views at 90 degree rotations. The views were fused using an image-based registration algorithm followed by a sigmoidal blending of the 4 views. The fused stack was then segmented using the MARS algorithm [152] with an hmin parameter of 10. The raw data (all 4 views) was compressed at different levels, and segmented using the same pipeline. Segmentation results were then processed in Matlab to calculate the overlap score for the membranes using the Sørensen–Dice index.

Single-molecule localization imaging In order to visualize microtubules, U2OS cells were treated as in [146] and imaged in a dSTORM buffer [153]. In brief, the cells were permeabilized and fixed with glutaraldehyde, washed, then incubated with primary tubulin antibodies and finally stained with Alexa Fluor 647 coupled secondary antibodies. The images were recorded on a home-built microscope previously described [146], in its 2D single-channel mode.

Single-molecule localization data analysis Analysis of single-molecule localization data was performed with custom-written MATLAB software as in [147]. Pixel values


were converted to photon counts according to the measured offset and calibrated gain of the camera (EMCCD iXon, Andor). The background was estimated with a wavelet filter [154], background-subtracted images were thresholded, and local maxima were detected on the same images. 7-pixel ROIs around the detected local maxima were extracted from the raw images and fitted with a GPU-based MLE fitter [155]. Drift correction was performed based on cross-correlation. Finally, images were reconstructed by filtering out localizations with a high uncertainty (>30 nm) and large PSF (>150 nm) and Gaussian rendering.

Simulation of single-molecule localization data Single-molecule localization datasets were simulated in Matlab by generating a grid of pixelated Gaussian spots with a standard deviation of 1 pixel. With a pixel size of 100 nm, this corresponds to a FWHM of 235.48 nm. The center of each spot was slightly offset from the pixel grid at 0.1 pixel increments in both the x and y directions. To this ground truth image a constant value was added to simulate illumination background, and finally Poisson noise was applied to the image. This process was repeated 10,000 times to obtain enough images for adequate accuracy.
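The simulation protocol above can be sketched as follows (the thesis used Matlab; this pure-Python version uses illustrative amplitude and background values, and a Knuth Poisson sampler that is only adequate for the small means used here):

```python
import math
import random

def gaussian_spot(size, cx, cy, sigma, amplitude, background):
    """Pixelated Gaussian emitter on a constant background; the center
    (cx, cy) may sit off the pixel grid, as in the 0.1-pixel offsets."""
    return [[background + amplitude *
             math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def add_poisson_noise(img, rng):
    """Apply per-pixel Poisson noise (Knuth's algorithm)."""
    def poisson(lam):
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    return [[poisson(v) for v in row] for row in img]
```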

Results

We compared our algorithm's performance with some of the most commonly used image formats in the scientific field: TIFF (LZW) and JPEG2000. Furthermore, we also included the state-of-the-art KLB compression [135], which was especially designed for fast compression of large light-sheet datasets. We measured compression speed, decompression speed and the resulting file size for all algorithms (Figure 4.13a). Only B3D is capable of handling the sustained high data rate of modern sCMOS cameras typically used in light-sheet microscopy, while still maintaining compression ratios comparable to more complex, but much slower algorithms (Table B2 in Appendix B).

Using the noise-dependent lossy mode with q = 1σ (WNL), the compression ratio massively increases for all imaging modalities compared to the lossless mode (Figure 4.13b), without any apparent loss in image quality (Figure 4.14). Furthermore, the average compression error is considerably smaller than the image noise itself (Figure 4.15).

To see how the noise-dependent compression affects common imaging pipelines, we tested the effect of different levels of compression on 3D nucleus and membrane segmentation in light-sheet microscopy, and on single-molecule localization accuracy in super-resolution microscopy. First, we imaged a Drosophila melanogaster embryo expressing an H2Av-mCherry nuclear marker in the MuVi-SPIM and segmented the nuclei with Ilastik [149] (Figure 4.16a and Section 4.3.2). Then we performed noise-dependent compression at various quality levels and calculated the segmentation overlap compared to



Figure 4.13: Compression performance. (a) Performance comparison of our B3D compression algorithm (red circle) vs. KLB (orange), uncompressed TIFF (light yellow), LZW compressed TIFF (light blue) and JPEG2000 (blue) regarding write speed (horizontal axis), read speed (vertical axis) and file size (circle size) (see also Table B2). (b) WNL compression performance compared with lossless performance for 9 different datasets representing 3 imaging modalities (SPIM, SMLM, screening). Compression ratio = original size / compressed size. For a description of the datasets see Table B3 in Appendix B.


Figure 4.14: Image quality of a WNL compressed dataset. WNL compression of a Drosophila melanogaster recording taken in the MuVi-SPIM setup. Compression ratio: 19.83. (a–c) Uncompressed image of the whole field of view (a), and zoomed-in smaller regions (b, c). (d–f) WNL compressed image of the whole field of view (d), and zoomed-in smaller regions (e, f). Scale bars: 25 µm (a, d); 2.5 µm (b, c, e, f).



Figure 4.15: Compression error compared to image noise. To compare the difference arising from WNL compression to image noise, we imaged a single plane 100 times in a Drosophila melanogaster embryo expressing the H2Av-mCherry nuclear marker at 38 ms intervals. The whole acquisition took 3.8 s, for which the sample can be considered stationary. To visualize image noise, the standard deviation was calculated for the uncompressed images (left). All images were then WNL compressed, and the root mean square deviation was calculated compared to the uncompressed images (right). The root mean square deviation is on average 3.18 times smaller than the standard deviation of the uncompressed images.

[Figure 4.16 panels: Drosophila nucleus segmentation overlap at q=1σ (CR: 19.83) and q=4σ (CR: 89.68); overlap score and compression ratio plotted against compression level; scale bars 5 µm]

Figure 4.16: Influence of noise-dependent lossy compression on 3D nucleus segmentation. A Drosophila melanogaster embryo expressing the H2Av-mCherry nuclear marker was imaged in MuVi-SPIM [48], and 3D nucleus segmentation was performed (Section 4.3.2) (a). The raw dataset was subsequently compressed at increasingly higher compression levels, and segmented based on the training of the uncompressed data. To visualize segmentation mismatch, the results of the uncompressed (green) and compressed (magenta) datasets are overlaid in a single image (b, c; overlap in white). Representative compression levels were chosen at two different multiples of the photon shot noise, at q=1σ (b) and q=4σ (c). For all compression levels the segmentation overlap score (Section 4.3.2) was calculated and is plotted in (d) along with the achieved compression ratios.

the uncompressed stack (Section 4.3.2). At WNL compression (q = 1σ) the segmentation overlap is almost perfect (Figure 4.16b), with an overlap score of 0.996. Even when increasing the quantization step to 4σ (Figure 4.16c), the overlap score stays at 0.98, and it only drops below 0.97 when the compression ratio is already above 120 (quantization step of 5σ, Figure 4.16d). We obtained similar results for a membrane segmentation pipeline that is used with Phallusia mammillata embryos (Figure 4.17 and Section 4.3.2).

Next, we evaluated our compression algorithm in the context of single-molecule localization microscopy (SMLM), and measured how the localization precision is affected when compressing the raw images at an increasing compression ratio. We compressed an SMLM dataset of immuno-detected microtubules (Figure 4.18a) at increasing compression levels. For WNL compression (q = 1σ) no deterioration of the image was visible (Figure 4.18b), and even for the case of q = 4σ the compression-induced errors were much


[Figure 4.17 panels: Phallusia membrane segmentation overlap at q=1.6σ (CR: 46.8) and q=4.8σ (CR: 98.2); overlap score and compression ratio plotted against compression level; scale bars 5 µm]

Figure 4.17: Influence of noise-dependent lossy compression on 3D membrane segmentation. A Phallusia mammillata embryo expressing the PH-citrine membrane marker was imaged in MuVi-SPIM [48], and 3D membrane segmentation was performed (Section 4.3.2) (a). The raw dataset was subsequently compressed at increasingly higher compression levels, and segmented using the same settings as the uncompressed data. To visualize segmentation mismatch, the results of the uncompressed (green) and compressed (magenta) datasets are overlaid in a single image (b, c; overlap in white). Representative compression levels were chosen at two different multiples of the photon shot noise, at q=1.6σ (b) and q=4.8σ (c). For all compression levels the segmentation overlap score (Section 4.3.2) was calculated and is plotted in (d) along with the achieved compression ratios.

[Figure 4.18 panels: single-molecule localization overlap at q=1σ (CR: 4.81) and q=4σ (CR: 11.07); relative localization error and compression ratio plotted against compression level; scale bars 500 nm and 100 nm]

Figure 4.18: Influence of noise-dependent lossy compression on single-molecule localization. Microtubules immunolabeled with Alexa Fluor 647 were imaged by SMLM (a). The raw dataset was compressed at increasingly higher compression levels and localized using the same settings as the uncompressed data. To visualize the localization mismatch, the results of the uncompressed (green) and compressed (magenta) datasets are overlaid in a single image (b, c; overlap in white). Two representative compression levels were chosen at q=1σ (b) and q=4σ (c). To assess the effects of compression on localization precision, a simulated dataset with known emitter positions was compressed at various levels. For all compression levels the relative localization error (normalized to the Cramér–Rao lower bound) was calculated and is plotted in (d) along with the achieved compression factors.

smaller than the resolvable features (Figure 4.18c). To quantify the impact of compression on the localization error, we used a simulated dataset (Section 4.3.2) and compared the localization output at different compression levels to the ground truth (Figure 4.18d). Lossless compression resulted in a compression ratio of 2.7, whereas WNL compression reached a compression ratio of 5.0 while increasing the localization error by only 4%. This coincides with the theoretical increase of image noise (Section 4.3.1). Furthermore, the increase in localization error did not depend on the signal-to-background ratio (Figure 4.19).


[Figure 4.19 plot: relative localization error (1–1.4) vs. number of photons per localization (500–50,000), for compression levels from lossless up to q = 2.5σ (legend: lossless, 0.25, 0.5, 0.75, 1, 1.5, 2, 2.5).]

Figure 4.19: Change in localization error depends only on the selected quantization step. We simulated multiple datasets (Section 4.3.2) with different average photon numbers per localization. The background was kept at a constant average of 20 photons/pixel. Datasets were compressed at multiple compression levels (see legend), and the localization error relative to the Cramér–Rao lower bound was calculated. The relative localization error depends only on the compression level, and not on the signal-to-background ratio.

4.3.3 HDF5 integration

To make our compression method more accessible, we developed a filter plugin for HDF5. Because of its versatility, HDF5 has emerged as the de facto standard in the open-source light-sheet microscopy field, and is also the basis for the widely used BigDataViewer [156] in Fiji [30]. Starting from version 1.8.11 (May 2013), HDF5 supports dynamically loaded filters and can load third-party components without any modification of the installed library. When loading a B3D compressed image in an HDF5 enabled application, the library automatically calls our filter plugin, decompresses the image on the GPU, and copies it back into CPU memory.

Since HDF5 is extensively used in scientific and image analysis software, this plugin mechanism is especially suitable for equipping existing software with compression and decompression capabilities. Reading compressed files requires no modification, and any software supporting HDF5 should be compatible. The following software packages have been tested with our filter plugin and are able to read B3D compressed files: Fiji (also with BigDataViewer), Imaris 8.4.1, Ilastik 1.2.0, Matlab R2016a, Python 2.7.10 and 3.5.3, LabVIEW 2015, and C++.

Writing compressed HDF5 files is possible if the user has direct control over the file writing routine, as an additional command is required to set up compression. This functionality has been tested in the following environments: C++, Python 2.7.10 and 3.5.3, and LabVIEW 2015.
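Conceptually, HDF5's dynamically loaded filters work as a registry keyed by a filter ID: each chunk is tagged with the ID of the filter that compressed it, and on read the library looks up that ID and calls the matching decompressor transparently. A minimal pure-Python sketch of this dispatch follows; zlib stands in for the B3D codec, and the filter ID 32016 is made up for this sketch (B3D registers its own ID with the HDF Group), so neither reflects the real plugin.

```python
import zlib

# Registry mapping HDF5-style filter IDs to (compress, decompress) pairs.
# 32016 is a hypothetical ID used only in this sketch.
FILTERS = {}

def register_filter(filter_id, compress, decompress):
    FILTERS[filter_id] = (compress, decompress)

def write_chunk(filter_id, raw_bytes):
    """Compress a chunk and tag it with its filter ID, as HDF5 does."""
    compress, _ = FILTERS[filter_id]
    return (filter_id, compress(raw_bytes))

def read_chunk(tagged_chunk):
    """Look up the stored filter ID and decompress transparently."""
    filter_id, payload = tagged_chunk
    _, decompress = FILTERS[filter_id]
    return decompress(payload)

# zlib is a stand-in for the GPU-based B3D codec.
register_filter(32016, zlib.compress, zlib.decompress)

data = bytes(range(256)) * 64            # 16 KiB of sample pixel data
chunk = write_chunk(32016, data)
assert read_chunk(chunk) == data          # round trip is lossless
```

This is why reading needs no application changes: the reader never names the codec, it only follows the ID stored in the file, while the writer must explicitly request the filter when creating the dataset.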


Code availability

Code used for analyzing data, the B3D source code, and compiled binaries, including a filter plugin for HDF5, are available for download at https://git.embl.de/balazs/B3D.

4.4 Discussion

In this chapter we presented a real-time image preprocessing and compression pipeline for multi-view light-sheet microscopy. This pipeline is capable of fusing opposing views recorded in the MuVi-SPIM faster than the acquisition rate, as well as compressing the acquired data during imaging. Both are important steps in data preprocessing, especially given the very high data production rate of the microscope. As a single experiment can reach tens of terabytes, any processing that can be performed during acquisition immensely facilitates further analysis.

When using the full 3D fusion pipeline, the data not only takes up twice as much storage space, but due to the demanding processing requirements the fusion can take a considerable amount of time. Since this step is necessary before any further evaluation of the data can take place, it acts as a bottleneck in an efficient workflow. The direct fusion greatly alleviates this issue.

A further benefit of the direct fusion is that it can also run while setting up experiments, in the “live” view, where the fused view replaces the two camera views. This helps users better assess image quality and expedites experiment setup.

For experiments where more than two opposing views are necessary and the sample is also rotated, the direct fusion cannot completely replace the previous 3D fusion method, as it cannot account for any transformation outside the common focal plane. However, even for these experiments the raw data is reduced by a factor of two, and the 3D fusion task is also simplified, since the number of distinct views is halved. This not only makes the fusion process faster, but also more robust, as the individual stacks will have larger overlapping regions owing to the more uniform image quality throughout the stacks.

The compression algorithm is implemented in C++ and allows for easy integration with various programming languages through an API. The library was tested on Linux (Ubuntu 16.04) and Windows 10. Additionally, we implemented a filter plugin for HDF5 which enables seamless integration into all software packages supporting the native HDF5 library, such as Matlab, Python, Imaris, or Ilastik. Due to B3D’s high compression ratio and decompression speed, loading data is often accelerated: for a state-of-the-art hard drive with 200 MB/s bandwidth, loading a 2 GB uncompressed 3D stack of images takes about 10 seconds. With an average compression ratio of 20-fold in the WNL mode, the loading time is reduced to 0.5 seconds followed by 2 seconds of decompression, which yields a factor of four speed-up.

It is also worth noting that the achieved WNL compression reduces the camera data rate to below 40 MB/s, well below the 1 Gb/s Ethernet standard. This makes it possible to use existing network infrastructure to move data to long-term storage, and even makes the use of cloud services feasible. Altogether, B3D, our efficient GPU-based image compression library, allows for exceptionally fast compression and greatly increases the compression ratio with its WNL scheme, offering a versatile tool that can be easily tailored to many high-speed microscopy environments.
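Both the four-fold loading speed-up and the sub-40 MB/s network rate follow directly from the figures quoted above, as this back-of-the-envelope calculation shows (all numbers are taken from the text):

```python
# I/O speed-up from compression, using the figures quoted in the text.
disk_bw_mb_s = 200          # hard drive bandwidth (MB/s)
stack_mb = 2000             # 2 GB uncompressed 3D stack
compression_ratio = 20      # average WNL compression ratio
decompress_s = 2.0          # GPU decompression time (s)

t_uncompressed = stack_mb / disk_bw_mb_s                      # 10 s
t_compressed = stack_mb / compression_ratio / disk_bw_mb_s    # 0.5 s
speedup = t_uncompressed / (t_compressed + decompress_s)      # 4.0

# Camera data rate after WNL compression vs. 1 Gb/s Ethernet.
camera_mb_s = 800           # sCMOS data rate (MB/s)
gigabit_mb_s = 1000 / 8     # 1 Gb/s = 125 MB/s
compressed_rate = camera_mb_s / compression_ratio             # 40 MB/s

print(speedup, compressed_rate <= gigabit_mb_s)   # → 4.0 True
```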


Chapter 5

Conclusions

Each chapter was concluded with its own discussion (see Section 2.6 and Section 4.4). Here, the new scientific results are summarized, and an outlook is provided on possible future applications.

5.1 New scientific results

Thesis I. I have designed and constructed a new light-sheet microscope suitable for high resolution imaging of delicate samples. A novel arrangement of two high numerical aperture objectives in 120 degrees combined with a tilted light-sheet allows for near isotropic resolution while increasing light collection efficiency by a factor of two.

Corresponding publications: [J1], [J2], [J3]

Dual Mouse-SPIM is a novel design for symmetric light-sheet microscopy. The use of high NA objectives in 120° not only increases volumetric resolution compared to the conventional 90° setup, but, due to the larger detection angle, also doubles light collection efficiency. This is especially beneficial for delicate, light-sensitive specimens, such as mouse embryos, since phototoxic effects are reduced while contrast is preserved.

I designed the optical path, the layout, and the custom mechanical components, and constructed the microscope. As part of the microscope I have designed a custom beam splitter unit that allows the use of a single galvanometric scanner to generate light-sheets for both objectives. I have also designed a custom detection merging unit that allows the use of a single camera for both detection views.

I have characterized the optical properties of the microscope and measured the illumination profile and point spread function. With a 3.6 µm thick light-sheet, a 95 µm field of view is evenly illuminated. Dual-view imaging of bead samples revealed a lateral resolution of 314 nm and an axial resolution of 496 nm, a 2.67× improvement compared to the axial resolution of a single detection objective lens. I have also demonstrated the imaging capabilities of the microscope on Drosophila melanogaster embryos and mouse zygotes.

Thesis II. I have developed a GPU-based image processing pipeline for multi-viewlight-sheet microscopy that enables real-time fusion of opposing views.

Corresponding publications: [C1], [C2], [C3]

I have developed a GPU-based image preprocessing pipeline which integrates directly into our universal microscope control software written in LabVIEW. The pipeline currently supports background subtraction and background masking; furthermore, it is capable of fusing opposing views of the same plane faster than real time. I have shown that it is possible to reduce the registration of opposing camera views from a 3D alignment to a 2D alignment without any negative effect on image quality or resolution. This massively reduces the necessary computing resources and allows the use of CUDA textures for faster-than-real-time image fusion. The processing speed of this implementation is 138 fps, an 18.3× increase compared to a single-threaded CPU implementation.
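Because the two detection arms observe the same focal plane, registration reduces to a 2D transform and fusion to a per-pixel combination. The toy sketch below illustrates only this idea: the horizontal mirror stands in for the 2D alignment (an assumption for illustration), and a plain arithmetic mean stands in for the pipeline's actual GPU-texture-based fusion weighting.

```python
# Toy 2D fusion of two opposing camera views of the same plane.
# View B is horizontally mirrored relative to view A (an assumed example
# of the 2D registration); after alignment, the views are averaged.

def fuse_opposing_views(view_a, view_b):
    """Mirror view B, then average with view A pixel by pixel."""
    aligned_b = [row[::-1] for row in view_b]   # 2D alignment: mirror only
    return [
        [(a + b) / 2.0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(view_a, aligned_b)
    ]

view_a = [[10, 20, 30],
          [40, 50, 60]]
view_b = [[32, 22, 12],     # mirrored copy of view_a with slight offsets
          [62, 52, 42]]

fused = fuse_opposing_views(view_a, view_b)
print(fused[0])   # → [11.0, 21.0, 31.0]
```

The key point is that no 3D resampling is needed: each output plane depends only on the two corresponding camera planes, which is what makes the streaming, faster-than-acquisition implementation possible.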

Thesis Group III. Real-time image compression.

Thesis III/1. I have developed a new image compression algorithm that enables noise-dependent lossy compression of light microscopy images and can reach a compression ratio of 100-fold while preserving the results of downstream data analysis steps. A fast CUDA implementation allows for real-time compression of high-speed microscopy images.

Corresponding publications: [J4], [C1], [C2], [C3]

Since many high-speed microscopy methods generate immense amounts of data, easily reaching terabytes per experiment, image compression is especially important for dealing with such datasets efficiently. Existing compression methods suitable for microscopy images cannot cope with the high data rate of modern sCMOS cameras (∼800 MB/s).

I developed a GPU-based parallel image compression algorithm called B3D, capable of over 1 GB/s throughput, allowing live image compression. To further reduce the data size, I developed a noise-dependent lossy compression mode that modifies the data only in a deterministic manner. The allowed difference for each pixel can be specified as a proportion of the inherent image noise, accounting for photon shot noise and camera readout noise. Due to the use of pixel prediction, the subjective image quality is higher than for other methods that simply quantize the square root of the images.
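The noise-dependent quantization can be sketched as follows: for each pixel, the expected noise σ is estimated from a camera model (shot noise plus readout noise), and values are rounded to multiples of q·σ. The camera parameters below are assumed for illustration only, and the sketch quantizes pixel values directly for brevity, whereas B3D actually quantizes prediction residuals.

```python
import math

# Assumed camera parameters, for illustration only.
OFFSET = 100        # camera offset in digital numbers (DN)
CONVERSION = 0.5    # electrons per DN
READ_NOISE_E = 1.6  # readout noise in electrons

def noise_sigma_dn(value_dn):
    """Per-pixel noise estimate in DN: photon shot noise + readout noise."""
    electrons = max(value_dn - OFFSET, 0) * CONVERSION
    sigma_e = math.sqrt(electrons + READ_NOISE_E ** 2)   # shot noise var = electrons
    return sigma_e / CONVERSION

def wnl_quantize(value_dn, q=1.0):
    """Round a pixel to the nearest multiple of q*sigma above the offset."""
    step = q * noise_sigma_dn(value_dn)
    return OFFSET + round((value_dn - OFFSET) / step) * step

# The quantization error never exceeds half the allowed step q*sigma.
pixel = 500
for q in (1.0, 4.0):
    err = abs(wnl_quantize(pixel, q) - pixel)
    assert err <= 0.5 * q * noise_sigma_dn(pixel)
```

Because the step grows with the square root of the signal, bright pixels tolerate larger absolute errors while the error stays a fixed fraction of the local noise, which is why downstream analysis is largely unaffected.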

Thesis III/2. I have shown that within-noise-level compression does not significantly affect the results of the most commonly used image processing tasks, and that it allows a 3.32× average increase in compression ratio compared to lossless mode.


Corresponding publications: [J4], [C1], [C2], [C3]

As data integrity in microscopy is paramount for drawing the right conclusions from experiments, using a lossy compression algorithm might be controversial. I have shown that the within-noise-level (WNL) mode of B3D does not significantly affect the results of several commonly used image analysis tasks. For light-sheet microscopy data I have shown that WNL compression introduces less variation into the image than the photon shot noise. When segmenting nuclei of Drosophila embryos and membranes of Phallusia embryos, the overlap of the segmented regions of uncompressed and WNL compressed datasets was 99.6% and 94.5%, respectively, while the compression ratios were 19.83 for Drosophila and 40.01 for Phallusia embryos. For single molecule localization microscopy data I have shown that WNL compression introduces only a 4% increase in localization uncertainty, while the average compression ratio is increased from 1.44 (lossless) to 4.96 (WNL). I have also shown that the change in localization error due to compression does not depend on the SNR of the input images.

5.2 Application of the results

Both the new Dual Mouse-SPIM microscope and the GPU-based image processing and compression pipeline have direct applications in light-sheet imaging of embryonic development.

Multiple potential collaborators have indicated their interest in using the Dual Mouse-SPIM for their studies of mouse embryonic development. In the context of research on symmetry breaking events in the pre-implantation and early post-implantation stages, this system can be used for imaging larger specimens from multiple directions, which is not possible on previous microscopes, and could allow previously unknown mechanisms to be observed. Another possible application is investigating chromosome missegregation mechanisms in the first few divisions of embryonic development. The increased axial resolution of this system will make it possible to track each individual chromosome during the division process, which is not feasible on current microscopes due to their insufficient axial resolution.

The GPU-based image processing pipeline, especially the 2D fusion of opposing views, is already being used on our lab’s workhorse microscope, the MuVi-SPIM. Being able to fuse the two views of the opposing objectives during imaging not only results in considerable storage space savings, but also significantly speeds up data analysis.

The image compression algorithm, B3D, although developed with light-sheet microscopy in mind, has a much wider use case. Any kind of high-speed, high-throughput light-microscopy experiment can benefit from the massive data reduction offered by the within-noise-level mode. Since the compression can also be done immediately during imaging, not only the storage requirements but also the data bandwidth is reduced, which renders high-performance RAID arrays and 10 Gbit networks unnecessary, further reducing costs. Due to the similarly high decompression speed, reading the data is also accelerated, which can be beneficial for data browsing and 3D rendering applications. Several companies in different fields have already expressed interest in the compression library, including Bitplane AG (3D data analysis and visualization), Luxendo GmbH (light-sheet microscopy), and Hamamatsu Photonics K.K. (camera and sensor manufacturing).


Appendix

A Bill of materials

Optical components

• Lenses
– 2×Nikon CFI75 Apo LWD 25x/1.10w water dipping objectives
– 2×Nikon ITL200 200mm tube lens
– 2×200mm x 25mm Dia. achromatic lens (Edmund Optics, #47-645)
– 2×400mm x 40mm Dia. achromatic lens (Edmund Optics, #49-281)
– 3×75mm x 25mm Dia. achromatic lens (Edmund Optics, #47-639)
– SILL 112751 1:2 beam expander

• Filters
– 2×BrightLine quad-edge dichroic beam splitter (Semrock, Di03-R405/488/561/635-t3-25x36)
– EdgeBasic 488 nm long pass filter (Semrock, BLP01-488R-25)
– BrightLine 525/50 band pass filter (Semrock, FF03-525/50-25)
– EdgeBasic 561 nm long pass filter (Semrock, BLP02-561R-25)
– RazorEdge 647 nm long pass filter (Semrock, LP02-647RU-25)

• Mirrors
– 6×1” Broadband Dielectric Elliptical Mirror (Thorlabs, BBE1-E03)
– 2×30mm Broadband 1/10λ Mirror (OptoSigma, TFMS-30C05-4/11)
– 10 Pack of 1” Protected Silver Mirrors (Thorlabs, PF10-03-P01-10)
– Knife-Edge Right-Angle Prism Dielectric Mirror (Thorlabs, MRAK25-E02)

Mechanical components

• 40mm travel range pneumatic cylinder (Airtac, HM-10-040)
• 5/2-way electric valve (Airtac, M-20-510-HN)
• 2×25mm extended contact bearing steel stage (OptoSigma, TSDH-251C)
• 30x90 mm stainless steel slide (OptoSigma, IPWS-F3090)
• Close proximity gimbal mirror mount (Thorlabs, GMB1/M)
• 30mm cage elliptical mirror mount (Thorlabs, KCB1E)
• Melles Griot Performance Plus optical breadboard
• 4×Newport Stabilizer, High Performance Laminar Flow Isolator, I-2000 Series


Custom parts

All parts are (anodized) aluminium, unless stated otherwise.

• 2×mirror holder block
• Front plate for objective, chamber and mirror mounting
• Imaging chamber (PEEK)
• Wedge ring and matching threaded ring to fasten objectives
• Camera bridge
• Illumination splitter unit
• Adapter plates to mount stages

Electronics

• Embedded system (National Instruments, cRIO-9068) equipped with:
– 2×C series digital I/O card (NI 9401)
– 1×C series 100 kS/s 4-channel voltage output module (NI 9263)
– 1×C series 25 kS/s 16-channel voltage output module (NI 9264)
• Omicron SOLE-3 laser combiner, with 488, 561 and 638 nm laser lines
• Andor Zyla 4.2 sCMOS camera
• Galvanometric scanner mirror (Cambridge Technology, 6210B)
• 6 position filter wheel (Ludl Electronic Products, 96A361)
• Filter wheel controller unit (Ludl Electronic Products, MAC5000)
• 2×piezoelectric stage (Nanos Instruments, LPS-30-30-1-V2_61-S-N)
• 2×stage controller board (Nanos Instruments, BMC101)


B Supplementary Tables

Table B1: Data sizes in microscopy. Typical devices used for confocal microscopy, high-content screening, single-molecule localization microscopy and light-sheet microscopy, and their data production characteristics. Data visualized on Figure 4.1.

            imaging device                                  image size   frame rate   data rate   data size
SPIM        2×sCMOS camera (e.g. Hamamatsu ORCA Flash4.0)   2048×2048    50/s         800 MB/s    10 TB
SMLM        2×EMCCD camera (e.g. Andor iXon Ultra 897)      512×512      56/s         56 MB/s     500 GB
screening   CCD camera (e.g. Hamamatsu ORCA-R2)             1344×1024    8.5/s        22 MB/s     5 TB
confocal    Zeiss LSM 880, 10 channels                      512×512      5/s          12.5 MB/s   50 GB

Table B2: Lossless compression performance. B3D is compared with various popular lossless image compression methods regarding write speed, read speed and compression ratio (original size / compressed size). Data visualized on Figure 4.13.

                    write speed     read speed      CR       file size
B3D                 1,115.08 MB/s   928.97 MB/s     9.861    100%
KLB                 283.19 MB/s     619.95 MB/s     10.571   93.28%
JPEG2000            31.94 MB/s      26.38 MB/s      11.782   83.69%
TIFF uncompressed   202.32 MB/s     161.08 MB/s     1.00     986.1%
TIFF + LZW          40.85 MB/s      102.37 MB/s     5.822    169.37%


Table B3: Datasets used for benchmarking compression performance.

Dataset name   Modality    Description                                                                                                                        Size (MB)
drosophila     SPIM        dataset acquired in MuVi-SPIM of a Drosophila melanogaster embryo expressing H2Av-mCherry nuclear marker                           494.53
zebrafish      SPIM        dataset acquired in MuVi-SPIM of a zebrafish embryo expressing b-actin::GCaMP6f calcium sensor                                     2,408.00
phallusia      SPIM        dataset acquired in MuVi-SPIM of a Phallusia mammillata embryo expressing PH-citrine membrane marker                               1,323.88
simulation     SMLM        MT0.N1.LD-2D simulated dataset of microtubules labeled with Alexa Fluor 647 from the SMLMS 2016 challenge                          156.22
microtubules   SMLM        microtubules immuno-labeled with Alexa Fluor 647-bound antibodies in U2OS cells                                                    1,643.86
lifeact        SMLM        actin network labeled with LifeAct-tdEOS in U2OS cells                                                                             3,316.15
dapi           screening   wide-field fluorescence images of DAPI-stained HeLa Kyoto cells [148]                                                              1,005.38
vsvg           screening   wide-field fluorescence images of CFP-tsO45G proteins in HeLa Kyoto cells [148]                                                    1,005.38
membrane       screening   wide-field fluorescence images of membrane-localized CFP-tsO45G proteins labeled with AlexaFluor647 in HeLa Kyoto cells [148]      1,005.38


C Light collection efficiency of an objective

Let’s define the light collection efficiency η as the ratio of collected photons to all emitted photons:

η = N_collected / N_emitted

Since we can assume that the directions of photons emitted from a fluorescent molecule are random, the light collection efficiency corresponds to the solid angle subtended by the objective front lens at the focal point. To calculate this, consider a sphere centered at the focal point, and calculate the surface area of the spherical cap corresponding to the objective acceptance angle α (Fig. C1a). The area of the cap can be expressed as a function of the angle:

A_cap = 2πr²(1 − cos α)

The surface area of the full sphere is calculated as:

A_sph = 4πr²

In both equations r is the radius of the sphere. From here, the light collection efficiency can be calculated as:

η = N_collected / N_emitted = A_cap / A_sph = (1 − cos α) / 2

As most objectives are characterized by their numerical aperture, we also plot η as a function of the NA on Figure C1b.
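The formula is easy to evaluate numerically: for an objective specified by its NA, the half acceptance angle is α = arcsin(NA/n), where n is the refractive index of the immersion medium. The water-immersion index n ≈ 1.333 below is an assumption made to match the water-dipping objectives used in this thesis.

```python
import math

def collection_efficiency(na, n=1.333):
    """Fraction of emitted photons collected by one objective:
    eta = (1 - cos(alpha)) / 2, with alpha = arcsin(NA / n)."""
    alpha = math.asin(na / n)          # half acceptance angle
    return (1.0 - math.cos(alpha)) / 2.0

# NA 1.1 water-dipping objective, as used in the Dual Mouse-SPIM.
eta = collection_efficiency(1.1)
print(round(eta, 3))                   # roughly 0.22, i.e. ~22% of photons
```

With two such detection objectives the collected fraction doubles, consistent with the factor-of-two gain in light collection efficiency claimed in Thesis I.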

[Figure C1 panels: (a) spherical cap of half-angle α on a sphere of radius r; (b) plot of η (0–0.5) as a function of NA (0–1.4).]

Figure C1: Light collection efficiency of an objective. (a) Light collection efficiency is the ratio of photons collected by the objective to all emitted photons. If photons are emitted randomly in all directions, it equals the surface ratio of the spherical cap (blue) to the whole sphere. (b) Light collection efficiency (η) as a function of the numerical aperture (NA).


D 3D model of Dual Mouse-SPIM

To view the 3D figure on the next page, Adobe Reader is required. Click on the figure to activate the 3D view.

Navigation:

• zoom: mouse wheel / right click and drag vertically
• rotate: left click and drag
• pan: left + right click and drag / Ctrl + click and drag


Figure D1: 3D model of Dual Mouse-SPIM.


References

The author’s publications

[J1] Patrick Hoyer, Gustavo de Medeiros, Bálint Balázs, Nils Norlin, Christina Besir, Janina Hanne, Hans-Georg Kräusslich, Johann Engelhardt, Steffen J. Sahl, Stefan W. Hell, and Lars Hufnagel. “Breaking the diffraction limit of light-sheet fluorescence microscopy by RESOLFT”. Proceedings of the National Academy of Sciences 113.13 (Mar. 2016), pp. 3442–3446. doi: 10.1073/pnas.1522292113 (cit. on pp. 17, 18, 90).

[J2] Petr Strnad, Stefan Gunther, Judith Reichmann, Uros Krzic, Balint Balazs, Gustavo de Medeiros, Nils Norlin, Takashi Hiiragi, Lars Hufnagel, and Jan Ellenberg. “Inverted light-sheet microscope for imaging mouse pre-implantation development”. Nature Methods 13.2 (Feb. 2016), pp. 139–142. doi: 10.1038/nmeth.3690 (cit. on pp. 17, 18, 24, 25, 30, 90).

[J3] Gustavo de Medeiros, Bálint Balázs, and Lars Hufnagel. “Light-sheet imaging of mammalian development”. Seminars in Cell & Developmental Biology 55 (July 2016), pp. 148–155. doi: 10.1016/j.semcdb.2015.11.001 (cit. on pp. 18, 90).

[J4] Balint Balazs, Joran Deschamps, Marvin Albert, Jonas Ries, and Lars Hufnagel. “A real-time compression library for microscopy images”. bioRxiv (July 2017), p. 164624. doi: 10.1101/164624 (cit. on pp. 91, 92).

The author’s other publications

[J5] Zoltán Jakus, Edina Simon, Bálint Balázs, and Attila Mócsai. “Genetic deficiency of Syk protects mice from autoantibody-induced arthritis”. Arthritis and Rheumatism 62.7 (July 2010), pp. 1899–1910. doi: 10.1002/art.27438.


[J6] Balázs Győrffy, Zsombor Benke, András Lánczky, Bálint Balázs, Zoltán Szállási, József Timár, and Reinhold Schäfer. “RecurrenceOnline: an online analysis tool to determine breast cancer recurrence and hormone receptor status using microarray data”. Breast Cancer Research and Treatment (July 2011). doi: 10.1007/s10549-011-1676-y.

[J7] Weiwei Shi, Balint Balazs, Balazs Györffy, Tingting Jiang, W. Fraser Symmans, Christos Hatzis, and Lajos Pusztai. “Combined analysis of gene expression, DNA copy number, and mutation profiling data to display biological process anomalies in individual breast cancers”. Breast Cancer Research and Treatment 144.3 (Mar. 2014), pp. 561–568. doi: 10.1007/s10549-014-2904-z.

The author’s conference presentations

[C1] Bálint Balázs, Marvin Albert, and Lars Hufnagel. “GPU-based image processing for multiview microscopy data”. Light Sheet Fluorescence Microscopy International Conference. Sheffield, UK, Sept. 2016 (cit. on pp. 91, 92).

[C2] Bálint Balázs, Marvin Albert, and Lars Hufnagel. “GPU-based image processing for multi-view microscopy data”. Focus on Microscopy. Taipei, Taiwan, Mar. 2016 (cit. on pp. 91, 92).

[C3] Bálint Balázs, Marvin Albert, and Lars Hufnagel. “GPU-based image processing for multi-view microscopy data”. Focus on Microscopy. Bordeaux, France, Apr. 2017 (cit. on pp. 91, 92).


References cited in the thesis

[1] Robert Hooke. Micrographia: or Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses. With Observations and Inquiries Thereupon. London: J. Martyn and J. Allestry, 1665 (cit. on p. 1).

[2] Philipp J Keller and Ernst HK Stelzer. “Quantitative in vivo imaging of entire embryos with Digital Scanned Laser Light Sheet Fluorescence Microscopy”. Current Opinion in Neurobiology 18.6 (Dec. 2008), pp. 624–632. doi: 10.1016/j.conb.2009.03.008 (cit. on p. 1).

[3] Jan Huisken and Didier Y. R. Stainier. “Selective plane illumination microscopy techniques in developmental biology”. Development 136.12 (June 2009), pp. 1963–1975. doi: 10.1242/dev.022426 (cit. on pp. 1, 18).

[4] Michael Weber and Jan Huisken. “Light sheet microscopy for real-time developmental biology”. Current Opinion in Genetics & Development 21.5 (2011), pp. 566–572. doi: 10.1016/j.gde.2011.09.009 (cit. on p. 1).

[5] Raju Tomer, Khaled Khairy, and Philipp J Keller. “Shedding light on the system: studying embryonic development with light sheet microscopy”. Current Opinion in Genetics & Development 21.5 (Oct. 2011), pp. 558–565. doi: 10.1016/j.gde.2011.07.003 (cit. on p. 1).

[6] Hans-Ulrich Dodt, Ulrich Leischner, Anja Schierloh, Nina Jährling, Christoph Peter Mauch, Katrin Deininger, Jan Michael Deussing, Matthias Eder, Walter Zieglgänsberger, and Klaus Becker. “Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain”. Nature Methods 4.4 (Mar. 2007), pp. 331–336. doi: 10.1038/nmeth1036 (cit. on pp. 1, 18, 28).

[7] Bi-Chang Chen et al. “Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution”. Science 346.6208 (Oct. 2014), p. 1257998. doi: 10.1126/science.1257998 (cit. on pp. 1, 18, 55, 56).

[8] P. Philippe Laissue, Rana A. Alghamdi, Pavel Tomancak, Emmanuel G. Reynaud, and Hari Shroff. “Assessing phototoxicity in live fluorescence imaging”. Nature Methods 14.7 (July 2017), pp. 657–661. doi: 10.1038/nmeth.4344 (cit. on pp. 3, 4, 32).

[9] Jeff W. Lichtman and José-Angel Conchello. “Fluorescence microscopy”. Nature Methods 2.12 (Dec. 2005), pp. 910–919. doi: 10.1038/nmeth817 (cit. on p. 3).

[10] Alberto Diaspro. Optical Fluorescence Microscopy: From the Spectral to the Nano Dimension. 1st ed. Springer-Verlag Berlin Heidelberg, 2011 (cit. on p. 3).

[11] G. G. Stokes. “On the Change of Refrangibility of Light”. Philosophical Transactions of the Royal Society of London 142 (Jan. 1852), pp. 463–562. doi: 10.1098/rstl.1852.0022 (cit. on p. 3).

[12] Spectra Viewer. url: https://www.chroma.com/spectra-viewer (visited on 09/11/2017) (cit. on p. 4).

[13] Robert Bacallao, Morgane Bomsel, Ernst H. K. Stelzer, and Jan De Mey. “Guiding Principles of Specimen Preservation for Confocal Fluorescence Microscopy”. Handbook of Biological Confocal Microscopy. Springer, Boston, MA, 1990, pp. 197–205. doi: 10.1007/978-1-4615-7133-9_18 (cit. on p. 5).

[14] O. Shimomura, F. H. Johnson, and Y. Saiga. “Extraction, purification and properties of aequorin, a bioluminescent protein from the luminous hydromedusan, Aequorea”. Journal of Cellular and Comparative Physiology 59 (June 1962), pp. 223–239 (cit. on p. 5).

[15] Douglas C. Prasher, Virginia K. Eckenrode, William W. Ward, Frank G. Prendergast, and Milton J. Cormier. “Primary structure of the Aequorea victoria green-fluorescent protein”. Gene 111.2 (Feb. 1992), pp. 229–233. doi: 10.1016/0378-1119(92)90691-H (cit. on p. 5).

[16] M. Chalfie, Y. Tu, G. Euskirchen, W. W. Ward, and D. C. Prasher. “Green fluorescent protein as a marker for gene expression”. Science 263.5148 (Feb. 1994), pp. 802–805. doi: 10.1126/science.8303295 (cit. on p. 5).

[17] Adam Amsterdam, Shuo Lin, and Nancy Hopkins. “The Aequorea victoria Green Fluorescent Protein Can Be Used as a Reporter in Live Zebrafish Embryos”. Developmental Biology 171.1 (Sept. 1995), pp. 123–129. doi: 10.1006/dbio.1995.1265 (cit. on p. 5).


[18] Masaru Okabe, Masahito Ikawa, Katsuya Kominami, Tomoko Nakanishi, and Yoshitake Nishimune. “‘Green mice’ as a source of ubiquitous green cells”. FEBS Letters 407.3 (May 1997). First EGFP mouse, pp. 313–319. doi: 10.1016/S0014-5793(97)00313-X (cit. on p. 5).

[19] R. Heim, D. C. Prasher, and R. Y. Tsien. “Wavelength mutations and posttranslational autoxidation of green fluorescent protein”. Proceedings of the National Academy of Sciences of the United States of America 91.26 (Dec. 1994), pp. 12501–12504 (cit. on p. 5).

[20] Roger Heim and Roger Y. Tsien. “Engineering green fluorescent protein for improved brightness, longer wavelengths and fluorescence resonance energy transfer”. Current Biology 6.2 (Feb. 1996), pp. 178–182. doi: 10.1016/S0960-9822(02)00450-5 (cit. on p. 5).

[21] Brendan P. Cormack, Raphael H. Valdivia, and Stanley Falkow. “FACS-optimized mutants of the green fluorescent protein (GFP)”. Gene. Fluorescent Proteins and Applications 173.1 (1996), pp. 33–38. doi: 10.1016/0378-1119(95)00685-0 (cit. on p. 5).

[22] Robert F. Service. “Three Scientists Bask in Prize’s Fluorescent Glow”. Science 322.5900 (Oct. 2008), p. 361. doi: 10.1126/science.322.5900.361 (cit. on p. 5).

[23] E. Abbe. “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung”. Archiv für mikroskopische Anatomie 9.1 (1873), pp. 413–418. doi: 10.1007/BF02956173 (cit. on p. 7).

[24] Max Born and Emil Wolf. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Elsevier, June 2013 (cit. on pp. 8, 9).

[25] C. J. R. Sheppard and H. J. Matthews. “Imaging in high-aperture optical systems”. JOSA A 4.8 (Aug. 1987), pp. 1354–1360. doi: 10.1364/JOSAA.4.001354 (cit. on p. 8).

[26] Lord Rayleigh F.R.S. “XXXI. Investigations in optics, with special reference to the spectroscope”. Philosophical Magazine 8.49 (Oct. 1879), pp. 261–274. doi: 10.1080/14786447908639684 (cit. on p. 9).

[27] Sarah Frisken Gibson and Frederick Lanni. “Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy”. JOSA A 9.1 (Jan. 1992), pp. 154–166. doi: 10.1364/JOSAA.9.000154 (cit. on pp. 10, 51).

[28] Jizhou Li, Feng Xue, and Thierry Blu. “Fast and accurate three-dimensional point spread function computation for fluorescence microscopy”. JOSA A 34.6 (June 2017), pp. 1029–1034. doi: 10.1364/JOSAA.34.001029 (cit. on pp. 11, 51).

[29] H. Kirshner, D. Sage, and M. Unser. “3D PSF Models for Fluorescence Microscopy in ImageJ”. Proceedings of the Twelfth International Conference on Methods and Applications of Fluorescence Spectroscopy, Imaging and Probes (MAF’11). ImageJ PSF generator. Strasbourg, French Republic, Sept. 2011, p. 154 (cit. on pp. 11, 48).

[30] Johannes Schindelin et al. “Fiji: an open-source platform for biological-image analysis”. Nature Methods 9.7 (July 2012), pp. 676–682. doi: 10.1038/nmeth.2019 (cit. on pp. 11, 48, 51, 87).

[31] Jim Swoger, Jan Huisken, and Ernst H. K. Stelzer. “Multiple imaging axis microscopy improves resolution for thick-sample applications”. Optics Letters 28.18 (Sept. 2003), p. 1654. doi: 10.1364/OL.28.001654 (cit. on p. 13).

[32] Jim Swoger, Peter Verveer, Klaus Greger, Jan Huisken, and Ernst H. K. Stelzer. “Multi-view image fusion improves resolution in three-dimensional microscopy”. Optics Express 15.13 (2007), p. 8029. doi: 10.1364/OE.15.008029 (cit. on pp. 13, 31).

[33] Marvin Minsky. “Microscopy apparatus”. US3013467 A. U.S. Classification 356/432, 359/389, 348/79, 250/215; International Classification G02B21/00; Cooperative Classification G02B21/0024, G02B21/002; European Classification G02B21/00M4A, G02B21/00M4. Dec. 1961 (cit. on p. 13).

[34] Paul Davidovits and M. David Egger. “Scanning Laser Microscope”. Nature 223.5208 (Aug. 1969), p. 831. doi: 10.1038/223831a0 (cit. on p. 13).


[35] Ernst H. K. Stelzer and Steffen Lindek. “Fundamental reduction of the observation volume in far-field light microscopy by detection orthogonal to the illumination axis: confocal theta microscopy”. Optics Communications 111.5–6 (Oct. 1994), pp. 536–547. doi: 10.1016/0030-4018(94)90533-9 (cit. on p. 15).

[36] Ralph Gräf, Jens Rietdorf, and Timo Zimmermann. “Live Cell Spinning Disk Microscopy”. Microscopy Techniques. Advances in Biochemical Engineering. DOI: 10.1007/b102210. Springer, Berlin, Heidelberg, 2005, pp. 57–75 (cit. on pp. 15, 23).

[37] G. S. Kino. “Intermediate Optics in Nipkow Disk Microscopes”. Handbook of Biological Confocal Microscopy. DOI: 10.1007/978-1-4615-7133-9_10. Springer, Boston, MA, 1990, pp. 105–111 (cit. on p. 15).

[38] Akihiko Nakano. “Spinning-disk Confocal Microscopy — A Cutting-Edge Tool for Imaging of Membrane Traffic”. Cell Structure and Function 27.5 (2002), pp. 349–355. doi: 10.1247/csf.27.349 (cit. on p. 15).

[39] H. Siedentopf and R. Zsigmondy. “Über Sichtbarmachung und Größenbestimmung ultramikroskopischer Teilchen, mit besonderer Anwendung auf Goldrubingläser”. Annalen der Physik 315.1 (1902), pp. 1–39. doi: 10.1002/andp.19023150102 (cit. on p. 16).

[40] A. H. Voie, D. H. Burns, and F. A. Spelman. “Orthogonal-plane fluorescence optical sectioning: three-dimensional imaging of macroscopic biological specimens”. Journal of Microscopy 170.Pt 3 (June 1993), pp. 229–236. PMID: 8371260 (cit. on p. 16).

[41] Arne H. Voie and Francis A. Spelman. “Three-dimensional reconstruction of the cochlea from two-dimensional images of optical sections”. Computerized Medical Imaging and Graphics 19.5 (Sept. 1995), pp. 377–384. doi: 10.1016/0895-6111(95)00034-8 (cit. on p. 17).

[42] Eran Fuchs, Jules S. Jaffe, Richard A. Long, and Farooq Azam. “Thin laser light sheet microscope for microbial oceanography”. Optics Express 10.2 (Jan. 2002), pp. 145–154. doi: 10.1364/OE.10.000145 (cit. on p. 17).

[43] Jan Huisken, Jim Swoger, Filippo Del Bene, Joachim Wittbrodt, and Ernst H. K. Stelzer. “Optical Sectioning Deep Inside Live Embryos by Selective Plane Illumination Microscopy”. Science 305.5686 (2004), pp. 1007–1009 (cit. on pp. 17, 19).

[44] Jérémie Capoulade, Malte Wachsmuth, Lars Hufnagel, and Michael Knop. “Quantitative fluorescence imaging of protein diffusion and interaction in living cells”. Nature Biotechnology 29.9 (Sept. 2011), pp. 835–839. doi: 10.1038/nbt.1928 (cit. on p. 17).

[45] Yicong Wu, Alireza Ghitani, Ryan Christensen, Anthony Santella, Zhuo Du, Gary Rondeau, Zhirong Bao, Daniel Colón-Ramos, and Hari Shroff. “Inverted selective plane illumination microscopy (iSPIM) enables coupled cell identity lineaging and neurodevelopmental imaging in Caenorhabditis elegans”. Proceedings of the National Academy of Sciences 108.43 (Oct. 2011). iSPIM, tracking C. elegans nuclei, pp. 17708–17713. doi: 10.1073/pnas.1108494108 (cit. on pp. 17, 18).

[46] Yicong Wu et al. “Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy”. Nature Biotechnology 31.11 (Nov. 2013), pp. 1032–1038. doi: 10.1038/nbt.2713 (cit. on pp. 17, 31).

[47] Jan Huisken and Didier Y. R. Stainier. “Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM)”. Optics Letters 32.17 (2007), pp. 2608–2610. doi: 10.1364/OL.32.002608 (cit. on pp. 17, 18, 22).

[48] Uros Krzic, Stefan Gunther, Timothy E. Saunders, Sebastian J. Streichan, and Lars Hufnagel. “Multiview light-sheet microscope for rapid in toto imaging”. Nature Methods 9.7 (July 2012). MuVi-SPIM, pp. 730–733. doi: 10.1038/nmeth.2064 (cit. on pp. 17, 18, 66–68, 73, 82, 85, 86).

[49] Raju Tomer, Khaled Khairy, Fernando Amat, and Philipp J. Keller. “Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy”. Nature Methods 9.7 (July 2012). SimView, pp. 755–763. doi: 10.1038/nmeth.2062 (cit. on pp. 17, 18).

[50] Benjamin Schmid, Gopi Shah, Nico Scherf, Michael Weber, Konstantin Thierbach, Citlali Pérez Campos, Ingo Roeder, Pia Aanstad, and Jan Huisken. “High-speed panoramic light-sheet microscopy reveals global endodermal cell dynamics”. Nature Communications 4 (July 2013). Zebrafish sphere projection + FEP tube. doi: 10.1038/ncomms3207 (cit. on pp. 17, 18).


[51] Raghav K. Chhetri, Fernando Amat, Yinan Wan, Burkhard Höckendorf, William C. Lemon, and Philipp J. Keller. “Whole-animal functional and developmental imaging with isotropic spatial resolution”. Nature Methods 12.12 (Dec. 2015), pp. 1171–1178. doi: 10.1038/nmeth.3632 (cit. on pp. 17, 18).

[52] Philipp J. Keller, Annette D. Schmidt, Joachim Wittbrodt, and Ernst H. K. Stelzer. “Reconstruction of Zebrafish Early Embryonic Development by Scanned Light Sheet Microscopy”. Science 322.5904 (Nov. 2008), pp. 1065–1069. doi: 10.1126/science.1162493 (cit. on pp. 18, 19, 23, 68).

[53] Takehiko Ichikawa, Kenichi Nakazato, Philipp J. Keller, Hiroko Kajiura-Kobayashi, Ernst H. K. Stelzer, Atsushi Mochizuki, and Shigenori Nonaka. “Live Imaging of Whole Mouse Embryos during Gastrulation: Migration Analyses of Epiblast and Mesodermal Cells”. PLoS ONE 8.7 (July 2013). Post-implantation, e64506. doi: 10.1371/journal.pone.0064506 (cit. on pp. 18, 26, 27).

[54] Ryan S. Udan, Victor G. Piazza, Chih-Wei Hsu, Anna-Katerina Hadjantonakis, and Mary E. Dickinson. “Quantitative imaging of cell dynamics in mouse embryos using light-sheet microscopy”. Development (Oct. 2014). Post-implantation, dev.111021. doi: 10.1242/dev.111021 (cit. on pp. 18, 26).

[55] Abhishek Kumar et al. “Dual-view plane illumination microscopy for rapid and spatially isotropic imaging”. Nature Protocols 9.11 (Nov. 2014). diSPIM, pp. 2555–2573. doi: 10.1038/nprot.2014.172 (cit. on p. 18).

[56] Francesca Cella Zanacchi, Zeno Lavagnino, Michela Perrone Donnorso, Alessio Del Bue, Laura Furia, Mario Faretta, and Alberto Diaspro. “Live-cell 3D super-resolution imaging in thick biological samples”. Nature Methods 8.12 (Dec. 2011), pp. 1047–1049. doi: 10.1038/nmeth.1744 (cit. on p. 18).

[57] Mike Friedrich, Qiang Gan, Vladimir Ermolayev, and Gregory S. Harms. “STED-SPIM: Stimulated Emission Depletion Improves Sheet Illumination Microscopy Resolution”. Biophysical Journal 100.8 (Apr. 2011), pp. L43–L45. doi: 10.1016/j.bpj.2010.12.3748 (cit. on p. 18).

[58] Philipp J. Keller, Annette D. Schmidt, Anthony Santella, Khaled Khairy, Zhirong Bao, Joachim Wittbrodt, and Ernst H. K. Stelzer. “Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy”. Nature Methods 7.8 (Aug. 2010), pp. 637–642. doi: 10.1038/nmeth.1476 (cit. on pp. 18, 56).

[59] Bo-Jui Chang, Victor Didier Perez Meza, and Ernst H. K. Stelzer. “csiLSFM combines light-sheet fluorescence microscopy and coherent structured illumination for a lateral resolution below 100 nm”. Proceedings of the National Academy of Sciences 114.19 (May 2017), pp. 4869–4874. doi: 10.1073/pnas.1609278114 (cit. on pp. 18, 56).

[60] K. Greger, J. Swoger, and E. H. K. Stelzer. “Basic building units and properties of a fluorescence single plane illumination microscope”. Review of Scientific Instruments 78.2 (Feb. 2007), p. 023705. doi: 10.1063/1.2428277 (cit. on p. 20).

[61] Bahaa E. A. Saleh and Malvin Carl Teich. Fundamentals of Photonics. John Wiley & Sons, Mar. 2007 (cit. on p. 20).

[62] Sonja Nowotschin, Anna Ferrer-Vaquer, and Anna-Katerina Hadjantonakis. “Chapter 20 - Imaging Mouse Development with Confocal Time-Lapse Microscopy”. Methods in Enzymology. Ed. by Paul M. Wassarman and Philippe M. Soriano. Vol. 476. Guide to Techniques in Mouse Development, Part A: Mice, Embryos, and Cells, 2nd Edition. Academic Press, 2010, pp. 351–377 (cit. on p. 23).

[63] David M. Shotton. “Confocal scanning optical microscopy and its applications for biological specimens”. Journal of Cell Science 94.2 (Oct. 1989), pp. 175–206 (cit. on p. 23).

[64] James Jonkman and Claire M. Brown. “Any Way You Slice It—A Comparison of Confocal Microscopy Techniques”. Journal of Biomolecular Techniques: JBT 26.2 (July 2015), pp. 54–65. doi: 10.7171/jbt.15-2602-003 (cit. on p. 23).

[65] Silvia Aldaz, Luis M. Escudero, and Matthew Freeman. “Live Imaging of Drosophila Imaginal Disc Development”. Proceedings of the National Academy of Sciences 107.32 (Aug. 2010), pp. 14217–14222. doi: 10.1073/pnas.1008623107 (cit. on p. 23).

[66] Jean-Léon Maître, Hervé Turlier, Rukshala Illukkumbura, Björn Eismann, Ritsuya Niwayama, François Nédélec, and Takashi Hiiragi. “Asymmetric division of contractile domains couples cell positioning and fate specification”. Nature 536.7616 (Aug. 2016), pp. 344–348. doi: 10.1038/nature18958 (cit. on p. 23).

[67] Emmanuel G. Reynaud, Uroš Kržič, Klaus Greger, and Ernst H. K. Stelzer. “Light sheet-based fluorescence microscopy: more dimensions, more photons, and less photodamage”. HFSP Journal 2.5 (Oct. 2008), pp. 266–275. doi: 10.2976/1.2974980 (cit. on p. 23).

[68] E. H. K. Stelzer. “Contrast, resolution, pixelation, dynamic range and signal-to-noise ratio: fundamental limits to resolution in fluorescence light microscopy”. Journal of Microscopy 189.1 (Jan. 1998). This is also the reason why it is sufficient to store the square root of the count rate: that number contains all the available information, pp. 15–24. doi: 10.1046/j.1365-2818.1998.00290.x (cit. on p. 24).

[69] Adam S. Doherty and Richard M. Schultz. “Culture of Preimplantation Mouse Embryos”. Developmental Biology Protocols. Methods in Molecular Biology™. DOI: 10.1385/1-59259-685-1:47. Humana Press, 2000, pp. 47–52 (cit. on p. 24).

[70] Jens-Erik Dietrich and Takashi Hiiragi. “Stochastic patterning in the mouse pre-implantation embryo”. Development 134.23 (Dec. 2007), pp. 4219–4231. doi: 10.1242/dev.003798 (cit. on p. 24).

[71] W. Denk, J. H. Strickler, and W. W. Webb. “Two-photon laser scanning fluorescence microscopy”. Science 248.4951 (Apr. 1990), pp. 73–76. doi: 10.1126/science.2321027 (cit. on p. 24).

[72] Jayne M. Squirrell, David L. Wokosin, John G. White, and Barry D. Bavister. “Long-term two-photon fluorescence imaging of mammalian embryos without compromising viability”. Nature Biotechnology 17.8 (Aug. 1999). Low-damage live-cell imaging of hamster embryos with two-photon laser scanning, pp. 763–767. doi: 10.1038/11698 (cit. on p. 24).

[73] Katie McDole, Yuan Xiong, Pablo A. Iglesias, and Yixian Zheng. “Lineage mapping the pre-implantation mouse embryo by two-photon microscopy, new insights into the segregation of cell fates”. Developmental Biology 355.2 (July 2011), pp. 239–249. doi: 10.1016/j.ydbio.2011.04.024 (cit. on p. 24).

[74] Kazuo Yamagata, Rinako Suetsugu, and Teruhiko Wakayama. “Long-Term, Six-Dimensional Live-Cell Imaging for the Mouse Preimplantation Embryo That Does Not Affect Full-Term Development”. Journal of Reproduction and Development 55.3 (2009), pp. 343–350. doi: 10.1262/jrd.20166 (cit. on p. 24).

[75] Gustavo de Medeiros, Nils Norlin, Stefan Gunther, Marvin Albert, Laura Panavaite, Ulla-Maj Fiuza, Francesca Peri, Takashi Hiiragi, Uros Krzic, and Lars Hufnagel. “Confocal multiview light-sheet microscopy”. Nature Communications 6 (Nov. 2015), p. 8881. doi: 10.1038/ncomms9881 (cit. on pp. 26, 27, 68, 70, 78, 82).

[76] Laura Panavaite, Balint Balazs, Tristan Rodriguez, Shankar Srinivas, Jerome Collignon, Lars Hufnagel, and Takashi Hiiragi. “3D-geec: a method for studying mouse peri-implantation development”. Mechanisms of Development. 18th International Congress of Developmental Biology, 18–22 June, University Cultural Centre, National University of Singapore 145.Supplement (July 2017), S78–S79. doi: 10.1016/j.mod.2017.04.189 (cit. on p. 26).

[77] Yu-Chih Hsu. “In vitro development of individually cultured whole mouse embryos from blastocyst to early somite stage”. Developmental Biology 68.2 (Feb. 1979). Post-implantation culture, pp. 453–461. doi: 10.1016/0012-1606(79)90217-3 (cit. on p. 27).

[78] Fu-Jen Huang, Tsung-Chieh J. Wu, and Meng-Yin Tsai. “Effect of retinoic acid on implantation and post-implantation development of mouse embryos in vitro”. Human Reproduction 16.10 (Oct. 2001). Post-implantation culture, pp. 2171–2176. doi: 10.1093/humrep/16.10.2171 (cit. on p. 27).

[79] Sonja Nowotschin and Anna-Katerina Hadjantonakis. “Cellular dynamics in the early mouse embryo: from axis formation to gastrulation”. Current Opinion in Genetics & Development. Developmental mechanisms, patterning and evolution 20.4 (Aug. 2010), pp. 420–427. doi: 10.1016/j.gde.2010.05.008 (cit. on p. 27).

[80] Monica D. Garcia, Ryan S. Udan, Anna-Katerina Hadjantonakis, and Mary E. Dickinson. “Live Imaging of Mouse Embryos”. Cold Spring Harbor Protocols 2011.4 (Apr. 2011), pdb.top104. doi: 10.1101/pdb.top104 (cit. on p. 27).

[81] Alessandro Bria and Giulio Iannello. “TeraStitcher - A tool for fast automatic 3D-stitching of teravoxel-sized microscopy images”. BMC Bioinformatics 13 (Nov. 2012), p. 316. doi: 10.1186/1471-2105-13-316 (cit. on p. 28).

[82] Gergely Katona, Gergely Szalay, Pál Maák, Attila Kaszás, Máté Veress, Dániel Hillier, Balázs Chiovini, E. Sylvester Vizi, Botond Roska, and Balázs Rózsa. “Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes”. Nature Methods 9.2 (Feb. 2012), pp. 201–208. doi: 10.1038/nmeth.1851 (cit. on p. 28).

[83] Hiroshi Hama, Hiroshi Kurokawa, Hiroyuki Kawano, Ryoko Ando, Tomomi Shimogori, Hisayori Noda, Kiyoko Fukami, Asako Sakaue-Sawano, and Atsushi Miyawaki. “Scale: a chemical approach for fluorescence imaging and reconstruction of transparent mouse brain”. Nature Neuroscience 14.11 (Nov. 2011), pp. 1481–1488. doi: 10.1038/nn.2928 (cit. on p. 29).

[84] Ali Ertürk, Klaus Becker, Nina Jährling, Christoph P. Mauch, Caroline D. Hojer, Jackson G. Egen, Farida Hellal, Frank Bradke, Morgan Sheng, and Hans-Ulrich Dodt. “Three-dimensional imaging of solvent-cleared organs using 3DISCO”. Nature Protocols 7.11 (Nov. 2012), pp. 1983–1995. doi: 10.1038/nprot.2012.119 (cit. on p. 29).

[85] Ali Ertürk et al. “Three-dimensional imaging of the unsectioned adult spinal cord to assess axon regeneration and glial responses after injury”. Nature Medicine 18.1 (Jan. 2012), pp. 166–171. doi: 10.1038/nm.2600 (cit. on p. 29).

[86] Meng-Tsen Ke, Satoshi Fujimoto, and Takeshi Imai. “SeeDB: a simple and morphology-preserving optical clearing agent for neuronal circuit reconstruction”. Nature Neuroscience 16.8 (Aug. 2013), pp. 1154–1161. doi: 10.1038/nn.3447 (cit. on p. 29).

[87] Kwanghun Chung and Karl Deisseroth. “CLARITY for mapping the nervous system”. Nature Methods 10.6 (June 2013), pp. 508–513. doi: 10.1038/nmeth.2481 (cit. on p. 29).

[88] Etsuo A. Susaki et al. “Whole-Brain Imaging with Single-Cell Resolution Using Chemical Cocktails and Computational Analysis”. Cell 157.3 (Apr. 2014), pp. 726–739. doi: 10.1016/j.cell.2014.03.042 (cit. on p. 29).

[89] Nicolas Renier, Zhuhao Wu, David J. Simon, Jing Yang, Pablo Ariel, and Marc Tessier-Lavigne. “iDISCO: A Simple, Rapid Method to Immunolabel Large Tissue Samples for Volume Imaging”. Cell 159.4 (Nov. 2014), pp. 896–910. doi: 10.1016/j.cell.2014.10.010 (cit. on p. 29).

[90] Kazuki Tainaka, Shimpei I. Kubota, Takeru Q. Suyama, Etsuo A. Susaki, Dimitri Perrin, Maki Ukai-Tadenuma, Hideki Ukai, and Hiroki R. Ueda. “Whole-Body Imaging with Single-Cell Resolution by Tissue Decolorization”. Cell 159.4 (Nov. 2014), pp. 911–924. doi: 10.1016/j.cell.2014.10.034 (cit. on p. 29).

[91] Tongcang Li, Sadao Ota, Jeongmin Kim, Zi Jing Wong, Yuan Wang, Xiaobo Yin, and Xiang Zhang. “Axial Plane Optical Microscopy”. Scientific Reports 4 (Dec. 2014). doi: 10.1038/srep07253 (cit. on p. 29).

[92] Matthew B. Bouchard, Venkatakaushik Voleti, César S. Mendes, Clay Lacefield, Wesley B. Grueber, Richard S. Mann, Randy M. Bruno, and Elizabeth M. C. Hillman. “Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms”. Nature Photonics 9.2 (Feb. 2015), pp. 113–119. doi: 10.1038/nphoton.2014.323 (cit. on p. 29).

[93] Scott F. Gilbert. Developmental Biology. 10th ed. Sinauer Associates, 2013 (cit. on p. 30).

[94] M. Temerinac-Ott, O. Ronneberger, P. Ochs, W. Driever, T. Brox, and H. Burkhardt. “Multiview Deblurring for 3-D Images from Light-Sheet-Based Fluorescence Microscopy”. IEEE Transactions on Image Processing 21.4 (Apr. 2012), pp. 1863–1873. doi: 10.1109/TIP.2011.2181528 (cit. on p. 31).

[95] Stephan Preibisch, Fernando Amat, Evangelia Stamataki, Mihail Sarov, Robert H. Singer, Eugene Myers, and Pavel Tomancak. “Efficient Bayesian-based multiview deconvolution”. Nature Methods 11.6 (June 2014), pp. 645–648. doi: 10.1038/nmeth.2929 (cit. on pp. 31, 48, 53).


[96] Bálint Balázs. “Development of a symmetric single-plane illumination microscope”. Master’s Thesis. Hungary: Pázmány Péter Catholic University, 2013 (cit. on pp. 31, 43).

[97] Stephan Preibisch, Stephan Saalfeld, Johannes Schindelin, and Pavel Tomancak. “Software for bead-based registration of selective plane illumination microscopy data”. Nature Methods 7.6 (June 2010), pp. 418–419. doi: 10.1038/nmeth0610-418 (cit. on pp. 48, 69).

[98] Janick Cardinale. An ImageJ Plugin to Measure 3D Point Spread Functions. 2010. url: http://mosaic.mpi-cbg.de/Downloads/PSF_measurement_3D.pdf (visited on 10/04/2017) (cit. on p. 51).

[99] Daniel Sage, Lauréne Donati, Ferréol Soulez, Denis Fortun, Guillaume Schmit, Arne Seitz, Romain Guiet, Cédric Vonesch, and Michael Unser. “DeconvolutionLab2: An open-source software for deconvolution microscopy”. Methods. Image Processing for Biologists 115 (Feb. 2017), pp. 28–41. doi: 10.1016/j.ymeth.2016.12.015 (cit. on p. 51).

[100] Pierre Gönczy. “Towards a molecular architecture of centriole assembly”. Nature Reviews Molecular Cell Biology 13 (2012). doi: 10.1038/nrm3373 (cit. on p. 53).

[101] M. Rauzi. “Probing tissue interaction with laser-based cauterization in the early developing Drosophila embryo”. Methods in Cell Biology. Cell Polarity and Morphogenesis 139 (Jan. 2017), pp. 153–165. doi: 10.1016/bs.mcb.2016.11.003 (cit. on p. 56).

[102] Kevin M. Dean, Philippe Roudot, Carlos R. Reis, Erik S. Welf, Marcel Mettlen, and Reto Fiolka. “Diagonally Scanned Light-Sheet Microscopy for Fast Volumetric Imaging of Adherent Cells”. Biophysical Journal 110.6 (Mar. 2016), pp. 1456–1465. doi: 10.1016/j.bpj.2016.01.029 (cit. on p. 56).

[103] Khalid Sayood. Introduction to Data Compression. 4th ed. The Morgan Kaufmann Series in Multimedia Information and Systems. Morgan Kaufmann, 2012 (cit. on pp. 57, 62).

[104] David Salomon and Giovanni Motta. Handbook of Data Compression. 5th ed. Springer Science & Business Media, Jan. 2010 (cit. on pp. 57, 60).

[105] Douglas W. Cromey. “Digital Images Are Data: And Should Be Treated as Such”. Methods in Molecular Biology (Clifton, N.J.) 931 (2013), pp. 1–27. doi: 10.1007/978-1-62703-056-4_1 (cit. on pp. 57, 76).

[106] Claude E. Shannon. “A mathematical theory of communication”. The Bell System Technical Journal 27.4 (Oct. 1948), pp. 623–656. doi: 10.1002/j.1538-7305.1948.tb00917.x (cit. on pp. 57, 58).

[107] Claude E. Shannon. “A mathematical theory of communication”. The Bell System Technical Journal 27.3 (July 1948), pp. 379–423. doi: 10.1002/j.1538-7305.1948.tb01338.x (cit. on p. 57).

[108] Claude E. Shannon. “Prediction and entropy of printed English”. The Bell System Technical Journal 30.1 (1951), pp. 50–64 (cit. on p. 57).

[109] J. Rissanen and G. Langdon. “Universal modeling and coding”. IEEE Transactions on Information Theory 27.1 (Jan. 1981), pp. 12–23. doi: 10.1109/TIT.1981.1056282 (cit. on p. 58).

[110] David A. Huffman. “A Method for the Construction of Minimum-Redundancy Codes”. Proceedings of the IRE 40.9 (Sept. 1952), pp. 1098–1101. doi: 10.1109/JRPROC.1952.273898 (cit. on p. 59).

[111] N. Ahmed, T. Natarajan, and K. R. Rao. “Discrete Cosine Transform”. IEEE Transactions on Computers C-23.1 (Jan. 1974), pp. 90–93. doi: 10.1109/T-C.1974.223784 (cit. on p. 63).

[112] Stephane G. Mallat. “A theory for multiresolution signal decomposition: the wavelet representation”. IEEE Transactions on Pattern Analysis and Machine Intelligence (1989) (cit. on p. 63).

[113] Arne Jensen and Anders la Cour-Harbo. Ripples in Mathematics. DOI: 10.1007/978-3-642-56702-5. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001 (cit. on p. 63).

[114] William B. Pennebaker and Joan L. Mitchell. JPEG: Still Image Data Compression Standard. Springer Science & Business Media, Dec. 1992 (cit. on pp. 64, 65, 77).

[115] M. J. Weinberger, G. Seroussi, and G. Sapiro. “The LOCO-I lossless image compression algorithm: principles and standardization into JPEG-LS”. IEEE Transactions on Image Processing 9.8 (Aug. 2000), pp. 1309–1324. doi: 10.1109/83.855427 (cit. on pp. 64, 65, 77, 79).


[116] X. Wu and N. Memon. “Context-based, adaptive, lossless image coding”. IEEE Transactions on Communications 45.4 (Apr. 1997), pp. 437–444. doi: 10.1109/26.585919 (cit. on p. 64).

[117] Roman Starosolski. “Simple fast and adaptive lossless image compression algorithm”. Software: Practice and Experience 37.1 (Jan. 2007), pp. 65–91. doi: 10.1002/spe.746 (cit. on pp. 64, 77).

[118] Z. Wang, M. Klaiber, Y. Gera, S. Simon, and T. Richter. “Fast lossless image compression with 2D Golomb parameter adaptation based on JPEG-LS”. Proceedings of the 20th European Signal Processing Conference (EUSIPCO 2012). Aug. 2012, pp. 1920–1924 (cit. on pp. 64, 77).

[119] Nazeeh Aranki, Didier Keymeulen, Alireza Bakhshi, and Matthew Klimesh. “Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space”. Proceedings of the 2009 NASA/ESA Conference on Adaptive Hardware and Systems (AHS ’09). Washington, DC, USA: IEEE Computer Society, 2009, pp. 315–322. doi: 10.1109/AHS.2009.66 (cit. on p. 64).

[120] Anne E. Carpenter and David M. Sabatini. “Systematic genome-wide screens of gene function”. Nature Reviews Genetics 5.1 (Jan. 2004), pp. 11–22. doi: 10.1038/nrg1248 (cit. on p. 66).

[121] Christophe J. Echeverri and Norbert Perrimon. “High-throughput RNAi screening in cultured cells: a user’s guide”. Nature Reviews Genetics 7.5 (May 2006), pp. 373–384. doi: 10.1038/nrg1836 (cit. on p. 66).

[122] Rainer Pepperkok and Jan Ellenberg. “High-throughput fluorescence microscopy for systems biology”. Nature Reviews Molecular Cell Biology 7.9 (Sept. 2006), pp. 690–696. doi: 10.1038/nrm1979 (cit. on p. 66).

[123] Eric Betzig, George H. Patterson, Rachid Sougrat, O. Wolf Lindwasser, Scott Olenych, Juan S. Bonifacino, Michael W. Davidson, Jennifer Lippincott-Schwartz, and Harald F. Hess. “Imaging Intracellular Fluorescent Proteins at Nanometer Resolution”. Science 313.5793 (Sept. 2006), pp. 1642–1645. doi: 10.1126/science.1127344 (cit. on p. 66).

[124] Samuel T. Hess, Thanu P. K. Girirajan, and Michael D. Mason. “Ultra-High Resolution Imaging by Fluorescence Photoactivation Localization Microscopy”. Biophysical Journal 91.11 (Dec. 2006), pp. 4258–4272. doi: 10.1529/biophysj.106.091116 (cit. on p. 66).

[125] Michael J. Rust, Mark Bates, and Xiaowei Zhuang. “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)”. Nature Methods 3.10 (Oct. 2006), pp. 793–796. doi: 10.1038/nmeth929 (cit. on p. 66).

[126] Roy Wollman and Nico Stuurman. “High throughput microscopy: from raw images to discoveries”. Journal of Cell Science 120.21 (Nov. 2007), pp. 3715–3722. doi: 10.1242/jcs.013623 (cit. on p. 66).

[127] Emmanuel G. Reynaud, Jan Peychl, Jan Huisken, and Pavel Tomancak. “Guide to light-sheet microscopy for adventurous biologists”. Nature Methods 12.1 (Jan. 2015), pp. 30–34. doi: 10.1038/nmeth.3222 (cit. on p. 66).

[128] Jeffrey M. Perkel. “The struggle with image glut”. Nature 533.7601 (Apr. 2016), pp. 131–132. doi: 10.1038/533131a (cit. on p. 66).

[129] John Nickolls, Ian Buck, Michael Garland, and Kevin Skadron. “Scalable Parallel Programming with CUDA”. Queue 6.2 (Mar. 2008), pp. 40–53. doi: 10.1145/1365490.1365500 (cit. on p. 66).

[130] Gustavo Q. G. de Medeiros. “Deep tissue light-sheet microscopy”. PhD thesis. Germany: Ruperto-Carola University of Heidelberg, 2016 (cit. on p. 67).

[131] Eugen Baumgart and Ulrich Kubitscheck. “Scanned light sheet microscopy with confocal slit detection”. Optics Express 20.19 (2012), pp. 21805–21814. doi: 10.1364/OE.20.021805 (cit. on p. 68).

[132] Uroš Kržič. “Multiple-view microscopy with light-sheet based fluorescence microscope”. PhD thesis. Germany: Ruperto-Carola University of Heidelberg, 2009 (cit. on p. 69).

[133] Stephan Preibisch, Stephan Saalfeld, Torsten Rohlfing, and Pavel Tomancak. “Bead-based mosaicing of single plane illumination microscopy images using geometric local descriptor matching”. Ed. by Josien P. W. Pluim and Benoit M. Dawant. Feb. 2009, 72592S. doi: 10.1117/12.812612 (cit. on p. 69).

[134] The HDF Group. Hierarchical Data Format, version 5. 1997 (cit. on p. 72).


[135] Fernando Amat, Burkhard Höckendorf, YinanWan, William C. Lemon, Katie McDole, andPhilipp J. Keller. “Efficient processing andanalysis of large-scale light-sheet microscopydata”. Nature Protocols 10.11 (Nov. 2015),pp. 1679–1696. doi: 10.1038/nprot.2015.111(cit. on pp. 73, 83).

[136] G. W. Zack, W. E. Rogers, and S. A. Latt. “Automatic measurement of sister chromatid exchange frequency”. The Journal of Histochemistry and Cytochemistry: Official Journal of the Histochemistry Society 25.7 (July 1977), pp. 741–753. doi: 10.1177/25.7.70454 (cit. on p. 73).

[137] Loïc A. Royer, William C. Lemon, Raghav K. Chhetri, Yinan Wan, Michael Coleman, Eugene W. Myers, and Philipp J. Keller. “Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms”. Nature Biotechnology 34.12 (Dec. 2016), pp. 1267–1278. doi: 10.1038/nbt.3708 (cit. on p. 75).

[138] Michael D. Adams. The JPEG-2000 still image compression standard. 2001 (cit. on pp. 76, 77).

[139] International Telecommunications Union. H.265: High efficiency video coding. Dec. 2016. url: http://www.itu.int/rec/T-REC-H.265-201612-I/en (visited on 10/02/2017) (cit. on p. 76).

[140] x265 HEVC Encoder / H.265 Video Codec. url: http://x265.org/ (visited on 10/11/2017) (cit. on p. 76).

[141] Marc Treib, Florian Reichl, Stefan Auer, and Rüdiger Westermann. “Interactive Editing of GigaSample Terrain Fields”. Computer Graphics Forum 31.2pt2 (May 2012), pp. 383–392. doi: 10.1111/j.1467-8659.2012.03017.x (cit. on pp. 77, 78).

[142] M. Treib, K. Burger, F. Reichl, C. Meneveau, A. Szalay, and R. Westermann. “Turbulence Visualization at the Terascale on Desktop PCs”. IEEE Transactions on Visualization and Computer Graphics 18.12 (Dec. 2012), pp. 2169–2177. doi: 10.1109/TVCG.2012.274 (cit. on p. 78).

[143] Robert A. Gowen and Alan Smith. “Square root data compression”. Review of Scientific Instruments 74.8 (Aug. 2003), pp. 3853–3861. doi: 10.1063/1.1593811 (cit. on p. 79).

[144] Gary M. Bernstein, Chris Bebek, Jason Rhodes, Chris Stoughton, R. Ali Vanderveld, and Penshu Yeh. “Noise and bias in square-root compression schemes”. Publications of the Astronomical Society of the Pacific 122.889 (Mar. 2010). arXiv: 0910.4571, pp. 336–346. doi: 10.1086/651281 (cit. on p. 79).

[145] R. M. Gray and D. L. Neuhoff. “Quantization”. IEEE Transactions on Information Theory 44.6 (Oct. 1998), pp. 2325–2383. doi: 10.1109/18.720541 (cit. on p. 80).

[146] Joran Deschamps, Markus Mund, and Jonas Ries. “3D superresolution microscopy by supercritical angle detection”. Optics Express 22.23 (Nov. 2014), pp. 29081–29091 (cit. on p. 82).

[147] Joran Deschamps, Andreas Rowald, and Jonas Ries. “Efficient homogeneous illumination and optical sectioning for quantitative single-molecule localization microscopy”. Optics Express 24.24 (Nov. 2016), pp. 28080–28090. doi: 10.1364/OE.24.028080 (cit. on p. 82).

[148] Jeremy C. Simpson et al. “Genome-wide RNAi screening identifies human proteins with a regulatory function in the early secretory pathway”. Nature Cell Biology 14.7 (July 2012), pp. 764–774. doi: 10.1038/ncb2510 (cit. on pp. 82, 97).

[149] C. Sommer, C. Straehle, U. Köthe, and F. A. Hamprecht. “Ilastik: Interactive learning and segmentation toolkit”. 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. Mar. 2011, pp. 230–233. doi: 10.1109/ISBI.2011.5872394 (cit. on pp. 82, 83).

[150] Thorvald Julius Sørensen. A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. København: I kommission hos E. Munksgaard, 1948 (cit. on p. 82).

[151] Lee R. Dice. “Measures of the Amount of Ecologic Association Between Species”. Ecology 26.3 (July 1945), pp. 297–302. doi: 10.2307/1932409 (cit. on p. 82).

[152] Romain Fernandez, Pradeep Das, Vincent Mirabet, Eric Moscardi, Jan Traas, Jean-Luc Verdeil, Grégoire Malandain, and Christophe Godin. “Imaging plant growth in 4D: robust tissue reconstruction and lineaging at cell resolution”. Nature Methods 7.7 (July 2010), pp. 547–553. doi: 10.1038/nmeth.1472 (cit. on p. 82).

[153] Mike Heilemann, Sebastian van de Linde, Mark Schüttpelz, Robert Kasper, Britta Seefeldt, Anindita Mukherjee, Philip Tinnefeld, and Markus Sauer. “Subdiffraction-Resolution Fluorescence Imaging with Conventional Fluorescent Probes”. Angewandte Chemie International Edition 47.33 (Aug. 2008), pp. 6172–6176. doi: 10.1002/anie.200802376 (cit. on p. 82).

[154] I. Izeddin, J. Boulanger, V. Racine, C. G. Specht, A. Kechkar, D. Nair, A. Triller, D. Choquet, M. Dahan, and J. B. Sibarita. “Wavelet analysis for single molecule localization microscopy”. Optics Express 20.3 (Jan. 2012), pp. 2081–2095 (cit. on p. 83).

[155] Carlas S. Smith, Nikolai Joseph, Bernd Rieger, and Keith A. Lidke. “Fast, single-molecule localization that achieves theoretically minimum uncertainty”. Nature Methods 7.5 (May 2010), pp. 373–375. doi: 10.1038/nmeth.1449 (cit. on p. 83).

[156] Tobias Pietzsch, Stephan Saalfeld, Stephan Preibisch, and Pavel Tomancak. “BigDataViewer: visualization and processing for large image data sets”. Nature Methods 12.6 (June 2015), pp. 481–483. doi: 10.1038/nmeth.3392 (cit. on p. 87).


Acknowledgements

The work presented in this thesis has been carried out in the Hufnagel group at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany. I would like to thank Lars Hufnagel for this unique opportunity to work in the field of microscopy, and for his continuous support throughout the project.

I am very grateful to the members of my thesis advisory committee: Jonas Ries, TakashiHiiragi and Balázs Rózsa; thank you for the support and the fruitful discussions.

I would also like to thank Prof. Péter Szolgay for the opportunity to be part of theRoska Tamás Doctoral School of Sciences and Technology at Pázmány Péter CatholicUniversity.

I would like to thank Gustavo Quintas Glasner de Medeiros and Nils Norlin for themicroscope building and debugging sessions, and for sharing all their experience andknowledge. A big thank you to Marvin Albert and Joran Deschamps for their help withthe compression algorithm, and also for sharing the ups and downs of EMBL PhD life.

Many thanks to all friends and colleagues who helped me throughout my four years:Dimitri Kromm, Ulla-Maj Fiuza, Li-Ling Yang, Stefan Günther, Lucía Durrieu, PatrickHoyer, Laura Panavaite, Uroš Kržič, Petr Strnad, Sebastian Streichan, Aldona Nowicka,Christina Besir, Tatjana Schneidt, Judith Reichmann, Yu Lin, Nils Florian Wagner,Christian Tischer; it has been a truly memorable time.

Development of the microscope would not have been possible without the expert help ofAlfons Riedinger, Alejandro Gil Ortiz, Leo Burger, Sascha Blättel, Tim Hettinger andall other members of the EMBL electronic and mechanical workshops.

Special thanks goes to Judit Tóth, István Gärtner and Attila Mócsai for introducing meto the world of natural sciences, and Prof. Szabad János for showing me EMBL for thefirst time.

I am eternally grateful to my parents and my first scientific advisors, Béla Balázs andIlona Fige, and my sister, Eszter for their unconditional support.

Finally, Melánia: Thank you for your infinite patience, support and love.


The project was supported by the European Union, co-financed by the European Social Fund (EFOP-3.6.3-VEKOP-16-2017-00002).