
Dissertation Title:

Plenoptic Camera Technology: Spatial-Angular Resolution Capture

Written by: Zee Khattak

Student number: S08424889

BSc (Hons) Film Production & Technology

Project Supervisors: Jeremy Foss, Andy White

April 2015

 


ABSTRACT  

This report investigates the technology of plenoptic cameras, which, in addition to

capturing the 2D spatial resolution of a scene characteristic of conventional

photography cameras, capture 4D light field depth resolution. Additional depth

resolution capture increases the photographer’s post-production capabilities,

including the ability to refocus the image. The main problem affecting the

widespread adoption of end-user plenoptic light field cameras is defined: the

spatial-angular trade-off that is intrinsic to using microlens arrays for 4D capture,

which limits final output viewing resolutions. The overall aim is to explore the extent of

these resolution limitations in relation to end-users, and to discern the

significance of this for companies developing or investing in plenoptic technology

for the future.

The plenoptic camera Lytro Illum is investigated for its ability to capture depth

information to consistent, optimal resolutions for the camera’s system; examined

through a series of sharpness resolution tests in an image test lab, within a

measured depth-scene. Additionally, the extent of resolution differences between

the Lytro Illum and a conventional Canon D60 DSLR camera using RAW

photography mode is demonstrated through optimal image quality chart testing.

The main findings indicate that an error in the calibration of the Lytro Illum lens

caused significant irregularity in its depth resolution capture at variable focal

lengths and optimal focus distances. This emphasised the intricate relationship of

the microlens array-tuning to the aperture and sensor for accurate computation in

the light field camera, and the importance of calibrating it properly. In

addition, it is found that the light field camera predictably underperformed in

comparison with the DSLR, but not as significantly as was projected. It is

surmised that such performance limitations are a major detraction for current and

potential users, and that companies looking to invest in plenoptic camera

technology should be wary of this issue.


ACKNOWLEDGEMENTS  

My appreciation in researching this project and developing a testing scheme

goes towards my project supervisors at Birmingham City University:

Jeremy Foss - Senior Lecturer

Andy White - Senior Lecturer

Additional thanks go to the following individuals at Birmingham City University:

Jay Patel - Senior Lecturer

Robert McLaughlin – Lecturer

Michael Bickerton - Lecturer

Anthony Lewis - Senior Academic

Thanks are also given to the following individuals and companies:

Bruce Devlin - Chief Media Scientist at Dalet Digital Media Systems, for

providing me with first-hand knowledge of light field imaging from a modern media

company perspective.

Imatest LLC for allowing me to use their image testing software Imatest Master

during my studies.

Hireacamera for providing me with first-rate service in hiring the Lytro Illum

camera.

Finally, thanks are given to the individuals at Millennium Point and Parkside

(Birmingham City University): The Media Centre Hires & Loans and the Parkside

staff for providing me with necessary testing equipment and studio space.


TABLE OF CONTENTS

GLOSSARY

1 INTRODUCTION
  1.1 Problem Definition
  1.2 Scope
  1.3 Rationale
  1.4 Overall Aim & Objectives

2 REVIEW OF EXISTING KNOWLEDGE
  2.1 The Development of Integral Photography and Plenoptic Cameras
  2.2 Lytro Plenoptic Cameras
  2.3 Adobe Plenoptic Cameras & Diffraction Limiting
  2.4 RAW DSLR vs. Light Field Workflow Comparison
  2.5 Image Resolution Tests: Sharpness
  2.6 Literature Review
    2.6.1 Digital Light Field Technology – Dissertation (Ng, 2006)
    2.6.2 ‘Lytro, Light Fields, and The Future of Photography’ – Filmed Lecture (Tellman, 2003)
    2.6.3 Spatio-Angular Resolution Tradeoff in Integral Photography – Journal Article (Georgeiv et al., 2006)
    2.6.4 Improving Resolution and Depth-of-Field of Light Field Cameras Using a Hybrid Imaging System – Report (Boominathan et al., 2014)
    2.6.5 Lytro’s Light Field Tech Is Amazing, But Is It Already Obsolete? – Web Article (Hession, 2014)
    2.6.6 ‘RAW Workflow from Capture to Archives’ – Book (Andrews et al., 2006)
    2.6.7 Three-Dimensional Television, Video, and Display Technologies – Book (Javidi & Okano, 2002)

3 METHODOLOGY
  3.1 Primary Research Experiments
  3.2 Hypothesis
  3.3 Procedure
    3.3.1 Equipment
    3.3.2 Setting Up
    3.3.3 Test 1
    3.3.4 Test 2
    3.3.5 Test 3
    3.3.6 Test 4
    3.3.7 Analysing Results Procedure
  3.4 Limitations
  3.5 Solutions/Improvements: Preliminary Testing
    3.5.1 Preliminary Test 1 – Constant Focus, Variable Distance
    3.5.2 Preliminary Test 2 – Changing Focus, Constant Distance
  3.6 Final Limitations of Testing Procedure & Improvements
  3.7 Alternative Approaches

4 RESULTS
  4.1 Tables
    4.1.1 Test 1
    4.1.2 Test 2
    4.1.3 Test 3
    4.1.4 Test 4
  4.2 Graphs
    4.2.1 Test 1
    4.2.2 Test 1 Ideal Performances
    4.2.3 Test 2
    4.2.4 Test 2 Ideal Performances
    4.2.5 Test 3
    4.2.6 Test 3 Ideal Performances
    4.2.7 Test 4
  4.3 Depth Scene Charts
    4.3.1 Test 1 – Focal Length 30mm
    4.3.2 Test 2 – Focal Length 50mm
    4.3.3 Test 3 – Focal Length 80mm

5 DISCUSSION
  5.1 Interpreting the Results
    5.1.1 Test 1
    5.1.2 Test 2
    5.1.3 Test 3
    5.1.4 Test 4
  5.2 Consensus & Explanation of Results

6 CONCLUSIONS
  6.1 Overall Findings
  6.2 Reflection

7 RECOMMENDATIONS FOR FURTHER WORK

8 REFERENCES

9 BIBLIOGRAPHY

10 APPENDICES
  10.1 Bruce Devlin Full Interview – 10-04-15
  10.2 Image Test Lab Photographs
  10.3 Dissertation Log Book
  10.4 Personal In-Field Photography Using the Lytro Illum
  10.5 Light Field Image Upload & Storage


GLOSSARY

3D-ready image – An image that provides the visual perception of depth once

viewed through an appropriate medium such as 3D glasses.

Aperture – The measurement that defines the size of the opening of a lens, which

allows light to pass into the main camera.

Airy disk – The central point within the rings of a diffraction pattern (see

diffraction limiting); the width in relation to the system’s pixel size determines the

hypothetical maximum resolution for a camera.

CCD/CMOS – Two camera image sensor types: charge-coupled devices (CCD)

and complementary metal-oxide-semiconductor (CMOS); used to convert light

energy into electrical charges for digital image capture.

Chroma value – The colour information of an image.

Chromatic aberration – A problem that occurs when the camera lens focuses

colours to different convergence points; causing blurriness and/or colour fringing

and a loss of overall resolution.

Computational photography – Digital image capture that instead of using

conventional optical processes, uses digital calculation for processing images.

Depth look-up table – In plenoptic cameras, used for processing depth

information; like a colour-look-up table (CLUT), which converts input colours into

new colours.

Depth map – In plenoptic/light field images, the part of the image that stores

information relating to the varying distances of surfaces of a scene.

Depth-of-field – The distance between the nearest and farthest in-focus objects of

a scene.

Diffraction limiting – The limited resolution output of a camera system due to

diffracted light passing through the aperture opening, causing interference of light

rays travelling at different distances; phasing occurs, producing a diffraction

pattern with peaking intensities of added/subtracted light (see airy disk).


Discrete Cosine Transform (DCT) – Involved in lossy compression of files;

discards information according to a sum of cosine functions.

DSLR camera – Digital single-lens-reflex camera; makes use of a mirror that

sends the image to the viewfinder or image sensor.

Dynamic range – The ratio between the largest and smallest intensity of the

image luminance signal (see luma value).

F-stop/ number – The lens ratio of the focal length (see focal length) to the

diameter of the aperture (see aperture).

Focal length – The distance between the optical centre of the camera lens to its

sensor, when the lens is focused to infinity (see infinity focus).

Holography – Relating to holograms: 3D displayed images of encoded light fields

without the use of 3D glasses.

Human visual acuity – The range of clearness of human vision.

Infinity focus (INF) – The camera is focused on a theoretical infinite distance; light

rays entering the lens appear as parallel rather than diverging rays: the maximum

possible focusing distance of the camera.

ISO – A measurement of the variable sensitivity capability of the camera’s sensor

to light.

Lens calibration – The way a camera lens is tuned to its optical system.

Light ray – A theoretical model of light, representing the direction of light wave

travel.

Luma value – The brightness information of an image.

Macro range – Extreme close-up distances in photography, making the size of

the subject in the image much greater than it is in real life.

Megapixel – A graphic resolution reading equalling 1 million pixels (see pixel).

Metadata – Image data that is used to describe and manage other data within

image files, including information on focal length and aperture size that is unique

to the image.


Photosite – A single light-sensitive point on a camera sensor that becomes

charged as light hits it (see CCD/CMOS), providing single pixel information on

light and colour intensity (see pixel).

Pixel – A single point within a graphically displayed image, arranged in rows/

columns to make up the final image.

Refocusable range – The distance in which plenoptic cameras can render in-

focus images with varying depth-of-fields (see depth-of-field).

Shutter speed – The length of time the camera’s shutter is open to allow light to

pass through the aperture (see aperture).

Texture mapping – 3D-rendered image data for depth perception in images.

User/ Industry technology workflow – The sequence of tasks necessary to

complete a technological process.

White balance – The balancing of colour temperature in photographs in order for

whites to appear naturally white, due to varying light temperature conditions in

scenes.

 


1 INTRODUCTION

1.1 Problem Definition

‘Inspired by the natural diversity of perceptual systems and fuelled by advances

of digital camera technology, computational processing, and optical fabrication,

image processing has begun to transcend limitations of film-based analogue

photography.’ (Wetzstein, Ihrke, Lanman, & Heidrich, 2011, p. 1)

Modern advances in sensor resolution technology have far outgrown the

restriction on spatial output resolutions of cameras imposed by diffraction limiting

(Pereira, 2011). Within the field of computational photography however lies

plenoptic camera technology, which not only captures spatial information of a

scene, but also depth information; a means of making use of modern sensor

capabilities in excess of 100 megapixels (Lukac, 2010), (Salesin & Georgiev,

2011). Mostly associated with the commercial camera manufacturers Lytro, the

industrial application cameras Raytrix, and to a lesser degree the computer

software company Adobe Systems, plenoptic cameras offer the user greater

control over their images in post with the ability to adjust the distance-of-focus,

depth-of-field, and to a certain extent the angle-of-view of the photograph

(Wetzstein et al., 2011).

Despite being relatively unknown today, this technology relates back to

integral imaging: first proposed around 100 years ago by the physicist Gabriel

Lippmann (Adelson & Bergen, 1991). The purpose of this project is to understand

the slow development of this technology by investigating its main problem: the

output resolution limitations of the images (Lukac, 2010), in order to determine

what might be its future within digital imaging.


Plenoptic cameras make use of thousands of microlenses set in front of the

sensor’s photosites, separating the light to record the angular information of each

light ray; the full representation of the light field (Theobalt, Koch, & Kolb, 2013). A

light field is a 4-D representation of the scene: 2 dimensions in the spatial

domain, and 2 dimensions in the angular domain; the orientation of the light ray

(Salesin & Georgiev, 2011). Instead of capturing 1 ray per pixel, multiple different

rays are captured for each pixel, and texture mapping is used to resolve these

images into a single scene (Lumsdaine, 2012). The angular resolution in

cameras however has an inverse relationship to the spatial resolution (fig. 1-1); a

single microlens covers a 9 × 9 photosite group, which ultimately represents 1

pixel in the final rendered image (Boominathan, Mitra, & Veeraraghavan, 2014).

As seen in fig. 1-1, if plenoptic cameras were to use a similar 11-megapixel

sensor that would be found in a common DSLR, the resolution trade-off would

cause the final outputted image to be 0.1-megapixels. By comparison, a DSLR

doesn’t record any angular information, and therefore the full range of resolution

can be acquired from each image: the original 11-megapixels (Weston & Coe,

2013).
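The arithmetic behind this trade-off can be set out in a short calculation (a sketch using the figures quoted above; the 9 × 9 photosite grouping and 11-megapixel sensor follow fig. 1-1, after Boominathan et al., 2014):

```python
# Effective spatial resolution of a plenoptic camera whose microlenses
# each cover a 9 x 9 block of photosites (figures from fig. 1-1).
sensor_megapixels = 11.0          # conventional DSLR-class sensor
photosites_per_microlens = 9 * 9  # one microlens -> one output pixel

plenoptic_output = sensor_megapixels / photosites_per_microlens
dslr_output = sensor_megapixels   # no angular sampling: full resolution kept

print(f"Plenoptic output: {plenoptic_output:.2f} MP")  # ~0.14 MP, i.e. ~0.1 MP
print(f"DSLR output:      {dslr_output:.1f} MP")
```

The division makes the inverse relationship concrete: every photosite spent on angular sampling is a photosite no longer available for spatial resolution.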

Although the new release of the Lytro Illum saw flattened light field images equal

to ~4.0 megapixels (Crisp, 2014), the continued limited spatial resolution in

comparison with conventional digital cameras continues to deter potential end-users (Hession, 2014). The downside of not recording angular resolution information, however, is argued by Lytro’s founder Ren Ng in his 2006 dissertation on the benefits of what he coined 'light field photography', which cites the persistent problems of reliable focusing in modern DSLR photography (Ng, 2006). However,

it is the spatial-angular trade-off, which makes Lytro’s light field images less than desirable by a modern photographer’s standards, that has so far hindered the progress of plenoptic photography (Perwaß, Wietzke, & Gmbh, 2012).

Figure 1-1 – The spatial-angular trade-off inherent in plenoptic and DSLR photography (Boominathan et al., 2014).

1.2 Scope

This project centres specifically on the lens performance differences of the

plenoptic camera and the DSLR camera; covering the differences between the

spatial resolutions of the light field camera Lytro Illum and the DSLR camera

Canon D60 through sharpness resolution chart testing. Additionally the depth

resolution of the Lytro Illum is examined through sharpness testing, determining

the consistency of resolution between the refocusable ranges.

Not within the scope of this project are other areas affecting the final output of

image resolution between the two cameras: from computational vs. conventional

optical processing, to megaray vs. conventional CCD/CMOS sensor application.

1.3 Rationale

This project provides a useful examination for potential investors and end-users

of plenoptic camera technology to decide whether this technology would benefit

their current industry/ user workflows, and to judge how plenoptic cameras

compare to modern DSLR photography.

1.4 Overall Aim & Objectives

To investigate the spatial-angular resolution trade-off problem of plenoptic

photography and the extent this affects final output resolutions in relation to end-

users, and to determine the significance this may have for future widespread use

from the perspective of companies aiming to invest in light field technology.

• To provide a brief discussion about the history of integral imaging and its

relation to contemporary light field capturing technology; from the first

proposal by Gabriel Lippmann in 1908 (France) to the founding of the

plenoptic camera manufacturers Lytro by Ren Ng in 2006 (America), and

the development of Adobe plenoptic cameras.

• To determine the differences between the digital capture workflow of the

Lytro Illum and RAW DSLR photography workflows.

• To investigate the angular resolution consistency of the Lytro Illum

against ideal depth scene guides provided by the Lytro Company, and the

in-camera focus range guide by performing resolution tests using the ISO


12233 chart within a controlled image test lab, analysing the results with

image quality software Imatest.

• To analyse the spatial-angular trade-off of plenoptic and DSLR

photography by testing the Lytro Illum against the Canon D60 within a

measured depth-scene, using the ISO 12233 chart for cross-comparison

using image quality software Imatest.

• To discuss the findings in relation to the spatial-angular resolution debate

concerning plenoptic and DSLR cameras from the perspective of end-

user and industry workflows, comparing the extent of differences between

the two.

• To conclude on the relative merits or hindrances that may affect plenoptic

cameras from becoming a well-established field within photography in the

future; considering possible advances in hybrid DSLR and light field

capturing devices, light field video, or replacement by other depth-

capturing hardware/software.


2 REVIEW OF EXISTING KNOWLEDGE

2.1 The Development of Integral Photography and Plenoptic Cameras

Key terms: Integral Photography; Gabriel Lippmann; Integral Imaging.

The history of plenoptic camera technology can be traced back to Gabriel

Lippmann: the physicist who first proposed the idea of integral photography as

early as 1908 (Richardson, 2013). Integral photography (IP) is a method

comparable to holography as it is a means of displaying a 3D image without the

use of 3D glasses (fig. 2-1), (Benton & Bove, 2008).

The theory proposed by Lippmann was to use a microlens array: a collection of

tiny lenses as opposed to a singular lens within the camera (Daly, 2000). The

effect of using a microlens array means that each lens has a slightly different

angle-of-view of the object to be captured; essentially capturing a 4D light field

instead of capturing the conventional 2D image plane of the object (Javidi &

Okano, 2002).

The result of IP is that, when the captured photograph is viewed through the same lens

array from different angles, the parallax of the

image changes: the apparent position of the

photographed object changes in relation to

the viewer (fig. 2-1), (Son, Kim, Kim, Javidi, &

Kwack, 2008). As seen in fig. 2-2, the lens

array in front of the photograph causes this

change in parallax, as the viewer is seeing

differing pixels and their varying distance in relation to each other, as the viewing angle changes according to the viewer’s position (Cristobal, Schelkens, & Thienpont, 2013).

Figure 2-1 – A video demonstration of the change of parallax from the image as the viewing angle changes (“3D integral image,” 2012).

Figure 2-2 – Viewing an integral photograph (Cristobal et al., 2013).

The method of IP can be replicated by taking multiple 2D pictures of the same

scene, and processing these varying angles into a singular image, which can

then be viewed through a microlens array to the same effect (Cristobal et al.,

2013). This is the modern equivalent of integral photography named integral

imaging (II): a now digital-based scheme of using software to combine 2D images

into 3D-ready images (Lueder, 2011). Due to the viewer not requiring any viewing

glasses, the IP/ II method is still considered as one of the ideal 3D viewing

systems, however it comes with numerous drawbacks (Javidi & Okano, 2002).

The major drawback as discussed in the introduction to this project is the loss of

resolution: a repeating occurrence in developing technologies that have applied II

technology, as the method of using a microlens array in front of the image

creates a spatial distortion of the image, which will always degrade its original

resolution (Poon, 2006). This is a problem which cannot be reversed, as

developers must choose between increasing the 2D resolution of the image by

miniaturising the lens arrays further, or increasing the 3D spatial resolution by

increasing the size of the lenses; improving either degrades the other (Georgeiv et al., 2006).

2.2 Lytro Plenoptic Cameras

Key terms: Light field camera; plenoptic camera; 4D light field.

Updating IP/II technology for the digital age, the Lytro Company, founded in 2006

by Ren Ng, produces plenoptic cameras for everyday consumers (Tellman, 2003).

Light field cameras operate using the plenoptic function: a 5-dimensional function

describing light intensity information in relation to an observer at every point in

space/time (Ng, Levoy, Duval, Horowitz, & Hanrahan, 2005). Essentially the

camera’s plenoptic function computation allows the recording of light ray

directional information, as well as the intensity hitting the sensor; recording the

full 4D light field of a scene (Zhang & Chen, 2006).
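The kind of computation this 4D recording enables can be sketched as synthetic refocusing by shift-and-add over the angular samples (a minimal illustration, not Lytro's actual pipeline; the 4D array layout L[u, v, s, t] and the `alpha` refocus parameter are assumptions for the example):

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing of a 4D light field L[u, v, s, t]:
    each angular view (u, v) is shifted in proportion to its offset
    from the central view, then all views are averaged. alpha selects
    the virtual focal plane (alpha = 0 leaves the focus unchanged)."""
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift grows with the view's distance from the central view.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# A constant (featureless) light field refocuses to the same constant image.
L = np.ones((5, 5, 16, 16))
assert np.allclose(refocus(L, alpha=1.0), 1.0)
```

Scene points on the chosen focal plane line up under the shifts and reinforce each other, while points off that plane are averaged into blur, which is exactly the post-capture depth-of-field control described above.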

Figure 2-3 – Concept of Lytro camera (Ng et al., 2005).


Similar to IP/II, the camera makes use of a microlens array, this time set behind a main lens that is used to focus the image onto the array (fig. 2-3) (Fatahalian, 2011). As seen in fig. 2-3, the microlens array scatters the light over

several photosites, capturing different angles-of-view for a single pixel

(Boominathan et al., 2014), (Ng et al., 2005). This setup ensures the function of

the camera is very similar to a normal DSLR, with a single sensor to capture light

information; the difference being that Lytro’s light field images allow users to later

re-focus their images; changing the depth-of-field up to f/16 (fig. 2-5), changing

the distance-of-focus (fig. 2-4), and to a certain extent changing the angle-of-view

(fig. 2-6) (Lezano, 2012). All of these functions can be manually adjusted in post

using Lytro Desktop (fig. 2-7), with additional features opened up for animating

the images, and creating 3D-ready images (Wang, 2014).

Figure 2-4 – Changing the distance-of-focus using Lytro’s demonstration

website.

Figure 2-5 – Changing the depth-of-field by adjusting the aperture settings on Lytro

Desktop.


However, the resolution limitations continue to deter users from both the initial release of the Lytro camera, equalling ~1.2 megapixels, and the subsequent release of the Lytro Illum in 2014 at ~4.0 megapixels (Crisp, 2014). Such

performance limitations have led to divided opinions: the first claiming that Lytro

or indeed every plenoptic camera manufacturer will never be able to overcome

the limitations dictated by using microlens technology; that these cameras will

remain a passing niche interest, only to be replaced by light-field mimicking

technologies (Besiex, 2012), (Hession, 2014). Other opinions however recognise

that plenoptic technology is very much in its beginnings, and that both the in-

camera technology and software have the potential to evolve, making any

limitations only temporary (Georgiev & Lumsdaine, 2009), (Perwaß et al., 2012).

Figure 2-6 – Changing the angle-of-view by manipulating a photo on Lytro Desktop;

made possible by the ability to shift the view through the sub-apertures of the microlens

array (Tao, Hadap, Malik, & Ramamoorthi, 2013)

Figure 2-7 – Aperture and Depth control functions on Lytro

Desktop.

Zee Khattak 9

2.3 Adobe Plenoptic Cameras & Diffraction Limiting

Key terms: Adobe; plenoptic cameras; plenoptic photography; diffraction limiting.

Modern photographic resolution has far outgrown most screens’ capabilities to display so much information; manufacturers will often limit cameras to ~11.0 megapixels, as anything more would be in excess of human visual acuity (Hirsch,

2013), (Ng, 2006). However, the plenoptic cameras in development at Adobe use sensors of over 100 megapixels with microlens arrays similar to Lytro’s, with a resultant resolution of 5.2 megapixels (Savov, 2010). Fig. 2-8 details the RAW

data capturing differing angles of the scene (fig. 2-9), which is then computed into

a final light field image for display (fig. 2-10), for later refocusing (fig. 2-11)

(Salesin & Georgiev, 2011).

Such advances in plenoptic technology demonstrate how much larger sensor

resolution can be put to use in capturing depth resolution of a scene as well as

spatial resolution (Lukac, 2010).

Figure 2-8 - RAW image data from plenoptic camera (Salesin & Georgiev, 2011).

Figure 2-9 - Zoomed in: tiny pieces of the images captured by the microlens array (Salesin & Georgiev, 2011).

Figure 2-10 - The image put back together (Salesin & Georgiev, 2011).

Figure 2-11 - Refocusing on a 5.2 megapixel display (Salesin & Georgiev, 2011).

Ordinarily, diffraction would limit the spatial resolution output of conventional digital cameras, no matter how large the

megapixel capability is of the sensor (Goldstein, 2009). Diffraction in cameras

occurs due to light dispersing when passing the camera’s aperture opening (Allen

& Triantaphillidou, 2011). The light rays interfere with each other, causing

phasing: a cancelling out of waves which produces a diffraction pattern with

peaking intensities of light (fig. 2-12), (“Diffraction Limited Photography: Pixel Size,

Aperture and Airy Disks,” 2015). The Airy disk is the central point within the rings

of a diffraction pattern; the width of the Airy disk determines the hypothetical

maximum resolution of a camera (fig. 2-12), (Murphy, 2002). If the Airy disk

becomes larger than a single pixel, it begins to have a detrimental visual effect on the image (Murphy, 2002).
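The diffraction limit follows from the Airy disk diameter, commonly approximated as 2.44 · λ · N for wavelength λ and f-number N; a small sketch comparing it against a hypothetical pixel pitch (the 4.3 µm value is an assumption for illustration, not a measured sensor figure):

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550.0):
    """Approximate Airy disk diameter (to the first dark ring), in
    microns: d = 2.44 * lambda * N for wavelength lambda, f-number N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# A camera becomes diffraction limited once the Airy disk outgrows a
# single photosite. With a hypothetical ~4.3 um pixel pitch:
pixel_pitch_um = 4.3
for n in (2, 8, 16, 22):
    d = airy_disk_diameter_um(n)
    print(f"f/{n}: Airy disk {d:.1f} um, diffraction limited: {d > pixel_pitch_um}")
```

On this assumed pitch, f/2 is comfortably inside the limit while f/16 is well past it, which is the trade-off the following paragraph describes.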

Although a smaller aperture hole (higher f/stop number) is often desired as it

increases sharpness by reducing lens aberration, eventually the camera

becomes diffraction limited past a certain f/stop reading (Milanfar, 2010).


Figure 2-12 – Visual representation of an Airy disk produced

by diffraction; the central peak of highest light intensity used to

measure the hypothetical maximum resolution of a camera

(“Diffraction Limited Photography: Pixel Size, Aperture and Airy

Disks,” 2015)

Zee Khattak 11

2.4 RAW DSLR vs. Light Field Workflow Comparison

Key terms: RAW image workflow; Bayer filter array; CCD/CMOS devices.

The most common DSLR workflow for professional photographers is the RAW

workflow (Andrews, Butler, Butler, & Farace,

2006). In order for digital files to acquire the

brightness information of a scene, light-sensitive photosites are charged via CCD or CMOS devices in proportion to the amount of light hitting their surfaces (Ippolito, 2003). To capture

colour information, a Bayer pattern filter (25% red, 25% blue and 50% green) is present over the photosites (fig. 2-13), where the varying ratios of intensity determine the colour value for each pixel (Hirsch, 2012). RAW image files are

essentially the full, unprocessed data acquired

from the scene for both the luma values and

chroma values that are obtained via the

CCD/CMOS devices and Bayer filter method

(Dillard, 2008). In comparison, a JPEG file is a

compressed version of the RAW data that throws

out much of the colour information based on a discrete cosine transform (DCT),

designed to reduce file sizes with an acceptable degree of image resolution loss

(Salomon, Motta, & Bryant, 2007).
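The RGGB sampling described above can be sketched directly; a minimal illustration (hypothetical image, no demosaicing step) of the single colour value each photosite actually records through the filter:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-colour image through an RGGB Bayer pattern,
    keeping the one colour value each photosite actually records.
    rgb: array of shape (H, W, 3), with H and W even."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites   (25%)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites (50% in total,
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites  with row below)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites  (25%)
    return mosaic
```

A RAW file stores essentially this mosaic plus metadata; demosaicing (interpolating the two missing colours per pixel) happens later, in software.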

Although the files are significantly larger than their JPEG compressed state,

RAW offers the user greater control in post; including being able to recalibrate

the white balance (fig 2-14), with more control over exposure values due to the

image file containing the highest dynamic range possible for the camera (Taylor,

2012). Shifting the photographer’s workflow towards post-software processing

resembles one of the key philosophies behind the Lytro company: they believe in

acquiring the maximum data of a scene, no matter what the file size (Tellman, 2003).

Figure 2-13 – Highlighting the Bayer pattern filter, where each photosite is filtered by the RGGB Bayer pattern (Andrews et al., 2006).

Figure 2-14 - Example workflow for RAW image editing (Sumner, 2014).

Lytro cameras use similar Bayer filters to capture colour information;

however additional depth information is stored in the form of a depth map and

depth look-up table within the light field (LF) image file (Patel, 2012).

Depth-of-field (DOF) would usually be controlled by changing the aperture size

on a conventional camera; however, Lytro cameras have fixed f/2 apertures: due

to the complexity of computational photography, each microlens must be tuned to

the aperture opening itself: the size must be fixed as a result (Tellman, 2003).

Having a fixed wide aperture effectively eliminates the problem of diffraction

limiting at high f-stops (Lee, 2014), while allowing the user to later adjust the

aperture setting from f/1 to f/16 within Lytro Desktop (Anon, 2014). This also

enables the user to shoot at higher shutter speeds and requires lower ISO

settings compared to conventional cameras using high f-stops (Lee, 2014); Lytro

cameras instead require the user to change the focus distance or focal length of

the camera to affect DOF (Anon, 2014). This encourages photographers to think

about the composition of their photos in an unconventional way in order to

capture effective ‘living pictures’; pictures with high depth characteristics

including foreground, mid and background subjects (Northrup, 2015). The

camera also features an ‘instant’ shutter: it doesn’t require focusing motors, so it

effectively reduces the delay when the image is captured after pressing the

shutter button (Wang, 2014).

The light field workflow, however, doesn’t come without glitches: the more depth

complexity introduced into the scene, the more likely errors can occur such as

depth-mapping faults (fig. 2-15), (Northrup, 2015).

Figure 2-15 – Depth mapping problems have occurred in this image, where sections in the

foreground have been incorrectly mapped as part of the background, making them out of focus

and in-focus at the wrong depth level (Northrup, 2015).


These problems can be amended; however, they add further complexity to the

light field editing workflow, requiring additional software such as Photoshop, and

only after exporting in a compatible image format such as TIFF (Jarabo,

Bousseau, Pellacini, & Gutierrez, 2012).

2.5 Image Resolution Tests: Sharpness

Image resolution is measured in lines per millimetre (L/mm) or line-pairs per

millimetre (LP/mm) (Reichmann, 2009). It is often based on subjective opinion

(Galstian, 2013); however, it can be measured as a Modulation Transfer Function

(MTF) or Spatial Frequency Response (SFR) (Williams, Jones, Layer, &

Osenkowsky, 2007). MTF is a measure of the contrast between the spatial

frequencies, and how accurately a camera system can preserve the brightness

variations of a scene (fig. 2-16); the higher the contrast, the greater the perceived

image resolution (Reichmann, 2009). Sharpness is generally seen as the most

important quality for determining image resolution and how much detail is being

displayed (“Image Quality Factors - Imatest Documents,” 2014). It represents the

subjective perception of an edge (an edge profile), and can be measured using

MTF on applications like Imatest (Allen & Triantaphillidou, 2011). An ISO 12233

test chart can be set up in a controlled lab and photographed in order to test the

lens resolution; focusing on specific areas for software analysis, such as the

slanted edge lines of the ‘H’ shape for sharpness testing (fig. 2-17), (Bilissi &

Langford, 2013).
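MTF can be illustrated directly from its definition as preserved modulation, (Imax - Imin)/(Imax + Imin), of patterns of increasing spatial frequency. A toy sketch, with a hypothetical 3-tap blur kernel standing in for the camera system (this is not Imatest's slanted-edge algorithm, only the underlying idea):

```python
import numpy as np

def modulation(signal):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin)."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def mtf_at(freq_cy_per_px, blur=np.array([0.25, 0.5, 0.25])):
    """Ratio of output to input modulation for a sine pattern passed
    through a toy blur kernel standing in for the camera system."""
    x = np.arange(256)
    pattern = 1.0 + np.sin(2 * np.pi * freq_cy_per_px * x)  # input scene
    blurred = np.convolve(pattern, blur, mode='same')
    # Trim the edges to avoid the zero-padded boundary samples:
    return modulation(blurred[2:-2]) / modulation(pattern)

# Contrast is preserved at low spatial frequencies and lost at high ones:
print(mtf_at(0.01), mtf_at(0.25))
```

For this assumed kernel the contrast falls to 50% at 0.25 cycles/pixel: that frequency would be the system's MTF50.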

Figure 2-17 – An ISO 12233 test

chart (Bilissi & Langford, 2013), with

highlighted example sharpness

testing area.

Figure 2-16 - High (left) and low (right) spatial

frequency patterns (Galstian, 2013).


2.6 Literature Review

2.6.1 Digital Light Field Technology – Dissertation (Ng, 2006)

This dissertation, written by the founder of the Lytro Company, provides a valuable account of his developing theories of light field technology. As well as offering a technical introduction to light field capture via microlens arrays, the text also offers extensive reasons as to why this technology is needed, by emphasising the modern limitations of DSLRs.

‘I have tried to write and illustrate this dissertation in a manner that will hopefully make it accessible and interesting to a broad range of readers, including lay photographers, computer graphics researchers, optical engineers and those who enjoy mathematics.’ (Ng, 2006, p. 25)

As stated, the writer has considered the range of technical knowledge already possessed by the reader, and the text can therefore be understood on many different levels, increasing the usefulness of the source. A ‘roadmap’ is even provided (fig. 2-18), emphasising the use of illustrations and diagrams with captions that assist in the learning of technical information. This dissertation is for the degree of Doctor of Philosophy at Stanford University, verified by numerous advisors, and is therefore a trustworthy account. The writing is academic, with consistent references to previous writers in the field.

2.6.2 ‘Lytro, Light Fields, and The Future of Photography’ – Filmed Lecture (Tellman, 2003)

This filmed lecture provided a very useful introduction to the origins and

philosophies of the Lytro Company. It could be said that there is a bias in this

information as Sam Tellman is an image quality analyst for Lytro itself; he

occasionally uses sensationalised words to describe the technology, as if he

were selling it. However, the acknowledgement of limitations, such as the problems of low resolution, shows a more honest and scientific

approach to the technology, with room for improvement in the later models.

Figure 2-18 –

‘Roadmap’ for the

layout of the

dissertation (Ng, 2006)  


The talk features a slideshow that aims to explain how the technology works

through effective diagrams; the use of associating dates clearly puts the Lytro

Company in perspective with other developers in the same field, and within the

history of photographic technology (fig. 2-19). Additionally, information gained on

the company’s use of microlens arrays is some of the most comprehensible

found. Overall this lecture was very specific to the knowledge required for this

section of the report, and led to many other avenues of research.

The year of this lecture could be seen as out-of-date for the subject; however, it was given at the time of the release of the first consumer Lytro camera, and is therefore

still relevant to the discussion on the origins of the Lytro Company.

2.6.3 Spatio-Angular Resolution Tradeoff in Integral Photography – Journal Article (Georgiev et al., 2006)

The fundamental resolution trade-off problem affecting integral and light field

photography alike was defined from reading this

journal article by numerous developers involved

with Adobe Systems and their plenoptic

cameras. This features principal engineer at

Qualcomm Todor Georgiev, whose website

provided vast writings for further research;

mostly aimed at overcoming the same resolution

problem. He also features on the video

demonstration of Adobe’s 100+ megapixel

plenoptic cameras.

Figure 2-19 – The history of film photography (left), digital photography

(middle) and light field photography (right) (Tellman, 2003).

Figure 2-20 – A novel optical

device developed for Adobe

Systems in researching and

developing plenoptic

technology (Georgiev et al.,

2006)  


The main focus of this text is the main prototype: a multiple lens-prism optical

system (fig. 2-20). This shows a part of the development stage the research and

development team took before settling on using a microlens array, similar to

Lytro. The text is an academic report with a team of writers and numerous

development photographs, making it a reliable source.

2.6.4 Improving Resolution and Depth-of-Field of Light Field Cameras Using a Hybrid Imaging System – Report (Boominathan et al., 2014)

This more up-to-date report contains alternative theories for overcoming the ongoing resolution limitations of plenoptic cameras. The proposal is for a hybrid

camera system; combining a light field and a high-resolution DSLR camera in

order to use computation to super-resolve the light field camera by patching it

with high-resolution images from the latter. Although they provide promising

results, it is important to note that these are gained from a single research and

development experiment based on working theories, and have therefore not been

proven by multiple repeated experiments from different sources.

This academic text by writers at Rice University, Houston provided a useful

account of why microlens arrays have limited spatial resolution, used in the

development of this dissertation.

2.6.5 Lytro’s Light Field Tech Is Amazing, But Is It Already Obsolete? – Web Article (Hession, 2014)

This web-based article offers an interesting counter-argument to the merits of

light field technology that has previously been seen in other sources. The overall

argument is that the technology is only practical as a niche interest to

photographers, and that competing hardware/software companies that mimic its

capability will ultimately fill this niche, replacing the technology altogether.

Although it appears to be very opinionated with sensationalised writing and a lack

of technical research/references, it is nonetheless a valid opinion that continues

to divide end-users of Lytro cameras.

The credentials of the writer cannot be determined as there is no background

information present, however it was useful for bringing a fresh perspective to

plenoptic cameras that would otherwise not have been considered for further

research.


2.6.6 ‘RAW Workflow from Capture to Archives’ – Book (Andrews et al., 2006)

This details the RAW workflow from a photographer end-user perspective, and

was useful in gaining knowledge on the fundamentals of RAW image processing.

The book contains multiple editors, each with their accompanying achievements

in the field of photography and academic writing: increasing the reliability of the

information.

Although this book is slightly outdated, written in 2006, it still provides the

essential technology behind the RAW image format for the camera to be used in

this project: the Canon D60. The information is clearly spaced out in chapters

relating to the progression of the RAW workflow. High quality diagrams and

screenshots of the relevant software including Adobe Photoshop provide very

clear references to the more technical details covered in the text.

2.6.7 Three-Dimensional Television, Video, and Display Technologies – Book (Javidi & Okano, 2002)

This source provided useful information about the technology of integral

photography and integral imaging. The presence of two editors from different

parts of the world, the USA and Japan, highlights the reliability of the information.

The layout of each chapter follows that of academic literature writing, with an

introduction, citations and references at the end: useful for further research into a

particular field. The provided diagrams break up the technical text information,

however they are occasionally not clearly described in a step-by-step manner;

the book takes for granted a certain technical understanding already possessed

by the reader.

 


3 METHODOLOGY

3.1 Primary Research Experiments

Primary research will involve depth resolution lens tests of the Lytro Illum camera

at varying focal lengths, focus distances, and distances from the object.

Additionally the light field camera will be tested against the DSLR Canon D60 in

RAW photography mode to compare the lens performances of the two cameras

against an ideal lens performance.

The purpose of these experiments is to firstly test the lens depth resolution

capabilities of the Lytro Illum camera against the in-camera depth guide for

consistency in image quality. Secondly, the lens resolution differences between

the plenoptic and DSLR camera will be compared in order to discover the extent

of the spatial-angular resolution trade-off in the cameras. An image quality test

lab will be set up with a measured depth-scene to perform the quality checks,

photographing a resolution chart for image sharpness analysis. All images will be

examined with image analysis software Imatest after post-processing the light

field files with Lytro Desktop and the DSLR RAW files with iPhoto.

3.2 Hypothesis

Figure 3-1 - The refocusable range at 50mm focal

length (35mm equivalent), optical focus ~42cm

(“Depth Composition Features: The Refocusable

Range,” 2014).


According to Lytro’s guide to the depth refocusing capabilities of the Lytro Illum, it

is predicted that as the focal length of the camera is increased, the refocusable

range will become narrower, as seen in comparing the distance (mm) of the band

of colours between fig. 3-1 and fig. 3-2 (“Depth Composition Features: The

Refocusable Range,” 2014). According to fig. 3-1, for a 50mm lens at optical

focus ~42cm, everything within the band of colours (~240 – 1300mm distance) is

refocusable to provide optimally sharp images. The resolution during testing is

therefore expected to be consistently optimal within the camera-predicted range,

while gradually deteriorating outside of these ranges. The Lytro Illum’s in-camera

predicted range and the Lytro provided guide (fig 3-1) will ultimately be compared

to the findings. According to Josh Anon’s guide to operating the Lytro Illum, it is

predicted that the closer the focus distance is to the camera, the narrower the

refocusable range will become (Anon, 2014).

Depth values (-10 to +10) (fig. 3-3) are used to describe the depth characteristics

of a scene as seen by the camera; Lytro claims that at depth values between -4

and +4 the resolution will be as consistently high as possible, as displayed on all

devices, including computer monitors (Anon, 2014). The depth assist bar (fig. 3-

4) will be used as indication of the camera-predicted range, in comparison with

the resolution results in order to determine if the images are performing/under

performing as predicted.

Figure 3-2 – The refocusable range at 100mm focal length (35mm equivalent),

optical focus ~42cm (“Depth Composition Features: The Refocusable Range,”

2014).


In the test comparing the Lytro Illum with the Canon D60, it is expected that the Canon will have over double the pixel count; according to the megapixel ratio of 4:11, the Canon is expected to have ~65% higher linear resolution.
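The ~65% figure follows from linear resolution scaling with the square root of pixel count; a quick arithmetic check, assuming the stated 4:11 megapixel ratio:

```python
import math

# Pixel count ratio between the ~11 MP Canon and the ~4 MP Illum output:
lytro_mp, canon_mp = 4.0, 11.0
pixel_count_ratio = canon_mp / lytro_mp           # 2.75x the pixels
# Linear resolution (e.g. line widths per picture height) scales with
# the square root of the pixel count:
linear_gain = math.sqrt(pixel_count_ratio) - 1.0  # ~0.66, i.e. ~65% higher
print(f"{pixel_count_ratio:.2f}x pixels, {linear_gain:.0%} higher linear resolution")
```

This distinction matters when reading the results: MTF-based readings behave like linear resolution, not like pixel counts.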

3.3 Procedure

3.3.1 Equipment

Testing Lab:

• Lytro Illum camera (30-250mm lens)

• Canon D60 (50mm, 55-250mm lens)

• 2 x 32GB memory cards

• Studio with black background

• ISO 12233:2000 test chart

• Easel

• 2 x 200W Lilliput lights

• Light meter

• Manfrotto Tripod

• 8m tape measure

• Log book

Figure 3-3 – The depth values of a scene: increasing

and decreasing from the ‘0’ centre optical focus of the

lens (Connecting Depth to Living Pictures, 2015).

Figure 3-4 - The in-camera

refocusable range guide; blue

represents close-focus in front

of the optical focus, red

represents far-focus behind the

optical focus, as measured

from the camera’s sensor

plane (Composing for Depth,

2015).



Analysing Results:

• iMac 21.5-inch display

• Lytro Desktop software

• Imatest software

3.3.2 Setting Up

The tests will be carried out by setting up an image test lab in a studio and

photographing the ISO chart under optimal lighting. The chart will be set on an

easel against an ideally black background; the lighting will be set at a 20-45

degree angle to the test chart in order to provide even illumination, with no glare

(Brown, 2014) (fig. 3-5). The in-camera zebra guides set to 100% will be used to

serve as indication of over-exposure (Anon, 2014), which may disrupt the results.

The camera will be mounted on a Manfrotto tripod and set to manual mode (M) in

order to fix the shutter speed, ISO and to disable the exposure compensation

(Story, 2008). A measuring tape and log book will be used to record the varying

distances: starting at the sensor plane (behind the lens) (fig. 3-6, fig. 3-7), and

measuring to the front of the test chart (the object) (Elkins, 2013). A log will also

be kept of all the variables of the tests including: image number, focal length,

object distance and the predicted focus range of the camera.

Figure 3-5 - Guide to setting up an image test lab with optimal

lighting (“Using SFR Part 1: Setting Up and Photographing SFR

Test Charts,” 2015.)


3.3.3 Test 1

The Lytro camera will be formatted to ensure the recorded image number will

match the files contained on the SD card. Test 1 will proceed by setting the Lytro

Illum at a constant focal length of 30mm using the zoom dial, firstly with the focal

plane 40cm away from the test chart (object), aligning the test chart centrally.

The range of distances-of-focus will be captured in sequential order; taking a

photo at 2 varying focus points above the minimum 40cm. The lower range of

focus distances below 40cm cannot be measured, as a minimum distance from

the chart must be kept in order to provide reliable results; ensuring the lens

accuracy will be tested instead of testing the print quality of the chart (“Using SFR

Part 1: Setting Up and Photographing SFR Test Charts,” 2015).

The first photo will be taken, logging the photo number and corresponding test

variables, then the focus changed to the second focal distance and a second

photo will be taken. This will be repeated at the varying distances: 60, 90, 120 up

to 210 cm; of which this practical limit is chosen in order to ensure an appropriate

edge can still be analysed with the Imatest software, as discovered in the

preliminary testing phase.

3.3.4 Test 2

Test 2 will repeat the same procedure as Test 1, this time setting the Lytro

camera to focal length 50mm.

3.3.5 Test 3

The Lytro camera will be set to focal length 80mm, and the procedure repeated.

Figure 3-6 - Mark indicating

the sensor plane (Elkins,

2013).

Figure 3-7 – First-hand photograph: the

sensor plane mark on the Lytro Illum.


3.3.6 Test 4

The Canon DSLR will be mounted on the tripod, and fixed at an optimal distance

130cm from the test chart at 50mm focal length, matching the f/2 aperture of the

Illum, and ensuring the ISO and shutter speeds are noted as the same. The

camera will be auto-focused onto the chart, and the image taken. A number of

images will be taken with the same position as backup files for analysis. The

Lytro Illum will be set up, matching exactly the above settings of the DSLR, and multiple images taken at these optimal settings in order to perform comparative

resolution analysis.

3.3.7 Analysing Results Procedure

The resultant images will be uploaded onto an iMac computer. Opening the light

field images on Lytro Desktop, each image will be focused by clicking on the test

chart region, and exported as a flattened TIFF file. The RAW DSLR files will be

uploaded to iPhoto, and similarly processed: exporting them as TIFF files for fair

analysis. The TIFF files will be individually analysed using Imatest’s SFR function

in order to test the sharpness of the images (fig. 3-8); SFR chart testing uses

slanted edge (light-to-dark) targets to test the sharpness factor of images (Peres,

2013). The region of interest ‘H’ shape will be sampled by clicking and dragging

the area to be tested (fig. 3-9); a horizontal or vertical slanted line is used for

testing sharpness, avoiding completely vertical, horizontal and 45 degree edges

as this can cause errors (Roland, 2015). The resultant region of interest can be

finely adjusted to be central in the frame (fig. 3-9 enlarged).


In the settings, the resolution units will be set to LW/PH: line-widths per picture-

height, turning off chromatic aberration and noise displays in order to only display

sharpness results: the edge/MTF graph plots (fig. 3-10). ‘Edge roughness

analysis’ will be turned on; this reading will be logged to make up the results of

the resolution testing for each image (fig. 3-11).

Figure 3-8 – Imatest SFR function

to test sharpness of images

(highlighted).

Figure 3-9 – Sampling the

region of interest for edge

analysis.

Figure 3-10 – Settings for SFR graph display; edge roughness

and edge/MTF plot.


3.4 Limitations

A number of factors may affect the accuracy of the proposed method of testing

the resolving depth capabilities of the Lytro Illum. Firstly, changing the distance

from the test chart may affect the readings gathered from Imatest, as the

software relies on optimal test lab settings for accurate results (“Using SFR Part

1: Setting Up and Photographing SFR Test Charts,” 2015). Secondly, to a certain

degree testing will rely upon subjective impressions of what is ‘in-focus’ in order

to determine which results are providing the most accurate representation of the

scene, and to lay constraints as to what is acceptably in-focus and out-of-focus

out of these readings.

3.5 Solutions/Improvements: Preliminary Testing

Preliminary testing was set up in order to limit and find solutions to the discussed

issues, using a combination of subjective in-focus testing and software analysis

to determine the most reliable readings for the final results.

Figure 3-11 – Final edge profile graph with RMS Edge

Roughness reading (enlarged).


3.5.1 Preliminary Test 1 – Constant Focus, Variable Distance

Focus Range (cm)  Focal Length (mm)  Object Distance (cm)  Edge Profile Rise (pxls)  MTF50 (cy/pxl)  RMS Edge Roughness (pxls)
55-INF            30                 40                    5.95                      0.0868          0.0359
55-INF            30                 60                    3.82                      0.128           0.0333
55-INF            30                 150                   2.19                      0.206           0.0376

Table 3-1 – The results of preliminary test 1.

Focus range 55-INF, Focal length 30, Object distance 40. Subjective focus: IN-FOCUS
Focus range 55-INF, Focal length 30, Object distance 60. Subjective focus: IN-FOCUS
Focus range 55-INF, Focal length 30, Object distance 150. Subjective focus: IN-FOCUS

Figure 3-12 – Sampled and analysed regions of 3 subjectively in-focus images at constant focus ranges, focal lengths and variable distances.


Based on the first preliminary test analysing the results on Imatest (fig. 3-12), it

has been discerned that despite the subjective impression of the charts being in-

focus for a constant focal length and focal range, the Edge Profile Rise reading

and MTF50 readings do not give an accurate representation, as they

appear to be diminishing in size (table 3-1); essentially showing an increase in

resolution (Nasse, 2008). This does not support the findings, and can be

attributed to the one variable: the fact that the chart sample area is becoming

consistently smaller as the distance from the object increases. On the other

hand, the RMS Edge Roughness reading is consistent, and can therefore be

assumed to be accurately representing the resolution of the in-focus charts;

however an out-of-focus test must be first undertaken for comparison.

3.5.2 Preliminary Test 2 – Changing Focus, Constant Distance

Focus range 110-INF cm, Focal length 50, Object distance 90. Subjective focus: IN-FOCUS
Focus range 28-60cm, Focal length 50, Object distance 90. Subjective focus: SLIGHTLY OUT-OF-FOCUS
Focus range 15-18cm, Focal length 50, Object distance 90. Subjective focus: VERY OUT-OF-FOCUS

Figure 3-13 – Sampled and analysed regions of 3 variably focused images at constant distances, focal lengths and variable focus ranges.


Focus Range (cm)  Focal Length (mm)  Object Distance (cm)  Edge Profile Rise (pxls)  MTF50 (cy/pxl)  RMS Edge Roughness (pxls)
110-INF           50                 90                    4.67                      0.105           0.0267
28-60             50                 90                    5.82                      0.0846          0.0414
15-18             50                 90                    21.37                     0.024           0.138

This second preliminary test supports the assumption that the Edge Profile Rise

and MTF50 readings are dependent on maintaining a constant distance from the

object, as the results are now as expected from the subjective reading of the

charts with variable focus ranges (fig. 3-13). The 10-90% Edge Profile Rise, an

average edge spatial response measured in pixels, is shown to be increasing

(table 3-2); indicating worsening edge performance and therefore resolution

(“SFR - Imatest Modules,” 2015). The MTF50, a frequency response reading

where spatial contrast falls to 50%, is shown to be decreasing (table 3-2);

indicating a loss in resolution from the ideal 0.5 cycles/pixel Nyquist frequency

(Bertalmío, 2014). However, for consistency between the tests it is necessary to

only record the RMS Edge Roughness reading, which again appears to be

correctly worsening (increasing in pixel roughness) as the resolution falls (table

3-2). It is decided from the results of this preliminary test that the minimum

standard for acceptably in-focus, high resolution images is 0 to 0.04 RMS, as

anything above appears soft, such as the second image in this test with an RMS

reading of 0.0414 (table 3-2).
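The Edge Profile Rise and MTF50 readings used above can be sketched numerically; a heavily simplified illustration on synthetic edges (Imatest's real SFR module adds edge detection, oversampling and windowing, none of which is attempted here):

```python
import numpy as np

def rise_10_90(esf):
    """10-90% Edge Profile Rise in pixels: the distance over which the
    normalised edge-spread function climbs from 0.1 to 0.9."""
    e = (esf - esf.min()) / (esf.max() - esf.min())
    return int(np.argmax(e >= 0.9) - np.argmax(e >= 0.1))

def mtf50(esf):
    """Lowest spatial frequency (cycles/pixel) at which the MTF derived
    from the edge falls below 50% contrast."""
    lsf = np.diff(esf)                 # edge spread -> line spread
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                 # normalise to 1 at DC
    freqs = np.fft.rfftfreq(len(lsf), d=1.0)
    if not np.any(mtf < 0.5):
        return freqs[-1]               # never drops to 50%: Nyquist-limited
    return freqs[np.argmax(mtf < 0.5)]
```

A sharper edge yields a smaller rise and a higher MTF50, which is why the rise increases and the MTF50 falls as the charts go out of focus in table 3-2.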

3.6 Final Limitations of Testing Procedure & Improvements

After preliminary testing, it is clear that with the current scheme of photographing resolution charts, the minimum/maximum distance limits cannot be overcome: Imatest requires them in order to avoid testing the print quality or the roughness of the paper, and to maintain an acceptably sized sample region. The macro and long-range distances of the camera therefore cannot be analysed.

Imatest’s online documents describe the reading of RMS Edge Roughness as ‘a

promising measurement related to image quality’ (“SFR results: Edge and MTF

(Sharpness) Plot,” 2015, p. 4); however, since it is a relatively new measurement

‘there isn’t a lot of data for comparison’ (“Using SFRplus Part 3: Imatest SFR

Plus Results,” 2015, p. 14). Although using the guide of subjective aesthetic
opinion alongside this new form of image sharpness testing perhaps limits the
reliability of the method, the preliminary testing nonetheless provided promising
results for comparison.

Table 3-2 – The results of preliminary test 2.

As the preliminary testing showed, once the distance from the test chart was

fixed, the other MTF data acquired became usable; the MTF50 reading will
therefore be used instead of the RMS reading in the comparison between the
light field and DSLR cameras in Test 4.

3.7 Alternative Approaches

Alternative methodologies were considered in the developmental phase of this
project, given the main objective of testing the spatial and depth resolution of
the Lytro Illum camera in comparison with the Canon D60. The main approaches
considered were as follows:

1. Using the image processing software Matlab to analyse the light field

image metadata in order to determine the depth information data,

comparing this with conventional DSLR metadata.

Methods of opening the image data of the light field file have been undertaken

before, demonstrating the uniqueness of the LF coding (Patel, 2012). However,
for this project the limitations of this method include the fact that the results
would take the form of complex metadata, which is difficult to describe and
represent as quantitative figures for analysis.
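As a rough illustration of approach 1, the sketch below scans a binary blob for embedded JSON metadata blocks, in the spirit of Patel's (2012) reverse engineering of the .LFP file format. The byte layout and field names here are illustrative assumptions, not the real Lytro format.

```python
import json

def extract_json_blocks(raw: bytes):
    """Scan a binary blob for decodable top-level JSON objects."""
    text = raw.decode("latin-1", errors="ignore")
    blocks, i = [], 0
    while (start := text.find("{", i)) != -1:
        depth, end = 0, -1
        for j in range(start, len(text)):
            if text[j] == "{":
                depth += 1
            elif text[j] == "}":
                depth -= 1
                if depth == 0:
                    end = j
                    break
        if end == -1:
            break  # unbalanced tail; nothing more to extract
        try:
            blocks.append(json.loads(text[start:end + 1]))
            i = end + 1
        except ValueError:
            i = start + 1  # false positive; resume scanning past this brace
    return blocks

# Synthetic stand-in for a light field file: binary padding around JSON
# metadata blocks (the field names are purely illustrative).
sample = b'\x00\x01pad{"focalLength": 30}pad{"iso": 80}\xff'
print(extract_json_blocks(sample))  # [{'focalLength': 30}, {'iso': 80}]
```

Even with such extraction working, the recovered metadata would still need interpreting into quantitative figures, which is the limitation noted above.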

2. Investigating means of turning a DSLR into a depth-capturing light field

device; creating an artefact of a light field image with the full RAW

resolution of DSLR photography.

Such methods have been attempted before, successfully stacking DSLR images
to create a light-field-mimicking image, with simple online applications
available to repeat these experiments (Vaish & Adams, 2008). This

demonstration was considered too simple for the scope of this project, with little

to conclude on. On the other hand, creating a truly hybrid system of a DSLR that

can capture the light field in a single shot, such as is proposed by Boominathan

et al., was considered far beyond the scope of the project.

3. Using Lytro Desktop to export multiple stacked TIFF files for each light

field image of the ISO resolution chart, analysing their relative resolutions


with Imatest to determine the depth capture capabilities of the camera (at

differing distances, focus ranges and focal lengths).

This methodology seemed an effective testing scheme; however, it would have
made the workflow more complex. The methodology proposed in this report was
deemed more suitable given time constraints, and was chosen on the basis of
testing a constant focal length setting at variable distances.

4. Performing a range of image quality tests on the Lytro Illum and Canon

D60; including colour accuracy, noise, dynamic range and chromatic

aberration.

Multiple other factors such as those listed above also affect final output viewing

resolution of images (“Image Quality Factors - Imatest Documents,” 2014).

However due to the already complex method of analysing the depth resolution of

the Lytro Illum in tests 1 through 3, and considering time constraints, it was

deemed that focusing on the one factor of sharpness was suitable for the scope

of this project. Sharpness is, after all defined as one of the most, if not the most

important factor affecting perceived image quality (“Image Quality Factors -

Imatest Documents,” 2014).

(Continued)

Zee Khattak 31

4 Results

4.1 Tables

Key: Over-Performance / Acceptably In-Focus / Under-Performance / Out-Of-Focus / Anomaly

4.1.1 Test 1

LYTRO ILLUM, FOCAL LENGTH 30 mm

Image No.   Focal Length (mm)   Object Distance (cm)   Focus Distance Range (cm)   RMS Edge Roughness (pxls)
1           30                  40                     55 - INF                    0.0371
2           30                  60                     55 - INF                    0.0326
3           30                  90                     55 - INF                    0.0718
4           30                  120                    55 - INF                    0.0437
5           30                  150                    55 - INF                    0.0372
6           30                  180                    55 - INF                    0.0611
7           30                  210                    55 - INF                    0.0784
8           30                  40                     20 - 60                     0.0754
9           30                  60                     20 - 60                     0.0860
10          30                  90                     20 - 60                     0.0436
11          30                  120                    20 - 60                     0.0366
12          30                  150                    20 - 60                     0.0642
13          30                  180                    20 - 60                     0.0775
14          30                  210                    20 - 60                     0.1120

4.1.2 Test 2

LYTRO ILLUM, FOCAL LENGTH 50 mm

Image No.   Focal Length (mm)   Object Distance (cm)   Focus Distance Range (cm)   RMS Edge Roughness (pxls)
15          50                  40                     110 - INF                   0.0741
16          50                  60                     110 - INF                   0.0464
17          50                  90                     110 - INF                   0.0239
18          50                  120                    110 - INF                   0.0806
19          50                  150                    110 - INF                   0.0474
20          50                  180                    110 - INF                   0.0295
21          50                  210                    110 - INF                   0.0391
22          50                  40                     28 - 60                     0.191
23          50                  60                     28 - 60                     0.0748
24          50                  90                     28 - 60                     0.042
25          50                  120                    28 - 60                     0.0328
26          50                  150                    28 - 60                     0.0614
26          50                  180                    28 - 60                     0.0619
27          50                  210                    28 - 60                     0.133

4.1.3 Test 3

LYTRO ILLUM, FOCAL LENGTH 80 mm

Image No.   Focal Length (mm)   Object Distance (cm)   Focus Distance Range (cm)   RMS Edge Roughness (pxls)
67          80                  40                     200 - INF                   1.37
71          80                  60                     200 - INF                   0.0839
75          80                  90                     200 - INF                   0.0369
79          80                  120                    200 - INF                   0.0356
83          80                  150                    200 - INF                   0.0315
91          80                  180                    200 - INF                   0.0287
95          80                  210                    200 - INF                   0.0289
99          80                  250                    200 - INF                   0.0287
68          80                  40                     60 - 180                    0.0973
76          80                  60                     60 - 180                    0.0703
72          80                  90                     60 - 180                    0.103
80          80                  120                    60 - 180                    0.051
84          80                  150                    60 - 180                    0.0401
92          80                  180                    60 - 180                    0.0418
96          80                  210                    60 - 180                    0.0473
100         80                  250                    60 - 180                    0.054

 

4.1.4 Test  4  

                           Lytro  Illum  

           Image  No.    

FL  (mm)    

Object  Dist.  (cm)    

ISO    

Shutter    

Ap.    

MTF  50  (cy/pxl)    

104   50   130   100   1/100   f/2   0.168  

             Canon  D60              1   50   130   100   1/100   f/2   0.32  

LYTRO  ILLUM  FOCAL  LENGTH  80  mm  

LYTRO  ILLUM  VS.  CANON  D60  LENS  PERFORMANCE  


4.2 Graphs

4.2.1 Test 1

Focal Length 30 mm - RMS Edge Roughness (pxls) vs. Object Distance (cm)

[Graph: RMS Edge Roughness (pxls) against Object Distance 40-210 cm for focus ranges 55-INF and 20-60 cm, with the area of acceptable focus marked.]

4.2.2 Test 1 Ideal Performances:

[Graph: ideal RMS Edge Roughness performance over the same distances and focus ranges, with the area of acceptable focus marked.]


4.2.3 Test 2:

Focal Length 50 mm - RMS Edge Roughness (pxls) vs. Object Distance (cm)

[Graph: RMS Edge Roughness (pxls) against Object Distance 40-210 cm for focus ranges 110-INF and 28-60 cm, with the area of acceptable focus marked.]

4.2.4 Test 2 Ideal Performances:

[Graph: ideal RMS Edge Roughness performance over the same distances and focus ranges, with the area of acceptable focus marked.]


4.2.5 Test 3:

Focal Length 80 mm - RMS Edge Roughness (pxls) vs. Object Distance (cm)

[Graph: RMS Edge Roughness (pxls) against Object Distance 40-250 cm for focus ranges 200-INF and 60-180 cm, with the area of acceptable focus marked.]

4.2.6 Test 3 Ideal Performances:

[Graph: ideal RMS Edge Roughness performance over the same distances and focus ranges, with the area of acceptable focus marked.]


4.2.7 Test 4:

[Bar chart: MTF 50 (cy/pxl) comparative lens performance for the Lytro Illum, Canon D60 and an ideal lens.]

4.3 Depth Scene Charts:

4.3.1 Test 1 - Focal Length 30mm

[Depth scene charts (distances in cm) for camera-predicted focus ranges 55-INF cm and 20-60 cm.]


4.3.2 Test 2 - Focal Length 50mm

[Depth scene charts (distances in cm) for camera-predicted focus ranges 110-INF cm and 28-60 cm.]

4.3.3 Test 3 - Focal Length 80mm

[Depth scene charts (distances in cm) for camera-predicted focus ranges 200-INF cm and 60-180 cm.]


5 Discussion

5.1 Interpreting the Results

5.1.1 Test 1

Comparing fig. 5-1 with the ideal performance fig. 5-2, at the area of acceptable
focus (0 – 0.04 RMS) it is clear that the camera has under-performed at both

focal ranges. According to the ideal, the 55-infinite (red plot) should have been in

focus in all but the 40cm range (being out of the predicted focus range). Although

it over-performed at distance 40cm with 0.0371 RMS, only two other distances

were in focus (60 and 150cm) (fig. 5-3, 1). Another notable over-performance of

the camera lies within the 20-60cm range, at distance 120cm from the chart a

0.0366 RMS reading gave an in-focus image, despite being double the predicted

maximum range (fig. 5-3, 2).

Figure 5-1 – Comparative graph of 2 focal ranges (55-infinite, 20-60cm) at 30mm
focal length.

Figure 5-2 – Ideal graph performance.

Figure 5-3 – Two depth scene charts at camera-predicted focal ranges 55-Infinite

and 20-60cm at 30mm focal length.
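The in-focus distances cited above can be re-derived mechanically from the Test 1 table by applying the 0.04 RMS acceptance threshold established in the preliminary tests. A minimal sketch, using the 55-infinite focus range readings:

```python
# Test 1, focus range 55-INF: object distance (cm) -> RMS Edge Roughness (pxls),
# values taken from table 4.1.1. Readings at or below the 0.04 RMS threshold
# count as acceptably in focus.
readings = {40: 0.0371, 60: 0.0326, 90: 0.0718, 120: 0.0437,
            150: 0.0372, 180: 0.0611, 210: 0.0784}
in_focus = sorted(d for d, rms in readings.items() if rms <= 0.04)
print(in_focus)  # [40, 60, 150] -- the three in-focus distances noted above
```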


Comparing the first chart to the second, it is not clear whether the refocusing
range decreases as the optimal focus distance is decreased; the first chart
contains a minimum in-focus image at 40cm and a maximum at 150cm, while
the second has an unexpected single in-focus image at 120cm. This makes it
difficult to settle the hypothesis that closer focus distances decrease the
refocusing range, despite the in-camera guide showing this to be the case.

5.1.2 Test 2

Figure 5-4 - Comparative graph of 2 focal

ranges (110-infinite, 28-60cm) at 50mm focal

length.

Figure 5-5 - Ideal graph performance.

Figure 5-6 - Two depth scene charts at camera predicted focal ranges 110-Infinite

and 28-60cm at 50mm focal length.


Test 2 also displays a mixture of over-performance and under-performance of the

image resolution, this time with more as-predicted results at the focus range 110-

infinite (blue bands, fig. 5-6, 1). The graphs again display a notable change from

the ideal, particularly at the 120-150 region for the 110-infinite range, with

readings of 0.0806 and 0.0474 RMS: out-of-focus despite being within the
predicted region (fig. 5-4). The 28-60cm focus range, which had evidently
narrowed from the 20-60cm range of Test 1 as the focal length increased, again
saw over-performance at the 120cm distance (fig. 5-6, 2). One anomaly occurred at the

object distance 40cm for range 28-60cm: the image was expected to be in-focus;
however, a large RMS reading of 0.191 was present. Although the image

appeared subjectively sharp, on inspecting the region it was decided that the

results were inaccurate as Imatest was measuring the roughness of the paper

and print quality over the performance of the lens (fig. 5-7) (“Using SFR Part 1:

Setting Up and Photographing SFR Test Charts,” 2015).

Figure 5-7 – Details of the anomaly: with a combination of low distance and
high focal length, the jagged details of the paper and print quality were
incorrectly analysed in Imatest (“Using SFR Part 1: Setting Up and
Photographing SFR Test Charts,” 2015).


At optical focus ~42cm (roughly equating to the middle of the 28-60 range),
interestingly far from the refocusing guides provided by Lytro, the camera
predicted a range of only 280-600mm, compared with the in-camera guide of
210-1300mm. Although the lens results clearly under-performed in general

compared to the guide (fig. 5-8), the guide was accurate in predicting that the

maximum distance an image will be in focus is ~1300mm, which is roughly what

was seen with the over-performance reading (green band) at 1200mm (fig. 5-9).

This indicates that the in-camera predicted ranges are only rough estimates, and

not wholly reliable, which is a fact that Lytro readily admits to (“Depth

Composition Features: The Refocusable Range,” 2014). The Lytro provided

guide (fig. 5-8) on the other hand seems to be a more accurate representation of

the refocusing range at this focal length and optical focus.


Figure 5-8 - The refocusable range from Lytro at

50mm focal length (35mm equivalent), optical focus

~42cm (“Depth Composition Features: The

Refocusable Range,” 2014).

Figure 5-9 – The results at 50mm focal

length, focus range 28-60cm (280-

600mm).


5.1.3 Test 3

The results for Test 3 saw the notable over-performance of the lens at focal

length 80mm, with a 200-infinite camera-predicted focal range. Despite
predicting only a narrow band within the testing region (above 200cm) and
successfully capturing it (blue band, fig. 5-12, 1), the lens captured in-focus images between

distances 90 and 180cm from the test chart (green band, fig. 5-12, 1). This is a

large close-distance increase in depth resolution capture for the camera at this

focal length. Conversely, at the same focal length with a predicted range of 60-

180cm (within the region which was successfully captured by the higher range),

the camera performed the worst overall of the testing procedures, with no in-

focus regions at all (red band, fig. 5-12, 2). The radical departure of both
plotted lines from the ideal (fig. 5-10, 5-11) indicates a vast difference in lens
performance when set to differing focus ranges, despite a constant focal length.

Figure 5-10 - Comparative graph of 2

focal ranges (200-infinite, 60-180cm) at

80mm focal length.

Figure 5-11 - Ideal graph performance.

Figure 5-12 - Two depth scene charts at camera predicted focal ranges 200-Infinite

and 60-180cm at 80mm focal length.


5.1.4 Test 4

Comparing the ideal lens MTF50 (0.5 cy/pxl) against the Lytro Illum and Canon
D60, it is clear that the DSLR is closer to the ideal (fig. 5-5) at 0.32 cy/pxl, while
the Illum rated 0.168 cy/pxl. This is a percentage drop in sharpness resolution
for the Lytro Illum of 47.5%: not as significant as the predicted resolution
difference of 65%.
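These percentages can be checked with a few lines of arithmetic; the predicted figure follows from the 4:11 output-megapixel ratio cited later, which works out at roughly 64%, quoted as 65% in this report.

```python
illum, canon, ideal = 0.168, 0.32, 0.5   # MTF50 readings in cycles/pixel
measured_drop = 1 - illum / canon        # Illum relative to the Canon D60
predicted_drop = 1 - 4 / 11              # from the 4:11 output-megapixel ratio
print(f"measured drop:  {measured_drop:.1%}")   # 47.5%
print(f"predicted drop: {predicted_drop:.1%}")  # 63.6%, quoted as ~65%
```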

5.2 Consensus & Explanation of Results

Following the outcome of the results, the overall consensus is that the Lytro

Illum’s refocusability has underperformed in most focus ranges at variable focal

lengths, with one notable exception of all-round over-performance. The large

underperformance seen at variable focal lengths and focus ranges for

maintaining optimal resolution in the Lytro Illum may be attributed to incorrect
lens calibration or computational errors of reconstruction (Bishop & Favaro,
2009; Cho, Lee, Kim, & Tai, 2013).

Depth-mapping errors within the computation of the light field image, whether
in the in-camera sensor capture phase or the software-processing phase, may
have contributed to the loss of image resolution (Bishop & Favaro, 2009). As Lytro

describes, depth-mapping errors are more likely in scenarios of photographing

very reflective surfaces, shadows or geometrical irregularities (“First Generation

Image Quality part 4: Depth Maps and Depth-Map Errors,” 2014). However, the

clear correlation between focal length and focus range settings to the over/under

performance of the camera makes the reason most likely lie with the lens itself;

these are after all variables affected by the lens settings. According to Cho et al:

Figure 5-5 – An ideal lens performance

against the Lytro Illum and Canon D60

MTF50 reading.


Due to manufacturing defection, it is common to have a micro-lens

array that does not perfectly align with image sensor coordinates.

(Cho et al., 2013, p.1).

An error in the microlens calibration may be the most comprehensive explanation

as to why the lens performance vastly differed at one focal point and focus range

over others.

With lens calibration being the issue that affected the optimal resolution
performance of this particular camera, it is not possible to suggest an ideal
focal length or focus range scenario for in-field photographers; each lens is
likely to be calibrated differently. It does, however, draw attention to the
importance of careful microlens tuning to the sensor (Tellman, 2003); incorrect
calibration can vastly affect the quality of light field images, and the user’s
experience of the camera.

According to the test results of this project, companies looking to invest in the

research and development of plenoptic camera technology should be wary of the

sensitivity of the lens system to calibration errors. It is possible that the
particular camera tested was defective and that this is a rare occurrence;
however, the consensus among professional and casual photographers alike is
that light field images frequently suffer problems affecting viewing resolution
(Northrup, 2015), which makes this appear a more prevalent issue.

Bruce Devlin, the chief media scientist at Dalet Digital Media Systems, was
interviewed for his opinions on light field images, and on whether Dalet has
invested or is looking to invest in the technology. His responses provided a
very interesting insight into the media company’s view of the new imaging
medium from an industry workflow perspective. His main opinion correlates
with the findings: Dalet does not use light field images in its current products,
despite following the research and finding it intriguing, as it is:

…not quite ready for commercial High Definition or Ultra High

Definition use. We have found that some of the artefacts are not

naturalistic and would not be appropriate for some of the content

that we handle (Devlin, 2015).


The output resolution of light field images being below the now standardised HD

viewing resolution is therefore what has prevented an established company such

as Dalet from investing in the technology. Although the results indicate the
resolution difference between the Lytro Illum and DSLR is not as significant as
was projected, the percentage difference in resolution is still around 50%: a
sizable decrease.

Additionally, Devlin argues that unpredictable artefacts and distortions from

improperly computed images can make them appear unnatural, especially in

comparison with DSLR images, which we are now conditioned to accept as

everyday (Devlin, 2015). However, Devlin sees potential merit in light field
images’ post-production appeal, especially for niche applications such as
hard-to-capture news reporting footage or sports events (Devlin, 2015).


6 Conclusions

6.1 Overall Findings

Contrary to the first hypotheses, that the resolution would be consistent within
the focal ranges and gradually deteriorate outside of these distances, and that
an increased focal length would decrease the refocusing range, the results are
too mixed to offer any real correlation between focal length and the acceptable
refocusable range of the camera. Considering the hypothesis that the
refocusable range increases as the camera’s focus distance is decreased, the
in-camera depth guide confirms this prediction (Anon, 2014); however, the
sparse over/under-performance results throughout the tests make this difficult
to confirm.

The large over-performance of the lens at focal length 80mm, with focus
predicted within the 200-infinite range, alongside a significant
under-performance at the same focal length for focus range 60-180cm,
questions the reliability of the camera to offer the user images of consistent,
optimal resolution in-field. Although Lytro states that the in-camera focus range
offers the user a shooting guide only (“Depth Composition Features: The
Refocusable Range,” 2014), the all-round failure of the latter range to offer even
a single in-focus image nonetheless points to a microlens calibration error
(Cho et al., 2013). The Lytro-provided depth guide chart at 50mm focal length
and ~42cm optical focus did, however, prove somewhat more realistic than the
in-camera guide.

The percentage drop in sharpness resolution between the Lytro Illum and Canon

D60 of 47.5% is not as significant as the predicted 65% resolution difference, as

acquired from the 4:11 output megapixel ratio (Crisp, 2014). This result has

nonetheless demonstrated the limitations of the spatial resolution of the plenoptic

camera, which result from the spatial-angular resolution trade-off of using
microlens array technology (Georgeiv et al., 2006).

6.2 Reflection

The main philosophy behind plenoptic cameras and light field photography is to

never have to worry about taking an out-of-focus image again (Ng, 2006).

However, if the very resolution of the images is compromised under controlled
lab testing due to an error in lens calibration, as seen in the results of this
project, it is very likely that in-field photographers will experience consistently
less-than-desirable photographs with low resolution and blurriness. The testing

procedures have proven the spatial resolution limitations of the light field camera

compared with a DSLR, but also the unreliability of the camera to perform at its

optimal resolution at varying focal lengths and focus ranges.

Taking the Lytro Illum out into the field (see appendices), the resolution

limitations often became apparent; especially in low-light situations. However, the

advancements in megapixel capabilities shown in Adobe’s plenoptic cameras

seem to indicate an advancing technology that may one day have a strong

competitive edge over diffraction-limited DSLRs (Milanfar, 2010), (Salesin &

Georgiev, 2011). It is the experience of using plenoptic technology that sets it

apart from any other form of photography or video technology; being able to

change the distance-of-focus of a photograph essentially re-invents the rules for

traditional concepts of composition, as the viewer can be self-guided through

multiple points of interest and parallax changes (Harris, 2012). Computational

photography allows the user to make focusing and depth-of-field alterations

within supporting software that would otherwise be impossible, as they seemingly

defy the laws of the physics of light that dictate limits for traditional photographic

means (Northrup, 2015). By capturing large amounts of light field information for

every picture, future advances in software development will likely stretch the

limits of what is possible in post production photography, while fixing existing

processing errors such as depth-mapping problems (Tellman, 2003).

It is for these reasons, and considering the relative immaturity of the technology

at present, that it is believed plenoptic camera technology, or at least its
unique capabilities, will one day be common amongst professional and
nonprofessional photographers alike. Whether this technology will become a

hybrid with current DSLRs (Boominathan et al., 2014), make a significant move

into light field capturing video (Wilburn, 2001), or be replaced by competing

depth-capturing hardware and software (Hession, 2014) is yet to be seen.


7 Recommendations for Further Work

As seen in section 3.7 discussing the alternative methodologies considered,
further work could be undertaken using a sharpness resolution method similar
to that seen in this project, this time using a fixed number of distances in the
testing procedure, which may have limited the results. Using a fixed distance, firstly at 40cm from the

chart, the camera’s focal length can be aligned to fit the chart optimally within the

frame; increasing the number of useable figures (including the Edge Profile and

MTF 50 reading) acquired by the software Imatest. The focal length is then noted

and multiple images are taken at variable optimal focus points in the frame. The

camera’s distance is then changed; this time requiring a longer focal length to fit

the chart optimally within the frame; the test procedure is repeated a number of

times. Opening the images in Lytro Desktop, the composite depth images that

make up a single light field file can be exported as separate TIFF files for

comparative resolution analysis in Imatest; the combination of which displays the

depth resolution characteristics for each image. Although a lengthy procedure, it

is hypothesised that this would be a reliable alternative means of analysing the

depth resolution captured by the camera. Limitations to this method include the

fact that a consistent focal length in comparison with changing distances from the

object cannot be measured, as was the premise of this project’s testing scheme.

Furthermore, resolution testing could be devised for overcoming the minimum

and maximum distance set for the project’s procedure; testing the macro and

long-distance refocusing capabilities of the camera would require testing
schemes altogether different from photographing resolution charts.
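As a sketch of how the proposed TIFF-slice analysis might be automated, the following scores each exported depth slice with a simple gradient-energy measure, standing in for Imatest's sharpness readings; the scene, blur model and file names are synthetic placeholders, not real Lytro Desktop exports.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy sharpness score (a crude stand-in for an MTF reading)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def blur(img, passes):
    """Cheap neighbour-averaging blur to simulate defocused depth slices."""
    out = img.copy()
    for _ in range(passes):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5
    return out

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
# Simulated exported depth slices: progressively blurred copies of one scene.
slices = {f"slice_{k}.tiff": blur(scene, k) for k in range(4)}
best = max(slices, key=lambda name: sharpness(slices[name]))
print(best)  # the unblurred slice scores sharpest
```

The combination of such per-slice scores across distances would display the depth resolution characteristics of each light field image, as the recommendation above describes.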


8 References

Adelson, E.H. & Bergen, J.R., 1991. The Plenoptic Function and the Elements of Early Vision. In M. Landy & J. A. Movshon, eds. Computational Models of Visual Processing. Cambridge: MIT Press, pp. 3–20.

Allen, E. & Triantaphillidou, S. eds., 2011. The Manual of Photography 10th ed., Oxford: Taylor & Francis.

Andrews, P. et al., 2006. Raw Workflow from Capture to Archives: A Complete Digital Photographer’s Guide to Raw Imaging, Oxford: Focal Press.

Anon, 2012. 3D integral image. Howseography. Available at: https://www.youtube.com/watch?v=lVgy1X5gkII [Accessed December 12, 2014].

Anon, 2015a. Composing for Depth, America: Lytro Inc. Available at: https://www.lytro.com/learn/videos/composing-for-depth.php.

Anon, 2015b. Connecting Depth to Living Pictures, America: Lytro Inc. Available at: https://www.lytro.com/learn/videos/connecting-depth-to-living-pictures.php.

Anon, 2014a. Depth Composition Features: The Refocusable Range. Available at: http://manuals.lytro.com/illum/depth-composition-features/ [Accessed April 4, 2015].

Anon, 2015c. Diffraction Limited Photography: Pixel Size, Aperture and Airy Disks. Cambridge in Colour. Available at: http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm [Accessed January 20, 2015].

Anon, 2014b. First Generation Image Quality part 4: Depth Maps and Depth-Map Errors. Lytro Inc. Available at: https://support.lytro.com/hc/en-us/articles/200865170-First-Generation-Image-Quality-part-4-Depth-Maps-and-Depth-Map-Errors [Accessed April 10, 2015].

Anon, 2014c. Image Quality Factors - Imatest Documents. Imatest, p.13. Available at: http://www.imatest.com/docs/iqfactors/#sharpness [Accessed January 7, 2015].

Anon, 2015d. SFR - Imatest Modules. Imatest. Available at: http://www.imatest.com/support/modules/sfr/ [Accessed April 6, 2015].

Anon, 2015e. SFR results: Edge and MTF (Sharpness) Plot. Imatest, p.6. Available at: http://www.imatest.com/docs/sfr_mtfplot/ [Accessed April 1, 2015].

Anon, 2015f. Using SFR Part 1: Setting Up and Photographing SFR Test Charts. Imatest. Available at: http://www.imatest.com/docs/sfr_instructions/ [Accessed April 4, 2015].

Anon, 2015g. Using SFRplus Part 3: Imatest SFR Plus Results. Imatest, p.18. Available at: http://www.imatest.com/docs/sfrplus_instructions3/#edgerough [Accessed April 1, 2015].

Anon, J., 2014. Using LYTRO ILLUM: A Guide to Creating Great Living Pictures, San Francisco: Josh Anon.

Benton, S.A. & Bove, V.M., 2008. Holographic Imaging, Hoboken: John Wiley & Sons.

Bertalmío, M., 2014. Image Processing for Cinema, Boca Raton: CRC Press.


Besiex, Q. von, 2012. Lytro Alternative: Automatic, Intelligent Focus Bracketing. Quinxy. Available at: http://quinxy.com/technology/lytro-alternative-automatic-focus-bracketing/ [Accessed March 22, 2015].

Bilissi, E. & Langford, M., 2013. Langford’s Advanced Photography, London: CRC Press.

Bishop, T.E. & Favaro, P., 2009. Plenoptic Depth Estimation From Multiple Aliased Views. IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), pp.1622 – 1629.

Boominathan, V., Mitra, K. & Veeraraghavan, A., 2014. Improving resolution and depth-of-field of light field cameras using a hybrid imaging system. 2014 IEEE International Conference on Computational Photography (ICCP), (3), pp.1–10.

Brown, B., 2014. The Filmmaker’s Guide to Digital Imaging: For Cinematographers, Digital Imaging Technicians, and Camera Assistants, London: CRC Press.

Cho, D. et al., 2013. Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction. Proceedings of the IEEE International Conference on Computer Vision, pp.3280–3287.

Crisp, S., 2014. Lytro Reveals the Professional-Grade Illum Light Field Camera. Gizmag. Available at: http://www.gizmag.com/lytro-illum-light-field-camera/31742/ [Accessed March 22, 2015].

Cristobal, G., Schelkens, P. & Thienpont, H. eds., 2013. Optical and Digital Image Processing: Fundamentals and Applications, Hoboken: John Wiley & Sons. Available at: https://books.google.com/books?id=upooPUQLIA0C&pgis=1 [Accessed December 12, 2014].

Daly, D., 2000. Microlens Arrays, London: CRC Press.

Devlin, B., 2015. Email interview by Zee Khattak, April 10.

Dillard, T., 2008. Raw Pipeline: Revolutionary Techniques to Streamline Digital Photo Workflow, New York: Lark Books.

Elkins, D.E., 2013. The Camera Assistant’s Manual 5th ed., Oxford: Taylor & Francis.

Fatahalian, K., 2011. Lecture 19: Depth Cameras (continuing theme: Computational Photography). Graphics and Imaging Architectures, (Fall), p.30.

Galstian, T. V., 2013. Smart Mini-Cameras, Boca Raton: CRC Press.

Georgeiv, T. et al., 2006. Spatio-Angular Resolution Tradeoff in Integral Photography. Eurographics Symposium on Rendering, p.10.

Georgiev, T. & Lumsdaine, A., 2009. Depth of Field in Plenoptic Cameras. Eurographics, (1), pp.5–8.

Goldstein, D.B., 2009. Physical Limits in Digital Photography. Northlight Images. Available at: http://www.northlight-images.co.uk/article_pages/guest/physical_limits_long.html [Accessed March 25, 2015].


Harris, M., 2012. Light-Field Photography Revolutionizes Imaging. IEEE Spectrum. Available at: http://spectrum.ieee.org/consumer-electronics/gadgets/lightfield-photography-revolutionizes-imaging [Accessed April 1, 2015].

Hession, M., 2014. Lytro’s Light Field Tech Is Amazing, But Is It Already Obsolete? Reframe, p.2. Available at: http://reframe.gizmodo.com/lytros-light-field-tech-is-amazing-but-is-it-already-o-1566058802 [Accessed October 25, 2014].

Hirsch, R., 2013. Exploring Color Photography Fifth Edition: From Film to Pixels, Oxford: Taylor & Francis.

Hirsch, R., 2012. Light and Lens: Photography in the Digital Age 2nd ed., Oxon: CRC Press.

Ippolito, J.A., 2003. Understanding Digital Photography, New York: Cengage Learning.

Jarabo, A. et al., 2012. How Do People Edit Light Fields?, Rome.

Javidi, B. & Okano, F. eds., 2002. Three-Dimensional Television, Video, and Display Technologies, New York: Springer Science & Business Media.

Lee, R., 2014. Review: Do you need a Lytro Illum? Techly. Available at: http://www.techly.com.au/2014/08/19/lytro-illum-tuba/ [Accessed January 20, 2015].

Lezano, D., 2012. The Photography Bible: A Complete Guide for the 21st Century Photographer, Devon: David & Charles.

Lu, W., Mok, W.K. & Neiman, J., 2013. 3D and Image Stitching With the Lytro Light-Field Camera, New York.

Lueder, E., 2011. 3D Displays, Chichester: John Wiley & Sons.

Lukac, R. ed., 2010. Computational Photography: Methods and Applications, Boca Raton: CRC Press.

Lumsdaine, A., 2012. Color Demosaicing in Plenoptic Cameras, California.

Lumsdaine, A. & Georgiev, T., 2009. The Focused Plenoptic Camera, Bloomington; San Jose.

Milanfar, P. ed., 2010. Super-Resolution Imaging, Boca Raton: CRC Press.

Murphy, D.B., 2002. Fundamentals of Light Microscopy and Electronic Imaging, New York: John Wiley & Sons.

Nasse, H.H., 2008. How to Read MTF Curves. Carl Zeiss - Camera Lens Division, (December), p.33.

Ng, R., 2006. Digital Light Field Photography. Stanford University.

Ng, R. et al., 2005. Light Field Photography with a Hand-held Plenoptic Camera, Stanford. Available at: http://www.eng.tau.ac.il/~ipapps/Supplement/[ 2005 ] Light Field Photography with a Hand-held Plenoptic Camera.pdf.

Northrup, T., 2015. Lytro Illum Review, America: Chelsea & Tony. Available at: https://www.youtube.com/watch?v=0JbERFPWNyU.

Patel, N., 2012. Reverse Engineering the Lytro .LFP File Format. Eclecticc. Available at: http://eclecti.cc/computervision/reverse-engineering-the-lytro-lfp-file-format [Accessed December 16, 2014].

Pereira, D., 2011. The Art of HDR Photography, David Pereira.

Peres, M.R. ed., 2013. The Focal Encyclopedia of Photography 4th ed., Oxford: Taylor & Francis.

Perwaß, C. & Wietzke, L., 2012. Single Lens 3D-Camera with Extended Depth-of-Field. Proceedings of SPIE, 49(431), p.15. Available at: http://link.aip.org/link/PSISDG/v8291/i1/p829108/s1&Agg=doi.

Poon, T.-C., 2006. Digital Holography and Three-Dimensional Display: Principles and Applications, New York: Springer Science & Business Media.

Reichmann, M., 2009. MTF. Luminous Landscape, p.5. Available at: http://luminous-landscape.com/mtf/ [Accessed March 3, 2015].

Richardson, M., 2013. Techniques and Principles in Three-Dimensional Imaging: An Introductory Approach, Pennsylvania: IGI Global.

Roland, J., 2015. A Study of Slanted-Edge MTF Stability and Repeatability, America: Imatest LLC. Available at: https://www.youtube.com/watch?v=e2V2kS9_L1w [Accessed January 21, 2015].

Salesin, D. & Georgiev, T., 2011. GPU Technology Conference: Adobe Shows off Plenoptic Lenses, USA: Adobe.

Salomon, D., Motta, G. & Bryant, D., 2007. Data Compression: The Complete Reference 4th ed., London: Springer Science & Business Media.

Savov, V., 2010. Adobe Shows off Plenoptic Lenses that Let You Refocus an Image After It’s Taken. Engadget. Available at: http://www.engadget.com/2010/09/23/adobe-shows-off-plenoptic-lenses-that-let-you-refocus-an-image-a/ [Accessed March 22, 2015].

Son, J.Y. et al., 2008. Image-Forming Principle of Integral Photography. IEEE/OSA Journal of Display Technology, 4(3), pp.324–331.

Story, D., 2008. The Digital Photography Companion, Massachusetts: O’Reilly Media, Inc.

Sumner, R., 2014. Processing RAW Images in MATLAB, pp.1–15.

Tao, M.W. et al., 2013. Depth from Combining Defocus and Correspondence Using Light-Field Cameras. 2013 IEEE International Conference on Computer Vision, 2, pp.673–680.

Taylor, D., 2012. Understanding RAW Photography, London: Ammonite Press.

Tellman, S., 2003. Lytro, Light Fields, and The Future of Photography, America: Lytro Inc.

Theobalt, C., Koch, R. & Kolb, A., 2013. Time-of-Flight and Depth Imaging: Sensors, Algorithms, and Applications. Dagstuhl 2012 Seminar on Time-of-Flight Imaging and GCPR 2013 Workshop on Imaging New Modalities, Berlin: Springer.

Vaish, V. & Adams, A., 2008. The (New) Stanford Light Field Archive. Stanford. Available at: http://lightfield.stanford.edu/lfs.html [Accessed December 14, 2014].

Wang, T., 2014. Lytro Illum Camera Hands On. Available at: https://www.youtube.com/watch?v=CECoSeL79Wc [Accessed March 16, 2015].

Weston, C. & Coe, C., 2013. Creative DSLR Photography: The Ultimate Creative Workflow Guide, Oxford: CRC Press.

Wetzstein, G. et al., 2011. Computational Plenoptic Imaging. Computer Graphics Forum, 30(8), pp.2397–2426.

Wilburn, B.S., 2001. Light Field Video Camera, California.

Williams, E.A. et al. eds., 2007. National Association of Broadcasters Engineering Handbook 10th ed., Oxford: Taylor & Francis.

Zhang, C. & Chen, T., 2006. Light Field Sampling, San Rafael: Morgan & Claypool Publishers.

9 Bibliography  

Anon, 2012. A Guide to Smoother Digital Workflows in Television. The Digital Production Partnership. Available at: https://aapt.com.au/sites/default/files/pdf/DPP_Bloodless_Revolution.pdf [Accessed March 1, 2015].

Anon, 2015. Camera Lens Quality: MTF, Resolution & Contrast. Cambridge In Colour, p.7. Available at: http://www.cambridgeincolour.com/tutorials/lens-quality-mtf-resolution.htm [Accessed March 25, 2015].

Cardinal, D., 2011. Lytro: It’s focusing on the wrong problem. Extreme Tech. Available at: http://www.extremetech.com/extreme/101489-lytro-its-focusing-on-the-wrong-problem [Accessed March 17, 2015].

Chellappa, R. & Theodoridis, S., 2013. Academic Press Library in Signal Processing: Image, Video Processing and Analysis, Hardware, Audio, Acoustic and Speech Processing, Oxford: Academic Press.

Chen, H.H., 2009. Research on Light Field Camera and Music Emotion Recognition. 2009 IEEE International Conference on Multimedia and Expo, pp.1558–1559.

Choudhury, B., Singla, D. & Chandran, S., 2006. Real-Time Camera Walks Using Light Fields. Computer Vision, Graphics and Image Processing, 1, pp.321–332.

Drazic, V., 2010. Optimal depth resolution in plenoptic imaging. 2010 IEEE International Conference on Multimedia and Expo, ICME 2010, pp.1588–1593.

Georgiev, T. et al., 2013. Lytro Camera Technology: Theory, Algorithms, Performance Analysis. C. G. M. Snoek et al., eds., 5(1), p.10.

Georgiev, T. & Lumsdaine, A., 2009a. Depth of Field in Plenoptic Cameras. Eurographics, (1), pp.5–8.

Georgiev, T. & Lumsdaine, A., 2009b. Superresolution with Plenoptic Camera 2.0. Adobe Systems Incorporated, Tech. Rep, 2009(April), pp.1–9.

Georgiev, T. & Lumsdaine, A., 2012. The Multi-Focus Plenoptic Camera.

Georgiev, T.G., 2012. New Light Field Camera Designs.

Hainich, R.R., 2009. The End of Hardware, 3rd Edition: Augmented Reality and Beyond, South Carolina: Booksurge.

Han, H., Kang, M. & Sohn, K., 2010. Lens Simulation with Light Field Camera. Digest of Technical Papers International Conference on Consumer Electronics (ICCE), pp.355–356.

Kamal, M.H., Golbabaee, M. & Vandergheynst, P., 2012. Light Field Compressive Sensing in Camera Arrays. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp.5413–5416.

Keelan, B., 2002. Handbook of Image Quality: Characterization and Prediction, New York: CRC Press.

Li, S., Liu, C. & Wang, Y. eds., 2014. Pattern Recognition: 6th Chinese Conference, CCPR 2014, Changsha, China, November 17-19, 2014. Proceedings, Part 2, Berlin: Springer.

Lister, M. ed., 2013. The Photographic Image in Digital Culture, Oxon: Taylor & Francis.

Masia, B., Jarabo, A. & Gutierrez, D., 2005. Favored Workflows in Light Field Editing, Zaragoza.

McAndrew, A., Wang, J.H. & Tseng, C.S., 2010. Introduction to Digital Image Processing with MATLAB, Singapore: Cengage Learning Asia Pte Limited.

Ng, R., 2011. Inside the Lytro Camera, and the Start-up’s 3D Future. Smart Planet. Available at: https://www.youtube.com/watch?v=FH57z_goJ9U [Accessed March 16, 2015].

Pizzi, S. & Jones, G., 2014. A Broadcast Engineering Tutorial for Non-Engineers, Boca Raton: CRC Press.

Raghavendra, R. et al., 2013. Multi-face Recognition at a Distance Using Light-Field Camera. 2013 Ninth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, (1), pp.346–349.

Raskar, R. & Tumblin, J., 2014. Computational Photography: Mastering New Techniques for Lenses, Lighting, and Sensors, Massachusetts: A K Peters, Limited.

Reddy, D., Bai, J. & Ramamoorthi, R., 2013. External Mask Based Depth and Light Field Camera. 2013 IEEE International Conference on Computer Vision Workshops, pp.37–44.

Rouse, A., 2007. Understanding Raw Photography, London: Photographers Institute Press.

Shao, L. ed., 2014. Computer Vision and Machine Learning with RGB-D Sensors, New York: Springer.

Tulyakov, S., Lee, T.H. & Han, H., 2013. Quadratic Formulation of Disparity Estimation Problem for Light-field Camera. 2013 IEEE International Conference on Image Processing, pp.2063–2067.

Wietzke, L. & Perwass, C., 2011. Raytrix 3D-Focus Cameras: 3D-Focus Camera Technology: One Camera, One Lens, One Shot, Kiel.

Wolf, J., 2013. How to Take Apart a Lytro Camera. Jason Wolf Channel. Available at: https://www.youtube.com/watch?v=fSmhJWNI8Dk [Accessed March 16, 2015].

Zubrzycki, J., 2012. Challenges and Solutions in Broadcast Archives. BBC Research and Development, p.38. Available at: http://www.dpconline.org/component/docman/doc_download/581-soundavision-zybrycki [Accessed March 1, 2015].

10 Appendices  

10.1 Bruce Devlin Full Interview – 10-04-15

The chief media scientist at Dalet Digital Media Systems was interviewed for his opinions on light field technology:

How would light field images affect your existing workflows if they were to be adopted?

They would increase the complexity of the automation tools. For example, the trend is to "commit" to the final image as late as possible in a workflow. You could imagine that an editor using a light field source would export an EDL with all the traditional timeline and compositional elements, as well as new elements to position the 2D image in x, y, z and possibly other controls like depth of field. This might be rendered by a transcoder to make a final 2D image for the end-user. The control surface for the editor, transcoder and any other rendering engine would have to be compatible with the light field created by the sensor. It's just complexity that needs to be managed, but scaling that complexity for different tools to interoperate is a real barrier to deploying light fields in any high-volume workflows.
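Devlin's "commit late" example – an EDL carrying traditional timeline elements plus x, y, z positioning and depth-of-field controls, rendered by a downstream transcoder – can be sketched as a data structure. The following is a minimal illustrative sketch only; every field name and value here is hypothetical and not drawn from any real EDL or light field standard:

```python
from dataclasses import dataclass


@dataclass
class LightFieldEDLEvent:
    """One hypothetical 'commit-late' EDL event for a light field clip.

    Carries the traditional timeline fields plus the extra rendering
    controls Devlin describes; a transcoder would read these to render
    the final 2D image for the end-user.
    """
    clip_id: str
    record_in: str            # timecode on the programme timeline
    record_out: str
    source_in: str            # timecode within the light field source
    source_out: str
    # Light-field-specific controls, applied only at render time:
    focus_depth_m: float      # virtual focal plane distance in metres
    f_number: float           # simulated aperture, i.e. depth of field
    view_offset_xyz: tuple    # (x, y, z) shift of the virtual viewpoint


event = LightFieldEDLEvent(
    clip_id="LF_0042",
    record_in="01:00:10:00", record_out="01:00:14:12",
    source_in="00:00:02:00", source_out="00:00:06:12",
    focus_depth_m=3.5, f_number=2.0,
    view_offset_xyz=(0.0, 0.0, 0.01),
)
print(event)
```

Because the focus and viewpoint parameters travel as metadata rather than being baked into pixels, every downstream tool (editor, transcoder, rendering engine) must agree on their meaning – which is precisely the interoperability burden Devlin identifies.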

What in your opinion is the biggest problem preventing light field images from widespread use?

The biggest problem is the way that a rendered 2D image looks. I haven't yet seen a light field image on a 55 inch screen (or on a 10 inch screen viewed up close) that competes with a modestly priced HD camera. To quote my teenage children – they look a bit funny, but they aren't able to express what is funny about the image. I can see their use when that is the creative look that you're aiming for, but for general use, there is still a cost-performance hurdle to climb.

Do you know of any imaging companies that have already adopted this technology? Are you aware of Adobe's developments of 100+ megapixel plenoptic cameras (similar to Lytro's light field cameras), and do you see this technology making a significant impact?

I am aware of research effort but not of any commercial movies, TV shows or high-volume titles that have adopted this style of shooting.

Do you see the current limitations of the technology being overcome in the future for widespread use; if so, how long do you expect this to take?

I think the current price-performance limitations could be overcome with sufficient time and investment, providing there is a "killer application" that will provide the cash to recoup the investment. Maybe the humble mobile phone is that killer app. If you have the energy and storage to shoot the "scene" and then figure out what you wanted to shoot later, then virtually anyone could become a director of photography. I think we're a few years from that, though.

Do you predict that this technology may become a hybrid with existing digital cameras (DSLRs), or would you expect DSLRs to remain unaffected?

I think that top-end DSLRs will be unaffected because the pros and purists use them as a tool to create a desired effect. I suspect there will be a few specialist DSLRs at the lower end of the scale to appeal to early adopters, so that the "killer app" has a good chance of appearing.

What did you mean when you described the artefacts as 'un-naturalistic'; was this in reference to depth rendering problems of light field images, or general resolution limitations, or both?

Both. We (consumers) have been conditioned to a certain look in our photography and cinematography over recent years. We naturally correlate the different distortions and come to expect them (J.J. Abrams' lens flare, for example, to make Star Trek look real). Watching Lytro clips (https://www.youtube.com/watch?v=iYtj41s8iZQ) I can see things going on that don't happen quite as they do in real life – for example, in the YouTube clip, when the focus is pulled along the railing, the tree on the left looks wrong for reasons I can't quite explain. Maybe this can all be fixed in post with better simulated lens physics and more pixels / dynamic range at the input. Today, my old eyes tell me that the pictures aren't quite right. Maybe my kids will buy light field cameras and they will become accustomed to the look and will rebel against the traditional look.

What in your opinion is the greatest appeal of light field images? How would you rate them against conventional image formats?

For me the biggest appeal is the "fix it in post" appeal. I can see that for difficult sports, events and special-effects footage, a light field camera of sufficient resolution, dynamic range and at the right price could capture the story in a way that would ensure you always got the best results. No more blurred shots of unexpected events at the back of the crowd, because the light field camera could have its focus pulled after the fact.

10.2 Image Test Lab Photographs

Figure 10-1 – Image test lab set up with lighting.

Figure 10-2 – Measuring tape for distance measurements.

Figure 10-3 – Lytro Illum front.

Figure 10-4 – Lytro Illum side.

Figure 10-5 – Photographing the test chart.

Figure 10-6 – Logging the results.

10.3 Dissertation Log Book

06/10/14 – Rob

Presented initial ideas for project (see sheet).

He told me that I could push it a lot further – not just comparison. Image processing, algorithms – above and beyond just pixels, into code.

Image processing software: Leena, Gimp, Lateck, Lumix.

Academic papers – Google Scholar.

Take apart cameras – data tests. Not just of existing light field technology – but can this effect be recreated using just normal digital cameras – how theoretically can it be made. Adding data.

Contacts: Jerry Foss, Ian Williams, Greg Hough, Sam Smith – work in conjunction with – tell me where to be looking, what to read up on. Level 1 image processing research.

08/10/14 – Jerry

I mentioned Rob's suggestion – Jerry said he has heard of someone recreating the lf file – only works with Chrome (he could not remember exactly who).

He suggested taking a PRODUCTION WORKFLOW approach to my project – a lot of movement in the file-based workflow (based on METADATA). The capturing process of images/video – everything that occurs in the production stage.

BBC Research + Development – looking into light field cameras.

Metadata + workflow – send Jerry email regarding slides for digital workflow + slides for contacts list.

Idea to compare and contrast current digital workflows with that of the light field workflow – e.g. a digital camera workflow compared with a Lytro.

AVID – suites are workflow. How do you think light field capture would be integrated into such a system? CONTACT them – is this something they have considered?

Talk to people in the industry surrounding digital workflows – BRUCE DEVLIN – google his name and mention Jerry. CTO of company Amberfin (acquired by Dalet – French company). Ask if they are working on anything that absorbs light field into their current workflows. Mention metadata link.

'Bruce's shorts' – sign up for on the website – series on aspects of the workflow industry.

01/11/14 – Andy White VIVA

Matlab – Ian Williams

DMT students – digital image processing

Next 3–4 weeks – see how this goes

Tasks/schedule – too rigid, plan ahead – pre-work for each/scoping objective

Camera – outline options. Rent/buy/borrow. BBC? Ask Jerry

02/11/14 – Jerry

Signal processing analysis – (data into report) cite all experiments + report as appendix

Borrow Lytro – hires + loans – looking at images during capture process, file formats

Mention you will first do a superficial analysis of the quality

Lytro developing all the time – comparison resolution, sensor, diffraction on sensor, DOF important for depth information discussion

How it does this – possible developments to ASTRO PHOTOGRAPHY

BBC – looking for convenience in capturing a range of usable footage for video – documentary, news – feasibility of using it for film? File output of Lytro into workflow. Information BLUE ROOM – play around with technology R&D area

THEN talk to Bruce – what does he think is the feasibility of doing this?

Literature review – find things that disagree. Talk about your opinions – weigh up pros + cons

05/12/14 – Jerry

Depth testing – distance markers. Test res for each depth.

Focus on something in front of chart.

Res behind + in front ranges.

Results – mention in report how it hasn't been done, difficult to say for sure.

Article – authors.
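The depth-test plan noted in this entry (measure resolution at distance markers in front of and behind the focused chart) could be logged with a short script like the one below. The marker distances and sharpness figures are placeholder values for illustration only, not measured results:

```python
# Hypothetical sharpness log (e.g. MTF50 in cycles/pixel) at distance
# markers placed in front of and behind the focused test chart.
focus_distance_m = 2.0
measurements = {
    1.0: 0.18,  # metres from camera -> placeholder sharpness value
    1.5: 0.21,
    2.0: 0.24,  # the focused chart itself
    2.5: 0.22,
    3.0: 0.19,
}

for distance, sharpness in sorted(measurements.items()):
    if distance < focus_distance_m:
        side = "front"
    elif distance == focus_distance_m:
        side = "focus"
    else:
        side = "behind"
    print(f"{distance:.1f} m ({side}): MTF50 = {sharpness:.2f}")

# The sharpest marker should sit at (or near) the focal plane.
best_distance = max(measurements, key=measurements.get)
print(f"Peak sharpness at {best_distance:.1f} m")
```

Tabulating sharpness against marker distance in this way makes the front/behind refocus ranges directly comparable between the Lytro Illum and a conventional camera.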

22/02/15 – Andy

Testing the lens capabilities – focus on the lens only.

Talk to Jay – additional lenses, many focal points – extending the lens.

15/05/15 – Jerry, Andy, Jay

Geometric distortion test chart.

Testing lens – move camera – evenness of exposure throughout ranges.

Microlens picks up reflections; chart to be in different part of field.

Verify depth of field guide for hypothesis.

Rig: test/measure diffraction of smaller sub-apertures. How does this compare to conventional?

Test different lighting colour temperature conditions – 400–700 nm

Comparing DOF limits

10.4 Personal In-Field Photography Using the Lytro Illum

Figure 10-7 – Refocusing within Lytro Desktop – effective light field images use depth composition of subjects, which the viewer can refocus to discover new elements of the story.

Figure 10-8 – Testing the range of the Lytro Illum's refocusability.

10.5 Light Field Image Upload & Storage

Figure 10-9 – Depth mapping problems around subject.

Figure 10-10 – Processing must be done for each image during/after uploading to Lytro Desktop, making it a lengthy process to view the final images on a computer.

Figure 10-11 – 41.11 GB total for Lytro Desktop with 389 light field images stored.

Figure 10-12 – Multiple files stored for a single light field capture, equalling 130.6 MB.
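As a rough sanity check on the storage figures in Figures 10-11 and 10-12, the average on-disk footprint per capture can be computed. This is an illustrative calculation that assumes the 41.11 GB total is reported in binary gibibytes; with decimal gigabytes the average is closer to 106 MB:

```python
# Average stored footprint per light field capture in Lytro Desktop,
# from the library totals reported in Figure 10-11.
total_gib = 41.11      # reported library size
num_images = 389       # light field images stored

avg_mib = total_gib * 1024 / num_images
print(f"Average per capture: {avg_mib:.1f} MiB")
```

The resulting average (roughly 108 MB per capture) is below the 130.6 MB measured for the single capture in Figure 10-12, which suggests the file sets stored per capture vary in size.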