
Computational Photography Tutorial @ ICIP 2016
List of references

Mohit Gupta and Jean-François Lalonde
vision.gel.ulaval.ca/~jflalonde/projects/compphototutorial/

1 Coded photography

Object Side Coding

[1] H. Du, X. Tong, X. Cao, and S. Lin. “A prism-based system for multispectral video acquisition”. In: Computer Vision, 2009 IEEE 12th International Conference on. Sept. 2009, pp. 175–182.

[2] T. Georgeiv, K. C. Zheng, B. Curless, D. Salesin, S. K. Nayar, and C. Intwala. “Spatio-Angular Resolution Tradeoff in Integral Photography”. In: Eurographics Symposium on Rendering. 2006, pp. 263–272.

[3] S. Kuthirummal and S. K. Nayar. “Multiview Radial Catadioptric Imaging for Scene Capture”. In: ACM Trans. Graph. 25.3 (July 2006), pp. 916–923.

[4] R. Raskar, A. Agrawal, and J. Tumblin. “Coded Exposure Photography: Motion Deblurring Using Fluttered Shutter”. In: ACM Trans. Graph. 25.3 (July 2006), pp. 795–804.

[5] Y. Schechner and S. Nayar. “Generalized Mosaicing: Polarization Panorama”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 27.4 (Apr. 2005), pp. 631–636.

[6] Y. Schechner and N. Karpel. “Clear underwater vision”. In: Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on. Vol. 1. June 2004.

[7] Y. Schechner and S. Nayar. “Generalized Mosaicing: High Dynamic Range in a Wide Field of View”. In: International Journal of Computer Vision 53.3 (July 2003), pp. 245–267.

[8] Y. Schechner and S. Nayar. “Generalized Mosaicing: Wide Field of View Multispectral Imaging”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 24.10 (Oct. 2002), pp. 1334–1348.

[9] J. Gluckman and S. Nayar. “Rectified catadioptric stereo sensors”. In: Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on. Vol. 2. 2000, pp. 380–387.

[10] D. H. Lee, I. S. Kweon, and R. Cipolla. “Single Lens Stereo with a Biprism”. In: Proceedings of the IAPR International Workshop on Machine Vision and Applications. 1998, pp. 136–139.

[11] J. S. Chahl and M. V. Srinivasan. “Reflective surfaces for panoramic imaging”. In: Appl. Opt. 36.31 (Nov. 1997), pp. 8275–8285.

[12] S. Peleg and J. Herman. “Panoramic mosaics by manifold projection”. In: Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on. June 1997, pp. 338–343.

[13] K. Yamazawa, Y. Yagi, and M. Yachida. “Omnidirectional imaging with hyperboloidal projection”. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems. Vol. 2. July 1993, pp. 1029–1034.


[14] L. B. Wolff and T. E. Boult. “Constraining object features using a polarization reflectance model”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 13.7 (July 1991), pp. 635–657.

Pupil (Aperture) Plane Coding

[1] O. Cossairt, C. Zhou, and S. Nayar. “Diffusion Coded Photography for Extended Depth of Field”. In: ACM Trans. Graph. 29.4 (July 2010), 31:1–31:10.

[2] A. Levin, S. W. Hasinoff, P. Green, F. Durand, and W. T. Freeman. “4D Frequency Analysis of Computational Cameras for Depth of Field Extension”. In: ACM Trans. Graph. 28.3 (July 2009), 97:1–97:14.

[3] Y. Bando, B.-Y. Chen, and T. Nishita. “Extracting Depth and Matte Using a Color-filtered Aperture”. In: ACM Trans. Graph. 27.5 (Dec. 2008), 134:1–134:9.

[4] C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen. “Programmable Aperture Photography: Multiplexed Light Field Acquisition”. In: ACM Trans. Graph. 27.3 (Aug. 2008), 55:1–55:10.

[5] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. “Image and Depth from a Conventional Camera with a Coded Aperture”. In: ACM Trans. Graph. 26.3 (July 2007).

[6] S. W. Hasinoff and K. N. Kutulakos. “Confocal stereo”. In: Proc. ECCV. Springer, 2006, pp. 620–634.

[7] M. Aggarwal and N. Ahuja. “Split aperture imaging for high dynamic range”. In: Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on. Vol. 2. 2001, pp. 10–17.

[8] E. R. Dowski and W. T. Cathey. “Extended depth of field through wave-front coding”. In: Appl. Opt. 34.11 (Apr. 1995), pp. 1859–1866.

[9] A. P. Pentland. “A New Sense for Depth of Field”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-9.4 (July 1987), pp. 523–531.

Focal (Image) Plane Coding

[1] G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl. “Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging”. In: Nature Methods 7 (2010).

[2] J. Gu, Y. Hitomi, T. Mitsunaga, and S. Nayar. “Coded Rolling Shutter Photography: Flexible Space-Time Sampling”. In: IEEE International Conference on Computational Photography (ICCP). Mar. 2010.

[3] M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan. “Flexible Voxels for Motion-Aware Videography”. In: Proc. European Conference on Computer Vision. 2010.

[4] S. Kuthirummal, H. Nagahara, C. Zhou, and S. Nayar. “Flexible Depth of Field Photography”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 99 (Mar. 2010).

[5] H. Nagahara, S. Kuthirummal, C. Zhou, and S. Nayar. “Flexible Depth of Field Photography”. In: European Conference on Computer Vision (ECCV). Oct. 2008.


[6] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin. “Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing”. In: ACM Trans. Graph. 26.3 (July 2007).

[7] S. G. Narasimhan and S. K. Nayar. “Enhancing Resolution Along Multiple Imaging Dimensions Using Assorted Pixels”. In: IEEE Trans. Pattern Anal. Mach. Intell. 27.4 (Apr. 2005), pp. 518–530.

[8] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan. “Light Field Photography with a Hand-held Plenoptic Camera”. Technical Report CSTR 2005-02. Stanford, CA: Stanford Computer Science Department, 2005.

[9] T. Naemura, T. Yoshida, and H. Harashima. “3-D computer graphics based on integral photography”. In: Opt. Express 8.4 (Feb. 2001), pp. 255–262.

[10] E. Adelson and J. Wang. “Single lens stereo with a plenoptic camera”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 14.2 (Feb. 1992), pp. 99–106.

[11] G. Hausler. “A method to increase the depth of focus by two step image processing”. In: Optics Communications (1972), pp. 38–42.

Illumination Coding

[1] Leica-Geosystems. Pulsed LIDAR Sensor. http://www.leica-geosystems.us/en/index.htm.

[2] Velodyne. Pulsed LIDAR Sensor. http://www.velodynelidar.com/lidar/lidar.aspx.

[3] M. Gupta, S. K. Nayar, M. Hullin, and J. Martin. “Phasor Imaging: A Generalization of Correlation Based Time-of-Flight Imaging”. In: ACM Transactions on Graphics (2015).

[4] N. Matsuda, O. Cossairt, and M. Gupta. “MC3D: Motion Contrast 3D Scanning”. In: IEEE International Conference on Computational Photography. 2015.

[5] M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan. “A Practical Approach to 3D Scanning in the Presence of Interreflections, Subsurface Scattering and Defocus”. In: International Journal of Computer Vision 102.1-3 (2013), pp. 33–55.

[6] M. Gupta and S. K. Nayar. “Micro Phase Shifting”. In: Proc. IEEE CVPR. 2012.

[7] M. Gupta, Y. Tian, S. Narasimhan, and L. Zhang. “A Combined Theory of Defocused Illumination and Global Light Transport”. In: International Journal of Computer Vision 98.2 (2012), pp. 146–167.

[8] A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar. “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging”. In: Nature Communications 3, 745 (2012).

[9] J. Salvi, S. Fernandez, T. Pribanic, and X. Llado. “A state of the art in structured light patterns for surface profilometry”. In: Pattern Recognition 43.8 (2010), pp. 2666–2680.

[10] S. Zhang, D. van der Weide, and J. Oliver. “Superfast phase-shifting method for 3-D shape measurement”. In: Opt. Express 18.9 (2010), pp. 9684–9689.

[11] A. Kirmani, T. Hutchison, J. Davis, and R. Raskar. “Looking around the corner using transient imaging”. In: IEEE ICCV. 2009.


[12] D. Lanman, D. Crispell, and G. Taubin. “Surround Structured Lighting: 3-D Scanning with Orthographic Illumination”. In: Comput. Vis. Image Underst. 113.11 (2009), pp. 1107–1117.

[13] L. Zhang and S. Nayar. “Projection Defocus Analysis for Scene Capture and Image Display”. In: ACM Trans. Graph. 25.3 (2006), pp. 907–915.

[14] J. Salvi, J. Pagès, and J. Batlle. “Pattern codification strategies in structured light systems”. In: Pattern Recognition 37.4 (2004), pp. 827–849.

[15] D. Scharstein and R. Szeliski. “High-accuracy stereo depth maps using structured light”. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Vol. 1. 2003.

[16] L. Zhang, B. Curless, and S. M. Seitz. “Spacetime Stereo: Shape Recovery for Dynamic Scenes”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2003, pp. 367–374.

[17] S. Rusinkiewicz, O. Hall-Holt, and M. Levoy. “Real-time 3D Model Acquisition”. In: ACM Trans. Graph. 21.3 (2002), pp. 438–446.

[18] R. Lange and P. Seitz. “Solid-State time-of-flight range camera”. In: IEEE J. Quantum Electronics 37.3 (2001).

[19] R. Lange. “3D time-of-flight distance measurement with custom solid-state image sensors in CMOS-CCD-technology”. PhD thesis. 2000.

[20] M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, and D. Fulk. “The Digital Michelangelo Project: 3D Scanning of Large Statues”. In: SIGGRAPH. 2000, pp. 131–144.

[21] J.-Y. Bouguet and P. Perona. “3D photography on your desk”. In: Proc. IEEE International Conference on Computer Vision. 1998, pp. 43–50.

[22] E. Horn and N. Kiryati. “Toward optimal structured light patterns”. In: International Conference on Recent Advances in 3-D Digital Imaging and Modeling. 1997, pp. 28–35.

[23] T. Kanade, A. Gruss, and L. Carley. “A very fast VLSI rangefinder”. In: IEEE International Conference on Robotics and Automation. 1991, pp. 1322–1329.

[24] I. Moring, T. Heikkinen, R. Myllyla, and A. Kilpela. “Acquisition of Three-Dimensional Image Data by a Scanning Laser Range Finder”. In: Optical Engineering 28.8 (1989).

[25] K. Sato and S. Inokuchi. “3D surface measurement by space encoding range imaging”. In: Journal of Robotic Systems 2.1 (1985), pp. 27–39.

[26] T. C. Strand. “Optical Three-Dimensional Sensing for Machine Vision”. In: Optical Engineering 24.1 (1985).

[27] S. Inokuchi, K. Sato, and F. Matsuda. “Range imaging system for 3-D object recognition”. In: International Conference on Pattern Recognition. 1984, pp. 806–808.

[28] J. L. Posdamer and M. D. Altschuler. “Surface measurement by space-encoded projected beam systems”. In: Computer Graphics and Image Processing 18.1 (1982), pp. 1–17.

[29] D. E. Smith. “Electronic Distance Measurement for Industrial and Scientific Applications”. In: Hewlett-Packard Journal 31.6 (1980).

[30] G. Mamon, D. G. Youmans, Z. G. Sztankay, and C. E. Mongan. “Pulsed GaAs laser terrain profiler”. In: Appl. Opt. 17.6 (1978), pp. 868–877.

[31] G. J. Agin and T. O. Binford. “Computer Description of Curved Objects”. In: IEEE Trans. Comput. 25.4 (1976), pp. 439–449.


[32] J. M. Payne. “An Optical Distance Measuring Instrument”. In: Review of Scientific Instruments 44.3 (1973), pp. 304–306.

[33] P. M. Will and K. S. Pennington. “Grid coding: A novel technique for image processing”. In: Proceedings of the IEEE 60.6 (1972), pp. 669–680.

[34] Y. Shirai and M. Suwa. “Recognition of Polyhedrons with a Range Finder”. In: Proceedings of the International Joint Conference on Artificial Intelligence. 1971, pp. 80–87.

[35] W. Koechner. “Optical ranging system employing a high power injection laser diode”. In: IEEE Trans. Aerospace and Electronic Systems 4.1 (1968).

[36] B. S. Goldstein and G. F. Dalrymple. “Gallium arsenide injection laser radar”. In: Proc. of the IEEE 55.2 (1967), pp. 181–188.

Surveys

[1] S. K. Nayar. Computational Cameras: Approaches, Benefits and Limits. Tech. rep. Jan. 2011.

[2] C. Zhou and S. K. Nayar. “Computational Cameras: Convergence of Optics and Processing”. In: IEEE Transactions on Image Processing 20.12 (Dec. 2011), pp. 3322–3340.

[3] S. K. Nayar. “Computational Cameras: Redefining the Image”. In: IEEE Computer Magazine, Special Issue on Computational Photography (Aug. 2006), pp. 30–38.

[4] F. Blais. “Review of 20 years of range sensor development”. In: Journal of Electronic Imaging 13.1 (2004), pp. 231–243.

[5] P. Besl. “Active, optical range imaging sensors”. In: Machine Vision and Applications 1.2 (1988), pp. 127–152.

[6] R. A. Jarvis. “A Perspective on Range Finding Techniques for Computer Vision”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 5.2 (1983), pp. 122–139.


2 Augmented photography

Inverting the imaging pipeline

[1] A. Mosleh, P. Green, E. Onzon, I. Begin, and P. Langlois. “Camera Intrinsic Blur Kernel Estimation: A Reliable Framework”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2015.

[2] F. Heide, K. Egiazarian, J. Kautz, K. Pulli, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pajak, D. Reddy, O. Gallo, J. Liu, and W. Heidrich. “FlexISP: a flexible camera image processing framework”. In: ACM Transactions on Graphics 33.6 (Nov. 2014), pp. 1–13.

[3] F. Heide, M. Rouf, M. B. Hullin, B. Labitzke, W. Heidrich, and A. Kolb. “High-quality computational imaging through simple lenses”. In: ACM Transactions on Graphics 32.5 (2013), pp. 1–14.

[4] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf. “Non-stationary correction of optical aberrations”. In: International Conference on Computer Vision. IEEE, Nov. 2011, pp. 659–666.

[5] L. Zhang, X. Wu, and A. Buades. “Color demosaicking by local directional interpolation and nonlocal adaptive thresholding”. In: Journal of Electronic Imaging 20.2 (Apr. 2011).

[6] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian. “Image denoising by sparse 3D transform-domain collaborative filtering”. In: IEEE Transactions on Image Processing 16.8 (2007), pp. 1–16.

[7] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. “Image and depth from a conventional camera with a coded aperture”. In: ACM Transactions on Graphics 26.3 (July 2007), p. 70.

[8] Y. Weiss and W. Freeman. “What makes a good model of natural images?” In: IEEE Conference on Computer Vision and Pattern Recognition. 2007.

[9] B. A. Olshausen and D. J. Field. “Wavelet-like receptive fields emerge from a network that learns sparse codes for natural images.” In: Nature (1996), pp. 1–11.

Burst photography

[1] S. W. Hasinoff, J. T. Barron, and A. Adams. “Burst photography for high dynamic range and low-light imaging on mobile cameras”. In: ACM Transactions on Graphics (SIGGRAPH Asia) (2016).

[2] M. Delbracio and G. Sapiro. “Removing Camera Shake via Weighted Fourier Burst Accumulation”. In: IEEE Transactions on Image Processing 24.11 (Nov. 2015), pp. 3293–3307.

[3] A. Ito, S. Tambe, K. Mitra, A. C. Sankaranarayanan, and A. Veeraraghavan. “Compressive epsilon photography for post-capture control in digital imaging”. In: ACM Transactions on Graphics 33.4 (July 2014), pp. 1–12.

[4] Z. Liu, L. Yuan, X. Tang, M. Uyttendaele, and J. Sun. “Fast burst images denoising”. In: ACM Transactions on Graphics 33.6 (Nov. 2014), pp. 1–9.

[5] S. H. Park and M. Levoy. “Gyro-Based Multi-image Deconvolution for Removing Handshake Blur”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2014, pp. 3366–3373.


[6] E. Ringaby and P.-E. Forssen. “A virtual tripod for hand-held video stacking on smartphones”. In: IEEE International Conference on Computational Photography. IEEE, May 2014, pp. 1–9.

[7] M. Granados, B. Ajdin, M. Wand, C. Theobalt, H. P. Seidel, and H. P. A. Lensch. “Optimal HDR reconstruction with linear digital cameras”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2010, pp. 215–222.

[8] N. Joshi and M. F. Cohen. “Seeing Mt. Rainier: Lucky Imaging for multi-image denoising, sharpening, and haze removal”. In: IEEE International Conference on Computational Photography. 2010.

[9] D. G. Lowe. “Distinctive Image Features from Scale-Invariant Keypoints”. In: International Journal of Computer Vision 60.2 (2004), pp. 91–110.

[10] J. R. Janesick, T. Elliott, S. Collins, M. M. Blouke, and J. Freeman. “Scientific Charge-Coupled Devices”. In: Optical Engineering 26.8 (1987), p. 268692.

Advanced image editing

[1] C. Barnes, F.-L. Zhang, L. Lou, X. Wu, and S.-M. Hu. “PatchTable: Efficient Patch Queries for Large Datasets and Applications”. In: ACM Transactions on Graphics (2015).

[2] J. T. Barron, A. Adams, Y. Shih, and C. Hernandez. “Fast Bilateral-Space Stereo for Synthetic Defocus”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2015.

[3] M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu. “Global contrast based salient region detection”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 37.3 (2015), pp. 569–582.

[4] O. Fried, E. Shechtman, D. B. Goldman, and A. Finkelstein. “Finding Distractors In Images”. In: IEEE Conference on Computer Vision and Pattern Recognition. 2015.

[5] R. Jaroensri, S. Paris, A. Hertzmann, V. Bychkovsky, and F. Durand. “Predicting Range of Acceptable Photographic Tonal Adjustments”. In: IEEE International Conference on Computational Photography. 2015.

[6] T. Xue, M. Rubinstein, C. Liu, and W. T. Freeman. “A computational approach for obstruction-free photography”. In: ACM Transactions on Graphics 34.4 (July 2015), 79:1–79:11.

[7] C. Fang, Z. Lin, R. Mech, and X. Shen. “Automatic Image Cropping using Visual Composition, Boundary Simplicity and Content Preservation Models”. In: Proceedings of the ACM International Conference on Multimedia. New York, NY, USA: ACM Press, Nov. 2014, pp. 1105–1108.

[8] M. Son, Y. Lee, H. Kang, and S. Lee. “Art-Photographic Detail Enhancement”. In: Computer Graphics Forum 33.2 (2014), pp. 391–400.

[9] M. Wang, Y.-K. Lai, Y. Liang, R. R. Martin, and S.-M. Hu. “BiggerPicture: data-driven image extrapolation using graph matching”. In: ACM Transactions on Graphics 33.6 (Nov. 2014), pp. 1–13.

[10] K. Yamaguchi, D. McAllester, and R. Urtasun. “Efficient Joint Segmentation, Occlusion Labeling, Stereo and Flow Estimation”. In: European Conference on Computer Vision. 2014, pp. 1–16.


[11] J. Yan, S. Lin, S. B. Kang, and X. Tang. “A Learning-to-Rank Approach for Image Color Enhancement”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2014, pp. 2987–2994.

[12] J. Baek, D. Pajak, K. Kim, K. Pulli, and M. Levoy. “WYSIWYG Computational Photography via Viewfinder Editing”. In: ACM Transactions on Graphics 32.6 (2013), 198:1–198:10.

[13] N. K. Kalantari, E. Shechtman, C. Barnes, S. Darabi, D. B. Goldman, and P. Sen. “Patch-based high dynamic range video”. In: ACM Transactions on Graphics 32.6 (2013), pp. 1–8.

[14] R. Margolin, A. Tal, and L. Zelnik-Manor. “What makes a patch distinct?” In: IEEE Conference on Computer Vision and Pattern Recognition. 2013, pp. 1139–1146.

[15] J. Yan, S. Lin, S. B. Kang, and X. Tang. “Learning the Change for Automatic Image Cropping”. In: IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2013, pp. 971–978.

[16] L. Kaufman, D. Lischinski, and M. Werman. “Content-Aware Automatic Photo Enhancement”. In: Computer Graphics Forum 31.8 (Dec. 2012), pp. 2528–2540.

[17] P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman. “Robust patch-based HDR reconstruction of dynamic scenes”. In: ACM Transactions on Graphics 31.6 (2012), p. 1.

[18] Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski. “Non-rigid dense correspondence with applications for image enhancement”. In: ACM Transactions on Graphics 30.4 (2011), p. 1.

[19] B. Wang, Y. Yu, and Y.-Q. Xu. “Example-based image color and tone style enhancement”. In: ACM Transactions on Graphics 30.4 (Aug. 2011), p. 1.

[20] A. Adams, J. Baek, and M. A. Davis. “Fast high-dimensional filtering using the permutohedral lattice”. In: Computer Graphics Forum 29.2 (2010), pp. 753–762.

[21] C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein. “The generalized PatchMatch correspondence algorithm”. In: European Conference on Computer Vision. 2010.

[22] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. “PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing”. In: ACM Transactions on Graphics 28.3 (2009), p. 1.

[23] T. Judd, K. Ehinger, F. Durand, and A. Torralba. “Learning to predict where humans look”. In: IEEE International Conference on Computer Vision (2009), pp. 2106–2113.

[24] A. Buades, B. Coll, and J.-M. Morel. “A non-local algorithm for image denoising”. In: IEEE Conference on Computer Vision and Pattern Recognition. Vol. 2. 2005.

2D image, 3D scene

[1] K. Karsch, K. Sunkavalli, S. Hadap, N. Carr, H. Jin, R. Fonte, M. Sittig, and D. Forsyth. “Automatic Scene Inference for 3D Object Compositing”. In: ACM Transactions on Graphics 33.3 (2014), 32:1–32:15.

[2] N. Kholgade, T. Simon, A. Efros, and Y. Sheikh. “3D object manipulation in a single photograph using stock 3D models”. In: ACM Transactions on Graphics 33.4 (2014), 127:1–127:12.


[3] J.-F. Lalonde and I. Matthews. “Lighting Estimation in Outdoor Image Collections”. In: International Conference on 3D Vision. 2014.

[4] T. Chen, Z. Zhu, A. Shamir, S.-M. Hu, and D. Cohen-Or. “3-Sweep: extracting editable objects from a single photo”. In: ACM Transactions on Graphics 32.6 (2013), 195:1–195:10.

[5] J.-F. Lalonde, A. A. Efros, and S. G. Narasimhan. “Estimating the natural illumination conditions from a single outdoor image”. In: International Journal of Computer Vision 98.2 (2012), pp. 123–145.

[6] Y. Zheng, X. Chen, M.-M. Cheng, K. Zhou, S.-M. Hu, and N. J. Mitra. “Interactive images: cuboid proxies for smart image manipulation”. In: ACM Transactions on Graphics 31.4 (2012), 99:1–99:11.

[7] K. Karsch, V. Hedau, D. Forsyth, and D. Hoiem. “Rendering synthetic objects into legacy photographs”. In: ACM Transactions on Graphics 30.6 (2011), p. 1.

[8] V. Hedau, D. Hoiem, and D. Forsyth. “Recovering the spatial layout of cluttered rooms”. In: IEEE International Conference on Computer Vision. IEEE, Sept. 2009, pp. 1849–1856.

[9] A. Levin, A. Rav-Acha, and D. Lischinski. “Spectral matting”. In: IEEE Transactions on Pattern Analysis and Machine Intelligence 30.10 (2008), pp. 1699–1712.

[10] P. Debevec. “Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-based Graphics with Global Illumination and High Dynamic Range Photography”. In: Proceedings of ACM SIGGRAPH. 1998, pp. 189–198.
