Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Upload: michel-alves

Posted on: 19-Jan-2015

Category: Education

DESCRIPTION

Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies. In this presentation, I give an overview of the work and activities developed in the most recent disciplines and guided studies. Among these activities I can highlight the results obtained by applying the CCPD method (Capacity-Constrained Point Distributions, a variant of Lloyd's method), increased proficiency in the use of scientific tools for numerical computation, and the selection of major bibliographies for possible dissertation themes.

TRANSCRIPT

Page 1: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies

Universidade Federal do Rio de Janeiro - UFRJ - Campus Cidade Universitária - Rio de Janeiro - Ilha do Fundão, CEP: 21941-972 - COPPE/PESC/LCG

FMS :: Five Minute Speech :: An Overview of Activities Developed in Disciplines and Guided Studies :: Laboratory Seminars and Meetings :: January, 2014

Five Minute Speech - An Overview of Activities Developed in Disciplines and Guided Studies

Michel Alves dos Santos

Pós-Graduação em Engenharia de Sistemas e Computação, Universidade Federal do Rio de Janeiro - UFRJ - COPPE, Cidade Universitária - Rio de Janeiro - CEP: 21941-972

Supervising Professors: Prof. D.Sc. Ricardo Marroquim & Prof. Ph.D. Cláudio Esperança

{michel.mas, michel.santos.al}@gmail.com

January, 2014

Michel Alves dos Santos: Laboratório de Computação Gráfica - LCG Pós-Graduação em Engenharia de Sistemas e Computação - PESC

Page 2: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Introduction

- Adjustment and finalization of the computer vision project;
- Results obtained with the 'Capacity-Constrained Point Distributions' method;
- Increased proficiency in the use of the Gnuplot, Maxima and Scilab tools;
- Extension of studies on the synthesis of images (texture and noise);
- Update of the contents of the institutional page;
- Survey of bibliography and possible themes for dissertation preparation.


[Figure: accompanying sample plots; axis ticks from 0.00 to 1.00 removed. Recovered caption: Activities developed since the last meeting to date.]

Presentation Hosted on: http://www.lcg.ufrj.br/Members/malves/index

Page 3: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Capacity-Constrained Point Distributions Results

Capacity-Constrained Point Distribution - Michel Alves

Graduate Program in Systems Engineering and Computing :: Federal University of Rio de Janeiro :: UFRJ

LCG :: Laboratory of Computer Graphics :: [email protected] :: http://www.lcg.ufrj.br/Members/malves

December, 2013 - Rio de Janeiro, Brazil

Applications: stippling, HDR radiance/luminance sampling, etc.
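The description above introduces CCPD as a variant of Lloyd's method. As a point of reference only, here is a minimal sketch of the plain Lloyd iteration that CCPD modifies; the grid size, density and point count are hypothetical, and actual CCPD additionally enforces equal capacity per region through point swaps rather than centroid moves alone.

```python
import numpy as np

def lloyd_relaxation(points, density, iterations=5):
    """Plain Lloyd iteration on a discrete grid: build a discrete Voronoi
    partition (each grid cell is owned by its nearest point), then move
    every point to the density-weighted centroid of its region."""
    h, w = density.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cells = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    weights = density.ravel().astype(float)
    for _ in range(iterations):
        # Brute-force nearest-point assignment (fine for small grids).
        d2 = ((cells[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        for i in range(len(points)):
            sel = owner == i
            total = weights[sel].sum()
            if total > 0:
                points[i] = (cells[sel] * weights[sel, None]).sum(axis=0) / total
    return points

rng = np.random.default_rng(0)
density = np.ones((64, 64))             # hypothetical uniform density
pts = rng.uniform(0, 64, size=(50, 2))  # 50 random seed points
pts = lloyd_relaxation(pts, density)
```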


Page 4: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Possible Dissertation Themes

- Effectiveness of Image Quality Assessment Indexes on Detection of Structural and Nonstructural Distortions:
  - Use of image quality assessment indexes.
  - Detection of structural and nonstructural distortions.
  - Admissible levels of distortion for: noise, blocking, compression, fusion/blending, watermarking, etc.
- A Framework for Harmonic Color Measures:
  - Main objective: to introduce a quality comparison scale for color images that takes into account the "balance" or harmony of the existing sets of colors in the input model.
- Intelligent Transfer of Thematic Harmonic Color Palettes:
  - Main objective: to introduce a "smart" transfer method of harmonic color palettes based on a particular theme or color expression model.
- Fast Procedural Texture Synthesis - An Approach Based on GPU Use:
  - Fast generation of procedural textures using the parallel architecture of GPUs.


Page 5: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Effectiveness of Image Quality Assessment

WHY IS IMAGE QUALITY ASSESSMENT SO DIFFICULT?

Zhou Wang and Alan C. Bovik
Lab for Image and Video Engineering, Dept. of ECE, Univ. of Texas at Austin, Austin, TX 78703-1084
[email protected], [email protected]

Ligang Lu
IBM T. J. Watson Research Center, Yorktown Heights, NY 10598
[email protected]

ABSTRACT

Image quality assessment plays an important role in various image processing applications. A great deal of effort has been made in recent years to develop objective image quality metrics that correlate with perceived quality measurement. Unfortunately, only limited success has been achieved. In this paper, we provide some insights on why image quality assessment is so difficult by pointing out the weaknesses of the error sensitivity based framework, which has been used by most image quality assessment approaches in the literature.

Furthermore, we propose a new philosophy in designing image quality metrics: The main function of the human eyes is to extract structural information from the viewing field, and the human visual system is highly adapted for this purpose. Therefore, a measurement of structural distortion should be a good approximation of perceived image distortion. Based on the new philosophy, we implemented a simple but effective image quality indexing algorithm, which is very promising as shown by our current results.
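The abstract does not spell out the "simple but effective image quality indexing algorithm"; presumably it is the universal image quality index that the same authors later generalized into SSIM, so the following sketch should be read as an assumption rather than as the paper's code. It combines correlation, mean luminance and contrast comparison into a single score in [-1, 1]:

```python
import numpy as np

def universal_quality_index(x, y):
    """Q = 4*sxy*mx*my / ((vx+vy)*(mx^2+my^2)); Q = 1 iff x == y.
    Computed globally here for brevity; the index is normally applied
    in a sliding window and the local values are averaged."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()
    return 4 * sxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
noisy = img + np.random.default_rng(1).normal(0, 10, img.shape)
print(universal_quality_index(img, img))    # 1.0 for identical images
print(universal_quality_index(img, noisy))  # drops below 1.0 with distortion
```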

1. INTRODUCTION

Image quality measurement is crucial for most image processing applications. Generally speaking, an image quality metric has three kinds of applications:

First, it can be used to monitor image quality for quality control systems. For example, an image and video acquisition system can use the quality metric to monitor and automatically adjust itself to obtain the best quality image and video data. A network video server can use it to examine the quality of the digital video transmitted on the network and control video streaming.

Second, it can be employed to benchmark image processing systems and algorithms. Suppose we need to select one from multiple image processing systems for a specific task; then a quality metric can help us evaluate which of them provides the best quality images.

Third, it can be embedded into an image processing system to optimize the algorithms and the parameter settings. For instance, in a visual communication system, a quality metric can help the optimal design of the prefiltering and bit assignment algorithms at the encoder and the postprocessing algorithms at the decoder.

The best way to assess the quality of an image is perhaps to look at it, because human eyes are the ultimate receivers in most image processing environments. The subjective quality measurement Mean Opinion Score (MOS) has been used for many years.

This research was supported in part by IBM Corporation, Inc., Texas Instruments, Inc., and by the State of Texas Advanced Technology Program.

However, the MOS method is too inconvenient, slow and expensive for practical usage. The goal of objective image and video quality assessment research is to supply quality metrics that can predict perceived image and video quality automatically. Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) are the most widely used objective image quality/distortion metrics, but they are widely criticized as well, for not correlating well with perceived quality measurement. In the past three to four decades, a great deal of effort has been made to develop new objective image and video quality measurement approaches which incorporate perceptual quality measures by considering human visual system (HVS) characteristics [1, 2, 3, 4, 5, 6, 7, 8, 9].

Surprisingly, only limited success has been achieved. It has been reported that none of the complicated objective image quality metrics in the literature has shown any clear advantage over simple mathematical measures such as PSNR under strict testing conditions and different image distortion environments [2, 9, 10]. For example, in a recent test conducted by the Video Quality Experts Group (VQEG) in validating objective video quality assessment methods, there are eight to nine proponent models whose performance is statistically indistinguishable [2]. Unfortunately, this group of models includes PSNR.

It is worth noting that most proposed objective image quality measurement approaches share a common error sensitivity based philosophy, which is motivated by psychological vision science research, where evidence shows that human visual error sensitivities and masking effects vary in different spatial and temporal frequency and directional channels. In this paper, we try to point out the drawbacks of this framework. In addition, we propose a new philosophy for designing image quality metrics, which models image degradations as structural distortion instead of errors.

2. ERROR SENSITIVITY BASED IMAGE QUALITY MEASUREMENT

2.1. Framework of Error Sensitivity Based Methods

A typical error sensitivity based approach can be summarized as in Figure 1. Although variances exist and the detailed implementations differ between image quality assessment models, the underlying principles are the same. First, the original and test image signals are subject to preprocessing procedures, possibly including alignment, luminance transformation, and color transformation. The output is the preprocessed original and test signals. A channel decomposition method is then applied to the preprocessed signals, resulting in two sets of transformed signals for different channels. There are many choices for channel decomposition, such as the identity transform (as the simplest special case), wavelet transforms, the discrete cosine transform (DCT), and Gabor decompositions.


Visual Quality Assessment Algorithms: What Does the Future Hold?

Anush K. Moorthy · Alan C. Bovik


Abstract: Creating algorithms capable of predicting the perceived quality of a visual stimulus defines the field of objective visual quality assessment (QA). The field of objective QA has received tremendous attention in the recent past, with many successful algorithms being proposed for this purpose. Our concern here is not with the past, however; in this paper we discuss our vision for the future of visual quality assessment research.

We first introduce the area of quality assessment and state its relevance. We describe current standards for gauging algorithmic performance and define terms that we will use through this paper. We then journey through 2D image and video quality assessment. We summarize recent approaches to these problems and discuss in detail our vision for future research on the problems of full-reference and no-reference 2D image and video quality assessment. From there, we move on to the currently popular area of 3D QA. We discuss recent databases, algorithms and 3D quality of experience. This yet-nascent technology provides tremendous scope in terms of research activities, and we summarize each of them. We then move on to more esoteric topics such as algorithmic assessment of aesthetics in natural images and in art. We discuss current research and hypothesize about possible paths to tread. Towards the end of this article, we discuss some other areas of interest including high-definition (HD) quality assessment, immersive environments and so on, before summarizing interesting avenues for future work in multimedia (i.e., audio-visual) quality assessment.

Keywords: Quality assessment · objective quality assessment · subjective quality assessment · perceived quality

Anush Moorthy, Dept. of Electrical and Computer Engg., The University of Texas at Austin. Tel.: 512-415-0213. E-mail: [email protected]

Alan Bovik, Dept. of Electrical and Computer Engg., The University of Texas at Austin. Tel.: 512-471-6530

[Figure 1 block diagram: the original and distorted signals pass through preprocessing and channel decomposition; each channel undergoes error weighting and error masking, and error summation yields the quality/distortion measure.]

Fig. 1. Error sensitivity based image quality measurement.

The decomposed signal is treated differently in different channels according to human visual sensitivities measured in the specific channel. The errors between the two signals in each channel are calculated and weighted, usually by a Contrast Sensitivity Function (CSF). The weighted error signals are adjusted by a visual masking effect model, which reflects the reduced visibility of errors presented on the background signal. Finally, an error pooling method is employed to supply a single quality value of the whole image being tested. The summation usually takes the form

$$E = \left( \sum_{l} \sum_{k} |e_{l,k}|^{\beta} \right)^{1/\beta}, \qquad (1)$$

where $e_{l,k}$ is the weighted and masked error of the k-th coefficient in the l-th channel, and $\beta$ is a constant typically with a value between 1 and 4. This formula is commonly called Minkowski error pooling.

2.2. Weaknesses of Error Sensitivity Based Methods

The above error sensitivity based framework can be viewed as a simplified representation of the HVS. Such simplification implies the following assumptions:

1. The reference signal is of perfect quality.
2. There exist visual channels in the HVS, and the channel responses can be simulated by an appropriate set of channel transformations.
3. CSF variance and intra-channel masking effects are the dominant factors that affect the HVS's perception of each transformed coefficient in each channel.
4. For a single coefficient in each channel, after CSF weighting and masking, the relationship between the magnitude of the error, $|e_{l,k}|$, and the distortion perceived by the HVS, $d_{l,k}$, can be modelled as a non-linear function: $d_{l,k} = |e_{l,k}|^{\beta}$.
5. In each channel, after CSF weighting and masking, the interaction between different coefficients is small enough to be ignored.
6. The interaction between channels is small enough to be ignored.
7. The overall perceived distortion is monotonically increasing with the summation of the perceived errors of all coefficients in all channels.
8. The perceived image quality is determined in the early vision system. Higher level processes, such as feature extraction, pattern matching and cognitive understanding happening in the human brain, are less effective.
9. Active visual processes, such as the change of fixation points and the adaptive adjustment of spatial resolution because of attention, are less effective.

The first assumption is reasonable for image/video coding and communication applications. The second and third assumptions are also practically reasonable, provided the channel decomposition methods are designed carefully to fit the psychovisual experimental data. However, all the other assumptions are questionable. We give some examples below.

Notice that most subjective measurement of visual error sensitivity is conducted near the visibility threshold, typically using a 2-Alternative Forced Choice (2AFC) method. These measurement results are not necessarily good for measuring distortions much larger than just visible, which is the case for most image processing applications. Therefore, Assumption 4 is weak, unless more convincing evidence can be provided.

It has been shown that many models work appropriately for simple patterns, such as pure sine waves. However, their performance degrades significantly for natural images, where a large number of simple patterns coincide at the same image locations. This implies that the inter-channel interaction is strong, which is a contradiction of Assumption 6.

Also, we find that Minkowski error pooling (1) is not a good choice for image quality measurement. An example is given in Figure 2, where two test signals, test signals 1 (up-left) and 2 (up-right), are generated from the original signal (up-center). Test signal 1 is obtained by adding a constant number to each sample point, while the signs of the constant number added to test signal 2 are randomly chosen to be +1 or -1. The structural information of the original signal is completely destroyed in test signal 2, but preserved pretty well in test signal 1. In order to calculate the Minkowski error metric, we first subtract the original signal from the test signals, leading to the error signals 1 and 2, which have very different structures. However, applying the absolute operator on the error signals results in exactly the same absolute error signals. The final Minkowski error measures of the two test signals are equal, no matter how the $\beta$ value in (1) is selected. This example not only demonstrates that structure-preservation ability is an important factor in image quality assessment, but also shows that Minkowski error pooling (1) is very inefficient in capturing the structures of errors. By the observation that the frequency distributions of the test signals 1 and 2 are very different, one might argue that the problem can be solved by transforming the error signals into different frequency channels and measuring the errors differently in different channels. This argument is seemingly reasonable, but if the above example signals are extracted from certain frequency bands instead of the spatial domain, then repeated
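The Figure 2 argument is easy to reproduce numerically. The sketch below uses a hypothetical 1-D sine as the original signal (the figure's actual signal is not given in the text) and confirms that a constant offset and a random-sign offset of the same magnitude receive identical Minkowski-pooled errors for any choice of beta:

```python
import numpy as np

rng = np.random.default_rng(1)
original = np.sin(np.linspace(0, 4 * np.pi, 256))    # hypothetical original signal
c = 0.3                                              # distortion magnitude

test1 = original + c                                 # structure preserved
test2 = original + c * rng.choice([-1.0, 1.0], 256)  # structure destroyed

def minkowski_error(ref, test, beta):
    """Eq. (1) for a single channel: E = (sum |e|^beta)^(1/beta)."""
    return (np.abs(test - ref) ** beta).sum() ** (1.0 / beta)

for beta in (1.0, 2.0, 4.0):
    print(beta,
          minkowski_error(original, test1, beta),
          minkowski_error(original, test2, beta))
# Both columns agree for every beta, because |e| = c at every sample.
```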


Page 6: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


A Framework for Harmonic Color Measures

Saliency-Guided Consistent Color Harmonization

Yoann Baveye, Fabrice Urban, Christel Chamaret, Vincent Demoulin, and Pierre Hellier

Technicolor Research and Innovation, Rennes, France
{baveyey,urbanf,chamaretc,demoulinv,hellierp}@technicolor.com

https://research.technicolor.com/rennes/

Abstract. The focus of this paper is automatic color harmonization, which amounts to re-coloring an image so that the obtained color palette is more harmonious for human observers. The proposed automatic algorithm builds on the pioneering works described in [3,12], where templates of harmonious colors are defined on the hue wheel. We bring three contributions in this paper: first, saliency [9] is used to predict the most attractive visual areas and estimate a consistent harmonious template. Second, an efficient color segmentation algorithm, adapted from [4], is proposed to perform consistent color mapping. Third, a new mapping function substitutes the usual color shifting method. Results show that the method limits the visual artifacts of state-of-the-art methods and leads to a visually consistent harmonization.

Keywords: color harmonization, color segmentation, color mapping, saliency, visual attention.

1 Introduction

Harmony within a picture is somewhat subjective and related to a specific field of investigation. While artists mention shape symmetry, photographers refer to image composition, and movie directors operate rather at the color level. Harmony is a broad term whose definition is context-dependent. In this paper, the color harmony of a picture will refer to the way the color palette is automatically changed in order to enhance the global "look", rendering or feeling. The main motivation is to improve the user experience by objectively creating more harmony where nature has associated different objects with each other with global color disharmony, as illustrated in Figure 1.

During the last decades, increasing focus has been devoted to the area of color harmony, viewed as the color interaction within a spatial framework. Many color scientists have performed experiments to arrive at a definition and some objective criteria for the characterization of color harmony. Color combinations defining comprehensively harmonious doublets or triplets have been widely discussed in the literature [13,15,8,18], especially in association with a range of moods and adjectives [20].

S. Tominaga, R. Schettini, and A. Tremeau (Eds.): CCIW 2013, LNCS 7786, pp. 105–118, 2013. © Springer-Verlag Berlin Heidelberg 2013

No-reference Harmony-guided Quality Assessment

Christel Chamaret and Fabrice Urban, Technicolor
975, avenue des Champs Blancs, ZAC des Champs Blancs, CS 17616, 35576 Cesson
[email protected], [email protected]

Abstract

Color harmony of simple color patterns has been widely studied for color design. Rules defined there by psychological experiments have been applied to derive image aesthetic scores, or to re-colorize pictures. But what is harmonious or not in an image? What can the human eye perceive as disharmonious? Extensive research has been done in the context of quality assessment to define what is visible or not in images and videos. Techniques based on human visual system models use signal masking to define visibility thresholds. Based on results in both fields, we present a harmony quality assessment method to assess what is harmonious or not in an image. Color rules are used to detect which parts of images are disharmonious, and visual masking is applied to estimate to what extent an image area can be perceived as disharmonious. The output perceptual harmony quality map and scores can be used in a photo editing framework to guide the user towards the best artistic effects. Results show that the harmony maps reflect what a user perceives and that the score is correlated with the artistic intent.

1. Introduction

When manipulating, editing, or improving images, the best quality as well as a certain artistic intent are usually the finality. Nevertheless, although the issues related to objective quality assessment have been largely studied in the context of low level artifacts (blur, blockiness, jitter...), the artistic intent is a more subjective problem, leading to strong difficulties in modeling or generalization. As an intermediary indicator, aesthetic quality metrics based on high-level features intuitively related to beauty (colorfulness, line orientation, shape...) and rules of thumb (composition, rule-of-thirds, skyline...) have recently been showing up in the community [11, 4, 23]. Depending on the application context, some approaches take advantage of a reference source or do their best effort without any reference when providing an absolute quality measurement. Color harmony theory is used in [15, 7] as a global image cue for the assessment of aesthetic quality.

Figure 1. Is it possible to quantify the color harmony of a pixel in a picture? The left column is the original picture; the right column is the harmony-guided quality map. The whiter, the more disharmonious the pixels relative to the global picture. In the bottom picture, balls of color can be sorted by disharmony level.

In this paper, we propose a new approach for assessing what the quality of a picture is. As an interesting tool for content creators, targeting a maximization of the artistic effect, the proposed metric provides a no-reference per-

2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 978-0-7695-4990-3/13 © 2013 IEEE, DOI 10.1109/CVPRW.2013.161

[Figure 3 block diagram: from the input image (RGB space), a harmony distance (HSV space) yields a hierarchical harmony map, while a DWT (YUV space) feeds contrast masking and activity masking maps; perceptual masking, inter-level accumulation and spatial pooling then produce the perceptual harmony map and a score.]

Figure 3. Overview of the complete system.

its width in degrees. For notation simplicity, α_m denotes the angle of the first sector, which is also referred to as the template angle. For a given picture, an appropriate rotation angle α_m is computed to align T_m at best with the hue distribution of the image. It is the template angle that minimizes the Kullback-Leibler divergence between the normalized hue distribution M(h) of the picture, typically in the form of a histogram with L = 360 bins, and the normalized hue distribution of the template, as described by Baveye et al. [1]:

$$\alpha_m = \arg\min_{\alpha} \sum_h M(h) \ln\!\left(\frac{M(h)}{P_m(h-\alpha)}\right), \qquad (2)$$

where $P_m(h)$ is the hue distribution of the template $T_m$ with angle 0:

$$P_m(h) = \frac{1}{J} \sum_{k=1}^{K_m} P_{m,k}(h), \qquad J = \sum_h \sum_{k=1}^{K_m} P_{m,k}(h), \qquad (3)$$

$$P_{m,k}(h) = \mathbf{1}_{\{|h-\alpha_{m,k}| < w_{m,k}/2\}} \; e^{\frac{-1}{1-\left(\frac{2|h-\alpha_{m,k}|}{w_{m,k}}\right)^{10}}}. \qquad (4)$$

The value $E_m = \sum_h M(h) \ln\!\left(\frac{M(h)}{P_m(h-\alpha_m)}\right)$ represents the residual energy of template m for the image at hand.
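Since the hue distributions are 360-bin histograms, Eq. (2) can be solved by brute force over all integer rotations. A minimal sketch under that assumption; the length-360, sum-to-one discretization of both inputs is an illustrative choice, not prescribed by the paper:

```python
import numpy as np

def best_template_angle(hue_hist, template_pdf):
    """Eq. (2): search the rotation angle minimizing the KL divergence
    between the image hue histogram M(h) and the rotated template
    distribution P_m(h - alpha). Both inputs: length-360 arrays
    normalized to sum to 1."""
    eps = 1e-12                        # guard against log(0) in empty bins
    best_alpha, best_kl = 0, np.inf
    for alpha in range(360):
        p = np.roll(template_pdf, alpha)
        kl = np.sum(hue_hist * np.log((hue_hist + eps) / (p + eps)))
        if kl < best_kl:
            best_alpha, best_kl = alpha, kl
    return best_alpha, best_kl         # best_kl is the residual energy E_m
```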

2.2. Harmony distance

The definition of harmonious templates provides convenient information that may be arranged into a spatial harmony map. For a given template $T_m$, a hue h is considered harmonious if it is enclosed by a sector (meaning its harmonious distance is 0), while a hue outside the sector is not harmonious in a certain proportion defined by the hue distance $d_m(h)$. It is evaluated by computing the arc-length distance on the hue wheel (measured in degrees) to the closest sector:

$$d_m(h) = \min_{k=1\ldots K_m} \left[\,|h-\alpha_{m,k}| - \frac{w_{m,k}}{2}\,\right]^+, \qquad (5)$$

where |.| is the arc-length distance and $[.]^+ = \max(0, .)$. Then, assuming that each template (associated with its optimal angle) provides harmony information about the picture, the $d_m$ maps are computed for all templates and combined at the pixel level. At each pixel u = (x, y) with associated hue h(u), the harmony distance map G(u) accumulates the harmony distances $d_m(h(u))$ as follows. The contribution of each template is weighted according to its respective energy, to give more importance to well suited templates (having low energy):

$$G(u) = \sum_m \left(1 - \frac{E_m}{\sum_{m'} E_{m'}}\right) d_m(h(u)) \cdot s(u) \cdot v(u), \qquad (6)$$

where s and v are the saturation and value of the image. Weighting the harmony distance by saturation and value gives a more perceptual result because the more saturated the color, or the higher its value, the more strongly it is perceived. Some qualitative results are depicted in the second column of figure 4.
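Eq. (5) reduces to a wrap-around distance on the 360-degree hue wheel. A minimal sketch; the single-sector template used in the example is a hypothetical stand-in, not one of the paper's template definitions:

```python
def harmony_distance(h, sector_angles, sector_widths):
    """Eq. (5): arc-length distance (degrees) from hue h to the closest
    harmonious sector of one template T_m; 0 when h lies inside a sector."""
    best = 360.0
    for alpha, w in zip(sector_angles, sector_widths):
        # Wrap-around arc-length distance on the hue wheel.
        diff = abs((h - alpha + 180.0) % 360.0 - 180.0)
        best = min(best, max(0.0, diff - w / 2.0))
    return best

# Hypothetical template: one 93-degree sector centred at hue 0.
print(harmony_distance(30.0, [0.0], [93.0]))   # inside the sector -> 0.0
print(harmony_distance(120.0, [0.0], [93.0]))  # 120 - 46.5 = 73.5 degrees outside
```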

2.3. Perceptual masking

By including perceptual masking that simulates the perception of the human visual system, we transform harmony distances into perceptual harmony maps. Spatial masking refers to the alteration of the perception of a signal by the surrounding background, i.e., a visibility increase (pedestal effect) or decrease (masking effect) due to the surrounding signal. As recommended by Watson et al. [21], both contrast masking and entropy masking are incorporated in the proposed quality metric. Contrast masking models the visibility change of the signal due to contrast values created by edges or color gradation. Entropy masking reflects the uncertainty of the masking signal, due to texture complexity. Entropy masking is also known as activity masking or local texture masking [14]; in the following, activity masking is used. The masking values are computed on the luminance


Page 7: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Intelligent Transfer of Harmonic Palettes

Vis Comput (2010) 26: 933–942, DOI 10.1007/s00371-010-0498-y

ORIGINAL ARTICLE

Example-based painting guided by color features

Hua Huang · Yu Zang · Chen-Feng Li

Published online: 14 April 2010. © Springer-Verlag 2010

Abstract: In this paper, by analyzing and learning the color features of a reference painting with a novel set of measures, an example-based approach is developed to transfer some key color features from the template to the source image. First, the color features of a given template painting are analyzed in terms of hue distribution and overall color tone. These features are then extracted and learned by the algorithm through an optimization scheme. Next, to ensure the spatial coherence of the final result, a segmentation based post processing is performed. Finally, a new color blending model, which avoids the dependence on edge detection and the adjustment of inconvenient tuning parameters, is developed to provide flexible control over the accuracy of painting. Experimental results show that the new example-based painting system can produce paintings with specific color features of the template, and it can also be applied to changing color themes of art pieces, designing color styles of paintings/real images, and specific color harmonization.

Keywords: Image processing · Example-based painting · Color features learning

H. Huang (corresponding author) · Y. Zang, School of Electronic and Information Engineering, Xi'an Jiaotong University, No. 28, Xianning West Road, Xi'an, China. E-mail: [email protected]

Y. Zang. E-mail: [email protected]

C.-F. Li, School of Engineering, Swansea University, Swansea, UK. E-mail: [email protected]

1 Introduction

In computer painting and image synthesis, an example-based approach creates paintings/images by automatically modifying the source image to imitate some specific features of the reference image. Many existing methods try to learn the texture features of a painting [10, 16] and, although they present impressive results, the color features of a painting are seldom considered. In this paper, we mainly focus on how to learn and extract the color features of the template painting, and to transfer to the source image the color theme and color style.

When painting, an artist chooses colors from a certain range to express a specific emotion and, to emphasize the emotion, the use of color is often exaggerated, e.g. the sky may no longer be painted in blue and the meadow may no longer be green. Real images such as photos taken by camera are created to describe the real world as factually as possible and, except for some specific art photos, they often lack vivid emotion. Hence, in order to transfer the artist's emotion from a painting to a real image, a key task is to learn and copy the color features. Choosing the right color to paint may not be difficult for specialized artists, but it can be very confusing for an amateur user who only has a passionate admiration for a reference painting and a vague request to "make it like this."

Traditional color transfer methods [1, 6, 12, 13, 15, 17] are not suitable for this task because they aim to directly transfer the colors from one image to the other, but do not address the color features that determine the overall color style of the reference. These direct color transfer approaches often fail to transfer the emotion of the template painting, and some important visual features of the input image (such as the relationship of light and shadow) tend to get damaged. On the other hand, painters never simply copy the colorful world to the canvas; instead, they create paintings

March/April 2008, Published by the IEEE Computer Society, 0272-1716/08 © 2008 IEEE

Computational Aesthetics

Automatic Mood-Transferring between Color Images
Chuan-Kai Yang and Li-Kai Peng, National Taiwan University of Science and Technology

With the digital camera's invention, capturing images has become extremely easy and widespread, while image data has become more robust and easy to manipulate. Among the many possible image-processing options, users have become increasingly interested in changing an image's tone or mood by altering its colors, such as converting a tree's leaves from green to yellow to suggest a change of season.

Groundbreaking work by Reinhard and colleagues made such a conversion possible and extremely simple.1 In their approach, users provide an input image, along with a reference image to exemplify the desired color conversion. The technique's algorithm essentially attempts to match the input image's color statistics with those of the reference image to achieve the tonal change. However, this seemingly successful approach has two outstanding problems. First, some users might find it difficult or impossible to choose a proper reference image because they lack either an aesthetic background or access to an image database. Second, it's difficult to evaluate the color-transfer quality, particularly when the input and reference images have significantly different content.

We propose solutions to both problems. First, we give users a simple and intuitive interface that includes a list of predefined colors representing a broad range of moods. So, rather than having to supply a reference image, users simply select a color mood with a mouse click. To address the color-transfer quality problem, we associate each color mood with a set of up to 10 images from our image database. After selecting their color mood, users choose one associated image. Our histogram matching algorithm then uses the selected image to determine the input image's color distribution. We thereby achieve more accurate and justifiable color conversion results, while also preserving spatial coherence. Here, we further describe our solutions and their results and compare them to existing approaches.

Color-mood conversion

Our approach adopts Whelan's classification scheme,2 dividing the color spectrum into 24 categories or moods. To create a mood image database, we collect from the Web five to 10 images for each category; these serve as our reference pool. At runtime, when users input an image and a target mood, our system classifies the input image. Then, if necessary, it makes corresponding global alignment transformations, including translation, rotation, and scaling, to every pixel of the input image. However, because such transformations alone can't guarantee that the system will successfully convert the input image to the desired category, we employ a segmentation-based approach to make the color adjustment region by region. We perform such adjustments repeatedly until the input image matches the target mood.

Because each mood category contains more than one image, we can further enhance our conversion quality by treating each associated database image as a reference in the transformation process. Our system can then choose the best conversion result as the output image. Finally, we use the RGB color system. Although Reinhard and colleagues propose using Lαβ for better decorrelation, we've found little significant difference among color systems. Also, Whelan uses the CMYK system in the original color-mood definitions,2 so we've converted those definitions to their RGB counterparts for adoption in our system.

This new color-conversion method offers users an intuitive, one-click interface for style conversion and a refined histogram-matching method that preserves spatial coherence.
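The Reinhard-style statistics matching that both excerpts build on is compact enough to sketch. The version below works per RGB channel, in line with the remark above that the choice of color system made little difference; the function name and the direct RGB treatment are illustrative assumptions, not the authors' code, and the full systems add histogram matching and segmentation on top of this:

```python
import numpy as np

def transfer_color_stats(source, reference):
    """Shift each channel of the source image to the reference image's
    per-channel mean and standard deviation (Reinhard-style matching).
    Inputs: uint8 arrays of shape (H, W, 3)."""
    src = source.astype(float)
    ref = reference.astype(float)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```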


Fig. 10 Our method applied to basic tone design. Top row: (a) the original painting, (b) and (c) paintings designed with a warm tone, (d) and (e) paintings designed with a cold tone. Bottom row: (f) the input photo, (g) warm style design, (h) cold style design, (i) custom style design.

with the background. Shown in (b) is the result presented by Cohen-Or et al. [2], which harmonized the foreground with respect to the background. As shown in (c), the harmonization effect can also be achieved using the proposed method by taking the input image as the foreground object (the girl) and the template image as the background. As the graph-cut technique is used in Cohen-Or et al. [2], separated areas belonging to the same object may not be deduced, as highlighted by the red ellipse in (b). As one object is likely to have similar colors, this unsatisfactory situation is avoided to some degree in the proposed method, as shown in (c).

Performance. We ran our algorithm on a PC with an Intel 3.0 GHz dual-core CPU and a GeForce 9600 GT video card. The computation time depends on the image size and experimental parameters; the typical value for a 600 × 800 image is about 8–10 seconds.

7 Conclusion and limitation

In this paper, we propose an example-based painting scheme guided by color features. After defining two key color features for paintings, an optimization based learning scheme is presented to transfer the color features from the template paintings to the input image. In order to obtain results without discontinuity artifacts, a spatial coherence processing scheme is also developed. When painting, a color blending model is designed to control more flexibly the accuracy of the painting process, which avoids the dependence on edge detection and the adjustment of inconvenient tuning parameters. Comprehensive examples are presented to demonstrate the performance of the proposed approach in different applications, including example-based image painting, example-based color theme changing and color style design, and color harmonization.

The new technique allows the input image to learn color features from the template image while preserving its own structural features. However, in some extreme cases, colors like white and black cannot be handled well. This is because these two colors have the minimum saturation and lightness, such that they stay unchanged in the learning process. Figure 7 shows a failed example, where (a) is the input image and (b) is the template painting. As a large area of the beach is rendered in white, this area stays almost unchanged in the result (c) and appears disharmonious. For future work, the problem might be tackled by combining the new example-based approach with traditional color transfer methods.

Acknowledgements This work is partially supported by the National Natural Science Foundation of China (Grant No. 60970068), the Key Project of the Chinese Ministry of Education (Grant No. 109142) and the MOE-Intel Joint Research Fund (Grant No. MOE-INTEL-09-07).

References

1. Chang, Y., Saito, S., Nakajima, M.: A framework for transfer colors based on the basic color categories. In: Computer Graphics International, pp. 176–183 (2003)

2. Cohen-Or, D., Sorkine, O., Gal, R., Leyvand, T., Xu, Y.: Color harmonization. ACM Trans. Graph. 25(3), 624–630 (2006)

3. Comaniciu, D., Meer, P.: Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 603–619 (2002)


Page 8: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Fast Procedural Texture Synthesis

EUROGRAPHICS 2010 / H. Hauser and E. Reinhard, STAR – State of The Art Report

State of the Art in Procedural Noise Functions

A. Lagae(1,2), S. Lefebvre(2,3), R. Cook(4), T. DeRose(4), G. Drettakis(2), D.S. Ebert(5), J.P. Lewis(6), K. Perlin(7), M. Zwicker(8)

(1) Katholieke Universiteit Leuven, (2) REVES/INRIA Sophia-Antipolis, (3) ALICE/INRIA Nancy Grand-Est / Loria, (4) Pixar Animation Studios, (5) Purdue University, (6) Weta Digital, (7) New York University, (8) University of Bern

Abstract

Procedural noise functions are widely used in Computer Graphics, from off-line rendering in movie production to interactive video games. The ability to add complex and intricate details at low memory and authoring cost is one of its main attractions. This state-of-the-art report is motivated by the inherent importance of noise in graphics, the widespread use of noise in industry, and the fact that many recent research developments justify the need for an up-to-date survey. Our goal is to provide both a valuable entry point into the field of procedural noise functions, as well as a comprehensive view of the field to the informed reader. In this report, we cover procedural noise functions in all their aspects. We outline recent advances in research on this topic, discussing and comparing recent and well established methods. We first formally define procedural noise functions based on stochastic processes and then classify and review existing procedural noise functions. We discuss how procedural noise functions are used for modeling and how they are applied on surfaces. We then introduce analysis tools and apply them to evaluate and compare the major approaches to noise generation. We finally identify several directions for future work.

Keywords: procedural noise function, noise, stochastic process, procedural, Perlin noise, wavelet noise, anisotropic noise, sparse convolution noise, Gabor noise, spot noise, surface noise, solid noise, anti-aliasing, filtering, stochastic modeling, procedural texture, procedural modeling, solid texture, texture synthesis, spectral analysis, power spectrum estimation

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture

1. Introduction

Efficiently adding rich visual detail to synthetic images has always been one of the major challenges in computer graphics. Procedural noise is one of the most successful fundamental tools used to generate such detail. Ever since the first image of the marble vase, presented by K. Perlin [Per85] (see figure 1), "Perlin noise" has seen widespread use both in research and in industry. Noise has been used for a diverse and extensive range of purposes in procedural texturing, including clouds, waves, tornadoes, rocket trails, heat ripples, incidental motion of animated characters, and so on. It is widely used both in film production and video games, and is currently implemented in every major 3D computer graphics software package, such as Autodesk 3ds Max and Maya, Blender, Pixar's RenderMan, etc.

Procedural noise has many advantages: it is typically very fast to evaluate, often allowing evaluation of complex and intricate patterns on-the-fly, and it has a very low memory footprint, making it an ideal candidate for compactly generating complex visual detail. In addition, with a suitable set of parameters, procedural noise can be used to easily generate a large number of different patterns. Finally, procedural noise is often randomly accessible, so that it can be evaluated independently at every point in constant time. This last property has always been a great advantage, but takes on even higher significance with the advent of massively parallel GPUs and multi-core CPU systems.
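To make the "randomly accessible" property concrete: a gradient noise in Perlin's style can be evaluated at any point with no state beyond a hash of the integer lattice. The sketch below is a minimal CPU illustration of that idea, using a sine-based hash common in shader folklore; it is not Perlin's reference implementation, and the hash and fade constants are illustrative choices:

```python
import numpy as np

def gradient_noise(x, y, seed=0):
    """Minimal Perlin-style 2D gradient noise: hash the four lattice
    corners to pseudo-random unit gradients, dot them with the offset
    vectors, and blend with the quintic fade 6t^5 - 15t^4 + 10t^3.
    Point-wise and stateless, hence randomly accessible."""
    def grad(ix, iy):
        h = np.sin(ix * 127.1 + iy * 311.7 + seed) * 43758.5453  # hash -> angle
        ang = (h - np.floor(h)) * 2.0 * np.pi
        return np.cos(ang), np.sin(ang)

    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    fade = lambda t: t * t * t * (t * (t * 6 - 15) + 10)
    ux, uy = fade(fx), fade(fy)

    dots = {}
    for dx in (0, 1):
        for dy in (0, 1):
            gx, gy = grad(x0 + dx, y0 + dy)
            dots[(dx, dy)] = gx * (fx - dx) + gy * (fy - dy)

    top = dots[(0, 0)] + ux * (dots[(1, 0)] - dots[(0, 0)])
    bot = dots[(0, 1)] + ux * (dots[(1, 1)] - dots[(0, 1)])
    return top + uy * (bot - top)

print(gradient_noise(1.3, 2.7))  # smooth value, roughly within [-1, 1]
```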

The most recent survey on noise is in the book of Ebert et al. [EMP∗02]. Since then there have been a multitude of recent research results in the domain, such as [CD05, BHN07, GZD08, LLDD09a], as well as many others. In this sur-

© The Eurographics Association 200x.

Procedural GPU Shading Ready for Use
Stefan Gustavson(1), Linköping University, Sweden and Ian McEwan(2), Ashima Research, USA

A selection of procedural patterns, generated entirely on the GPU without any texture accesses. The left two spheres use Perlin simplex noise by itself and in a fractal sum. The right two spheres use Worley cellular noise in different ways. The plane at the bottom shows Perlin and Neyret's "flow noise", with rotating gradients. All these shaders are animated, have analytic derivatives that are easy to compute, and are fast enough to be considered for routine use even on previous generation GPU hardware.

Procedural patterns have been a staple of software shading for decades. Perlin noise revolutionized the industry and won an Academy award for technical achievement. With the comparably recent introduction of programmable shading in GPU architectures, hardware accelerated procedural shading is now very straightforward and deserves to be considered a lot more than what seems to be current practice.

Very recent work by us, submitted for publication elsewhere, has provided open source GLSL implementations of many classic noise algorithms in the form of fast, self-contained functions [1]. To make the case for procedural shading, we will show live demos using this work of ours to create visually rich surface patterns. A cross platform demo, with full source code, to render the animated scene in the figure above is at: http://www.itn.liu.se/~stegu/gpunoise/ More examples and demos will be presented during the talk. If you have a GLSL-capable laptop, bring it along.

Procedural shading has an inherent flexibility that cannot be matched by sampled texture images. The initial effort of writing a good procedural shader is more complicated than drawing a texture or editing a photographic image to suit your needs, but with procedural shaders, the pattern and the colors can be varied with a simple change of parameters. This allows extensive re-use in many circumstances, as well as fine tuning or even complete overhauls of the surface appearance very late in a production process. A procedural pattern allows for easy generation of a corresponding bump or normal map. Procedural patterns can be rendered at an arbitrary resolution without jagged edges or blurring in close-up views, which is particularly useful for real time applications where the viewpoint is often unrestricted. There are no problems with periodic tiling artifacts when a procedural texture is applied to a large area. Procedural shading also lifts the memory and tiling restrictions for 3D textures and animated patterns, and enables analytic anisotropic antialiasing.

While all these advantages have made procedural shading popular for offline rendering, real time applications have not yet adopted this practice. One obvious reason is that the GPU is a limited resource, and quality often has to be sacrificed for performance. However, recent developments have given us massive computing power even on typical consumer level GPUs, and with the massively parallel architectures that are employed, memory access has become a major bottleneck. A modern GPU has an abundance of texture units and uses caching strategies to reduce the number of accesses to global memory, but many real time applications now have an imbalance between texture bandwidth and processing bandwidth, to the extent that you can sometimes consider that "cycles are free", meaning that if there is a lot of texture access going on, computing instructions to augment the image based textures with procedural elements can often be executed in parallel to memory reads without any slowdown at all. Even on low end hardware for mobile devices, texture download and texture access both come at a considerable cost which can be alleviated by procedural texturing.

Procedural methods are not limited to fragment shading. With the ever increasing complexity of real time geometry and the recent introduction of GPU-hosted tessellation, tasks like surface displacements and ambient animations are best performed on the GPU. The tight interaction between procedural displacement shaders and procedural surface shaders has proven very fruitful for creating complex and impressive visuals in offline shading environments, and there is no reason to assume that real time shading would be fundamentally different in that respect.

For all these reasons, now is a good time to consider using more of the GPU power for procedural texturing.
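The "fractal sum" mentioned in the figure caption above is only a few lines on top of any noise function. A sketch; the octave count, lacunarity and gain values are conventional defaults rather than values taken from the talk, and any noise(x, y) function works, such as the gradient_noise sketch shown earlier on this page:

```python
import math

def fbm(noise, x, y, octaves=5, lacunarity=2.0, gain=0.5):
    """Fractal sum ("fBm"): accumulate octaves of a 2-D noise function
    at doubling frequency and halving amplitude, as in the fractal-sum
    sphere of the figure above."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * noise(x * freq, y * freq)
        amp *= gain
        freq *= lacunarity
    return total

# Example with a trivial stand-in noise function.
print(fbm(lambda x, y: math.sin(x) * math.cos(y), 0.7, 1.9))
```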

[1] http://github.com/ashima/webgl-noise

1) [email protected], 2) [email protected]

Pacific Graphics 2012, C. Bregler, P. Sander, and M. Wimmer (Guest Editors)

Volume 31 (2012), Number 7

Multi-scale Assemblage for Procedural Texturing

G. Gilet(1), J-M. Dischler(2) and D. Ghazanfarpour(1)

(1) XLIM - UMR CNRS 7252, University of Limoges, France; (2) LSIIT - UMR CNRS 7005, University of Strasbourg, France

Figure 1: Multi-scale assemblage is a random pattern generation process generalizing sparse convolution. It allows users to design interactively new types of texture basis functions (noise-like functions) and/or structured patterns while preserving all advantages of procedural definitions, namely infinity without repetition, definition independency and extreme compactness. These textures require no texture memory and fit entirely into the shader program.

Abstract: A procedural pattern generation process, called multi-scale "assemblage", is introduced. An assemblage is defined as a multi-scale composition of "multi-variate" statistical figures, which can be kernel functions for defining noise-like texture basis functions, or patterns for defining structured procedural textures. This paper presents two main contributions: 1) a new procedural random point distribution function that, unlike point jittering, allows us to take into account some spatial dependencies among figures, and 2) a "multi-variate" approach that, instead of defining finite sets of constant figures, allows us to generate nearly infinite variations of figures on-the-fly. For both, we use a "statistical shape model", which is a representation of shape variations. Thanks to a direct GPU implementation, assemblage textures can be used to generate new classes of procedural textures for real-time rendering while preserving all characteristics of usual procedural textures, namely: infinity, definition independency (provided the figures are also definition independent) and extreme compactness.

Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Color, shading, shadowing, and texture

Keywords: procedural texturing, noise, procedural object distribution function, GPU shader

1. Introduction

Content creators tend to design larger and larger virtual worlds characterized by a huge amount of visual detail, which is commonly modeled using textures. However, while increasing texture complexity improves visual quality, it also raises more and more problems concerning: 1) excessive storage requirements and 2) prohibitive synthesis timings. Many popular texture synthesis techniques, like synthesis "by example" [WLKT09] or "physical simulation" [DRS08], do not scale well, since the memory consumption as well as the computational complexity grow proportionally w.r.t. the surface size and the texture definition. Procedural textures [EMP∗98] intrinsically avoid the two previous problems. Instead of fetching texture values from massive pre-computed data arrays, a little program directly computes



Page 9: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Thanks

Thanks for your attention!
Michel Alves dos Santos - [email protected]

Michel Alves dos Santos - (Alves, M.)
MSc Candidate at Federal University of Rio de Janeiro.
E-mail: [email protected], [email protected]
Lattes: http://lattes.cnpq.br/7295977425362370
Home: http://www.michelalves.com
Phone: +55 21 2562 8572 (Institutional Phone Number)

http://www.facebook.com/michel.alves.santos

http://www.linkedin.com/profile/view?id=26542507


Page 10: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies

Bibliography: Effectiveness of Image Quality Assessment Indexes on Detection of Structural and Nonstructural Distortions

Michel Alves dos Santos

January, 2014

References

Freeman, J. (2012), Computation and representation in the primate visual system, PhD thesis, Center for Neural Science, New York University, New York, NY.

Guerrero-Colón, J. A., Simoncelli, E. P. & Portilla, J. (2008), Image denoising using mixtures of Gaussian scale mixtures, in 'Proc 15th IEEE Int'l Conf on Image Proc', IEEE Computer Society, San Diego, CA, pp. 565–568.

Lyu, S. & Simoncelli, E. P. (2009), Reducing statistical dependencies in natural signals using radial Gaussianization, in D. Koller, D. Schuurmans, Y. Bengio & L. Bottou, eds, 'Adv. Neural Information Processing Systems (NIPS*08)', Vol. 21, MIT Press, Cambridge, MA, pp. 1009–1016.

Moorthy, A. K. & Bovik, A. C. (2011), 'Visual quality assessment algorithms: What does the future hold?', Multimedia Tools Appl. 51(2), 675–696.

Rajashekar, U. & Simoncelli, E. P. (2009), Multiscale denoising of photographic images, in A. C. Bovik, ed., 'The Essential Guide to Image Processing', 2nd ed., Academic Press, chapter 11, pp. 241–261.

Rajashekar, U., Wang, Z. & Simoncelli, E. P. (2009), Quantifying color image distortions based on adaptive spatio-chromatic signal decompositions, in 'Proc 16th IEEE Int'l Conf on Image Proc', IEEE Computer Society, Cairo, Egypt, pp. 2213–2216.

Rajashekar, U., Wang, Z. & Simoncelli, E. P. (2010), Perceptual quality assessment of color images using adaptive signal representation, in B. Rogowitz & T. N. Pappas, eds, 'Proc SPIE on Human Vision and Electronic Imaging XV', Vol. 7527, Society of Photo-Optical Instrumentation, San Jose, CA.

Simoncelli, E. P. (2005), Statistical modeling of photographic images, in A. Bovik, ed., 'Handbook of Image and Video Processing', 2nd ed., Academic Press, chapter 4.7, pp. 431–441.

Simoncelli, E. P. (2009), Capturing visual image properties with probabilistic models, in A. C. Bovik, ed., 'The Essential Guide to Image Processing', 2nd ed., Academic Press, chapter 9, pp. 205–223.

Wang, Z. & Simoncelli, E. P. (2004), Stimulus synthesis for efficient evaluation and refinement of perceptual image quality metrics, in B. Rogowitz & T. N. Pappas, eds, 'Proc. SPIE, Conf on Human Vision and Electronic Imaging IX', Vol. 5292, San Jose, CA, pp. 99–108.

Wang, Z. & Simoncelli, E. P. (2005a), Reduced reference image quality assessment using a wavelet domain natural image statistic model, in B. Rogowitz, T. N. Pappas & S. J. Daly, eds, 'Proc. SPIE, Conf. on Human Vision and Electronic Imaging X', Vol. 5666, San Jose, CA, pp. 149–159.

Wang, Z. & Simoncelli, E. P. (2005b), Translation insensitive image similarity in the complex wavelet domain, in 'Proc. Int'l Conf Acoustics Speech Signal Processing (ICASSP)', Vol. II, IEEE Sig Proc Society, Philadelphia, PA, pp. 573–576.

Wang, Z., Bovik, A. & Lu, L. (2002), Why is image quality assessment so difficult?, in 'Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on', Vol. 4, pp. IV-3313–IV-3316.

Wang, Z., Bovik, A. C. & Simoncelli, E. P. (2005), Structural approaches to image quality assessment, in A. Bovik, ed., 'Handbook of Image and Video Processing', 2nd ed., Academic Press, chapter 8.3, pp. 961–974.

Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. (2004), 'Perceptual image quality assessment: From error visibility to structural similarity', IEEE Trans Image Processing 13(4), 600–612. Recipient, IEEE Signal Processing Society Best Paper Award, 2009.

Wang, Z., Simoncelli, E. P. & Bovik, A. C. (2003), Multiscale structural similarity for image quality assessment, in 'Proc 37th Asilomar Conf on Signals, Systems and Computers', Vol. 2, IEEE Computer Society, Pacific Grove, CA, pp. 1398–1402.

Wang, Z., Wu, G., Sheikh, H. R., Simoncelli, E. P., Yang, E. & Bovik, A. C. (2006), 'Quality-aware images', IEEE Trans Image Processing 15(6), 1680–1689.

Yu, H. & Liu, X. (2011), Structure similarity image quality assessment based on visual perception, in 'EMEIT', IEEE, pp. 1519–1522.

Zhang, F. & Xu, Y. (2009), Image quality evaluation based on human visual perception, in 'Proceedings of the 21st Annual International Conference on Chinese Control and Decision Conference', CCDC'09, IEEE Press, Piscataway, NJ, USA, pp. 1542–1545. URL http://dl.acm.org/citation.cfm?id=1714472.1714772.

Zhang, L., Zhang, L., Mou, X. & Zhang, D. (2011), 'FSIM: A feature similarity index for image quality assessment', IEEE Transactions on Image Processing 20(8), 2378–2386. URL http://dblp.uni-trier.de/db/journals/tip/tip20.html#ZhangZMZ11.


Page 11: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies

Bibliography: A Framework for Harmonic Color Measures

Michel Alves dos Santos

January, 2014

References

Adobe (2013), ‘Adobe Kuler’. URL https://kuler.adobe.com/create/color-wheel/.

Anvil Design (Redwood City, CA) & Rockport Publishers (2005), Pattern + Palette Sourcebook: A Complete Guide to Choosing the Perfect Color and Pattern in Design, Rockport Publishers.

Baveye, Y., Urban, F., Chamaret, C., Demoulin, V. & Hellier, P. (2013), Saliency-guided consistent color harmonization, in ‘Proceedings of the 4th International Conference on Computational Color Imaging’, CCIW’13, Springer-Verlag, Berlin, Heidelberg, pp. 105–118. URL http://dx.doi.org/10.1007/978-3-642-36700-7_9.

Billmeyer, F. W. (1987), ‘Survey of color order systems’, Color Research & Application 12(4), 173–186.

Bochko, V. & Parkkinen, J. (2006), ‘A spectral color analysis and colorization technique’, Computer Graphics and Applications, IEEE 26(5), 74–82.

Bratkova, M., Boulos, S. & Shirley, P. (2009), ‘oRGB: a practical opponent color space for computer graphics’, Computer Graphics and Applications, IEEE 29(1), 42–55.

Burchett, K. E. (2002), ‘Color harmony’, Color Research & Application 27(1), 28–31.

Butterfield, S., Butterfield, S., Kaufman, D. & Goewey, J. (1998), Color Palettes: Atmospheric Interiors Using the Donald Kaufman Color Collection, Clarkson Potter.

Chang, Y., Saito, S., Uchikawa, K. & Nakajima, M. (2006), ‘Example-based color stylization of images’, ACM Transactions on Applied Perception 2(3), 322–345.

Clifton-Mogg, C. & Williams, A. (2001), The Color Design Source Book: Using Fabrics, Paints and Accessories for Successful Decorating, Ryland Peters & Small.

Cohen-Or, D., Sorkine, O., Gal, R., Leyvand, T. & Xu, Y.-Q. (2006), ‘Color harmonization’, ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH) 25(3), 624–630.

Datta, R., Joshi, D., Li, J. & Wang, J. Z. (2006), Studying aesthetics in photographic images using a computational approach, in ‘Computer Vision – ECCV 2006’, Springer, pp. 288–301.

Diaz, J., Marco, J. & Vazquez, P. (2010), Cost-effective feature enhancement for volume datasets, in ‘15th International Workshop on Vision, Modeling and Visualization’, pp. 187–194.

Dorrell, P. (2004), Living the Artist’s Life: A Guide to Growing, Persevering and Succeeding in the Art World, Hillstead Pub.

Eiseman, L., Recker, K. & Pantone, I. (2011), Pantone: The Twentieth Century in Color, Chronicle Books.

Feisner, E. (2006), Colour: How to Use Colour in Art and Design, Laurence King.

Gerritsen, F. (1975), Theory and practice of color: a color theory based on laws of perception, Cengage Learning.

Gerritsen, F. (1988), Evolution in color, Schiffer Pub.

Gooch, A. A., Olsen, S. C., Tumblin, J. & Gooch, B. (2005), ‘Color2gray: salience-preserving color removal’, ACM Trans. Graph. 24(3), 634–639.

Granville, W. C. (1987), ‘Color harmony: What is it?’, Color Research & Application 12(4), 196–201.

Granville, W. C. & Jacobson, E. (1944), ‘Colorimetric specification of the color harmony manual from spectrophotometric measurements’, J. Opt. Soc. Am. 34(7), 382–393.

Gruber, L., Kalkofen, D. & Schmalstieg, D. (2010), Color harmonization for augmented reality, in ‘Mixed and Augmented Reality (ISMAR), 2010 9th IEEE International Symposium on’, pp. 227–228.

Guo, Y. W., Liu, M., Gu, T. T. & Wang, W. P. (2012), ‘Improving photo composition elegantly: Considering image similarity during composition optimization’, Comp. Graph. Forum 31(7pt2), 2193–2202.

Haber, J., Lynch, S. & Carpendale, S. (2011), ColourVis: exploring colour usage in paintings over time, in ‘Proceedings of the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging’, CAe ’11, ACM, New York, NY, USA, pp. 105–112.


Page 12: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Hascoet, M. (2012), Visual color design, in ‘Information Visualisation (IV), 2012 16th International Conference on’, IEEE, pp. 62–67.

Hirano, K. & Miyamichi, J. (1992), ‘Construction of neural networks to select harmonious color combinations’, Systems and Computers in Japan 23(10), 42–53.

Holtzschue, L. (2002), Understanding Color, John Wiley & Sons, Inc., New York.

Hou, X. & Zhang, L. (2007), Color conceptualization, in ‘Proceedings of the 15th international conference on Multimedia’, MULTIMEDIA ’07, ACM, New York, NY, USA, pp. 265–268. URL http://doi.acm.org/10.1145/1291233.1291288.

Huang, H., Zang, Y. & Li, C.-F. (2010a), ‘Example-based painting guided by color features’, Vis. Comput. 26(6-8), 933–942.

Huang, H., Zang, Y. & Li, C.-F. (2010b), ‘Example-based painting guided by color features’, The Visual Computer 26(6-8), 933–942.

Huo, X. & Tan, J. (2009), An improved method for color harmonization, in ‘Image and Signal Processing, 2009. CISP ’09. 2nd International Congress on’, pp. 1–4.

Irony, R., Cohen-Or, D. & Lischinski, D. (2005), Colorization by example, in ‘Proceedings of the Sixteenth Eurographics conference on Rendering Techniques’, Eurographics Association, pp. 201–210.

Itten, J. (1960), The Art of Color, Van Nostrand Reinhold Company, New York.

Jennings, S. (2003), Artist’s Color Manual: The Complete Guide to Working with Color, Chronicle Books.

Kelly, K. L. & Judd, D. B. (1976), Color: universal language and dictionary of names, Vol. 440, US Department of Commerce, National Bureau of Standards.

Kopacz, J. (2004), Color in Three-dimensional Design, McGraw-Hill.

Krause, J. (2002), Color Index: Over 1,000 Color Combinations, CMYK and RGB Formulas, for Print and Web Media, F & W Publications, Incorporated.

Lalonde, J.-F. & Efros, A. A. (2007), Using color compatibility for assessing image realism, in ‘Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on’, IEEE, pp. 1–8.

Levin, A., Lischinski, D. & Weiss, Y. (2004), ‘Colorization using optimization’, ACM Trans. Graph. 23(3), 689–694.

Li, C. & Chen, T. (2009), ‘Aesthetic visual quality assessment of paintings’, Selected Topics in Signal Processing, IEEE Journal of 3(2), 236–252.

Lindner, A., Shaji, A., Bonnier, N. & Susstrunk, S. (2012), Joint statistical analysis of images and keywords, ACM, New York, NY, USA, pp. 489–498.

Liu, L., Chen, R., Wolf, L. & Cohen-Or, D. (2010), Optimizing photo composition, in ‘Computer Graphics Forum’, Vol. 29, Wiley Online Library, pp. 469–478.

Luan, Q., Wen, F., Cohen-Or, D., Liang, L., Xu, Y.-Q. & Shum, H.-Y. (2007), Natural image colorization, in ‘Proceedings of the 18th Eurographics conference on Rendering Techniques’, Eurographics Association, pp. 309–320.

Luo, Y. & Tang, X. (2008a), Photo and video quality evaluation: Focusing on the subject, in ‘Computer Vision – ECCV 2008’, Springer, pp. 386–399.

Luo, Y. & Tang, X. (2008b), Photo and video quality evaluation: Focusing on the subject, in ‘Computer Vision – ECCV 2008’, Springer, pp. 386–399.

Mahnke, F. (1996), Color, Environment, and Human Response: An Interdisciplinary Understanding of Color and Its Use as a Beneficial Element in the Design of the Architectural Environment, Interior Design Series, Wiley.

Matsuda, Y. (1995), Color Design, Asakura Shoten (in Japanese), Tokyo, Japan.

Meier, B. J. (1988), Ace: a color expert system for user interface design, in ‘Proceedings of the 1st annual ACM SIGGRAPH symposium on User Interface Software’, UIST ’88, ACM, New York, NY, USA, pp. 117–128.

Monsef, D. (2013), ‘Colourlovers’. URL http://www.colourlovers.com/.

Moon, P. & Spencer, D. E. (1944), ‘Geometric formulation of classical color harmony’, JOSA 34(1), 46–50.

Morovic, J. & Luo, M. R. (2001), ‘The fundamentals of gamut mapping: A survey’, pp. 283–290.

Morse, B. S., Thornton, D., Xia, Q. & Uibel, J. (2007), Image-based color schemes, in ‘Image Processing, 2007. ICIP 2007. IEEE International Conference on’, Vol. 3, IEEE, pp. III-497.

Morton, J. L. (2013), ‘Basic color theory’. URL http://www.colormatters.com.

Munsell, A. H. (1969), A grammar of color: a basic treatise on the color system, Van Nostrand Reinhold Co.

Nack, F., Manniesing, A. & Hardman, L. (2003), Colour picking: the pecking order of form and function, in ‘Proceedings of the eleventh ACM international conference on Multimedia’, MULTIMEDIA ’03, ACM, New York, NY, USA, pp. 279–282.

Page 13: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Neumann, A. & Neumann, L. (2005), Color style transfer techniques using hue, lightness and saturation histogram matching, in ‘Computational Aesthetics’05’, pp. 111–122.

Neumann, L., Nemcsics, A. & Neumann, A. (2005), Computational color harmony based on coloroid system, in ‘First Eurographics conference on Computational Aesthetics in Graphics’, Eurographics Association, pp. 231–240.

Neves, P. S., Pereira, A. C. & Gonçalves, B. (2013), ‘O conceito de harmonização cromática aplicado em ambientes internos’.

Obrador, P. (2006), Automatic color scheme picker for document templates based on image analysis and dual problem, in ‘Electronic Imaging 2006’, International Society for Optics and Photonics, pp. 607609–607609.

Obrador, P., Saad, M. A., Suryanarayan, P. & Oliver, N. (2012), Towards category-based aesthetic models of photographs, in ‘Proceedings of the 18th international conference on Advances in Multimedia Modeling’, Springer-Verlag, Berlin, Heidelberg, pp. 63–76.

O’Connor, Z. (2010), ‘Colour harmony revisited’, Color Research & Application 35(4), 267–273.

O’Donovan, P., Agarwala, A. & Hertzmann, A. (2011), Color compatibility from large datasets, in ‘ACM SIGGRAPH 2011 papers’, SIGGRAPH ’11, ACM, New York, NY, USA, pp. 63:1–63:12.

Ostwald, W. & Birren, F. (1969), The Color Primer, Van Nostrand Reinhold Company, New York, USA.

Peng, H.-Y., Yang, C. & Wu, C.-H. (2013), ‘DIP final project report - color harmonization’. URL http://www-scf.usc.edu/~hsuanyup/Projects.html.

Phillips, J. (2013), ‘20 color combination tools for designers - SpyreStudios’. URL http://spyrestudios.com/color-combination-tools/.

Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. (1992), Numerical Recipes in C: The Art of Scientific Computing, 2nd ed., Cambridge University Press, New York, NY, USA.

Rasche, K., Geist, R. & Westall, J. (2005), ‘Re-coloring images for gamuts of lower dimension’, Computer Graphics Forum 24(3), 423–432. URL http://dx.doi.org/10.1111/j.1467-8659.2005.00867.x.

Reinhard, E., Ashikhmin, M., Gooch, B. & Shirley, P. (2001), ‘Color transfer between images’, IEEE Comput. Graph. Appl. 21(5), 34–41.

Sato, Y. & Tajima, J. (1995), A color scheme supporting method in a color design system, in ‘SPIE’, Vol. 2411, pp. 25–33.

Sauvaget, C. & Boyer, V. (2010), Harmonic colorization using proportion contrast, ACM, New York, NY, USA, pp. 63–69.

Sauvaget, C., Manuel, S., Vittaut, J.-N., Suarez, J. & Boyer, V. (2010), Segmented images colorization using harmony, in ‘Signal-Image Technology and Internet-Based Systems (SITIS), 2010 Sixth International Conference on’, pp. 153–160.

Sawant, N. & Mitra, N. (2008), Color harmonization for videos, in ‘Computer Vision, Graphics & Image Processing, 2008. ICVGIP ’08. Sixth Indian Conference on’, pp. 576–582.

Seo, S., Park, Y. & Ostromoukhov, V. (2013), ‘Image recoloring using linear template mapping’, Multimedia Tools Appl. 64(2), 293–308. URL http://dx.doi.org/10.1007/s11042-012-1024-1.

Shapira, L., Shamir, A. & Cohen-Or, D. (2009), Image appearance exploration by model-based navigation, in ‘Computer Graphics Forum’, Vol. 28, Wiley Online Library, pp. 629–638.

Sherin, A. (2011), Design Elements, Color Fundamentals: A Graphic Style Manual for Understanding how Color Impacts Design, Rockport Publishers.

Silvestrini, N. & Fischer, E. P. (2013), ‘Colorsystem: Colour order systems in art and science’. URL http://www.colorsystem.com/?lang=en.

Starmer, A. (2005), The Color Scheme Bible: Inspirational Palettes For Designing Home Interiors, Firefly Books, Limited.

Suganuma, K., Sugita, J. & Takahashi, T. (2008), Colorization using harmonic templates, in ‘ACM SIGGRAPH 2008 posters’, SIGGRAPH ’08, ACM, New York, NY, USA, pp. 62:1–62:1.

Sun, M., Sun, Q. & Xu, X. (2009), Color harmony based on fitting functions, in ‘Information Technology and Applications, 2009. IFITA’09. International Forum on’, Vol. 1, IEEE, pp. 165–167.

Sunkavalli, K., Johnson, M. K., Matusik, W. & Pfister, H. (2010a), ‘Multi-scale image harmonization’, ACM Trans. Graph. 29(4), 125:1–125:10.

Sunkavalli, K., Johnson, M. K., Matusik, W. & Pfister, H. (2010b), ‘Multi-scale image harmonization’, ACM Transactions on Graphics (TOG) 29(4), 125.

Tanaka, G., Suetake, N. & Uchino, E. (2010), ‘Color transfer based on normalized cumulative hue histograms’, JACIII 14(2), 185–192.

Tang, Z., Miao, Z. & Wan, Y. (2010), Image composition with color harmonization, in ‘Image and Vision Computing New Zealand (IVCNZ), 2010 25th International Conference of’, pp. 1–8.

Page 14: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies


Tang, Z., Miao, Z., Wan, Y. & Jesse, F. F. (2011a), ‘Colour harmonisation for images and videos via two-level graph cut’, Image Processing, IET 5(7), 630–643.

Tang, Z., Miao, Z., Wan, Y. & Wang, Z. (2011b), ‘Color harmonization for images’, Journal of Electronic Imaging 20(2), 023001–023001.

Tangaz, T. (2006), Interior Design Course: Principles, Practices and Techniques for the Aspiring Designer, Barron’s Educational Series, Incorporated.

Tokumaru, M., Muranaka, N. & Imanishi, S. (2002), Color design support system considering color harmony, in ‘Fuzzy Systems, 2002. FUZZ-IEEE’02. Proceedings of the 2002 IEEE International Conference on’, Vol. 1, pp. 378–383.

von Goethe, J. W. (1971), Goethe’s Color Theory. Translated by Rupprecht Matthei, Van Nostrand Reinhold Co., New York, USA.

Wang, B., Yu, Y. & Xu, Y.-Q. (2011), ‘Example-based image color and tone style enhancement’, ACM Trans. Graph. 30(4), 64:1–64:12.

Wang, B., Yu, Y., Wong, T.-T., Chen, C. & Xu, Y.-Q. (2010), Data-driven image color theme enhancement, in ‘ACM SIGGRAPH Asia 2010 papers’, SIGGRAPH ASIA ’10, ACM, New York, NY, USA, pp. 146:1–146:10.

Wang, C., Zhang, R. & Deng, F. (2009), ‘Image composition with color harmonization’, Chinese Optics Letters 7(6), 483–485.

Wang, L. & Mueller, K. (2008a), Harmonic colormaps for volume visualization, in ‘Proceedings of the Fifth Eurographics / IEEE VGTC conference on Point-Based Graphics’, SPBG’08, Eurographics Association, Aire-la-Ville, Switzerland, pp. 33–39. URL http://dx.doi.org/10.2312/VG/VG-PBG08/033-039.

Wang, L. & Mueller, K. (2008b), Harmonic colormaps for volume visualization, in ‘Proceedings of the Fifth Eurographics/IEEE VGTC conference on Point-Based Graphics’, Eurographics Association, pp. 33–39.

Wang, L., Giesen, J., McDonnell, K. T., Zolliker, P. & Mueller, K. (2008), ‘Color design for illustrative visualization’, IEEE Transactions on Visualization and Computer Graphics 14(6), 1739–1754.

Wang, X., Jia, J., Liao, H. & Cai, L. (2012a), ‘Affective image colorization’, Journal of Computer Science and Technology 27(6), 1119–1128.

Wang, X., Jia, J., Liao, H. & Cai, L. (2012b), Image colorization with an affective word, in ‘Proceedings of the First international conference on Computational Visual Media’, CVM’12, Springer-Verlag, Berlin, Heidelberg, pp. 51–58.

Wang, X., Jia, J., Liao, H. & Cai, L. (2012c), Image colorization with an affective word, in ‘Computational Visual Media’, Springer, pp. 51–58.

Welsh, T., Ashikhmin, M. & Mueller, K. (2002), ‘Transferring color to greyscale images’, ACM Trans. Graph. 21(3), 277–280.

Westland, S., Laycock, K., Cheung, V., Henry, P. & Mahyar, F. (2012), ‘Colour harmony’, JAIC - Journal of the International Colour Association.

Wong, W. (1997), Principles of Color Design, Wiley.

Xue, S., Agarwala, A., Dorsey, J. & Rushmeier, H. (2012), ‘Understanding and improving the realism of image composites’, ACM Trans. Graph. 31(4), 84:1–84:10.

Yao, L., Suryanarayan, P., Qiao, M., Wang, J. Z. & Li, J. (2012), ‘Oscar: On-site composition and aesthetics feedback through exemplars for photographers’, Int. J. Comput. Vision 96(3), 353–383. URL http://dx.doi.org/10.1007/s11263-011-0478-3.

Yeh, C.-H., Ng, W.-S., Barsky, B. A. & Ouhyoung, M. (2009), An esthetics rule-based ranking system for amateur photos, in ‘ACM SIGGRAPH ASIA 2009 Sketches’, SIGGRAPH ASIA ’09, ACM, New York, NY, USA, pp. 24:1–24:1.

Zajonc, A. G. (1976), ‘Goethe’s theory of color and scientific intuition’, American Journal of Physics 44, 327.

Zhang, S.-H., Li, X.-Y., Hu, S.-M. & Martin, R. R. (2011), ‘Online video stream abstraction and stylization’, IEEE Transactions on Multimedia pp. 1286–1294.

Page 15: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies

Bibliography: Intelligent Transfer of Thematic Harmonic Color Palettes

Michel Alves dos Santos

January, 2014

References

Chang, Y., Saito, S. & Nakajima, M. (2002), Color transformation based on the basic color categories of a painting, in ‘ACM SIGGRAPH 2002 Conference Abstracts and Applications’, ACM, New York, NY, USA, pp. 157–157.

Chang, Y., Saito, S., Uchikawa, K. & Nakajima, M. (2005), ‘Example-based color stylization of images’, ACM Trans. Appl. Percept. 2(3), 322–345.

Chang, Y., Uchikawa, K. & Saito, S. (2004), Example-based color stylization based on categorical perception, in ‘Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization’, ACM, New York, NY, USA, pp. 91–98.

Cohen-Or, D., Sorkine, O., Gal, R., Leyvand, T. & Xu, Y.-Q. (2006), Color harmonization, in ‘ACM SIGGRAPH 2006 Papers’, SIGGRAPH ’06, ACM, New York, NY, USA, pp. 624–630. URL http://doi.acm.org/10.1145/1179352.1141933.

Gonzales, R. C. & Wintz, P. (1987), Digital Image Processing (2nd ed.), Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.

Gooch, A. A., Olsen, S. C., Tumblin, J. & Gooch, B. (2005), Color2gray: Salience preserving color removal, in ‘ACM SIGGRAPH 2005 Papers’, SIGGRAPH ’05, ACM, New York, NY, USA, pp. 634–639.

Greenfield, G. R. & House, D. H. (2003), Image recoloring induced by palette color associations, in ‘WSCG’.

Greenfield, G. R. & House, D. H. (2005), A palette driven approach to image color transfer, in ‘Proceedings of the First Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging’, Computational Aesthetics’05, Eurographics Association, Aire-la-Ville, Switzerland, pp. 91–99.

Huang, H., Zang, Y. & Li, C.-F. (2010), ‘Example-based painting guided by color features’, The Visual Computer 26(6-8), 933–942.

Levin, A., Lischinski, D. & Weiss, Y. (2004), ‘Colorization using optimization’, ACM Trans. Graph. 23(3), 689–694.

Li, C. (2010), ‘Example-based painting guided by color features’, The Visual Computer.

Nguyen, C. H., Ritschel, T., Myszkowski, K., Eisemann, E. & Seidel, H.-P. (2012), ‘3D material style transfer’, Computer Graphics Forum (Proc. EUROGRAPHICS 2012).

Pérez, P., Gangnet, M. & Blake, A. (2003), ‘Poisson image editing’, ACM Trans. Graph. 22(3), 313–318. URL http://doi.acm.org/10.1145/882262.882269.

Pouli, T. & Reinhard, E. (2011), ‘Progressive color transfer for images of arbitrary dynamic range’, Computers & Graphics 35(1), 67–80.

Reinhard, E. & Pouli, T. (2011), Colour spaces for colour transfer, in ‘Proceedings of the Third International Conference on Computational Color Imaging’, Springer-Verlag, Berlin, Heidelberg, pp. 1–15.

Reinhard, E., Ashikhmin, M., Gooch, B. & Shirley, P. (2001), ‘Color transfer between images’, IEEE Comput. Graph. Appl. 21(5), 34–41.

Shinagawa, Y. & Kunii, T. L. (1998), ‘Unconstrained automatic image matching using multiresolutional critical-point filters’, IEEE Trans. Pattern Anal. Mach. Intell. 20(9), 994–1010. URL http://dx.doi.org/10.1109/34.713364.

Suganuma, K., Sugita, J. & Takahashi, T. (2008), Colorization using harmonic templates, in ‘ACM SIGGRAPH 2008 posters’, SIGGRAPH ’08, New York, USA, pp. 62:1–62:1.

Wang, B., Yu, Y. & Xu, Y.-Q. (2011), Example-based image color and tone style enhancement, in ‘ACM SIGGRAPH 2011 Papers’, SIGGRAPH ’11, ACM, New York, NY, USA, pp. 64:1–64:12. URL http://doi.acm.org/10.1145/1964921.1964959.

Wang, B., Yu, Y., Wong, T.-T., Chen, C. & Xu, Y.-Q. (2010), ‘Data-driven image color theme enhancement’, ACM Transactions on Graphics (SIGGRAPH Asia 2010 issue) 29(6), 146:1–146:10.

Wang, C.-M. & Huang, Y.-H. (2004), ‘A novel color transfer algorithm for image sequences’, J. Inf. Sci. Eng. 20(6), 1039–1056. URL http://dblp.uni-trier.de/db/journals/jise/jise20.html#WangH04.

Welsh, T., Ashikhmin, M. & Mueller, K. (2002), ‘Transferring color to greyscale images’, ACM Trans. Graph. 21(3), 277–280.

Xiao, X. & Ma, L. (2006), Color transfer in correlated color space, in ‘Proceedings of the 2006 ACM International Conference on Virtual Reality Continuum and Its Applications’, ACM, New York, NY, USA, pp. 305–309.

Yang, C. K. & Peng, L.-K. (2008), ‘Automatic mood transferring between color images’, IEEE Comput. Graph. Appl. 28(2), 52–61.


Page 16: Five Minute Speech: An Overview of Activities Developed in Disciplines and Guided Studies

Bibliography: Fast Procedural Texture Synthesis - An Approach Based on GPU Use

Michel Alves dos Santos

January, 2014

References

Ashikhmin, M. (2001), Synthesizing natural textures, in ‘Proceedings of the 2001 Symposium on Interactive 3D Graphics’, ACM, New York, NY, USA, pp. 217–226.

Dong, Y., Lefebvre, S., Tong, X. & Drettakis, G. (2008), Lazy solid texture synthesis, in ‘Computer Graphics Forum (Proceedings of the Eurographics Symposium on Rendering)’. URL http://www-sop.inria.fr/reves/Basilic/2008/DLTD08.

Galerne, B., Lagae, A., Lefebvre, S. & Drettakis, G. (2012), ‘Gabor noise by example’, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings). URL http://www-sop.inria.fr/reves/Basilic/2012/GLLD12.

Gilet, G. & Dischler, J. M. (2010), Procedural texture particles, in ‘Proceedings of the 2010 ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games’, I3D ’10, ACM, New York, NY, USA, pp. 6:1–6:1. URL http://doi.acm.org/10.1145/1730804.1730978.

Gilet, G., Dischler, J.-M. & Ghazanfarpour, D. (2012), ‘Multi-scale assemblage for procedural texturing’, Comput. Graph. Forum 31(7-1), 2117–2126.

Hewgill, A. & Ross, B. J. (2003), ‘Procedural 3D texture synthesis using genetic programming’, Computers and Graphics 28, 569–584.

Hewgill, A. & Ross, B. J. (n.d.), ‘The evolution of 3D procedural textures’.

Lagae, A. & Drettakis, G. (2011), ‘Filtering solid Gabor noise’, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings). URL http://www-sop.inria.fr/reves/Basilic/2011/LD11.

Lagae, A., Lefebvre, S. & Dutré, P. (2011), ‘Improving Gabor noise’, IEEE Transactions on Visualization and Computer Graphics. URL http://www-sop.inria.fr/reves/Basilic/2011/LLD11.

Lagae, A., Lefebvre, S., Cook, R., DeRose, T., Drettakis, G., Ebert, D., Lewis, J. & Perlin, K. (2010a), ‘A survey of procedural noise functions’, Computer Graphics Forum 29(8), 2579–2600.

Lagae, A., Lefebvre, S., Cook, R., DeRose, T., Drettakis, G., Ebert, D., Lewis, J., Perlin, K. & Zwicker, M. (2010b), State of the art in procedural noise functions, in H. Hauser & E. Reinhard, eds, ‘EG 2010 - State of the Art Reports’, Eurographics, Eurographics Association. URL http://www-sop.inria.fr/reves/Basilic/2010/LLCDDELPZ10.

Lagae, A., Lefebvre, S., Drettakis, G. & Dutré, P. (2009), ‘Procedural noise using sparse Gabor convolution’, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings).

Lefebvre, S., Hornus, S. & Lasram, A. (2010), ‘By-example synthesis of architectural textures’, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings). URL http://www-sop.inria.fr/reves/Basilic/2010/LHL10.

Müller, G., Sarlette, R. & Klein, R. (2007), Procedural editing of bidirectional texture functions, in ‘Proceedings of the 18th Eurographics Conference on Rendering Techniques’, EGSR’07, Eurographics Association, Aire-la-Ville, Switzerland, pp. 219–230. URL http://dx.doi.org/10.2312/EGWR/EGSR07/219-230.

Pietroni, N., Cignoni, P., Otaduy, M. & Scopigno, R. (2010a), ‘Solid-texture synthesis: A survey’, IEEE Comput. Graph. Appl. 30(4), 74–89. URL http://dx.doi.org/10.1109/MCG.2009.153.

Pietroni, N., Cignoni, P., Otaduy, M. A. & Scopigno, R. (2010b), ‘A survey on solid texture synthesis’, IEEE Computer Graphics & Applications.

Ross, B. J. & Zhu, H. (2004), ‘Procedural texture evolution using multi-objective optimization’, New Gen. Comput. 22(3), 271–293. URL http://dx.doi.org/10.1007/BF03040964.

Sperl, G. (2013), ‘Procedural textures for architectural models’.

Turk, G. (2001), Texture synthesis on surfaces, in ‘Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques’, SIGGRAPH ’01, ACM, New York, NY, USA, pp. 347–354. URL http://doi.acm.org/10.1145/383259.383297.

Wei, L.-Y. & Levoy, M. (2000), Fast texture synthesis using tree-structured vector quantization, in ‘Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques’, SIGGRAPH ’00, ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, pp. 479–488. URL http://dx.doi.org/10.1145/344779.345009.

Weidlich, A. & Wilkie, A. (2008), Modeling aventurescent gems with procedural textures, in ‘Proceedings of the Spring Conference on Computer Graphics (SCCG)’, ACM.

Witkin, A. & Kass, M. (1991), Reaction-diffusion textures, in ‘Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques’, SIGGRAPH ’91, ACM, New York, NY, USA, pp. 299–308. URL http://doi.acm.org/10.1145/122718.122750.
