
A Fast Texture Synthesis Technique

using Spatial Neighborhood

Muath Sabha

Philip Dutré

Report CW400, September 2005

Katholieke Universiteit Leuven
Department of Computer Science

Celestijnenlaan 200A – B-3001 Heverlee (Belgium)


A Fast Texture Synthesis Technique

using Spatial Neighborhood

Muath Sabha

Philip Dutré

Report CW400, September 2005

Department of Computer Science, K.U.Leuven

Abstract

In this technical report, we present a texture synthesis technique that stores all possible histories in a memory structure acting as a Finite State Machine (FSM). The histories, which are the most probable spatial neighborhoods for each pixel or patch, are stored in each state of the FSM. After constructing that structure, the synthesis proceeds very quickly and smoothly in raster scan order. We found that our technique works well for a wide variety of textures, ranging from stochastic to regular structured textures, including near-regular textures. We show that our technique is faster, and gives better results, than previous pixel- or patch-based techniques, with minimal user input.

Keywords: Texture synthesis, Image processing, Pixel-based textures, Markov random field, Image-based rendering.
CR Subject Classification: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Color, shading, shadowing and texture.


A Fast Texture Synthesis Technique using Spatial Neighborhood
Technical Report: Department of Computer Science, K.U.Leuven

Muath Sabha∗ Philip Dutré†

September 14, 2005

Abstract

In this technical report, we present a texture synthesis technique that stores all possible histories in a memory structure acting as a Finite State Machine (FSM). The histories, which are the most probable spatial neighborhoods for each pixel or patch, are stored in each state of the FSM. After constructing that structure, the synthesis proceeds very quickly and smoothly in raster scan order. We found that our technique works well for a wide variety of textures, ranging from stochastic to regular structured textures, including near-regular textures. We show that our technique is faster, and gives better results, than previous pixel- or patch-based techniques, with minimal user input.

1 Introduction

Texture synthesis algorithms generate new images, starting from a source texture as input. The new texture looks very similar to, but different from, the source texture. Texture synthesis has a variety of applications in computer graphics, computer vision and image processing, such as texture mapping, completion of missing parts of an image, motion synthesis, film post-production, and compression of images and video sequences. There are two approaches to defining textures:

1. Top-down approach: there is a basic element (texel or texton) and a placement rule that defines how and where elements are placed (e.g. bricks).

2. Bottom-up approach: the texture is a property that can be derived from statistics of small groups of pixels, such as mean and variance (e.g. quartz and grass).

In this report, we present an approach for texture synthesis that can be used both as a pixel-based and as a patch-based technique. Our technique generates high-quality textures, is significantly faster than existing techniques, and provides an intuitive and easy framework for the understanding of texture synthesis in general.

Our algorithm uses a finite state machine (FSM) to encode the appearance of a texture. Each color in the source texture is represented in the FSM as a separate state. Adjacent colors in the source texture are represented as state transitions, dependent on the characteristics of the local neighborhood. Different occurrences of a color in the source texture (i.e. different neighborhoods) result in several different possible state transitions from a single state.

By “executing” the FSM, we can generate a target texture, very similar in appearance to the source texture, which is the goal of texture synthesis in general. This is achieved by examining the neighborhood of the already partially generated target texture, and comparing it to all possible state transitions from the current state in the FSM. Based on this input, a state transition and a color for the next pixel are determined. Therefore, our approach follows the idea of texture synthesis in general: find similar neighborhoods in the source texture in order to determine the next pixel or patch in the target texture. By converting the source texture to a FSM, we obtain a very efficient search mechanism, thereby trading memory for speed.

∗e-mail: [email protected]
†e-mail: [email protected]

Textures have traditionally been classified as either regular (deterministic) textures, consisting of repeated texels, or stochastic textures, without explicit texels [6].

2 Previous Work

Texture synthesis has been an active research topic over the past years. Published techniques can be subdivided into two main categories: procedural texture synthesis, and texture synthesis from examples.

2.1 Models of Texture Synthesis

There are two models for texture synthesis:

Procedural texture synthesis, in which a function returns a color value at any given point in 3D space. This gives a high-quality, continuous and quickly generated texture, but it cannot generate all types of textures. Examples of procedural texture synthesis techniques are reaction-diffusion, a chemical process that builds up patterns and can be simulated to create textures; solid textures (wood, marble); 3D noise functions (water waves, wood grain and marble); cellular noise functions, a variant of 3D noise functions (water waves and stones); and cell cloning [17]. The main limitation of the procedural texture synthesis approach is that creating a new texture requires a programmer to write and test code until the result has the right “look”. For more information, we refer the reader to [4].

Texture synthesis from samples, which depends on a sample of the texture, taken from nature or generated by a person. From that sample, a complete texture is generated. Pyramid-based texture synthesis/feature matching tries to capture spatial frequencies at different scales. Pixel-based texture synthesis assumes that pixel values are chosen by a 2-D stochastic process. Patch-based texture synthesis copies complete patches from the sample texture to the target texture. There are some other, less popular methods such as co-occurrence matrices, discrete Fourier and discrete cosine transforms (orthogonal transforms), fractals and morphology.

Texture synthesis from examples has occupied most of the literature for the last decade. [7] proposed to analyze textures in terms of histograms of filter responses at multiple scales and orientations. [14] were able to substantially improve synthesis results for structured textures at the cost of a more complicated optimization procedure. [2] scrambles the input in a coarse-to-fine fashion, preserving the conditional distribution of filter outputs over multiple scales (jets). After that, more directed attention was given to texture synthesis from examples.

In the following two subsections, we discuss pixel-based and patch-based techniques in more detail.

2.2 Pixel-Based Texture Synthesis

Pixel-based texture synthesis algorithms are generally based on the theory of Markov Random Fields (MRF), a two-dimensional extension of Markov chains [13]. Using MRFs, a texture is modeled as a local and stationary random process: each pixel is classified by a small set of neighboring pixels (local causality) and this classification is the same for all pixels (stationary).


Non-parametric sampling [5] pioneered this approach. They started synthesizing (growing) the desired texture outwards pixel by pixel, from a 3 × 3 pixel seed taken randomly from the input sample image. For each pixel to be synthesized from its already synthesized neighborhood pixels, an approximation to the conditional probability is constructed by computing a Gaussian-weighted, normalized sum of squared differences between the synthesized pixels and the pixel neighborhoods of each candidate in the input texture. A target pixel is then selected from a set of pixels with high conditional probability.
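For concreteness, the following minimal Python sketch shows a Gaussian-weighted, normalized sum of squared differences between a partially synthesized target neighborhood and one candidate neighborhood from the sample. It is an illustrative rendering of this class of distance measure, not code from [5], and the function and parameter names are our own.

import numpy as np

def gaussian_weighted_ssd(target_nbhd, candidate_nbhd, mask, sigma):
    # Gaussian-weighted, normalized SSD between two (H x W x 3) neighborhoods.
    # `mask` marks which target pixels are already synthesized and may be compared.
    h, w = mask.shape
    cy, cx = h // 2, w // 2
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    weights = weights * mask                              # ignore pixels that are still empty
    diff2 = np.sum((np.asarray(target_nbhd, float) - np.asarray(candidate_nbhd, float)) ** 2, axis=2)
    return np.sum(weights * diff2) / np.sum(weights)

A candidate pixel whose neighborhood minimizes this distance (or one drawn from the set of near-minimal candidates) supplies the color of the pixel being synthesized.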

The algorithm proposed by [18] makes excellent use of a fixed neighborhood size by interpreting all possible neighborhoods in the input texture as a set of 1D vectors (each vector is an ordered concatenation of RGB triples) and processes these high-dimensional neighborhood vectors using tree-structured vector quantization (TSVQ). This preprocess results in logarithmic complexity for each best-pixel search and an overall speedup of two orders of magnitude compared to Efros and Leung's algorithm, at the price of some artifacts. The algorithm is furthermore extended to a multi-resolution synthesis pyramid, which allows smaller search neighborhoods with synthesis quality comparable to the use of larger neighborhoods in a single-resolution synthesis.

[1] worked on natural (quasi-repeating pattern) textures. He modifies the algorithm of [18] to encourage verbatim copying of pieces of the input sample, and relies on visual masking to hide the seams between the patches. His algorithm starts by randomizing the pixels from the input sample into the output image, keeping track of their original positions. For each pixel of the L-shaped neighborhood in the output image, he uses its original position in the input image to generate a candidate pixel whose location is appropriately shifted. The algorithm is able to grow patches starting from some position in the input sample and continuing down to the bottom of the image.

[8] offered a more complex, and more elegant, algorithm that handles both texture synthesis and texture transfer. They not only combine and extend the work of [18] and [1], but also generalize to a corresponding pair of images rather than single textures.

Our technique for pixel-based texture synthesis is mostly based on that of [18] and [1].

2.3 Patch-Based Texture Synthesis

[20] with Chaos Mosaic, [15] with Lapped Textures, and [6] with Image Quilting did early work on patch-based texture synthesis. [11] tried to make it real-time. [6] make a number of key insights that motivate their work. They point out that the then-dominant pixel-at-a-time synthesis algorithms, like those of Efros and Leung, Wei and Levoy, and Ashikhmin, all perform excess computation in the common case of structured textures. The extension that Efros and Freeman make to their algorithm for texture transfer (application of a source texture to a target image) is the requirement that each chosen patch satisfy a correspondence map in addition to the texture synthesis requirements.

[21] use a data structure called a jump map. Each pixel in the jump map contains a list of references and probabilities for matching pixels. Using this jump map, they synthesize the target texture in real time by copying blocks from the sample texture.

[3], in “Wang Tiles for Image and Texture Generation”, use Wang tiles in their patch-based texture synthesis technique. Wang tiles are a set of squares in which each edge of each tile is colored; matching colored edges are aligned to tile the plane.

[9] suggest synthesizing a new texture by copying irregularly shaped patches from the sample image into the output image. The patch copying process is performed in two stages. First, a candidate rectangular patch (or patch offset) is selected by performing a comparison between the candidate patch and the pixels already in the output image. Second, an optimal portion of this rectangle is copied to the output image; the portion of the patch to copy is determined using a graph cut algorithm.

[12] view a near-regular texture (NRT) as a statistical distortion of a regular, wallpaper-like congruent tiling, possibly with individual variations in tile shape, size, color and lighting. In their paper, they categorize NRTs (which include a wide range of textures) into three categories: (1) a regular structural layout but irregular color appearance in individual tiles; (2) a distorted spatial layout but topologically regular alterations in color; or (3) deviations from regularity in both structural placement and color intensity. They have done a great job in defining a measure for regularity in textures, and they deal with NRTs as regular textures. The results are very good compared to other patch-based techniques. However, the user spends a long time drawing lattices that define the texels, adjusting misplaced lattice points to form the distorted lattice, extracting the lighting map, and setting the levels of geometry and color regularity (gain). Because of that, we prefer to consider it as texture editing rather than texture synthesis.

[19] worked on feature matching for patch-based texture synthesis. They propose to perform texture synthesis using both salient features and their deformation, taking into consideration that not every pixel is equally important in measuring perceptual similarity.

3 Texture Synthesis with FSM

3.1 Finite State Machines

The study of finite state machines (also called finite state automata) has a long and extensive history in computer science, covering a wide range of topics such as modeling of application behavior, design of hardware digital systems, software engineering, and the study of computation and languages. This section will briefly explain the main principles of FSMs, and will introduce some necessary terminology. For a more elaborate overview of FSMs and their applications, we refer the reader to some books on the subject, e.g. [10, 16]. A FSM can be considered as a model of the behavior of a system, with a limited number of defined conditions or modes, in which transitions between modes occur in response to various inputs and circumstances. Finite state machines consist of 5 main elements:

1. A set of states which define the behavior of the system and which may produce actions. Usually, one state is designated as the initial state of the FSM;

2. Transitions between states model the dynamic behavior of the system. Not all pairs of states have transitions defined between them;

3. Rules or conditions that must be met to allow a transition to occur from one state to the next;

4. Input events which are either externally or internally generated and which possibly may trigger rules that can lead to state transitions;

5. Each state transition might also generate a specific output.

In figure 1, all the elements relating to our use of FSMs are shown.

During a simulation, the behavior of the system is characterized by its current state, and possibly also by the sequence of state transitions the system went through in order to arrive at its current state (the history of the simulation). Combined with specific input events, rules (which depend on the current state) will be evaluated that prescribe the transition from the current state to the next, and which may produce the output that is the result of the simulation.

The nature of these rules subdivides FSMs into two main classes, deterministic and non-deterministic FSMs. In deterministic FSMs, the state transition can be precisely predicted from the current state and all inputs. Non-deterministic FSMs allow a degree of uncertainty or probability in the transition rules, and more than one possible state transition may result from a single input. It may also be the case that multiple inputs are received at various times, meaning that the transition from the current state to the next cannot be known until all the inputs are properly evaluated.

Figure 1: A symbolic representation of the FSM elements.

3.2 The Idea behind using the FSM

The concept of the finite state machine matches the basic question of synthesizing a texture: what is the next pixel to be synthesized?

The states of the FSM are the colors in the sample texture. Each state contains a number of neighborhoods, each of which represents a different history for that color in the sample texture. The transition rule for each state is the next state for each neighborhood, i.e. the state that results after matching the neighborhood read from the target texture against each stored neighborhood. The decision is taken after finding the stored neighborhood that is most similar to the neighborhood from the target texture. We select the reference pixel to be the first pixel next to the pixel to be synthesized; the other pixels are appended to the vector and stored in the state of the reference pixel.

As shown in figure 2, each state holds a number of histories. While synthesizing the texture, at the start of each line we go directly to the state of that pixel, taking with us a vector of pixels. After finding the stored vector that best matches the one in our hands, we decide the color of the synthesized pixel.
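As an illustration of this structure, the sketch below models the FSM as a dictionary that maps each reference color to a state holding the stored histories. This is our own minimal rendering of the description above; the class and field names are not taken from the report.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Color = Tuple[int, int, int]          # an RGB triple acts as the state label

@dataclass
class History:
    neighborhood: List[Color]         # ordered L-shaped neighborhood (the stored history)
    next_color: Color                 # color of the pixel that followed it in the sample

@dataclass
class State:
    histories: List[History] = field(default_factory=list)

FSM = Dict[Color, State]              # one state per distinct color in the sample texture

During synthesis, the state of the reference pixel is looked up in this dictionary, and only its stored histories are compared against the neighborhood read from the target texture.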

3.3 FSM Texture Synthesis

Given the general approach outlined in the previous section, we will now discuss our pixel-based texture synthesis algorithm in more detail.

3.3.1 Constructing the FSM

Each RGB color that occurs in the source texture defines one state in the FSM. State transitions indicate that the respective colors might occur next to each other on the same scanline in the target texture. Thus, we have to examine the source texture, select the desired transitions, and encode these in the FSM. In the spirit of previous texture synthesis techniques, the next pixel to be filled in during synthesis depends on the similarity of the already synthesized neighborhood to a number of possible neighborhoods in the source texture. If a very similar neighborhood is found, the pixel color at the corresponding position of the neighborhood in the source texture is copied to the pixel to be filled in the target texture. During the construction of the FSM, it is therefore necessary to select good candidate neighborhoods that can serve as conditions for state transitions to the next color. In this sense, we limit the search for similar neighborhoods by selecting the possible similar neighborhoods in advance. We select the same L-shaped neighborhood as is used by [18].

Figure 2: A symbolic graph showing how the FSM is constructed from a sample texture. Each L-shaped neighborhood (12 pixels) is stored in the state of its reference pixel. The symbol E denotes an empty pixel.

As shown in figure 2, each state has a number of neighborhoods that serve as conditions for state transitions. While synthesizing the texture, for the first pixel of each line we go directly to the state of that pixel, using the already generated neighborhood (partially containing random seed pixels). After finding the best matching neighborhood, we decide on the next state, and thus the color of the next pixel. Following the FSM state by state, a complete line of the target texture is generated.

3.3.2 Using the FSM

The difference between neighborhoods is calculated using the L2 norm of the RGB components over all corresponding pixels. The smallest difference determines the state transition, and thus the color of the next pixel.

In the synthesis phase, the neighborhood block of the target pixel is compared to all stored blocks in the corresponding state. The targeted state is that of the first pixel to the left of the wanted pixel (P1 in figure 3). The transition to the next state serves two purposes: it gives the color of the target pixel, and it selects the next state, which contains the set of stored blocks against which the next comparison will take place. The next comparison is made between the neighborhood block at the new position of the wanted pixel in the target texture and the set of stored blocks in the current state.

Note that no search over states is needed except for the first pixel of each line; the FSM leads the synthesis to the end of the line in a smooth and fast manner.


Figure 3: An L-shaped locally similar neighborhood with the three reference pixels' positions.

Figure 4: One of the states from figure 2 (light green) extended to include the F1, F2 and F3 sub-states, showing the position of the reference pixel in each stored neighborhood. The round colored squares after the arrows indicate the next state for each neighborhood transition.

4 Extending the Idea

In order to increase the probability of including good neighborhoods among those that are going to be tested, we include every neighborhood that shares at least one direct neighbor pixel with the target neighborhood.

We consider the three closest neighbor pixels to be the most important ones for deciding which pixel comes next. We look at the three neighbors P1, P2 and P3, as shown in figure 3, and find the stored neighborhoods that share at least one of these pixels with the target neighborhood. To make this work, the FSM is extended in the following way. In each state of the extended FSM, there are three sub-states F1, F2 and F3. Each sub-state contains all neighborhoods that have the state's pixel color in the corresponding position P1, P2 or P3, as shown in figure 4. This means that every L-shaped neighborhood in the sample texture is stored at three positions in the FSM.

In our technique, the block size of the locally similar neighborhoods is important. In general, it has to be at least equal to the feature size in a stochastic texture, or at least the texel size in a structured texture.

We have also tried using four reference pixels (the nearest four neighbors, the fourth being the one in the top-right corner). It gave nearly the same results, with an advantage for three references in timing and memory. We say nearly because some textures produced better results, especially in the distribution of features, but overall the quality is nearly the same.

5 Implementation

Our implementation concentrates on the Wei and Levoy L-shaped neighborhood, although we have also tried other shapes, such as a square corner shape. In that case, the synthesis does not proceed in horizontal raster scan order, but along diagonal scan lines. It is not far from the L-shape, but it gives somewhat different results: better looking for some textures, and worse for others. Overall it is worse than the L-shape.

5.1 Theory

Given below is a schematic overview of our algorithm, divided into constructing the FSM and synthesizing the target texture.


Constructing the FSM:
  For each color c in the source texture, add a state S(c) to the FSM
  For all pixels p(x,y), read the neighborhood N(p(x,y)):
    Set the next state to S(Color(p(x,y)))
    Add the block to S(Color(P1)) in the F1 sub-state
    Add the block to S(Color(P2)) in the F2 sub-state
    Add the block to S(Color(P3)) in the F3 sub-state

Synthesis:
  For each line y in the target texture:
    currentState = S(p(0,y))
    For each pixel x in line y:
      Read N(p(x,y)) from the target texture
      Add all N(p) from S(Color(P1)).F1, S(Color(P2)).F2 and S(Color(P3)).F3 to the working set in S(Color(P1))
      For all N(p) in the working set:
        Find the one with the least difference from N(p(x,y))
      nextColor = Color(nextState)
      currentState = nextState
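The Python sketch below fills in this schematic under our own simplifying assumptions: colors are matched as exact RGB tuples, only the single reference pixel P1 (the pixel to the left) is used instead of the three sub-states F1, F2 and F3, and the border is simply seeded with random sample pixels rather than with the starting band and the empty-pixel flag E described in section 5.2. All function names are illustrative, not from the report.

import numpy as np

HALF = 2   # half-width of the causal window; HALF = 2 gives the 12-pixel L-shape of figure 2

def l_shape_offsets(half=HALF):
    # Offsets (dy, dx) of the L-shaped neighborhood: the rows above the pixel
    # plus the pixels to its left on the same row.
    offs = [(dy, dx) for dy in range(-half, 0) for dx in range(-half, half + 1)]
    offs += [(0, dx) for dx in range(-half, 0)]
    return offs

OFFSETS = l_shape_offsets()

def neighborhood(img, y, x):
    # Concatenate the RGB values of the L-shaped neighborhood into one vector.
    return np.array([img[y + dy, x + dx] for dy, dx in OFFSETS], dtype=np.float32).ravel()

def build_fsm(sample):
    # Map each reference color (pixel to the left) to the (neighborhood, next color)
    # pairs observed in the sample texture.
    h, w, _ = sample.shape
    fsm = {}
    for y in range(HALF, h):
        for x in range(HALF + 1, w - HALF):
            ref = tuple(int(v) for v in sample[y, x - 1])
            fsm.setdefault(ref, []).append((neighborhood(sample, y, x),
                                            tuple(int(v) for v in sample[y, x])))
    return fsm

def synthesize(sample, fsm, out_h, out_w, seed=0):
    # Raster-scan synthesis: the state of the left pixel limits the search.
    rng = np.random.default_rng(seed)
    h, w, _ = sample.shape
    pad = HALF
    idx = rng.integers(0, h * w, size=(out_h + 2 * pad, out_w + 2 * pad))
    target = sample.reshape(-1, 3)[idx]            # random seed pixels everywhere
    fallback = next(iter(fsm.values()))
    for y in range(pad, out_h + pad):
        for x in range(pad + 1, out_w + pad):      # the first pixel of each line keeps its seed
            ref = tuple(int(v) for v in target[y, x - 1])
            candidates = fsm.get(ref, fallback)
            nbhd = neighborhood(target, y, x)
            dists = [np.sum((nbhd - cand) ** 2) for cand, _ in candidates]
            target[y, x] = candidates[int(np.argmin(dists))][1]
    return target[pad:out_h + pad, pad:out_w + pad]

Calling build_fsm on a small RGB sample (a NumPy array of shape height × width × 3) and then synthesize(sample, fsm, 128, 128) produces a raster-scan result; matching the quality and timings reported below would additionally require the sub-state extension of section 4 and the color reduction discussed in section 5.5.1.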

5.2 The starting region

First, we tried to start synthesizing the target texture with the random bands suggested by [18]. This worked well for stochastic textures, but for structured and near-regular textures it takes long before the synthesis reaches a steady state. We therefore tried another approach: we start with a random block from the sample texture, of half the neighborhood block size in each dimension, and generate a band from that block in the horizontal direction. From that band, we synthesize the complete target texture.

Something to be taken into consideration when constructing this band is that the neighborhood pixels from the target texture will not be complete; because of that, we use a flag E to indicate whether a pixel is empty or not. Using this band leads to better results than starting with random pixels from the source texture.

This gives us the opportunity to take neighborhood histories from the sample texture very close to the edges, even if the neighborhoods there are not complete. In the synthesis phase, the number of pixels compared decides whether such a neighborhood history is taken into consideration or not.

Figure 5 shows the synthesis procedure for the top band. Note that the comparison is always done with nearly half the total number of pixels. The same holds for the pixels near the left and right edges during the complete synthesis procedure.

Figure 5: (a) the block copied from the sample texture; (b, c, d) synthesizing the upper band with a local neighborhood block size of 5; (e, f) completing the synthesis. E marks an empty pixel, and the arrow indicates the direction of synthesis.

5.3 Choosing the neighborhood block size

The neighborhood block size is very important in our technique, since the complete target feature has to be compared with the corresponding feature from the source. The block size should be at least equal to the texel size, or the feature size. In the checkerboard example in figure 6, the block size in (a) and (b) only needs to be 1 pixel larger than the square size, but in (c) it has to be more than double the square size (a complete feature: a black and a white square).

Figure 6: Checkerboard synthesis (a, b, c).

5.4 Working with different norms and weights

The difference between two histories can be calculated in different ways: the norm can be changed, and the weights given to the pixels can be changed as well.

We first used the L1 norm to find the difference between a stored L-shaped neighborhood history and the one read from the target texture, but it did not give good results. We then used the L2 norm, which noticeably gave better results than the L1 norm. We first tried uniform pixel weighting, in which all pixels have the same weight and the sum is divided by the number of pixels compared. Another way is to multiply each pixel by a factor depending on its distance from the core pixel, giving higher weight to nearer pixels. This gives the best results we obtained, and it is what we use.
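A minimal sketch of this distance-weighted comparison, assuming the neighborhood is stored as an (n × 3) array together with the (dy, dx) offset of every pixel relative to the core pixel. The report does not specify the exact falloff, so the inverse-distance weight used here is only one plausible choice, and the normalization by the total weight mirrors the uniform case described above.

import numpy as np

def weighted_l2(target_nbhd, stored_nbhd, offsets):
    # Distance-weighted squared RGB difference between two neighborhoods.
    t = np.asarray(target_nbhd, dtype=np.float32)           # shape (n, 3)
    s = np.asarray(stored_nbhd, dtype=np.float32)           # shape (n, 3)
    dist = np.array([np.hypot(dy, dx) for dy, dx in offsets], dtype=np.float32)
    weights = 1.0 / dist                                    # nearer pixels weigh more
    per_pixel = np.sum((t - s) ** 2, axis=1)                # squared difference per pixel
    return float(np.sum(weights * per_pixel) / np.sum(weights))

Because the core pixel itself is never part of the L-shaped neighborhood, all offsets are non-zero and the weights stay finite.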

5.5 Working with different color spaces

5.5.1 RGB color space

Most imaging systems use the linear RGB color space. In our technique, we have used linear RGB with an 8-bit integer per channel. The L2-norm difference between two pixels p1 and p2 is calculated as follows:

D(p_1, p_2) = \sqrt{(R_1 - R_2)^2 + (G_1 - G_2)^2 + (B_1 - B_2)^2}

where p1 is a pixel from the neighborhood block of the target texture, and p2 is the pixel at the same position in a stored neighborhood block.

Something important to mention here is that it is sometimes better to work with a limited number of colors. In some textures, especially those stored with a “.jpg” extension, nearly every pixel has a different color, and for some of these textures the technique did not work well. After reducing the number of colors (by indexing) to obtain a good ratio (e.g. each color repeated about 5 times), the results became better. To avoid changing the original texture, we keep the original sample texture and the pixel positions used during synthesis; after the synthesis is finished, we substitute the colors from the original sample texture. This takes more time, but gives better results.

5.5.2 Luminance

Pixel values in accurate grayscale images are based on luminance, while pixel values in accurate color images are based on tristimulus values, as in RGB. We noticed that the results for grayscale textures were better than for colored textures; in other words, results for textures with a small number of colors are better than those with a high number of colors. This led us to use luminance in the difference calculation, while compensating with the original color afterwards. This gives the same results as linear RGB. The luminance was calculated as follows:

Y_{709} = 0.2126 R + 0.7152 G + 0.0722 B

5.5.3 XYZ color space

The CIE system is based on the description of color as a luminance component Y, as described above, and two additional components X and Z. The spectral weighting curves of X and Z have been standardized by the CIE based on statistics from experiments involving human observers. XYZ tristimulus values can describe any color. The magnitudes of the XYZ components are proportional to physical energy, but their spectral composition corresponds to the color matching characteristics of human vision. To convert from the RGB color space to the XYZ color space, where the RGB components are in the nominal range [0.0, 1.0]:

\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = M \begin{bmatrix} r \\ g \\ b \end{bmatrix}

r = \begin{cases} R/12.92 & R \le 0.0445 \\ ((R + 0.055)/1.055)^{2.4} & R > 0.0445 \end{cases}

g = \begin{cases} G/12.92 & G \le 0.0445 \\ ((G + 0.055)/1.055)^{2.4} & G > 0.0445 \end{cases}

b = \begin{cases} B/12.92 & B \le 0.0445 \\ ((B + 0.055)/1.055)^{2.4} & B > 0.0445 \end{cases}

M = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix}
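A small sketch of this RGB-to-XYZ conversion, using the linearization thresholds and matrix quoted above (note that 0.0445 is the threshold stated in this report; the common sRGB specification uses 0.04045). The function names are ours.

import numpy as np

M_RGB_TO_XYZ = np.array([
    [0.412453, 0.357580, 0.180423],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
])

def linearize(c):
    # Undo the gamma encoding of RGB components in the nominal range [0, 1].
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0445, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def rgb_to_xyz(rgb):
    # Convert an (..., 3) array of gamma-encoded RGB values in [0, 1] to XYZ.
    return linearize(rgb) @ M_RGB_TO_XYZ.T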

5.5.4 Luv color space

In 1976, the CIE defined two new color spaces to provide more uniform and accurate models. The first of these is CIE Luv, whose components are L*, u* and v*. The L* component defines the luminance, and u* and v* define the chrominance. CIE Luv is widely used for the calculation of small color differences, especially with additive colors. The CIE Luv color space is defined from CIE XYZ as follows:

L^* = \begin{cases} 116\,(Y/Y_n)^{1/3} - 16 & \text{if } Y/Y_n > \epsilon \\ \kappa\,(Y/Y_n) & \text{if } Y/Y_n \le \epsilon \end{cases}

u^* = 13\, L^* (u' - u'_n)

v^* = 13\, L^* (v' - v'_n)

where

u' = \frac{4X}{X + 15Y + 3Z} \qquad v' = \frac{9Y}{X + 15Y + 3Z}

and u'_n and v'_n have the same definitions as u' and v', but applied to the reference white point:

u'_n = \frac{4X_n}{X_n + 15Y_n + 3Z_n} \qquad v'_n = \frac{9Y_n}{X_n + 15Y_n + 3Z_n}

\epsilon = 0.008856 and \kappa = 903.3 in the actual CIE standard.

In all our transformations, we use the reference white (normalized white) as calculated from the XYZ conversion, i.e. X_n = 0.31681867, Y_n = 0.333333 and Z_n = 0.362918.
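A sketch of the XYZ-to-Luv conversion following the formulas above, with the constants and reference white copied from the text; the function name is ours, and black (X = Y = Z = 0) would need an extra guard against division by zero.

EPS, KAPPA = 0.008856, 903.3
XN, YN, ZN = 0.31681867, 0.333333, 0.362918    # reference white used in the report

def xyz_to_luv(X, Y, Z):
    # Convert XYZ tristimulus values to CIE L*u*v*.
    yr = Y / YN
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > EPS else KAPPA * yr
    denom = X + 15.0 * Y + 3.0 * Z
    denom_n = XN + 15.0 * YN + 3.0 * ZN
    u_prime, v_prime = 4.0 * X / denom, 9.0 * Y / denom
    un_prime, vn_prime = 4.0 * XN / denom_n, 9.0 * YN / denom_n
    return L, 13.0 * L * (u_prime - un_prime), 13.0 * L * (v_prime - vn_prime)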


5.5.5 Lab color space

Like CIE Luv, CIE Lab is a color space introduced by the CIE in 1976. It is also incorporated in the TIFF specification. This color space uses three components: L* is the luminance, and a* and b* are respectively the red/green and yellow/blue chrominances. This color space is also defined with respect to the CIE XYZ color space:

L^* = \begin{cases} 116\,(Y/Y_n)^{1/3} - 16 & \text{if } Y/Y_n > \epsilon \\ \kappa\,(Y/Y_n) & \text{if } Y/Y_n \le \epsilon \end{cases}

a^* = 500 \left[ f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right) \right]

b^* = 200 \left[ f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right) \right]

where

f(t) = \begin{cases} t^{1/3} & \text{if } t > \epsilon \\ (7.787\,t + 16)/116 & \text{if } t \le \epsilon \end{cases}

\epsilon = 0.008856 and \kappa = 903.3 in the actual CIE standard.
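A matching sketch for the XYZ-to-Lab conversion, again with the report's constants and the same reference white; the helper names are illustrative.

EPS, KAPPA = 0.008856, 903.3
XN, YN, ZN = 0.31681867, 0.333333, 0.362918    # same reference white as for Luv

def f(t):
    # Piecewise cube-root function used by the Lab formulas.
    return t ** (1.0 / 3.0) if t > EPS else (7.787 * t + 16.0) / 116.0

def xyz_to_lab(X, Y, Z):
    # Convert XYZ tristimulus values to CIE L*a*b*.
    yr = Y / YN
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > EPS else KAPPA * yr
    a = 500.0 * (f(X / XN) - f(Y / YN))
    b = 200.0 * (f(Y / YN) - f(Z / ZN))
    return L, a, b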

As mentioned for the Luv conversion, the same normalized reference white is used for Lab as for Luv. Figure 7 gives an impression of the results for all color spaces used.

6 Results

We have tested our technique on different types of textures: stochastic textures, irregular textures, near-regular structured textures, and regular textures. As seen in figure 8, it gives good results for most of the tested textures. Figure 8 shows some of the results, with the source texture in the corner of each synthesized texture. Some of the tested textures are taken from previous authors, while others are our own.

The statistics of the results in figure 8 are given in table 1. As the table shows, if the number of colors is large, the number of tested neighborhood histories is small, and so the time taken to synthesize a texture is small, and vice versa. If the size of the sample texture is large, the time taken for synthesizing a texture is also large, taking the number of colors into consideration. One more factor affects the timing: the neighborhood size. If the neighborhood size increases, the time increases as well, because the number of pixels to be tested increases.

The textures in the figure have the same order as their statistics in the table. The neighborhood size for texture (a) is 5, for (b) 9, for (c) 11, for (d) 17, for (e) 17, for (f) 23, for (g) 19 and finally for (h) 25. The size of the sample texture gives an indication of the number of neighborhood histories stored. The timings were taken on an AMD Athlon 1.84 GHz PC.


Figure 7: Comparison of different color spaces: (a) RGB, (b) XYZ, (c) Luv, (d) Lab. In each row, the first texture is synthesized using a neighborhood block size of 11, the second 13, and the third 15.


Figure 8: Some examples of our technique

Texture  Sample size  Prep. time  Synthesis time  Total time  No. of states  N_tested
a        100×100      0.0277      0.59            0.62        9204           1.74
b        150×150      0.0432      2.01            2.05        21609          2.26
c        100×100      0.0383      83.53           83.57       2768           98
d        150×150      0.0261      3.01            3.04        21062          2.51
e        192×192      0.177       15.54           15.72       14460          12.29
f        100×100      0.016       8.048           8.064       9374           2.60
g        100×100      0.0352      8.59            8.63        8665           2.9
h        100×100      0.0173      598.5           598.5       234            227.99

Table 1: Timings for the textures in figure 8. The synthesized textures have a resolution of 400×400 pixels. All times are given in seconds. N_tested is the average number of neighborhoods tested for synthesizing a pixel.


7 Comparisons

The well-known pixel-based techniques are those of Wei and Levoy (W/L) and Ashikhmin. In this section, we compare some of our results with theirs. Table 2 gives some comparisons between the results of our technique and those of Wei and Levoy, with both exhaustive search and the TSVQ acceleration, and of Ashikhmin. Our results are close to Ashikhmin's for stochastic textures, but far better for near-regular textures. It is also clear that our results are better than Wei and Levoy's, with both the exhaustive search and the TSVQ acceleration. Our technique is also much faster than W/L, because of the small number of histories that have to be compared for each generated pixel. It is also faster than Ashikhmin's technique, especially for textures with a high number of colors and small features, where the results are comparable. For textures with big texels, our speed becomes closer to Ashikhmin's, but the results are much better.

The sizes are as taken from the sources. Some of the results are taken from their papers, others are takenfrom their web-pages. The first picture is the sample texture.

8 Discussion

The presented algorithm is able to create high quality textures very fast. The main reason for this speed-up is that we do not perform an exhaustive search for a best match. If we have N pixels in the sample texture and C different colors, then we have on average N/C different neighborhoods per state. If C is high, then the average number of neighborhoods is low and thus very few neighborhoods have to be compared. As a result, the best matching neighborhood is found almost immediately. Even if the number of different colors is low (e.g. 100), then our method is still 100 times faster than an exhaustive search.

Our method also produces higher quality textures. The reason behind this is that our method forces at least one of the immediate neighboring pixels to match exactly. By forcing such a match, a large number of undesirable matches (i.e. neighborhoods with a low L2-error contribution on distant pixels and a high L2-error contribution on nearby pixels) are removed from the search space. Such undesirable matches lead to cuts and discontinuities in the synthesized texture. Using a distance-weighted L2-norm and an exhaustive search would give similar results, but it would be much slower than our method.

Our technique has some limitations as well. The user has to select an optimal block size, which is usually found by synthesizing the texture a number of times with different block sizes. Textures with a large texel size require larger block sizes, which causes longer processing times and greater memory usage. Sample textures with few colors also have a larger number of neighborhoods per state, and thus a longer processing time. However, the results in both cases are still very good.

9 Conclusion and Future Work

We have presented a novel approach for texture synthesis based on pixel-based concepts. Our technique is significantly faster and produces better results compared to existing methods for a wide range of textures. User interaction is limited to selecting a block size.

Future work will focus on improving memory usage and automated block-size determination. We would also like to extend our method to synthesize directly on a 3D surface.

References

[1] Michael Ashikhmin. Synthesizing natural textures. In SI3D '01: Proceedings of the 2001 symposium on Interactive 3D graphics, pages 217–226. ACM Press, 2001.


Sample    Exhaustive W/L    TSVQ W/L    Ashikhmin    Ours

Table 2: Comparisons


[2] Jeremy S. De Bonet. Multiresolution sampling procedure for analysis and synthesis of texture images. In SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 361–368. ACM Press/Addison-Wesley Publishing Co., 1997.

[3] Michael F. Cohen, Jonathan Shade, Stefan Hiller, and Oliver Deussen. Wang tiles for image and texture generation. ACM Trans. Graph., 22(3):287–294, 2003.

[4] D. Ebert, F. K. Musgrave, D. Peachey, K. Perlin, and S. Worley. Texturing and Modeling: A Procedural Approach. Morgan Kaufmann, 2002.

[5] A. Efros and T. Leung. Texture synthesis by nonparametric sampling. In International Conference on Computer Vision, pages 1033–1038, 1999.

[6] Alexei A. Efros and William T. Freeman. Image quilting for texture synthesis and transfer. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 341–346. ACM Press, 2001.

[7] David J. Heeger and James R. Bergen. Pyramid-based texture analysis/synthesis. In SIGGRAPH '95: Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 229–238. ACM Press, 1995.

[8] Aaron Hertzmann, Charles E. Jacobs, Nuria Oliver, Brian Curless, and David H. Salesin. Image analogies. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 327–340. ACM Press, 2001.

[9] Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. Graphcut textures: image and video synthesis using graph cuts. ACM Trans. Graph., 22(3):277–286, 2003.

[10] Mark V. Lawson. Finite Automata. Chapman and Hall/CRC, 2003.

[11] Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Trans. Graph., 20(3):127–150, 2001.

[12] Y. Liu, W. Lin, and J. Hays. Near-regular texture analysis and manipulation. In SIGGRAPH Proceedings of the annual conference on Computer graphics and interactive techniques, pages 368–376. ACM Press, 2004.

[13] R. Paget and I. Longstaff. Texture synthesis via a noncausal nonparametric multiscale Markov random field. IEEE Transactions on Image Processing, 7(6):925–931, 1998.

[14] Javier Portilla and Eero P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. Int. J. Comput. Vision, 40(1):49–70, 2000.

[15] Emil Praun, Adam Finkelstein, and Hugues Hoppe. Lapped textures. In SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 465–470. ACM Press/Addison-Wesley Publishing Co., 2000.

[16] Kenneth H. Rosen. Discrete Mathematics and Its Applications. McGraw-Hill Science/Engineering/Math, 2003.

[17] Greg Turk. Texture synthesis on surfaces. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 347–354. ACM Press, 2001.


[18] Li-Yi Wei and Marc Levoy. Fast texture synthesis using tree-structured vector quantization. In SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 479–488. ACM Press/Addison-Wesley Publishing Co., 2000.

[19] Q. Wu and Y. Yu. Feature matching and deformation for texture synthesis. In SIGGRAPH Proceedings of the annual conference on Computer graphics and interactive techniques, pages 364–367. ACM Press, 2004.

[20] Y. Q. Xu, B. Guo, and H. Shum. Chaos mosaic: Fast and memory efficient texture synthesis. Tech. Rep. MSR-TR-2000-32, Microsoft Research, 2000.

[21] Steve Zelinka and Michael Garland. Towards real-time texture synthesis with the jump map. In EGRW '02: Proceedings of the 13th Eurographics workshop on Rendering, pages 99–104. Eurographics Association, 2002.
