Creating Entirely Textured 3D Models of Real Objects Using Surface Flattening

Zsolt Jankó, Géza Kós and Dmitry Chetverikov

Computer and Automation Research Institute

Budapest, Kende u. 13-17, H-1111 Hungary

and Eötvös Loránd University, Budapest

[email protected]

Abstract. We present a novel method to create entirely textured 3D models of real objects by combining partial texture mappings using surface flattening (surface parametrisation). Texturing a 3D model is not trivial. Texture mappings can be obtained from optical images, but usually one image is not sufficient to show the whole object; multiple images are required to cover the surface entirely. Merging partial texture mappings in 3D is difficult. Surface flattening converts a 3D mesh into 2D space while preserving its structure. Transforming optical images to flattening-based texture maps allows them to be merged based on the structure of the mesh. In this paper we describe a novel method for merging texture mappings using flattening and show results on synthetic and real data.

Key words: 3D modelling, texture mapping, flattening, surface parametrisation.

1. Introduction

Building photorealistic 3D models of real-world objects is a fundamental problem in computer vision and computer graphics. Photorealistic models require precise geometry as well as detailed texture¹ on the surface. Textures allow one to obtain visual effects that are essential for high-quality models.

There are a number of possible applications of precise photorealistic 3D models:

• Digitalising cultural heritage objects (the Michelangelo Project [4], the Pietà Project [9], the Great Buddha Project [15]).
• Surgical simulations in medical imaging.
• E-commerce.
• Architecture.
• Entertainment (movies, computer games).

The 3D model of an object can be acquired by various 3D scanners, most frequently by laser scanners. Although laser scanners provide precise geometry, most of them do not capture texture and colour information, or if they do, the data is not accurate enough. However, textures can be obtained from optical images as well; since an image usually shows the object from only one viewpoint, several images need to be taken. Merging textures in 3D is difficult, but the problem can be converted to 2D by using the technique of surface flattening, also known as surface parametrisation. Surface parametrisation, in particular of triangular meshes, has become an important topic of computational geometry in the last decade, due to its wide range of applications, such as surface fitting, multiresolution modelling and surface re-meshing, as well as texture mapping.

¹ We use the term texture as it is usually used in computer graphics, simply to denote that the surface is covered by an image that has significant variations at fine scale.

This study presents a novel and practical application of surface flattening for merging partial texture mappings of 3D models of real objects. A short pilot version of the technique with a few initial results has been published in [24].

1.1. Problem statement

We address the problem of combining geometric and textural information of an object. The two sources are independent: precise geometry is provided by a 3D laser scanner, while textures are obtained from high-quality photographs. In this paper we assume the following: the surface of the 3D model is represented by a triangular mesh; two or more images of the object are available; the images are precisely registered to the 3D model. The latter requirement can be fulfilled using the method of Jankó et al. [18].

There are several ways to paste texture onto the surface of an object. Graphics systems usually require two pieces of information: a texture map and the texture coordinates. The texture map is the image we paste, while the texture coordinates specify where it is mapped to. Texture maps are usually two-dimensional, although in recent years the use of 3D textures has also become common. In this paper we deal only with 2D textures.

The geometric relation between the 3D surface and the 2D texture map can be described by a transformation. The texture mapping function M assigns a 2D texture point u to a 3D model point X as u = M(X). The relation between the surface and the texture map is illustrated in figure 1.

It is straightforward to choose a photo of an object as a texture map. An optical image of an object contains the colour information we need to paste onto the surface. The projection of a 3D surface point X can be described in matrix form as ṽ ≃ P X̃, where P is the 3 × 4 projection matrix and ˜ denotes homogeneous coordinates [5]. In this way the texture mapping function is a simple projective transformation.
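To make the notation concrete, here is a minimal NumPy sketch of this projective texture mapping; the matrix P below is a made-up placeholder, whereas in practice it comes from the image-to-model registration:

import numpy as np

def project(P, X):
    # Homogeneous projection v ~ P X, followed by dehomogenisation.
    X_h = np.append(X, 1.0)       # 3D point in homogeneous coordinates
    v_h = P @ X_h                 # homogeneous image point
    return v_h[:2] / v_h[2]       # pixel coordinates u = M(X)

# Hypothetical 3 x 4 projection matrix (intrinsics times extrinsics).
P = np.array([[800.0,   0.0, 320.0, 10.0],
              [  0.0, 800.0, 240.0,  5.0],
              [  0.0,   0.0,   1.0,  2.0]])
u = project(P, np.array([0.1, -0.2, 3.0]))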

The difficulty of image-based texturing originates from the problem of occlusion, which yields uncovered areas on the surface. An optical image shows the object from only a single view. Therefore, it contains textural information only about the visible parts of the surface; the occluded areas are hidden from that view. (See figure 2b.) This insufficiency can be reduced – in optimal cases eliminated – by combining several images.


Fig. 1. Relation between the 3D model and the 2D texture map via the mapping function M

Fig. 2. Textures cover only parts of the model. (a) input images; (b) partially textured models

Figure 2a shows two images of the globe. These images can be considered as texture maps. Visualising multiple textures on the same surface is not trivial. Visualisation toolkits usually apply interpolation techniques to colour the surface continuously. However, interpolation cannot be applied between different texture maps, hence handling multiple textures yields a gap appearing between their borders. The result is the same when textures are merged by appending the second image to the first one and modifying the second mapping function with a translation.

We propose another way to create a texture map based on optical images. Flattening the surface mesh of an object provides an alternative two-dimensional parametrisation. The advantage is that this parametrisation preserves the topology of the three-dimensional mesh. A texture that entirely covers the flattened 2D surface also covers the original 3D surface.

1.2. Previous work

There have been a number of related studies aiming at building photorealistic 3D models. Mayer et al. [8] capture multiresolution textures and paste them onto objects with large flat surfaces, e.g., buildings. They split rectangular surfaces until each portion can be uniquely related to the texture map that shows it at the highest resolution. A quadtree representation is used to store the multiresolution textures. The disadvantage of this method is that it handles only polyhedra, while we seek a solution for arbitrary surfaces.

Yemez et al. [19] discuss active and passive methods for creating photorealistic 3D models of real objects. They use a triangulated 3D mesh and subdivide each triangle into particles. For each particle the best colour is determined from the full set of images from which the particle is visible. They associate a matrix with each triangle as a texture map. These matrices are combined into a single large texture map. This method cannot guarantee the continuity of neighbouring texture maps; in addition, it assumes that the triangles have very similar sizes, which is a strong constraint. Papers [9] and [10] present similar techniques for combining multiple texture mappings.

It is not absolutely necessary to stitch partial texture mappings into a full texture map. Multiple texture mappings can also be combined during the rendering phase, as presented by Debevec et al. [2]. Their method is called view-dependent texture mapping. In each step of rendering, for each point of the scene, the image with the viewing angle closest to that of the rendered view is used. To avoid seams, weights are used to smooth the rendering. The problem is that these computations during rendering are extremely costly. Debevec et al. handle only flat surfaces, where one can use a single weight for each face of the model instead of computing it for each pixel. Although this accelerates the method, using single weights is not applicable to complex surfaces.

Several studies examine the problem of texture mapping by using surface flattening. Zigelman et al. [11] discuss how to flatten arbitrary surfaces while preserving the structure and keeping distortions minimal. These properties of flattening are of crucial importance from the point of view of texture mapping. Papers [6, 16, 7, 12] also examine surface parametrisation from this aspect, although none of them discusses the problem of merging multiple texture mappings.

The problem of merging texture mappings also appears in texture synthesis. Papers [20, 22, 17] describe techniques for synthesising 2D or 3D texture maps from growable texture patterns. They use image processing methods for creating smoothed textures, as we also do, but they apply repetitive patterns to produce artificial textures, while we merge real ones.

In this paper we combine the techniques of surface flattening and texture merging. The contributions of the paper are as follows: (1) multi-image texturing of 3D objects is discussed; (2) a novel method for merging multiple texture maps using surface flattening is described; (3) results on synthetic and real data are shown.


2. Multi-image texturing

As discussed above, it is essential to merge multiple texture maps into one, but this task is not trivial. Considering the complexity of 3D texture merging, we have decided to flatten the 3D mesh into a plane. Flattening the surface of an object yields a two-dimensional parametrisation, with the advantage of preserving the topology of the mesh. Consequently, a texture that entirely covers the flattened 2D surface also covers the original 3D surface.

Figure 3 illustrates the relation between the 3D surface, the optical image and the flattened surface. Transforming optical images to flattened surfaces provides partial texture maps. (See figure 4.) Since flattening preserves the structure of the 3D mesh, these texture maps can be merged, in contrast to the original optical images.

Fig. 3. Relation between the 3D model, the optical image and the flattened surface. T, Ti, Tf: corresponding triangles; F: flattening; P: projection; A: affine transformation

Fig. 4. Partial and merged texture maps

In this section we first give a brief overview of surface flattening, based on the paper of G. Kós and T. Várady [16]; then the methods of mapping optical images onto flattened surfaces and of merging texture maps are described.

2.1. Surface flattening

We introduce the problem as follows. Let M(V, T) be an oriented, 2-manifold triangular mesh in 3D space, where V ⊂ R³ is the set of vertices and T the set of triangles (vertex triplets). For an arbitrary function π : V → R², the map of M by π can be defined as

    π(M) = ( {π(v) : v ∈ V}, {(π(v1), π(v2), π(v3)) : (v1, v2, v3) ∈ T} ).

If the triangles in π(M) have the same orientation and have no common points (except the shared vertices and edges), then the map π is called a parametrisation of M. We may also call the map π(M) the flattened mesh or planar map of M.

Solutions published in the literature use different principles and require different techniques. Many methods optimise various energy functions – based on physical models – to minimise the geometric distortion in some sense; the price is that the optimisation is highly non-linear. Other methods ignore some geometric attributes in order to obtain a simpler optimisation problem. The so-called convex combination approach is used by many of these techniques, including the algorithm [16] applied in our system. A similar algorithm has been published in [13]. We also refer to the survey of Floater and Hormann [23], which summarises various algorithms for surface parametrisation.

Fig. 5. Filling holes (illustration). (a) original triangular mesh; (b) hole filling; (c) filling up a concave region

In this section we focus only on the method used in our experiments. The algorithm is equipped to handle various topological cases. Two important pre-processing steps are performed. The first one is called hole filling: boundary loops are filled with supplementary triangles – except one loop, which will be the perimeter of the flattened mesh. In this way the mesh becomes homeomorphic to a disc. (See figure 5.) In case of higher genus, the handles of the surface must be cut to obtain a flattenable (genus-0) manifold.

In the second pre-processing step, concave boundary portions are covered with further supplementary triangles. This can improve the quality of the flattened mesh, particularly if the original boundary is zigzagged. To ensure the smoothness of the boundary, the mesh can be extended by triangle strips. (See figure 6.)

Fig. 6. Half toy giraffe. (a) 3D mesh; (b) convexified, decimated mesh; (c) flattened, decimated mesh with supplementary triangles; (d) flattened mesh without supplementary triangles

In general it would be difficult to guarantee that the supplementary triangles – used to fill holes and concave boundary segments – intersect neither each other nor the original surface. However, the flattening algorithm also transforms self-intersecting surfaces into consistent planar triangulations. Therefore, no special care is needed for such accidental self-intersections.

After the pre-processing steps, boundary vertices are mapped to the vertices of a convex polygon in the plane. Our implementation constructs the side vectors of the boundary polygon in three steps. The first step is normalising the angles. Assume that the boundary vertices are v1, v2, . . . , vk, and the sums of the triangle angles at those vertices are α1, . . . , αk, respectively. The normalised angles are defined as

    βi = (k − 2)π αi / (α1 + · · · + αk).

Second, planar vectors u1, . . . , uk are chosen in the plane such that ‖ui‖ = ‖vi − vi+1‖ and ∠(ui−1, ui) = βi for all i = 1, 2, . . . , k. Finally, the vectors ui are adjusted so as to sum up to zero, setting the side vectors to

    si = ui − (‖ui‖ / (‖u1‖ + · · · + ‖uk‖)) (u1 + · · · + uk).
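The following NumPy sketch illustrates the three steps, assuming the ordered boundary vertices and their triangle-angle sums have already been extracted from the mesh; the turning-angle bookkeeping in step 2 is a simplified interpretation of the construction:

import numpy as np

def boundary_side_vectors(v, alpha):
    # v: (k, 3) ordered boundary vertices; alpha: (k,) angle sums.
    k = len(v)
    # Step 1: normalised angles beta_i, summing to (k - 2) * pi.
    beta = (k - 2) * np.pi * alpha / alpha.sum()
    # Step 2: planar vectors with |u_i| = |v_i - v_{i+1}|, the direction
    # turning by pi - beta_i between consecutive sides.
    lengths = np.linalg.norm(v - np.roll(v, -1, axis=0), axis=1)
    theta = np.cumsum(np.pi - beta)
    u = lengths[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])
    # Step 3: adjust the vectors so the polygon closes (they sum to zero).
    return u - (lengths / lengths.sum())[:, None] * u.sum(axis=0)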


To find parameters for the interior vertices, a weight wij is defined for each vertex pair (vi, vj) ∈ V × V such that wij is positive if and only if vi and vj are adjacent. Then the map of each interior vertex vi is chosen such that

    π(vi) = ( Σj wij π(vj) ) / ( Σj wij ).

Note that this is a sparse system of linear equations, which can be solved using various iterative methods. Floater and Hormann [23] proved that the solution is always unique and, if the boundary polygon is convex, it provides a consistent parametrisation.
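A direct (non-iterative) alternative is a sparse linear solve; a sketch, assuming the adjacency lists, the weights and the fixed boundary positions are given:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def flatten_interior(interior, adj, boundary_pos, w):
    # interior: interior vertex ids; adj[i]: neighbours of vertex i;
    # boundary_pos[j]: fixed 2D position of boundary vertex j;
    # w[(i, j)]: positive weight of each adjacent pair.
    idx = {i: r for r, i in enumerate(interior)}
    A = sp.lil_matrix((len(interior), len(interior)))
    b = np.zeros((len(interior), 2))
    for r, i in enumerate(interior):
        wsum = sum(w[(i, j)] for j in adj[i])
        A[r, r] = 1.0
        for j in adj[i]:
            if j in idx:                     # unknown interior neighbour
                A[r, idx[j]] -= w[(i, j)] / wsum
            else:                            # known boundary neighbour
                b[r] += (w[(i, j)] / wsum) * np.asarray(boundary_pos[j])
    A = A.tocsr()
    xy = np.column_stack([spla.spsolve(A, b[:, 0]), spla.spsolve(A, b[:, 1])])
    return {i: xy[r] for i, r in idx.items()}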

Several different constructions exist for the weights wij. We used the so-called mean value coordinates: if (vi, vj, vl) and (vi, vj, vm) are the two triangles adjacent to the edge (vi, vj), then wij is defined as

    wij = (1 / ‖vi − vj‖) ( tan(∠(vj, vi, vl)/2) + tan(∠(vj, vi, vm)/2) ).
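A sketch of this weight in NumPy (vertices are 3D coordinate arrays; the angles are measured at vi):

import numpy as np

def mean_value_weight(vi, vj, vl, vm):
    # Angle at vi between the edges (vi, a) and (vi, b).
    def angle(a, b):
        e1, e2 = a - vi, b - vi
        c = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
        return np.arccos(np.clip(c, -1.0, 1.0))
    t = np.tan(angle(vj, vl) / 2.0) + np.tan(angle(vj, vm) / 2.0)
    return t / np.linalg.norm(vi - vj)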

Usually, complex meshes cannot be flattened at once; they need to be cut before flattening. We have chosen to cut by a plane, since the cutting plane can easily be determined manually: three points selected on the surface define a plane. The 3D mesh is cut in half by this plane, then the halves are flattened and textured separately. The problem of re-merging the textured halves will be discussed later. Figure 7 shows an example of using the algorithm in our experiments.

Fig. 7. Mesh of Frog (a) and its parametrisation (b)
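Determining the cutting plane from the three selected points is elementary; a small sketch, with the sign of the plane equation assigning each vertex to a half:

import numpy as np

def cutting_plane(p1, p2, p3):
    # Unit normal n and offset d of the plane through p1, p2, p3.
    n = np.cross(p2 - p1, p3 - p1)
    n = n / np.linalg.norm(n)
    return n, np.dot(n, p1)

def half_of(vertex, n, d):
    # +1 or -1 depending on which side of the plane the vertex lies.
    return 1 if np.dot(n, vertex) >= d else -1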

2.2. Combining multiple textures

After flattening the 3D surface, we need to convert our optical images to flattened texture maps. In contrast to the projection matrix, this map is complicated, since neither the transformation of flattening nor the relation between the optical image and the texture map can be represented by a matrix. We use the mesh representation of the 3D surface for the conversion: given a triangle of the mesh, the vertices of the corresponding triangles are known both in the optical image and on the flattened surface. Let us denote these triangles by Ti in the optical image and by Tf on the flattened surface (figure 3). One can easily determine the affine transformation between Ti and Tf, which gives the correspondence between the points of the triangles. Note that the affine transformation is unique for each triangle pair. The algorithm of the conversion is the following:

Algorithm: Convert image to flattened texture map

For each triangle T of the 3D mesh
    If T is completely visible
        Projection: Ti ← P · T
        Flattening: Tf ← FLAT(T)
        Affine transformation: A ← AFFINE(Tf, Ti)
        For each point uf ∈ Tf:
            Colourf(uf) ← Colouri(A · uf)
        End for
    End if
End for
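In sketch form, the AFFINE step can be implemented as a small linear solve, since the three vertex correspondences determine the 2D affine map (FLAT stands for the flattening of the previous section):

import numpy as np

def affine_from_triangles(Tf, Ti):
    # Affine map A with [x, y, 1] @ A = image point, computed from the
    # three vertex correspondences of the flattened triangle Tf (3x2)
    # and the image triangle Ti (3x2).
    src = np.hstack([Tf, np.ones((3, 1))])   # homogeneous 2D vertices
    return np.linalg.solve(src, Ti)          # 3x2 matrix A

def apply_affine(A, uf):
    return np.append(uf, 1.0) @ A            # image point for uf in Tf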

The conversion of the optical images provides partial texture maps. To obtain the entirely textured surface, one needs to merge these texture maps into a mosaic. Since flattening preserves the topology of the mesh, the texture coordinates of the partial texture maps are consistent. The only problem is that of the overlapping areas, where the texture maps must be smoothly interpolated. The size of the overlapping areas depends on the images of the object. If the camera movement between two captures is small, the overlap is large, but many images need to be acquired to cover the entire surface. Conversely, large camera movement results in small overlaps, but requires only a few images to be taken. To guarantee continuity, points close to the border of a partial texture map must appear in another texture map as well. In other words, points at the borders should be well visible from at least two views.

We have tested four methods for handling the overlapping areas. The essence of the first method is to decide for each triangle which view it is best visible from. The measure of visibility of a 3D point can be the scalar product of the normal vector and the unit vector pointing towards the camera. The measure of visibility of a triangle is the average visibility of its three vertices. For each triangle the texture is given by the texture map that provides the greatest visibility. The main problem of this method is the seams appearing at the borders of the texture maps. (See figure 8a.)

The second method tries to eliminate this drawback by blending the views. The method collects all the views the given triangle is entirely visible from. All of these views are used for setting the colours of the triangle points. The measure of visibility can be used to set a weight for each view: the better visible the point is from a view, the greater the weight. Note that visibility is calculated for each point separately. For this, the 3D surface point and its normal vector are estimated from the 3D coordinates and the normal vectors of the triangle vertices, respectively.
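A sketch of the per-point visibility weight and the resulting blend, under the assumption that the point's position and normal have already been interpolated from the triangle vertices:

import numpy as np

def visibility(point, normal, camera_centre):
    # Scalar product of the unit normal and the unit view direction.
    n = normal / np.linalg.norm(normal)
    d = camera_centre - point
    return np.dot(n, d / np.linalg.norm(d))

def blend_colours(colours, weights):
    # Visibility-weighted average of the colours from the usable views.
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    return (w[:, None] * np.asarray(colours, dtype=float)).sum(0) / w.sum()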

The third method applies the multiresolution spline technique [1] for blending the images. It uses the image pyramid data structure to merge the images at different resolution levels and to eliminate the seams between the borders of the texture maps. The same weighting function was applied as in the second method.

The fourth method uses the gradient-domain image stitching (GIST) technique of Levin et al. [21]. In this approach the output image is computed by an optimisation process that uses image gradients. In [21] two different methods, GIST1 and GIST2, are described; however, the differences between their results are insignificant. We have chosen to implement GIST2, where the derivative images of the partial maps are merged and then deconvolved to obtain the entire texture map. The weighting function is the same as in our second method.

All methods use the RGB representation of colours, and the colour channels are handled separately. We have tested different colour representations (CIE XYZ, CIE L*a*b*, CIE L*u*v*, HSL and HSV) as well, but they did not improve the results. Figure 8 shows the results of the merging methods. Quantitative evaluation and analysis of the different techniques can be found in the next section.

In section 2.1 we mentioned that complex meshes need to be cut into pieces before flattening. The pieces are textured separately; however, re-merging them yields seams at the borders. These artefacts can be eliminated by alpha blending, as follows.

Colours are usually represented by triples: RGB, CMY, etc. A fourth channel can be used to represent the transparency of a pixel. This is termed the alpha channel, and it has a value between 0 and 1. A value of 0 means that the pixel is fully transparent, while a value of 1 means that it is fully opaque. Varying the alphas of pixels close to the borders makes the discontinuities disappear. Due to the blending methods discussed above, using only the values 0 and 1 is completely sufficient. Cutting the mesh should be performed so as to have overlapping areas, that is, points and triangles close to the borders should belong to both halves. These overlapping areas are covered by the same texture on both halves. The alpha values of the pixels in the overlapping areas are set to 0 in the first half and to 1 in the other. This technique guarantees the continuity of the texture in the re-merged model.
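With such binary alphas, re-merging the two textured halves reduces to a simple per-pixel composite; a sketch over RGB arrays:

import numpy as np

def remerge_halves(rgb1, alpha1, rgb2):
    # In the overlap, alpha1 is 0 (half 1 transparent) while half 2 is
    # opaque, so the composite switches between the two textures without
    # a visible discontinuity.
    a = alpha1[..., None]
    return a * rgb1 + (1.0 - a) * rgb2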

3. Experiments

The proposed merging method has been tested on both synthetic and real data. The Earth dataset consists of 8 images of the globe and a synthetic 3D model. The images were obtained by the script of John Walker [3], which also gives the precise projection matrices, i.e., the texture mapping functions. The 3D mesh was cut in half. The two halves were flattened and textured separately, and finally merged by alpha blending. Figure 9 shows two of the input images, the 3D mesh, the merged texture maps of the two halves and snapshots of the textured 3D model.

Fig. 8. Difference between the merging methods. (a) first method; (b) second method, blending; (c) third method, multiresolution spline; (d) fourth method, GIST

In contrast to the Earth dataset, the Bear, the Frog and the Shell are real datasets. About 5-6 images of each object were taken with a digital camera, and the 3D models were obtained by a 3D laser scanner. The images are registered to the 3D models by the method of Jankó et al. [18]. Similarly to the Earth dataset, the 3D models were cut in half; the halves were handled separately and merged only at the end of the process. The results can be seen in figures 10, 11 and 12.

Fig. 9. Earth result: input images, 3D mesh, texture maps and the textured 3D model

Fig. 10. Bear result: input images, textureless 3D model, texture maps and the textured 3D model

Fig. 11. Frog result: input images, textureless 3D model, texture maps and the textured 3D model

Fig. 12. Shell result: input images, textureless 3D model, texture maps and the textured 3D model

Quantitative evaluation and comparison of the different merging methods is difficult. Usually the best devices for measuring the quality of the resulting images are human eyes. One may use a universal image quality index [14] to measure image quality, but the correlation between the quality of the output image and the accuracy of the merging method is not obvious.

We have chosen another approach to calculate the difference between the merging methods. Similarly to [21], two criteria are considered: first, the regions of the output mosaic image should be as similar to the corresponding input regions as possible; second, the seams between the parts should be invisible. For this reason, two different error functions are calculated as follows.

Consider two overlapping input images, I1 and I2, and denote the result of merging by Ir. Let R1 and R2 be the regions of images I1 and I2, and W1 and W2 the weighting functions of the regions R1 and R2, respectively. (Note that for each u ∈ R1 ∪ R2, 0 ≤ Wi(u) ≤ 1 and W1(u) + W2(u) = 1.) The error function of colours is defined by the weighted sum of squares:

    Ec(I1, I2, W1, W2, Ir) = Σi=1,2 (1/|Ri|) Σu∈Ri Wi(u) ‖Ii(u) − Ir(u)‖²,

where I(u) is the RGB colour triplet of image I at point u. The visibility of seams can best be measured in the gradient domain. The corresponding error function is defined using the derivatives of the images:

    Ed(I1, I2, W1, W2, Ir) = Σi=1,2 (1/(2|Ri|)) Σu∈Ri Wi(u) ( ‖∂Ii(u)/∂x − ∂Ir(u)/∂x‖² + ‖∂Ii(u)/∂y − ∂Ir(u)/∂y‖² ).
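The two error functions translate directly into NumPy; the sketch below takes boolean region masks Ri and weight images Wi, and approximates the derivatives with finite differences:

import numpy as np

def colour_error(images, masks, weights, Ir):
    # Ec: weighted mean squared RGB difference inside each region.
    err = 0.0
    for I, R, W in zip(images, masks, weights):
        d = ((I - Ir) ** 2).sum(axis=-1)     # per-pixel squared difference
        err += (W[R] * d[R]).sum() / R.sum()
    return err

def derivative_error(images, masks, weights, Ir):
    # Ed: the same measure computed on the x and y derivatives.
    gxr, gyr = np.gradient(Ir, axis=1), np.gradient(Ir, axis=0)
    err = 0.0
    for I, R, W in zip(images, masks, weights):
        gx, gy = np.gradient(I, axis=1), np.gradient(I, axis=0)
        d = ((gx - gxr) ** 2 + (gy - gyr) ** 2).sum(axis=-1)
        err += (W[R] * d[R]).sum() / (2 * R.sum())
    return err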

We have measured the errors of each merging method on three datasets. (See table 1.) Considering the derivatives, the simple method performs worst, since it does not smooth the seams between partial textures. Both the multiresolution spline and the GIST technique use gradient images to obtain an optimal stitching without visible seams, which results in small errors in the derivatives. However, the drawback of using gradient images is that the result is determined only up to an intensity shift. (For colour images, the intensity shift is different for each colour channel.) This can be corrected, for instance, by calculating the median of the first input image and of the corresponding part of the output mosaic image (as in [21]), but the result is not perfect: the errors in colours are large.

Tab. 1. Colour image (Ec) and derivative image (Ed) errors

Merging        Bear                  Frog                  Shell
               Ec        Ed(·10⁵)    Ec        Ed(·10⁵)    Ec        Ed(·10⁵)
Simple         0.01248   5.03326     0.01971   7.42789     0.04391   13.10525
Blending       0.00773   2.27690     0.01190   3.44684     0.02709   4.48553
Mres. spline   0.01257   1.44364     0.16091   1.61872     0.08954   2.40960
GIST           0.03076   1.45559     0.08187   1.32306     0.09373   2.06448

To estimate the running time, we have tested the method while varying the sizes of the textures and the models. Tables 2, 3 and 4 show the running times for the Bear, the Frog and the Shell datasets, respectively. It is clear that applying the multiresolution spline makes the method slower. As far as speed is concerned, the method is more sensitive to changing the model size than to changing the texture size. Since the multiresolution spline technique does intensive work on the images, its speed also depends significantly on the texture size. Note that we decided to omit the running times of the GIST method: the results of the multiresolution spline and the GIST method are similar, but GIST is slower, so we found the detailed testing of the former sufficient.

Tab. 2. Running time for Bear dataset

Merging        Texture size   Points in mesh   Time (sec)
Blending       1024 × 1024    10386            74
Blending       1024 × 1024    3124             23
Blending       512 × 512      10386            61
Blending       512 × 512      3124             10
Mres. spline   1024 × 1024    10386            103
Mres. spline   1024 × 1024    3124             51
Mres. spline   512 × 512      10386            68
Mres. spline   512 × 512      3124             17

Tab. 3. Running time for Frog dataset

Merging        Texture size   Points in mesh   Time (sec)
Blending       1024 × 1024    5341             27
Blending       1024 × 1024    3203             23
Blending       512 × 512      5341             17
Blending       512 × 512      3203             10
Mres. spline   1024 × 1024    5341             57
Mres. spline   1024 × 1024    3203             54
Mres. spline   512 × 512      5341             24
Mres. spline   512 × 512      3203             17


Tab. 4. Running time for Shell dataset

Merging        Texture size   Points in mesh   Time (sec)
Blending       1024 × 1024    5833             39
Blending       1024 × 1024    2302             30
Blending       512 × 512      5833             24
Blending       512 × 512      2302             10
Mres. spline   1024 × 1024    5833             73
Mres. spline   1024 × 1024    2302             64
Mres. spline   512 × 512      5833             32
Mres. spline   512 × 512      2302             18

4. Conclusion

We have presented a method to solve the problem of combining partial texture maps. Texture maps of 3D models can easily be obtained with digital cameras. However, these mappings cover only parts of the model, and merging the textures is not trivial. The novelty of our approach consists in using surface flattening to convert the problem of texture merging from 3D to 2D. Optical images are transformed to flattened texture maps, and since flattening provides a common system of texture coordinates, the flattening-based maps can be merged into a full texture map. Test results with synthetic and real data demonstrate the feasibility of the proposed method.

Acknowledgements. The authors thank Gábor Renner for his comments on this paper. This work was supported by the EU Network of Excellence MUSCLE (FP6-507752). Géza Kós is also supported by the Bolyai Grant of the Hungarian Academy of Sciences.

References

[1] P. J. Burt, E. H. Adelson, A multiresolution spline with application to image mosaics, ACM Trans. Graph. 2 (4) (1983) 217–236.
[2] P. E. Debevec, C. J. Taylor, J. Malik, Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach, Computer Graphics 30 (Annual Conference Series) (1996) 11–20.
[3] J. Walker, Satellite data, URL: http://www.fourmilab.ch/cgi-bin/uncgi/Earth.
[4] M. Levoy et al., The digital Michelangelo project, ACM Computer Graphics Proceedings, SIGGRAPH (2000) 131–144.
[5] R. Hartley, A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2000.
[6] S. Haker, S. Angenent, A. Tannenbaum, R. Kikinis, G. Sapiro, M. Halle, Conformal surface parameterization for texture mapping, IEEE Transactions on Visualization and Computer Graphics 6 (2) (2000) 181–189.
[7] D. Piponi, G. Borshukov, Seamless texture mapping of subdivision surfaces by model pelting and texture blending, Proc. 27th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley, 2000, pp. 471–478.
[8] H. Mayer, A. Bornik, J. Bauer, K. Karner, F. Leberl, Multiresolution texture for photorealistic rendering, Proc. 17th Spring Conference on Computer Graphics, IEEE Computer Society, 2001, p. 109.
[9] F. Bernardini, I. M. Martin, J. Mittleman, H. Rushmeier, G. Taubin, Building a digital model of Michelangelo's Florentine Pietà, IEEE Computer Graphics & Applications 22 (1) (2002) 59–67.
[10] V. Sequeira, J. G. Gonçalves, 3D reality modelling: Photo-realistic 3D models of real world scenes, Proc. 1st International Symposium on 3D Data Processing, Visualization & Transmission, 2002, pp. 776–783.
[11] G. Zigelman, R. Kimmel, N. Kiryati, Texture mapping using surface flattening via multi-dimensional scaling, IEEE Transactions on Visualization and Computer Graphics 8 (2) (2002) 198–207.
[12] A. Sheffer, E. de Sturler, Smoothing an overlay grid to minimize linear distortion in texture mapping, ACM Trans. Graph. 21 (4) (2002) 874–890.
[13] Y. Lee, H. Kim, S. Lee, Mesh parameterization with a virtual boundary, Computers & Graphics 26 (5) (2002) 677–686.
[14] Z. Wang, A. C. Bovik, A universal image quality index, IEEE Signal Processing Letters 9 (3) (2002) 81–84.
[15] K. Ikeuchi, A. Nakazawa, K. Hasegawa, T. Ohishi, The Great Buddha project: Modeling cultural heritage for VR systems through observation, Proc. IEEE ISMAR, 2003.
[16] G. Kós, T. Várady, Parameterizing complex triangular meshes, Proc. 5th International Conference on Curves and Surfaces, Nashboro Press, 2003, pp. 265–274.
[17] S. Magda, D. Kriegman, Fast texture synthesis on arbitrary meshes, Proc. 14th Eurographics Workshop on Rendering, Eurographics Association, 2003, pp. 82–89.
[18] Z. Jankó, D. Chetverikov, Registration of an uncalibrated image pair to a 3D surface model, Proc. 17th International Conference on Pattern Recognition, Vol. 2, 2004, pp. 208–211.
[19] Y. Yemez, F. Schmitt, 3D reconstruction of real objects with high resolution shape and texture, Image and Vision Computing 22 (2004) 1137–1153.
[20] Y. Chen, H. H. Ip, Texture evolution: 3D texture synthesis from single 2D growable texture pattern, The Visual Computer 20 (2004) 650–664.
[21] A. Levin, A. Zomet, S. Peleg, Y. Weiss, Seamless image stitching in the gradient domain, Proc. 8th European Conference on Computer Vision, 2004, pp. 377–389.
[22] J. Dong, M. Chantler, Capture and synthesis of 3D surface texture, International Journal of Computer Vision 62 (2005) 177–194.
[23] M. Floater, K. Hormann, Surface parameterization: a tutorial and survey, in: N. A. Dodgson, M. S. Floater, M. A. Sabin (Eds.), Advances in Multiresolution for Geometric Modelling, Springer-Verlag, Berlin, Heidelberg, 2005, pp. 157–186.
[24] Z. Jankó, Combining multiple texture mappings using surface flattening, Proc. Joint Hungarian-Austrian Conference on Image Processing and Pattern Recognition, 2005, pp. 155–162.
