Graph Cut Textures: Image Synthesis Using Graph Cuts


This banner was generated by merging the source images using our interactive texture merging technique.

Abstract
Image processing used to be a single unified field in the early sixties and seventies. Today, it has expanded and diversified into several branches based on mathematical tools as well as applications. For instance, there are separate topics dealing with fuzzy image processing, morphological image processing, knowledge-based image processing, etc. Similarly, several topics deal with diverse application-specific tools for remote sensing, industrial vision, and so forth.
Image analysis issues such as segmentation, edge/line detection, feature extraction, image description, and pattern recognition have been covered in great detail, and the state-of-the-art concepts have been discussed in many papers.
In this paper we introduce a new approach for image synthesis. In our approach, patch regions from a sample image are transformed and copied to the output and then stitched together along optimal seams to generate a new output. A graph cut technique is used to determine the optimal patch region for any given offset between the input and output texture. Unlike dynamic programming, the graph cut technique for seam optimization is applicable in any dimension. We specifically explore it in 2D to perform regular image synthesis. We present approximate offset search techniques that work well in conjunction with the presented patch size optimization. We show results for synthesizing regular, random, and natural images. We also demonstrate how this method can be used to interactively merge different images to generate new scenes.

Keywords: Texture Synthesis, Image-based Rendering, Image Processing, Machine Learning, Natural Phenomena.

Introduction
Generating a new form of output from a small example is widely recognized to be important for computer graphics applications. For example, sample-based image texture synthesis methods are needed to generate large realistic textures for the rendering of complex graphics scenes. The primary reason for such example-based synthesis underlies the concept of texture, usually defined as an infinite pattern that can be modeled by a stationary stochastic process. In this paper, we present a new method to generate such an infinite pattern from a small amount of training data; using a small example patch of the texture, we generate a larger pattern with similar stochastic properties. Specifically, our approach for texture synthesis generates textures by copying input texture patches. In this approach, we first search for an appropriate location to place the patch, and then use a graph cut technique to find the optimal region of the patch to transfer to the output. In addition, the method supports iterative refinement of the output by allowing for successive improvement of the patch seams.
When synthesizing a texture, we want the generated texture to be perceptually similar to the example texture. This concept of perceptual similarity has been formalized as a Markov Random Field (MRF). The output texture is represented as a grid of nodes, where each node refers to a pixel or a neighborhood of pixels in the input texture. The marginal probability of a pair of nodes depends on the similarity of their pixel neighborhoods, so that pixels from similar-looking neighborhoods in the input texture end up as neighbors in the generated texture, preserving the perceptual quality of the input. The goal of texture synthesis can then be restated as the solution for the nodes of the network that maximizes the total likelihood. In particular, texture synthesis algorithms that generate their output by copying patches must make two decisions for each patch:
(1) where to position the input texture relative to the output texture (the offset of the patch), and
(2) which parts of the input texture to transfer into the output space (the patch seam) (Figure 1).

Figure 1: Image texture synthesis by placing small patches at various offsets, followed by the computation of a seam that enforces visual smoothness between the existing pixels and the newly placed patch.
The primary contribution of this paper is an algorithm for texture synthesis which, after finding a good patch offset, computes the best patch seam (the seam yielding the highest possible MRF likelihood among all possible seams for that offset). The algorithm works by reformulating the problem as a minimum cost graph cut problem: the MRF grid is augmented with special nodes, and a minimum cut of this grid between two special terminal nodes is computed. This minimum cut encodes the optimal solution for the patch seam. We also propose a set of algorithms to search for the patch offset at each iteration. These algorithms try to maintain the large-scale structure of the texture by matching large input patches with the output. An important observation is that it is the flexibility of our seam optimization technique, which can paste large patches at each iteration in a non-causal fashion, that permits the design of our offset search algorithms. The offset searching and seam finding methods are therefore complementary to each other, and work in tandem to generate the results we obtain.
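To make the min-cut reformulation concrete, the sketch below builds the overlap graph and cuts it with a generic max-flow library (networkx is our illustrative choice; the paper does not prescribe one). Adjacent overlap pixels s, t are linked with the simple matching cost M(s, t) = |A(s) - B(s)| + |A(t) - B(t)|, and infinite-capacity edges constrain border pixels to the old patch A (source) or new patch B (sink). Function and variable names are ours, not the paper's.

```python
# Sketch of the graph cut seam: label each overlap pixel as "old patch A"
# or "new patch B" by computing a minimum s-t cut of the pixel grid.
import networkx as nx
import numpy as np

def graphcut_seam(A, B):
    """A, B: equally sized grayscale overlap regions.
    Returns labels: 0 -> keep pixel from A, 1 -> take pixel from B."""
    h, w = A.shape
    diff = np.abs(A - B)  # per-pixel color difference
    G = nx.DiGraph()

    def cap(p, q):
        # Matching cost M(p, q) between adjacent pixels; small epsilon
        # avoids degenerate zero-capacity edges.
        return diff[p] + diff[q] + 1e-6

    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                c = cap((y, x), (y, x + 1))
                G.add_edge((y, x), (y, x + 1), capacity=c)
                G.add_edge((y, x + 1), (y, x), capacity=c)
            if y + 1 < h:
                c = cap((y, x), (y + 1, x))
                G.add_edge((y, x), (y + 1, x), capacity=c)
                G.add_edge((y + 1, x), (y, x), capacity=c)

    # Constrain the left column to A and the right column to B, as when
    # stitching a new patch to the right of existing texture.
    for y in range(h):
        G.add_edge('A', (y, 0), capacity=float('inf'))
        G.add_edge((y, w - 1), 'B', capacity=float('inf'))

    _, (keep_A, _) = nx.minimum_cut(G, 'A', 'B')
    labels = np.ones((h, w), dtype=int)
    for node in keep_A:
        if node != 'A':
            labels[node] = 0
    return labels
```

The cut passes through the cheapest edges, i.e., through pixels where the two patches already agree, which is exactly the optimal seam for this cost.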
Efros and Freeman [2001] were the first to incorporate seam finding, by using dynamic programming. However, dynamic programming imposes an artificial grid structure on the pixels and therefore does not treat each pixel uniformly. This can potentially mean missing good seams that cannot be modeled within the imposed structure. Moreover, dynamic programming is a memory-less optimization procedure and cannot explicitly improve existing seams. This restricts its use to appending new patches to existing textures. Our graph cut method treats each pixel uniformly and is also able to place patches over existing texture. Most previous work on texture is geared towards 2D images, but the texture problem appears in a very similar form in 3D for the generation of spatiotemporal textures. However, we deal mainly with 2D here.

Patch Fitting using Graph Cuts
We synthesize new texture by copying irregularly shaped patches from the sample image into the output image. The patch copying process is performed in two stages. First, a candidate rectangular patch (or patch offset) is selected by performing a comparison between the candidate patch and the pixels already in the output image. The method of selecting candidate patches is described in a later section (Section 4). Second, an optimal (irregularly shaped) portion of this rectangle is computed, and only these pixels are copied to the output image (Figure 1). The portion of the patch to copy is determined by using a graph cut algorithm, and this is the heart of our synthesis technique.

In order to introduce the graph cut technique, we first describe how it can be used to perform texture synthesis in the manner of Efros and Freeman's image quilting [2001]. Later we will see that it is a much more general tool. In image quilting, small blocks (e.g., 32 × 32 pixels) from the sample image are copied to the output image. The first block is copied at random, and then subsequent blocks are placed such that they partly overlap with previously placed blocks of pixels. The overlap between old and new blocks is typically 4 or 8 pixels in width. Efros and Freeman use dynamic programming to choose the minimum cost path from one end of this overlap region to the other. That is, the chosen path is through those pixels where the old and new patch colors are similar (Figure 2, left).
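As a point of comparison, the dynamic-programming seam used in image quilting can be sketched as follows. The overlap is treated as a vertical strip, the per-pixel cost is the squared color difference, and the path moves down one row at a time, shifting at most one column per step (array shapes and names here are assumed, not taken from the paper).

```python
# Sketch of the image-quilting seam: minimum-cost top-to-bottom path
# through the overlap region, found by dynamic programming.
import numpy as np

def quilting_seam(old_overlap, new_overlap):
    """Return, for each row, the column index of the minimum-cost seam."""
    err = (old_overlap - new_overlap) ** 2  # per-pixel matching cost
    h, w = err.shape
    cost = err.astype(float).copy()
    # Accumulate minimal path cost row by row; each step may move
    # straight down or shift one column left or right.
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # Backtrack from the cheapest cell in the last row.
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

Note how the path structure (one column decision per row) is exactly the artificial grid constraint that the graph cut formulation removes.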

Patch Placements & Matching
Now we describe several algorithms for picking candidate patches. We use one of three different algorithms for patch selection, based on the type of texture we are synthesizing. These selection methods are: (1) random placement, (2) entire patch matching, and (3) subpatch matching.
In all these algorithms, we restrict the patch selection to previously unused offsets. Also, for the two matching-based algorithms, we first find a region in the current texture that needs a lot of improvement. We use the cost of existing seams to quantify the error in a particular region of the image, and pick the region with the largest error. Once we pick such an error-region, we force the patch selection algorithm to pick only those patch locations that completely overlap the error-region.
When the texture is being initialized, i.e., when it is not completely covered with patches of input texture, the error-region is picked differently and serves a different purpose: it is picked so that it contains both initialized and uninitialized portions of the output texture. This ensures that the texture is extended by some amount and also that the extended portion is consistent with the already initialized portions of the texture.

Now we discuss the three patch placement and matching methods in some detail. The same three placement algorithms are used for the synthesis of image (spatial) textures, discussed in Section 6. Note that patch placement is really just a translation applied to the input before it is added to the output.
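A brute-force sketch of the sub-patch matching idea: compute the SSD of a small output error-region against every offset in the input, then sample an offset with probability favoring low cost. The sampling parameter k below is an illustrative choice of ours, not a value from the paper, and all names are hypothetical.

```python
# Sketch of sub-patch matching: SSD cost over all translations of a
# small error-region, followed by stochastic offset selection.
import numpy as np

def subpatch_offsets_cost(input_tex, error_region):
    """Brute-force SSD of error_region at every position in input_tex."""
    H, W = input_tex.shape
    h, w = error_region.shape
    cost = np.empty((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = input_tex[y:y + h, x:x + w]
            cost[y, x] = np.sum((window - error_region) ** 2)
    return cost

def pick_offset(cost, k=0.01, rng=None):
    """Sample an offset with probability decaying in its SSD cost;
    smaller k concentrates the choice on the best match."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.exp(-cost / (k * cost.mean() + 1e-12)).ravel()
    idx = rng.choice(p.size, p=p / p.sum())
    return np.unravel_index(idx, cost.shape)
```

Entire patch matching is the same computation with the full existing output (masked to initialized pixels) in place of the small error-region.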
Random placement
Entire patch matching
Sub-patch matching
Extensions & Refinements 
Now we briefly describe a few improvements and extensions that we have implemented for image and video synthesis. These extensions include improvements to the cost functions that account for frequency variations, inclusion of feathering and multi-resolution techniques to smooth out visible artifacts, and speed ups in the SSD-based algorithms used in patch matching.
Adapting the Cost Function
Feathering and Multi-Resolution Splining
FFT-Based Acceleration
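The FFT-based acceleration exploits the expansion sum((I - P)^2) = sum(I^2) - 2*sum(I*P) + sum(P^2): the cross-correlation term is computed with FFTs instead of an explicit sliding window, and the sliding-window sum of I^2 with a summed-area table. A minimal sketch under those assumptions (function names ours):

```python
# Sketch of FFT-accelerated SSD: returns the SSD of patch P against
# every valid position of image I, without an explicit sliding window.
import numpy as np

def ssd_fft(I, P):
    H, W = I.shape
    h, w = P.shape
    # Cross-correlation of I with P via zero-padded FFTs; keep the
    # 'valid' part of the full correlation.
    shape = (H + h - 1, W + w - 1)
    F = np.fft.rfft2(I, s=shape)
    G = np.fft.rfft2(P[::-1, ::-1], s=shape)  # flip -> correlation
    corr = np.fft.irfft2(F * G, s=shape)[h - 1:H, w - 1:W]
    # Sliding-window sum of I^2 via an integral image (summed-area table).
    S = np.cumsum(np.cumsum(np.pad(I ** 2, ((1, 0), (1, 0))), axis=0), axis=1)
    win = S[h:, w:] - S[:-h, w:] - S[h:, :-w] + S[:-h, :-w]
    return win - 2 * corr + np.sum(P ** 2)
```

The cost drops from O(HW * hw) per search to O(HW log HW), which is what makes whole-image patch matching practical.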

Image Synthesis
We have applied our technique for image and texture synthesis to generate regular, structured, and random textures, as well as to synthesize extensions of natural images. Figures 6 and 7 show results for a variety of two-dimensional image textures. We used entire patch matching as our patch selection algorithm for the ESCHER and KEYBOARD images, while sub-patch matching was used for generating SHEEP and LILIES.
The computation for images is quite fast, mainly due to the use of FFT-based search. All image synthesis results presented here took between 5 seconds and 5 minutes to run. The LILIES image took 5 minutes because it was originally generated to be 1280×1024 in size. We also compare some of our graph cut results with those of Image Quilting in Figure 6. Now we briefly describe a few specialized extensions and applications of our 2D texture synthesis technique.
Additional Transformations of Source Patches
Interactive Merging and Blending
