3-D from 2-D Using Warping Transformations

This paper presents methods for recovering the third dimension from a single image, or from two views of the same scene taken at different angles, in a way that is more accurate and faster. Four cases are considered: a single image of a scene; two views of the same scene from two different perspectives; pictures of parts of the same scene; and a set of pictures of different views of the subject, forming a panorama. The method is known as image warping, which encompasses a set of transformations (affine, bilinear, projective, mosaic, and similarity); these transformations are compared and applied to several images. The approach is implemented in software written in Visual C++ with the OpenGL graphics library, together with Matlab, which allows us to build models of the transformations that fall under image warping: the bilinear mapping, the affine mapping, and the projective mapping. Also shown in this paper are methods for correcting camera exposure changes and for blending the stitching line between the images. We present panorama photos generated from still images.


Introduction
Digital image warping is a growing branch of image processing that deals with geometric transformation techniques. Early interest in this area dates back to the mid-1960s when it was introduced for geometric correction applications in remote sensing. Since that time it has experienced vigorous growth, finding uses in such fields as medical imaging, computer vision, and computer graphics. Although image warping has traditionally been dominated by results from the remote sensing community, it has recently enjoyed a new surge of interest from the computer graphics field. This is largely due to the growing availability of advanced graphics workstations and increasingly powerful computers that make warping a viable tool for image synthesis and special effects. Work in this area has already led to successful market products such as real-time video effects generators for the television industry and cost-effective warping hardware for geometric correction. Current trends indicate that this area will have growing impact on desktop video.
Digital image warping has benefited greatly from several fields, ranging from early work in remote sensing to recent developments in computer graphics. The scope of these contributions, however, often varies widely owing to different operating conditions and assumptions. The relationship between morphing and warping can be summarized as:

morphing = warping + blending
The equation above refers to the fact that morphing is a two-stage process which involves coupling image warping with color interpolation. As the morphing proceeds, the first image (source) is gradually warped towards the second image (target) while fading out. At the same time, the second image starts warping towards the first image and is faded in. Thus, the early images in the sequence are much like the first image. The middle image of the sequence is the average of the first image distorted halfway towards the second one and the second image distorted halfway back towards the first one. The last images in the sequence are similar to the second one. The whole process thus consists of warping two images so that they have the same "shape" and then cross-dissolving the resulting images [1].
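The warp-then-cross-dissolve schedule described above can be sketched as follows. This is a minimal illustration in Python/NumPy rather than the paper's Visual C++ implementation, and `warp_toward` is a hypothetical placeholder for any of the warping transformations discussed later in the paper:

```python
import numpy as np

def cross_dissolve(src_warped, dst_warped, t):
    """Blend two already-warped frames: t = 0 gives the source,
    t = 1 gives the target, t = 0.5 the halfway average."""
    return (1.0 - t) * src_warped + t * dst_warped

def morph_sequence(src, dst, warp_toward, steps):
    """Morph: warp src toward dst while fading it out, warp dst
    back toward src while fading it in, then cross-dissolve.
    `warp_toward(img, other, t)` is assumed to return `img`
    warped a fraction t of the way toward `other`'s shape."""
    frames = []
    for k in range(steps + 1):
        t = k / steps
        a = warp_toward(src, dst, t)        # source distorted toward target
        b = warp_toward(dst, src, 1.0 - t)  # target distorted back toward source
        frames.append(cross_dissolve(a, b, t))
    return frames
```

With the identity warp, the first frame equals the source, the last equals the target, and the middle frame is their average, matching the description above.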
Geometric transformations permit elimination of the geometric distortion that occurs when an image is captured. Geometric distortion may arise because of the lens or because of the irregular movement of the sensor during image capture.
Geometric transformation processing is also essential in situations where there are distortions inherent in the imaging process, such as remote sensing from aircraft or spacecraft. One example is an attempt to match remotely sensed images of the same area taken a year apart, when the more recent image was probably not taken from precisely the same position. To inspect changes over the year, it is necessary first to execute a geometric transformation, and then subtract one image from the other. We might also need to register two or more images of the same scene, obtained from different viewpoints or acquired with different instruments. Image registration matches up the features that are common to two or more images. Registration also finds applications in medical imaging [2] and [3].
People have always been fascinated by capturing the entire view of a scene. Before the era of digital cameras, wide-angle views were captured using special optical lenses. However, these lenses are usually mounted on SLR cameras, which most people do not have. In addition, lens distortion is often introduced in these pictures, and even with wide-angle lenses we are still unable to obtain the full 360 degree view. A new generation of digital cameras based on line-scanning technologies, such as the ones produced by Panoscan.com, allows us to capture incredible 360 degree views of scenes. The pictures produced by these cameras often have very high quality. The drawback is that those cameras are very expensive and far beyond the reach of average consumers. One of the major advantages of using image processing is affordability, as anyone can install a piece of software on a PC and process the data to produce panorama photos. However, since the images are taken at multiple moments while the camera is panning around the scene, they need to be registered to each other in order to obtain the final result. This registration, or motion matching, has proven to be a difficult problem, and that is what most of the research work in this field has focused on in the past. In a perfect world where the camera is placed horizontally and pans exactly around its focal point, if we know the tilt angle and how much the camera has panned, we can warp all the images to a sphere based on the focal field of view of the camera model. In the case when the tilt angle is zero, a cylinder is a good substitute for the sphere. Theoretically, all the images can be warped to such a common reference sphere or cylinder, and we can then reproduce the entire field of view from this sphere or cylinder. This is known as spherical or cylindrical warping [1].
In reality, without knowing any camera angles or the camera's focal field of view, a correct estimation for this kind of warping is difficult to obtain. Instead, people have mostly tried to use 2D planar matching techniques to obtain relative matching between two images, such as affine matching. However, without correct warping, there will always be errors introduced during the matching due to the perspective changes from the 3D scene to the 2D image. One interesting idea is to use only narrow center strips of video frames [4]. This approach works for high frame-rate video data. It is, in fact, mimicking the line-scan cameras mentioned earlier. The line-scan cameras scan one vertical line at a time, and there is no geometric distortion. However, issues still remain. What if there are objects moving in the scene? The strips would likely cut the moving objects into parts. Furthermore, this approach would not work for still photo stitching.
The work in the literature has mostly focused on how to match images in the general cases of transformation, i.e. in the case when the camera pans, rotates, and tilts in any direction. We realize that no matter how well the matching is done, there will be some misalignments between the images. This could happen because the camera drifts away from its initial focal point position, as is always the case for hand-held cameras. It could also happen because there are moving objects in the scene, or because of the 3D to 2D transformation that cannot be accounted for by 2D image matching. So instead, this work focuses on the stitching side, to avoid such misalignments or make them less visible [5].
We do assume that people capture the data with panorama photos in mind. This means that the camera is held roughly horizontally and the panning is done consistently along the horizontal or vertical direction, rather than the general scenario where the camera can be moved in any direction. We will present some interesting observations on motion matching assuming this panorama mode [6], [7], and [8].
We will also show how to deal with some other practical issues in generating good panorama photos. One problem we have faced is the change of exposure in camera settings, since most cameras are in automatic mode and adjust to the lighting when taking pictures. As a result, one picture could be significantly brighter than another, and this needs to be corrected before the final stitching process. Another issue is how to blend two images.
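One simple way to handle both issues can be sketched as follows, under the assumption that a multiplicative (gain) model is adequate for the exposure change: estimate a gain factor from the overlap region, then feather the seam with a linear ramp. This is an illustrative Python/NumPy sketch, not the paper's implementation, and the function names are hypothetical:

```python
import numpy as np

def gain_correct(img_a, img_b, overlap_a, overlap_b):
    """Scale img_b so that its mean brightness in the overlap
    region matches img_a's mean brightness there (gain model)."""
    gain = overlap_a.mean() / overlap_b.mean()
    return img_b * gain

def feather_blend(strip_a, strip_b):
    """Cross-fade two equally sized overlap strips with a
    linear ramp across the strip width."""
    w = strip_a.shape[1]
    alpha = np.linspace(1.0, 0.0, w)          # weight for strip_a
    return strip_a * alpha + strip_b * (1.0 - alpha)
```

The blended strip equals image A at its left edge and image B at its right edge, hiding the stitching line between them.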

Methods and algorithms
1.0 The procedure can be divided into three stages:
(i) Acquire the images to be processed, using either:
- a scanner with high resolution (high dpi), or
- a digital camera with high resolution (high dpi).
(ii) Construct the transformations described below.
(iii) Apply the transformations to the images to recover the third dimension (depth) of the required image.

Bilinear mapping
A bilinear mapping is most commonly defined as a mapping of a square onto a quadrilateral. This mapping can be computed by interpolating linearly in u along the top and bottom edges of the quadrilateral, and then interpolating linearly in v between the two interpolated points to yield a destination point (x, y) [8], [9], and [10].

Fig1: Bilinear mapping
The general form of a bilinear mapping is:

x = a0 + a1·u + a2·v + a3·u·v
y = b0 + b1·u + b2·v + b3·u·v

Writing the four vertex correspondences (u_k, v_k) → (x_k, y_k), k = 0, 1, 2, 3, in matrix form gives X = U·A, which must be solved for A. Here X (4 × 2) stacks the destination points [x_k  y_k], U (4 × 4) stacks the rows [u_k  v_k  u_k·v_k  1], and A (4 × 2) holds the coefficients, so given the four correspondences we can find A = U⁻¹·X. The inverse mapping from destination space to source space is not even single valued [11].
From the x-component

x = a0 + a1·u + a2·v + a3·u·v

we can solve for u:

u = (x − a0 − a2·v) / (a1 + a3·v).

Substituting this into the y-component

y = b0 + b1·u + b2·v + b3·u·v

yields a quadratic equation in v:

A·v² + B·v + C = 0,

where

A = a2·b3 − a3·b2
B = a3·y − b3·x + a0·b3 − a3·b0 + a2·b1 − a1·b2
C = a1·y − b1·x + a0·b1 − a1·b0.

We can thus find (u, v) in terms of (x, y) by evaluating the coefficients A, B, C above and then computing

v = (−B ± √(B² − 4·A·C)) / (2·A),  u = (x − a0 − a2·v) / (a1 + a3·v).

The inverse transform is multi-valued and much more difficult to compute than the forward transform.
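The forward and inverse bilinear mappings can be checked numerically. The following is a small Python/NumPy sketch (not the paper's Visual C++ code) using the coefficient vectors (a0..a3) and (b0..b3); since the inverse is multi-valued, both quadratic roots are returned:

```python
import numpy as np

def bilinear_forward(a, b, u, v):
    """Forward bilinear map with coefficient 4-vectors a, b."""
    x = a[0] + a[1] * u + a[2] * v + a[3] * u * v
    y = b[0] + b[1] * u + b[2] * v + b[3] * u * v
    return x, y

def bilinear_inverse(a, b, x, y):
    """Invert by solving A v^2 + B v + C = 0 for v, then recovering u.
    The inverse is multi-valued, so a list of (u, v) candidates is returned."""
    A = a[2] * b[3] - a[3] * b[2]
    B = a[3] * y - b[3] * x + a[0] * b[3] - a[3] * b[0] + a[2] * b[1] - a[1] * b[2]
    C = a[1] * y - b[1] * x + a[0] * b[1] - a[1] * b[0]
    if abs(A) < 1e-12:                 # degenerate case: linear in v
        vs = [-C / B]
    else:
        disc = np.sqrt(B * B - 4.0 * A * C)
        vs = [(-B + disc) / (2.0 * A), (-B - disc) / (2.0 * A)]
    return [((x - a[0] - a[2] * v) / (a[1] + a[3] * v), v) for v in vs]
```

Round-tripping a point through the forward map and back recovers the original (u, v) as one of the two candidate solutions.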
Even simpler is the affine transformation, for which three pairs of corresponding points are sufficient to find the coefficients:

u = a0 + a1·x + a2·y
v = b0 + b1·x + b2·y

The only difference between the bilinear and affine transformations is that the coefficients a3 and b3 of the bilinear form are set to zero in the affine form. In fact, the affine transformation is a particular case of the bilinear transformation.
For example, a second-degree approximation requires only six coefficients to be solved. In this case, N = 2 and K = 6, and the inverse mapping equations are [12]:

u = a00 + a10·x + a01·y + a20·x² + a11·x·y + a02·y²
v = b00 + b10·x + b01·y + b20·x² + b11·x·y + b02·y²

With more control points than coefficients this is a least-squares problem. Writing the monomials of the control points in a matrix M, the pseudo-inverse gives the solution A = (MᵀM)⁻¹·Mᵀ·U.
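The pseudo-inverse solution for the second-degree case can be sketched as follows in Python/NumPy (an illustration, not the paper's code), with `np.linalg.lstsq` computing the least-squares solution:

```python
import numpy as np

def fit_poly2_mapping(xy, uv):
    """Fit the six coefficients per coordinate of the second-degree
    (N = 2, K = 6) polynomial inverse mapping u = f(x, y), v = g(x, y)
    by least squares from control-point pairs."""
    x, y = xy[:, 0], xy[:, 1]
    # Design matrix of monomials: 1, x, y, x^2, x*y, y^2
    M = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(M, uv, rcond=None)
    return coeffs          # shape (6, 2): one column for u, one for v

def apply_poly2(coeffs, x, y):
    """Evaluate the fitted polynomial mapping at a point (x, y)."""
    m = np.array([1.0, x, y, x * x, x * y, y * y])
    return m @ coeffs      # (u, v)
```

When the control points are generated by an exact second-degree mapping, the fit recovers the true coefficients.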

Affine Transformations
Affine mappings include scales, rotations, translations, and shears; they are linear mappings plus a translation [3] and [9].
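In homogeneous coordinates these elementary mappings become 3 × 3 matrices whose last row is (0, 0, 1), and any affine mapping is a product of them. A brief Python/NumPy sketch (illustrative, not the paper's code):

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def scale(sx, sy):
    return np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])

def shear(kx):
    return np.array([[1.0, kx, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

# Any composition of these is again affine: the last row stays (0, 0, 1).
M = translation(2.0, 1.0) @ rotation(np.pi / 2) @ scale(2.0, 2.0)
```

For example, the composed M above scales the point (1, 0) to (2, 0), rotates it to (0, 2), and translates it to (2, 3).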

Projective mappings
The projective mapping, also known as the perspective or homogeneous transformation, is a projection from one plane through a point onto another plane [9]. Homogeneous transformations are used extensively for 3-D affine modeling transformations and for perspective camera transformations [12].
The 2-D projective mappings studied here are a subset of these familiar 3-D homogeneous transformations.
The general form of a projective mapping is a rational linear mapping:

x = (a·u + b·v + c) / (g·u + h·v + i)
y = (d·u + e·v + f) / (g·u + h·v + i)

Manipulation of projective mappings is much easier in the homogeneous matrix notation:

[x'  y'  w] = [u  v  q] · | a  d  g |
                          | b  e  h |
                          | c  f  i |

where (x, y) = (x'/w, y'/w) for w ≠ 0. Although there are 9 coefficients in the matrix above, these mappings are homogeneous, so any nonzero scalar multiple of the matrix gives an equivalent mapping. Hence there are only 8 degrees of freedom in a 2-D projective mapping. We can assume without loss of generality that i = 1, except in the special case that the source point (0, 0) maps to a point at infinity. A projective mapping is affine when g = h = 0.
Projective mappings will in general map the line at infinity to a line in the real plane. We can think of this line as the horizon line of all vanishing points (a vanishing point is the point in a perspective drawing to which the images of parallel lines not perpendicular to the image plane appear to converge), by analogy to the perspective projection.
Affine mappings are the special case of projective mappings that map the line at infinity into itself. By defining projective mappings over the projective plane and not just the real plane, projective mappings become bijections (one-to-one and onto), except when the mapping matrix is singular. For non-degenerate mappings the forward and inverse transforms are single-valued, just as for an affine mapping [11] and [12].
Projective mappings share many of the desirable properties of affine mappings. Unlike bilinear mappings, which preserve equispaced points along certain lines, projective mappings do not in general preserve equispaced points, as in Fig 3; instead they preserve a quantity called the cross ratio of points [11]. Like affine mappings, projective mappings preserve lines at all orientations. In fact, projective mappings are the most general line-preserving planar mappings, and they may be composed by concatenating their matrices. Another remarkable property is that the inverse of a projective mapping is also projective, which can be explained by reversing the plane-to-plane projection by which a projective mapping is defined. The matrix for the inverse mapping is the inverse or adjoint of the forward mapping matrix. (The adjoint of a matrix is the transpose of the matrix of cofactors [32].) In homogeneous algebra the adjoint matrix can be used in place of the inverse matrix whenever an inverse transform is needed, since the two are scalar multiples of each other, and the adjoint always exists, while the inverse does not if the matrix is singular. The inverse transformation is therefore

M_ds = adj(M_sd).

In an interactive image warper one might specify the four corners of the source and destination quadrilaterals with a tablet or mouse, and wish to warp one area to the other. This sort of task is an ideal application of projective mappings, but the problem is to find the mapping matrix. A projective mapping has 8 degrees of freedom, which can be determined from the source and destination coordinates of the four corners of a quadrilateral. Let the correspondence map the four points (u_k, v_k) to (x_k, y_k) (all finite). To compute the forward mapping matrix M_sd, assuming that i = 1, we have eight equations in the eight unknowns a through h:

x_k = (a·u_k + b·v_k + c) / (g·u_k + h·v_k + 1)
y_k = (d·u_k + e·v_k + f) / (g·u_k + h·v_k + 1)

for k = 0, 1, 2, 3. This can be rewritten as an 8 × 8 linear system:

a·u_k + b·v_k + c − g·u_k·x_k − h·v_k·x_k = x_k
d·u_k + e·v_k + f − g·u_k·y_k − h·v_k·y_k = y_k

There are more efficient formulas for computing the mapping matrix. The formula above handles the case where the polygon is a general quadrilateral in both source and destination spaces.
We will consider three additional cases: square-to-quadrilateral, quadrilateral-to-square, and the general quadrilateral-to-quadrilateral mapping.

Case 1. The system is easily solved symbolically in the special case where the uv quadrilateral is a unit square. If the vertex correspondence is

(0, 0) → (x0, y0), (1, 0) → (x1, y1), (1, 1) → (x2, y2), (0, 1) → (x3, y3),

then the eight equations reduce to the following. Let Σx = x0 − x1 + x2 − x3 and Σy = y0 − y1 + y2 − y3. If Σx = 0 and Σy = 0 the quadrilateral is a parallelogram and the mapping is affine:

a = x1 − x0, b = x2 − x1, c = x0, d = y1 − y0, e = y2 − y1, f = y0, g = h = 0.

Otherwise, with Δx1 = x1 − x2, Δx2 = x3 − x2, Δy1 = y1 − y2, Δy2 = y3 − y2:

g = (Σx·Δy2 − Σy·Δx2) / (Δx1·Δy2 − Δx2·Δy1)
h = (Δx1·Σy − Δy1·Σx) / (Δx1·Δy2 − Δx2·Δy1)
a = x1 − x0 + g·x1, b = x3 − x0 + h·x3, c = x0
d = y1 − y0 + g·y1, e = y3 − y0 + h·y3, f = y0.

This gives a projective mapping, and the computation is much faster than a straightforward 8 × 8 system solver. The mapping above is easily generalized to map a rectangle to a quadrilateral by pre-multiplying with a scale and translation matrix.

Case 2. The inverse mapping, a quadrilateral to a square, can also be optimized. It turns out that the most efficient algorithm for computing this is not purely symbolic, as in the previous case, but numerical. We use the square-to-quadrilateral formulas just described to find the inverse of the desired mapping, and then take its adjoint to compute the quadrilateral-to-square mapping [11].
Case 3. Since we can compute quadrilateral-to-square and square-to-quadrilateral mappings quickly, the two mappings can easily be composed to yield a general quadrilateral-to-quadrilateral mapping, as in Fig 4 below. This solution method is faster than a general 8 x 8 system solver.
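The three cases can be sketched together in Python/NumPy (an illustration rather than the paper's implementation; here the matrix acts on column vectors [u, v, 1], the transpose of the row-vector convention used in the text):

```python
import numpy as np

def square_to_quad(quad):
    """Projective matrix mapping the unit square (0,0),(1,0),(1,1),(0,1)
    to the four quad corners, via the closed-form Case 1 solution."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    if abs(sx) < 1e-12 and abs(sy) < 1e-12:
        # Parallelogram: the mapping is affine (g = h = 0)
        return np.array([[x1 - x0, x2 - x1, x0],
                         [y1 - y0, y2 - y1, y0],
                         [0.0,     0.0,     1.0]])
    dx1, dx2 = x1 - x2, x3 - x2
    dy1, dy2 = y1 - y2, y3 - y2
    det = dx1 * dy2 - dx2 * dy1
    g = (sx * dy2 - sy * dx2) / det
    h = (dx1 * sy - dy1 * sx) / det
    return np.array([[x1 - x0 + g * x1, x3 - x0 + h * x3, x0],
                     [y1 - y0 + g * y1, y3 - y0 + h * y3, y0],
                     [g,                h,                1.0]])

def quad_to_quad(src, dst):
    """Case 3: compose quad-to-square (Case 2, via the adjoint)
    with square-to-quad (Case 1)."""
    A = square_to_quad(src)
    B = square_to_quad(dst)
    adjA = np.linalg.inv(A) * np.linalg.det(A)   # adjoint: scalar multiple of inverse
    return B @ adjA

def apply_mapping(M, pt):
    """Apply a homogeneous mapping and divide by w."""
    x, y, w = M @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

The composed matrix carries each source corner exactly to the corresponding destination corner, and since the mapping is homogeneous, the scalar factor introduced by using the adjoint instead of the inverse is harmless.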

Comparison of simple mappings
Affine and projective mappings are closed under composition, but bilinear mappings are not: the composition of two affine (or two projective) mappings is again affine (projective), while the composition of two bilinear mappings is a biquadratic mapping. Since (nonsingular) affine and projective mappings are closed under composition, have an identity and inverses, and obey the associative law, they each form a group under the operation of composition. Bilinear mappings do not form a group in this sense [12].
We summarize the properties of affine, bilinear, and projective mappings as follows. Affine mappings are the simplest of the three classes. If more generality is needed, then projective mappings are preferable to bilinear mappings because of the predictability of line-preserving mappings. For the implementer, the group properties of affine and projective mappings make their inverse mappings as easy to compute as their forward mappings. Bilinear mappings are computationally preferable to projective mappings only when the forward mapping is used much more heavily than the inverse mapping [13].

Image Mosaicing
One application of image warping is merging several images into a complete mosaic to form a panoramic view. In mosaicing, the transformation between images is often not known beforehand [8]. Two images are merged, and we estimate the transformation by letting the user give points of correspondence (also called landmarks or fiducial markers) in each of the images. In order to recover the transformation, we rewrite the warping equation so that the warping parameters form a vector t [13] and [14].

Mathematical formulation of parametric warping for video registration [5] and [16]: let x denote a pixel position and t the motion parameter vector. The transformation, called warping, can be expressed as

x' = w(x; t).

With the intensity constancy constraint, the pixel intensity remains constant even when its position changes, i.e. the warped pixel keeps the intensity of the original pixel.
For a certain image region I, we denote its pixel intensity at each position x_i and time t by I(x_i, t). Under the constant-intensity assumption, the motion of image pixels can be represented in terms of their spatial and temporal derivatives: linearizing by a Taylor series expansion about the current parameter estimate,

I(w(x_i; t + δt)) ≈ I(w(x_i; t)) + ∇Iᵀ·(∂w/∂t)·δt + I_t,

where ∇I is the spatial gradient and I_t the temporal derivative. From the intensity constancy constraint, the motion parameter vector of the image region can be estimated at time t by minimizing the least-squares error function [16] and [19]

E(t) = Σ_i [ I(w(x_i; t), t) − I(x_i, 0) ]².

By substituting the linearization into this error function and ignoring the higher-order terms, we obtain a linear least-squares problem in the parameter increment.
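For the simplest parameter vector, a pure translation w(x; t) = x + t, the least-squares estimation described above reduces to solving 2 × 2 normal equations built from image gradients. A minimal Python/NumPy sketch of one such Gauss-Newton step (an illustration, not the paper's code):

```python
import numpy as np

def estimate_translation(img0, img1):
    """One least-squares step of the translation estimate: solve the
    normal equations built from spatial gradients of img0 and the
    temporal difference img1 - img0. Returns estimated (dx, dy)."""
    gy, gx = np.gradient(img0)            # spatial derivatives (rows = y, cols = x)
    it = img1 - img0                      # temporal derivative
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * it), np.sum(gy * it)])
    return np.linalg.solve(A, b)
```

When the temporal difference is exactly consistent with the linearization, the estimate recovers the true shift; for real images the step is iterated.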

Applications
-A single still image.
-Two views from two different perspectives.

Results
We show here some examples of panorama photos generated from still photos using the warping transformation methods (affine, bilinear, projective, mosaic, and similarity transformations). Fig. 5 to Fig. 12 are generated from pairs of still photos, and Fig. 4 is generated from a single still image using the OpenGL model code. It is worth mentioning that in Fig. 5 to Fig. 12 the overall brightness does not change from image to image: intensity correction is done, and blending is added to smooth out the transition between the images.
We obtain a smooth panorama photo without any visually disturbing artifacts. This approach does not prevent a moving object from appearing more than once in the image, but such multiple appearances make the picture more dynamic and more interesting. We thus have a good method for finding the third dimension from two dimensions, as in a panorama, and one that is more accurate.

Conclusions
This paper presents techniques to handle some practical issues when generating panorama photos. Realizing that there will always be some misalignments between two images no matter how well the matching is done, we propose a warping transformation method that finds a line of best agreement between two images to make the misalignments less visible, using the affine, bilinear, projective, mosaic, and similarity transformations. Also shown in this paper are methods for finding 3D from 2D in three cases: a single still image; two views from two different perspectives; and panorama photos generated from several still images.
In the future, we plan to find 3D from 2D by other methods as well.