Image capture using three-dimensional reconstruction



Title: Image capture using three-dimensional reconstruction.
Abstract: Embodiments may take the form of three-dimensional image sensing devices configured to capture an image including one or more objects. In one embodiment, the three-dimensional image sensing device includes a first imaging device configured to capture a first image and extract depth information for the one or more objects. Additionally, the image sensing device includes a second imaging device configured to capture a second image and determine an orientation of a surface of the one or more objects. ...


USPTO Application #: 20120075432 - Class: 348/48 - Published: 03/29/2012




The Patent Description & Claims data below is from USPTO Patent Application 20120075432, Image capture using three-dimensional reconstruction.


CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application No. 61/386,865, filed Sep. 27, 2010 and titled “Image Capture Using Three-Dimensional Reconstruction,” the disclosure of which is hereby incorporated herein in its entirety.

BACKGROUND

I. Technical Field

The disclosed embodiments relate generally to image sensing devices and, more particularly, to image sensing devices that utilize three-dimensional reconstruction to form a three-dimensional image.

II. Background Discussion

Existing three-dimensional image capture devices, such as digital cameras and video recorders, can derive limited three-dimensional visual information for objects located within a captured area. For example, some imaging devices can extract approximate depth information relating to objects located within the captured area, but are incapable of obtaining detailed geometric information relating to the surfaces of these objects. Such sensors may be able to approximate the distances of objects within the captured area, but cannot accurately reproduce the three-dimensional shape of the objects. Alternatively, other imaging devices can obtain and reproduce surface detail information for objects within the captured area, but are incapable of extracting depth information. Accordingly, these sensors may be incapable of differentiating between a small object positioned close to the sensor and a large object positioned far away from the sensor.

SUMMARY

Embodiments described herein relate to systems, apparatuses and methods for capturing a three-dimensional image using one or more dedicated imaging devices. One embodiment may take the form of a three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized image, the first sensor including a first imaging device and a polarized filter associated with the first imaging device; a second sensor for capturing a first non-polarized image; a third sensor for capturing a second non-polarized image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first non-polarized image and the second non-polarized image, the processing module further operative to combine the polarized image, the first non-polarized image, and the second non-polarized image to form a composite three-dimensional image.

Another embodiment may take the form of a three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized chrominance image and determining surface information for the one or more objects, the first sensor including a color imaging device and a polarized filter associated with the color imaging device; a second sensor for capturing a first luminance image; a third sensor for capturing a second luminance image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first luminance image and the second luminance image and combining the polarized chrominance image, the first luminance image, and the second luminance image to form a composite three-dimensional image utilizing the surface information and the depth information.

Still another embodiment may take the form of a method for capturing at least one image of an object, comprising: capturing a polarized image of the object; capturing a first non-polarized image of the object; capturing a second non-polarized image of the object; deriving depth information for the object from at least the first non-polarized image and the second non-polarized image; determining a plurality of surface normals for the object, the plurality of surface normals derived from the polarized image; and creating a three-dimensional image from the depth information and the plurality of surface normals.
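As a purely illustrative sketch (not part of the application), the method recited above can be read as a simple pipeline in which the two non-polarized captures feed a depth step and the polarized capture feeds a surface-normal step. The routine below assumes Python/NumPy arrays and takes the two per-step routines as hypothetical callables, since the claim does not prescribe specific algorithms; possible implementations of those callables are sketched after the corresponding passages of the detailed description below.

    import numpy as np

    def capture_3d_image(polarized, non_polarized_1, non_polarized_2,
                         derive_depth, derive_surface_normals):
        """Orchestrate the claimed steps: two offset non-polarized images yield
        depth; the polarized image yields per-pixel surface normals; both are
        combined into a composite three-dimensional representation."""
        depth = derive_depth(non_polarized_1, non_polarized_2)       # e.g., stereo disparity
        normals = derive_surface_normals(polarized)                  # e.g., shape from polarization
        return {"depth": np.asarray(depth), "normals": np.asarray(normals)}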

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Other features, details, utilities, and advantages will be apparent from the following more particular written description of various embodiments, as further illustrated in the accompanying drawings and defined in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a functional block diagram that illustrates certain components of one embodiment of a three-dimensional imaging apparatus;

FIG. 1B is a close-up view of one embodiment of the second imaging device shown in FIG. 1A;

FIG. 1C is a close-up view of another embodiment of the second imaging device shown in FIG. 1A;

FIG. 1D is a close-up view of another embodiment of the second imaging device shown in FIG. 1A;

FIG. 2 is a functional block diagram that illustrates certain components of another embodiment of a three-dimensional imaging apparatus;

FIG. 3 is a functional block diagram that illustrates certain components of another embodiment of a three-dimensional imaging apparatus;

FIG. 4 depicts a sample polarization filter that may be used in accordance with embodiments discussed herein, including the imaging apparatuses of FIGS. 1A-3; and

FIG. 5 depicts a second sample polarization filter that may be used in accordance with embodiments discussed herein, including the imaging apparatuses of FIGS. 1A-3.

DETAILED DESCRIPTION

One embodiment may take the form of a three-dimensional imaging apparatus including first and second imaging devices. The first imaging device may contain two distinct image sensors that may be used in concert to derive depth data for objects within their field of detection. Alternatively, the first imaging device may contain a single image sensor that provides depth data. The second imaging device may be at least partially overlaid with a polarizing filter in order to obtain polarization data for light impacting the device, and thus the surface orientation of any objects reflecting such light.

The first imaging device may derive approximate depth information relating to objects within its field of detection and supply the depth information to an image processing device. The second imaging device may capture surface detail information relating to objects within its field of detection and supply the surface detail information to the image processing device. The image processing device may combine the depth information with the surface detail information in order to create a three-dimensional image that includes both surface detail and accurate depth information for objects in the image.

In the following discussion of illustrative embodiments, the term “image sensing device” includes, without limitation, any electronic device that can capture still or moving images. The image sensing device may utilize analog or digital sensors, or a combination thereof, for capturing the image. In some embodiments, the image sensing device may be configured to convert or facilitate converting the captured image into digital image data. The image sensing device may be hosted in various electronic devices including, but not limited to, digital cameras, personal computers, personal digital assistants (PDAs), mobile telephones, or any other devices that can be configured to process image data. Sample image sensing devices include charge-coupled device (CCD) sensors, complementary metal-oxide-semiconductor sensors, infrared sensors, light detection and ranging sensors, and the like. Further, the image sensing devices may be sensitive to a range of colors and/or luminances, and may employ various color separation mechanisms such as Bayer arrays, Foveon X3 configurations, multiple CCD devices, dichroic prisms and the like.

FIG. 1A is a functional block diagram of one embodiment of a three-dimensional imaging apparatus for capturing and storing image data. In one embodiment, the three-dimensional imaging apparatus may be a component within an electronic device. For example, the three-dimensional imaging apparatus may be employed in a standalone digital camera, a laptop computer, a media player, a mobile phone, and so on and so forth.

As shown in FIG. 1A, the three-dimensional imaging apparatus 100 may include a first imaging device 102, a second imaging device 104, and an image processing module 106. The first imaging device 102 may include a first image sensor, and the second imaging device 104 may include a second image sensor and a polarizing filter 108 associated with that sensor. As will be further discussed below, the first imaging device 102 may be configured to derive approximate depth information relating to objects in the image, and the second imaging device 104 may be configured to derive surface orientation information relating to objects in the image.

In one embodiment, the fields of view 112, 114 of the first and second imaging devices may be offset so that the received images are slightly different. For example, the field of view 112 of the first imaging device 102 may be vertically, diagonally, or horizontally offset from the field of view 114 of the second imaging device 104, or may be closer to or farther from a reference plane or point. As will be further discussed below, offsetting the fields of view 112, 114 may provide data useful for generating stereo disparity maps, as well as for extracting depth information. However, in other embodiments, the fields of view 112, 114 may be substantially the same.

The first and second imaging devices 102, 104 may each be formed from an array of light-sensitive pixels. That is, each pixel of the imaging devices may detect at least one of the various wavelengths that make up visible light. The signal generated by each such pixel may vary depending on the wavelength of light impacting it, so that the array may reproduce a composite image of the object. In one embodiment, the first and second imaging devices 102, 104 may have substantially identical pixel array configurations. For example, the first and second imaging devices may have the same number of pixels, the same pixel aspect ratio, the same arrangement of pixels, and/or the same size of pixels. However, in other embodiments, the first and second imaging devices may have different numbers of pixels, pixel sizes, and/or layouts. For example, in one embodiment, the first imaging device 102 may have a smaller number of pixels than the second imaging device 104, or vice versa, or the arrangement of pixels may differ between the devices.

The first imaging device 102 may be configured to capture a first image and process the image to detect depth or distance information relating to objects in the image. For example, the first imaging device 102 may be configured to derive an approximate relative distance of an object 110 by measuring properties of electromagnetic waves as they are reflected off or scattered by the object and captured by the first imaging device. In one embodiment, the first imaging device may be a Light Detection And Ranging (LIDAR) sensor. The LIDAR sensor may emit laser pulses that are reflected off of the surfaces of objects in the image and detect the reflected signal. The LIDAR sensor may then calculate the distance of an object from the sensor by measuring the time delay between transmission of a laser pulse and the detection of the reflected signal. Other embodiments may utilize other types of depth-detection techniques, such as infrared reflection, RADAR, laser detection and ranging, and the like.
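As a back-of-the-envelope illustration of the time-of-flight calculation described above (a sketch, not the application's implementation), the distance to a reflecting surface is half the measured round-trip delay multiplied by the speed of light:

    # Hypothetical sketch of the LIDAR range calculation: the sensor measures the
    # delay between emitting a laser pulse and detecting its reflection.
    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def lidar_range_m(round_trip_seconds: float) -> float:
        """Distance to the reflecting surface, in meters, for one pulse."""
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    # Example: a reflection arriving 20 nanoseconds after emission implies ~3 m.
    print(lidar_range_m(20e-9))  # approximately 2.998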

Alternatively, a stereo disparity map may be generated to derive depth or distance information relating to objects present in the image. In one embodiment, a stereo disparity map may be formed from the first image captured by the first imaging device and a second image captured by the second imaging device. Various methods and processes for creating stereo disparity maps from two offset images are known to those skilled in the art and thus are not discussed further herein. Generally, the stereo disparity map is a depth map in which depth information for objects shown in the images is derived from the offset first and second images. For example, the second image may include some or all of the objects captured in the first image, but with the position of the objects being shifted in one direction (typically, although not necessarily, horizontally). This shift may be measured and used to calculate the distance of the objects from the first and second imaging devices.
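The disparity-to-depth relationship behind such a map can be sketched as follows, assuming a rectified, horizontally offset image pair with a known focal length and baseline (neither value is given in the application; the numbers below are illustrative only):

    import numpy as np

    def disparity_to_depth(disparity_px, focal_length_px=1200.0, baseline_m=0.05):
        """Convert per-pixel disparity (shift between the offset images, in pixels)
        to depth using depth = f * B / d; larger shifts mean closer objects."""
        d = np.maximum(np.asarray(disparity_px, dtype=float), 1e-6)  # avoid divide-by-zero
        return focal_length_px * baseline_m / d

    # Example: a 30-pixel shift with the assumed focal length and baseline gives 2 m.
    print(disparity_to_depth(30.0))  # 2.0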

The second imaging device 104 may be configured to capture a second image and derive detailed surface information for objects in the image. As shown in FIG. 1A, in one embodiment, a polarizing filter 108 may be positioned between the second imaging device and an object 110, such that light reflected off the object passes through the polarizing filter to produce polarized light. The polarized light is then transmitted by the filter 108 to the second imaging device 104. The second imaging device 104 may be any electronic sensor capable of detecting various wavelengths of light, such as those commonly used in digital cameras, digital video cameras, mobile telephones and personal digital assistants, web cameras, and so on and so forth. For example, the second imaging device 104 may be, but is not limited to, a charge-coupled device (CCD) imaging device or a complementary metal-oxide-semiconductor (CMOS) sensor.

In one embodiment, a polarizing filter may overlay the second imaging device. As shown in FIG. 1B, the polarizing filter 108 may include an array 120 of polarizing subfilters 122. Each of the polarizing subfilters 122 within the array may overlay one or more pixels 124 of the second imaging device 104. In one embodiment, the polarizing filter 108 may be overlaid over the second imaging device 104 so that each polarizing subfilter 122 in the array 120 is aligned with a corresponding pixel 124. The polarizing subfilters 122 may have different types of polarizations. For example, a first polarizing subfilter may have a horizontal polarization, a second subfilter may have a vertical polarization, a third may have a +45 degree polarization, a fourth may have a −45 degree polarization, and so on and so forth. In some embodiments, left- and right-hand circular polarizations may be used. Accordingly, the polarized light that is transmitted from the polarizing filter 108 to the second imaging device 104 may be polarized differently for some of the pixels than for others.
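One way to picture this per-pixel arrangement (a hypothetical layout; the application does not fix a particular pattern) is a repeating 2×2 mosaic of linear polarization angles, from which the raw capture can be separated into one sub-image per angle:

    import numpy as np

    # Assumed 2x2 mosaic of linear polarization angles (0, 45, 90, 135 degrees),
    # tiled across the sensor so that each pixel 124 sits under one subfilter 122.
    def split_by_polarization(raw):
        """Return one quarter-resolution image for each assumed subfilter angle."""
        raw = np.asarray(raw)
        return {
            0:   raw[0::2, 0::2],
            45:  raw[0::2, 1::2],
            90:  raw[1::2, 0::2],
            135: raw[1::2, 1::2],
        }

    # Example: an 8x8 raw frame yields four 4x4 images, one per polarization angle.
    subimages = split_by_polarization(np.arange(64).reshape(8, 8))
    print({angle: img.shape for angle, img in subimages.items()})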

Another embodiment, shown in FIG. 1C, may include a microlens array 130 overlaying the polarization filter 108. Each of the microlenses 132 in the microlens array 130 may overlay one or more polarizing subfilters 122 to focus polarized light onto a corresponding pixel 124 of the second imaging device. The microlenses 132 in the array 130 may each be configured to refract light impacting on the second imaging device, as well as transmit light to an underlying polarizing subfilter 122. Accordingly, each microlens 132 may correspond to one of the pixels 124 of the second imaging device 104. The microlenses 132 can be formed from any suitable light-transmitting material, including plastic, acrylic, silica, glass, and so on and so forth. Additionally, the microlenses may include combinations of reflective material, highly transparent material, light absorbing material, opaque material, metallic material, optic material, and/or any other functional material to provide extra modification of optical performance. In another embodiment, shown in FIG. 1D, the microlenses 134 of the microlens array 136 may be polarized. In this embodiment, the polarized microlens array 136 may overlay the pixels 124 of the second imaging device 104 so that polarized light is focused onto the pixels 124 of the second imaging device.

In one embodiment, the microlenses 136 may be convex and have a substantially rounded configuration. Other embodiments may have different configurations. For example, in one embodiment, the microlenses 136 may have a conical configuration, in which the top end of each microlens is pointed. In other embodiments, the microlenses 136 may define truncated cones, in which the tops of the microlenses form a substantially flat surface. Additionally, in some embodiments, the microlenses 136 may be concave surfaces, rather than convex. As is known, the microlenses may be formed using a variety of techniques, including laser-cutting techniques, and/or micro-machining techniques, such as diamond turning. After the microlenses 136 are formed, an electrochemical finishing technique may be used to coat and/or finish the microlenses to increase their longevity and/or enhance or add any desired optical properties. Other methods for forming the microlenses may entail the use of other techniques and/or machinery, as is known.

Unpolarized light that is reflected off of the surfaces of objects in the image may be fully or partially polarized according to Fresnel's laws. Generally, the polarization may be correlated to the plane angle of incidence on the surface, as well as to the physical properties of the material. For example, light reflecting off highly reflective materials, such as polished metal, may be less polarized than light reflecting off of a dull surface. In one embodiment, light that is reflected off the surfaces of objects in the image may be passed through the array of polarization filters. The resulting polarized light may be captured by the pixels of the second imaging device so that each such pixel of the second imaging device receives light only if that light is polarized according to the polarization scheme of its corresponding filter. The second imaging device 104 may then measure the polarization of the light impacting on each pixel and derive the surface geometry of the object. For example, the second imaging device 104 may determine the orientation and/or curvature of the surface of an object. In one embodiment, the orientation and/or curvature of the surface may be determined for each pixel of the second imaging device 104 and combined to obtain the surface geometry for all of the surfaces of the object.
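A common way to express this measurement (a sketch under the assumption of four linear-polarization samples per location; the application does not spell out the math) is to compute the angle and degree of linear polarization, which respectively constrain the azimuth and zenith of the reflecting surface's normal:

    import numpy as np

    def polarization_cues(i0, i45, i90, i135):
        """Angle and degree of linear polarization from intensities measured behind
        0, 45, 90, and 135 degree subfilters (Stokes-parameter form). Illustrative
        only; relating these cues to exact surface normals additionally requires a
        reflectance model (e.g., the Fresnel equations) and material properties."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)                     # total intensity
        s1 = i0 - i90                                          # 0/90 linear component
        s2 = i45 - i135                                        # 45/135 linear component
        aolp = 0.5 * np.arctan2(s2, s1)                        # angle of linear polarization
        dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)   # degree of polarization
        return aolp, dolp

    # Example: light fully aligned with the 0-degree filter gives angle ~0, degree ~1.
    print(polarization_cues(1.0, 0.5, 0.0, 0.5))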

The first and second images may be transmitted to the image processing module 106, which may combine the first image captured by and transmitted from the first imaging device 102 with the second image captured by and transmitted from the second imaging device 104, to output a composite three-dimensional image. This may be accomplished by aligning the first and second images and overlaying one of the images on top of the other using a variety of techniques, including warping the first and second images, selectively cropping at least one of these images, using calibration data for the image processing module, and so on and so forth. As discussed above, the first image may supply depth information relating to the objects in the image, while the second image may supply surface geometry information for the objects in the image. Accordingly, the combined three-dimensional image may include accurate depth information for each object, while also providing accurate object surface detail.
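As a hedged sketch of that combination step (the alignment, calibration, warping, and cropping mentioned above are application-specific and not shown), a coarse depth map can be resampled to the resolution of the surface-detail image and paired with it per pixel:

    import numpy as np

    def combine_depth_and_surface(depth, surface):
        """Nearest-neighbor resample of the depth image to the surface image's
        resolution, then stack the two so each output pixel carries both surface
        detail and a depth value. Illustrative only."""
        depth = np.asarray(depth, dtype=float)
        surface = np.asarray(surface, dtype=float)
        h, w = surface.shape[:2]
        rows = np.arange(h) * depth.shape[0] // h
        cols = np.arange(w) * depth.shape[1] // w
        depth_up = depth[np.ix_(rows, cols)]
        return np.dstack([surface, depth_up])

    # Example: a 4x4 depth map paired with an 8x8 surface image gives an 8x8x2 composite.
    print(combine_depth_and_surface(np.ones((4, 4)), np.zeros((8, 8))).shape)  # (8, 8, 2)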

In one embodiment, the first image supplying the depth information may have a lower or coarser resolution (e.g., lower pixel count per unit area), than the second image supplying the surface geometry information. In this embodiment, the composite three-dimensional image may include high resolution surface detail for objects in the image, but the amount of overall processing by the image processing module may be reduced due to the lower resolution of the first image. As discussed above, other embodiments may produce first and second images having substantially the same resolution, or the first image supplying the depth information may have a higher resolution than the second image.

Another embodiment of a three-dimensional imaging apparatus 200 is shown in FIG. 2. The imaging apparatus 200 generally includes a first chrominance sensor 202, a luminance sensor 204, a second chrominance sensor 206, and an image processing module 208. The luminance sensor 204 may be configured to capture a luminance component of incoming light. Additionally, each of the chrominance sensors 202, 206 may be configured to capture color components of incoming light. In one embodiment, the chrominance sensors 202, 206 may sense the R (Red), G (Green), and B (Blue) components of an image and process these components to derive chrominance information. Other embodiments may be configured to sense other color components, such as yellow, cyan, magenta, and so on. Further, in some embodiments, two luminance sensors and a single chrominance sensor may be used. That is, certain embodiments may employ a first luminance sensor, a first chrominance sensor, and a second luminance sensor, such that a stereo disparity (e.g., stereo depth) map may be generated based on the offsets of the two luminance images. Each luminance sensor captures one of the two luminance images in this embodiment. Further, in such an embodiment, the chrominance sensor may be used to capture color information for a picture, while one or both luminance sensors capture luminance information. In this embodiment, both of the luminance sensors may be overlaid, fitted, or otherwise associated with one or more polarizing filters to receive and capture surface normal information for a surface, as described in more detail herein. Multiple luminance sensors with polarizing filters may be used, for example, in low light conditions where chrominance information may be lost or muted.

“Chrominance sensors” 202, 206 may be implemented in a variety of fashions and may sense/capture more than just chrominance. For example, the chrominance sensor(s) 202, 206 may be implemented as a Bayer array, an RGB sensor, a CMOS sensor, and so on and so forth. Accordingly, it should be appreciated that a chrominance sensor may also capture luminance information; chrominance is typically derived from the RGB sensor data.

Returning to an embodiment having two chrominance sensors 202, 206 and a single luminance sensor 204, the first chrominance sensor 202 may take the form of a first color imaging device. The luminance sensor may take the form of a luminance imaging device that is overlaid by a polarizing filter. The second chrominance sensor 206 may take the form of a second color imaging device. In one embodiment, the luminance sensor 204 and the two chrominance sensors 202, 206 may be separate integrated circuits. However, in other embodiments, the luminance and chrominance sensors may be formed on the same circuit and/or formed on a single board or other element. In alternative embodiments, the polarizing filter 210 may be placed over either of the chrominance sensors instead of (or in addition to) the luminance sensor.

As shown in FIG. 2, the polarizing filter 210 may be positioned between the luminance sensor 204 and an object 211, such that light reflected off the object passes through the polarizing filter and impacts the corresponding luminance sensor. The luminance sensor 204 may be any electronic sensor capable of detecting various wavelengths of light, such as those commonly used in digital cameras, digital video cameras, mobile telephones and personal digital assistants, web cameras, and so on and so forth.

As discussed above with respect to FIGS. 1A and 1B, the luminance and chrominance sensors 202, 204, 206 may be formed from an array of light-sensitive pixels. The pixel arrangement may vary between sensors or may be identical, in a manner similar to that previously discussed.

In one embodiment, respective color filters may overlay the first and second chrominance sensors and allow those sensors to capture the color portions of a sensed image as chrominance images. Similarly, an additional filter may overlay the luminance sensor and allow it to capture the luminance portion of a sensed image as a luminance image. The luminance image, along with the chrominance images, may be transmitted to the image processing module 208. As will be further described below, the image processing module 208 may combine the luminance image captured by and transmitted from the luminance sensor 204 with the chrominance images captured by and transmitted from the chrominance sensors, to output a composite image.

It should be appreciated that the luminance of an image may be expressed as a weighted sum of red, green and blue wavelengths of the image, in the following manner:

L = 0.59 G + 0.3 R + 0.11 B

where L is luminance, G is detected green light, R is detected red light, and B is detected blue light. The chrominance portion of an image may be the difference between the full-color image and the luminance image. Accordingly, the full-color image may be the chrominance portion of the image combined with the luminance portion of the image. The chrominance portion may be derived by mathematically processing the R, G, and B components of an image, and may be expressed as two signals or a two-dimensional vector for each pixel of an imaging device. For example, the chrominance portion may be defined by two separate components Cr and Cb, where Cr may be proportional to detected red light less detected luminance, and where Cb may be proportional to detected blue light less detected luminance. In some embodiments, the first and second chrominance sensors 202, 206 may be configured to detect red and blue light and not green light, for example, by covering pixel elements of the color imaging devices with a red and blue filter array. This may be done in a checkerboard pattern of red and blue filter portions. In other embodiments, the filters may include a Bayer-pattern filter array, which includes red, blue, and green filters. Alternatively, the filter may be a CYGM (cyan, yellow, green, magenta) or RGBE (red, green, blue, emerald) filter.
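That relationship, together with the stated Cr/Cb proportionality, might be sketched as follows (the luminance weights are taken from the text above; the Cr and Cb expressions are left unscaled because the application does not specify scaling constants):

    import numpy as np

    def split_luma_chroma(rgb):
        """Split an (..., 3) RGB array into a luminance image and two chrominance
        components, using the weighted sum given above."""
        rgb = np.asarray(rgb, dtype=float)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        luma = 0.3 * r + 0.59 * g + 0.11 * b
        cr = r - luma   # proportional to detected red less detected luminance
        cb = b - luma   # proportional to detected blue less detected luminance
        return luma, cr, cb

    # Example: a pure white pixel has luminance ~1 and chrominance ~0.
    print(split_luma_chroma([[1.0, 1.0, 1.0]]))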



The description above is an excerpt; download the full PDF for the complete patent description and claims.

Patent Info
Application #: US 20120075432 A1
Publish Date: 03/29/2012
Document #: 13246821
File Date: 09/27/2011
USPTO Class: 348/48
Other USPTO Classes: 348E13074
International Class: H04N 13/02
Drawings: 6


