Generating images from light fields utilizing virtual viewpoints





Systems and methods for the synthesis of light field images from virtual viewpoints in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system includes a processor and a memory configured to store captured light field image data and an image manipulation application, wherein the captured light field image data includes image data, pixel position data, and a depth map, and wherein the image manipulation application configures the processor to obtain captured light field image data, determine a virtual viewpoint for the captured light field image data, where the virtual viewpoint includes a virtual location and virtual depth information, compute a virtual depth map based on the captured light field image data and the virtual viewpoint, and generate an image from the perspective of the virtual viewpoint based on the captured light field image data and the virtual depth map.

Assignee: Pelican Imaging Corporation, Mountain View, CA, US
USPTO Application #: 20140092281 - Class: 348/262


Inventors: Semyon Nisenzon, Ankit K. Jain



The Patent Description & Claims data below is from USPTO Patent Application 20140092281, Generating images from light fields utilizing virtual viewpoints.


CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 61/707,691, filed on Sep. 28, 2012, the disclosure of which is hereby incorporated by reference in its entirety.

FIELD OF THE INVENTION

The present invention relates to systems and methods for generating images from light field image data and more specifically to systems and methods for generating images from light field image data using virtual viewpoints.

BACKGROUND

Imaging devices, such as cameras, can be used to capture images of portions of the electromagnetic spectrum, such as the visible light spectrum, incident upon an image sensor. For ease of discussion, the term light is generically used to cover radiation across the entire electromagnetic spectrum. In a typical imaging device, light enters through an opening (aperture) at one end of the imaging device and is directed to an image sensor by one or more optical elements such as lenses. The image sensor includes pixels or sensor elements that generate signals upon receiving light via the optical elements. Commonly used image sensors include charge-coupled device (CCD) sensors and complementary metal-oxide-semiconductor (CMOS) sensors.

Image sensors are devices capable of converting an optical image into a digital signal. Image sensors utilized in digital cameras are made up of an array of pixels; the number of pixels determines the megapixel rating of the image sensor. For example, an image sensor having a width×height of 2272×1704 pixels would have an actual pixel count of 3,871,488 pixels and would be considered a 4 megapixel image sensor. Each pixel in an image sensor is capable of capturing light and converting the captured light into electrical signals. In order to separate the colors of light and capture a color image, a Bayer filter is often placed over the image sensor, filtering the incoming light into its red, blue, and green (RGB) components which are then captured by the image sensor. The RGB signal captured by the image sensor plus Bayer filter can then be processed and a color image can be created.
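The megapixel arithmetic and the Bayer mosaic described above can be sketched as follows. This is an illustrative toy, not code from the patent; the function names are ours.

```python
# Toy sketch of the megapixel arithmetic and an RGGB Bayer pattern as
# described above. Illustrative only; all names are our own.

def megapixel_rating(width, height):
    """Return the raw pixel count and the rounded megapixel figure."""
    count = width * height
    return count, round(count / 1_000_000)

def bayer_channel(row, col):
    """Color an RGGB Bayer filter passes at sensor site (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

count, rating = megapixel_rating(2272, 1704)
print(count, rating)  # 3871488 4  (the ~4 megapixel example above)
```

Running this on the 2272×1704 sensor from the text reproduces the 3,871,488 actual pixels and the 4 megapixel rating.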

Generally, image capture utilizes a single image sensor to capture individual images, one at a time. A digital camera typically combines an image sensor and processing capabilities. When the digital camera takes a photograph, the data captured by the image sensor is provided to the processor. Processors are able to control aspects of a captured image by changing image capture parameters of the sensor elements or groups of sensor elements used to capture the image.

The ISO/IEC 10918-1 standard, more commonly referred to as the JPEG standard after the Joint Photographic Experts Group that developed the standard, establishes a standard process for digital compression and coding of still images. The JPEG standard specifies a codec for compressing an image into a bitstream and for decompressing the bitstream back into an image.

SUMMARY OF THE INVENTION

Systems and methods for the synthesis of light field images from virtual viewpoints in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a system configured to synthesize images using captured light field image data includes a processor and a memory connected to the processor and configured to store captured light field image data and an image manipulation application, wherein the captured light field image data includes image data, pixel position data, and a depth map, wherein the depth map includes depth information for one or more pixels in the image data, and wherein the image manipulation application configures the processor to obtain captured light field image data, determine a virtual viewpoint for the captured light field image data based on the pixel position data and the depth map for the captured light field image data, where the virtual viewpoint includes a virtual location and virtual depth information, compute a virtual depth map based on the captured light field image data and the virtual viewpoint, and generate an image from the perspective of the virtual viewpoint based on the captured light field image data and the virtual depth map, where the generated image includes a plurality of pixels selected from the image data based on the pixel position data and the virtual depth map.

In another embodiment of the invention, the virtual viewpoint corresponds to a focal plane in an array camera utilized to create the captured light field image data.

In an additional embodiment of the invention, the captured light field image data further includes a reference viewpoint within the captured light field image data and the virtual viewpoint is a separate viewpoint within the captured light field image data from the reference viewpoint.

In yet another additional embodiment of the invention, the captured light field image data was captured by an array camera having an imager array including a plurality of imagers and the reference viewpoint corresponds to the viewpoint of a first imager within the imager array in the array camera.

In still another additional embodiment of the invention, the virtual viewpoint corresponds to the viewpoint of a second imager within the imager array, where the second imager is separate from the first imager.

In yet still another additional embodiment of the invention, the virtual viewpoint is a viewpoint that does not correspond to the viewpoint of any of the imagers within the imager array.

In yet another embodiment of the invention, the virtual viewpoint is selected from a position selected from the group consisting of in front of the imager array and behind the imager array.

In still another embodiment of the invention, the image manipulation application further configures the processor to generate an image from the perspective of the virtual viewpoint by projecting pixels from the captured light field image data based on the pixel position data and the depth map, where the projected pixels are described in the image data and the depth map.
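The projection step described in this embodiment can be sketched as a forward warp: each source pixel is shifted by a disparity derived from its depth, and a z-buffer resolves collisions. The pinhole-disparity model (horizontal shift = focal × baseline / depth) and all names here are our assumptions for illustration, not the patent's actual method.

```python
# Minimal forward-warp sketch of "projecting pixels based on the pixel
# position data and the depth map". A simple pinhole-disparity model
# stands in for the patent's method; the z-buffer keeps the nearest
# pixel when several source pixels land on the same output site.
import numpy as np

def forward_warp(image, depth, baseline, focal):
    h, w = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            dx = int(round(focal * baseline / depth[y, x]))  # disparity
            xv = x + dx
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]   # nearer pixel wins
                out[y, xv] = image[y, x]
    return out
```

With a constant-depth scene, every pixel shifts by the same disparity, which is a quick way to sanity-check the warp.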

In yet still another embodiment of the invention, the captured light field image data further includes occluded pixel information describing pixels not visible from a reference viewpoint of the captured light field image data and the projected pixels include at least one occluded pixel in the occluded pixel information that is visible from the perspective of the virtual viewpoint.

In yet another additional embodiment of the invention, at least one projected pixel in the generated image is not described in the image data, the pixel position data, and the depth map and the image manipulation application further configures the processor to generate the at least one projected pixel by resampling the image data, the pixel position data, and the depth map.

In still another additional embodiment of the invention, a pinhole camera model is utilized to project pixels within the generated image based on light rays projecting from the virtual viewpoint, where each projected pixel is associated with at least one of the projected light rays.
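A pinhole camera model of the kind referenced above can be sketched as follows: each pixel corresponds to one ray through the camera center, and 3-D points project along those rays. The intrinsics (focal, cx, cy) and all names are illustrative assumptions, not parameters from the patent.

```python
# Sketch of a pinhole camera model: 3-D points project onto the image
# plane, and each pixel (u, v) has an associated light ray through the
# camera center. Intrinsics and names here are illustrative assumptions.
import numpy as np

def pinhole_project(point, focal, cx, cy):
    """Project a 3-D camera-frame point onto the image plane."""
    x, y, z = point
    return (focal * x / z + cx, focal * y / z + cy)

def pixel_ray(u, v, focal, cx, cy):
    """Unit direction of the light ray associated with pixel (u, v)."""
    d = np.array([(u - cx) / focal, (v - cy) / focal, 1.0])
    return d / np.linalg.norm(d)

# A point on the optical axis lands at the principal point:
print(pinhole_project((0.0, 0.0, 5.0), focal=100.0, cx=50.0, cy=50.0))
# (50.0, 50.0)
```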

In yet still another additional embodiment of the invention, projected pixel depth information is determined for at least one pixel in the generated image based on the depth map, the virtual viewpoint, and the light rays associated with the projected pixel.

In yet another embodiment of the invention, the depth information for a projected pixel is based on minimizing the variance for the projected pixel across the image data within the captured light field image data.
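The variance-minimization idea above amounts to a photo-consistency test: the depth whose implied samples agree best across the views wins. The following 1-D toy illustrates that idea; the disparity model and all names are our assumptions, not the patent's actual procedure.

```python
# 1-D toy sketch of depth-by-variance-minimization: for each candidate
# depth, gather the sample that depth implies in each view and keep the
# depth whose samples agree best (lowest variance across the views).
import numpy as np

def best_depth(views, x, baselines, focal, candidates):
    """views: 1-D image rows; x: pixel index in the reference row."""
    best, best_var = None, np.inf
    for d in candidates:
        samples = []
        for row, b in zip(views, baselines):
            xs = x + int(round(focal * b / d))  # disparity into this view
            if 0 <= xs < len(row):
                samples.append(row[xs])
        var = np.var(samples)
        if var < best_var:
            best, best_var = d, var
    return best
```

At the true depth the gathered samples are identical, so the variance drops to zero and that candidate is selected.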

In still another embodiment of the invention, the image manipulation application further configures the processor to combine projected pixels having the same location within the generated image.

In yet still another embodiment of the invention, the pixels are combined based on a weighted average of the pixels, where each pixel's weight is the inverse of the distance from the imager from which the projected pixel originated to the virtual viewpoint.
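The inverse-distance weighting described above can be sketched in a few lines; the function name and the epsilon guard against a zero distance are our own additions.

```python
# Sketch of combining coincident projected pixels with inverse-distance
# weights: each pixel's weight is 1 / (distance from its originating
# imager to the virtual viewpoint). Names and the eps guard are ours.
def combine_pixels(values, distances, eps=1e-9):
    weights = [1.0 / (d + eps) for d in distances]
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Equidistant imagers contribute equally; nearer imagers dominate:
print(combine_pixels([10.0, 20.0], [1.0, 1.0]))  # ~15.0 (plain average)
print(combine_pixels([10.0, 20.0], [1.0, 3.0]))  # ~12.5 (nearer pixel wins)
```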

In yet another additional embodiment of the invention, the system further includes an input device configured to obtain input data indicative of a position within the captured light field image data.

In still another additional embodiment of the invention, the input device is a touchscreen interface.

In yet still another additional embodiment of the invention, the input device is a sensor configured to obtain spatial location information.

In yet another embodiment of the invention, the input device is a camera configured to obtain input data selected from the group consisting of head tracking data and gaze tracking data.

In still another embodiment of the invention, the virtual viewpoint is selected based on the input data.

In yet still another embodiment of the invention, the image manipulation application further configures the processor to obtain first input data indicative of a first position within the captured light field image data, determine a first virtual viewpoint based on the first input data, generate a first image from the perspective of the first virtual viewpoint, obtain second input data indicative of a second position within the captured light field image data, where the second position is separate from the first position, determine a second virtual viewpoint based on the second input data, generate at least one intermediate virtual viewpoint by interpolating between the first virtual viewpoint and the second virtual viewpoint, generate at least one intermediate image based on the at least one generated intermediate virtual viewpoint, where each intermediate image is from the perspective of an intermediate virtual viewpoint, and generate a second image from the perspective of the second virtual viewpoint.
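The interpolation step above can be sketched as simple linear interpolation between two virtual camera positions. Representing a viewpoint as an (x, y, z) tuple, and the function name itself, are our illustrative assumptions.

```python
# Sketch of generating intermediate virtual viewpoints by linearly
# interpolating between two selected viewpoints, so an intermediate
# image can be rendered at each. Viewpoints are (x, y, z) tuples here.
def intermediate_viewpoints(vp_a, vp_b, n):
    """Return n viewpoints evenly spaced strictly between vp_a and vp_b."""
    out = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        out.append(tuple(a + t * (b - a) for a, b in zip(vp_a, vp_b)))
    return out

print(intermediate_viewpoints((0.0, 0.0, 0.0), (4.0, 0.0, 8.0), 3))
# [(1.0, 0.0, 2.0), (2.0, 0.0, 4.0), (3.0, 0.0, 6.0)]
```

Rendering an image at each returned viewpoint yields the smooth transition from the first image to the second described above.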

In yet another additional embodiment of the invention, the image is generated utilizing a super-resolution process.

Still another embodiment of the invention includes a process for generating an image from a virtual viewpoint, including obtaining captured light field image data using an image manipulation device, where the captured light field image data includes image data, pixel position data, and a depth map and where the depth map includes depth information for one or more pixels in the image data, determining a virtual viewpoint for the captured light field image data based on the pixel position data and the depth map for the captured light field image data using the image manipulation device, where the virtual viewpoint includes a virtual location and virtual depth information, computing a virtual depth map based on the captured light field image data and the virtual viewpoint using the image manipulation device, and generating an image from the perspective of the virtual viewpoint based on the captured light field image data and the virtual depth map using the image manipulation device, where the generated image includes a plurality of pixels selected from the image data based on the pixel position data and the virtual depth map.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A conceptually illustrates an array camera including a 5×5 imager array connected with a processor in accordance with an embodiment of the invention.

FIG. 1B conceptually illustrates a 5×5 array camera module in accordance with an embodiment of the invention.

FIG. 1C conceptually illustrates a color filter pattern for a 4×4 array camera module in accordance with an embodiment of the invention.

FIG. 2 is a diagram conceptually illustrating a device capable of processing light field images in accordance with an embodiment of the invention.

FIG. 3A is a diagram conceptually illustrating virtual viewpoints for a given light field in accordance with an embodiment of the invention.

FIG. 3B is a diagram conceptually illustrating a light field image rendered from a virtual viewpoint for a given light field in accordance with an embodiment of the invention.

FIG. 3C is a diagram conceptually illustrating a light field image rendered from a second virtual viewpoint for a given light field in accordance with an embodiment of the invention.

FIG. 4A is a flow chart conceptually illustrating a process for generating a light field image from a virtual viewpoint in a light field in accordance with an embodiment of the invention.

FIG. 4B is a flow chart conceptually illustrating a process for generating a light field image from a virtual viewpoint using projected light rays in accordance with an embodiment of the invention.

FIG. 5 is a flow chart conceptually illustrating a process for reprojecting light rays in relation to a virtual viewpoint in accordance with an embodiment of the invention.

FIG. 6 is a flow chart conceptually illustrating a process for computing a depth map for a virtual viewpoint in accordance with an embodiment of the invention.

FIG. 7 is a flow chart conceptually illustrating a process for projecting pixels to form a light field image corresponding to a virtual viewpoint in accordance with an embodiment of the invention.

FIG. 8 is a flow chart conceptually illustrating a process for interactively generating light field images from virtual viewpoints in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

Turning now to the drawings, systems and methods for generating images from light field image data using virtual viewpoints in accordance with embodiments of the invention are illustrated. A light field is often defined as a 4D function characterizing the light from all directions at all points in a scene and can be interpreted as a two-dimensional (2D) collection of 2D images of a scene. Array cameras, such as those described in U.S. patent application Ser. No. 12/935,504 entitled “Capturing and Processing of Images using Monolithic Camera Array with Heterogeneous Imagers” to Venkataraman et al., can be utilized to capture light fields. In a number of embodiments, super-resolution processes such as those described in U.S. patent application Ser. No. 12/967,807 entitled “Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes” to Lelescu et al., are utilized to synthesize a higher resolution 2D image or a stereo pair of higher resolution 2D images from the lower resolution images in the light field captured by an array camera. The terms high or higher resolution and low or lower resolution are used here in a relative sense and not to indicate the specific resolutions of the images captured by the array camera. The disclosures of U.S. patent application Ser. No. 12/935,504 and U.S. patent application Ser. No. 12/967,807 are hereby incorporated by reference in their entirety.

A file containing an image synthesized from light field image data and metadata derived from the light field image data can be referred to as a light field image file. The encoded image in a light field image file is typically synthesized using a super-resolution process from a number of lower resolution images. The light field image file can also include metadata describing the synthesized image derived from the light field image data that enables post processing of the synthesized image. In many embodiments, a light field image file is created by encoding an image synthesized from light field image data and combining the encoded image with a depth map derived from the light field image data. In several embodiments, the encoded image is synthesized from a reference viewpoint and the metadata includes information concerning pixels in the light field image that are occluded from the reference viewpoint. In a number of embodiments, the metadata can also include additional information including (but not limited to) auxiliary maps such as confidence maps, edge maps, occluded pixel information, and missing pixel maps that can be utilized during post processing of the encoded image to improve the quality of an image rendered using the light field image data file. By transmitting a light field image file including an encoded image and metadata describing the encoded image, a rendering device (i.e. a device configured to generate an image rendered using the information within the light field image file) can render new images using the information within the file without the need to perform super-resolution processing on the original light field image data. In this way, the amount of data transmitted to the rendering device and the computational complexity of rendering an image are reduced.
In several embodiments, rendering devices are configured to perform processes including (but not limited to) refocusing the encoded image based upon a focal plane specified by the user, synthesizing an image from a different viewpoint, and generating a stereo pair of images. A variety of file formats may be utilized to store light field image files in accordance with embodiments of the invention. One such file format is the JPEG-DX extension to ISO/IEC 10918-1 described in U.S. Provisional Patent Application No. 61/540,188 entitled “JPEG-DX: A Backwards-compatible, Dynamic Focus Extension to JPEG”, to Venkataraman et al., filed Sep. 28, 2011, the entirety of which is incorporated by reference.



Patent Info
Application #: US 20140092281 A1
Publish Date: 04/03/2014
Document #: 14042275
File Date: 09/30/2013
USPTO Class: 348/262
Other USPTO Classes: (none listed)
International Class: H04N 9/09
Drawings: 12

