System and method for inserting content into an image sequence


Title: System and method for inserting content into an image sequence.
Abstract: Systems and methods for augmenting an image sequence with content are disclosed. A system may include a character generator, a graphics frame buffer and a graphics insertion system. The character generator may generate pixel block content. The graphics frame buffer may be in communication with the character generator and store the pixel block content. The graphics insertion system may be in communication with the graphics frame buffer, and may be used to retrieve the pixel block content from the graphics frame buffer and modify an image sequence with an insert graphic based on the pixel block content. ...

Assignee: Sportsmedia Technology Corporation - Durham, NC, US
Inventors: John D. Dengler, Erik J. Garci, Brian C. Cox, Kenneth T. Tolman, Hans X. Weber, Gerard J. Hall
USPTO Application #: 20110057941 - Class: 345/545 (USPTO) - Published 03/10/11


The Patent Description & Claims data below is from USPTO Patent Application 20110057941, System and method for inserting content into an image sequence.



This application is a continuation of U.S. patent application Ser. No. 11/278,722, filed Apr. 4, 2006, which is a continuation of U.S. patent application Ser. No. 10/613,273, filed Jul. 3, 2003, and claims the benefit and priority of these applications, each of which is incorporated herein by reference in its entirety.


The present invention is directed to a system and method for augmenting an image sequence with content, such that the content appears to have been part of the original scene as displayed by the image sequence.


This section presents a subset of the nomenclature relevant to the domain of the present invention. Precise definitions of these terms will aid the reader in correctly interpreting this document. Note that many of these terms have been used differently or inconsistently in previously published descriptions of prior art. Nevertheless, these terms are used consistently throughout this document, according to the Glossary contained herein.

AUGMENTED REALITY (AR)—the virtual augmentation of a real world physical environment (scene) for the purpose of indirect (video or other) display to a viewer, such that said augmentation appears to belong within the real world. For example, an advertisement may be added to the television display of a blank baseball outfield wall. From all possible camera views, this advertisement will appear, to the television viewer, to be painted onto the outfield wall.

AUGMENTED REALITY INSERT (AR INSERT)—a rendered graphic placed into a camera view which allows for creating the illusion that the rendered graphic is indeed part of the real world scene being displayed.

BACKGROUND—the portion of the scene intended to be covered by the AR insert. The background typically includes, but is not limited to, unchanging parts of the physical scene; e.g., the playing field, bleachers, etc.

COLOR SEPARATION—the process of determining what is foreground and what is background within a displayed scene. Typically, an AR insert is drawn on top of the background elements, but underneath the foreground elements; thus giving the impression that the object is indeed part of the background within the scene.
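The idea of drawing an insert over the background but under the foreground can be sketched with a simple chroma-distance mask. This is a minimal illustration, not the patent's color separation method; the `bg_color` and `tol` parameters are assumptions for the example.

```python
import numpy as np

def color_separation_mask(frame, bg_color, tol=40.0):
    """Classify each pixel as background (True) or foreground (False)
    by its color distance from a reference background color.

    A minimal chroma-distance sketch; real separators are far more
    sophisticated (lighting variation, shadows, motion, etc.).
    """
    dist = np.linalg.norm(frame.astype(float) - np.asarray(bg_color, float),
                          axis=-1)
    return dist < tol

def composite_insert(frame, insert_rgba, mask):
    """Alpha-blend the AR insert over background pixels only, so that
    foreground objects (players, referees) appear in front of it."""
    rgb = insert_rgba[..., :3].astype(float)
    # Insert alpha is zeroed wherever the mask says "foreground".
    alpha = (insert_rgba[..., 3:4].astype(float) / 255.0) * mask[..., None]
    out = alpha * rgb + (1.0 - alpha) * frame.astype(float)
    return out.astype(np.uint8)
```

A green background pixel receives the insert, while a red foreground pixel is left untouched, giving the impression that the insert lies behind the foreground.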

COLOR SEPARATOR—the portion of an AR system that implements the color separation method.

FOREGROUND—the portion of the scene intended to appear in front of the AR insert. The foreground typically includes, but is not limited to, moving parts of the physical scene; e.g., players, referees, yard markers, swirling leaves, fans, etc.

GRAPHICS FRAME BUFFER (GFB)—a two-dimensional buffer which stores pixel data content, typically in the form of RGBA (red-green-blue-alpha) information.
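A graphics frame buffer of this kind can be modeled as a two-dimensional array of RGBA quadruples. The class and method names below are illustrative, not taken from the patent.

```python
import numpy as np

class GraphicsFrameBuffer:
    """Two-dimensional buffer of RGBA pixel data (one byte per channel).

    A sketch of the glossary definition: pixel block content can be
    written into the buffer (e.g. by a character generator) and later
    retrieved (e.g. by a graphics insertion system).
    """

    def __init__(self, width, height):
        # Initialized fully transparent: R = G = B = A = 0.
        self.pixels = np.zeros((height, width, 4), dtype=np.uint8)

    def write_block(self, x, y, block):
        """Store a pixel block with its top-left corner at (x, y)."""
        h, w, _ = block.shape
        self.pixels[y:y + h, x:x + w] = block

    def read_block(self, x, y, w, h):
        """Retrieve a copy of the w-by-h pixel block at (x, y)."""
        return self.pixels[y:y + h, x:x + w].copy()
```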

INDUCTIVE TRANSFORM—the transformation function used by the view modeler for the purpose of converting a point P[a] within view A to point P[b] within different view B, such that P[a] and P[b] identify the same location within real world space.
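For points lying on a plane (such as a playing field), one common form such a view-to-view transform can take is a 3x3 planar homography applied in homogeneous coordinates. The patent does not specify the transform's form; the homography below is only an illustrative instance.

```python
import numpy as np

def inductive_transform(H, p_a):
    """Map point p_a = (x, y) in view A to the corresponding point in
    view B via a 3x3 homography H, so both points identify the same
    real-world location on a shared plane.

    An illustrative sketch; H would be produced by the view modeler.
    """
    x, y = p_a
    xb, yb, wb = H @ np.array([x, y, 1.0])
    return (xb / wb, yb / wb)  # perspective divide back to pixel coords
```

With the identity matrix for H the point is unchanged, and a translation homography shifts it, as expected.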

REAL WORLD SPACE—the three dimensional physical space of the scene. Dimensions within real world space represent real world scale units of physical measurements, such as those measured relative to the location of the broadcast camera. The units of measurement within the real world space coordinate system are required to be real world units, such as millimeters, feet, etc.

SCENE—the actual, physical real world environment which is displayed.

SCENE COMPONENT—a portion of the scene, defined due to its significance with respect to the actual broadcast coverage of the event. For example, during a football game, the football field may be defined as a scene component. A three-dimensional model representation of a scene component is referred to as a scene component model (SCM).

VIEW—the image of a scene, as generated by a specific camera. The view of a scene is determined by the placement and orientation of the camera relative to the scene, as well as intrinsic parameters of the camera, such as radial distortion of the camera lens. The term “camera view” is used synonymously with “view” throughout this document.

VIEW MODELING—the process of determining and representing the perspective and display characteristics associated with the camera view, for the purpose of realistically rendering AR inserts into that view.
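The "perspective and display characteristics" a view model must capture can be illustrated with the standard pinhole camera model: intrinsics K (focal length, principal point) and extrinsics R, t (camera pose). This is a textbook sketch of what view modeling entails, not the patent's method; lens distortion, which the glossary notes is also part of a view, is omitted here.

```python
import numpy as np

def project_point(K, R, t, X_world):
    """Project a 3-D real-world point into 2-D pixel coordinates.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: 3-vector translation.
    A minimal pinhole-model sketch (no radial distortion).
    """
    X_cam = R @ X_world + t   # world -> camera coordinates
    x = K @ X_cam             # camera -> homogeneous pixel coordinates
    return x[:2] / x[2]       # perspective divide
```

A point on the optical axis projects to the principal point; moving it sideways shifts the projection by focal_length * offset / depth.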

VIEW MODELER—the portion of an AR system that implements the view modeling method.

In the remainder of this document, please refer to the Glossary section for clarification of domain specific nomenclature.


Today, there exist multiple examples of AR inserts within the domain of broadcast television. For example, a staple of many current television broadcasts of football games in the U.S. is the display of a virtual line on the playing field which encompasses the yard line which the offensive team must cross in order to achieve a first down. Another example of an AR insert during a sports broadcast is the placement of virtual advertisements into the stadium or arena where the game is being played. For example, during the television broadcast of a baseball game, a virtual advertising billboard may be placed onto the backstop behind home plate. The content of these virtual advertisements will typically be changed each inning in order to support multiple sponsors during the game. Another common example of an AR insert, within the domain of news broadcasts, is the creation of a virtual studio. Virtual studios typically involve the display of walls, desks, screens, and other studio equipment around a newscaster in order to give the impression that a full studio set has been constructed.

It should be noted that the overlay of an AR insert onto either static or moving objects is supported by the present invention. For example, a logo may be placed onto the hood of a moving car during an automobile race. The display of such a moving AR insert requires a system and method to support dynamic motion throughout the scene. The present invention includes such a method.

Referring to the Glossary section above, real world space is defined as the three dimensional physical space of the scene. Locations (coordinates) are defined within real world space, such as coordinates relative to the location of the broadcast camera. The units of measurement within the real world space coordinate system are required to be real world units, such as millimeters, feet, etc. A view modeling method may be considered "real world space dependent" if the method depends on knowledge of any locations or measurements within real world space; i.e., in real world units in the x, y and z directions, such as those relative to the camera.

The problems with a real world space dependent view modeling approach are related to the fact that both collecting and maintaining three dimensional real world space location and measurement information is often an imposing or even impractical task. With respect to the area of information collection, the gathering of highly accurate real world location and measurement information often involves the usage of specialized and expensive equipment, such as GPS systems, survey equipment, laser planes, or inertial navigation systems (e.g., see U.S. Pat. No. 4,084,184 to Crain and U.S. Pat. No. 6,266,100 to Gloudemans, et al.). The usage of such equipment implies that special training must be given to technicians who will be setting up and calibrating this equipment on-site at the broadcast venue. This limits the usefulness of such AR systems when used within a broadcast environment where television personnel who have not received special training will be required to set up and operate the AR system. Furthermore, the gathering of location and measurement information using such equipment is often time consuming. This means that AR systems which depend on this equipment may be impractical within a television broadcast setup environment where production costs have been trimmed by limiting on-site setup time for the television crew.
