CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a divisional of U.S. patent application Ser. No. 11/868,522 filed on Oct. 7, 2007, entitled “DIGITAL NETWORK-BASED VIDEO TAGGING SYSTEM,” the entire disclosure of which is hereby incorporated by reference herein.
Playback of video on digital networks such as the Internet is becoming more prevalent. In addition to viewing digital video, some sites allow users to post comments about the video in a bulletin board, “blog,” chat, email or other web page-based format. For example, social networking sites typically allow viewers of a video to post their comments on a web page from which the video can also be viewed. The comments can be displayed in reverse chronological order (most recent first) in a list below the video. Once a viewer has watched the video, the viewer can then read the comments and add a new comment, if desired.
Commercial sponsors or other third parties may have a desire to advertise or otherwise provide information in association with a video. Such ads typically include text or images placed near the video such as commercial text, banner ads, images, etc. In some cases, the advertisements may appear for a short time before the video is allowed to play. Or the advertisements may be placed in a small region along the bottom of the video or adjacent to the video while the video is playing. Typically, these advertisements are created by an ad agency and integrated with the video or with a web page that hosts playback of the video.
Although these approaches allow some user and third-party participation to communicate about, or in association with, video content, such communication is limited.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a first example video-tag-handling system based on a social networking site.
FIG. 2 illustrates a second example video-tag-handling system.
FIG. 3 illustrates a third example video-tag-handling system.
FIG. 4 illustrates a first example video-playback interface suitable for use with the video-tag-handling systems of FIGS. 1-3.
FIG. 5 illustrates a first example video-tag authoring interface that may be activated via the video-playback interface of FIG. 4.
FIG. 6 illustrates a first example video-tag animation interface that may be activated via the video-tag authoring interface of FIG. 5.
FIG. 7 illustrates a second video-tag authoring interface, which is suitable for use with the video-tag-handling systems of FIGS. 1-3, and enables users to author, edit, and animate video tags.
FIG. 8 illustrates a third video-tag authoring interface, which is suitable for use with the video-tag-handling systems of FIGS. 1-3, and is adapted for use with blogging applications.
FIG. 9 is a flow diagram of a first method suitable for use with the video-tag-handling systems and interfaces of FIGS. 1-8.
FIG. 10 is a flow diagram of a second method suitable for use with the video-tag-handling systems and interfaces of FIGS. 1-8.
FIG. 11 is a flow diagram of a third method suitable for use with the video-tag-handling systems and interfaces of FIGS. 1-8.
FIG. 12 illustrates a system for associating a tag dataset with a video and for synchronizing additional content included in the tag dataset with playback of the video.
FIG. 13 illustrates an example tag dataset format.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
A preferred embodiment of the invention allows additional information to be presented in synchronization with playback of a video. Tag information includes visual information such as text, images, symbols, etc., and other types of information such as animation or behavior, audio, links to network locations or objects, etc., that is not included in an original video to which the tags are applied. The tag information can be synchronized to appear at specified times and places in the video and exhibit predetermined behavior when the video is presented to a user. For example, one type of tag can identify items in a video scene, as described in the related patent applications referenced above. Tag datasets are associated with a video by an identification process and can be created, edited and maintained separately from associated videos. A synchronization process maintains the desired tag presentation when playback of a video is modified with standard video transport controls.
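The timed, positioned tag information described above can be sketched as a simple data structure. The following is a minimal illustration, not the application's actual format; the `Tag` field names and the `active_tags` helper are assumptions introduced for clarity.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Tag:
    """One video tag. Fields are hypothetical, inferred from the description:
    visual text, a time window in the video, a display position, and an
    optional link to a network location or object."""
    text: str                    # visual information (text, symbol label, etc.)
    start: float                 # playback time (seconds) at which the tag appears
    end: float                   # playback time at which the tag disappears
    x: float = 0.0               # display position within the frame
    y: float = 0.0
    url: Optional[str] = None    # optional link to a network location

def active_tags(tags: List[Tag], playback_time: float) -> List[Tag]:
    """Return the tags that should be visible at the given playback time."""
    return [t for t in tags if t.start <= playback_time < t.end]
```

A tag dataset is then simply a list of such records, presented whenever playback time falls inside a tag's window.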
In an example embodiment, a tag controller manages tag datasets that can be modified by one or more users. Users can create, delete or modify tags and specify how the tags are synchronized to a video. The tag controller's functions can include, for example, prohibiting or filtering information, replacing or adding information, or otherwise modifying the user-created information. The tag controller can perform such functions to ensure that undesirable user-generated content is not provided to other users. The tag controller can also allow third-party information to be included in addition to or in place of user content where it is determined appropriate, or merely desirable, to include such third-party information. In a particular embodiment, network linking to objects or locations (i.e., “hyper-linking”) passes through a central controller so that user behavior (e.g., clicking on a link, visiting a website, making a purchase, etc.) can be monitored or controlled and used for further purposes.
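The controller functions described above (prohibiting, filtering, or augmenting user-created tag text) can be sketched as follows. This is a minimal illustration under stated assumptions: the word list, the `moderate_tag` name, and the sponsored-suffix mechanism are all hypothetical, not the controller's actual behavior.

```python
from typing import Optional

# Hypothetical list of prohibited words; a real controller would use a
# much richer moderation policy.
PROHIBITED = {"spam", "offensive"}

def moderate_tag(text: str, sponsored_suffix: str = "") -> Optional[str]:
    """Filter or augment user-created tag text.

    Returns None to prohibit the tag entirely; otherwise returns the
    (possibly modified) text, e.g. with third-party information appended."""
    words = text.lower().split()
    if any(w in PROHIBITED for w in words):
        return None                          # prohibit undesirable content
    if sponsored_suffix:
        return f"{text} {sponsored_suffix}"  # add third-party information
    return text
```

Hyper-link monitoring could be handled analogously, by rewriting a tag's link to pass through a central controller URL before reaching its destination.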
Various embodiments are described, including a web-based social network. The social network website embodiment allows a community of users to generate and modify tags on selected videos for purposes such as user discussion, commercial or informational speech, and educational or entertaining dialogue.
Associating a Tag Dataset with a Video
FIG. 12 illustrates a system for associating a tag dataset with a video and for synchronizing additional content included in the tag dataset with playback of the video. A particular video source can be selected from a collection of multiple video sources 420. One or more tag datasets from tag dataset collection 430 are identified for association with the selected video. The tag dataset includes additional content for presentation in synchronization with the video playback to user 402 via playback engine 440, also referred to as processing system 440, or simply process 440.
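The identification process that associates tag datasets with a selected video can be sketched as a simple lookup. The registry structure and names below are illustrative assumptions; the identifiers echo the reference numerals in FIG. 12 but do not reflect any actual implementation.

```python
from typing import Dict, List

# Hypothetical registry mapping a video identifier to the identifiers of
# its associated tag datasets (cf. video source 422 and tag dataset 434).
TAG_INDEX: Dict[str, List[str]] = {
    "video-422": ["dataset-434", "dataset-435"],
}

def datasets_for(video_id: str) -> List[str]:
    """Identify the tag dataset(s) associated with a selected video.
    Returns an empty list when no dataset has been associated."""
    return TAG_INDEX.get(video_id, [])
```

Because the association is kept in a separate index, tag datasets can be created, edited and maintained independently of the videos themselves, as the description notes.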
FIG. 12 shows user 402 provided with output devices such as display 404 and speaker 406. Display 404 and speaker 406 are used to present image and audio information to the user 402 as, for example, during the presentation of digital video content. The digital video content may be any sequence of images displayed to a viewer to create movement or motion as is known in the art. Any video, movie or animation format (e.g., Moving Picture Experts Group (MPEG), Audio Video Interleave (AVI), Adobe video formats such as FLV, motion JPEG, etc.) can be employed. Any type of suitable delivery method may be used such as playing back from a local or remote file, file streaming, etc. Although embodiments of the invention are discussed primarily with respect to digital video formats and delivery, other formats or approaches for displaying information may also benefit from embodiments discussed and claimed herein such as analog transmissions, computer rendered (so-called “machinima”) content, animation formats such as Adobe Flash™ SWF file formats, Microsoft™ Sparkle and Silverlight formats, etc.
User 402 is also provided with one or more user input devices 414 such as keyboard 408, mouse 410, remote control 412, etc. In general, any suitable type of user input mechanism may be employed unless otherwise noted. User input signals and presentation output signals are controlled by control 442 within playback engine 440. Functions performed by playback engine 440 may be, for example, performed by processes within a device such as a personal computer, personal digital assistant (PDA), a cell phone, email device, music player, or other presently known or future-developed processing device with sufficient presentation (e.g., display, audio) and user input (if needed) capabilities. Playback engine functionality can also be provided in other ways such as within or in cooperation with a web page browser, provided by a remote device or process via a network, etc.
In some embodiments, presentation of audio information alone, without visual tag content, may benefit from features of the invention. In applications where visual display is used, any type of display mechanism may be used. For example, display goggles, projected images, multiple display screens, television screens, or other displays may be employed.
Processing system 440 can include typical processing components (not shown) such as a processor, memory, stored instructions, network interface, internal bus, etc. Processing system 440 includes synchronization function 444 which synchronizes tag data such as tag dataset 434 with a video source such as video source 422.
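A synchronization function such as 444 must keep tag presentation aligned with playback even as the user operates standard transport controls (play, pause, seek). One minimal way to sketch this is a clock that maps wall-clock time to media time; the class below is an illustrative assumption, not the actual synchronization function.

```python
class SyncClock:
    """Minimal sketch of a synchronization function: tracks media time
    through play, pause, and seek so that tag visibility windows can be
    evaluated against the correct playback position."""

    def __init__(self) -> None:
        self.media_time = 0.0   # playback position in seconds
        self.playing = False
        self._last_wall = 0.0   # wall-clock time when playback last resumed

    def play(self, wall_time: float) -> None:
        self._last_wall = wall_time
        self.playing = True

    def pause(self, wall_time: float) -> None:
        self.media_time = self.current(wall_time)
        self.playing = False

    def seek(self, media_time: float, wall_time: float) -> None:
        self.media_time = media_time
        self._last_wall = wall_time

    def current(self, wall_time: float) -> float:
        """Media time now: advances while playing, frozen while paused."""
        if self.playing:
            return self.media_time + (wall_time - self._last_wall)
        return self.media_time
```

At each display refresh, the playback engine would query `current(...)` and present whichever tags have visibility windows containing that media time, so pausing or seeking the video automatically pauses or repositions the tags as well.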