Method and system for presenting data over a network based on network user choices and collecting real-time data related to said choices



A character having a plurality of attributes is created by a network user while within a character-enabled network site. Each attribute is defined by at least one of either audio data and/or visual image data and is selected by the user from a plurality of attributes presented to the user through a user interface. The combination of attributes defines a persona for the character. At least one of either an audio presentation and/or a visual image presentation is provided to the user interface. The presentations presented are selected from a plurality of presentations based on the character's persona. Data related to character attributes are stored in a database. One or more of the presentations presented to the user may be interactive, in that it allows for the user to make choices.

Assignee: Treehouse Avatar Technologies Inc. - Ottawa, CA
USPTO Application #: 20120297309 - Class: 715/738 - Published 11/22/2012
Class 715: Data Processing: Presentation Processing Of Document, Operator Interface Processing, And Screen Saver Display Processing > Operator Interface (e.g., Graphical User Interface) > For Plural Users Or Sites (e.g., Network) > Network Resource Browsing Or Navigating


CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of pending U.S. application Ser. No. 11/186,723, filed Jul. 20, 2005, which is a continuation of U.S. application Ser. No. 09/614,572, filed Jul. 12, 2000 and issued Oct. 4, 2005 as U.S. Pat. No. 6,952,716.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates generally to an apparatus and method for presenting data over an information network based on choices made by the users of the network and collecting data related to the choices made by the users. More particularly, the invention relates to an apparatus and method for presenting audio presentations and visual image presentations to a network user based on choices made by the user while in a network site and collecting data related to the choices in real-time. As used herein “visual image” is broadly defined as drawn, printed or modeled objects, characters or scenes, including still, animation, motion, live action and video. Throughout the specification, the term “character” is used to describe certain aspects and features of the invention, for example, the term “character-enabled” is often used. The use of “character” instead of a collective “character, object or scene” is done for ease in readability of the specification and is not intended in any way to limit the scope of the invention.

2. Description of Related Art

The information and data made available over a network site are typically the same for each visitor to that network site. For example, in the context of the world-wide-web (“the web”), each visitor to a web site is generally presented the same audio and visual image data contained within the various web pages comprising the web site. Links presented on the web pages generally transfer the visitor to other web pages or in some cases to other web sites. All in all, contemporary web sites are static in nature in that they fail to take into consideration the individuality of their visitors and instead present to each visitor a substantially identical audio/visual experience. As a result, visitors to contemporary web sites often become bored with the web site in a relatively short time, thereby reducing visitor time on a web site and the possibility of frequent, repeat visits by the user.

Hence, those concerned with increasing network site loyalty have sensed the need for an apparatus and method for presenting to network users audio data and visual image data that is indicative of the individuality of the network user. The present invention fulfills this need and others.

The collection of data related to the personal choices and preferences of an individual is essential for effective market research. The major purpose of market research is to minimize the risk to be undertaken by a company. By itself, market research is rarely conclusive, but instead is a useful tool to enable companies to make decisions that are more informed. Market research is used for a variety of purposes, including: market strategy, product development, product adoption, program evaluation, price sensitivity, name and message testing, awareness, usage, attitude, and behavior tracking, advertising testing, market tracking, customer satisfaction, customer profiling and segmentation, corporate image studies, employee satisfaction, benchmarking and public opinion polls.

There are two basic types of market research, qualitative and quantitative. Qualitative research involves the more “touchy-feely” aspect of gauging tastes, preferences and opinions, and includes focus groups, on-line focus groups, one-on-one interviews and executive interviews. Quantitative research involves the sampling of a base of respondents to enable the statistical inference of the data over a larger population. The data obtained is tabulated into useful categories that allow the researcher to draw statistically sound conclusions. Quantitative research includes telephone surveys, mail surveys, intercept surveys and e-mail surveys.

Current market research is expensive and often time consuming. For example, for a hypothetical manufacturing company to gauge the tastes, preferences and opinions of the teen market as a basis to improve product development and enhance revenues, it has been suggested that focus groups, on-line focus groups and mall intercepts are the best approaches.

The cost estimate for a market research firm to conduct, analyze and summarize a focus group of eight to ten people is between $4,000 and $6,000. Market research firms also employ the Internet to conduct focus group studies. Some firms have a database of e-mail addresses of individuals who have agreed to be surveyed on an as-needed basis, while other firms purchase lists of e-mail addresses that fit a targeted profile. These focus groups are conducted by showing a user pictures of products or a concept and then posing a series of questions to the user. Those responses are then tabulated with the responses from other users. The costs associated with on-line focus groups are similar to those of regular focus groups.

The most common quantitative method suggested for teen-market analysis is the mall intercept. In a mall intercept, interviewers intercept mall shoppers who meet a certain targeted profile. These individuals are then interviewed for no more than twenty minutes and asked product and concept questions. The cost to perform a mall-intercept study varies, depending on the number of respondents targeted, the malls involved, and the time involved to conduct the surveys. For example, the cost of a mall intercept in which 1,000 responses are received from shoppers in several geographic regions throughout the US may be as high as $100,000.

Hence, those concerned with collecting information related to user and consumer choices and preferences have sensed a need for an apparatus and method that enables a less expensive, more efficient and more reliable means of capturing specific and broad-base data on users, consumers and products. A need has also been felt for an apparatus and method of collecting market research data in real-time. The present invention clearly fulfills these needs and others.

SUMMARY OF THE INVENTION

Briefly, and in general terms, the present invention is directed to an apparatus and method that employs selectable and modifiable animation to collect data related to the choices made by the users of an information network.

In a first aspect, the invention relates to a method having application within an information network having at least one character-enabled network site. The method provides for the presentation of data to a network user based on choices made by the user while the user is within a character-enabled network site. In its basic form the method includes the step of creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a persona for the character. Each attribute is defined by at least one of either audio data and/or visual image data. An attribute may comprise one or more pieces of audio data, one or more pieces of visual image data or a combination of one or more pieces of audio data and visual image data. The method further includes the step of providing to the user interface, at least one of either an audio presentation or a visual image presentation selected from a plurality of presentations based on the persona of the character created.
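The data model implied by this first aspect can be sketched in a few lines of Python. This is only an illustrative reading of the claim language; the class names, fields, and the tag-overlap selection rule are assumptions, not anything prescribed by the patent.

```python
# Illustrative sketch only: names, fields, and the selection rule are assumptions.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Attribute:
    """One user-selected attribute, defined by audio and/or visual image data."""
    name: str                           # e.g. "shirt", "hairstyle", "catchphrase"
    value: str                          # the option the user picked
    audio_data: Optional[bytes] = None  # sound clip backing the attribute, if any
    image_data: Optional[bytes] = None  # still/animation backing the attribute, if any


@dataclass
class Character:
    user_id: str
    attributes: list[Attribute] = field(default_factory=list)

    def persona(self) -> set[str]:
        """The combination of selected attribute values defines the persona."""
        return {a.value for a in self.attributes}


def select_presentations(character: Character, catalogue: dict[str, set[str]]) -> list[str]:
    """Pick presentations whose tags overlap the character's persona (hypothetical rule)."""
    persona = character.persona()
    return [name for name, tags in catalogue.items() if tags & persona]
```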

By providing audio and visual image presentations to the user interface based on the persona of the created character, the present invention presents to the user a customized audio and/or visual image experience while the user is visiting the network site.

In a more detailed facet of the invention, the method further comprises the step of storing persona data indicative of the selected attributes. By storing this data, the present invention allows for the collection of user choices which may be indicative of the user's tastes, preferences and opinions. In another detailed aspect, the plurality of presentations may include passive presentations and interactive presentations, each in turn comprising one or both of a visual image displayed on the user interface and sound heard through the user interface. In another detailed facet, when an interactive presentation is provided to the user interface, the method further includes the step of, in response to user interaction with the interactive presentation, providing to the user interface at least one of either an audio presentation and/or a visual image presentation selected from the plurality of presentations. By providing audio and/or visual image presentations to the user interface based on the response made by the user to an interactive presentation, the present invention allows for further customization of the audio/visual experience. In yet another detailed aspect of the invention, the method further includes the step of storing data indicative of user interaction with the interactive presentation.
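A minimal sketch of these two facets, storing persona selections and recording responses to an interactive presentation, might look like the following. The table names, columns, and the follow-up mapping are hypothetical; the patent does not specify a schema.

```python
# Hypothetical schema and helpers; nothing here is prescribed by the patent text.
import sqlite3
from typing import Optional

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE persona_choices (user_id TEXT, attribute TEXT, value TEXT)")
db.execute("CREATE TABLE interactions (user_id TEXT, presentation_id TEXT, response TEXT)")


def store_persona_choice(user_id: str, attribute: str, value: str) -> None:
    """Persist one attribute selection so user choices can later be analyzed."""
    db.execute("INSERT INTO persona_choices VALUES (?, ?, ?)", (user_id, attribute, value))
    db.commit()


def handle_interaction(user_id: str, presentation_id: str, response: str,
                       follow_ups: dict[tuple[str, str], str]) -> Optional[str]:
    """Record the user's response and return the next presentation linked to it."""
    db.execute("INSERT INTO interactions VALUES (?, ?, ?)", (user_id, presentation_id, response))
    db.commit()
    return follow_ups.get((presentation_id, response))
```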

In a second aspect, the invention relates to an apparatus for presenting data to a network user based on choices made by the user while within a character-enabled network site. The apparatus includes a character processor for creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a persona for the character. Each attribute is defined by audio data and/or visual image data. The apparatus further includes a selection processor for providing to the user interface, at least one of either an audio presentation and/or a visual image presentation selected from a plurality of presentations based on the persona of the character created.

In a third aspect, the invention relates to a method having application within an information network having at least one character-enabled network site. The method provides for the presentation of data to a network user based on choices made by the user while the user is within a character-enabled network site. In its basic form the method includes the step of associating a character with the user. The character has a plurality of attributes, each defined by at least one of either audio data and/or visual image data. The plurality of attributes collectively defines a character persona. The method further includes the step of providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona. The interactive presentation is defined by audio data and/or visual image data. Also included in the method is the step of, in response to user interaction with the interactive presentation, providing to the user interface at least one of another interactive presentation and a passive presentation. The passive presentation is defined by at least one of audio data and visual image data.

By providing one or more of either an interactive or a passive presentation to the user interface based on the responses and choices made by the user to an interactive presentation, the present invention takes into account the actions of the user, which are likely to be indicative of the tastes, preferences and opinions of the user, and customizes the audio/visual experience presented to the user accordingly.

In a detailed aspect of the invention, the step of providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona includes the steps of linking the character persona with interactive presentations of interest; and selecting for presentation to the user interface those interactive presentations that are linked with the character persona. In another facet of the invention, the step of providing to the user interface at least one of another interactive presentation and a passive presentation in response to user interaction with the interactive presentation comprises the steps of linking the user interaction with other interactive presentations and passive presentations of interest; and selecting for presentation to the user interface, those other interactive presentations and passive presentations that are linked with the character persona.
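Read literally, the linking and selecting steps amount to a look-up from persona values to presentations of interest. The sketch below is one possible reading; the link table and its contents are invented for illustration.

```python
# Invented example data; the real linkage could be a database table or frame set.
PERSONA_LINKS = {
    "skateboarding": ["skate_video_quiz", "sneaker_poll"],
    "rock_music": ["band_trivia", "concert_clip_vote"],
}


def select_interactive_presentations(persona: set[str]) -> list[str]:
    """Return only the interactive presentations linked to the character persona."""
    selected: list[str] = []
    for value in persona:
        selected.extend(PERSONA_LINKS.get(value, []))
    return selected


# A persona with skateboarding and rock-music attributes gets both sets of links.
print(select_interactive_presentations({"rock_music", "skateboarding"}))
```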

In a fourth aspect, the invention relates to an apparatus for presenting data to a network user based on choices made by the user while within a character-enabled network site. The apparatus includes a character processor for associating a character with the user. The character has a plurality of attributes, each attribute defined by at least one of either audio data and/or visual image data. The plurality of attributes collectively defines a character persona. In a basic configuration of the apparatus the character processor may comprise a user interface functioning in cooperation with site programs which may be resident in the character-enabled network site. The apparatus further includes a selection processor for providing to the user interface, at least one interactive presentation selected from a plurality of presentations based on the character persona. The interactive presentation is defined by audio data and/or visual image data. The selection processor also, in response to user interaction with the interactive presentation, provides to the user interface at least one of another interactive presentation and a passive presentation. The passive presentation is defined by at least one of either audio data and/or visual image data. In a basic configuration of the apparatus the selection processor may comprise site programs which may be resident in the character-enabled network site. These site programs operate in conjunction with various stored audio data/presentations and visual image data/presentations to provide the presentations to the user interface.

In a fifth aspect, the invention relates to a method that finds application within an information network having a database and at least one character-enabled network site accessible through a user interface with audio and visual image presentation capability. The method is for obtaining and storing data indicative of one or more attribute selections made by a network user while within the character-enabled network site. The method includes the steps of storing at least one of either audio data and/or visual image data of a plurality of characters, each character having at least one associated modifiable attribute. For each modifiable attribute the method further includes the step of storing at least one of either audio data and/or visual image data of at least one modification attribute. The method also includes the step of presenting the plurality of characters to the user through the user interface for selection by the user. Upon selection of a character, the method includes the step of storing data indicative of the selected character in a database and presenting the at least one modification attribute to the user through the user interface for selection by the user. Upon selection of the modification attribute, the method further includes the step of storing data indicative of the selected modification attribute in the database.
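The storage flow of this fifth aspect, present the characters, record the selection, present the modification attributes, record the modification, can be sketched with an in-memory database. The table layout and identifiers are assumptions; the patent only requires that data indicative of the selections reach a database.

```python
# Assumed table layout, shown only to make the storage flow concrete.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE selections (user_id TEXT, kind TEXT, value TEXT)")


def record_character_selection(user_id: str, character_id: str) -> None:
    """Store data indicative of the pre-profiled character the user selected."""
    db.execute("INSERT INTO selections VALUES (?, 'character', ?)", (user_id, character_id))
    db.commit()


def record_modification(user_id: str, attribute: str, choice: str) -> None:
    """Store data indicative of a modification attribute the user selected."""
    db.execute("INSERT INTO selections VALUES (?, ?, ?)", (user_id, attribute, choice))
    db.commit()


# E.g. the user picks a character and then changes its shirt (cf. FIGS. 4-10).
record_character_selection("user42", "character_3")
record_modification("user42", "shirt", "brand_x_tee")
```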

In a sixth aspect, the invention relates to an apparatus for obtaining and storing data indicative of one or more attribute selections made by a network user through a user interface with audio and visual image presentation capability. The apparatus includes a character memory storing at least one of either audio data and/or visual image data of a plurality of characters, each having at least one associated modifiable attribute. For each modifiable attribute, the apparatus further includes an attribute memory for storing at least one of either audio data and/or visual image data of at least one modification attribute. The apparatus also includes a processor for presenting the plurality of characters to the user through the user interface for selection by the user. Upon selection of a character, the processor presents the at least one modification attribute to the user for selection by the user. Further included in the apparatus is a database for storing data indicative of the selected character and the selected at least one modification attribute.

In a seventh aspect, the invention relates to a method finding application in an information network having at least one character-enabled network site. The method is for sharing data among network users based on choices made by each of the users while within a character-enabled network site. The method includes the steps of, for each user, creating a character having a plurality of attributes. Each attribute is selected by the user from a plurality of attributes presented to the user through a user interface to create a character profile. Each attribute is defined by at least one of either audio data and/or visual image data. The method also includes the step of providing to at least one user interface, at least one of either an audio presentation and/or a visual image presentation indicative of at least one other character profile. Also included is the step of providing a communications link between the users.
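One way to picture this seventh aspect: each user publishes a character profile, other users' profiles can be presented back, and a simple message channel stands in for the communications link. The names and the queue-based link below are illustrative only.

```python
# Illustrative only; the "communications link" could equally be chat, e-mail, etc.
from collections import defaultdict

profiles: dict[str, dict[str, str]] = {}            # user_id -> character profile
inboxes: dict[str, list[str]] = defaultdict(list)   # user_id -> received messages


def publish_profile(user_id: str, profile: dict[str, str]) -> None:
    profiles[user_id] = profile


def other_profiles(user_id: str) -> dict[str, dict[str, str]]:
    """Presentations indicative of at least one other user's character profile."""
    return {uid: p for uid, p in profiles.items() if uid != user_id}


def send_message(sender: str, recipient: str, text: str) -> None:
    """A minimal communications link between users."""
    inboxes[recipient].append(f"{sender}: {text}")
```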

These and other features and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate by way of example the features of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an information network including a user side and a network-site side having character-enabled network sites operating in accordance with the present invention;

FIG. 2 is a top-level flowchart depicting the process by which a network user explores the information network of FIG. 1;

FIG. 3 is a detailed flowchart depicting the process by which a user interacts with the character-enabled network sites of FIG. 1;

FIG. 4 depicts a page of an exemplary character-enabled network site having a collection of pre-profiled characters;

FIG. 5 depicts a follow-up to the screen of FIG. 4, in which one of the pre-profiled characters has been selected in order to gather additional information related to the persona of the character;

FIG. 6 depicts a follow-up screen to the screen of FIG. 5, in which a detail of the selected pre-profiled character is presented and animated comments indicative of the character's persona are presented;

FIG. 7 depicts a follow-up screen to the screen of FIG. 6, in which the remaining characters are dismissed and the opportunity to modify the selected pre-profiled character is presented;

FIG. 8 depicts a follow-up screen to the screen of FIG. 7 in which a roll-over of the shirt causes the shirt to highlight thereby indicating that the shirt may be modified;

FIG. 9 depicts a follow-up screen to the screen of FIG. 8 in which several choices with regard to the brand of shirt are presented;

FIG. 10 depicts a follow-up screen to the screen of FIG. 9 in which the shirt selected is displayed on the character;

FIG. 11 depicts an exemplary database table including records of choices made by network users; and

FIG. 12 is a flow chart depicting the process of collecting and analyzing the data generated by users when exploring character-enabled network sites.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to the drawings, wherein like reference numerals denote like or corresponding parts throughout the drawing figures, and particularly to FIG. 1, there is shown an information network including a user side 10 and a network-site side 12 interfacing through a network 14. The network 14 provides the means through which a user may access a plurality of network sites 16a, 16b and character-enabled network sites (“C-E sites”) 16c, 16d. The features of the C-E sites 16c, 16d are described in detail below. The network 14 may include, by way of example, but not necessarily by way of limitation, the Internet, Internet II, Intranets, and similar evolutionary versions of same.

The client side 10 includes a user interface 18 and network browser 20 through which a user may communicate with the network-site side 12 via the network 14. The user interface 18 may include a personal computer, network work station or any other similar device having a central processing unit (CPU) and monitor with at least one of audio presentation, i.e. sound, capability and visual image presentation, e.g. video, animation, etc., capability. Other devices may include portable communication devices that access the information network, such as cellular telephones or hand held devices, e.g., Palm Pilots. The client side 10 further includes a graphical user interface (GUI) that facilitates communication between the client side and the network-site side 12. Client-side software may be resident in the user interface 18. Alternatively, the client-side software may be network-based software capable of being accessed over the network 14. For example, a user may be able to access the client-side software directly on the World-Wide-Web (“the Web”).

The network-site side 12 includes a plurality of network sites 16a-16d and associated servers 22a, 22b. Also included on the network-site side 12 is a central database 24 for storing information and a search engine 26. The server 22b houses a program memory 28 for storing the network-site software programs, i.e., “site programs”, which operate each of the C-E sites 16c, 16d in accordance with the invention. Also housed within the program memory 28 is the search engine software and database software. The server 22b also houses source data 30 for storing the data required by the site programs. While FIG. 1 depicts only one server 22b with two associated C-E sites, 16c, 16d, the information network may include any number of these items. The other server 22a on the network-site side 12 includes similar memory and storage devices, which for ease of illustration are not depicted. The devices store the programs and data necessary to operate the network sites 16a, 16b associated with the server 22a. In the exemplary information network of FIG. 1, however, these network sites 16a, 16b are not configured to operate as character-enabled sites.

In accordance with the invention, C-E sites 16c, 16d operate under the control of site programs housed in the program memory 28. The site programs are created in browser usable file formats, such as but not limited to JavaScript, Flash Animation (.SWF), HTML, dHTML, CGI, ASP and Cold Fusion, to present either one or both of audio data/presentations and visual image data/presentations to the user interface 18. The audio data and visual image data required by the site programs is stored in the source data 30.

The site programs are designed to provide to the user interface 18 audio presentations and visual image presentations tailored to the “persona” of a character, as defined by a network user. These audio presentations and visual image presentations are selected from a plurality of presentations resident within the information network. The “persona” of a character is defined by a number of attributes, which in turn are defined by at least one of audio data and visual image data. “Attributes” as used herein means a quality or characteristic inherent or ascribed to a character, object, or scene. Character attributes may include physical characteristics, emotional characteristics, personal interests, opinions and preferences. Object and scene attributes generally include but are not limited to physical characteristics. The persona of a character may be further defined by the actions of the character, as controlled by the user through the user interface 18.

In accordance with the present invention, the “attribute” aspect of a character persona may be defined by a user in any of several ways. For example, the character may have a pre-determined persona which the user may choose to adopt. Alternatively the user may modify or customize the persona of a pre-profiled character. Additionally, the user may create his own character persona from scratch. Each of these character development approaches is described more fully below. The “action” aspect of a character persona is defined by the user based on how the user interacts with the audio presentations and visual image presentations provided to the user interface.

The persona of a character determines the experience the user has on the C-E site 16c, 16d. Different characters call up different audio presentations and visual image presentations. For example, depending on the persona of the character selected, different music, games, books, movies, and videos may be provided to the user interface 18. The present invention cross-references or links character attributes and character actions to specific audio presentations or visual image presentations. This cross-referencing or linking may be accomplished through a look-up table or through frame technology. Using the attributes and actions associated with a given character, the site program determines which audio presentations and visual image presentations to present to the user interface 18.
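The look-up-table reading of this cross-referencing might be as simple as the sketch below; the keys and presentation names are invented, and frame technology is the stated alternative mechanism.

```python
# Invented keys and presentation names, shown only to illustrate the look-up idea.
LOOKUP = {
    ("attribute", "punk_style"): ["punk_playlist", "band_poster_animation"],
    ("attribute", "sports_fan"): ["highlight_reel"],
    ("action", "clicked_guitar"): ["guitar_lesson_clip"],
}


def presentations_for(attributes: set[str], actions: set[str]) -> list[str]:
    """Map a character's attributes and recorded actions to the presentations to serve."""
    keys = [("attribute", a) for a in attributes] + [("action", a) for a in actions]
    result: list[str] = []
    for key in keys:
        result.extend(LOOKUP.get(key, []))
    return result


print(presentations_for({"punk_style"}, {"clicked_guitar"}))
```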

With regard to pre-profiled characters, the site program in combination with the audio data and visual image data stored in the source data 30 defines one or more pre-profiled characters. The site program/data defines the characters such that each has his or her own persona. An example of several characters is presented in FIG. 4. A detail of one of these characters is presented in FIG. 6. The user gets a quick glimpse of the character's persona in two ways. First, the user sees what the character looks like and how he is dressed. Second, as the user does a roll-over of each character, there is a visual or audio response that gives the user a sense of that character's personality.
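A pre-profiled roster with roll-over responses (cf. FIGS. 4 and 6) could be as small as the dictionary below; the characters, asset file names, and responses are made up for illustration, standing in for assets held in the source data 30.

```python
# Made-up roster; in the described system the assets would live in the source data 30.
PRE_PROFILED = {
    "character_1": {"look": "skater_outfit.png", "rollover": "skate_greeting.mp3"},
    "character_2": {"look": "band_tshirt.png", "rollover": "guitar_riff.mp3"},
}


def on_rollover(character_id: str) -> str:
    """Return the audio/visual response that hints at the character's personality."""
    return PRE_PROFILED[character_id]["rollover"]
```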




Patent Info
Application #: US 20120297309 A1
Publish Date: 11/22/2012
Document #: 13/298,095
File Date: 11/16/2011
USPTO Class: 715/738
Other USPTO Classes: 715/810
International Class: /
Drawings: 15


