RELATED APPLICATION DATA
This application is a division of application Ser. No. 13/299,140, filed Nov. 17, 2011, which is a continuation-in-part of international application PCT/US11/59412, filed Nov. 4, 2011 (published as WO2012061760), which claims priority to the following provisional applications:
61/410,217, filed Nov. 4, 2010;
61/449,529, filed Mar. 4, 2011;
61/467,862, filed Mar. 25, 2011;
61/471,651, filed Apr. 4, 2011;
61/479,323, filed Apr. 26, 2011;
61/483,555, filed May 6, 2011;
61/485,888, filed May 13, 2011;
61/501,602, filed Jun. 27, 2011;
and which also is a continuation-in-part of each of the following applications:
13/174,258, filed Jun. 30, 2011;
13/207,841, filed Aug. 11, 2011 (published as 20120116559); and
13/278,949, filed Oct. 21, 2011 (published as 20120134548).
The disclosures of these applications are incorporated herein by reference, in their entireties.
The present technology primarily concerns sensor-equipped consumer electronic devices, such as smartphones and tablet computers.
The present specification details a diversity of technologies, assembled over an extended period of time, to serve a variety of different objectives. Yet they relate together in various ways, and often can be used in conjunction, and so are presented collectively in this single document.
This varied, interrelated subject matter does not lend itself to a straightforward presentation. Thus, the reader's indulgence is solicited as this narrative occasionally proceeds in nonlinear fashion among the assorted topics and technologies.
The detailed technology builds on work detailed in previous U.S. patent filings. These include applications:
Ser. No. 13/149,334, filed May 31, 2011;
Ser. No. 13/088,259, filed Apr. 15, 2011;
Ser. No. 13/079,327, filed Apr. 4, 2011;
Ser. No. 13/011,618, filed Jan. 21, 2011 (published as 20110212717);
Ser. No. 12/797,503, filed Jun. 9, 2010 (published as 20110161076);
Ser. No. 12/774,512, filed May 5, 2010 (published as 20110274310);
Ser. No. 12/716,908, filed Mar. 3, 2010 (published as 20100228632);
Ser. No. 12/490,980, filed Jun. 24, 2009 (published as 20100205628);
Ser. No. 12/271,772, filed Nov. 14, 2008 (published as 20100119208);
Ser. No. 11/620,999, filed Jan. 8, 2007 (published as 20070185840);
U.S. Pat. No. 7,003,731; and
U.S. Pat. No. 6,947,571.
In the few years since their introduction, portable computing devices (e.g., smartphones, music players, and tablet computers) have transitioned from novelties to near-necessities. With their widespread adoption has come an explosion in the number of software programs (“apps”) available for such platforms. Over 300,000 apps are now available from the Apple iTunes store alone.
Many apps concern media content. Some are designed to provide on-demand playback of audio or video content, e.g., television shows. Others serve to complement media content, such as by enabling access to extra content (behind-the-scenes clips, cast biographies and interviews, contests, games, recipes, how-to videos), by allowing social network-based features (communicating with other fans, including by Twitter, Facebook and Foursquare, blogs), etc. In some instances a media-related app may operate in synchrony with the audio or video content, e.g., presenting content and links at time- or event-appropriate points during the content.
Apps are now being specialized to particular broadcast and recorded media content. The ABC television show My Generation, for example, was introduced with a companion iPad app dedicated exclusively to the program—providing polls, quizzes and other information in synchronized fashion. Traditional media companies, such as CNN, ESPN, CBS, etc., are increasingly becoming app companies as well.
It is difficult for apps to gain traction in this crowded marketplace. Searching iTunes and other app stores is the most common technique by which users find new apps for their devices. The next most popular technique for app discovery is through recommendations from friends. Both approaches, however, were established when the app market was much smaller, and have not scaled well.
In the case of the My Generation iPad app, for example, the show's producers must reach out to the target audience and entice them to go to the app store, where they must type in the title of the app, download it, install it, and then run it when the television program is playing.
In accordance with certain embodiments of the present technology, a different solution is provided. In one such embodiment, a microphone-equipped user device samples ambient content, and produces content-identifying data from the captured audio. This content-identifying data is then used to look up an app recommended by the proprietor of the content, which app is then installed and launched—with little or no action required by the user.
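The flow of this embodiment—sample ambient audio, derive content-identifying data, look up the proprietor-recommended app, then install and launch it—can be sketched as follows. This is an illustrative sketch only; the function names, the hash-based fingerprint stand-in, and the registry structure are hypothetical, and an actual implementation would rely on platform audio-capture, fingerprinting, and app-store APIs.

```python
def derive_content_id(audio_samples):
    """Produce content-identifying data from captured ambient audio.
    Simplified stand-in: a hash of the samples; a real system would use
    an audio fingerprint or a decoded digital watermark."""
    return hash(tuple(audio_samples)) % 100000


def recommend_app(content_id, registry):
    """Look up the app the content proprietor has recommended for this
    content, if any. The registry is a hypothetical content-id -> app map."""
    return registry.get(content_id)


def handle_ambient_audio(audio_samples, registry, install, launch):
    """Sample -> identify -> look up -> install -> launch, with little or
    no action required of the user. 'install' and 'launch' stand in for
    platform services."""
    content_id = derive_content_id(audio_samples)
    app = recommend_app(content_id, registry)
    if app is not None:
        install(app)
        launch(app)
    return app
```

In this sketch the registry embodies the proprietor's recommendation; the device simply follows it once the content is identified.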
By such arrangement, the content effectively selects the app. The user doesn't select the software; the user's activity selects the software. Over time, each user device becomes app-adapted to the content preferences of the user—thereby becoming optimized to the user's particular interests in the content world.
To some degree, this aspect of the present technology is akin to the recommendation features of TiVo, but for apps. The user's content consumption habits (and optionally those of the user's social network friends) lead the device to recommend apps that serve the user's interests.
Desirably, it is artists that are given the privilege of specifying the app(s) to be invoked by their creative works. Many countries have laws that recognize artists' continuing interest in the integrity with which their works are treated (so-called “moral rights”). Embodiments of the present technology serve this interest—providing artists a continuing role in how their art is presented, enabling them to prescribe the preferred mechanisms by which their works are to be experienced. Continuity is provided between the artist's intention and the art's delivery.
It is not just stand-alone apps that can be treated in this fashion. More granular software choices can similarly be made, such as the selection of particular rendering codecs to be used by media players (e.g., Windows Media Player). For example, the National Hockey League may prefer that its content be rendered with a codec designed for maximum frame rate. In contrast, the Food Network may prefer that its content be rendered with a codec optimized for truest color fidelity.
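The codec-selection example above can be sketched as a simple preference lookup. The proprietor names and codec identifiers below are illustrative only, echoing the hypothetical preferences described in the text; they are not part of any actual media-player API.

```python
# Hypothetical proprietor -> preferred-codec map, per the examples above.
CODEC_PREFERENCES = {
    "NHL": "max_frame_rate_codec",          # hockey: prioritize frame rate
    "FoodNetwork": "color_fidelity_codec",  # cooking: prioritize color fidelity
}


def select_codec(content_proprietor, default="general_purpose_codec"):
    """Return the rendering codec the content proprietor prefers,
    falling back to a default codec if none is specified."""
    return CODEC_PREFERENCES.get(content_proprietor, default)
```

As with app selection, the content's identification drives the software choice; the media player consults the proprietor's stated preference rather than applying one codec uniformly.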
Historically, the “channel” was king, and content played a supporting role (i.e., drawing consumers to the channel, and to its advertising). From the consumer's standpoint, however, these roles should be reversed: content should be primary. Embodiments of the present technology are based on this premise. The user chooses the content, and the delivery mechanism then follows, as a consequence.
The foregoing and other features and advantages of the present technology will be more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a system that can be used in certain embodiments of the present technology.
FIG. 2 is a representation of a data structure that can be used with the embodiment of FIG. 1.
FIGS. 3-7 detail features of illustrative gaze-tracking embodiments, e.g., for text entry.
FIGS. 8 and 9 detail features of an illustrative user interface.
FIG. 10 shows a block diagram of a system incorporating principles of the present technology.
FIG. 11 shows marker signals in a spatial-frequency domain.
FIG. 12 shows a mixed-domain view of a printed object that includes the marker signals of FIG. 11, according to one aspect of the present technology.
FIG. 13 shows a corner marker that can be used to indicate hidden data.
FIG. 14 shows an alternative to the marker signals of FIG. 11.
FIG. 15 shows a graph representation of data output from a smartphone camera.
FIG. 16 shows a middleware architecture for object recognition.
FIG. 17 is similar to FIG. 16, but is particular to the Digimarc Discover implementation.
FIG. 18 is a bar chart showing impact of reading image watermarks on system tasks.
FIG. 19 further details performance of a watermark recognition agent running on an Apple iPhone 4 device.
FIG. 20 shows locations of salient points in first and second image frames.
FIG. 21 shows histograms associated with geometric alignment of two frames of salient points.
FIG. 22 shows an image memory in a smartphone, including three color bit planes, each of 8-bit depth.
FIG. 23 shows a similar smartphone memory, but now utilized to store RDF triples.
FIG. 24 shows some of the hundreds or thousands of RDF triples that may be stored in the memory of FIG. 23.
FIG. 25 shows the memory of FIG. 23, now populated with illustrative RDF information detailing certain relationships among people.
FIG. 26 shows some of the templates that may be applied to the Predicate plane of the FIG. 25 memory, to perform semantic reasoning on the depicted RDF triples.
FIG. 27 names the nine RDF triples within a 3×3 pixel block of memory.
FIG. 28 shows a store of memory in a smartphone.
FIGS. 29A and 29B depict elements of a graphical user interface that uses data from the FIG. 28 memory.
FIG. 30 shows use of a memory storing triples, and associated tables, to generate data used to generate a search query report to a user.
FIG. 31 shows another store of memory in a smartphone, depicting four or more planes of integer (e.g., 8-bit) storage.
FIG. 32 shows a smartphone displaying an image captured from a catalog page, with a distinctive graphical effect that signals presence of a steganographic digital watermark.
FIGS. 33 and 34 show how a smartphone can spawn tags, presented along an edge of the display, associated with different items in the display.
FIG. 35 shows information retrieved from a database relating to a watermark-identified catalog page (i.e., object handles for an object shape).
FIG. 36 shows how detection of different watermarks in different regions of imagery can be signaled to a user.
FIG. 38 shows an LED-based communication system, incorporating both high bandwidth and low bandwidth channels.
The present technology, in some respects, expands on technology detailed in the assignee's above-detailed patent applications. The reader is presumed to be familiar with such previous work, which can be used in implementations of the present technology (and into which the present technology can be incorporated).
Referring to FIG. 1, an illustrative system 12 includes a device 14 having a processor 16, a memory 18, one or more input peripherals 20, and one or more output peripherals 22. System 12 may also include a network connection 24, and one or more remote computers 26.
An illustrative device 14 is a smartphone or a tablet computer, although any other consumer electronic device can be used. The processor can comprise a microprocessor such as an Atom or A4 device. The processor's operation is controlled, in part, by information stored in the memory, such as operating system software, application software (e.g., “apps”), data, etc. The memory may comprise flash memory, a hard drive, etc.
The input peripherals 20 may include a camera and/or a microphone. The peripherals (or device 14 itself) may also comprise an interface system by which analog signals sampled by the camera/microphone are converted into digital data suitable for processing by the system. Other input peripherals can include a touch screen, keyboard, etc. The output peripherals 22 can include a display screen, speaker, etc.
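The FIG. 1 arrangement described in the preceding paragraphs can be represented structurally as follows. This is a minimal sketch, assuming simple named fields; the class and field names mirror the reference numerals in the text and are illustrative, not a prescribed implementation.

```python
from dataclasses import dataclass, field


@dataclass
class Device:  # device 14
    processor: str                                # processor 16, e.g., "Atom" or "A4"
    memory: list = field(default_factory=list)    # memory 18: OS, apps, data
    inputs: list = field(default_factory=list)    # input peripherals 20: camera, microphone, ...
    outputs: list = field(default_factory=list)   # output peripherals 22: display, speaker, ...


@dataclass
class System:  # system 12
    device: Device
    has_network: bool = False                     # network connection 24
    remote_computers: list = field(default_factory=list)  # remote computers 26
```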