Audio control of multimedia objects

In some examples, aspects of the present disclosure may include techniques for audio control of one or more multimedia objects. In one example, a method includes receiving an electronic document that includes a group of one or more multimedia objects capable of generating audio data. The method also includes registering a multimedia object of the group of one or more multimedia objects, wherein registering the multimedia object comprises storing a multimedia object identifier that identifies the multimedia object. The method further includes receiving audio data; and determining, by a computing device, a volume level of the audio data generated by the registered multimedia object based on one or more configuration parameters, wherein the one or more configuration parameters define one or more volume levels associated with the multimedia object identifier. The method also includes outputting, to an output device, the audio data at the determined volume level.

Assignee: Google Inc. - Mountain View, CA, US
Inventor: Johnny Chen
USPTO Application #: 20120263319 - Class: 381/107 - Published: 10/18/2012
Class 381: Electrical Audio Signal Processing Systems And Devices > Including Amplitude Or Volume Control > Automatic

The Patent Description & Claims data below is from USPTO Patent Application 20120263319, Audio control of multimedia objects.


This Application is a continuation of U.S. application Ser. No. 13/086,268, filed Apr. 13, 2011, the entire content of which is incorporated herein by reference.

TECHNICAL FIELD

This disclosure relates to electronic devices and, more specifically, to audio controls of electronic devices.

BACKGROUND

A user may interact with applications executing on a computing device (e.g., mobile phone, tablet computer, smart phone, or the like). For instance, a user may install, view, or delete an application on a computing device.

In some instances, a user may interact with the computing device through a graphical user interface. In some examples, the computing device may include one or more sound devices. An application executing on the computing device may access the sound device.

SUMMARY

In one example, a method includes receiving an electronic document that includes a group of one or more multimedia objects capable of generating audio data. The method further includes registering a multimedia object of the group of one or more multimedia objects, wherein registering the multimedia object comprises storing a multimedia object identifier that identifies the multimedia object. The method also includes receiving audio data generated by the registered multimedia object. The method further includes determining, by a computing device, a volume level of the audio data based on one or more configuration parameters, wherein the one or more configuration parameters define one or more volume levels associated with the multimedia object identifier. The method further includes outputting, to an output device, the audio data at the determined volume level.

In one example, a computer-readable storage medium is encoded with instructions that, when executed, cause one or more processors of a computing device to perform operations including receiving an electronic document that includes a group of one or more multimedia objects capable of generating audio data. The instructions further cause one or more processors to perform operations including registering a multimedia object of the group of one or more multimedia objects, wherein registering the multimedia object comprises storing a multimedia object identifier that identifies the multimedia object. The instructions further cause one or more processors to perform operations including receiving audio data generated by the registered multimedia object; determining, by a computing device, a volume level of the audio data based on one or more configuration parameters, wherein the one or more configuration parameters define one or more volume levels associated with the multimedia object identifier. The instructions further cause one or more processors to perform operations including outputting, to an output device, the audio data at the determined volume level.

In one example, a computing device includes one or more processors. The computing device further includes an audio control module, executable by the one or more processors to receive an electronic document that includes a group of one or more multimedia objects capable of generating audio data. The audio control module is further executable to register a multimedia object of the group of one or more multimedia objects, wherein registering the multimedia object comprises storing a multimedia object identifier that identifies the multimedia object. The audio control module is further executable to receive audio data generated by the registered multimedia object. The computing device also includes means for determining a volume level of the audio data based on one or more configuration parameters, wherein the one or more configuration parameters define one or more volume levels associated with the multimedia object identifier. The computing device further includes an output device to output the audio data at the determined volume level.

In one example, a method includes receiving an electronic document that includes a group of two or more multimedia objects capable of generating audio data. The method also includes registering a first multimedia object and a second multimedia object of the group with an application, wherein registering the first multimedia object comprises storing a first multimedia object identifier that identifies the first multimedia object, and wherein registering the second multimedia object comprises storing a second multimedia object identifier that identifies the second multimedia object. The method further includes receiving first audio data generated by the first multimedia object and second audio data generated by the second multimedia object. The method also includes receiving, during execution of the application, a first configuration parameter from a user that indicates a first volume level of the first multimedia object. The method further includes receiving, during execution of the application, a second configuration parameter from the user that indicates a second volume level of the second multimedia object. The method also includes outputting, to an output device, the first audio data at the first volume level. The method further includes outputting, to the output device, the second audio data at the second volume level.

The details of one or more examples of this disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a block diagram illustrating an example of a computing device that may be configured to execute one or more applications, in accordance with one or more aspects of the present disclosure.

FIG. 2 is a block diagram illustrating further details of one example of computing device 2 shown in FIG. 1, in accordance with one or more aspects of the present disclosure.

FIG. 3 is a flow diagram illustrating an example method that may be performed by a computing device to perform audio control of one or more multimedia objects, in accordance with one or more aspects of the present disclosure.

FIG. 4 is a block diagram illustrating an example of a computing device that may be configured to execute one or more applications, in accordance with one or more aspects of the present disclosure.

FIG. 5 is a block diagram illustrating an example of a computing device that may be configured to execute one or more applications, in accordance with one or more aspects of the present disclosure.

DETAILED DESCRIPTION

In general, aspects of the present disclosure are directed to techniques for control of multimedia objects. Advancements in application and network technologies have enabled developers to create documents that include rich and dynamic content. For example, an application may display multiple multimedia objects in a single document. Each multimedia object may provide a source of audio and/or visual content. In one example, a document may include many multimedia objects, each of which provides video and audio content. The application may initially execute multimedia objects included in the document. The application may further provide multimedia objects with access to I/O devices, e.g., an audio device, via various application programming interfaces (APIs). In some examples, multiple multimedia objects may simultaneously require access to an audio device for audio and/or video playback. In such examples, each multimedia object may compete for access to the audio device.

Presently, applications do not provide the user with granular audio control over individual multimedia objects. Therefore, multimedia objects may simultaneously send audio data to the audio device, which may result in an audio signal that is a combination of all audio data. This lack of control may lead to undesirable user experiences. For example, a user listening to audio of a first object may be interrupted by audio of a second object. The combined audio signal may be unintelligible, and the second object may distract the user from audio of the first object. A user may therefore desire one or more techniques to granularly control individual multimedia objects that share a single audio device.

Techniques of the present disclosure provide granular volume controls for multimedia objects that may simultaneously require access to the same audio device. In one example, an audio control module is included in a web browser. When a multimedia object is initially rendered by the web browser, the multimedia object is registered with the audio control module. Consequently, the audio control module maintains a list of multimedia objects in the web browser. The audio control module further provides a sound control API that is accessible by multimedia objects. When a multimedia object generates audio data to be output by the audio device, the multimedia object may call a function included in the sound control API to send audio data to the audio device. The audio data may be received by the audio control module via the function call. Once the audio data is received by the audio control module, the audio control module may, for example, change the volume of the audio data. The transformed audio data may then be sent to the audio device via another API that may be provided by the operating system to the web browser.
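
To make the flow above concrete, the following TypeScript is a minimal sketch of an audio control module that registers objects and routes their audio through a per-object volume stage before forwarding it to a device API. The names (AudioControlModule, DeviceSink, playAudio) and the in-module gain scaling are illustrative assumptions, not the patent's or any browser's actual implementation.

```typescript
// Minimal sketch, assuming a per-object gain applied inside the browser module.
type ObjectId = string;

interface DeviceSink {
  // Stand-in for the OS/audio-device API the browser would ultimately call.
  write(samples: Float32Array): void;
}

class AudioControlModule {
  private volumes = new Map<ObjectId, number>(); // 0.0 (mute) .. 1.0 (full)

  constructor(private sink: DeviceSink) {}

  // Called when the browser first renders a multimedia object.
  register(id: ObjectId, initialVolume: number = 1.0): void {
    this.volumes.set(id, initialVolume);
  }

  // Sound control API that a multimedia object calls to emit audio data.
  playAudio(id: ObjectId, samples: Float32Array): void {
    const gain = this.volumes.get(id) ?? 1.0;
    const transformed = samples.map(s => s * gain); // change the volume here
    this.sink.write(transformed);                   // forward to the audio device
  }

  // Adjust the volume level associated with a registered object.
  setVolume(id: ObjectId, level: number): void {
    this.volumes.set(id, Math.min(1, Math.max(0, level)));
  }
}
```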

FIG. 1 is a block diagram illustrating an example of a computing device 2 that may be configured to execute one or more applications, e.g., application 8, in accordance with one or more aspects of the present disclosure. As shown in FIG. 1, computing device 2 may include a display 4, an audio device 6, and an application 8. Application 8 may, in some examples, include an audio control module 10.

Computing device 2, in some examples, includes or is a part of a portable computing device (e.g., a mobile phone, netbook, laptop, or tablet device) or a desktop computer. Computing device 2 may also connect to a wired or wireless network using a network interface (see, e.g., FIG. 2). One non-limiting example of computing device 2 is further described in the example of FIG. 2.

In some examples, computing device 2 may include display 4. In one example, display 4 may be an output device 50 as shown in FIG. 2. In some examples, display 4 may be programmed by computing device 2 to display graphical content. Graphical content, generally, includes any visual depiction displayed by display 4. Examples of graphical content may include images, text, videos, visual objects and/or visual program components such as scroll bars, text boxes, buttons, etc. In one example, application 8 may cause display 4 to display graphical user interface (GUI) 16.

As shown in FIG. 1, application 8 may execute on computing device 2. Application 8 may include program instructions and/or data that are executable by computing device 2. Examples of application 8 may include a web browser, email application, text messaging application or any other application that receives user input and/or displays graphical content.

In some examples, application 8 causes GUI 16 to be displayed in display 4. GUI 16 may include interactive and/or non-interactive graphical content that presents information of computing device 2 in human-readable form. In some examples GUI 16 may enable a user to interact with application 8 through display 4. For example, a user may provide a user input via an input device such as a mouse, keyboard, or touch-screen. In response to receiving the user input, computing device 2 may perform one or more operations. In this way, GUI 16 may enable a user to create, modify, and/or delete data of computing device 2.

In some examples, application 8, as shown in FIG. 1, may be a web browser software application (hereinafter “web browser”). One example of a web browser may be the Google Chrome™ web browser. A web browser, in some examples, may retrieve and present information resources on a network such as the Internet. A web browser may also send information to other devices on a network. In some examples, an information resource may be a document such as a HyperText Markup Language (HTML) document. An HTML document may include structured data that is interpretable by a web browser. In some examples, structured data may include text, pictures, and multimedia objects. A web browser may, for example, display the structured data of an HTML document in a human-interpretable form.

As shown in FIG. 1, application 8 may be a web browser that displays an HTML document 18. HTML document 18 may, for example, include text 20, multimedia object 22, and multimedia object 24. A multimedia object may be any source of visual, audio, and/or other sensory data embedded in document 18. In some examples, multimedia objects may include video objects and/or sound objects. Examples of multimedia objects may include Macromedia® Flash®, Java® applets, Quicktime® movies, MPEG-4 videos, MP3 audio, and WAV audio. In some examples, a multimedia object may include an animation and audio content. In some examples, a creator of a document, e.g., document 18, may embed one or more multimedia objects in document 18. A user of computing device 2 may use application 8 to view document 18 and interact with multimedia objects 22 and 24.

In some examples, multiple multimedia objects may be included in a single document 18. For example, as shown in FIG. 1, two multimedia objects 22 and 24 are embedded in document 18. In the example of FIG. 1, multimedia object 22 may be a video entitled “Android Cloud to Device Messaging Framework” as indicated by text 20. Document 18 may further include multimedia object 24. Multimedia object 24 may, as shown in FIG. 1, include an audio visual advertisement. For example, multimedia object 24 may include a visual animation of an advertised product or service and may, in some examples, further include audio associated with the animation.

Application 8 may, in some examples, include a rendering engine to interpret structured data of document 18. The rendering engine of application 8 may, in some examples, present the structured data in human-interpretable form. As described herein, “render” may, in some examples, include presenting any structured data in human-interpretable form. Structured data of an HTML document may include tags that enclose content to be rendered by the rendering engine. Tags may be of different types and therefore enable the rendering engine to render content encompassed by different tags in different ways. Thus, in one example, text 20 may be enclosed by “text” tags that enable the rendering engine to display “Android Cloud to Device Messaging Framework” as text.

In other examples, multimedia tags may be included in document 18 to specify multimedia objects 22 and 24. In such examples, the rendering engine of application 8 may process the multimedia tags to present multimedia objects 22, 24 in human-interpretable form to a user. The rendering engine may, in some examples, include functionality to render some but not all types of content associated with various different tags. For example, a rendering engine may natively render text but may not natively render multimedia objects. In such examples, tags for a multimedia object may specify a separate multimedia application to render the content of the multimedia object. For example, application 8 may not, in one example, natively render multimedia object 22. Instead, tags included in document 18 and associated with multimedia object 22 may indicate a separate video application to render the content of multimedia object 22. Application 8 may, when processing the tags associated with multimedia object 22, execute the separate video application that, in turn, may render the content of multimedia object 22. In this way, application 8 may be extensible to render various different types of content.
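
As a rough illustration of this extensibility, the sketch below dispatches tags to native renderers and falls back to an external handler for content the engine cannot render itself. The tag names and helper functions are assumptions for illustration, not the patent's design.

```typescript
// Hypothetical tag-based render dispatch with an external fallback.
interface Renderer {
  render(content: string): void;
}

const nativeRenderers = new Map<string, Renderer>([
  ["text", { render: content => console.log(`text: ${content}`) }],
]);

// Stand-in for launching a separate multimedia application (e.g. a plugin).
function externalRenderer(tag: string): Renderer {
  return { render: content => console.log(`external handler for <${tag}>: ${content}`) };
}

function renderTag(tag: string, content: string): void {
  const renderer = nativeRenderers.get(tag) ?? externalRenderer(tag);
  renderer.render(content);
}

renderTag("text", "Android Cloud to Device Messaging Framework"); // rendered natively
renderTag("video", "multimedia object 22");                       // delegated to a plugin
```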

As shown in FIG. 1 and described herein, a document may include multiple multimedia objects. In some examples, application 8 may render some or all of the structured data of document 18 simultaneously. For example, application 8 may render tags for multimedia objects 22, 24 and, consequently, application 8 may present content of multimedia objects 22, 24 to a user simultaneously. In such examples, each multimedia object may include content that may be provided to a user via one or more output devices. For example, multimedia objects 22, 24 may each include audio content. Each of multimedia objects 22, 24 may therefore provide audio data to an audio device 6, e.g., a sound card and/or speaker, to present the audio content to a user. In some examples, audio device 6 may receive audio data from application 8. The audio data may include a representation of audio content. Audio device 6 may provide an audio signal that includes a human-interpretable representation of the audio content based on the audio data.

In some examples, multimedia objects 22, 24 may compete for access to audio device 6. For example, application 8 may render document 18 that includes multimedia object 22 and multimedia visual advertisement object 24. Each multimedia object may include audio content and may therefore provide corresponding audio data to audio device 6. Audio device 6 may receive the audio data from both multimedia objects 22, 24 simultaneously. In some examples, audio device 6 may output an audio signal that includes combined or interlaced audio content of each multimedia object 22, 24.

Various drawbacks are apparent in the present example. For example, when audio data of multiple multimedia objects are combined or interlaced, the resulting audio signal may be garbled or uninterpretable by a human. In other examples, a user's focus on audio content generated by a first multimedia object may be disrupted by audio content of a second multimedia object. In such examples, a user may therefore desire not to hear audio content of the second multimedia object. In some examples, a multimedia object may not provide the user with the ability to directly control the audio content associated with the multimedia object. In other examples, the user may need to identify each multimedia object individually in order to disable or lower the volume of each multimedia object. Consequently, the user may apply substantial effort to limit the undesirable effects of numerous multimedia objects competing to access an audio device.

Aspects of the present disclosure described hereinafter may overcome various deficiencies presented by multiple media objects that may compete for an audio output device. As shown in FIG. 1, application 8, e.g., a web browser, may initially access document 18 that includes one or more multimedia objects 22, 24. Application 8, in some examples, may render the structured data of document 18 as previously described herein. For example, application 8 may render document 18 and identify one or more tags associated with text 20, multimedia object 22 (hereinafter, video object 22), and multimedia object 24 (hereinafter, advertisement object 24).

In the current example, audio control module 10 may automatically register one or more multimedia objects of document 18 when rendered by application 8. To automatically register a multimedia object, audio control module 10 may identify tags associated with multimedia objects. In some examples, one or more tags associated with a multimedia object may indicate that the multimedia object includes content of a particular type. Audio control module 10 may, in some examples, register a multimedia object based on its content type. For example, various content types may include audio content, and therefore, audio control module 10 may be configured to register multimedia objects associated with such content types.
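
A registration filter of this kind might, for example, test the content type declared by a tag against a list of types known to carry audio. The sketch below is one hypothetical way to express that test; the type list and field names are assumptions, not taken from the patent.

```typescript
// Register only objects whose declared content type can carry audio (assumed list).
const AUDIO_CAPABLE_TYPES = new Set<string>([
  "video/mp4",
  "audio/mpeg",
  "audio/wav",
  "application/x-shockwave-flash",
]);

interface EmbeddedObjectTag {
  element: string;     // e.g. "object", "embed", "video"
  contentType: string; // MIME type declared by the tag
  source: string;      // URL of the embedded content
}

function shouldRegister(tag: EmbeddedObjectTag): boolean {
  return AUDIO_CAPABLE_TYPES.has(tag.contentType);
}

shouldRegister({ element: "video", contentType: "video/mp4", source: "talk.mp4" }); // true
shouldRegister({ element: "img", contentType: "image/png", source: "logo.png" });   // false
```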

In some examples, audio control module 10 may generate a multimedia object identifier that identifies the multimedia object. Audio control module 10 may use a multimedia object identifier to register the multimedia object. A multimedia object identifier may, in some examples, include a unique alphanumeric string of numbers and/or letters, e.g., a hash code. Audio control module 10 may, in some examples, store a multimedia object identifier for later retrieval in a map, hashtable, database or other data storage structure of computing device 2 or of some other computing device coupled to computing device 2. In one example, audio control module 10 may store a multimedia object identifier in object identifier repository 12.
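
The patent only requires a unique alphanumeric identifier (e.g., a hash code) kept in a map, hash table, or database. The sketch below derives such an identifier from the object's source URL and position and stores it in an in-memory repository; the hash function and repository shape are illustrative assumptions.

```typescript
// Hypothetical identifier generation: a 32-bit string hash rendered in base 36.
function objectIdentifier(source: string, index: number): string {
  const key = `${source}#${index}`;
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (Math.imul(h, 31) + key.charCodeAt(i)) | 0;
  }
  return "A" + (h >>> 0).toString(36);
}

// Object identifier repository, sketched as an in-memory map from id to source URL.
const objectIdentifierRepository = new Map<string, string>();

function registerObject(source: string, index: number): string {
  const id = objectIdentifier(source, index);
  objectIdentifierRepository.set(id, source);
  return id;
}

const videoId = registerObject("https://example.com/talk.mp4", 0); // e.g. "A1xk9q2"
const adId = registerObject("https://example.com/ad.swf", 1);
```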

In the example of FIG. 1, audio control module 10 may register video object 22 and advertisement object 24 as document 18 is rendered by application 8. Audio control module 10 may generate a multimedia object identifier “A1,” which corresponds to video object 22. Audio control module 10 may, in the current example, generate a multimedia object identifier “A2,” which corresponds to advertisement object 24. Each identifier may be stored by application 8 for later retrieval.

As previously described herein, application 8 may, in some examples, execute a separate multimedia application to render content of a multimedia object. In some examples, application 8 may execute the multimedia application as a child process of application 8. When application 8 executes the multimedia application as a child process, audio control module 10 may provide the multimedia application access to an Application Programming Interface (API). The multimedia application may access resources of computing device 2, e.g., storage, output devices, input devices, etc., via the API. For example, a multimedia application may send audio data to audio device 6 via an API provided by application 8. In this way, application 8 may control access to resources of computing device 2 and modify data received from the multimedia application.
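
One way to picture this mediation is a narrow API object handed to the child renderer, so its only path to the audio device runs through the browser. The interface and function names below are hypothetical.

```typescript
// Sketch of API mediation for a child multimedia application.
interface SoundControlApi {
  playAudio(samples: Float32Array): void; // the only route to the audio device
}

// The browser builds a per-object API that tags audio with the object's identifier,
// so the audio control module can inspect or modify it before it reaches the device.
function apiForObject(
  objectId: string,
  route: (id: string, samples: Float32Array) => void,
): SoundControlApi {
  return {
    playAudio: samples => route(objectId, samples),
  };
}

const pluginApi = apiForObject("A1", (id, s) => console.log(`routing ${s.length} samples for ${id}`));
pluginApi.playAudio(new Float32Array(1024)); // a plugin rendering object "A1" would call this
```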

In some examples, audio control module 10 may include logic to modify a volume level associated with a registered multimedia object. For example, audio control module 10 may receive audio data from a multimedia application that renders content of a registered multimedia object. Audio control module 10 may receive the audio data from the multimedia application via an API provided by application 8. In response to receiving the audio data, audio control module 10 may perform one or more operations to increase or decrease a volume level associated with the audio data. For example, audio control module 10 may generate data specifying a volume level in response to, e.g., a user input or data stored on computing device 2. The data specifying the volume level may be associated with the audio data received from the multimedia application. Audio control module 10 may send the volume level data to audio device 6. Audio control module 10 may also send corresponding audio data received from the multimedia application to audio device 6. In this way, audio device 6 may generate an audio signal based on the audio data and the specified volume level. Thus, in examples that include many registered multimedia objects, audio control module 10 may provide fine-grain audio control of each volume level associated with each multimedia object based on any number of configuration parameters.
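
Unlike the earlier sketch, which scaled samples inside the module, the paragraph above describes sending volume-level data alongside the audio data so the audio device produces the signal at the specified level. A minimal sketch of that variant, with illustrative names and an assumed device interface, follows.

```typescript
// Volume-level data travels with the audio data; the device applies the level.
interface AudioDevice {
  output(samples: Float32Array, volumeLevel: number): void; // assumed driver-style call
}

class VolumeStage {
  private levels = new Map<string, number>(); // multimedia object id -> 0.0 .. 1.0

  constructor(private device: AudioDevice) {}

  // Volume-level data may come from user input or from stored configuration.
  setLevel(objectId: string, level: number): void {
    this.levels.set(objectId, Math.min(1, Math.max(0, level)));
  }

  // Audio data received from a multimedia application via the module's API.
  forward(objectId: string, samples: Float32Array): void {
    const level = this.levels.get(objectId) ?? 1.0; // default: leave volume unchanged
    this.device.output(samples, level);             // level accompanies the audio data
  }
}
```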

In some examples, audio control module 10 may determine a volume level of audio data generated by a multimedia object based on one or more configuration parameters. In one example, a configuration parameter may define one or more volume levels associated with a multimedia object identifier that identifies a multimedia object. Computing device 2, in some examples, may include a parameter repository 14 to store configuration parameters. Thus, in some examples, audio control module 10 may automatically execute one or more configuration parameters when audio control module 10 registers a multimedia object. In this way, audio control module 10 may, in some examples, automatically configure volume levels based on one or more configuration parameters, thereby reducing the level of manual user configuration.

In some examples, application 8 may include a parameter repository 14. Parameter repository 14 may store one or more configuration parameters associated with multimedia objects. In some examples, parameter repository 14 may include a database, lookup table, or other suitable data structure capable of storing data. In one example, a user may define one or more parameters that are stored in parameter repository 14. Various configuration parameters are further described and illustrated in, e.g., FIG. 5.

In some examples, audio control module 10 may determine a volume level of the audio data based on one or more configuration parameters, wherein the one or more configuration parameters define one or more volume levels associated with the multimedia object identifier. For example, audio control module 10 may select a configuration parameter from parameter repository 14 or receive a configuration parameter at runtime that corresponds to a user's volume level selection. The configuration parameter may specify a volume level associated with a multimedia object. Audio control module 10 may generate volume level setting data corresponding to the volume level, which may be sent to an audio driver of audio device 6 or an operating system executing on computing device 2. In either case, the volume level setting data may cause audio device 6 to output the audio data at the corresponding volume level.
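
Put together, a configuration parameter lookup might give a runtime, user-selected parameter precedence over one stored in the parameter repository, falling back to a default when neither exists. The record shape, repository contents, and function names below are assumptions for illustration.

```typescript
// Hypothetical configuration parameter resolution.
interface ConfigurationParameter {
  objectId: string;    // multimedia object identifier, e.g. "A1"
  volumeLevel: number; // 0.0 (mute) .. 1.0 (full)
}

// Parameter repository, sketched as an in-memory lookup table.
const parameterRepository = new Map<string, ConfigurationParameter>([
  ["A2", { objectId: "A2", volumeLevel: 0.0 }], // e.g. advertisements default to mute
]);

function resolveVolume(objectId: string, runtimeParam?: ConfigurationParameter): number {
  if (runtimeParam && runtimeParam.objectId === objectId) {
    return runtimeParam.volumeLevel; // user selection made at run time wins
  }
  return parameterRepository.get(objectId)?.volumeLevel ?? 1.0; // stored value or default
}

resolveVolume("A2");                                       // 0.0, from the repository
resolveVolume("A2", { objectId: "A2", volumeLevel: 0.5 }); // 0.5, from the user
```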

In one example use case of FIG. 1, document 18 may include video object 22 and advertisement object 24. Application 8 may be a web browser. In the current example, audio control module 10 may automatically register video object 22 and store a corresponding multimedia object identifier “A1.” Audio control module 10 may automatically register advertisement object 24 and store multimedia object identifier “A2.” In response to registering video object 22 and advertisement object 24, audio control module 10 may automatically select and execute one or more configuration parameters stored in parameter repository 14. In other examples, a user may specify configuration parameters at run-time. For example, a configuration parameter provided to audio control module 10 by the user may specify a volume level that audio control module 10 may use to change a volume level of audio data.

In the current example, application 8 may execute first and second multimedia applications as child processes that may render content of video object 22 and advertisement object 24, respectively. Video object 22 and advertisement object 24 may each include audio content that may be sent as audio data to audio device 6. In the current example, the first application may send audio data of video object 22 to audio device 6 via an API of audio control module 10. Simultaneously, the second application may send audio data of advertisement object 24 to audio device 6 via an API of audio control module 10. In the current example, a configuration parameter may, for example, specify that sound of advertisement object 24 is to be disabled. Consequently, audio control module 10 may set the volume level associated with advertisement object 24 to mute, e.g., no volume. As a result, a user may not be interrupted by audio from advertisement object 24 because audio control module 10 has automatically executed the corresponding configuration parameter.

In some examples, application 8 may further include a control panel 26 that indicates a volume level associated with each registered multimedia object. For example, volume selector 28A may be associated with a first multimedia object, e.g., video object 22, as indicated by labels 28B and 36. Volume selector 30A may be associated with a second multimedia object, e.g., advertisement object 24, as indicated by labels 30B and 38. Video object 22 and advertisement object 24 may each be registered with audio control module 10, e.g., multimedia object identifiers that identify each object may be stored by audio control module 10. A volume selector may indicate a volume level of a corresponding multimedia object within a range of selectable volume levels.
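
The data behind such a control panel can be pictured as one selector record per registered multimedia object. The field names below are illustrative, not the patent's.

```typescript
// Hypothetical control panel model: one volume selector per registered object.
interface VolumeSelector {
  objectId: string;   // registered multimedia object identifier, e.g. "A1"
  label: string;      // label shown next to the selector
  level: number;      // current position within the selectable range (0.0 .. 1.0)
  exclusive: boolean; // whether the object's exclusive selector is checked
}

function buildControlPanel(registered: Array<{ id: string; label: string }>): VolumeSelector[] {
  return registered.map(obj => ({
    objectId: obj.id,
    label: obj.label,
    level: 1.0, // default volume level
    exclusive: false,
  }));
}

const panel = buildControlPanel([
  { id: "A1", label: "Android Cloud to Device Messaging Framework" },
  { id: "A2", label: "Advertisement" },
]);
```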

In one example, control panel 26 may enable a user to modify a volume level associated with a multimedia object via audio control module 10 by changing a volume selector. For example, a user may change a volume level using a volume selector by sliding a volume selector from one volume level position to a different volume level position. Audio control module 10, in response to determining the user has adjusted the volume selector, may receive a configuration parameter indicating the new volume level. Audio control module 10 may, in response to receiving the configuration parameter, output the audio data at the new volume level indicated by the configuration parameter. In this way, the user maintains fine-grain control over the volume level of each multimedia object in document 18. In some examples, a volume level associated with a multimedia object may be stored in object identifier repository 12.
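
Handling a selector change then amounts to turning the new slider position into a configuration parameter and handing it to the audio control module. The handler and callback names below are assumptions.

```typescript
// Hypothetical selector-change handler producing a configuration parameter.
interface VolumeChange {
  objectId: string; // identifier of the multimedia object, e.g. "A1"
  level: number;    // new volume level selected by the user
}

type ApplyConfiguration = (param: VolumeChange) => void;

function onVolumeSelectorChanged(
  objectId: string,
  newLevel: number,
  apply: ApplyConfiguration,
): void {
  const param: VolumeChange = { objectId, level: newLevel };
  apply(param); // the module outputs subsequent audio data at this level
}

// Example: the user slides the selector for object "A1" down to 30%.
onVolumeSelectorChanged("A1", 0.3, p => console.log(`set ${p.objectId} to ${p.level}`));
```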

As shown in FIG. 1, a control panel may display other controls associated with a multimedia object in addition to a volume selector. For example, FIG. 1 further includes exclusive selectors 32, 34. In one example, exclusive selector 32 may be associated with video object 22 and exclusive selector 34 may be associated with advertisement object 24. In one example, an exclusive selector, when selected, may indicate that only audio data from the corresponding selected multimedia object may be sent to audio device 6 of computing device 2. In such examples, audio control module 10 may identify the multimedia object corresponding to the selected exclusive selector and only provide audio data from the selected multimedia object to audio device 6.
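
The exclusive-selector behaviour can be sketched as a function that leaves the selected object's level untouched and mutes every other registered object; the names below are illustrative.

```typescript
// Hypothetical exclusive-selector logic: only the selected object keeps its volume.
function effectiveLevels(
  levels: Map<string, number>, // per-object volume levels
  exclusiveId: string | null,  // object whose exclusive selector is checked, if any
): Map<string, number> {
  if (exclusiveId === null) {
    return new Map(levels); // no exclusive selection; levels pass through unchanged
  }
  const result = new Map<string, number>();
  for (const [id, level] of levels) {
    result.set(id, id === exclusiveId ? level : 0); // mute everything else
  }
  return result;
}

const levels = new Map([["A1", 0.8], ["A2", 0.5]]);
effectiveLevels(levels, "A1"); // A1 stays at 0.8, A2 is muted
```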

As shown in FIG. 1, application 8 may display a representation, e.g., a label, that includes a multimedia object identifier in control panel 26. For example, label 28B may include a multimedia object identifier associated with video object 22. In some examples, application 8 may also display a representation of the multimedia object identifier at or near the location of the multimedia object displayed in GUI 16. For example, label 36 may indicate video object 22 is associated with volume selector 28A of control panel 26. In this way, a user may quickly identify a volume selector that corresponds to a multimedia object.



Patent Info
Application #: US 20120263319 A1
Publish Date: 10/18/2012
Document #: 13/251,111
File Date: 09/30/2011
USPTO Class: 381/107
Other USPTO Classes: (none)
International Class: H03G 3/00
Drawings: 6


