Devices with enhanced audio


A system for enhancing audio including a computer and an output device. The computer includes a sensor configured to determine a user location relative to the computer. The sensor is also configured to gather environment data corresponding to an environment of the computer. The computer also includes a processor in communication with the sensor and configured to process the user location and the environment data and adjust at least one of an audio output or a video output. The output device is in communication with the processor and is configured to output at least one of the audio output or the video output.

Apple Inc. - Cupertino, CA, US
USPTO Application #: 20130028443 - Class: 381/107 (USPTO) - 01/31/13 - Class 381: Electrical Audio Signal Processing Systems And Devices > Including Amplitude Or Volume Control > Automatic

Inventors: Aleksandar Pance, Brett Bilbrey, Darby E. Hadley, Martin E. Johnson, Ronald Nadim Isaac



The Patent Description & Claims data below is from USPTO Patent Application 20130028443, Devices with enhanced audio.


TECHNICAL FIELD

The present invention relates generally to electronic devices, and more specifically, to audio output for electronic devices.

BACKGROUND

Electronic devices, such as computers, mobile phones, audio players, laptops, tablet computers, and televisions (hereinafter an “electronic device”), typically may have an integrated audio output device (e.g., speakers) or may be able to communicate with an audio output device. Additionally, many electronic devices may also include a visual or video output device or communicate with a video display device.

Many audio/visual output devices could provide improved audio or video output if that output were adjusted to the environment, surroundings, circumstances, and/or program. However, many audio and video output devices may require a user input or interaction in order to change a particular output, or may not have variable output settings. In these instances the audio and/or video output may not provide the best quality sound or images for the particular environment, program, circumstance, or the like.

SUMMARY

Examples of the disclosure may take the form of a method for outputting audio from a computing device. The method may include detecting a user by a sensor. Once a user is detected, a processor determines whether the user is within an optimum range for a current audio output of an audio output device. If the user is not within the optimum range, the processor modifies the audio output. Additionally, the sensor determines whether the user is oriented towards the computing device. Based on the user's orientation, the processor adjusts an audio device.

Other examples of the disclosure may take the form of a method for enhancing audio for a computer. The method may include determining, by a sensor, a user location relative to the computer. Once the user location has been determined, the sensor may gather environment data corresponding to an environment of the computer. Then, a processor adjusts an audiovisual setting in view of the environment data and the user location.

Still other examples of the disclosure may take the form of a system for enhancing audio including a computer and an output device. The computer includes a sensor configured to determine a user location relative to the computer. The sensor is also configured to gather environment data corresponding to an environment of the computer. The computer also includes a processor in communication with the sensor and configured to process the user location and the environment data and adjust at least one of an audio output or a video output. The output device is in communication with the processor and is configured to output at least one of the audio output or the video output.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A is a diagram illustrating a system for enhancing audio output.

FIG. 1B is a block diagram of a computer of the system of FIG. 1A.

FIG. 1C is a diagram illustrating the computer in communication over a network with a second computer.

FIG. 2 is a block diagram of the system of FIG. 1A with select audio and video processing paths illustrated.

FIG. 3 is a flow chart illustrating an exemplary method for adjusting an audio output based on a user location and position.

FIG. 4 is a flow chart of an exemplary method for enhancing an audio and/or video output.

FIG. 5A is a diagram of the computer displaying a multi-person video conference.

FIG. 5B is a top plan view of users displayed on the computer of FIG. 5A being captured by a second computer.

FIG. 5C is a diagram of the computer of FIG. 5A with the audio and video of a Person A and B enhanced.

FIG. 6 is a flow chart illustrating an exemplary method for enhancing the audio and/or video of a particular person during a video conferencing session as illustrated in FIGS. 5A-5C.

FIG. 7A is a diagram of the computer with an instant messaging, voice, or video chat program running and displaying multiple instances.

FIG. 7B is a diagram of an audio direction for Audios A, B, C, D corresponding to multiple audio/video instances of FIG. 7A.

FIG. 8 is a flow chart illustrating an exemplary method for directing the audio of a particular audio/video instance.

DETAILED DESCRIPTION

Overview

In some embodiments herein, the disclosure may take the form of a method to enhance audio output from an electronic device based on one or more criteria, such as an active application, user interactions, and environmental parameters. The method may also include providing user input without significant (if any) active user interaction. In other words, the system may rely on sensors and imaging devices to interpolate user inputs so that the user may not have to physically or knowingly enter them into the system. This may allow for an audio output device to dynamically adjust to different user parameters to enhance the audio output without requiring active inputs from the user directly.

In one embodiment, the system may enhance the audio output for video conferencing or chat. Some users may use video conferencing to have conversations with a group of people. For example, a traveling parent may video conference with the entire family, including children and a spouse. With groups of people, some people may be positioned closer to or farther away from the computer. Additionally, there may be multiple different people talking at a single time. During video conferencing, the user on the receiving end may have a difficult time determining what each person is saying, especially if there are multiple people talking at a single time.

The system may capture images of the different users (e.g., via a video camera) and the receiving user may be able to enhance the audio for a particular user. For example, the receiving user may tap on the image of the particular user (or otherwise select or indicate the user) upon whom he or she wishes the embodiment to focus, and the system may digitally enhance the audio as well as steer a microphone towards the user in order to better capture the user's audio input. In one example, the system may include a computer having multiple microphones spaced around a perimeter of a display screen, and the particular microphones may be turned on/off as well as rotated in order to best capture a desired audio signal.
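
As an illustration of this kind of selection-driven microphone steering, the sketch below maps a tap position on the displayed video to a steering angle and picks the microphones nearest that direction. It is a minimal sketch under stated assumptions: the helper name, camera field of view, microphone layout, and selection threshold are hypothetical, and a real system would likely apply per-microphone beamforming weights rather than simple on/off selection.

```python
# Hypothetical sketch: steer a microphone array toward the participant the
# receiving user tapped on screen. Geometry and names are illustrative
# assumptions, not the patent's actual implementation.

def steer_toward_selection(tap_x, frame_width, mics, fov_degrees=60.0):
    """Map a horizontal tap position to a steering angle and select
    the microphones whose mounting angle is closest to it."""
    normalized = (tap_x / frame_width) * 2.0 - 1.0   # tap position in [-1, 1]
    angle = normalized * (fov_degrees / 2.0)         # angle within camera field of view
    # Enable only microphones aimed near the target direction.
    active = [m for m in mics if abs(m["angle"] - angle) < 30.0]
    return angle, active

mics = [{"id": i, "angle": a} for i, a in enumerate((-45.0, -15.0, 15.0, 45.0))]
print(steer_toward_selection(tap_x=960, frame_width=1280, mics=mics))
```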

Similarly, the system may also be configured to direct a microphone, enhance the audio and/or focus the video image on a person that is speaking. Mouth tracking or speech recognition may be used to focus the audio and/or video on a particular user that is speaking. This may allow a user receiving an audio data stream to better be able to hear the user speaking (e.g., the transmitting user of the system). Thus, the enhancement feature of either or both of the audio or video images of a user may be automatic (e.g., based on mouth tracking or speech recognition) or may be based on user input (e.g., a user can select a user or focus area).

Output audio quality may depend, at least partially, on the environment. For example, echo cancellation may be desired and/or affected by the size and acoustics of the room. Two factors that may affect the quality of output audio may include room dimension and reverberant qualities. In one embodiment, the system may be configured to adjust the audio output depending on a user's location with respect to the audio output device, the user's position (e.g., facing head-on or turned away) with respect to the audio output device, and environmental inputs (such as the size of the room, reverberation of the room, temperature, and the like). The user's inputs may include his or her location within a room, whether he or she is facing the audio output device, and the like. Furthermore, the system may vary the audio output not only based on the user and environmental inputs, but also on the current application that the computer or audio output device may be running. For example, if the application is a telephone call, the response may be varied as compared with a music player application.
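
One simple way to picture the distance and orientation adjustment is as a gain offset computed from how far the listener is from a reference listening position. The sketch below assumes a free-field falloff of roughly 6 dB per doubling of distance and a small fixed boost when the listener is turned away; both figures and the function name are illustrative assumptions, not values taken from the patent.

```python
import math

def level_adjustment_db(distance_m, reference_m=1.0, facing_device=True):
    """Gain change (dB) intended to keep perceived level roughly constant
    as the listener moves away from the reference listening position."""
    distance_m = max(distance_m, 0.1)                   # guard against log(0)
    gain = 20.0 * math.log10(distance_m / reference_m)  # undo free-field falloff
    if not facing_device:
        gain += 3.0                                     # assumed boost when turned away
    return gain

print(level_adjustment_db(2.0))                          # ~ +6 dB at twice the reference distance
print(level_adjustment_db(4.0, facing_device=False))     # ~ +15 dB
```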

In various embodiments the system may include video, audio, and environmental sensors. For example, image sensors (e.g., cameras), depth sensors (ultrasonic, infrared, radio frequency and so on), and the like may be used. Additionally, the desired output may also be changed based on a user location to the computer, e.g., if a user is far away from the computer in a large room versus if a user is close to the computer in small room. For example, if an object is presented in a video as being positioned far away from the user, the output audio of the particular object (or user) may be varied in order to sound to the user as though the object is far away. In this implementation, depth may be provided to local audio of a far-field image in order to enhance the overall audio/visual experience of the user.
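
The far-field depth idea can be illustrated with a very crude distance cue: attenuate the direct signal and mix in a delayed copy whose relative level grows with the displayed distance. All gains and the delay scaling below are invented for illustration; the patent does not describe a specific rendering method.

```python
def add_distance_cue(samples, displayed_distance_m, sample_rate=48000):
    """Crude distance cue: attenuate the dry signal and mix in a single
    delayed reflection whose level grows with displayed distance.
    All constants are illustrative assumptions."""
    distance = max(displayed_distance_m, 1.0)
    dry_gain = 1.0 / distance                     # direct sound falls off with distance
    wet_gain = min(0.5, 0.1 * distance)           # reflected sound becomes relatively louder
    delay = int(sample_rate * 0.02 * distance)    # assumed 20 ms of extra delay per metre
    out = [dry_gain * s for s in samples] + [0.0] * delay
    for i, s in enumerate(samples):
        out[i + delay] += wet_gain * s
    return out

print(add_distance_cue([1.0, 0.0, 0.0], displayed_distance_m=2.0, sample_rate=100)[:8])
```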

In still other embodiments, the system may be configured to adjust an output audio based on the user. Men, women, and children may all have different hearing spectrums; generally, women may hear better than men, and children may hear better than either adult men or women. The system may utilize speech or facial recognition or other gender-identifying techniques in order to vary the output audio depending on the particular user.
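
A minimal sketch of how such per-listener adjustment might be wired up: once the system has classified the listener into a profile, it looks up an equalization tweak for that profile. The profile names and gain values below are placeholders, not figures from the patent.

```python
# Placeholder listener profiles mapped to a high-frequency shelf gain (dB).
# Values are illustrative assumptions only.
HF_SHELF_GAIN_DB = {
    "default": 0.0,
    "reduced_high_frequency_hearing": 3.0,     # modest treble boost
    "sensitive_high_frequency_hearing": -2.0,  # slight treble cut
}

def select_treble_shelf(profile: str) -> float:
    """Return the treble-shelf gain to apply for a recognized listener profile."""
    return HF_SHELF_GAIN_DB.get(profile, HF_SHELF_GAIN_DB["default"])

print(select_treble_shelf("reduced_high_frequency_hearing"))  # 3.0
```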

Exemplary System

In an exemplary embodiment, the disclosure may take the form of a system for providing an enhanced audio experience for a user. FIG. 1A is a block diagram of an exemplary system 100 for providing enhanced audio. The system 100 may include a computer 102 or other electronic device and audio output devices 106, 110 (which may be integrated with the computer 102, separate from it, or a combination of both). The computer 102 may be substantially any type of electronic device with processing capabilities, including, but not limited to, a laptop, tablet, smart phone, audio player, and television. In this embodiment, the computer 102 is in communication with an external audio output device 110 and an integrated audio output device 106. However, it should be noted that in some instances, the system 100 may include a single audio output device 106, 110 or may include multiple other audio output devices (e.g., a surround-sound 5-speaker system). The audio output devices 106, 110 may be a speaker or set of speakers, headphones, or another device capable of producing a sound in response to an electronic signal.

The audio devices 106, 110 may be positioned substantially anywhere on the computer 102 and/or around the computer 102. The type, power, and structure of the audio devices 106, 110 may affect the quality of the audio produced from the computer 102, as well as affect the various software changes that may be needed to produce the best sound.

FIG. 1B is a block diagram of an exemplary computer 102. The computer 102 may include a processor 118, a network/communication interface 120, an input/output interface 126, a video input/output interface 128, sensors 124, memory 130, audio input/output interface 132, video sensor 134, and/or a microphone 136. The various computer 102 components may be electronically connected together via a system bus 122 (or multiple system buses). It should be noted that any of the various components may be omitted and/or combined. For example, the video input/output interface 128 may be combined with either or both the audio input/output interface 132 and the general input/output interface 126. Furthermore, the computer 102 may include additional local or remote components that are not shown; and FIG. 2 is meant to be exemplary only.

The processor 118 may control the operation of the computer 102 and its various components. The processor 118 may be substantially any electronic device capable of processing, receiving, and/or transmitting instructions. For example, the processor 118 may be a microprocessor or a microcomputer.

The network/communication interface 120 may receive and transmit various electrical signals. For example, the network/communication interface 120 may be used to connect the computer 102 to a network in order to transmit and receive signals to and/or from other computers or electronic devices via the network. The network/communication interface 120 may also be used to transmit and receive electronic signals via a wireless or wired connection (including, but not limited to, Internet, WiFi, Bluetooth, Ethernet, USB, and Firewire).

The memory 130 may store electronic data that may be utilized by the computer 102. For example, the memory 130 may store electrical data containing any type of content, including, but not limited to, audio files, video files, document files, and data files. Stored data may correspond to one or more various applications and/or operations of the computer. The memory 130 may be generally any format, including, but not limited to, non-volatile storage, a magnetic storage medium, optical storage medium, magneto-optical storage medium, electrical storage medium, read only memory, random access memory, erasable programmable memory, and flash memory. The memory 130 may be provided local to and/or remote from the computer 102.

The various input/output interfaces 126, 128, 132 may provide communication to and from input/output devices. For example, the audio input/output interface 132 may provide input and output to and from the audio devices 106, 110. Similarly, the video input/output interface 128 may provide input and output to a display device (e.g., computer monitor, display screen, or television). Additionally, the general input/output interface 126, 128, 132 may receive input from control buttons, switches and so on. In some embodiments, the input interfaces may be combined. For example, the input/output interfaces 126, 128, 132 may receive data from a user (e.g., via a keyboard, touch sensitive surface, mouse, audible input or other device), control buttons on the computer 102 (e.g., power button, volume buttons), and so on. Additionally, the input/output interface 112 may also receive/transmit data to and from an external drive, e.g., a universal serial bus (USB), or other video/audio/data inputs.

As can be seen in FIG. 1C, in some instances, the computer 102 may be in communication with a second computer 103 via a network 138. Additionally, as shown in FIG. 1C, in some instances, the computer 102 may be connected via a network 140 to another or second computer 103 (or server). For example, the computer 102 may connect with the second computer 103 for conferencing or chat applications. Additionally, the computer 102 may receive streaming audio and/or video from the second computer 103.

The network 138 provides electronic communication between the first computer 102 and the second computer 103. The network 138 may be virtually any type of electronic communication mechanism/path and may be wireless, wired, or a combination of wired and wireless. The network 138 may include the Internet, Ethernet, universal serial bus (USB) cables, or radio signals (e.g., WiFi, Bluetooth).

The microphone 136 may be integrated into the computer 102 or separately attached and in communication with the processor 118. The microphone 136 is an acoustic-to-electric transducer and is configured to receive an audio input and produce an electrical output corresponding to the audio. There may be multiple microphones 136 incorporated or otherwise in communication with the computer 102. For example, in some implementations, there may be a microphone array of multiple microphones positioned at various locations around the computer 102.

The video sensor 134 may be a video or image capturing device(s). The video sensor 134 may be integrated into the computer 102 (e.g., connected to an enclosure of the computer 102) and/or may be external and in communication with the computer 102. The video sensor 134 may be used to capture video and still images that may be used for various applications such as video conferencing/chat.

FIG. 2 is a block diagram of the system 100 illustrating exemplary audio/video processing paths from input to output. Referring to FIGS. 1A, 1B, and 2, the system 100 may communicate between various sensors to enhance and adjust an audio and video output. The video sensor 134 may provide video input to the processor 118, the miscellaneous sensors 124 may provide user and environmental data to the processor 118, and the audio input 132 may provide input audio to the processor 118. The processor 118 may separately or jointly process the various inputs and adjust a video and audio output to present to the speaker 110 and/or display 104.

In one example, the video sensor 134, sensors 124, and audio input 132 may provide image data regarding the user and/or the environment (e.g., room, surroundings) of the computer 102. The processor 118 may then enhance or alter the audio output characteristics provided to the speaker 110 to provide an enhanced audio experience. The way the audio output sounds to a user may be dependent on or affected by where the user is located with respect to the audio output device, as well as characteristics of the room or environment. If the audio characteristics or settings are not altered, an audio signal that has a particular sound in a first room may sound drastically different in a second room, for example, if the first room is smaller than the second room, or if the first room has carpet and the second room has wood flooring.
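
As a rough illustration of mapping such environment data to output settings, the sketch below chooses an output preset from an estimated room volume and surface absorption. The thresholds, preset fields, and function name are illustrative assumptions; the patent does not specify how the processor 118 encodes these decisions.

```python
# Illustrative only: pick output settings from coarse environment estimates.
def choose_room_preset(room_volume_m3, absorption):
    """Return assumed output settings for a room of the given size/liveness."""
    if room_volume_m3 > 60 and absorption < 0.3:
        # Large, reverberant room: stronger echo cancellation, tame the bass.
        return {"echo_cancellation": "aggressive", "bass_gain_db": -2.0}
    if room_volume_m3 < 20:
        # Small room: little reverberation to fight.
        return {"echo_cancellation": "light", "bass_gain_db": 0.0}
    return {"echo_cancellation": "normal", "bass_gain_db": -1.0}

print(choose_room_preset(room_volume_m3=80, absorption=0.2))
```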

Therefore, after receiving video and image input and audio input 132 (e.g., echoing characteristics, location of a user with respect to the computer 102, direction of the user with respect to the computer 102), the audio and video output can be enhanced by the processor 118. This may enable the computer 102 to adjust the audio and/or video to best accommodate the user and/or environment.

As can be seen in FIG. 2, the processor 118 may include separate processing units, such as an image processing unit 142, a user/environment interface processing unit 144, an audio processing unit 146, and an output processing unit 145. These processing units 142, 144, 145, 146 may be integrated into the processor 118 or may be separate devices. Each processing unit 142, 144, 145, 146 may be in communication with a particular sensor in order to receive output from the sensors as well as to adjust the sensor inputs. For example, the audio processing unit 146 may direct or steer the microphone 136 towards a particular user who is speaking to better capture his or her voice. Similarly, the image processing unit 142 may focus or zoom the video sensor 134 on a particular user. In still other examples, the user/environment interface processing unit 144 may direct particular sensors 124 to gather additional environmental/user data. Additionally, the output processing unit 145 may include frequency filters to post-process an audio signal (e.g., to reduce noise frequencies, enhance particular frequencies, and so on), correct errors in audio levels, adjust loudness to a particular level (e.g., equalize an audio output), perform echo cancellation, apply peaking filters, and so on.
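
As one concrete example of the kind of filtering the output stage might apply, the sketch below builds a single peaking (bell) equalizer as a biquad and runs it over a block of samples. The coefficient formulas follow the widely used Audio EQ Cookbook form; treating the output processing unit 145 as a chain of such biquads is an assumption for illustration, not the patent's stated implementation.

```python
import math

def peaking_coefficients(sample_rate, center_hz, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients (Audio EQ Cookbook form)."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    # Normalize by a0 so the filter reduces to a plain difference equation.
    return [c / a[0] for c in b], [1.0, a[1] / a[0], a[2] / a[0]]

def apply_biquad(samples, b, a):
    """Direct-form I filtering of a sample block."""
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        out.append(y)
        x2, x1, y2, y1 = x1, x, y1, y
    return out

b, a = peaking_coefficients(48000, center_hz=2000, gain_db=4.0)
print(apply_biquad([0.0, 1.0, 0.0, 0.0, 0.0], b, a))  # head of the impulse response
```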

Adjusting Audio Output Based on User Location and Position

FIG. 3 is a flow chart illustrating an exemplary method 200 for adjusting an audio output based on a user location and position. The method 200 may begin with operation 202, in which the computer 102 may detect a user or users. The computer 102 may utilize the sensors 124 to capture motion, may utilize the video sensor 134 to capture and analyze an image (e.g., facial recognition), or may utilize the audio sensors 132 to capture noise made by a user or users.

Once a user or users are detected, the method 200 may proceed to operation 204, in which the computer 102 may determine whether the user or users are within an optimum range based on the current audio output settings and the speaker 110 arrangement. For example, the computer 102 may determine a user location utilizing various sensors. The computer 102 may use the same sensors and methods used to detect a user's presence to determine the user's location with respect to the computer 102 and/or the speakers 110. The detection of a user's location may be an estimate or a single input, e.g., the computer 102 may simply detect that a user is not directly in front of the computer 102, or the detection may be more detailed and the computer 102 may utilize more advanced sensing techniques to determine the approximate location of the user with respect to the computer 102.

Once the user's location is determined, the computer 102 may compare the user's location with the current audio output settings to determine whether the audio is within an optimum range based on the position of the user. As one example, if the user is located a few yards away from the computer 102 and the audio is configured to output as if the user were sitting directly in front of the speakers 110 or computer 102, the audio may need to be adjusted. The audio may be adjusted for the user so that the volume is increased, the external speakers 110 are turned on, the internal speakers 106 are turned off, the surround sound is switched from a “screen channels” setting into a surround sound format, or the surround sound channels are redirected from internal speakers to external speakers and to left-surround and right-surround channels. On the other hand, if the audio is already adjusted or configured with a distance setting, the audio may not need to be adjusted based on the user's location.
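
Operation 204's comparison can be pictured as a small settings-update step, as in the sketch below. The distance threshold, setting names, and volume step are made-up placeholders; the patent describes the kinds of adjustments but not specific values.

```python
def adjust_for_listener(distance_m, settings):
    """Return updated output settings for an estimated listener distance.
    Thresholds and field names are illustrative assumptions."""
    if distance_m > 2.0 and settings.get("configured_for") == "near_field":
        settings = dict(settings,
                        configured_for="far_field",
                        volume=min(settings["volume"] + 10, 100),
                        internal_speakers=False,        # hand off to external speakers
                        external_speakers=True,
                        surround_mode="full_surround")  # leave "screen channels" mode
    return settings

current = {"configured_for": "near_field", "volume": 40,
           "internal_speakers": True, "external_speakers": False,
           "surround_mode": "screen_channels"}
print(adjust_for_listener(distance_m=3.5, settings=current))
```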



Download full PDF for full patent description/claims.


Patent Info
Application #: US 20130028443 A1
Publish Date: 01/31/2013
Document #: 13193461
File Date: 07/28/2011
USPTO Class: 381/107
Other USPTO Classes: (none listed)
International Class: 03G3/00
Drawings: 12

