Method and apparatus for rendering a perspective view of objects and content related thereto for location-based services on mobile device



Title: Method and apparatus for rendering a perspective view of objects and content related thereto for location-based services on mobile device.
Abstract: A method including causing, at least in part, rendering of a perspective view showing one or more objects in a field of view. The method further including retrieving content associated with an object of the one or more objects in the field of view, and causing, at least in part, rendering of a graphic representation relating to the content on a surface of the object visible in the perspective view in a user interface for a location-based service of a mobile device. ...


Nokia Corporation - Espoo, FI
USPTO Application #: 20110279446 - Class: 345/419 - Published: 11/17/2011


The Patent Description & Claims data below is from USPTO Patent Application 20110279446, Method and apparatus for rendering a perspective view of objects and content related thereto for location-based services on mobile device.


BACKGROUND

Service providers (e.g., wireless, cellular, Internet, content, social network, etc.) and device manufacturers are continually challenged to deliver value and convenience to consumers by, for example, providing compelling network services. One area of interest has been the development of mapping and navigating graphics (e.g., digital maps) and/or images (e.g., 360° panoramic street-level views of various locations and points of interest) augmented with, for instance, navigation tags and location relevant content. Typically, navigation, mapping, and other similar services can display either panoramic views or two-dimensional rendered maps. Content information is typically limited to use in 2D map views, and augmented reality views that attempt to display content tend to provide an unstable, cluttered display.

SOME EXAMPLE EMBODIMENTS

Therefore, there is a need for an approach for rendering a perspective view of objects and content related thereto for location-based services on a mobile device.

According to one embodiment, a method comprises causing, at least in part, rendering of a perspective view showing one or more objects in a field of view. The method also comprises retrieving content associated with an object of the one or more objects in the field of view. The method further comprises causing, at least in part, rendering of a graphic representation relating to the content on a surface of the object visible in the perspective view in a user interface for a location-based service of a mobile device.
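By way of illustration only, the following sketch (in Python, not part of the application; all class, function, and field names are hypothetical stand-ins) walks through the three claimed steps: rendering the perspective view, retrieving content for an object in the field of view, and rendering a graphic representation of that content on a visible surface of the object.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SceneObject:
    object_id: str
    screen_surface: tuple  # placeholder for the object's visible facade region

@dataclass
class PerspectiveView:
    overlays: List[str] = field(default_factory=list)

    def draw_objects(self, objects: List[SceneObject]) -> None:
        # Step 1: render the perspective view showing the objects in the field of view.
        print(f"rendering {len(objects)} object(s) in the field of view")

    def draw_on_surface(self, obj: SceneObject, label: str) -> None:
        # Step 3: affix the graphic representation to the object's visible surface.
        self.overlays.append(f"{label} @ {obj.object_id}")

def render_location_view(objects_in_fov: List[SceneObject],
                         content_catalog: Dict[str, str]) -> PerspectiveView:
    view = PerspectiveView()
    view.draw_objects(objects_in_fov)
    for obj in objects_in_fov:
        # Step 2: retrieve content associated with the object.
        content: Optional[str] = content_catalog.get(obj.object_id)
        if content:
            view.draw_on_surface(obj, content)
    return view

if __name__ == "__main__":
    objs = [SceneObject("imf-building", (0, 0, 120, 80))]
    print(render_location_view(objs, {"imf-building": "International Monetary Fund"}).overlays)
```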

According to another embodiment, an apparatus comprises at least one processor, and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause, at least in part, the apparatus to cause, at least in part, rendering of a perspective view showing one or more objects in a field of view. The apparatus is also caused to retrieve content associated with an object of the one or more objects in the field of view. The apparatus is further caused to cause, at least in part, rendering of a graphic representation relating to the content on a surface of the object visible in the perspective view in a user interface for a location-based service of the apparatus, wherein the apparatus is a mobile device.

According to another embodiment, a computer-readable storage medium carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause, at least in part, an apparatus to perform causing, at least in part, rendering of a perspective view showing one or more objects in a field of view. The apparatus is also caused to perform retrieving content associated with an object of the one or more objects in the field of view. The apparatus is further caused to perform causing, at least in part, rendering of a graphic representation relating to the content on a surface of the object visible in the perspective view in a user interface for a location-based service of a mobile device.

According to another embodiment, an apparatus comprises means for causing, at least in part, rendering of a perspective view showing one or more objects in a field of view. The apparatus also comprises means for retrieving content associated with an object of the one or more objects in the field of view. The apparatus further comprises means for causing, at least in part, rendering of a graphic representation relating to the content on a surface of the object visible in the perspective view in a user interface for a location-based service of a mobile device.

Still other aspects, features, and advantages of the invention are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the invention. The invention is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings:

FIG. 1 is a diagram of a system capable of rendering a perspective view of objects and content related thereto for location-based services on a mobile device, according to one embodiment;

FIG. 2 is a diagram of the components of a mapping and user interface application, according to one embodiment;

FIG. 3A is a flowchart of a process for rendering a perspective view of objects and content related thereto for location-based services on a mobile device, according to one embodiment;

FIG. 3B is a flowchart of a process for omitting a graphic representation of a distant object that is obstructed by the rendering of another object in a perspective view, according to one embodiment;

FIGS. 4A and 4B are diagrams of user interfaces utilized in the processes of FIGS. 3A and 3B, according to various embodiments;

FIG. 5 is a diagram of a user interface utilized in the processes of FIGS. 3A and 3B, according to one embodiment;

FIG. 6 is a diagram of a user interface utilized in the processes of FIGS. 3A and 3B, according to one embodiment;

FIG. 7 is a diagram of hardware that can be used to implement an embodiment of the invention;

FIG. 8 is a diagram of a chip set that can be used to implement an embodiment of the invention; and

FIG. 9 is a diagram of a mobile terminal (e.g., handset) that can be used to implement an embodiment of the invention.

DESCRIPTION OF SOME EMBODIMENTS

Examples of a method, apparatus, and computer program for rendering a perspective view of objects and content related thereto for location-based services on a mobile device are disclosed. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It is apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without these specific details or with an equivalent arrangement. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the embodiments of the invention.

As used herein, the term “image” refers to one or a series of images taken by a camera (e.g., a still camera, digital camera, video camera, camera phone, etc.) or any other imaging equipment. Although various embodiments are described with respect to a live camera view, it is contemplated that the approach described herein may be used with other live or real-time images (e.g., a still image, a live view, a live webcam view, etc.) as long as the image is associated with a location, a tilt angle, and heading of the imaging device (e.g., camera) at the time of image capture.

As used herein, the term “point of interest” (POI) refers to any point specified by a user or service provider. The term POI is also used interchangeably with the term “object.” By way of example, the point of interest can be a landmark, restaurant, museum, building, bridge, tower, dam, factory, manufacturing plant, space shuttle, etc.

As used herein, the term “perspective view” refers to any view that provides some perspective to an object shown therein, whether shown on 2D or 3D displays and whether using 2D or 3D images. Such perspective views can be real-time images (e.g., in an augmented reality setting using a camera of the device), a panoramic image (e.g., a pre-stored panoramic photograph), 3D modeling in virtual reality, or other modified views that attempt to show real or virtual depth to objects or surroundings, whether constructed with 2D images or 3D images.

FIG. 1 is a diagram of a system capable of rendering a perspective view of objects and content related thereto for location-based services on a mobile device, according to one embodiment.

As mentioned previously, navigation, mapping, and other like services and systems display either panoramic views or two-dimensional rendered maps; however, they do not attempt to merge the two views. When content is presented in a 2D view, certain content might be clustered or too close together to be visible to the user. When content is presented in a 3D view, e.g., a panoramic image or directly through a camera view, the visible content is limited to the current scene or position of the camera. Switching views can also cause confusion in the understanding of the space and location, especially when the user is not very familiar with the place in view. Some related art services show content only on the map and only when the view is maximized. Other augmented reality or mixed reality services may display content in different ways depending on the kind of content; however, typically the content is shown in a shaking manner and is not affixed in a stable manner to an object or POI.

To address shortcomings of other related art systems, a system 100 of FIG. 1 introduces the capability of rendering a perspective view of objects and content related thereto for location-based services on a mobile device. The system 100 can render a user interface for a location-based service that has a main view portion and a preview portion, which can allow a user to simultaneously visualize both a perspective view, for example, showing panoramic images of an area, and a corresponding plan view of a map of the area, and switch between such views as desired.

Thus, a small preview can be displayed in the mobile UI, where the most relevant content is shown. For example, when a user is browsing a panoramic view on the UI, the user has the option to preview the map showing the surroundings of what is displayed in the panorama view. Similarly, when browsing the map, the preview shows the closest panorama to the point the user has focused on in the map. Both views display the content that can be found in the area, allowing a better sense of the space and location. The actions in the main view are reflected in the preview, so that the user always has a sense of where to go, physically if the user happens to be in the location in view, or virtually if the user is remotely browsing the area. Selecting rich content information in a crowded area on the map can open a list view of all the content in that crowded area, while selecting content on the panorama can open more specific content or a list view. The perspective view also limits the display of graphic representations of such rich content information to objects/POIs that are visible in the perspective view, and omits graphic representations for those that are not visible, in order to provide an uncluttered perspective view.
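A minimal sketch of the list-view behavior for crowded map areas might group nearby markers into a single selectable cluster; the screen-space threshold and data below are illustrative, not taken from the application.

```python
import math
from typing import Dict, List, Tuple

def cluster_markers(markers: Dict[str, Tuple[float, float]],
                    radius_px: float = 40.0) -> List[List[str]]:
    """Collapse markers within radius_px of an existing cluster center; each
    returned group is what a tap on the cluster would open as a list view."""
    clusters: List[Tuple[Tuple[float, float], List[str]]] = []
    for name, pos in markers.items():
        for center, members in clusters:
            if math.dist(center, pos) <= radius_px:
                members.append(name)
                break
        else:
            clusters.append((pos, [name]))
    return [members for _, members in clusters]

if __name__ == "__main__":
    print(cluster_markers({"Cafe": (100, 100), "Bank": (110, 95), "Museum": (400, 300)}))
```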

The preview can be tapped to switch views and to navigate easily depending on the user's needs. The preview can also be hidden by starting a full-screen view mode. If the user is navigating in the map or plan view, the user can tap any new location in the map, which takes the user's point of view to the tapped spot on the map, and at the same time the panorama in the preview updates to the closest panorama image to that newly defined spot on the map. The user can also rotate the phone or the point of view (POV) icon to change the orientation of the map, which affects the orientation of the panorama preview as well. The panorama image can be taken from the main panorama view at low resolution so that it adapts in size and loads quickly.
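By way of a rough sketch (hypothetical class and method names), the coupling between the map and the panorama preview could look like the following, where "closest panorama" is simplified to a nearest-neighbour lookup over panorama capture points.

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Panorama:
    pano_id: str
    lat: float
    lon: float

@dataclass
class DualView:
    panoramas: List[Panorama]
    heading_deg: float = 0.0
    current_pano: Optional[Panorama] = None

    def tap_map(self, lat: float, lon: float) -> None:
        # Moving the point of view on the map updates the preview to the
        # closest available panorama image.
        self.current_pano = min(
            self.panoramas,
            key=lambda p: math.hypot(p.lat - lat, p.lon - lon))

    def rotate(self, delta_deg: float) -> None:
        # Rotating the device (or the POV icon) reorients both views.
        self.heading_deg = (self.heading_deg + delta_deg) % 360.0

if __name__ == "__main__":
    view = DualView([Panorama("p1", 38.901, -77.039), Panorama("p2", 38.905, -77.030)])
    view.tap_map(38.902, -77.038)
    view.rotate(45)
    print(view.current_pano.pano_id, view.heading_deg)
```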

This solution allows users to better understand their surroundings, or a remote location, when browsing location-based content or navigating in 2D maps and 3D panoramic images. Discovering content and identifying the precise place to attach content become easier and more intuitive. Switching from one view to the other is also intuitive, as both views show the same location and orientation.

As an example, when the user stands at a current location (e.g., the Farragut West METRO Station), the user can operate a user interface of a user device (e.g., user equipment (UE) 101) to show a plan view of a map of the surrounding area (or of another area, such as a final destination of the user) in a main view portion of the user interface, while a perspective view of the surrounding area is shown in a preview portion of the user interface in order to give the user an idea of the 3D panoramic view of the surrounding area. The perspective view can be generated by using the camera of the user device to capture images of the surrounding area in real-time (e.g., in augmented reality), by using pre-stored images (e.g., previously captured images or virtual reality images), or a combination of real-time images and pre-stored images (e.g., mixed reality). The portion of the user interface showing the plan view of the map can include an orientation representation (e.g., a periscope icon with an outwardly extending cone of vision) that indicates the field of view of the perspective view. The field of vision can be adjusted by the user by adjusting the orientation of the user device (e.g., utilizing a compass or other device to determine the change in orientation), by manually manipulating the orientation representation of the field of view on the plan view of the map on the user interface, and/or by manually manipulating the view in the perspective view on the user interface. The user can switch the plan view of the map from the main view portion of the user interface to the preview portion, and thus also switch the perspective view from the preview portion to the main view portion of the user interface. This dual window configuration allows a user to easily interpret the location and orientation of the perspective view, and allows a user to quickly and intuitively navigate to a POI or otherwise determine their location.
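The orientation representation on the plan view can be thought of as a cone anchored at the device location and opened around the compass heading. The following sketch computes such a cone with a flat-earth approximation; the field-of-view angle, radius, and function name are assumptions for illustration.

```python
import math

def cone_of_vision(lat, lon, heading_deg, fov_deg=60.0, radius_m=150.0):
    """Return the apex and the two far corners of the viewing cone as
    (lat, lon) tuples, using a small flat-earth approximation."""
    corners = []
    for edge in (heading_deg - fov_deg / 2.0, heading_deg + fov_deg / 2.0):
        bearing = math.radians(edge % 360.0)
        # bearing measured clockwise from north: north = cos, east = sin
        dlat = (radius_m * math.cos(bearing)) / 111_111.0
        dlon = (radius_m * math.sin(bearing)) / (111_111.0 * math.cos(math.radians(lat)))
        corners.append((lat + dlat, lon + dlon))
    return (lat, lon), corners[0], corners[1]

if __name__ == "__main__":
    apex, left, right = cone_of_vision(38.9018, -77.0391, heading_deg=90.0)
    print(apex, left, right)
```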

As noted above, the perspective view can be displayed using real-time images, pre-stored (prerecorded) images, or a combination thereof: the system 100 can retrieve a prerecorded still image, stitch it side by side with the live image, and then display the seamlessly stitched images to the user. To make the switch seamless, the system 100 correlates a prerecorded panoramic image that has the same tilt angle and directional heading as the live image, and displays the correlated prerecorded panoramic image on the screen. Even if two images were taken by the same device at the same location with the same tilt angle and the same directional heading, the coverage of the images can be different due to the height of the user or the settings (e.g., digital zooming, contrast, resolution, editing, clipping, etc.). If two images were taken by two devices at the same location with the same tilt angle and the same directional heading, the coverage of the images can still be different due to different specifications of the two devices. The devices can have different imaging specifications, such as LCD size, optical zoom, digital zoom, zoom wide, zoom telephoto, effective pixels, pixel density, image stabilization, aperture range, etc., which affect the quality and depth of the images taken by the two devices.

However, existing photo-matching technology allows near-100% matching between the live image and the prerecorded panoramic images. There are photo-matching applications (e.g., photo-match online search engines which compare images pixel by pixel) for choosing the best-matched panoramic still image for the live image. There are also photo-stitching applications which make the boundary between the live image and a prerecorded panoramic still image seamless. As the user continues touching the navigational arrow at the edge of the screen, more prerecorded panoramic still images are matched and stitched so that they roll out onto the screen as a panoramic view on the fly.
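By way of a rough sketch, choosing the best-matching prerecorded panorama for the live image's pose (location, tilt angle, directional heading) could be expressed as a weighted comparison; the weights and distance model below are purely illustrative and do not reflect the pixel-level photo matching described above.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class PanoFrame:
    pano_id: str
    lat: float
    lon: float
    tilt_deg: float
    heading_deg: float

def angle_diff(a: float, b: float) -> float:
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def best_match(live: PanoFrame, candidates: List[PanoFrame]) -> PanoFrame:
    def cost(c: PanoFrame) -> float:
        # approximate metre distance plus penalties for heading/tilt mismatch
        dist = math.hypot(c.lat - live.lat, c.lon - live.lon) * 111_111.0
        return dist + 0.5 * angle_diff(c.heading_deg, live.heading_deg) \
                    + 0.5 * angle_diff(c.tilt_deg, live.tilt_deg)
    return min(candidates, key=cost)

if __name__ == "__main__":
    live = PanoFrame("live", 38.9018, -77.0391, tilt_deg=5.0, heading_deg=92.0)
    stored = [PanoFrame("p1", 38.9017, -77.0390, 0.0, 90.0),
              PanoFrame("p2", 38.9030, -77.0380, 0.0, 270.0)]
    print(best_match(live, stored).pano_id)  # -> "p1"
```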

To navigate from the current location to a POI, the user indicates the POI to the system 100 as the destination. By way of example, when the system 100 receives a target location such as the International Monetary Fund (IMF) Building as the intended POI (e.g., received as text, or on a digital map on the screen of the UE 101, etc.), the system 100 retrieves location data (e.g., an address, GPS coordinates, etc.) of the IMF, or the location data of the device used to capture a prerecorded panoramic image of the IMF (e.g., if the POI is not as well-known as the IMF, such as a carousel in a park). The system 100 then maps a route from the current location (e.g., the METRO Station) to the designated POI, and presents the route on a digital map to the user in either the main view portion or the preview portion. While the user is walking along the route, the system 100 also presents a live image view of the surrounding location on the screen in the other of the preview portion or main view portion. Whenever the user wants to switch between the perspective view in the main view portion (with the plan view in the preview portion) and the plan view in the main view portion (with the perspective view in the preview portion), the user can freely do so using the user interface. Other points of interest may be located on the route, and a filter can be used to select which types of POIs are labeled with graphic representations and which are not.
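Such a type filter could be as simple as a category whitelist, as in this illustrative sketch (the categories and data are hypothetical).

```python
from dataclasses import dataclass
from typing import Iterable, List, Set

@dataclass
class Poi:
    name: str
    category: str  # e.g. "restaurant", "museum", "metro"

def pois_to_label(pois: Iterable[Poi], enabled_categories: Set[str]) -> List[Poi]:
    """Only POIs whose category is enabled receive a graphic representation."""
    return [p for p in pois if p.category in enabled_categories]

if __name__ == "__main__":
    route_pois = [Poi("Farragut West", "metro"), Poi("IMF Building", "office"),
                  Poi("Corner Bakery", "restaurant")]
    print([p.name for p in pois_to_label(route_pois, {"metro", "restaurant"})])
```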

Alternatively, the user can utilize the user interface to view a remote location. For example, if the user planned to visit a particular POI later in the day, then the user could locate the POI on the plan view of the map (e.g., by scrolling to the location of the POI, entering an address of the POI, searching for the POI using keywords or the name of the POI, etc.), for example, in the main view portion of the user interface. Then, the user can manipulate the orientation representation of the field of view to provide a desired vantage point. For example, if the user planned to travel down a certain road to get to the POI, then the user can manipulate the field of view to provide a vantage point along that road that the user will see while travelling down the road and arriving at the POI. With the field of view set to the desired orientation, then the user can see a preview of the perspective view of the POI in the preview portion of the user interface, and the user can switch the perspective view of the POI to the main view portion of the user interface in order to view an enlarged image of the POI. Thus, the user will be able to see what the POI looks like, thereby allowing the user to recognize the POI upon arrival at the POI later in the day. The perspective view of the POI can also include graphic representations or tags (e.g., bubbles, icons, images, text, etc.) that provide a link to content related to the POI (e.g., name, address, telephone number, weblink, etc.), which can be selected by the user in the user interface in order to obtain further content information regarding the POI.

In one embodiment, the system 100 displays on the screen of the UE 101 different portions of the prerecorded panoramic view depending upon the tilt angle and directional heading of the UE 101 as tilted and/or rotated by the user. In this embodiment, the user can change the prerecorded panoramic image in the prerecorded panoramic view, without moving/dragging a viewing tag on the screen of the UE 101.
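Mapping the device's tilt and heading to the displayed portion of a prerecorded panorama can be sketched as selecting a pixel window from an equirectangular image; the field-of-view values below are assumptions for illustration, and wrap-around at the image seam is ignored for brevity.

```python
def panorama_window(width_px: int, height_px: int, heading_deg: float,
                    tilt_deg: float, h_fov_deg: float = 60.0, v_fov_deg: float = 40.0):
    """Return (left, top, right, bottom) pixel bounds of the visible slice of
    an equirectangular panorama for the given heading and tilt."""
    cx = (heading_deg % 360.0) / 360.0 * width_px          # heading -> column
    cy = (90.0 - tilt_deg) / 180.0 * height_px             # tilt 0 = horizon row
    half_w = h_fov_deg / 360.0 * width_px / 2.0
    half_h = v_fov_deg / 180.0 * height_px / 2.0
    return (int(cx - half_w) % width_px, max(0, int(cy - half_h)),
            int(cx + half_w) % width_px, min(height_px, int(cy + half_h)))

if __name__ == "__main__":
    print(panorama_window(4096, 2048, heading_deg=90.0, tilt_deg=10.0))
```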

In another embodiment, the system 100 further utilizes augmented reality or augmented virtuality (e.g., using 3D models and 3D mapping information) to insert rich content information relevant to the POI (e.g., drawn from the Internet, user inputs, etc.) into the live image view in a real-time manner. Tags are displayed on a surface of the object or POI and virtually affixed thereto in the perspective view, and shown in a fixed 3D orientation on the surface of the object or POI. The content relevant to the POI can also be seen in the prerecorded panoramic view, and the content may already be embedded/tagged in the prerecorded panoramic view, or inserted in a real-time manner. The POIs can be pre-set by users, service providers (e.g., wireless, cellular, Internet, content, social network, etc.), and/or device manufacturers, and the relevant content can be embedded/tagged by any one or a combination of these entities as well.
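Affixing a tag to a surface so that it stays put as the view changes amounts to re-projecting a 3D anchor point on the object into screen coordinates every frame. The following sketch uses a simple pinhole camera model with no tilt, which is a simplification of what a full renderer would do; all numbers are illustrative.

```python
import math

def project_to_screen(point_world, camera_pos, yaw_deg, focal_px=800.0,
                      cx=640.0, cy=360.0):
    """Project a 3D world point (x east, y up, z north) onto the screen of a
    camera at camera_pos heading yaw_deg clockwise from north."""
    yaw = math.radians(yaw_deg)
    dx = point_world[0] - camera_pos[0]
    dy = point_world[1] - camera_pos[1]
    dz = point_world[2] - camera_pos[2]
    # world -> camera axes (right, up, forward)
    cam_x = dx * math.cos(yaw) - dz * math.sin(yaw)
    cam_y = dy
    cam_z = dx * math.sin(yaw) + dz * math.cos(yaw)
    if cam_z <= 0:
        return None  # anchor is behind the camera: omit the tag this frame
    return (cx + focal_px * cam_x / cam_z,
            cy - focal_px * cam_y / cam_z)

if __name__ == "__main__":
    facade_anchor = (10.0, 5.0, 40.0)   # tag point on a building face
    print(project_to_screen(facade_anchor, camera_pos=(0.0, 1.7, 0.0), yaw_deg=0.0))
```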

By way of example, the user selects the fourth floor of a department store as a POI, and tags content information of the POI retrieved from the department store website. The system 100 saves the POI and the tagged content, and presents the most up-to-date content information to the user in the live image view and/or the prerecorded panoramic view, automatically or on demand. The content information may include: (1) a floor plan of the POI, (2) the occupants/shops/facilities located in the POI (e.g., in thumbnail images, animation, audio alerts, etc.), (3) introduction and background content with respect to the occupants/shops/facilities, (4) marketing and sales content with respect to the occupants/shops/facilities, or any other data or information tied to the POI. It is also contemplated that content may be associated with multiple floors. The content information includes live media, stored media, metadata associated with media, text information, location information of other user devices, mapping data, geo-tagged data, or a combination thereof.
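The tagged content for such a POI might be organized along the lines of the following illustrative data structure (the field names and URL are hypothetical, not a defined schema).

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PoiContent:
    poi_id: str
    floor: str
    floor_plan_url: str = ""
    occupants: List[str] = field(default_factory=list)        # shops, facilities
    background: Dict[str, str] = field(default_factory=dict)  # intro text per occupant
    promotions: Dict[str, str] = field(default_factory=dict)  # marketing/sales content

fourth_floor = PoiContent(
    poi_id="dept-store-main",
    floor="4",
    floor_plan_url="https://example.com/floorplans/4.png",
    occupants=["Home & Kitchen", "Electronics"],
    promotions={"Electronics": "10% off headphones this week"},
)
print(fourth_floor.occupants)
```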

While the plan view of the map can show all of the graphic representations for the objects, which link to the rich content information thereof, in a given area, the graphic representations affixed to the objects in the perspective view are only shown for objects that are visible in the field of view of the perspective view in certain embodiments. Thus, graphic representations for objects that are hidden from view in the perspective view (e.g., for objects that are hidden behind a building, or hidden behind a tree, etc.) can be omitted from the perspective view in order to prevent cluttering of the perspective view of the user interface.
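One illustrative way to decide whether a graphic representation should be omitted is a line-of-sight test against the footprints of intervening objects; the 2D segment test below is a simplification of a full 3D occlusion check, with invented data.

```python
from typing import List, Tuple

Point = Tuple[float, float]
Segment = Tuple[Point, Point]

def _ccw(a: Point, b: Point, c: Point) -> bool:
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(s1: Segment, s2: Segment) -> bool:
    a, b = s1
    c, d = s2
    return _ccw(a, c, d) != _ccw(b, c, d) and _ccw(a, b, c) != _ccw(a, b, d)

def is_visible(camera: Point, anchor: Point, walls: List[Segment]) -> bool:
    """Draw the tag only if the line of sight crosses no other footprint wall."""
    sight = (camera, anchor)
    return not any(_segments_cross(sight, wall) for wall in walls)

if __name__ == "__main__":
    walls = [((5.0, -2.0), (5.0, 2.0))]                  # a facade between camera and POI
    print(is_visible((0.0, 0.0), (10.0, 0.0), walls))    # False: tag omitted
    print(is_visible((0.0, 0.0), (3.0, 0.0), walls))     # True: tag drawn
```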

As shown in FIG. 1, a user equipment (UE) 101 may retrieve content information (e.g., content and location information) and mapping information (e.g., maps, GPS data, prerecorded panoramic views, etc.) from a content mapping platform 103 via a communication network 105. The content and mapping information can be used by a mapping and user interface application 107 on the UE 101 (e.g., an augmented reality application, navigation application, or other location-based application) to render a live image view and/or a prerecorded panoramic view. In the example of FIG. 1, the content mapping platform 103 stores mapping information in the map database 109a and content information in the content catalog 109b. By way of example, mapping information includes digital maps, GPS coordinates, prerecorded panoramic views, geo-tagged data, points of interest data, or a combination thereof. By way of example, content information includes one or more identifiers, metadata, access addresses (e.g., a network address such as a Uniform Resource Locator (URL) or an Internet Protocol (IP) address, or a local address such as a file or storage location in a memory of the UE 101), descriptions, or the like associated with content. In one embodiment, content includes live media (e.g., streaming broadcasts), stored media (e.g., stored on a network or locally), metadata associated with media, text information, location information of other user devices, or a combination thereof. The content may be provided by the service platform 111, which includes one or more services 113a-113n (e.g., music service, mapping service, video service, social networking service, content broadcasting service, etc.), the one or more content providers 115a-115m (e.g., online content retailers, public databases, etc.), or other content sources available or accessible over the communication network 105.

Additionally or alternatively, in certain embodiments, a user map and content database 117 of the UE 101 may be utilized in conjunction with the application 107 to present content information, location information (e.g., mapping and navigation information), availability information, etc. to the user. The user may be presented with an augmented reality interface associated with the application 107 and/or the content mapping platform 103, allowing 3D objects or other representations of content and related information to be superimposed onto an image of a physical environment on the UE 101. In certain embodiments, the user interface may display a hybrid physical and virtual environment where 3D objects from the map database 109a are superimposed on top of a physical image.

By way of example, the UE 101 may execute the application 107 to receive content and/or mapping information from the content mapping platform 103 or other component of the network 105. As mentioned above, the UE 101 utilizes GPS satellites 119 to determine the location of the UE 101 to utilize the content mapping functions of the content mapping platform 103 and/or the application 107, and the map information stored in the map database 109a may be created from live camera views of real-world buildings and other sites. As such, content can be augmented into prerecorded panoramic views and/or live camera views of real world locations (e.g., based on location coordinates such as global positioning system (GPS) coordinates).

The application 107 and the content mapping platform 103 receive access information about content, determine the availability of the content based on the access information, and then present a prerecorded panoramic view or a live image view with augmented content (e.g., a live camera view of the IMF building with augmented content such as its origin, mission, and facilities information: height, number of floors, etc.). In certain embodiments, the content information may include 2D and 3D digital maps of objects, facilities, and structures in a physical environment (e.g., buildings).

By way of example, the communication network 105 of the system 100 includes one or more networks such as a data network (not shown), a wireless network (not shown), a telephony network (not shown), or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fiber-optic network. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (WiFi), satellite, mobile ad-hoc network (MANET), and the like.

The UE 101 is any type of mobile terminal, fixed terminal, or portable terminal including a mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, Personal Digital Assistants (PDAs), or any combination thereof. It is also contemplated that the UE 101 can support any type of interface to the user (such as “wearable” circuitry, etc.).

By way of example, the UE 101, and content mapping platform 103 communicate with each other and other components of the communication network 105 using well known, new or still developing protocols. In this context, a protocol includes a set of rules defining how the network nodes within the communication network 105 interact with each other based on information sent over the communication links. The protocols are effective at different layers of operation within each node, from generating and receiving physical signals of various types, to selecting a link for transferring those signals, to the format of information indicated by those signals, to identifying which software application executing on a computer system sends or receives the information. The conceptually different layers of protocols for exchanging information over a network are described in the Open Systems Interconnection (OSI) Reference Model.

Communications between the network nodes are typically effected by exchanging discrete packets of data. Each packet typically comprises (1) header information associated with a particular protocol, and (2) payload information that follows the header information and contains information that may be processed independently of that particular protocol. In some protocols, the packet includes (3) trailer information following the payload and indicating the end of the payload information. The header includes information such as the source of the packet, its destination, the length of the payload, and other properties used by the protocol. Often, the data in the payload for the particular protocol includes a header and payload for a different protocol associated with a different, higher layer of the OSI Reference Model. The header for a particular protocol typically indicates a type for the next protocol contained in its payload. The higher layer protocol is said to be encapsulated in the lower layer protocol. The headers included in a packet traversing multiple heterogeneous networks, such as the Internet, typically include a physical (layer 1) header, a data-link (layer 2) header, an internetwork (layer 3) header and a transport (layer 4) header, and various application headers (layer 5, layer 6 and layer 7) as defined by the OSI Reference Model.
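As a toy illustration of this encapsulation (invented field layout, not any real protocol), each lower layer simply prepends its own header to the higher layer's entire packet:

```python
def encapsulate(payload: bytes, protocol_id: int) -> bytes:
    """Prepend a 3-byte header: 1 byte next-protocol id, 2 bytes payload length."""
    header = protocol_id.to_bytes(1, "big") + len(payload).to_bytes(2, "big")
    return header + payload

application_msg = b"GET /map?tile=12,34"
transport_pkt = encapsulate(application_msg, protocol_id=7)   # "layer 4" wraps layer 5+
network_pkt = encapsulate(transport_pkt, protocol_id=3)       # "layer 3" wraps layer 4
link_frame = encapsulate(network_pkt, protocol_id=2)          # "layer 2" wraps layer 3
print(len(link_frame))
```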

In one embodiment, the application 107 and the content mapping platform 103 may interact according to a client-server model, so that the application 107 of the UE 101 requests mapping and/or content data from the content mapping platform 103 on demand. According to the client-server model, a client process sends a message including a request to a server process, and the server process responds by providing a service (e.g., providing map information). The server process may also return a message with a response to the client process. Often the client process and server process execute on different computer devices, called hosts, and communicate via a network using one or more protocols for network communications. The term “server” is conventionally used to refer to the process that provides the service, or the host computer on which the process operates. Similarly, the term “client” is conventionally used to refer to the process that makes the request, or the host computer on which the process operates. As used herein, the terms “client” and “server” refer to the processes, rather than the host computers, unless otherwise clear from the context. In addition, the process performed by a server can be broken up to run as multiple processes on multiple hosts (sometimes called tiers) for reasons that include reliability, scalability, and redundancy, among others.
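In practice, the client's on-demand request for mapping or content data might look like an ordinary HTTP exchange; the endpoint, parameters, and response shape below are hypothetical.

```python
import json
import urllib.request

def request_map_data(base_url: str, lat: float, lon: float, radius_m: int = 200):
    """Client-side request: ask the (hypothetical) platform for map/content data
    around a location and return the decoded JSON response."""
    url = f"{base_url}/mapdata?lat={lat}&lon={lon}&radius={radius_m}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Example (would require a real service at this hypothetical address):
# pois = request_map_data("https://content-mapping.example.com", 38.9018, -77.0391)
```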

FIG. 2 is a diagram of the components of a mapping and user interface application, according to one embodiment. By way of example, the mapping and user interface application 107 includes one or more components for correlating and navigating between a live camera image and a prerecorded panoramic image. It is contemplated that the functions of these components may be combined in one or more components or performed by other components of equivalent functionality. In this embodiment, the mapping and user interface application 107 includes at least a control logic 201 which executes at least one algorithm for executing the functions of the mapping and user interface application 107. For example, the control logic 201 interacts with an image module 203 to provide to a user a live camera view of the surroundings of the current location of the UE 101 (e.g., the Farragut West METRO Station). The image module 203 may include a camera, a video camera, a combination thereof, etc. In one embodiment, visual media is captured in the form of an image or a series of images.

Next, the control logic 201 interacts with a location module 205 to retrieve location data of the current location of the UE 101. In one embodiment, the location data may include addresses, geographic coordinates (e.g., GPS coordinates), or other indicators (e.g., longitude and latitude information) that can be associated with the current location. For example, the location data may be manually entered by the user (e.g., entering an address or title, clicking on a digital map, etc.) or extracted or derived from any geo-tagged data. It is contemplated that the location data or geo-tagged data could also be created by the location module 205 by deriving the location from associated metadata such as media titles, tags, and comments. More specifically, the location module 205 can parse the metadata for any terms that indicate association with a particular location.
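A sketch of that parsing step (with a stand-in gazetteer of place names) might look like the following; a real system would use a much richer geocoding service.

```python
import re
from typing import Optional, Tuple

# Hypothetical stand-in gazetteer mapping place names to coordinates.
GAZETTEER = {"farragut west": (38.9018, -77.0391),
             "imf building": (38.8990, -77.0444)}

def derive_location(metadata: dict) -> Optional[Tuple[float, float]]:
    """Prefer explicit coordinates; otherwise scan titles/tags/comments for
    known place names and return the matching coordinates."""
    if "lat" in metadata and "lon" in metadata:
        return metadata["lat"], metadata["lon"]
    text = " ".join(str(metadata.get(k, "")) for k in ("title", "tags", "comments")).lower()
    for name, coords in GAZETTEER.items():
        if re.search(re.escape(name), text):
            return coords
    return None

if __name__ == "__main__":
    print(derive_location({"title": "Lunch near Farragut West", "tags": "metro"}))
```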



Industry Class: Computer graphics processing, operator interface processing, and selective visual display systems

Patent Info
Application #: US 20110279446 A1
Publish Date: 11/17/2011
Document #: 12780914
File Date: 05/16/2010
USPTO Class: 345/419
International Class: G06T 15/00
Drawings: 11

