Maemo

ABSTRACT

Maemo is a software platform developed by Nokia for smartphones and Internet Tablets. It is based on the Debian Linux distribution. The platform comprises the Maemo operating system and the Maemo SDK.

Maemo is mostly open source and has been developed by Maemo Devices within Nokia in collaboration with many open source projects such as the Linux kernel, Debian, and GNOME. Built on Debian GNU/Linux, it draws much of its GUI, frameworks, and libraries from the GNOME project, and it uses the Matchbox window manager and the GTK-based Hildon framework as its GUI and application framework.

The UI in Maemo 4 is similar to many handheld interfaces and features a “Home” screen that acts as the central point from which all applications and settings are accessed. The Home screen is divided into an area for launching applications, a menu bar, and a large customisable area that can display applets such as an RSS reader, an Internet radio player, and a Google search box.

The Maemo 5 UI is slightly different: the menu bar and info area are consolidated at the top of the display, and the four desktops can be customised with shortcuts and widgets.

If you are interested in this seminar topic, mail us to get the full report* of the seminar topic.

Mail ID: - contact4seminars@gmail.com 

* conditions apply



Utilization of Photosynth Point Clouds for 3D Object Reconstruction

ABSTRACT

There is a growing demand for modeling amorphous shapes such as statues, figurines, or monuments for computer visualization and documentation. Photographs are the most convenient data source to record on site: the equipment is easy to handle, and transport poses none of the problems that can arise when a laser scanning device has to be employed.

A photograph is a container of high information density: it carries radiometric information and can provide range values as well. In computer vision terms, structure from motion is the process of finding correspondences between images; features must be tracked from one image to the next. The 3D positions of the feature points and the camera movement are the result of the registration process. In the cultural heritage community the Epoch ARC 3D Web service is in common use; linked with the open source tool MeshLab, it provides an automated workflow including object reconstruction, mesh processing, and textured rendering.
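The triangulation at the heart of this registration step can be sketched in a few lines: given two camera centres and the viewing rays toward the same tracked feature, the 3D point is recovered as the midpoint of the shortest segment between the two rays. This is a minimal illustration, not the algorithm of any particular SfM package; all names are invented for the example.

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def along(p, t, d):
    """Point p + t * d on a ray."""
    return tuple(p[i] + t * d[i] for i in range(3))

def triangulate(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays.

    c1, c2: camera centres; d1, d2: ray directions toward the tracked
    feature. Solves the two normal equations of the closest-approach
    problem directly.
    """
    w = sub(c2, c1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(w, d1), dot(w, d2)
    denom = a * c - b * b            # zero only for parallel rays
    t = (e * c - f * b) / denom      # parameter along ray 1
    s = (e * b - f * a) / denom      # parameter along ray 2
    p1, p2 = along(c1, t, d1), along(c2, s, d2)
    return tuple((p1[i] + p2[i]) / 2 for i in range(3))
```

With noisy feature tracks the two rays rarely intersect exactly, which is why the midpoint (rather than an intersection) is returned.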

Recent developments from Microsoft introduce photo browsing: the Web community can view city imagery in Virtual Earth and contribute its own objects using Photosynth. Photosynth is designed as an image browser for objects documented by internet imagery. The user navigates through a bundle of images representing the object, and the smooth transition from one photo to the next gives the impression of a 3D model. The photo positions are known, as is a point cloud that emerges from the registration process.

A network protocol analyzer reveals the location of the binary point cloud files, and a Python script converts them into common CAD formats such as PLY or DXF. Producing a low-level polygon mesh for real-time visualization requires selecting the points of interest and processing the mesh; the latter task is solved with MeshLab. Note that the focus in this context is on that low-level polygon model rather than on a highly accurate, high-density model. Furthermore, Blender, with its convenient UV texture mapping and its X3D export script, is used to construct a complete 3D scene. This contribution introduces a 3D object reconstruction method from a series of photographs, processed with Web tools and open source software.
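The PLY half of such a conversion script can be sketched as below. Decoding Photosynth's proprietary binary layout is outside the scope of this example, so the function simply assumes the points have already been extracted as (x, y, z, r, g, b) tuples:

```python
def write_ply(path, points):
    """Write (x, y, z, r, g, b) tuples as an ASCII PLY point cloud.

    The header follows the standard PLY ASCII layout: three float
    coordinates plus an 8-bit RGB colour per vertex, no faces.
    """
    points = list(points)
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```

A file written this way opens directly in MeshLab for the point selection and meshing steps described above.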


Rover technology

ABSTRACT

Location-aware computing involves the automatic tailoring of information and services based on the current location of the user. We have designed and implemented Rover, a system that enables location-based services as well as the traditional time-aware, user-aware, and device-aware services. To achieve system scalability to very large client sets, Rover servers are implemented in an “action-based” concurrent software architecture that enables fine-grained, application-specific scheduling of tasks. We have demonstrated feasibility through implementations for both outdoor and indoor environments on multiple platforms.

The technology components needed to realize location-aware computing are present today, powered by the increasing capabilities of mobile personal computing devices and the growing deployment of wireless connectivity (IEEE 802.11 wireless LANs [7], Bluetooth [1], infra-red [2], cellular services, etc.). Rover's services are location-aware in addition to the more traditional notions of time-aware, user-aware, and device-aware. Rover has a location service that can track the location of every user, either through automated location determination technology (for example, using signal strength or time difference) or through the user manually entering the current location (for example, by clicking on a map).
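As an illustration of the signal-strength approach to location determination (the constants and the flat 2-D geometry are assumptions for the sketch, not Rover's actual implementation), a log-distance path-loss model converts received power into a range estimate, and three such ranges from known anchors are trilaterated into a position:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimated distance in metres.

    tx_power_dbm is the assumed RSSI at 1 m; both constants are
    illustrative and would be calibrated per environment in practice.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D position from three anchor points and range estimates.

    Subtracting the circle equations pairwise yields a 2x2 linear
    system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1        # zero if the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With noisy RSSI readings a real deployment would use more than three anchors and a least-squares fit, but the three-circle case shows the geometry.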


© 2008 – 2013 seminars4you. All rights reserved.