Skinput

ABSTRACT

Skinput is an input technology that uses bio-acoustic sensing to localize finger taps on the skin. When augmented with a pico projector, the device can provide a direct-manipulation, graphical user interface on the body. The technology was developed by Chris Harrison, Desney Tan, and Dan Morris at Microsoft Research's Computational User Experience Group.

Skinput represents one way to decouple input from electronic devices, with the aim of allowing devices to become smaller without simultaneously shrinking the surface area on which input can be performed. While other systems, like SixthSense, have attempted this with computer vision, Skinput employs acoustics, which take advantage of the human body's natural sound-conductive properties. This allows the body to be annexed as an input surface without the need for the skin to be invasively instrumented with sensors, tracking markers, or other items.
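
To make the sensing idea concrete, here is a minimal, purely illustrative sketch of how tap locations might be classified from acoustic features. It is not the published Skinput pipeline; the feature choices, the scikit-learn classifier, and the assumption of pre-segmented, multi-channel tap windows are all simplifications for illustration.

    import numpy as np
    from sklearn.svm import SVC

    def tap_features(tap):
        # tap: (channels x samples) array from a hypothetical acoustic armband.
        peak = np.abs(tap).max(axis=1)       # per-channel peak amplitude
        energy = (tap ** 2).sum(axis=1)      # per-channel energy
        return np.concatenate([peak, energy])

    def train_tap_classifier(taps, locations):
        # taps: list of tap windows; locations: which skin location was tapped.
        X = np.array([tap_features(t) for t in taps])
        clf = SVC(kernel="rbf")              # any off-the-shelf classifier would do
        clf.fit(X, locations)
        return clf

    def locate_tap(clf, tap):
        return clf.predict(tap_features(tap).reshape(1, -1))[0]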


The Artificial Brain

ABSTRACT


We have always been interested in the notion of a consciousness fact: for us, the fact that an individual endowed with a brain can think of something related to his position in the world, right here and right now. It is not a matter of the continuity, the performance, or the profoundness of the thought, but of thinking of something in a knowable manner, something that can be specified from a linguistic or mathematical angle, without its being an automatic, predefined response to a given situation.
By analogy with this notion, long investigated by philosophers, psychologists, and neurobiologists, we pose the question of artificial consciousness: how can the fact of "thinking of something" be transposed into the computable domain, so that an artificial system founded on computer processes could generate consciousness facts in an observable manner? Such a system would have intentions, emotions, and ideas about things and events related to itself. It would have to have a body that it could direct and which would constrain it. It would also have to have a history, and intentions to act and, above all, to think. It would have to have knowledge, notably knowledge of language. It would have to have emotions, intentions, and finally a certain consciousness of itself.
We can call this system, by sheer semantic analogy, an artificial brain. However, we will see that its architecture is quite different from that of living brains. The concern is to transpose the effects, the movements, certainly not to reproduce components such as neurons and glial cells. We should keep in mind principally one characteristic of the process of thinking as it unfolds in a brain: a complex neural, biochemical, and electrical activation movement is taking place. This movement is coupled to a similar movement, of a different mode, in the nervous system deployed throughout the body. Through selective emergence, when a particular configuration is reached, this complex movement generates what we call a thought about something. This thought rapidly leads to motor or language activity and then gives way to the following thought, which may be similar or different. This is the very complex phenomenon that has to be transposed into the computable domain.
Hence, we should approach the sudden appearance of thoughts in a brain at the level of the complex dynamics of a system that builds and reconfigures recurrent, temporized flows. We can transpose this into architectures of computer processes carrying symbolic meaning, and we should make them geometrically self-controlled. Two reasonable hypotheses are made for this transposition:
• an analogy between the geometrical dynamics of the real brain and of the artificial brain: for the former, the flows are complex, almost continuous images; for the latter, they are dynamical graphs whose deformations are evaluated topologically (a toy sketch of this idea follows the list);
• a reduction of the combinatorial complexity of the real brain in the computable domain by working at a symbolic, pre-language level. The basic elements are completely different; they are not of the same scale.
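
As a toy illustration of the first hypothesis only, and emphatically not the architecture discussed here, the fragment below represents a reconfiguring graph of processes and scores how much it deformed between two instants with a crude structural distance; the node names and the networkx dependency are assumptions made for the example.

    import networkx as nx

    def structural_distance(g_prev, g_next):
        # Crude stand-in for a topological evaluation of deformation:
        # the size of the symmetric difference of the two edge sets.
        return len(set(g_prev.edges()) ^ set(g_next.edges()))

    # One reconfiguration step of a toy process graph.
    g0 = nx.Graph([("perception", "memory"), ("memory", "intention")])
    g1 = g0.copy()
    g1.add_edge("intention", "language")      # a new activation flow emerges
    g1.remove_edge("perception", "memory")    # another flow fades away

    print(structural_distance(g0, g1))        # -> 2
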
However, once these hypotheses are made, one should not simply set about developing an architecture that controls itself through the aspects of its changing geometry. One first needs to ask the proper question about the generation of consciousness facts. A philosopher, M. Heidegger, asked that question some decades ago: what brings us to think about this thing, right here and right now? The answer, quite elaborate, leads to a choice of system architecture that takes us away from reactive or deductive systems. The system will generate its consciousness facts intentionally, intention as P. Ricoeur understood it. There are no consciousness facts without the intention to think. This settles the question, considered formidable, of the freedom to think. One thinks of anything according to one's memory and one's intuition of the moment, but only if it is expressible as a thought by the system producing thoughts. Some might see something infinite in this process; we do not. A finite set of components whose movements occur in a finite space can only be in a finite number of states. Moreover, since the permanence of the physical real apprehensible by the senses is very strong, man's preoccupation with thinking remains, across his civilizations, quite limited. Let us point out that artificial systems that think artificially will be able to communicate directly at the level of the forms of ideas, without a language mediator, and hence will be co-active as well as numerous in space.
For various reasons, many people think that the investigation of artificial consciousness should not be pursued at all. I feel differently: discoveries have been at the very root of our existence, from fire to the mighty F-16. The mind is a work of art moulded in mystery, and any effort to unlock its doors should be encouraged because, I am sure, its discovery will only help us respect the great architect more.






A Search Engine for 3D Models

ABSTRACT


As the number of 3D models available on the Web grows, there is an increasing need for a search engine to help people find them. Unfortunately, traditional text-based search techniques are not always effective for 3D data. In this paper, we investigate new shape-based search methods.
The key challenges are to develop query methods simple enough for novice users and matching algorithms robust enough to work for arbitrary polygonal models. We present a web-based search engine system that supports queries based on 3D sketches, 2D sketches, 3D models, and/or text keywords. For the shape-based queries, we have developed a new matching algorithm that uses spherical harmonics to compute discriminating similarity measures without requiring repair of model degeneracies or alignment of orientations. It provides 46–245% better performance than related shape matching methods during precision-recall experiments, and it is fast enough to return query results from a repository of 20,000 models in under a second. The net result is a growing interactive index of 3D models available on the Web (i.e., a Google for 3D models).
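
To give a flavour of the rotation-invariance trick behind spherical-harmonic shape descriptors, here is a heavily simplified sketch. It is not the paper's implementation (which voxelizes the model and decomposes it over concentric spheres); it only shows that the per-degree harmonic energies of a single spherical function form a signature that needs no orientation alignment. The grid sizes, maximum degree, and scipy dependency are assumptions for illustration.

    import numpy as np
    from scipy.special import sph_harm

    def harmonic_signature(f, max_degree=8, n_theta=64, n_phi=32):
        # Project f(theta, phi) onto spherical harmonics and keep, per degree l,
        # the energy sum_m |c_lm|^2 -- a quantity unchanged by rotation.
        theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)   # azimuth
        phi = np.linspace(0, np.pi, n_phi)                           # polar angle
        T, P = np.meshgrid(theta, phi)
        vals = f(T, P)
        dA = np.sin(P) * (2 * np.pi / n_theta) * (np.pi / n_phi)     # area element
        sig = []
        for l in range(max_degree + 1):
            energy = 0.0
            for m in range(-l, l + 1):
                c_lm = np.sum(vals * np.conj(sph_harm(m, l, T, P)) * dA)
                energy += abs(c_lm) ** 2
            sig.append(energy)
        return np.sqrt(np.array(sig))

    def shape_distance(sig_a, sig_b):
        # Two shapes are compared by the L2 distance between their signatures.
        return np.linalg.norm(sig_a - sig_b)

Because each per-degree energy is unchanged when the underlying function is rotated, two descriptors can be compared directly, which is what removes the need to align model orientations before matching.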






Neural Networks

ABSTRACT


With the dawn of the genome era, computational methods for the automatic analysis of biological data have become increasingly important. The explosion in the production rate of expression-level data has highlighted the need for automated techniques that help scientists analyze, understand, and cluster (sequence) the enormous amount of data being produced. Examples of such problems are analyzing gene expression level data produced by microarray technology on the genomic scale, sequencing genes on the genomic scale, sequencing proteins and amino acids, etc. Researchers have recognised Artificial Neural Networks as a promising technique that can be applied to several problems in genome informatics and molecular sequence analysis. This seminar explains how Neural Networks have been widely employed in genome informatics research.
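
As a generic illustration only (not an example taken from the seminar report), the sketch below trains a small multi-layer perceptron on synthetic "microarray" expression profiles; the data, class labels, and network size are invented for the example.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_genes = 200, 50
    X = rng.normal(size=(n_samples, n_genes))          # expression levels
    y = (X[:, :5].sum(axis=1) > 0).astype(int)         # labels driven by 5 "marker genes"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    print("held-out accuracy:", net.score(X_te, y_te))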






Reconstruction of Recorded Sound

ABSTRACT


For the bulk of recorded sound history, audio information was stored in mechanical media, such as phonograph records or wax cylinders, via undulating surface incisions (grooves). The grooves' shape and position can be reconstructed without mechanical contact by using precision optical metrology tools. The surface map thus obtained can be digitally processed further to remove noise artifacts due to damage and wear, and to convert the groove positional information into audio format. The viability of this approach was recently demonstrated on a 78 rpm shellac disc using two-dimensional image capture and analysis methods. The present work expands on these results. A three-dimensional reconstruction of mechanically recorded sound is reported. The surface of the source material, a wax cylinder, was scanned using confocal microscopy techniques, resulting in a faithful playback of the recorded information. The approach holds promise for careful reconstruction of valuable historical recordings using full surface information to improve the sound fidelity, as well as for automated mass digitization. Fast processing is required for the latter application. Methods to accelerate the scan rates, thereby making these techniques practical for use in working archives, are reported.
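
The following is a deliberately simplified sketch of the final conversion step only. It assumes the optical scan has already been reduced to a one-dimensional displacement profile sampled uniformly along the groove at a known groove speed; real reconstructions also involve groove tracking, noise removal, and equalization, none of which is shown.

    import numpy as np
    from scipy.io import wavfile

    def groove_to_audio(displacement_um, groove_speed_mm_s, sample_spacing_um,
                        out_rate=44100, out_path="reconstruction.wav"):
        # Time axis implied by the scan: spacing along the groove / groove speed.
        dt = (sample_spacing_um / 1000.0) / groove_speed_mm_s
        t_scan = np.arange(len(displacement_um)) * dt
        # Resample the profile onto the audio clock.
        t_audio = np.arange(0, t_scan[-1], 1.0 / out_rate)
        signal = np.interp(t_audio, t_scan, displacement_um)
        # For a vertical-cut (hill-and-dale) cylinder, depth modulation carries
        # the signal; remove the mean and normalize to 16-bit range.
        signal -= signal.mean()
        signal = np.int16(32767 * signal / np.abs(signal).max())
        wavfile.write(out_path, out_rate, signal)
        return signal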






Face Recognition Using Neural Networks

ABSTRACT


Faces represent complex, multidimensional, meaningful visual stimuli, and developing a computational model for face recognition is difficult [43]. We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample. The convolutional neural network provides partial invariance to translation, rotation, scale, and deformation.
The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loève transform in place of the self-organizing map, and a multi-layer perceptron in place of the convolutional network. The Karhunen-Loève transform performs almost as well (5.3% error versus 3.8%). The multi-layer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach [43] on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer.
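
To illustrate the front end of such a hybrid only, here is a rough sketch of local image sampling followed by a small self-organizing map that quantizes each window to a 2-D map coordinate; the convolutional classifier that follows in the described system is omitted, and the window size, map size, and learning schedule are assumptions made for the example.

    import numpy as np

    def sample_windows(image, win=5, step=4):
        # Slide a win x win window over a grayscale image, return flattened patches.
        h, w = image.shape
        return np.array([image[y:y+win, x:x+win].ravel()
                         for y in range(0, h - win + 1, step)
                         for x in range(0, w - win + 1, step)])

    def train_som(patches, map_shape=(5, 5), epochs=10, lr0=0.5, sigma0=2.0):
        # Minimal 2-D self-organizing map: each node holds a prototype patch.
        rng = np.random.default_rng(0)
        nodes = rng.normal(size=(map_shape[0] * map_shape[1], patches.shape[1]))
        grid = np.array([(i, j) for i in range(map_shape[0]) for j in range(map_shape[1])])
        n_steps, step = epochs * len(patches), 0
        for _ in range(epochs):
            for p in rng.permutation(patches):
                frac = step / n_steps
                lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.1
                best = np.argmin(((nodes - p) ** 2).sum(axis=1))        # winning node
                d2 = ((grid - grid[best]) ** 2).sum(axis=1)             # distance on the map
                h = np.exp(-d2 / (2 * sigma ** 2))[:, None]             # neighbourhood weights
                nodes += lr * h * (p - nodes)
                step += 1
        return nodes, grid

    def quantize(patches, nodes, grid):
        # Map each patch to the 2-D coordinates of its winning SOM node.
        winners = np.argmin(((patches[:, None, :] - nodes[None, :, :]) ** 2).sum(-1), axis=1)
        return grid[winners]

The point of the quantization step is that nearby patches land on nearby map coordinates, giving the downstream classifier a compact, locally smooth representation of each image region.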






Simple SAN Storage Resource Management in Microsoft® Windows Server™ 2003 R2

ABSTRACT


Can Storage Area Networks (SANs) be simple to install and use for the small and medium-sized business? Is it possible to provide basic storage resource management functions for administrators to use without having to purchase an expensive management application?
With the new storage management functions included with Microsoft® Windows Server™ 2003 R2 and the work done by various storage vendors, the goal of achieving a "Simple SAN" has been met, including the use of Fibre Channel and iSCSI storage technologies. This paper describes the goals and accomplishments of the Simple SAN program, including actual deployment steps taken by Demartek to install and use a simple SAN. It also describes the use of basic storage resource management functions provided by Windows Server™ 2003 R2, including quota management and storage resource utilization.

This paper is divided into the following sections:
• Management Perspective on Fibre Channel and iSCSI SANs
• Simple SAN
• Installation on one server
• Using Storage Manager for SANs across multiple servers
• Storage Resource Management
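
As a generic illustration of the quota idea only (this is not the Windows Server™ 2003 R2 File Server Resource Manager tooling or API), the sketch below totals a folder's size and reports soft and hard threshold breaches; the share path and limits are hypothetical.

    import os

    def folder_usage_bytes(path):
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass                      # file vanished or is inaccessible
        return total

    def check_quota(path, hard_limit_bytes, soft_pct=0.85):
        used = folder_usage_bytes(path)
        if used >= hard_limit_bytes:
            return "hard limit exceeded", used
        if used >= soft_pct * hard_limit_bytes:
            return "soft threshold reached", used
        return "within quota", used

    # Example: warn at 85% of a hypothetical 500 MB share quota.
    status, used = check_quota(r"D:\shares\projects", 500 * 1024 * 1024)
    print(status, used)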





