Performance Evaluation of Efficient and Reliable Routing Protocols for Fixed-Power Sensor Networks

ABSTRACT

Fixed-power wireless sensor networks are cost-effective and prevalent, but they face many problems, such as RF interference, node failure caused by environmental noise, and energy constraints. A routing protocol for fixed-power wireless sensor networks must overcome these problems while achieving reliability, energy efficiency, and scalability in message delivery. In this paper, we propose an efficient and reliable routing protocol (EAR) that achieves reliable and scalable performance with minimal compromise of energy efficiency. The routing design of EAR is based on four parameters: expected path length and a weighted combination of distance traversed, energy levels, and link-transmission success history, used to dynamically determine and maintain the best routes. We evaluate the performance of efficient and reliable routing protocols for such networks.
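To make the idea concrete, here is a minimal sketch of such a weighted route metric. The weights, normalization, and field names are our own illustrative assumptions, not the formula from the EAR paper:

    # Illustrative sketch only: the weights and normalization are assumed
    # for demonstration and are not EAR's actual cost function.

    def route_cost(expected_path_len, distance, energy_level, link_success,
                   w_dist=0.4, w_energy=0.3, w_link=0.3):
        """Lower cost = better next hop.

        expected_path_len: estimated hops remaining to the sink
        distance:          normalized progress toward the sink (0..1)
        energy_level:      candidate node's remaining energy (0..1)
        link_success:      historical transmission success ratio (0..1)
        """
        # Penalize long paths; reward progress, residual energy, and
        # historically reliable links.
        penalty = (w_dist * (1.0 - distance)
                   + w_energy * (1.0 - energy_level)
                   + w_link * (1.0 - link_success))
        return expected_path_len * (1.0 + penalty)

    def best_next_hop(candidates):
        """Pick the neighbor with the lowest combined cost."""
        return min(candidates, key=lambda c: route_cost(**c))

    # Example: choose between two candidate neighbors.
    neighbors = [
        dict(expected_path_len=4, distance=0.6, energy_level=0.8, link_success=0.9),
        dict(expected_path_len=3, distance=0.5, energy_level=0.3, link_success=0.6),
    ]
    print(best_next_hop(neighbors))

A node would recompute these costs as feedback arrives, so the "best" route shifts as links degrade or nodes drain their batteries.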


The Artificial Brain

ABSTRACT


We have always been interested in the notion of a consciousness fact, by which we mean the fact that an individual endowed with a brain can think of something related to his position in the world, right here and right now. It is not about the continuity, performance, or profundity of the thought; it is about thinking of something in a knowable manner, something that can be specified from a linguistic or mathematical angle, without its being an automatic, predefined response to a given situation.
By analogy with this notion, long investigated by philosophers, psychologists, and neurobiologists, we pose the question of artificial consciousness: how can one transpose the fact of "thinking of something" into the computable domain, so that an artificial system, founded on computational processes, would be able to generate consciousness facts in an observable manner? Such a system would have intentions, emotions, and ideas about things and events related to itself. It would have to have a body that it could direct and that would constrain it. It would also have to have a history, and intentions to act and, above all, to think. It would have to have knowledge, notably knowledge of language. It would have to have emotions, intentions, and finally a certain consciousness of itself.
We can name this system, by sheer semantic analogy, an artificial brain. However, we will see that its architecture is quite different from that of living brains. The concern is transposing the effects and the movements, certainly not reproducing components such as neurons and glial cells. We should keep in mind principally one characteristic of the process of thinking as it unfolds in a brain: there is a complex neural, biochemical, and electrical activation movement happening. This movement is coupled to a similar movement of a different mode in the nervous system deployed throughout the body. This complex movement generates, by selective emergence and by reaching a particular configuration, what we call a thought about something. This thought rapidly leads to motor or language activity and then gives way to the following thought, which may be similar or different. This is the very complex phenomenon that has to be transposed into the computable domain.
Hence, we should approach the sudden appearance of thoughts in brains at the level of the complex dynamics of a system that builds and reconfigures recurrent, temporized flows. We can transpose this into architectures of computational processes carrying symbolic meaning, and we should make them geometrically self-controlled. Two reasonable hypotheses are made for this transposition (a small sketch follows the list):
• an analogy between the geometrical dynamics of the real brain and those of the artificial brain: in the former, the flows are complex, almost continuous images; in the latter, they are dynamical graphs whose deformations are evaluated topologically.
• a reduction of the real brain's combinatorial complexity in the computable domain by working at a symbolic, pre-language level. The basic elements are completely different; they are not of the same scale.
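As a purely illustrative sketch of the first hypothesis (the graph encoding and the deformation measure below are our own assumptions, not the architecture described in the report), the state of such a system can be held as a weighted graph of symbolic nodes, with the "deformation" between successive states measured over the edge weights:

    # Minimal sketch, assuming a weighted-graph state representation.
    # The deformation measure is an illustrative choice, not the
    # report's actual topological evaluation.

    def deformation(graph_a, graph_b):
        """Crude distance between two graph states.

        Each graph maps (node, node) edges to activation weights; the
        deformation is the total change in edge weights, with edges that
        appear or disappear counted as full-weight changes.
        """
        edges = set(graph_a) | set(graph_b)
        return sum(abs(graph_a.get(e, 0.0) - graph_b.get(e, 0.0)) for e in edges)

    # Two successive states of a tiny symbolic graph.
    state_t0 = {("cat", "animal"): 0.9, ("cat", "milk"): 0.2}
    state_t1 = {("cat", "animal"): 0.7, ("cat", "purr"): 0.5}
    print(deformation(state_t0, state_t1))  # 0.2 + 0.2 + 0.5 = 0.9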
However, once these hypotheses are made, one should not immediately start developing an architecture that operates its own control from the aspects of its changing geometry. One first needs to ask the proper question about the generation of consciousness facts. A philosopher of a few decades ago, M. Heidegger, asked the proper question: what brings us to think about this thing, right here and right now? The answer, quite elaborate, leads to an architectural choice that takes us away from reactive or deductive systems. The system will generate its consciousness facts intentionally, intention as P. Ricoeur understood it. There are no consciousness facts without the intention to think. This settles the question, often considered formidable, of the freedom to think. One thinks of everything according to one's memory and one's intuition of the moment, but only if it is expressible as a thought by the system producing thoughts. Some might see something infinite in this process; that is not our view. A finite set of components whose movements occur in a finite space has only a finite number of states in which it can be. Also, since the permanence of the physical real apprehensible by the senses is very strong, man's preoccupation with thinking is quite limited, in his civilizations. Let us point out that artificial systems that think artificially will be able to communicate directly at the level of the forms of ideas, without a language mediator, and hence will be co-active as well as numerous in space.
For different reasons, numerous people think that the path of investigating artificial consciousness should not be taken at all. I feel differently, because discoveries have been the very root of our existence, from fire to the mighty F-16. The mind is a work of art moulded in mystery, and any effort to unlock its doors should be encouraged because, I am sure, its discovery will only help us respect the great architect more.






Brain Machine Interface

ABSTRACT


A brain-machine interface (BMI) is a communication system that does not depend on the brain's normal output pathways of peripheral nerves and muscles. It is a new communication link between a functioning human brain and the outside world: an electronic interface with the brain that can both send signals to it and receive signals from it. A BMI uses brain activity to command, control, actuate, and communicate with the world directly, by integrating the brain with peripheral devices and systems. Signals from the brain are carried to the computer via implants, allowing data entry without any direct physical intervention. The BMI transforms mental decisions and reactions into control signals by analyzing bioelectrical brain activity.
While linking the brain directly with machines was once considered science fiction, advances over the past few years have made it increasingly viable. It is an area of intense research with almost limitless possibilities. The human brain is the most complex physical system we know of, and we would have to understand its operation in great detail to build such a device. An immediate goal of brain-machine interface research is to give people with damaged sensory or motor functions a way to use their brains to control artificial devices and restore lost capabilities. By combining the latest developments in computer technology and high-tech engineering, paralyzed persons will be able to control a motorized wheelchair, a computer pointer, or a robotic arm by thought alone. In an era when debilitating diseases are increasingly common, developing this technology to its full potential would be a boon. Recent technical and theoretical advances have demonstrated the ultimate feasibility of the concept for a wide range of space-based applications as well. Beyond clinical purposes, such an interface would find immediate applications in various technology products.
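As a rough, hypothetical illustration of the signal-analysis step (the frequency band, threshold, and command mapping below are our own assumptions, not a description of any real clinical BMI), a simple pipeline might extract band power from a recorded brain signal and map it to a binary command:

    # Minimal sketch of turning bioelectrical activity into a control
    # signal. The 8-12 Hz band, threshold, and command mapping are
    # illustrative assumptions.

    import numpy as np

    FS = 256  # sampling rate in Hz (assumed)

    def band_power(signal, low, high, fs=FS):
        """Average spectral power of `signal` within [low, high] Hz."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        mask = (freqs >= low) & (freqs <= high)
        return spectrum[mask].mean()

    def decode_command(signal, threshold=1e4):
        """Map alpha-band (8-12 Hz) power to a binary command."""
        power = band_power(signal, 8.0, 12.0)
        return "MOVE" if power > threshold else "REST"

    # Synthetic one-second recording: a 10 Hz rhythm plus noise.
    t = np.arange(FS) / FS
    eeg = 50.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(FS)
    print(decode_command(eeg))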






Adding Intelligence to Internet

ABSTRACT


Two scaling problems face the Internet today. First, it will be years before terrestrial networks are able to provide adequate bandwidth uniformly around the world, given the explosive growth in Internet bandwidth demand and the amount of the world that is still unwired. Second, the traffic distribution is not uniform worldwide: Clients in all countries of the world access content that today is chiefly produced in a few regions of the world (e.g., North America). A new generation of Internet access built around geosynchronous satellites can provide immediate relief. The satellite system can improve service to bandwidth-starved regions of the globe where terrestrial networks are insufficient and supplement terrestrial networks elsewhere. This new generation of satellite system manages a set of satellite links using intelligent controls at the link endpoints. The intelligence uses feedback obtained from monitoring end-user behavior to adapt the use of resources. Mechanisms controlled include caching, dynamic construction of push channels, use of multicast, and scheduling of satellite bandwidth. This paper discusses the key issues of using intelligence to control satellite links, and then presents as a case study the architecture of a specific system: the Internet Delivery System, which uses INTELSAT’s satellite fleet to create Internet connections that act as wormholes between points on the globe.
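As a toy illustration of the feedback-driven control described above (the data structures and policy below are our own assumptions, not the Internet Delivery System's actual design), a controller might track client requests observed at a satellite link endpoint and promote the most popular items to a multicast push channel:

    # Illustrative sketch of feedback-driven link control: monitor
    # end-user requests, then schedule the hottest content onto a
    # multicast push channel. The policy is assumed for demonstration.

    from collections import Counter

    class PushScheduler:
        def __init__(self, channel_slots=3):
            self.requests = Counter()   # feedback from monitoring users
            self.channel_slots = channel_slots

        def observe(self, url):
            """Record one client request seen at the link endpoint."""
            self.requests[url] += 1

        def build_push_channel(self):
            """Promote the most-requested items to the multicast channel."""
            return [url for url, _ in
                    self.requests.most_common(self.channel_slots)]

    sched = PushScheduler()
    for url in ["/news", "/news", "/video", "/news", "/video", "/docs"]:
        sched.observe(url)
    print(sched.build_push_channel())  # ['/news', '/video', '/docs']

The same feedback loop could drive cache-prefetch decisions or the allocation of satellite bandwidth among channels.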






EUVL

ABSTRACT


This paper discusses the basic concepts and current state of development of EUV lithography (EUVL), a relatively new form of lithography that uses extreme ultraviolet (EUV) radiation with a wavelength in the range of 10 to 14 nanometers (nm) to carry out projection imaging. Currently, and for the last several decades, optical projection lithography has been the lithographic technique used in the high-volume manufacture of integrated circuits. It is widely anticipated that improvements in this technology will allow it to remain the semiconductor industry’s workhorse through the 100 nm generation of devices. However, some time around the year 2005, so-called Next-Generation Lithographies will be required. EUVL is one such technology vying to become the successor to optical lithography. This paper provides an overview of the capabilities of EUVL, and explains how EUVL might be implemented. The challenges that must be overcome in order for EUVL to qualify for high-volume manufacture are also discussed.
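To see why the shorter wavelength matters, recall the Rayleigh resolution criterion for projection optics, R = k1 * wavelength / NA. The numerical apertures and k1 value in the sketch below are illustrative assumptions, not figures from the paper:

    # Rayleigh criterion for projection lithography: R = k1 * lambda / NA.
    # The NA and k1 values are illustrative assumptions.

    def min_feature(wavelength_nm, numerical_aperture, k1=0.5):
        """Smallest printable half-pitch, in nanometers."""
        return k1 * wavelength_nm / numerical_aperture

    # Deep-UV optical lithography: 193 nm light, NA ~ 0.6 (assumed).
    print(min_feature(193.0, 0.6))   # ~161 nm

    # EUVL: 13.4 nm light, even with a modest NA ~ 0.1 (assumed).
    print(min_feature(13.4, 0.1))    # ~67 nm

Dropping the wavelength by more than an order of magnitude relaxes the demands on the optics while still reaching far smaller features.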






ASYMMETRIC DIGITAL SUBSCRIBER LINE (ADSL)

ABSTRACT


Digital Subscriber Lines (DSL) deliver high-rate digital data over existing ordinary phone lines. A new modulation technology called Discrete Multitone (DMT) allows the transmission of high-speed data. DSL facilitates the simultaneous use of normal telephone services, ISDN, and high-speed data transmission, e.g., video. DMT-based DSL can be seen as the transition from existing copper lines to future fiber cables. This makes DSL economically interesting for the local telephone companies, which can offer customers high-speed data services even before switching to fiber optics.
ADSL is a newly standardized transmission technology facilitating the simultaneous use of normal telephone services, data transmission of up to 6 Mbit/s downstream, and ISDN Basic Rate Access (BRA). ADSL can be seen as an FDM system in which the available bandwidth of a single copper loop is divided into three parts. The baseband occupied by POTS is split from the data channels by a method (e.g., passive filters) that guarantees POTS service even in the case of an ADSL system failure.
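As a simplified illustration of how DMT uses the divided bandwidth (the tone count, SNR profile, and gap figure below are assumptions for demonstration, not ADSL standard parameters), each subcarrier can be loaded with a number of bits that depends on its measured SNR:

    # Toy DMT bit-loading sketch: each tone carries
    # b = floor(log2(1 + SNR / gamma)) bits, where gamma is the SNR gap.
    # The SNR profile and gap value are illustrative assumptions.

    import math

    def bits_per_tone(snr_linear, gap_db=9.8):
        """Bits supportable on one DMT subcarrier at the given linear SNR."""
        gamma = 10 ** (gap_db / 10.0)
        return int(math.floor(math.log2(1.0 + snr_linear / gamma)))

    # Assumed per-tone SNRs (linear): strong tones at low frequencies,
    # noisy tones at high frequencies where the copper loop attenuates more.
    snrs = [4000.0, 2500.0, 900.0, 300.0, 60.0, 10.0]
    bits = [bits_per_tone(s) for s in snrs]
    print(bits)                        # [8, 8, 6, 5, 2, 1]
    print(sum(bits), "bits per DMT symbol")

Loading more bits onto cleaner tones is what lets DMT squeeze high downstream rates out of an ordinary copper loop.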






AUDIO SPOTLIGHTING

ABSTRACT


Audio spotlighting is a very recent technology that creates focused beams of sound, similar to the light beam from a flashlight. By 'shining' sound at one location, specific listeners can be targeted without others nearby hearing it. The technique uses a combination of non-linear acoustics and some fancy mathematics, but it is real, and it can knock the socks off any conventional loudspeaker. The device comprises a speaker that fires inaudible ultrasound pulses of very small wavelength, which travel as a narrow column. The ultrasound beam acts as an airborne speaker: as the beam moves through the air, it is gradually distorted in a predictable way by the non-linearity of air, giving rise to audible components that can be accurately predicted and precisely controlled. Joseph Pompei's Holosonic Research Labs invented the Audio Spotlight, which consists of a sound processor, an amplifier, and a transducer. The American Technology Corporation developed the Hyper Sonic Sound-based Directed Audio Sound System. Both use ultrasound to project sound in a focused beam. The audio spotlight can be directed either at a particular listener or at a point from which the sound is reflected.
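A rough sketch of the predictable distortion mentioned above: in Berktay's far-field approximation for parametric arrays, the audible pressure demodulated by the air is proportional to the second time derivative of the squared ultrasound envelope. The sample rate, tone, and modulation depth below are illustrative assumptions:

    # Sketch of Berktay's far-field approximation: the demodulated
    # audible signal is proportional to d^2/dt^2 of the squared
    # ultrasound envelope. Parameter values are illustrative.

    import numpy as np

    FS = 192_000                     # sample rate, Hz (assumed)
    t = np.arange(FS // 10) / FS     # 100 ms of signal

    audio = np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone to reproduce
    envelope = 1.0 + 0.5 * audio           # modulation depth 0.5 (assumed)

    # Audible output predicted by the approximation (unnormalized):
    demodulated = np.gradient(np.gradient(envelope ** 2, 1 / FS), 1 / FS)

    # The result contains the desired 1 kHz tone plus a 2 kHz distortion
    # term; envelope preprocessing (e.g., square-rooting) is what real
    # systems use to suppress such products.
    spectrum = np.abs(np.fft.rfft(demodulated))
    freqs = np.fft.rfftfreq(len(demodulated), 1 / FS)
    for f in (1000, 2000):
        print(f, "Hz:", spectrum[np.argmin(np.abs(freqs - f))])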

Targeted, directed audio technology is headed for a huge commercial market in entertainment and consumer electronics, and technology developers are scrambling to tap into it. As the most dramatic change in the way we perceive sound since the invention of the moving-coil loudspeaker, audio spotlight technology can work wonders in fields such as private messaging systems, home theatre audio, navy and military applications, museum displays, and ventriloquist systems. Thus audio spotlighting lets us control where sound comes from and where it goes!






Smart Note Taker

ABSTRACT


The Smart Note Taker is a helpful product that meets people's needs in today's fast, technological life, and it can be used in many ways. It lets busy people take notes quickly and easily: with the Smart Note Taker, users can write notes in the air while occupied with other work. The written note is stored on the pen's memory chip and can be read digitally after the job is done. This saves time and makes life easier.

The Smart Note Taker is also helpful for blind people, who can think and write freely with it. Another place where the product can play an important role is a phone conversation: the two subscribers are apart while they talk, and they may want to use figures or text to understand each other better. It is also useful for instructors giving presentations, who may not want to lecture from the front of the board. A drawn figure can be processed and sent directly to the server computer in the room, which can then broadcast the shape over the network to all the computers present, making lectures more efficient and fun. The product is simple but powerful: it senses the 3D shapes and motions the user tries to draw, and the sensed information is processed, transferred to the memory chip, and then shown on the display device. The drawn shape can then be broadcast to the network or sent to a mobile device.

An additional feature displays previously taken notes in an application program on the computer, such as a word processor or an image editor. Figures drawn in the air are recognized and, with the help of the software we will write, the desired character is printed into the word document. If the application is a paint-style program, the program chooses the most similar shape and prints it on the screen (a recognition sketch follows below).
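As a hypothetical sketch of that "most similar shape" step (the resampling count, templates, and distance metric are our own assumptions, not the product's actual recognizer), a sensed stroke can be resampled to a fixed number of points and compared against stored templates by average point-to-point distance:

    # Toy shape recognizer: resample a sensed stroke to N points and pick
    # the template with the smallest average point distance. Templates,
    # N, and the metric are illustrative assumptions.

    import math

    N = 16  # points per resampled stroke (assumed)

    def resample(points, n=N):
        """Linearly resample a polyline to n evenly spaced points."""
        # Cumulative arc length along the stroke.
        d = [0.0]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
        total = d[-1]
        out, j = [], 0
        for i in range(n):
            target = total * i / (n - 1)
            while j < len(d) - 2 and d[j + 1] < target:
                j += 1
            seg = d[j + 1] - d[j] or 1.0
            t = (target - d[j]) / seg
            (x0, y0), (x1, y1) = points[j], points[j + 1]
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        return out

    def distance(a, b):
        """Average point-to-point distance of two equal-length polylines."""
        return sum(math.hypot(p[0] - q[0], p[1] - q[1])
                   for p, q in zip(a, b)) / len(a)

    def recognize(stroke, templates):
        r = resample(stroke)
        return min(templates,
                   key=lambda name: distance(r, resample(templates[name])))

    templates = {"line": [(0, 0), (10, 10)],
                 "ell": [(0, 10), (0, 0), (10, 0)]}
    stroke = [(0, 9), (0.2, 5), (0, 1), (5, 0), (9, 0.3)]
    print(recognize(stroke, templates))  # 'ell'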

Since a Java applet is suitable for both drawings and strings, all these applications can be combined in a single Java program. The Java code we develop will also be installed on the pen, so that the processor inside the pen can type and draw the desired shape or text on the display panel.






Artificial Neural Network (ANN)

ABSTRACT


An Artificial Neural Network (ANN) is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.

Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived several eras. Many important advances have been boosted by the use of inexpensive computer emulations. The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts.

There were some initial simulations using formal logic. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons work. Their networks were built from simple neurons, considered to be binary devices with fixed thresholds.
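A McCulloch-Pitts style neuron is easy to sketch (the weights and threshold below are arbitrary illustrative values): it sums binary inputs against fixed weights and fires if the sum reaches the threshold:

    # McCulloch-Pitts style binary threshold neuron. Weights and
    # threshold are arbitrary illustrative values.

    def mcp_neuron(inputs, weights, threshold):
        """Fire (return 1) iff the weighted sum of inputs >= threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # With weights (1, 1) and threshold 2, the neuron computes logical AND.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", mcp_neuron((a, b), (1, 1), threshold=2))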

It was not only neuroscientists: psychologists and engineers also contributed to the progress of neural network simulations. Rosenblatt (1958) stirred considerable interest and activity in the field when he designed and developed the Perceptron, which had three layers, the middle one known as the association layer. This system could learn to connect, or associate, a given input to a random output unit.

Another system was the ADALINE (Adaptive Linear Element), developed in 1960 by Widrow and Hoff of Stanford University. The ADALINE was an analogue electronic device made from simple components. Its learning method differed from the Perceptron's: it employed the Least-Mean-Squares (LMS) learning rule (sketched below). Progress during the late 1970s and early 1980s was important to the re-emergence of interest in the neural network field. Significant progress has since been made, enough to attract a great deal of attention and to fund further research. Neurally based chips are emerging and applications to complex problems are developing. Clearly, today is a period of transition for neural network technology.
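The LMS (Widrow-Hoff) rule itself is compact: nudge the weights along the error, w <- w + eta * (d - y) * x. Here is a minimal sketch, where the learning rate, data, and epoch count are illustrative assumptions:

    # Least-Mean-Squares (Widrow-Hoff) learning on a linear unit.
    # Learning rate, training data, and epoch count are illustrative.

    def lms_train(samples, eta=0.1, epochs=50):
        """samples: list of (inputs, desired); returns learned weights."""
        n = len(samples[0][0])
        w = [0.0] * (n + 1)  # last entry is the bias weight
        for _ in range(epochs):
            for x, d in samples:
                x = list(x) + [1.0]                        # bias input
                y = sum(wi * xi for wi, xi in zip(w, x))   # linear output
                err = d - y
                w = [wi + eta * err * xi for wi, xi in zip(w, x)]
        return w

    # Learn the linear map d = 2*x1 - x2 + 0.5 from four examples.
    data = [((0, 0), 0.5), ((1, 0), 2.5), ((0, 1), -0.5), ((1, 1), 1.5)]
    print(lms_train(data))  # approaches [2.0, -1.0, 0.5]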

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an “expert” in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer “what if” questions.






Neural Networks

ABSTRACT


With the dawn of the genome era, computational methods for the automatic analysis of biological data have become increasingly important. The explosion in the production rate of expression-level data has highlighted the need for automated techniques that help scientists analyze, understand, and cluster the enormous amounts of data being produced. Examples of such problems are analyzing gene expression data produced by microarray technology on the genomic scale, sequencing genes on the genomic scale, and sequencing proteins and amino acids. Researchers have recognised Artificial Neural Networks as a promising technique that can be applied to several problems in genome informatics and molecular sequence analysis. This seminar explains how Neural Networks have been widely employed in genome informatics research.
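One neural approach often used for clustering expression profiles is competitive learning, the winner-take-all core of a self-organizing map (the neighborhood update is omitted here for brevity). The unit count, learning rate, and synthetic profiles below are our own assumptions, not data from the seminar report:

    # Toy competitive-learning clusterer for expression profiles:
    # each sample pulls its closest prototype toward itself.
    # Unit count, learning rate, and data are illustrative.

    import random

    def train(profiles, units=2, eta=0.3, epochs=100, seed=0):
        """Return `units` prototype vectors fitted to the profiles."""
        random.seed(seed)
        dim = len(profiles[0])
        protos = [[random.random() for _ in range(dim)] for _ in range(units)]
        for _ in range(epochs):
            for x in profiles:
                # Best-matching unit: closest prototype (squared distance).
                bmu = min(range(units),
                          key=lambda u: sum((p - v) ** 2
                                            for p, v in zip(protos[u], x)))
                # Pull the winner toward the sample.
                protos[bmu] = [p + eta * (v - p)
                               for p, v in zip(protos[bmu], x)]
        return protos

    # Two synthetic groups of 3-condition expression profiles.
    profiles = [[0.9, 0.8, 0.1], [1.0, 0.7, 0.2],   # up in conditions 1-2
                [0.1, 0.2, 0.9], [0.2, 0.1, 1.0]]   # up in condition 3
    for p in train(profiles):
        print([round(v, 2) for v in p])

Each prototype converges near the mean profile of one group, so genes with similar expression patterns end up assigned to the same unit.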





