Fluorescent Multilayer Disc (FMD)

ABSTRACT

The compact disc was a revolutionary product in its time and influenced many spheres of human activity. People began recording high-quality music that, unlike music on tape, did not degrade over time. As soon as CDs entered the computer industry they became an indispensable tool for users and programmers alike: developers could expand their software products with video, audio and other content, and discs were later used for digital video (Video CD).

But technology keeps progressing and data volumes grow ever faster. An ordinary CD (about 640 MB) is far from enough, which is why DVD technology appeared. The roughly 17 GB that a DVD can hold is welcome, but it is also a hard ceiling, so a completely new method of storing information on portable media is needed. Constellation 3D has now demonstrated such a format: FMD (Fluorescent Multilayer Disc), which promises a staggering 140 GB of storage space and looks like an enticing solution for the storage-hungry masses.
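As a rough back-of-envelope illustration of these figures (assuming, purely for the sake of the estimate, a per-layer density comparable to a single DVD layer of about 4.7 GB, which is not a specification from the abstract):

```python
# Rough comparison of the capacities cited above. The per-layer figure
# (one DVD layer, ~4.7 GB) is an illustrative assumption used only to gauge
# how many fluorescent layers a 140 GB disc would imply.

CD_GB = 0.64        # ~640 MB, as cited above
DVD_GB = 17.0       # double-sided, dual-layer DVD, as cited above
FMD_GB = 140.0      # Constellation 3D's demonstrated FMD format
DVD_LAYER_GB = 4.7  # a single DVD layer (assumed per-layer density)

print(f"FMD holds {FMD_GB / CD_GB:.0f}x a CD and {FMD_GB / DVD_GB:.1f}x a DVD")
print(f"At DVD-like per-layer density that is about "
      f"{FMD_GB / DVD_LAYER_GB:.0f} fluorescent layers")
```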


Zone Based Ant Colony Routing In MANET

ABSTRACT

Ant colony optimization (ACO) is a stochastic approach to solving combinatorial optimization problems such as routing in computer networks; the idea is inspired by the way an ant colony forages for food. Position-based routing algorithms such as POSANT have significant shortcomings: even when they are able to find a route, they do not guarantee that it is the shortest one. Routing algorithms based on ant colony optimization, by contrast, find paths that are close in length to the shortest paths, but their drawback is the large number of control messages that must be sent, or the long delay before routes from a source to a destination are established. This paper presents a new routing algorithm for mobile ad hoc networks that combines the ant colony approach with zone-based routing and clustering in order to obtain near-shortest paths with a small number of control messages, thereby minimizing overhead. A comparative study of the overhead of POSANT and the zone-based ant algorithm with clustering, with respect to varying node number, zone size and mobility, shows that the zone-based ant colony routing algorithm using clustering is more efficient than POSANT.
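As a minimal sketch of the pheromone mechanism that such algorithms rely on, the toy example below finds a near-shortest path on a small static graph: forward ants explore probabilistically and backward ants reinforce cheaper paths. It is not the paper's zone-based protocol, and the graph, weights and parameters are illustrative assumptions.

```python
import random

# Minimal ant-colony shortest-path sketch on a small static graph.
# Illustrates the pheromone mechanism only; all values are assumptions.
GRAPH = {                       # adjacency list with symmetric link costs
    'S': {'A': 1, 'B': 3},
    'A': {'S': 1, 'B': 1, 'C': 1},
    'B': {'S': 3, 'A': 1, 'C': 1},
    'C': {'A': 1, 'B': 1, 'D': 1},
    'D': {'C': 1},
}
PHEROMONE = {u: {v: 1.0 for v in nbrs} for u, nbrs in GRAPH.items()}
EVAPORATION, DEPOSIT, N_ANTS = 0.1, 1.0, 300

def ant_walk(src, dst):
    """Forward ant: pick each next hop with probability proportional to pheromone."""
    path, node = [src], src
    while node != dst:
        nbrs = [v for v in GRAPH[node] if v not in path] or list(GRAPH[node])
        node = random.choices(nbrs, weights=[PHEROMONE[node][v] for v in nbrs])[0]
        path.append(node)
    return path

def reinforce(path):
    """Backward ant: evaporate all trails, then deposit more on cheaper paths."""
    cost = sum(GRAPH[u][v] for u, v in zip(path, path[1:]))
    for u in PHEROMONE:
        for v in PHEROMONE[u]:
            PHEROMONE[u][v] *= (1 - EVAPORATION)
    for u, v in zip(path, path[1:]):
        PHEROMONE[u][v] += DEPOSIT / cost

for _ in range(N_ANTS):
    reinforce(ant_walk('S', 'D'))

# Following the strongest trail at each hop now approximates the shortest route.
node, route = 'S', ['S']
while node != 'D':
    choices = {v: p for v, p in PHEROMONE[node].items() if v not in route}
    if not choices:
        break
    node = max(choices, key=choices.get)
    route.append(node)
print(route)    # typically ['S', 'A', 'C', 'D'], the lowest-cost route
```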


An Energy Aware Framework for Dynamic Software Management in Mobile Computing Systems

ABSTRACT

Energy efficiency is a very important and challenging issue for resource-constrained mobile computers. Here, a novel dynamic software management (DSOM) framework for improving battery utilization is introduced. The DSOM module is designed and implemented in user space, independent of the operating system. DSOM exploits quality-of-service (QoS) adaptation to reduce system energy and employs a priority-based preemption policy for multiple applications to avoid competition for the limited energy budget. Software energy macromodels for mobile applications are used to predict the energy demand at each QoS level, so that the DSOM module can select the best possible trade-off between energy conservation and application QoS while honoring the priorities set by the user. Experimental results on mobile applications such as a video player, a speech recognizer and voice-over-IP show that this approach can meet user-specified, task-oriented goals and significantly improve battery utilization.
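The selection step can be pictured with the small sketch below: given a per-application energy macromodel (predicted energy demand per QoS level) and user priorities, a manager degrades lower-priority applications first until the predicted total fits the energy budget. The greedy policy and all numbers are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch of QoS selection driven by energy macromodels and user priorities.
# (app name, user priority: higher = more important,
#  predicted energy demand in joules per QoS level, best level first)
APPS = [
    ("voip",       3, [180, 120,  90]),
    ("video",      2, [900, 600, 350]),
    ("speech_rec", 1, [400, 250, 150]),
]

def select_qos(apps, battery_budget_j):
    """Greedy sketch: start every app at its top QoS level, then step the
    lowest-priority app down one level at a time until the predicted total
    energy fits the budget (or nothing is left to degrade)."""
    levels = {name: 0 for name, _, _ in apps}          # 0 = best QoS level
    def total():
        return sum(demand[levels[name]] for name, _, demand in apps)
    while total() > battery_budget_j:
        candidates = [(prio, name, demand) for name, prio, demand in apps
                      if levels[name] < len(demand) - 1]
        if not candidates:
            break                                      # budget still exceeded
        _, name, _ = min(candidates)                   # lowest priority first
        levels[name] += 1
    return levels, total()

levels, predicted = select_qos(APPS, battery_budget_j=1000)
print(levels, predicted)   # speech_rec and video are degraded before voip
```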


The Artificial Brain

ABSTRACT


We have always been interested in the notion of a consciousness fact, by which we mean the fact that an individual endowed with a brain can think of something related to his position in the world, right here and right now. What matters is not the continuity, performance or profundity of the thought, but that something is thought in a knowable manner, one that can be specified from a linguistic or mathematical angle, without being an automatic, predefined response to a given situation.
By analogy with the notion long investigated by philosophers, psychologists and neurobiologists, we pose the question of artificial consciousness: how can the fact of "thinking of something" be transposed into the computable domain, so that an artificial system founded on computer processes is able to generate consciousness facts in an observable manner? Such a system would have intentions, emotions and ideas about things and events related to itself. It would need a body that it could direct and that would constrain it; a history; intentions to act and, above all, to think; knowledge, notably knowledge of language; emotions; and, finally, a certain consciousness of itself.
We can call this system, by sheer semantic analogy, an artificial brain, although we will see that its architecture is quite different from that of living brains. The concern is to transpose the effects and the movements, certainly not to reproduce components such as neurons and glial cells. One characteristic of the process of thinking as it unfolds in a brain should be kept in mind above all: a complex movement of neural, biochemical and electrical activation takes place, coupled to a similar movement of a different kind in the nervous system deployed throughout the body. By selective emergence, when it reaches a particular configuration, this complex movement generates what we call a thought about something. That thought rapidly leads to motor or language activity and then gives way to the next thought, which may be similar or different. This is the very complex phenomenon that has to be transposed into the computable domain.
Hence, the sudden appearance of thoughts in brains should be approached at the level of the complex dynamics of a system that builds and reconfigures recurrent, temporized flows. This can be transposed into architectures of computer processes that carry symbolic meaning and that are made geometrically self-controlled. Two reasonable hypotheses are made for this transposition; the toy sketch after the list makes them concrete:
• an analogy between the geometrical dynamics of the real brain and of the artificial brain: in the former, the flows are complex, almost continuous images; in the latter, they are dynamic graphs whose deformations are evaluated topologically;
• a reduction of the real brain's combinatorial complexity in the computable domain by working at a symbolic, pre-language level. The basic elements are completely different; they are not of the same scale.
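The sketch below is a purely illustrative toy of the dynamic-graph idea: symbolic nodes linked in a small graph exchange activation, and the most strongly activated cluster stands in for an emergent configuration ("a thought about something"). Every node name, weight and parameter is hypothetical, not an architecture taken from the text.

```python
# Toy dynamic graph of symbolic nodes: activation spreads along weighted
# links, and the top activated cluster is read off as the emergent configuration.
LINKS = {                      # weighted links between symbolic nodes (assumed)
    "cup":    {"drink": 0.8, "table": 0.4},
    "drink":  {"cup": 0.8, "thirst": 0.7},
    "thirst": {"drink": 0.7, "water": 0.6},
    "table":  {"cup": 0.4},
    "water":  {"thirst": 0.6},
}
activation = {node: 0.0 for node in LINKS}
activation["thirst"] = 1.0     # an internal "intention" seeds the flow

DECAY = 0.5
for _ in range(3):             # let activation spread, mix and decay
    incoming = {n: 0.0 for n in LINKS}
    for src, nbrs in LINKS.items():
        for dst, w in nbrs.items():
            incoming[dst] += w * activation[src]
    activation = {n: DECAY * activation[n] + (1 - DECAY) * incoming[n]
                  for n in LINKS}

emergent = sorted(activation, key=activation.get, reverse=True)[:3]
print(emergent)                # ['thirst', 'drink', 'water'] -- the emergent cluster
```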
However, once these hypotheses are made, one should not simply set about developing an architecture that controls itself through the aspects of its changing geometry; one first needs to ask the proper question about the generation of consciousness facts. The philosopher M. Heidegger asked that question: what brings us to think about this thing, right here and right now? The rather elaborate answer to this question leads to an architectural choice that takes us away from reactive or deductive systems. The system will generate its consciousness facts intentionally, intention understood in the sense of P. Ricoeur. There is no consciousness fact without the intention to think. This settles the question, considered a formidable one, of the freedom to think. One thinks of anything according to one's memory and one's intuition of the moment, but only if it is expressible as a thought by the system that produces thoughts. Some might see something infinite in this process; we do not: a finite set of components whose movements occur in a finite space can only be in a finite number of states. Moreover, since the permanence of the physical world apprehensible by the senses is very strong, man's preoccupation with thinking is, across his civilizations, quite limited. Let us point out that artificial systems that think artificially will be able to communicate directly at the level of the forms of ideas, without a language mediator, and could hence be co-active as well as numerous in space.
For various reasons, many people think that the investigation of artificial consciousness should not be pursued at all. I feel differently: discoveries have been at the very root of our existence, from fire to the mighty F-16. The mind is a work of art moulded in mystery, and any effort to unlock its doors should be encouraged because, I am sure, its discovery will only help us respect the great architect more.






Parasitic Computing

ABSTRACT


“PARASITE”, as the word suggests, is an entity that resides on another entity and exploits the latter’s resources. The term “PARASITIC COMPUTING” refers to the technique of using the resources of one computer from another computer without the former’s knowledge. Distributed computing networks turn home users’ computers into parts of a virtual supercomputer that can perform time-intensive operations. This seminar provides an insight into how parasitic computing uses the computation power of computers connected to the Internet to solve complex mathematical problems. The technique was developed by scientists at the University of Notre Dame, Indiana (USA). According to the scientists, the Transmission Control Protocol (TCP) can be made to solve a piece of a mathematical problem, whose answer is then relayed back to the originating user. The implementation is discussed with an NP-complete problem as the example. Unlike hackers who exploit flaws to gain direct access to machines, the Notre Dame computer scientists created a virtual computer by using the fundamental components of distributed computing.
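The core trick can be sketched as follows: candidate solutions are encoded as 16-bit words in a TCP segment whose checksum field is pre-set so that only a satisfying candidate verifies correctly at the receiver; invalid candidates are silently dropped, so a reply itself signals a solution. The toy constraint below (x1 + x2 + x3 == 2) and its encoding are illustrative assumptions, and no packets are actually sent; only the receiver-side checksum arithmetic is reproduced.

```python
from itertools import product

def ones_complement_sum(words):
    """RFC 1071 one's-complement sum of 16-bit words, with end-around carry."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return total

TARGET = 2                           # toy constraint: x1 + x2 + x3 == 2
CHECKSUM_FIELD = (~TARGET) & 0xFFFF  # chosen so only satisfying candidates
                                     # verify to 0xFFFF at the receiver

for candidate in product((0, 1), repeat=3):
    # One 16-bit word per variable plus the pre-set checksum word. In the real
    # scheme these words travel inside a TCP segment, and a remote host's
    # checksum verification silently performs the test for us.
    words = list(candidate) + [CHECKSUM_FIELD]
    accepted = ones_complement_sum(words) == 0xFFFF   # receiver-side check
    print(candidate, "reply (solution)" if accepted else "dropped")
```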


