Thursday, February 26, 2015
Artificial Neural Networks
Computers outperform humans in many tasks. Although humans must write the instructions, once the program is up and running, a computer can perform arithmetic or sort a list in a fraction of the time a person would need to do the same job. The most advanced computers today are trillions of times faster than humans at certain tasks, and IBM's supercomputer Deep Blue defeated Garry Kasparov, the reigning chess champion, in a 1997 chess match.
But even the fastest computers cannot outperform humans in all tasks. Computers excel at tasks requiring a large number of simple operations, but unlike humans, computers are not yet generally capable of making new discoveries. The human brain is an astonishingly complex organ composed of billions of cells; one type of cell, called a neuron, communicates with other neurons to form vast networks. The complexity, adaptability, and information-processing capacity of these neural networks give humans the intelligence to design experiments, test scientific theories, formulate general principles, learn new things, and write computer programs. A computer can only carry out its instructions. Computers are able to run complicated programs, but a program must consist of a sequence of simple instructions, and a computer's processor can only follow those instructions - it does what it is told to do, and nothing more. Deep Blue won its chess match by performing billions of simple calculations that evaluated the outcomes of possible moves.
Artificial intelligence (AI) is a branch of computer science aimed at creating machines capable of displaying a certain degree of intelligence. The ultimate goal of AI is a computer that can think like a person. One way to reach this goal would be to give computers a "brain" similar to a human brain. Many AI researchers pursuing this approach have been experimenting with artificial neural networks, which are not biological but are modeled on the operating principles of the brain.
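To make the idea concrete, here is a minimal sketch in Python of a single artificial neuron, the kind of simple threshold unit these networks are built from. The weights and threshold below are invented for illustration; this is a cartoon of the principle, not any particular researcher's implementation.

# A minimal artificial neuron: it sums weighted inputs and "fires"
# (outputs 1) if the sum crosses a threshold. All numbers here are
# illustrative, not taken from the article.

def artificial_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# With these invented weights, the unit fires only when both inputs
# are active, behaving like a logical AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, artificial_neuron([a, b], [0.6, 0.6], 1.0))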
Artificial neural networks were not possible until scientists had gained some understanding of the biological neural networks in the brain. Neurons are tiny cells enclosed in a membrane, with a cell body about 0.02-0.06 inches (0.05-0.15 cm) in diameter and a long, thin projection called an axon.
Detailed study of neurons began in 1873, when Italian researcher Camillo Golgi (1843-1926) developed a method of staining the cells so that they could be clearly viewed under a microscope. Neurons, like most cells, are mostly transparent, and they are tightly packed together, making these small objects nearly impossible for scientists to see and study even under magnification. Golgi's method involved a dye containing silver nitrate, which some (though not all) neurons take up. The dye stained these neurons and made them stand out against a background of unstained cells. (If the dye had stained all the cells, the result would have been a uniform field of color - as useless as the original, transparent condition, because researchers could not have studied individual cells.) Why some but not all neurons take up this dye is still not well understood, but the method gives scientists a good look at these important cells.
Using Golgi's technique, Spanish anatomist Santiago Ramón y Cajal (1852-1934) proposed that neurons process information by receiving inputs from other cells and sending outputs down the axon. Cajal's theories proved to be mostly correct. Neurons send and receive information from cell to cell by way of small junctions called synapses, named by British physiologist Sir Charles Sherrington (1857-1952) in 1897. As shown in the figure, synapses are usually formed between the axon of the sending neuron - the presynaptic neuron - and a dendrite or cell body of the receiving neuron - the postsynaptic neuron. The figure illustrates the anatomy of a neuron and its synapses.
Information in the brain takes a different form than it does in a computer or in human languages such as English. Neurons maintain a small electrical potential of about -70 millivolts (a millivolt is a thousandth of a volt) - the interior of a neuron is about 70 millivolts more negative than the outside. This small voltage is only about 1/20th the voltage of an ordinary flashlight battery (a 1.5-volt battery supplies roughly 21 times 70 millivolts) and is not powerful by itself, though some animals such as electric eels can combine the small potentials produced by their cells to deliver a powerful shock. More important is a neuron's ability to change its voltage briefly, producing a voltage spike that lasts a few milliseconds. This spike is known as an action potential.
Neurons send information in the form of sequences of action potentials. An action potential travels down an axon until it arrives at a special site called an axon terminal, which is usually located at a synapse. In most synapses, the spike causes the presynaptic neuron to release molecules known as neurotransmitters, which cross the synaptic gap and attach to receptors in the postsynaptic membrane. This activates the receptors, which set certain biochemical reactions in motion and can slightly change the potential of the postsynaptic neuron. Neurons continually receive these synaptic inputs, often from a thousand or more other neurons, some of which slightly raise the neuron's potential and some of which lower it. A neuron will generally fire an action potential if its voltage exceeds a threshold, perhaps 10 or 15 millivolts higher (more positive) than its resting potential of -70 millivolts. In this way, neurons are constantly "processing" their inputs, some of which are excitatory, tending to make the neuron spike by pushing it closer to the threshold, and some of which are inhibitory, making it harder for the neuron to spike by pulling the potential farther from the threshold. The result of this processing is the brain activity responsible for all the interesting - and sometimes not so interesting - things that people do.
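That summing-to-threshold behavior is easy to caricature in a few lines of code. The Python sketch below uses made-up numbers - a resting potential of -70 millivolts, a threshold 15 millivolts above it, a simple leak back toward rest, and random excitatory and inhibitory inputs; real neurons are vastly more complicated.

import random

# Toy caricature of a neuron integrating synaptic inputs toward a
# threshold. All numbers are illustrative, not physiological.
REST = -70.0       # resting potential, in millivolts
THRESHOLD = -55.0  # fire an "action potential" above this voltage
LEAK = 0.05        # fraction of the gap back to rest recovered each step

potential = REST
spikes = 0
for step in range(1000):
    # Two of every three inputs are excitatory in this toy example.
    potential += random.choice([+2.0, +2.0, -2.0])
    # The membrane also leaks back toward its resting potential.
    potential += LEAK * (REST - potential)
    if potential >= THRESHOLD:
        spikes += 1
        potential = REST  # reset after the spike
print(spikes, "action potentials in 1000 steps")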
Vision, for example, begins when special cells in the eye called photoreceptors detect light. Other cells convert the light signals into trains of action potentials that represent the dark and bright areas making up the image. Dozens of neural networks, distributed over wide areas of the brain, process this visual information, extracting details such as the number and kind of objects and their color and motion. At some point - scientists are not sure how or where - the person perceives and becomes consciously aware of this visual information.
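One simple way to picture how brightness might become trains of action potentials is a rate code, in which brighter regions drive more spikes per unit of time. The Python sketch below illustrates that idea with invented rates; it is not a model of how the retina actually encodes images.

import random

def spike_count(brightness, max_rate=100, window_s=0.1):
    """Expected number of spikes in one time window is proportional
    to brightness (a value from 0.0 to 1.0); the randomness crudely
    stands in for the variability of real neurons."""
    expected = brightness * max_rate * window_s
    return max(0, round(random.gauss(expected, 1.0)))

for label, b in [("dark", 0.1), ("gray", 0.5), ("bright", 0.9)]:
    print(label, spike_count(b))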
Information processing in the brain is much different from that in an ordinary computer. A computer generally operates on binary values, using digital logic circuits to transform the data. Each processor in a computer works serially, one step at a time. In the brain, information processing occurs in parallel: millions of neurons are working at the same time, summing their synaptic inputs and generating more or fewer action potentials. This activity is sometimes called parallel distributed processing, a term that refers to simultaneous operations distributed over a broad area.
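In software, this many-units-at-once picture is often imitated with vectorized arithmetic, in which a single array operation updates every unit in a layer simultaneously. Here is a sketch in Python using NumPy; the sizes, weights, and threshold are invented for illustration.

import numpy as np

# One vectorized operation sums the synaptic inputs of every "neuron"
# in a layer at once; all sizes and values here are invented.
rng = np.random.default_rng(0)
n_inputs, n_neurons = 1000, 500

inputs = rng.random(n_inputs)                        # incoming activity
weights = rng.normal(0, 0.1, (n_neurons, n_inputs))  # synaptic strengths
threshold = 1.0

sums = weights @ inputs   # every neuron's weighted input sum, in parallel
fired = sums > threshold  # boolean array: which neurons spike
print(fired.sum(), "of", n_neurons, "neurons fired")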
The parallel nature of information processing in the brain is the reason it can work so quickly. Computers are much faster at arithmetic, but the brain's original function is not to add or subtract numbers quickly. The brain evolved to analyze sensory inputs - vision, hearing, smell, taste, and touch - and to extract vital information about food and predators. Neural networks in the brain can recognize an image more rapidly and accurately than any computer program, for example. Each neuron behaves like a little processor, contributing its part to the overall computation. Supercomputers gain speed by using many processors working in parallel, but the brain has roughly 100 billion neurons, which gives it a computational capacity that greatly exceeds computers' for the jobs it evolved to perform.
width="300" height="250" border="0">