Mehrotra, Mohan, and Ranka - Elements of Artificial Neural Networks

Preface

This book is intended as an introduction to the subject of artificial neural networks for readers at the senior undergraduate or beginning graduate levels, as well as professional engineers and scientists. The background presumed is roughly a year of college-level mathematics, and some amount of exposure to the task of developing algorithms and computer programs. For completeness, some of the chapters contain theoretical sections that discuss issues such as the capabilities of the algorithms presented. These sections, identified by an asterisk in the section name, require greater mathematical sophistication and may be skipped by readers who are willing to assume the existence of theoretical results about neural network algorithms.

Many off-the-shelf neural network toolkits are available, including some on the Internet, and some that make source code available for experimentation. Toolkits with user-friendly interfaces are useful in attacking large applications; for a deeper understanding, we recommend that the reader be willing to modify computer programs, rather than remain a user of code written elsewhere.

The authors of this book have used the material in teaching courses at Syracuse University, covering various chapters in the same sequence as in the book. The book is organized so that the most frequently used neural network algorithms (such as error backpropagation) are introduced very early, so that these can form the basis for initiating course projects.

Chapters 2, 3, and 4 have a linear dependency and, thus, should be covered in the same sequence. However, chapters 5 and 6 are essentially independent of each other and earlier chapters, so these may be covered in any relative order. If the emphasis in a course is to be on associative networks, for instance, then chapter 6 may be covered before chapters 2, 3, and 4. Chapter 6 should be discussed before chapter 7. If the "non-neural" parts of chapter 7 (sections 7.2 to 7.5) are not covered in a short course, then discussion of section 7.1 may immediately follow chapter 6. The inter-chapter dependency rules are roughly as follows.

1 -> 2 -> 3 -> 4
1 -> 5
1 -> 6
3 -> 5.3
6.2 -> 7.1

Within each chapter, it is best to cover most sections in the same sequence as the text; this is not logically necessary for parts of chapters 4, 5, and 7, but minimizes student confusion.

Material for transparencies may be obtained from the authors. We welcome suggestions for improvements and corrections. Instructors who plan to use the book in a course should send electronic mail to one of the authors, so that we can indicate any last-minute corrections needed (if errors are found after book production). New theoretical and practical developments continue to be reported in the neural network literature, and some of these are relevant even for newcomers to the field; we hope to communicate some such results to instructors who contact us.

The authors of this book have arrived at neural networks through different paths (statistics, artificial intelligence, and parallel computing) and have developed the material through teaching courses in Computer and Information Science. Some of our biases may show through the text, while perspectives found in other books may be missing; for instance, we do not discount the importance of neurobiological issues, although these consume little ink in the book. It is hoped that this book will help newcomers understand the rationale, advantages, and limitations of various neural network models. For details regarding some of the more mathematical and technical material, the reader is referred to more advanced texts such as those by Hertz, Krogh, and Palmer (1990) and Haykin (1994).

We express our gratitude to all the researchers who have worked on and written about neural networks, and whose work has made this book possible. We thank Syracuse University and the University of Florida, Gainesville, for supporting us during the process of writing this book. We thank Li-Min Fu, Joydeep Ghosh, and Lockwood Morris for many useful suggestions that have helped improve the presentation. We thank all the students who have suffered through earlier drafts of this book, and whose comments have improved it, especially S. K. Bolaza, M. Gunwan, A. R. Menon, and Z. Zeng. We thank Elaine Weinman, who has contributed much to the development of the text. Harry Stanton of the MIT Press has been an excellent editor to work with.

Suggestions on an early draft of the book, by various reviewers, have helped correct many errors. Finally, our families have been the source of much needed support during the many months of work this book has entailed.

We expect that some errors remain in the text, and welcome comments and corrections from readers. The authors may be reached by electronic mail at mehrotra@syr.edu, ckmohan@syr.edu, and ranka@cis.ufl.edu. In particular, there has been so much recent research in neural networks that we may have mistakenly failed to mention the names of researchers who have developed some of the ideas discussed in this book. Errata, computer programs, and data files will be made accessible by Internet.

Introduction

If we could first know where we are, and whither we are tending, we could better judge what to do, and how to do it.
- Abraham Lincoln

Many tasks involving intelligence or pattern recognition are extremely difficult to automate, but appear to be performed very easily by animals. For instance, animals recognize various objects and make sense out of the large amount of visual information in their surroundings, apparently requiring very little effort. It stands to reason that computing systems that attempt similar tasks will profit enormously from understanding how animals perform these tasks, and simulating these processes to the extent allowed by physical limitations. This necessitates the study and simulation of neural networks.

The neural network of an animal is part of its nervous system, containing a large number of interconnected neurons (nerve cells). "Neural" is an adjective for neuron, and "network" denotes a graph-like structure. Artificial neural networks refer to computing systems whose central theme is borrowed from the analogy of biological neural networks.

Bowing to common practice, we omit the prefix "artificial." There is potential for confusing the (artificial) poor imitation for the (biological) real thing; in this text, non-biological words and names are used as far as possible.

Artificial neural networks are also referred to as "neural nets," "artificial neural systems," "parallel distributed processing systems," and "connectionist systems." For a computing system to be called by these pretty names, it is necessary for the system to have a labeled directed graph structure where nodes perform some simple computations. From elementary graph theory we recall that a "directed graph" consists of a set of "nodes" (vertices) and a set of "connections" (edges/links/arcs) connecting pairs of nodes. A graph is a "labeled graph" if each connection is associated with a label to identify some property of the connection.

In a neural network, each node performs some simple computations, and each connection conveys a signal from one node to another, labeled by a number called the "connection strength" or "weight" indicating the extent to which a signal is amplified or diminished by a connection. Not every such graph can be called a neural network, as illustrated in example 1.1 using a simple labeled directed graph that conducts an elementary computation.

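Such a labeled directed graph can be represented concretely. The following is a minimal Python sketch (our own illustration; names such as LabeledDigraph are not from the book) in which each connection carries a numeric weight label:

# Minimal labeled directed graph: nodes plus weight-labeled connections.
# Illustrative only; the names are not taken from the book.
class LabeledDigraph:
    def __init__(self):
        self.edges = {}  # maps (source, target) -> weight label

    def add_edge(self, source, target, weight):
        """Connect source -> target, labeled with a connection strength."""
        self.edges[(source, target)] = weight

    def weight(self, source, target):
        return self.edges[(source, target)]

# Two input nodes feeding one computing node, as in a tiny neural network.
g = LabeledDigraph()
g.add_edge("x1", "node", 1.0)  # signal from x1 passed through unamplified
g.add_edge("x2", "node", 0.5)  # signal from x2 diminished by half
print(g.weight("x2", "node"))  # prints 0.5
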
EXAMPLE 1.1 The "AND" of two binary inputs is an elementary logical operation, implemented in hardware using an "AND gate." If the inputs to the AND gate are x1 ∈ {0, 1} and x2 ∈ {0, 1}, the desired output is 1 if x1 = x2 = 1, and 0 otherwise. A graph representing this computation is shown in figure 1.1, with one node at which computation (multiplication) is carried out, two nodes that hold the inputs (x1, x2), and one node that holds the output. However, this graph cannot be considered a neural network since the connections between the nodes are fixed and appear to play no other role than carrying the inputs to the node that computes their conjunction.

[Figure 1.1: AND gate graph. Inputs x1, x2 ∈ {0, 1} feed a multiplier node whose output is o = x1 AND x2.]

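As a minimal sketch of the computation in figure 1.1, assuming (as the figure indicates) that the single computing node multiplies its two binary inputs:

def and_gate_node(x1, x2):
    """Computing node of figure 1.1: multiply the two binary inputs.
    For x1, x2 in {0, 1}, the product x1 * x2 equals x1 AND x2."""
    return x1 * x2

# The four input combinations reproduce the AND truth table.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", and_gate_node(x1, x2))
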
We may modify the graph in figure 1.1 to obtain a network containing weights (connection strengths), as shown in figure 1.2. Different choices for the weights result in different functions being evaluated by the network. Given a network whose weights are initially random, and given that we know the task to be accomplished by the network, a "learning algorithm" must be used to determine the values of the weights that will achieve the desired task. The graph structure, with connection weights modifiable using a learning algorithm, qualifies the computing system to be called an artificial neural network.

[Figure 1.2: AND gate network, in which the connections carry weights w1 and w2.]

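To see how different weight choices yield different functions, consider the following minimal sketch of the figure 1.2 network; for illustration we assume the node multiplies its two weighted inputs, which the excerpt does not spell out:

def network_output(x1, x2, w1, w2):
    """Figure 1.2 sketch: each input is scaled by its connection weight,
    and the node multiplies the two incoming signals (an assumption)."""
    return (w1 * x1) * (w2 * x2)

# With w1 = w2 = 1 the network reproduces AND on binary inputs; other
# weights scale the output, so the network evaluates a different function.
print(network_output(1, 1, 1.0, 1.0))  # 1.0 (AND behavior)
print(network_output(1, 1, 0.5, 1.0))  # 0.5 (a different function)
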
EXAMPLE 1.2 For the network shown in figure 1.2, the following is an example of a learning algorithm that will allow learning the AND function, starting from arbitrary values of w1 and w2. The trainer uses the following four examples to modify the weights: {(x1 = 1, x2 = 1, d = 1), (x1 = 0, x2 = 0, d = 0), (x1 = 1, x2 = 0, d = 0), (x1 = 0, x2 = 1, d = 0)}. An (x1, x2) pair is presented to the network, and the result o computed by the network is observed. If the value of o coincides with the desired result d, the weights are not changed. If the value of o is smaller than the desired result, w1 is increased by 0.1; and if the value of o is larger than the desired result, w1 is decreased by 0.1.

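For instance, the procedure can be traced with a minimal sketch such as the one below; we assume the figure 1.2 node computes the product of its weighted inputs and read "coincides" as agreement within a small tolerance, and the starting weights, tolerance, and iteration count are our own illustrative choices:

# Sketch of the example 1.2 trainer (our own illustration). Assumptions not
# fixed by the text: the node computes o = (w1 * x1) * (w2 * x2), and
# "coincides" means |o - d| falls below a small tolerance.
examples = [(1, 1, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0)]  # (x1, x2, d)
w1, w2 = 0.3, 0.8  # arbitrary starting weights
tolerance = 0.05   # illustrative reading of "coincides"

for _ in range(100):  # repeated presentations of the four training examples
    for x1, x2, d in examples:
        o = (w1 * x1) * (w2 * x2)  # result computed by the network
        if abs(o - d) <= tolerance:
            pass          # o coincides with d: weights are not changed
        elif o < d:
            w1 += 0.1     # o smaller than desired result: increase w1
        else:
            w1 -= 0.1     # o larger than desired result: decrease w1

print(w1, w2)  # w1 has grown until w1 * w2 is close to the desired 1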