
Fundamentals of Neural Network
Dr. Sheetal Mapare

Model of a Biological Neuron

• Dendrite: Receives signals from other neurons
• Soma: Processes the information
• Axon: Transmits the output of this neuron
• Synapse: Point of connection to other neurons
McCulloch-Pitts Neuron
It may be divided into 2 parts:
• The first part, g, takes the inputs and performs an aggregation.
• Based on the aggregated value, the second part, f, makes a decision.

Threshold (θ, theta):
• A fixed value that the weighted sum must reach or exceed for the neuron to "fire" (activate).
McCulloch-Pitts Neuron
• Input: x1 = 1, x2 = 1
• Weights: w1 = 0.7 (excitatory), w2 = −0.5 (inhibitory)
• Threshold (θ): 0.2
• The net input is:
  Net input = (x1·w1) + (x2·w2) = (1·0.7) + (1·(−0.5)) = 0.2
• Since the net input equals the threshold of 0.2, the neuron just meets the threshold and "fires."
McCulloch-Pitts Neuron
The model is inspired by biological neurons:
1. Inputs: Represent dendrites receiving signals.
2. Weights: Correspond to the strength of synaptic connections.
3. Summation: Mimics how a neuron integrates inputs.
4. Threshold: Reflects the firing threshold of a neuron.
5. Output: Indicates whether the neuron fires (sends a signal via its axon).

Artificial Neurons (McCulloch-Pitts or Modern Models)
In artificial neural networks, excitatory and inhibitory signals are modeled using weights:
1. Excitatory Weights:
   1. Positive weights (w > 0) amplify the input signal, increasing the likelihood of neuron activation.
   2. Example: If an input is 1 and the weight is +0.5, the contribution to the summation is +0.5, which adds positively to the net input.
2. Inhibitory Weights:
   1. Negative weights (w < 0) suppress the input signal, reducing the likelihood of neuron activation.
   2. Example: If an input is 1 and the weight is −0.5, the contribution to the summation is −0.5, which subtracts from the net input.
Boolean Functions Using M-P Neuron

This representation just denotes that, for the boolean inputs x_1, x_2 and x_3, if g(x), i.e. the sum, is ≥ θ (theta), the neuron will fire; otherwise, it won't.
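
As a concrete sketch (the weight and threshold values here are assumptions, since the slide's figures are not reproduced): with all weights fixed at +1, the same rule computes AND of the three inputs when θ = 3 and OR when θ = 1.

```python
# Boolean functions with an M-P neuron: fire when the sum of the inputs reaches theta.
# With unit weights, theta = 3 realises AND of three inputs and theta = 1 realises OR.
from itertools import product

def mp_fire(inputs, theta):
    return int(sum(inputs) >= theta)  # g(x) = plain sum, f = threshold test

for x in product([0, 1], repeat=3):
    print(x, "AND:", mp_fire(x, theta=3), "OR:", mp_fire(x, theta=1))
```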
NOT and NOR Gate
NAND Gate
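
The gate diagrams from the original slides are not reproduced here; the sketch below shows one common way to realise NOT, NOR and NAND under the same net-input ≥ θ rule, using −1 for inhibitory weights (these particular weight and threshold choices are an assumption, not taken from the slides).

```python
# NOT, NOR and NAND as McCulloch-Pitts neurons (one possible assignment):
#   NOT : w = -1,       theta = 0   -> fires only when the input is 0
#   NOR : w = (-1, -1), theta = 0   -> fires only when both inputs are 0
#   NAND: w = (-1, -1), theta = -1  -> fails to fire only when both inputs are 1
from itertools import product

def mp_gate(inputs, weights, theta):
    return int(sum(x * w for x, w in zip(inputs, weights)) >= theta)

print("NOT 0 =", mp_gate([0], [-1], 0), " NOT 1 =", mp_gate([1], [-1], 0))
for x in product([0, 1], repeat=2):
    print(x, "NOR:", mp_gate(x, [-1, -1], 0), "NAND:", mp_gate(x, [-1, -1], -1))
```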
Perceptron
• Definition of Perceptron
• The Perceptron is a type of artificial neuron introduced by Frank Rosenblatt in 1958. It is a simple supervised learning algorithm for binary classification tasks. A perceptron takes multiple input values, computes their weighted sum, applies an activation function (typically a step function), and produces an output.
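
A minimal sketch of that forward pass (the bias term and the specific numbers are assumptions; a bias b plays the same role as a threshold, with b = −θ):

```python
# Perceptron forward pass: weighted sum plus bias, followed by a step activation.
def step(z):
    return 1 if z >= 0 else 0

def perceptron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias  # weighted sum
    return step(z)                                          # binary output

# Example: weights and a bias that happen to implement the AND gate (assumed values).
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, weights=[1, 1], bias=-1.5))
```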
Working of a Perceptron
Example of AND Gate
Learning with Weight Adjustments

• Predicted output (y): 1
• Target output (t): 1
• Since y = t, the prediction is correct and no weight adjustment is made; weights change only when the predicted output differs from the target.
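
The standard perceptron learning rule adjusts each weight by w_i ← w_i + η(t − y)·x_i and the bias by η(t − y), so nothing changes when y = t, as above. Since the slide's own numbers are not reproduced here, the sketch below applies that rule to the AND-gate truth table with assumed starting values.

```python
# Perceptron training with the weight-adjustment rule w_i += eta * (t - y) * x_i.
# Starting weights, bias, learning rate and the AND-gate data are assumed values.
def step(z):
    return 1 if z >= 0 else 0

def train_and_gate(epochs=10, eta=1):
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
    w, b = [0, 0], 0
    for _ in range(epochs):
        for x, t in data:
            y = step(sum(xi * wi for xi, wi in zip(x, w)) + b)  # predicted output
            if y != t:                                          # adjust only on error
                w = [wi + eta * (t - y) * xi for xi, wi in zip(x, w)]
                b += eta * (t - y)
    return w, b

print(train_and_gate())  # ([2, 1], -3): weights and bias that reproduce the AND truth table
```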
Learning with Weight Adjustments
Solve: repeat the weight-adjustment steps for the case where the target output is t = 0.


Solution
