In unit seven, I gained a fundamental understanding of the multi-layer perceptron and how it differs from the perceptron. I was introduced to the operation of the back propagation algorithm, which I feel will require additional research before I understand its use cases and applications. I continued to dive deeper into the concept of artificial neurons and their operation, and I gained a better understanding of feed-forward network algorithms. I have a significant knowledge gap in implementing the weight-update rule of a perceptron and realize that practice is necessary to strengthen my fundamental understanding and help me build forward. I am building confidence towards a better understanding of the multi-layer perceptron and the back propagation algorithm. I participated…

It was interesting to research the differences between a perceptron and a feed-forward neural network. I would summarize the primary difference between training a perceptron and training a feed-forward neural network with back propagation as the way the parameter updates are performed. A perceptron adjusts its weights directly from the error it observes at its single output as it learns from each example. With back propagation, the error is propagated from the output layer back through the hidden layers toward the input layer, and the parameters of every layer are updated along the way, contrary to the direction in which information flows during prediction.

The full learning experience of this class was thought-provoking and exciting. Another exciting aspect is building on what I have learned. Additional practice is necessary to implement and design a perceptron, a multi-layer perceptron, and a back propagation algorithm, but the desire to learn more is invigorating. I must say that every discussion topic has been creative, insightful, and helpful on my learning path. The perspectives from my classmates are