Prepared by: Lama R. Khreiss

Advanced Quantitative Methods in Business – MGT 501
Neural Network Technique

Outline

* Overview
* Definition
* The Basics of Neural Networks
* Major Components of an Artificial Neuron
* Applications of Neural Networks
* Advantages and Disadvantages of Neural Networks
* Examples
* Conclusion

Overview

One of the most important subjects in management studies is finding more effective tools for complicated managerial problems, and with the advancement of computer and communication technology, the tools used in management decision making have undergone a massive change. Artificial Neural Networks (ANNs) are one such tool, and they have become a critical component of business intelligence. The article below describes the basics of neural networks as well as some of the work done on the application of ANNs in the management sciences.

Definition of a Neural Network
The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen, who defines a neural network as follows:
"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs" ("Neural Network Primer: Part I" by Maureen Caudill, AI Expert, Feb. 1989).

ANNs are processing devices (algorithms or actual hardware) that are loosely modeled after the neuronal structure of the mammalian cerebral cortex, but on much smaller scales. A large ANN might have hundreds or thousands of processor units, whereas a mammalian brain has billions of neurons, with a corresponding increase in the magnitude of their overall interaction and emergent behavior. Although ANN researchers are generally not concerned with whether their networks accurately resemble biological systems, some are; for example, researchers have accurately simulated the function of the retina and modeled the eye rather well.
Although the mathematics involved in neural networking is not trivial, a user can rather easily gain at least an operational understanding of their structure and function.
A neural network is an information processing system that is non-algorithmic, non-digital, and intensely parallel. It is not a computer, nor is it programmed like one. Instead, it consists of a number of very simple and highly interconnected processing elements called artificial neurons, which are analogs of the biological neural cells, or neurons. In the brain, neurons are connected by a large number of weighted links over which signals can pass. Each neuron typically receives many signals over its incoming connections; some of these incoming signals may arise from other neurons, and others come from the outside world. The neuron usually has many of these incoming signal connections; however, it never produces more than a single outgoing signal.
The Basics of Neural Networks
Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes' which contain an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is output.
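As a concrete illustration of this layered structure, the sketch below passes one input pattern through a tiny network with an input layer, one hidden layer, and an output layer. The layer sizes, random weights, and the choice of a sigmoid activation are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Minimal sketch of a layered network: input -> hidden -> output.
# Layer sizes and weights are arbitrary, for illustration only.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_outputs = 3, 4, 2
W_hidden = rng.normal(size=(n_hidden, n_inputs))   # input -> hidden connection weights
W_output = rng.normal(size=(n_outputs, n_hidden))  # hidden -> output connection weights

x = np.array([0.5, -1.2, 0.3])       # pattern presented to the input layer
hidden = sigmoid(W_hidden @ x)        # processing done in the hidden layer
output = sigmoid(W_output @ hidden)   # answer produced by the output layer
print(output)
```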

Most ANNs contain some form of 'learning rule' which modifies the weights of the connections according to the input patterns that the network is presented with. That is, ANNs learn by example, as do their biological counterparts; a child learns to recognize dogs from examples of dogs.
Although there are many different kinds of learning rules used by neural networks, this discussion is concerned with only one: the delta rule. The delta rule is often utilized by the most common class of ANNs, called 'backpropagational neural networks' (BPNNs). Backpropagation is an abbreviation for the backwards propagation of error.

Major Components of an Artificial Neuron
This section describes the seven major components which make up an artificial neuron. These components are valid whether the neuron is used for input, output, or is in one of the hidden layers.

Component 1: Weighting Factors: A neuron usually receives many simultaneous inputs. Each input has its own relative weight which gives the input the impact that it needs on the processing element's summation function. These weights perform the same type of function as the varying synaptic strengths of biological neurons. In both cases, some inputs are made more important than others in order to have a greater effect on the processing element as they combine to produce a neural response.
Weights are adaptive coefficients within the network that determine the intensity of the input signal as registered by the artificial neuron. They are a measure of an input's connection strength. These strengths can be modified in response to various training sets and according to a network's specific topology or through its learning rules.
Component 2: Summation Function: The first step in a processing element's operation is to compute the weighted sum of all of its inputs. Mathematically, the inputs and the corresponding weights are vectors which can be represented as (i1, i2 . . . in) and (w1, w2 . . . wn). The total input signal is the dot, or inner, product of these two vectors. This simplistic summation function is found by multiplying each component of the i vector by the corresponding component of the w vector and then adding up all the products: input1 = i1 * w1, input2 = i2 * w2, etc., are added as input1 + input2 + . . . + inputn. The result is a single number, not a multi-element vector.
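A minimal sketch of this summation step, assuming arbitrary example values for the input and weight vectors; the dot product collapses the element-wise products into a single number.

```python
import numpy as np

# Sketch of the summation function: the dot (inner) product of the input
# and weight vectors (values are arbitrary, for illustration).
i = np.array([0.2, 0.7, -0.4])   # inputs i1..in
w = np.array([0.5, -0.1, 0.9])   # weights w1..wn

total = np.dot(i, w)             # i1*w1 + i2*w2 + ... + in*wn, a single number
assert np.isclose(total, sum(ii * ww for ii, ww in zip(i, w)))
print(total)
```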
Geometrically, the inner product of the two vectors can be considered a measure of their similarity. If the vectors point in the same direction, the inner product is at its maximum; if the vectors point in opposite directions (180 degrees out of phase), the inner product is at its minimum.
The summation function can be more complex than just the simple input and weight sum of products. The input and weighting coefficients can be combined in many different ways before passing on to the transfer function. In addition to a simple product summing, the summation function can select the minimum, maximum, majority, product, or several normalizing algorithms. The specific algorithm for combining neural inputs is determined by the chosen network architecture and paradigm.
Some summation functions have an additional process applied to the result before it is passed on to the transfer function. This process is sometimes called the activation function. The purpose of utilizing an activation function is to allow the summation output to vary with respect to time. Activation functions currently are pretty much confined to research. Most of the current network implementations use an "identity" activation function, which is equivalent to not having one. Moreover, such a function is likely to be a component of the network as a whole rather than of each individual processing element component.

Component 3: Transfer Function: The result of the summation function, which is almost always the weighted sum, is transformed to a working output through an algorithmic process known as the transfer function. In the transfer function, the summation total can be compared with some threshold to determine the neural output. If the sum is greater than the threshold value, the processing element generates a signal. If the sum of the input and weight products is less than the threshold, no signal (or some inhibitory signal) is generated. Both types of response are significant.
The threshold, or transfer, function is generally non-linear. Linear (straight-line) functions are limited because the output is simply proportional to the input, which makes them of little practical use; this was the problem in the earliest network models, as noted in Minsky and Papert's book Perceptrons.
The transfer function could be something as simple as depending upon whether the result of the summation function is positive or negative. The network could output zero and one, one and minus one, or other numeric combinations. The transfer function would then be a "hard limiter" or step function.
Another type of transfer function, the threshold or ramping function could mirror the input within a given range and still act as a hard limiter outside that range. It is a linear function that has been clipped to minimum and maximum values, making it non-linear. Yet another option would be a sigmoid or S-shaped curve. That curve approaches a minimum and maximum value at the asymptotes. It is common for this curve to be called a sigmoid when it ranges between 0 and 1, and a hyperbolic tangent when it ranges between -1 and 1. Mathematically, the exciting feature of these curves is that both the function and its derivatives are continuous. This option works fairly well and is often the transfer function of choice. Other transfer functions are dedicated to specific network architectures and shall be discussed later in the article.
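The sketch below implements the transfer functions described above (hard limiter, ramping function, sigmoid, and hyperbolic tangent); the threshold and clipping bounds are illustrative assumptions.

```python
import numpy as np

# Sketches of the transfer functions mentioned above.

def hard_limiter(s, threshold=0.0):
    """Step function: 1 if the summation exceeds the threshold, else 0."""
    return np.where(s > threshold, 1.0, 0.0)

def ramp(s, lo=0.0, hi=1.0):
    """Linear within [lo, hi], clipped (hard-limited) outside that range."""
    return np.clip(s, lo, hi)

def sigmoid(s):
    """S-shaped curve ranging between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-s))

def tanh(s):
    """S-shaped curve ranging between -1 and 1."""
    return np.tanh(s)

s = np.linspace(-3, 3, 7)  # example summation results
for f in (hard_limiter, ramp, sigmoid, tanh):
    print(f.__name__, np.round(f(s), 3))
```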
Prior to applying the transfer function, uniformly distributed random noise may be added. The source and amount of this noise are determined by the learning mode of a given network paradigm. This noise is normally referred to as the "temperature" of the artificial neurons. The name, temperature, is derived from the physical phenomenon that as a person becomes too hot or too cold, their thinking ability is affected. Electronically, this process is simulated by adding noise. Indeed, by adding different levels of noise to the summation result, more brain-like transfer functions are realized. To more closely mimic nature's characteristics, some experimenters use a Gaussian noise source. Gaussian noise is similar to uniformly distributed noise except that the distribution of random numbers within the temperature range follows a bell curve. The use of temperature is an ongoing research area and is not yet applied to many engineering applications.
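A small sketch of adding "temperature" noise to a summation result before the transfer function, showing both uniform and Gaussian variants; the temperature value and summation result are arbitrary illustrations.

```python
import numpy as np

# Sketch of adding "temperature" noise to the summation result before the
# transfer function. The temperature and summation values are illustrative.
rng = np.random.default_rng(0)
temperature = 0.2
summation = 0.8

noisy_uniform = summation + rng.uniform(-temperature, temperature)   # uniform noise
noisy_gaussian = summation + rng.normal(0.0, temperature)            # bell-curve noise
print(noisy_uniform, noisy_gaussian)
```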
NASA announced a network topology which uses what it calls a temperature coefficient in a new feed-forward, back-propagation learning function. This temperature coefficient is a global term which is applied to the gain of the transfer function, and is different from the more common use of temperature, which is simply noise added to individual neurons. In contrast, the global temperature coefficient allows the transfer function to have a learning variable much like the synaptic input weights. This concept is claimed to create a network which has a significantly faster learning rate (by several orders of magnitude) and provides more accurate results than other feed-forward, back-propagation networks.
Component 4: Scaling and Limiting: After a processing element's transfer function, the result can pass through additional processes that scale and limit it. This scaling simply multiplies the transfer value by a scale factor and then adds an offset. Limiting is the mechanism which ensures that the scaled result does not exceed an upper or lower bound. This limiting is in addition to the hard limits that the original transfer function may have performed.
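A minimal sketch of this scaling and limiting step, assuming illustrative values for the scale factor, offset, and bounds.

```python
# Sketch of scaling and limiting: multiply the transfer value by a scale
# factor, add an offset, then clip to bounds (values are illustrative).
def scale_and_limit(transfer_value, scale=2.0, offset=0.5, lower=0.0, upper=1.0):
    scaled = scale * transfer_value + offset
    return min(max(scaled, lower), upper)   # limiting to [lower, upper]

print(scale_and_limit(0.4))   # 2*0.4 + 0.5 = 1.3, limited to the upper bound 1.0
```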
This type of scaling and limiting is mainly used in topologies to test biological neuron models, such as James Anderson's brain-state-in-the-box.
Component 5: Output Function (Competition): Each processing element is allowed one output signal, which it may send to hundreds of other neurons. This is just like the biological neuron, where there are many inputs and only one output action. Normally, the output is directly equivalent to the transfer function's result. Some network topologies, however, modify the transfer result to incorporate competition among neighboring processing elements. Neurons are allowed to compete with each other, inhibiting processing elements unless they have great strength. Competition can occur at one or both of two levels. First, competition determines which artificial neuron will be active, or provide an output. Second, competitive inputs help determine which processing element will participate in the learning or adaptation process.
Component 6: Error Function and Back-Propagated Value: In most learning networks, the difference between the current output and the desired output is calculated. This raw error is then transformed by the error function to match the particular network architecture. The most basic architectures use this error directly, but some square the error while retaining its sign, some cube the error, and other paradigms modify the raw error to fit their specific purposes. The artificial neuron's error is then typically propagated into the learning function of another processing element. This error term is sometimes called the current error.
The current error is typically propagated backwards to a previous layer. Yet, this back-propagated value can be either the current error, the current error scaled in some manner (often by the derivative of the transfer function), or some other desired output depending on the network type. Normally, this back-propagated value, after being scaled by the learning function, is multiplied against each of the incoming connection weights to modify them before the next learning cycle.
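The sketch below shows one version of this idea for a single sigmoid output neuron trained with the delta rule: the raw error is scaled by the derivative of the transfer function and used to modify each incoming connection weight. The learning rate, inputs, and initial weights are illustrative assumptions.

```python
import numpy as np

# Sketch of the back-propagated error and weight update for one sigmoid
# output neuron trained with the delta rule (values are illustrative).

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

rng = np.random.default_rng(1)
w = rng.normal(size=3)            # incoming connection weights
x = np.array([0.2, 0.9, -0.5])    # inputs for one training example
desired = 1.0                     # desired output
learning_rate = 0.1

output = sigmoid(np.dot(x, w))             # current output
raw_error = desired - output               # raw error used by the error function
delta = raw_error * output * (1 - output)  # error scaled by the transfer derivative
w += learning_rate * delta * x             # modify each incoming connection weight
print(raw_error, w)
```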

Component 7: Learning Function: The purpose of the learning function is to modify the variable connection weights on the inputs of each processing element according to some neural-based algorithm. This process of changing the weights of the input connections to achieve some desired result can also be called the adaptation function, as well as the learning mode. There are two types of learning: supervised and unsupervised. Supervised learning requires a teacher. The teacher may be a training set of data or an observer who grades the performance of the network's results. Either way, having a teacher is learning by reinforcement. When there is no external teacher, the system must organize itself by some internal criteria designed into the network. This is learning by doing.

Applications of Neural Networks
Neural networks are universal approximators, and they work best if the system being modeled has a high tolerance for error. One would therefore not be advised to use a neural network to balance one's cheque book! However, they work very well when:
* capturing associations or discovering regularities within a set of patterns;
* the volume, number of variables, or diversity of the data is very great;
* the relationships between variables are vaguely understood; or
* the relationships are difficult to describe adequately with conventional approaches.
Applications of Artificial Neural Networks:
Artificial neural networks are applied in different areas of the business sciences, such as marketing, finance, manufacturing, and strategic management.
ANNs can be applied to many marketing decision-making problems which previously could be handled only by multivariate statistical analysis. Typical problems include market segmentation tasks, classification of consumer spending patterns, new product analysis, identification of customer characteristics, sales forecasts, targeted marketing, and modeling the relationship between market orientation and performance.
A critical issue for research in the field of marketing is the lack of applications to individual-level data, which is a problem encountered in ANN applications.
ANN application in finance: ANNs are now frequently used in many modeling and forecasting problems. The main advantage of this tool is its ability to approximate almost any nonlinear function arbitrarily closely. An ANN can provide a better fit than parametric linear models; on the other hand, it is difficult to interpret the meaning of its parameters. The essential topics in finance are forecasting changes in the value of financial assets such as stocks, indexes, and currencies, and the analysis of financial statements.
ANN application in manufacturing and production:
ANNs can be used in forecasting (production costs, delivery dates, etc.), quality control, and optimization, which are the predominant production problems. Since quality control problems correspond to classification tasks, the appropriateness of ANN approaches is expected to be as good as in the fields of marketing and finance.
ANN application in strategic planning and business policy: Empirical research on strategic planning systems has focused on two areas: the impact of strategic planning on firm performance, and the role of strategic planning in strategic decision making.

Table 1: Reported applications in the field of marketing and sales

Business area | Problem type
Marketing and sales | Forecasting customer response; sales forecasting; target marketing; market segmentation; brand analysis; storage layout; customer gender analysis; marketing data mining; new product acceptance research; consumer choice prediction; market share forecasting

Table 2: Reported applications in the field of finance and accounting

Business area | Problem type
Finance and accounting | Financial health prediction; bankruptcy classification; analytical review process; credit scoring; risk assessment; forecasting; bond rating; mutual fund selection; credit evaluation

Table 3: Reported applications in the field of manufacturing and production

Business area | Problem type
Manufacturing and production | Engineering design; quality control; storage design; inventory control; supply chain management; demand forecasting; process selection

Table 4: Reported applications in the field of strategic management and business policy

Business area | Problem type
Strategic management and business policy | Strategic performance and planning; assessing decision making; evaluating strategies

Advantages and Disadvantages of Neural Networks:
The advantage of neural networks over conventional programming lies in their ability to solve problems that do not have an algorithmic solution, or for which the available solution is too complex to be found. Neural networks are well suited to problems that people are good at solving, such as prediction and pattern recognition. They have been applied within the medical domain for clinical diagnosis, image analysis and interpretation, signal analysis and interpretation, and drug development; such applications often lie in more than one category (e.g. diagnosis and image interpretation, or diagnosis and signal interpretation). Depending on the nature of the application and the strength of the internal data patterns, you can generally expect a network to train quite well. This applies even to problems where the relationships are quite dynamic or non-linear. ANNs provide an analytical alternative to conventional techniques, which are often limited by strict assumptions of normality, linearity, variable independence, and so on. Because an ANN can capture many kinds of relationships, it allows the user to quickly and relatively easily model phenomena which otherwise may have been very difficult or impossible to explain.

Neural networks almost always underperform approaches that have stronger statistical foundations, and tend to be more difficult to work with, for the following reasons:

1. Neural networks are not magic hammers. Neural networks are still viewed by many as magic hammers that can solve any machine learning problem, and as a result people tend to apply them indiscriminately to problems for which they are not well suited. Although neural networks do have a proven track record of success for certain specific problems, as a consumer of machine learning technology you are almost always better off using approaches that have stronger theoretical underpinnings. Rather than just throwing a general-purpose neural network at your problem and hoping for the best, try to understand and simplify your problem as well as possible, and look for techniques whose structure fits the task well.

2. Neural networks are too much of a black box. This makes them difficult to train: the training outcome can be nondeterministic and depend crucially on the choice of initial parameters, e.g. the starting point for gradient descent when training backpropagation networks, and their opaque nature makes it very hard to determine how they are solving a problem. They are difficult to troubleshoot when they don't work, and when they do work, you never really feel confident that they will generalize well to data not included in your training set because, fundamentally, you don't understand what your network is doing.

3. Neural networks are not probabilistic. For the most part, NNs have few if any probabilistic underpinnings, unlike their more statistical or Bayesian counterparts in the broader field of machine learning. It is incredibly useful to know how confident your classifier is about its answers, because that information allows you to better manage the cost of making errors. A neural network might give you a continuous number as its output (e.g. a score), but translating that into a probability is hard. Approaches with stronger theoretical foundations tend to give you those probabilities directly.

4. Neural networks are not a substitute for understanding the problem. Rather than using a neural net, you are almost always better off investing a little extra time studying, analyzing, and dissecting the data first, then choosing a better-grounded technique that you know will work well for the problem. For example, if you are building a classifier, rather than just throwing a neural network at your data and hoping for the best, spend time visualizing the dataset and selecting or creating the best input features using whatever domain-specific knowledge and expertise is available to you. You might discover that you can clearly differentiate between your training classes using just three of your original ten features, or that you need to develop a more sophisticated nonlinear derived feature to better separate your classes. Ultimately, for example, you might end up using a Gaussian mixture model (GMM) to directly model the density function of your classes, which allows you to use Bayes' theorem to deduce class probabilities and gives you a far better classifier than you could ever build using a neural network (a minimal sketch follows below).
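As an illustration of the alternative mentioned in point 4, the sketch below fits one Gaussian mixture model per class with scikit-learn and applies Bayes' theorem to obtain class probabilities. The data, class priors, and component counts are invented for illustration; this is only one possible way to set up such a classifier.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: model each class density with a GMM, then use Bayes' theorem to
# turn class-conditional likelihoods into class probabilities.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=1.0, size=(100, 3))  # illustrative training data, class A
class_b = rng.normal(loc=2.0, scale=1.0, size=(100, 3))  # illustrative training data, class B

gmm_a = GaussianMixture(n_components=2, random_state=0).fit(class_a)
gmm_b = GaussianMixture(n_components=2, random_state=0).fit(class_b)
prior_a, prior_b = 0.5, 0.5                               # assumed equal class priors

x = np.array([[1.0, 0.5, 1.5]])                           # new sample to classify
like_a = np.exp(gmm_a.score_samples(x)) * prior_a         # p(x|A) * p(A)
like_b = np.exp(gmm_b.score_samples(x)) * prior_b         # p(x|B) * p(B)
p_a = like_a / (like_a + like_b)                          # Bayes' theorem: p(A|x)
print("P(class A | x) =", p_a)
```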

Advantages:
· A neural network can perform tasks that a linear program cannot.
· When an element of the neural network fails, it can continue without any problem because of its parallel nature.
· A neural network learns and does not need to be reprogrammed.
· It can be implemented in any application without any problem.
Disadvantages:
· The neural network needs training to operate.
· The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.
· Large neural networks require high processing time.

Examples:

* Fraudulent credit card detection (VISA)
* Speech recognition and text-to-speech (NetTalk)
* Share price prediction (time series analysis)
* Image compression

Conclusion:
Studying artificial neural networks shows that as technology develops day by day, the need for artificial intelligence grows, largely because of parallel processing. Parallel processing is increasingly needed because it saves time and money in any work involving computers and robots. Looking to future work, more algorithms and other problem-solving techniques should be developed so that the limitations of artificial neural networks can be removed. In addition, artificial neural networks learn to predict future events based on the patterns observed in historical training data, and learn to classify unseen data into predefined groups based on characteristics observed in the training data. Artificial neural networks have applications in management, marketing, finance, and manufacturing and production; a properly designed ANN is an interactive, software-based system intended to help decision makers compile useful information from raw data, documents, personal knowledge, and/or business models in order to identify and solve problems and make decisions.
