...www.elsevier.com/locate/atoures Annals of Tourism Research, Vol. 32, No. 1, pp. 93–111, 2005 © 2005 Elsevier Ltd. All rights reserved. Printed in Great Britain 0160-7383/$30.00 doi:10.1016/j.annals.2004.05.001 MARKET SEGMENTATION: A Neural Network Application. Jonathan Z. Bloom, University of Stellenbosch, South Africa. Abstract: The objective of the research is to consider a self-organizing neural network for segmenting the international tourist market to Cape Town, South Africa. A backpropagation neural network is used to complement the segmentation by generating additional knowledge based on input–output relationship and sensitivity analyses. The findings of the self-organizing neural network indicate three clusters, which are visually confirmed by developing a comparative model based on the test data set. The research also demonstrated that Cape Metropolitan Tourism could deploy the neural network models and track the changing behavior of tourists within and between segments. Marketing implications for the Cape are also highlighted. Keywords: segmentation, SOM neural network, input–output analysis, sensitivity analysis, deployment. © 2005 Elsevier Ltd. All rights reserved. Résumé [French abstract, translated]: Market segmentation: a neural network application. The aim of the research is to consider a self-organizing neural network for segmenting the international tourist market to Cape Town, South Africa. A backpropagation neural network is used to...
Words: 7968 - Pages: 32
... 2011, p. 282, para. 2) One important issue with beer is flavor. Typically, the flavor is determined by test panels, and these tests are usually time-consuming. Coors wants to understand the chemical composition of flavors; knowing it would open doors that have not been opened yet. “The relationship between chemical analysis and beer flavor is not clearly understood yet” (Turban, Sharda, & Delen, 2011, p. 282, para. 3). There is data on sensory analysis and on chemical composition, and Coors needs a way to link the two together. The answer was neural networks. Neural Networks The simplest definition of a neural network, more often referred to as an ‘artificial neural network’ (ANN), is given by Dr. Robert Hecht-Nielsen as a “computer system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs” (A Basic Introduction To Neural Networks, n.d.). The network is built from interconnected layers of such elements, with many inputs feeding into an output layer; through an operation called training, one can feed many inputs (variables) into the system to reach the desired outcome. However, to reach the desired outcome, the inputs have to be presented over and over again while changing...
Words: 941 - Pages: 4
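A minimal sketch of the training idea described in the excerpt above: inputs are presented over and over while the connection weights are adjusted a little each pass, until the outputs approach the desired outcome. The network size, toy data, and learning rate are illustrative assumptions, not Coors' actual model.

```python
# Minimal feedforward network trained by repeated presentation of the data.
# Everything here (data, layer sizes, learning rate) is a made-up illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 "chemical measurements" in, 1 "flavor score" out (synthetic).
X = rng.normal(size=(100, 4))
y = (0.5 * X[:, 0] - 0.3 * X[:, 2] + 0.1 * rng.normal(size=100)).reshape(-1, 1)

# One hidden layer of 8 units with tanh activation, linear output.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.01

for epoch in range(500):             # present the data over and over again
    h = np.tanh(X @ W1 + b1)          # hidden-layer activations
    z = h @ W2 + b2                   # network output
    err = z - y                       # difference from desired outcome
    # Backpropagate the error and nudge every weight a little.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final mean squared error:", float((err ** 2).mean()))
```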
...Stereoscopic Building Reconstruction Using High-Resolution Satellite Image Data Anonymous submission Abstract—This paper presents a novel approach for the generation of 3D building models from satellite image data. The main idea of the 3D modeling is based on the grouping of 3D line segments. The divergence-based centroid neural network is employed in the grouping process. Prior to the grouping process, 3D line segments are extracted with the aid of the elevation information obtained by area-based stereo matching of satellite image data. High-resolution IKONOS stereo images are used for the experiments. The experimental results demonstrate the applicability and efficiency of the approach for 3D building modeling from high-resolution satellite imagery. Index Terms—building model, satellite image, 3D modeling, line segment, stereo I. INTRODUCTION Extraction of 3D building models is one of the important problems in the generation of an urban model. The process aims to detect and describe the 3D rooftop model from a complex scene of satellite imagery. The automated extraction of the 3D rooftop model can be considered an essential process in 3D modeling of urban areas. There has been a significant body of research in 3D reconstruction from high-resolution satellite imagery. Even though natural terrain can be successfully reconstructed in a precise manner by using correlation-based stereoscopic processing of satellite images [1], 3D building reconstruction...
Words: 2888 - Pages: 12
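The divergence-based centroid neural network is a specific algorithm from the paper above; as a rough stand-in, the sketch below groups 3D line segments by clustering feature vectors built from their midpoints and unit directions using plain k-means-style centroid updates. The feature choice and the number of groups are assumptions made only for illustration.

```python
# Simplified stand-in for the grouping step: cluster 3D line segments by
# midpoint and direction with centroid (k-means-style) updates.
import numpy as np

def segment_features(segments):
    """segments: (N, 2, 3) array of 3D endpoints -> (N, 6) feature vectors."""
    p0, p1 = segments[:, 0, :], segments[:, 1, :]
    mid = (p0 + p1) / 2.0
    direction = p1 - p0
    direction /= np.linalg.norm(direction, axis=1, keepdims=True) + 1e-12
    return np.hstack([mid, direction])

def group_segments(segments, k=3, iters=50, seed=0):
    feats = segment_features(segments)
    rng = np.random.default_rng(seed)
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each segment to its nearest centroid, then recompute centroids.
        d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = feats[labels == j].mean(axis=0)
    return labels

# Usage with random stand-in segments (real input would come from stereo matching).
segs = np.random.default_rng(1).normal(size=(30, 2, 3))
print(group_segments(segs, k=3))
```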
...stronger trend. In this paper we investigate the use of the Hurst exponent to classify series of financial data representing different periods of time. Experiments with backpropagation neural networks show that series with a large Hurst exponent can be predicted more accurately than series with an H value close to 0.50. Thus the Hurst exponent provides a measure of predictability. KEY WORDS Hurst exponent, time series analysis, neural networks, Monte Carlo simulation, forecasting In time series forecasting, the first question we want to answer is whether the time series under study is predictable. If the time series is random, all methods are expected to fail. We want to identify and study those time series having at least some degree of predictability. We know that a time series with a large Hurst exponent has a strong trend, so it is natural to believe that such time series are more predictable than those having a Hurst exponent close to 0.5. In this paper we use neural networks to test this hypothesis. Neural networks are nonparametric universal function approximators [9] that can learn from data without assumptions. Neural network forecasting models have been widely used in financial time series analysis during the last decade [10],[11],[12]. As universal function approximators, neural networks can be used as a surrogate measure of predictability: under the same conditions, a time series with a smaller forecasting error than another is said to be more predictable. We study the Dow-Jones...
Words: 1864 - Pages: 8
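A rough sketch of one common way to estimate the Hurst exponent, rescaled-range (R/S) analysis: split the series into windows, compute the range of the cumulative deviations divided by the standard deviation in each window, and take the slope of log(R/S) against log(window size). The paper does not spell out its estimator, so the window scheme and details below are assumptions.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def hurst_rs(series, min_window=8):
    series = np.asarray(series, dtype=float)
    n = len(series)
    window_sizes, rs_values = [], []
    w = min_window
    while w <= n // 2:
        rs_per_window = []
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            dev = chunk - chunk.mean()
            z = np.cumsum(dev)                 # cumulative deviation profile
            r = z.max() - z.min()              # range of the profile
            s = chunk.std(ddof=0)              # standard deviation of the chunk
            if s > 0:
                rs_per_window.append(r / s)
        if rs_per_window:
            window_sizes.append(w)
            rs_values.append(np.mean(rs_per_window))
        w *= 2
    # Slope of log(R/S) against log(window size) estimates H.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(0)
# Uncorrelated noise should give roughly H ~ 0.5 (small-sample bias can push it a bit higher).
print("H of white noise:", round(hurst_rs(rng.normal(size=4096)), 2))
```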
...International Review of Business Research Papers Vol. 2, No. 4, December 2006, Pp. 39-50 eBusiness-Process-Personalization using Neuro-Fuzzy Adaptive Control for Interactive Systems Zunaira Munir, Nie Gui Hua, Adeel Talib and Mudassir Ilyas ‘Personalization’, which was earlier recognized as the fifth ‘P’ of e-marketing, is now becoming a strategic success factor in the present customer-centric e-business environment. This paper proposes two changes in the current structure of personalization efforts in e-businesses: first, a move towards business-process personalization instead of only website-content personalization, and second, the use of an interactive adaptive scheme instead of the commonly employed algorithmic filtering approaches. These can be achieved by applying a neuro-intelligence model to web-based real-time interactive systems and by integrating it with converging internal and external e-business processes. This paper presents a framework showing how it is possible to personalize e-business processes by adapting the interactive system to customer preferences. The proposed approach applies the Neuro-Fuzzy Adaptive Control for Interactive Systems (NFACIS) model to converging business processes to get the desired results. Field of Research: Marketing, e-business 1. Introduction: As Kasanoff (2001) mentioned, the ability to treat different people differently is the most fundamental form of human intelligence. "You talk differently to your boss than to...
Words: 4114 - Pages: 17
...MIDTERM: CS 6375 INSTRUCTOR: VIBHAV GOGATE October 23, 2013 The exam is closed book. You are allowed a one-page cheat sheet. Answer the questions in the spaces provided on the question sheets. If you run out of room for an answer, use an additional sheet (available from the instructor) and staple it to your exam. • NAME • UTD-ID if known • SECTION 1: • SECTION 2: • SECTION 3: • SECTION 4: • SECTION 5: • Out of 90: SECTION 1: SHORT QUESTIONS (15 points) 1. (3 points) The Naive Bayes classifier uses the maximum a posteriori or the MAP decision rule for classification. True or False. Explain. Solution: True. The decision rule for the Naive Bayes classifier is $\arg\max_{y} P(Y = y) \prod_{i} P(X_i \mid Y = y)$. One can think of $P(Y = y)$ as the prior distribution and $P(X_i \mid Y = y)$ as the data likelihood. Note that when we do the learning, we are using the MLE approach. The decision rule uses MAP inference, but the learning algorithm uses the MLE approach. Make sure you understand what this distinction means. 2. (6 points) Let θ be the probability that “Thumbtack 1” (we will abbreviate it as T1) shows heads and 2θ be the probability that “Thumbtack 2” (we will abbreviate it as T2) shows heads. You are given the following dataset (6 examples): T1 Tails, T2 Heads, T1 Tails, T1 Tails, T2 Heads, T2 ...
Words: 2270 - Pages: 10
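For the thumbtack question, a sketch of how the maximum-likelihood setup would be written with symbolic counts, since the data listing above is truncated; here $h_1, t_1$ denote the heads/tails counts for T1 and $h_2, t_2$ those for T2.

```latex
% MLE setup for the thumbtack problem with symbolic counts (illustrative sketch).
\begin{align*}
L(\theta) &= \theta^{h_1}(1-\theta)^{t_1}\,(2\theta)^{h_2}(1-2\theta)^{t_2},\\
\ell(\theta) &= h_1\ln\theta + t_1\ln(1-\theta) + h_2\ln(2\theta) + t_2\ln(1-2\theta),\\
\frac{d\ell}{d\theta} &= \frac{h_1 + h_2}{\theta} - \frac{t_1}{1-\theta} - \frac{2t_2}{1-2\theta} = 0,
\qquad \hat{\theta}_{\mathrm{MLE}} \text{ is the root of this equation in } (0, \tfrac{1}{2}).
\end{align*}
```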
...CHAPTER 2 Decision Making and Business Processes. Why Do I Need To Know This? LEARNING OUTCOMES (2.1–2.5): Explain the difference between transactional data and analytical information, and between OLTP and OLAP. Define TPS, DSS, and EIS, and explain how organizations use these types of information systems to make decisions. Understand what AI is and the four types of artificial intelligence systems used by organizations today. Describe how AI differs from TPS, DSS, and EIS. Describe the importance of business process improvement, business process reengineering, business process modelling, and business process management to an organization and how information systems can help in these areas. This chapter describes various types of business information systems found across the enterprise that are used to run basic business processes and to facilitate sound and proper decision making. Using information systems to improve decision making and re-engineer business processes can significantly help organizations become more efficient and effective. As a business student, you can gain valuable insight into an organization by understanding the types of information systems that exist in and across enterprises. When you understand how to use these systems to improve business processes and decision making, you can vastly improve organizational performance. After reading this chapter, you should have gained an appreciation of the various kinds of information systems employed...
Words: 16302 - Pages: 66
...EXECUTIVE SUMMARY INTRODUCTION/BACKGROUND The objective of the thesis is to predict and optimize the mechanical properties of aircraft fuselage aluminium (AA5083). Firstly, data-driven modelling techniques such as artificial neuro-fuzzy networks and regression analysis are used, and the FIS membership function parameters are trained by making effective use of experimental data. At the core, mathematical models are obtained that functionally relate tool rotational speed and forward movement per revolution to yield strength, ultimate tensile strength and weld quality. Simulations are also performed, and the actual values are compared with the predicted values. Finally, multi-objective optimization of the mechanical properties of the fuselage aluminium was undertaken using a genetic algorithm to improve the performance of the tools industrially. AIMS AND OBJECTIVES The objectives of the dissertation include: understanding the basic principles of operation of friction stir welding (FSW); gaining experience in modelling and regression analysis; gaining expertise in MATLAB programming; identifying the best strategy to achieve the desired yield strength, ultimate tensile strength and weld quality in friction stir welding; performing optimization of the mechanical properties of FSW using a genetic algorithm; and drawing conclusions on the prediction and optimization of the mechanical properties of FSW for aircraft fuselage aluminium. ACHIEVEMENTS The basic principles of friction welding of the welding...
Words: 9686 - Pages: 39
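A simplified sketch of the optimization step described above: a genetic algorithm searching over the two FSW process parameters (tool rotational speed and feed per revolution) against a surrogate model of the mechanical properties. The surrogate, parameter bounds, and objective weights below are made-up placeholders for illustration; the thesis trains its own neuro-fuzzy and regression models and runs its GA in MATLAB.

```python
# Toy genetic algorithm over two FSW process parameters against a placeholder surrogate.
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[600.0, 1600.0],   # rotational speed (rpm), assumed range
                   [0.05, 0.40]])     # feed per revolution (mm/rev), assumed range

def surrogate(params):
    """Placeholder for predicted (yield strength, UTS, weld quality)."""
    speed, feed = params
    ys = 250 - 0.02 * (speed - 1100) ** 2 / 100 - 80 * (feed - 0.2) ** 2
    return ys, ys * 1.15, 1.0 - 2.0 * abs(feed - 0.2)

def fitness(params):
    ys, uts, quality = surrogate(params)
    return 0.4 * ys + 0.4 * uts + 0.2 * 100 * quality   # weighted sum of objectives

def run_ga(pop_size=40, generations=60, mutation_rate=0.2):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        # Tournament selection of parents.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Blend crossover between pairs of selected parents.
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        # Gaussian mutation, clipped back into the parameter bounds.
        mutate = rng.uniform(size=(pop_size, 1)) < mutation_rate
        children += mutate * rng.normal(scale=0.05, size=(pop_size, 2)) * \
                    (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])
    return pop[np.argmax([fitness(p) for p in pop])]

print("best (speed, feed) found:", run_ga())
```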
...\subsection{KNN classifier} Cover and Hart \cite{cover1967nearest} introduced the idea of nearest neighbor pattern classification. In the pattern recognition field, KNN is one of the most widely used non-parametric classifiers for supervised learning \cite{murphy2012machine,kaghyan2012activity}. % ,murthy2015ann,suguna2010improved}. KNN searches for the K points in the training set that are closest to the test input x and assigns it the class with the highest probability among those neighbors \cite{murphy2012machine}. Given a set of N labeled instances $\{(x_i, y_i)\}_{i=1}^{N}$ with predefined (classified) class labels, the classifier's task is to predict the class label of an unknown (unclassified) query vector $x_0$ \cite{song2007iknn}. KNN takes a majority vote among the K nearest neighbors of $x_0$ to decide its class label. To classify a new object, KNN does not fit an explicit model; it relies on memory (the attributes of the training samples). The classification criteria are derived from the training samples themselves, without any additional data. KNN computes a prediction from the classes in the neighborhood to classify the new query instance. %the decision rule is to assign an unclassified sample %point to the classification of the nearest of a collection of predetermined classified %points. Various ways exist to compute the distance between two points in multidimensional space. Suppose we have two n-dimensional vectors x and y, such that $x=\{x_1...
Words: 642 - Pages: 3
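A minimal k-nearest-neighbour classifier along the lines described above: Euclidean distance to every stored training point, then a majority vote among the K closest. The toy data and the value of K are illustrative assumptions.

```python
# Minimal KNN classifier: Euclidean distance + majority vote.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # Distance from the query to every stored training point.
    distances = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(distances)[:k]           # indices of the K closest points
    votes = Counter(y_train[i] for i in nearest)  # majority vote over their labels
    return votes.most_common(1)[0][0]

# Toy usage: two well-separated 2-D classes.
X_train = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                    [1.0, 1.1], [1.2, 0.9], [0.9, 1.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.15, 0.1])))  # expected: 0
print(knn_predict(X_train, y_train, np.array([1.05, 1.0])))  # expected: 1
```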
...boisestate.edu/update/files/2013/08/Memritor620x320.jpg) Today’s computing chips are incredibly complex and contain billions of nano-scale transistors, allowing for fast, high-performance computers, pocket-sized smartphones that far outpace early desktop computers, and an explosion in handheld tablets. Despite their ability to perform thousands of tasks in the blink of an eye, none of these devices even come close to rivaling the computing capabilities of the human brain. At least not yet. But a Boise State University research team could soon change that. Electrical and computer engineering faculty Elisa Barney Smith, Kris Campbell and Vishal Saxena are joining forces on a project titled “CIF: Small: Realizing Chip-scale Bio-inspired Spiking Neural Networks with Monolithically Integrated Nano-scale Memristors.” (http://news.boisestate.edu/update/files/2013/08/PCB_image.png) Team members are experts in machine learning (artificial intelligence), integrated circuit design and memristor devices. Funded by a three-year, $500,000 National Science Foundation grant, they have taken on the challenge of developing a new kind of computing architecture that works more like a brain than a traditional digital computer. “By mimicking the brain’s billions of interconnections and pattern recognition capabilities, we may ultimately...
Words: 780 - Pages: 4
...these tests take time; to achieve customer satisfaction, Coors has to understand beer flavor in terms of chemical composition. This may help satisfy customer taste and may help increase the profitability of the firm. People are ready to accept new products in the market if they come at a reasonable and affordable price with good quality. Question 2 What is the objective of the neural network used at Coors? A neural network was used to create a link between chemical composition and sensory analysis. The neural network was designed to learn the relationship between the inputs and outputs. The network consists of an MLP architecture with two hidden layers. Data were normalized within the network, enabling comparison between the results for the various sensory outputs. Question 3 Why were the results of Coors' neural network initially poor, and what was done to improve the results? The results were poor because of two factors. The first was that the data concentrated on a single product's quality, which meant that the variation in the data was low; this did not allow the neural network to extract useful relationships from the data. The second was that the subsets of provided input...
Words: 574 - Pages: 3
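A sketch of the kind of model the answer above describes: a multilayer perceptron with two hidden layers, with the inputs normalized before training. The data is randomly generated and the layer sizes and settings are assumptions, not the configuration actually used at Coors.

```python
# MLP with two hidden layers and input normalization (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                        # stand-in chemical measurements
y = X[:, 0] * 0.7 - X[:, 3] * 0.4 + rng.normal(scale=0.1, size=200)  # stand-in sensory score

model = make_pipeline(
    StandardScaler(),                          # normalize the inputs
    MLPRegressor(hidden_layer_sizes=(16, 8),   # two hidden layers
                 max_iter=2000, random_state=0),
)
model.fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```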
...150. This neural network explanation technique is used to determine the relative importance of individual input attributes.
A. sensitivity analysis
B. average member technique
C. mean squared error analysis
D. absolute average technique
ANSWER: A
151. Which one of the following is not a major strength of the neural network approach?
A. Neural networks work well with datasets comprising noisy data.
B. Neural networks can be used for both supervised learning and unsupervised clustering.
C. Neural network learning algorithms are guaranteed to converge to an optimal solution.
D. None of the above
ANSWER: C
152. During backpropagation training, the delta rule is used to make weight adjustments so as to
A. Minimize the number of times...
Words: 490 - Pages: 2
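A simple sketch of the sensitivity-analysis idea from question 150: perturb each input of a trained model in turn and measure how much the output moves; larger movement means a more important input. The model here is a toy function standing in for a trained neural network.

```python
# Perturbation-based sensitivity analysis on a stand-in "trained" model.
import numpy as np

def trained_model(x):
    """Stand-in for a trained network's prediction on input vector x."""
    return 3.0 * x[0] - 0.5 * x[1] + 0.0 * x[2] + np.tanh(x[3])

def sensitivity(model, x_baseline, delta=0.01):
    base = model(x_baseline)
    scores = []
    for i in range(len(x_baseline)):
        x = x_baseline.copy()
        x[i] += delta                         # nudge one input attribute
        scores.append(abs(model(x) - base) / delta)
    return np.array(scores)

x0 = np.array([0.5, 0.5, 0.5, 0.5])
print("relative importance of each input:", sensitivity(trained_model, x0))
```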
...2D1432 Artificial Neural Networks and Other Learning Systems — Adaptive Resonance Theory (ART).

The plasticity vs. stability dilemma: plasticity means the network needs to learn new patterns; stability means the network needs to memorize old patterns. The human brain manages both (e.g. face recognition). In backpropagation, new patterns require retraining of the network and there is no stabilization; stabilization can be achieved by decreasing the learning rate, but decreasing the learning rate reduces plasticity. Kohonen maps (SOM) and other neural networks face the same dilemma.

ART characteristics: the goal is to design a neural network that preserves its previously learned knowledge while continuing to learn new things. ART is biologically plausible: it has a self-regulating control structure that allows autonomous recognition and learning, with no supervisory control or algorithmic implementation.

ART compared with other ANN (BP): ART uses online learning, is self-organizing (unsupervised), maintains permanent plasticity, learns in the approximate-match phase, and assumes a non-stationary world; other ANN (BP) use offline learning, are supervised, have plasticity regulated externally, learn in the mismatch (error-based) phase, and assume a stationary world.

ART terminology: STM (short term memory) refers to the dynamics of the neural units (recognition, matching); LTM (long term memory) refers to the adaptation of weights (learning); gain control is the control structure used to activate/deactivate search and matching.

ART basic architecture: F2, gain...
Words: 1571 - Pages: 7
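A compact sketch of the ART idea summarized in these notes: an unsupervised network that creates a new category when no existing one matches the input closely enough (the vigilance test), and otherwise refines the matching category, so old knowledge stays stable while new patterns can still be learned. This is a simplified binary ART1-style procedure; the vigilance value and choice parameter are illustrative assumptions, not the full architecture from the lecture.

```python
# Simplified binary ART1-style categorizer with fast learning.
import numpy as np

class SimpleART1:
    def __init__(self, n_inputs, vigilance=0.6, alpha=0.001):
        self.n = n_inputs
        self.rho = vigilance          # how strict the match must be (0..1)
        self.alpha = alpha            # small choice parameter
        self.weights = []             # one binary prototype per category (LTM)

    def train(self, pattern):
        pattern = np.asarray(pattern, dtype=float)
        # Rank existing categories by the choice function |I AND w| / (alpha + |w|).
        order = sorted(range(len(self.weights)),
                       key=lambda j: -np.minimum(pattern, self.weights[j]).sum()
                                     / (self.alpha + self.weights[j].sum()))
        for j in order:
            overlap = np.minimum(pattern, self.weights[j])
            # Vigilance test: does the winning category match the input well enough?
            if overlap.sum() / pattern.sum() >= self.rho:
                self.weights[j] = overlap      # fast learning: refine the prototype
                return j
        # No category passed vigilance: keep old knowledge intact, add a new category.
        self.weights.append(pattern.copy())
        return len(self.weights) - 1

art = SimpleART1(n_inputs=6)
for p in ([1, 1, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]):
    print("pattern", p, "-> category", art.train(p))
```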
...Chemical Product and Process Modeling, Volume 2, Issue 3, 2007, Article 12. Nonlinear Modelling Application in Distillation Column. Zalizawati Abdullah, Universiti Sains Malaysia; Norashid Aziz, Universiti Sains Malaysia; Zainal Ahmad, Universiti Sains Malaysia. Recommended Citation: Abdullah, Zalizawati; Aziz, Norashid; and Ahmad, Zainal (2007) "Nonlinear Modelling Application in Distillation Column," Chemical Product and Process Modeling: Vol. 2: Iss. 3, Article 12. Available at: http://www.bepress.com/cppm/vol2/iss3/12 DOI: 10.2202/1934-2659.1082 ©2007 Berkeley Electronic Press. All rights reserved. Nonlinear Modelling Application in Distillation Column Zalizawati Abdullah, Norashid Aziz, and Zainal Ahmad Abstract: Distillation columns are widely used in chemical processes and exhibit nonlinear dynamic behavior. To gain optimum performance from a distillation column, an effective control strategy is needed. In recent years, model-based control strategies such as internal model control (IMC) and model predictive control (MPC) have emerged as better control systems than conventional methods. One of the major challenges in developing such an effective control strategy is to construct a model that describes the process under consideration. The purpose of this paper is to provide a review of the models that have been implemented in continuous distillation columns. These models are categorized under three major groups: fundamental...
Words: 9415 - Pages: 38
...EEL5840: Machine Intelligence. Introduction to feedforward neural networks. 1. Problem statement and historical context. A. Learning framework. Figure 1 below illustrates the basic framework that we will see in artificial neural network learning. We assume that we want to learn a classification task $G$ with $n$ inputs and $m$ outputs, where $y = G(x)$, (1) with $x = [x_1\; x_2\; \dots\; x_n]^T$ and $y = [y_1\; y_2\; \dots\; y_m]^T$. (2) In order to do this modeling, let us assume a model $\Gamma$ with trainable parameter vector $w$, such that $z = \Gamma(x, w)$, (3) where $z = [z_1\; z_2\; \dots\; z_m]^T$. (4) Now, we want to minimize the error between the desired outputs $y$ and the model outputs $z$ for all possible inputs $x$. That is, we want to find the parameter vector $w^\ast$ so that $E(w^\ast) \le E(w)$, $\forall w$, (5) where $E(w)$ denotes the error between $G$ and $\Gamma$ for model parameter vector $w$. Ideally, $E(w)$ is given by $E(w) = \int_x \lVert y - z \rVert^2 \, p(x)\, dx$, (6) where $p(x)$ denotes the probability density function over the input space $x$. Note that $E(w)$ in equation (6) is dependent on $w$ through $z$ [see equation (3)]. Now, in general, we cannot compute equation (6) directly; therefore, we typically compute $E(w)$ for a training data set of input/output data $\{(x_i, y_i)\}$, $i \in \{1, 2, \dots, p\}$, (7) where $x_i$ is the $n$-dimensional input vector, $x_i = [x_{i1}\; x_{i2}\; \dots\; x_{in}]^T$. (8) [Figure 1: the inputs $x_1, \dots, x_n$ feed both the unknown mapping $G$, which produces the outputs $y_1, \dots, y_m$, and the trainable model $\Gamma$, which produces the model outputs $z_1, \dots, z_m$.] ...
Words: 7306 - Pages: 30
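A small sketch of the practical substitute for equation (6): since the integral over $p(x)$ cannot be computed directly, $E(w)$ is estimated as an average of squared output errors over the $p$ training pairs $(x_i, y_i)$. The linear stand-in model below is an assumption made only to keep the example self-contained; it is not the $\Gamma$ from the notes.

```python
# Empirical estimate of E(w) over a training set, with a trivial linear stand-in model.
import numpy as np

def model(x, w):
    """Stand-in trainable model z = Gamma(x, w): a single linear layer."""
    W, b = w
    return W @ x + b

def empirical_error(w, X, Y):
    """Mean squared output error over the training set, approximating E(w)."""
    return np.mean([np.sum((y - model(x, w)) ** 2) for x, y in zip(X, Y)])

rng = np.random.default_rng(0)
n, m, p = 4, 2, 50                     # inputs, outputs, number of training pairs
X = rng.normal(size=(p, n))
true_W = rng.normal(size=(m, n))
Y = np.array([true_W @ x for x in X])  # targets produced by an "unknown mapping" G

w_guess = (rng.normal(size=(m, n)), np.zeros(m))
w_true = (true_W, np.zeros(m))
print("E(w) at a random guess:", round(empirical_error(w_guess, X, Y), 3))
print("E(w) at the true parameters:", round(empirical_error(w_true, X, Y), 3))
```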