First International Conference on Emerging Trends in Engineering and Technology

Rough Set Approach for Feature Reduction in Pattern Recognition through Unsupervised Artificial Neural Network
A. G. Kothari (Lecturer), A. G. Keskar (Professor), A. P. Gokhale (Professor), Rucha Deshpande (B.Tech Student), Pranjali Deshmukh (B.Tech Student)
agkothari72@rediffmail.com
Department of Electronics & Computer Science Engineering, VNIT, Nagpur

Abstract
The Rough Set approach can be applied in pattern recognition at three different stages: the pre-processing stage, the training stage, and the architecture. This paper proposes the application of the Rough-Neuro Hybrid Approach in the pre-processing stage of pattern recognition. In this project, a training algorithm was first developed based on the Kohonen network. This is used as a benchmark to compare the results of the pure neural approach with the Rough-Neuro hybrid approach and to show that the efficiency of the latter is higher. Structural and statistical features are extracted from the images for the training process. The number of attributes is reduced by calculating reducts and core from the original attribute set, which reduces convergence time. This removal of redundancy also increases the speed of the process, reduces hardware complexity, and thus enhances the overall efficiency of the pattern recognition algorithm.

Keywords: core, dimensionality reduction, feature extraction, rough sets, reducts, unsupervised ANN

1. Introduction
In some cases of object classification, there may be a conflict regarding the exact class to which a data object belongs. It may lie in the boundary region between two classes and hence cannot be classified crisply. In such cases, when the boundary region is non-empty, the set is said to be rough. Rough Set Theory deals with the classification and analysis of such inconsistent data. For efficient pattern recognition, two main processes can be used: a statistical process or an artificial neural network (ANN) based process. Because of the numerous limitations of statistical processes and the inherent advantages of ANNs, the second approach is preferred, as an ANN is the most general tool and also works well in noisy conditions. The image data set in this project consists of printed alphanumeric characters in ten fonts. Since the input data contains many inconsistencies, an unsupervised ANN is preferred. The result of the suggested approach is used as a benchmark for comparison with the rough set based approach. Because the discernibility of features with respect to inter-class classification is not exploited in the unsupervised ANN approach, the attributes used can be redundant. This increases the dimensionality of the feature space, which in turn makes the process more complicated and time consuming. Since this is a real-time application, early convergence is equally important. In such circumstances, an approach based on rough sets is required. The rough set based formulation fully exploits the discernibility between the parameters required for inter-class classification. This is done by finding reducts and core using rough sets, which reduce the dimensionality of the feature space [2]. For a very large data set like this one, however, a pure rough set approach would also suffer from delayed convergence. Hence, a rough set approach is used for pre-processing (attribute reduction), and the reduced attribute set is fed to the neural classifier for pattern recognition.
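As a minimal illustration of these notions (not from the paper; the objects, attributes and function names below are hypothetical), the lower and upper approximations of a target set X under the indiscernibility relation induced by a set of attributes can be computed as follows; X is rough exactly when the boundary region between the two approximations is non-empty.

```python
# Illustrative sketch only: lower/upper approximation of a target set X
# under the indiscernibility relation induced by condition attributes.
# All objects, attributes and values here are hypothetical.

def partition(objects, attrs):
    """Group objects into equivalence (indiscernibility) classes."""
    classes = {}
    for obj, values in objects.items():
        key = tuple(values[a] for a in attrs)
        classes.setdefault(key, set()).add(obj)
    return classes.values()

def approximations(objects, attrs, X):
    """Return the lower and upper approximations of X."""
    lower, upper = set(), set()
    for eq_class in partition(objects, attrs):
        if eq_class <= X:       # class lies entirely inside X
            lower |= eq_class
        if eq_class & X:        # class overlaps X
            upper |= eq_class
    return lower, upper

objects = {
    'x1': {'a': 0, 'b': 1}, 'x2': {'a': 0, 'b': 1},
    'x3': {'a': 1, 'b': 0}, 'x4': {'a': 1, 'b': 1},
}
lower, upper = approximations(objects, ['a', 'b'], {'x1', 'x3'})
print(upper - lower)  # non-empty boundary {'x1', 'x2'}: the set is rough
```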

2. Image Pre-Processing & Feature Extraction

2.1 Image Pre-processing
The original data set is subjected to a number of preliminary processing steps to make it usable by the feature extraction algorithm. Pre-processing aims at producing data that is easy for the pattern recognition system to operate on accurately. The main objectives of pre-processing [3] are: binarization, noise reduction, skeletonization, boundary extraction, stroke width compensation [10], truncation of the redundant portion of the image, and resizing to a specific size. Image binarization is the conversion of a gray scale image into a binary image. Noise reduction is performed using morphological operations such as dilation and erosion. Skeletonization of the image gives an approximate single-pixel skeleton, which helps in the further stages of feature extraction and classification. The outermost boundary of the image is extracted to obtain boundary related attributes such as chain codes and number of loops. Stroke width compensation is performed to repair the character strokes, to fill small holes and to reduce the uneven nature of the characters. The white portion surrounding the character can create noise in the feature extraction process and also increases the size of the image unnecessarily; truncation removes it. In the end, the image is resized to a predefined size, 64 x 64 pixels in this case.
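A rough sketch of this chain is given below. The paper does not specify its implementation or parameters, so the use of scikit-image, Otsu thresholding and the re-binarization cutoff are assumptions; boundary extraction and stroke width compensation [10] are omitted here.

```python
# Sketch of the pre-processing chain described above (assumed
# implementation; the paper gives the steps but not the tooling).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, skeletonize
from skimage.transform import resize

def preprocess(gray):
    """gray: 2-D grayscale array, dark character on a light background."""
    # Binarization: foreground pixels become True (Otsu threshold assumed).
    binary = gray < threshold_otsu(gray)
    # Noise reduction via a morphological opening (erosion then dilation).
    clean = binary_opening(binary)
    # Truncation: crop to the bounding box to drop the white margin.
    rows, cols = np.nonzero(clean)
    clean = clean[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    # Resize to the fixed 64 x 64 size used in the paper, then re-binarize.
    resized = resize(clean.astype(float), (64, 64)) > 0.5
    # Skeletonization: approximate single-pixel-wide skeleton.
    return skeletonize(resized)
```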

Figure 1. Image pre-processing

2.2 Feature Extraction
In the feature extraction stage, each character is represented as a feature vector, which becomes its identity. The major goal of feature extraction is to extract a set of features that maximizes the recognition rate with the least number of elements. The feature extraction process used in this project produces two types of features: statistical and structural [2, 3, 7]. The major statistical features used are zoning, crossings, pixel density, Euler number and compactness. In zoning, the 64 x 64 character image is divided into 16 x 16 pixel parts and the pixel density of each part is calculated individually. This captures local rather than global characteristics and is an important attribute for pattern recognition. Crossings count the number of transitions between background and foreground pixels along vertical and horizontal lines through the character image. For example, in Fig. 3 there are 6 vertical crossings (white to black and black to white) and 4 horizontal crossings in both the upper and lower part. Pixel density is also calculated over the whole 64 x 64 image. The Euler number of an image is a scalar whose value is the total number of objects in the image minus the total number of holes in those objects; it is calculated for each image.

Structural features are based on topological and geometrical properties of the character, such as aspect ratio, loops, strokes and their directions. In this project, the boundary of the image is obtained and its chain code is calculated. The number of occurrences of each chain code direction (ones through eights) is then counted, which is a good attribute for discernibility between different characters. The number of loops and the perimeter of the image are also obtained. A sketch of the zoning and crossings computations is given after the figures.

Figure 2. Zoning

Figure 3. Crossings
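The sketch below is an assumed implementation of the zoning and crossings features (the paper gives only the feature definitions); the zone size follows the 16 x 16 split of the 64 x 64 image described above.

```python
# Sketch of the zoning and crossings features described above
# (assumed implementation; function names are illustrative).
import numpy as np

def zoning(img, zone=16):
    """Local pixel density of each 16 x 16 zone of a binary image."""
    h, w = img.shape
    return np.array([
        img[r:r + zone, c:c + zone].mean()
        for r in range(0, h, zone)
        for c in range(0, w, zone)
    ])

def crossings(img, line):
    """Background/foreground transitions along one row of the image."""
    row = img[line].astype(int)
    return int(np.abs(np.diff(row)).sum())

# Usage on a 64 x 64 binary character image `img`:
# densities = zoning(img)    # 16 local densities (4 x 4 grid of zones)
# horiz = crossings(img, 32) # transitions along the middle row
```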

3. Unsupervised Neural Network Approach
In a neural network, if the target output is not known, the training method adopted is unsupervised learning. The network modifies its weights so that the most similar input vectors are assigned to the same output unit. This is also referred to as self-organization, because the network organizes the data presented to it and detects their emergent collective properties. The advantages of unsupervised training are that the classification is closer to human thinking and that training is robust to small inconsistencies in the data; it is also less lengthy than supervised training. Paradigms of unsupervised learning are Hebbian learning and the competitive learning rule; this project uses the latter. A winner-take-all algorithm is used for competitive learning and is applied to the Kohonen self-organizing network. The network units update their weights by forming a new weight vector that is a linear combination of the old weight vector and the current input vector; only the unit whose weight vector is closest to the input vector is allowed to learn, as sketched below. An Excel sheet with rows representing characters and columns representing attributes is used as input to the training algorithm, as shown in Fig. 4. The first column holds the decision attribute, while the remaining 30 columns hold the condition attributes. The decision attribute gives the outcome based on the values of the condition attributes, which are the inputs to the neural network. The data set in this project consists of printed alphanumeric characters in ten fonts with 5 repetitions, creating a data set of 1300 characters.
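A minimal sketch of this winner-take-all update follows; the learning-rate schedule, number of units and initialization are assumptions, since the paper does not report them.

```python
# Sketch of winner-take-all competitive learning on a Kohonen network
# (assumed hyperparameters; only the update rule follows the text).
import numpy as np

def train_kohonen(X, n_units, epochs=100, lr=0.5, seed=0):
    """X: (n_samples, n_features) array of condition attributes."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, X.shape[1]))     # one weight vector per unit
    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)     # assumed decaying rate
        for x in X:
            # Winner: the unit whose weight vector is closest to the input.
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            # Only the winner learns: the new weight is a linear
            # combination of the old weight vector and the input vector.
            W[winner] += alpha * (x - W[winner])
    return W
```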

Figure 4. Data set containing image attributes used for training

4. Rough-Neuro Hybrid Approach
Using rough sets results in dimensionality reduction and optimized classification through the removal of redundant attributes, while the neural network is the most generalized tool for pattern recognition and can also work in noisy conditions. This paper therefore proposes a new Rough-Neuro Hybrid Approach in the pre-processing stage of pattern recognition, which reduces the convergence time that was higher for the Kohonen-only training. Reducts and cores are extracted from the set of given attributes. In this process, the sets of equivalence classes that are indiscernible using the given attributes are identified. Only those attributes that preserve the indiscernibility relation are kept; the redundant ones are removed, as they do not affect the classification. A reduct is thus a reduced set of attributes that classifies the data set with the same efficiency as the original attribute set. Reducts ease prediction and decision making, which in turn gives improved classification with a reduced dimensionality of the feature space [5, 6].

4.1 Steps for finding reducts
The flowchart of the algorithm for extracting reducts is shown in Figure 5. In the first step, the discernibility matrix is calculated for the input information. Formally, an information system I is a pair (U, A), where U is a non-empty finite set of objects and A is a non-empty finite set of attributes [5, 6]. An information system I defines a matrix MA, called the discernibility matrix. Each entry MA(x, y), a subset of A, consists of the set of attributes that can be used to discern between objects x, y in U: "MA(x, y) consists of all those a which belong to A and discern between x and y" [5, 6]. After finding the discernibility matrix, the overall relative weight of each attribute is calculated by adding its relative weights over the individual pairs of objects, as shown in Table 1; this becomes the parameter for choosing the reduced set of attributes. The core is the set of those attributes which are essential for classification between two classes and for which there is no alternative. The third step is to find these core attributes and, finally, the reducts. The core can be found from the discernibility matrix itself: in any entry where every attribute has weight 0 except a single attribute with weight 1, that attribute belongs to the core. The top 45-50% of attributes by relative weight are then combined with the core to form the final reducts, and this set is given to the neural network for decision making.

Figure 5. Steps for finding reducts

For a better understanding, consider the example shown in Table 2. A close look at this pattern recognition data reveals that the conditional attribute PixDen (zone 1) is redundant. A sketch of the reduct-finding procedure is given below.
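The following is an illustrative reading of the procedure under the stated 45-50% cutoff; restricting discernibility entries to pairs of objects with different decision values, and all function and attribute names, are assumptions not fixed by the paper.

```python
# Sketch of the reduct-finding procedure described above
# (assumed implementation; names and the decision-relative
# restriction are illustrative choices).
from itertools import combinations

def discernibility_entries(table, attrs, decision):
    """Yield, for each pair of objects with different decision values,
    the set of condition attributes that discerns them."""
    for x, y in combinations(table, 2):
        if x[decision] != y[decision]:
            yield {a for a in attrs if x[a] != y[a]}

def find_reducts(table, attrs, decision, keep=0.5):
    """Return core attributes plus the top `keep` fraction by weight."""
    weights = {a: 0 for a in attrs}
    core = set()
    for entry in discernibility_entries(table, attrs, decision):
        if len(entry) == 1:      # single discerning attribute: no
            core |= entry        # alternative exists, so it is core
        for a in entry:          # accumulate per-pair relative weights
            weights[a] += 1
    ranked = sorted(attrs, key=weights.get, reverse=True)
    top = set(ranked[:int(keep * len(attrs))])
    return core | top            # final reducts fed to the classifier

# Hypothetical usage with the attributes of Table 2:
# table = [{'EulerNum': 1, 'AvgPixDen': 0.31, 'PixDenZone1': 0.2,
#           'char': 'A'}, ...]
# reducts = find_reducts(table, ['EulerNum', 'AvgPixDen', 'PixDenZone1'],
#                        'char')
```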


Thus, only Euler Number and Avg PixDen are needed to take a decision. These reduced attributes are called reducts.

Table 1: Relative contribution of various condition attributes.

Table 2: Training data with condition and decision attributes. PixDen (zone 1) is the local pixel density of the first zone obtained by zoning the image; Avg PixDen is the normalized pixel density of the entire image.

5. Results
The pure neural approach, used for benchmarking, gave an efficiency of 55%. The method proposed above was then used to find reducts, which reduced the dimensionality of the attribute set, and the Kohonen network was retrained with this reduced set, giving an efficiency of 53.33% for the Rough-Neuro Hybrid approach. The slight reduction in efficiency is compensated by the removal of redundancy, the decrease in the number of epochs required for convergence and the reduction in hardware complexity. Thus, the Rough-Neuro Hybrid approach proves better for pattern recognition than the pure neural approach.

6. References
[1] S. N. Sivanandam, S. Sumathi and S. N. Deepa, "Introduction to Neural Networks using Matlab 6.0", first edition, Tata McGraw Hill, 2006, pp. 531-536.
[2] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, "Digital Image Processing using MATLAB", first impression, Pearson Education, 2006, pp. 348-497.
[3] Hongsheng Su and Qunzhan Li, "Fuzzy Neural Classifier for Fault Diagnosis of Transformer Based on Rough Sets Theory", IEEE CS, pp. 2223-2227.
[4] Zdzislaw Pawlak, "Rough Sets: Theoretical Aspects of Reasoning about Data", Kluwer Academic Publishers, 1991, pp. 1-43.
[5] Hongsheng Su and Qunzhan Li, "Fuzzy Neural Classifier for Fault Diagnosis of Transformer Based on Rough Sets Theory", IEEE CS, pp. 2223-2227.
[6] Andrew Kusiak, "Rough Set Theory: A Data Mining Tool for Semiconductor Manufacturing", IEEE Transactions on Electronics Packaging Manufacturing, vol. 24, no. 1, January 2001, pp. 44-50.
[7] Giorgos Vamvakas, "Optical Character Recognition for Handwritten Characters", National Center for Scientific Research "Demokritos", Athens, Greece, Institute of Informatics and Telecommunications, Computational Intelligence Laboratory (CIL).
[8] Seethalakshmi R., Sreeranjani T. R., Balachandar T., Abnikant Singh, Markandey Singh, Ritwaj Ratan and Sarvesh Kumar, "Optical Character Recognition for printed Tamil text using Unicode", Journal of Zhejiang University SCIENCE.
[9] Noor Ahmed Shaikh and Zubair A. Shaikh, "A Generalized Thinning Algorithm for Cursive and Non-Cursive Language Scripts".
[10] Jianming Hu, Donggang Yu and Hong Yan, "Algorithm for stroke width compensation of handwritten characters", Electronics Letters Online No. 19961501.
[11] Chichang Jou, Tai-Yuan Hsiao and Hung-Chang Lee, "Handwritten Numeral Recognition based on Reduced Feature Extraction and Fuzzy Membership Function".
[12] Jerzy W. Grzymala-Busse, "Introduction to Rough Set Theory and Applications".

