2000_ ________ __ _______ _7_ _2_

Fuzzy Darwinian Detection of Credit Card Fraud
Peter J. Bentley, *Jungwon Kim, **Gil-Ho Jung and ***Jong-Uk Choi
*Department of Computer Science, University College London
**Department of Computer Science, SungKyunKwan University
*** Department of 888 , Sangmyung University e-mail: J.Kim@cs.ucl.ac.uk




Credit evaluation is one of the most important and difficult tasks for credit card companies, mortgage companies, banks and other financial institutions. Incorrect credit judgement causes huge financial losses. This work describes the use of an evolutionary-fuzzy system capable of classifying suspicious and non-suspicious credit card transactions.
The paper starts with the details of the system used in this work. A series of experiments are described, showing that the complete system is capable of attaining good accuracy and intelligibility levels for real data.

1. INTRODUCTION
Fraud is a big problem today. Looking at credit card transactions alone, with millions of purchases every month, it is simply not humanly possible to check every one. And when many purchases are made with stolen credit cards, this inevitably results in losses of significant sums. The only viable solution to problems of this scale is automation by computer. Just as computers are used for credit scoring, risk assessment and customer profiling, it is possible to use computers to assess the likelihood of credit card transactions being “suspicious”. Such automated detection can be performed by using simple statistical techniques, or by applying ‘rules of thumb’ to claims.
However, the fingerprints of fraudulent activity may be diverse and complex, resulting in the failure of these traditional methods. This motivates the use of newer techniques, called machine learning or pattern classification, which are capable of finding complex nonlinear ‘fingerprints’ in data.
This paper investigates one such technique: the use of genetic programming to evolve fuzzy logic rules capable of classifying credit card transactions into “suspicious” and “non-suspicious” classes. The paper follows on from [1] and [2], describing the application of the committee decision-making system to a new problem.

2. SYSTEM OVERVIEW
This section describes the evolutionary fuzzy system used
(with different setups) as members of a committee. Full details of this system can be found in [2].
The system developed during this research comprises two main elements: a Genetic Programming (GP) search algorithm and a fuzzy expert system. Figure 1 provides an overview.

2.1 CLUSTERING
Data is provided to the system in the form of two comma-separated-variable (CSV) files: training data and test data.
When started, the system first clusters each column of the training data into three groups using a one-dimensional clustering algorithm. A number of clusterers are implemented in the system, including C-Link, S-Link and K-means [5].
After every column of the data has been successfully clustered into three, the minimum and maximum values in each cluster are found. These values are then used to define the domains of the membership functions of the fuzzy expert system [6].
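The paper does not give code for this step; the sketch below assumes a one-dimensional k-means with k = 3 (one of the clusterers named above) and a deterministic initialisation, and shows how each cluster's minimum and maximum become a membership-function domain:

```python
def kmeans_1d(values, k=3, iters=50):
    """Cluster one column of data into k groups with 1-D k-means."""
    vals = sorted(values)
    # deterministic initialisation: spread the centres across the sorted column
    centres = [vals[(len(vals) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centres[i]))].append(v)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return clusters

def cluster_domains(values):
    """Min/max of each cluster, ordered low to high: these pairs become the
    domains of the LOW / MEDIUM / HIGH membership functions."""
    return sorted((min(c), max(c)) for c in kmeans_1d(values) if c)
```

For example, `cluster_domains([1, 2, 3, 50, 55, 99, 101])` yields the three domains `[(1, 3), (50, 55), (99, 101)]`.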
2.2 MEMBERSHIP FUNCTIONS
Three membership functions, corresponding to the three groups generated by the clusterer, are used for each column of data. Each membership function defines the ‘degree of membership’ of every data value in each of the three fuzzy sets: ‘LOW’, ‘MEDIUM’ and ‘HIGH’ for its corresponding column of data. Since every column is clustered separately, with the clustering determining the domains of the three membership functions, every column of data has its own, unique set of three functions.

[Figure 1: Data passes through a 1D clusterer, which shapes the membership functions used by the Fuzzifier; the fuzzified data feeds the fuzzy system (Rule Parser) and the fitness functions, which guide the GP system (random rule initialisation, selection, reproduction) as it maps genotypes (coded rules) to phenotypes (rules) such as “NOT (IS_LOW Fred OR NOT IS_HIGH Harry)” and “NOT (IS_LOW Susan)”; modal information is recorded in a modal database.]

Figure 1 Block diagram of the Evolutionary-fuzzy system.

The system can use one of three types of membership function: ‘non-overlapping’, ‘overlapping’, and ‘smooth’
[2]. The first two are standard trapezoidal functions; the third is a set of functions based on the arctangent of the input, providing a smoother, more gradual set of ‘degrees of membership’.
Whichever set of membership functions is selected, they are then shaped according to the clusterer and used to fuzzify all input values, resulting in a new database of fuzzy values. The GP engine is then seeded with random genotypes (coded rules) and evolution is initiated.
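As an illustration of the fuzzification step, a minimal sketch of the ‘overlapping’ trapezoidal variant is given below; the exact shoulder placement, and the function and parameter names, are assumptions of this sketch:

```python
def ramp(v, x0, x1):
    """Linear rise from 0 at x0 to 1 at x1, clamped to [0, 1]."""
    if x1 == x0:
        return 1.0 if v >= x1 else 0.0
    return min(1.0, max(0.0, (v - x0) / (x1 - x0)))

def make_membership_fns(domains):
    """Overlapping trapezoidal LOW/MEDIUM/HIGH functions built from the
    three cluster (min, max) domains of one column; shoulders span the
    gaps between adjacent domains (an assumption of this sketch)."""
    (l0, h0), (l1, h1), (l2, h2) = sorted(domains)
    return {
        "LOW":    lambda v: 1.0 - ramp(v, h0, l1),
        "MEDIUM": lambda v: min(ramp(v, h0, l1), 1.0 - ramp(v, h1, l2)),
        "HIGH":   lambda v: ramp(v, h1, l2),
    }

def fuzzify(value, fns):
    """Map one crisp value to its vector of membership degrees."""
    return {name: fn(value) for name, fn in fns.items()}
```

With domains `[(0, 10), (20, 30), (40, 50)]`, a value of 25 is fully ‘MEDIUM’, while 15 (in the gap) is half ‘LOW’ and half ‘MEDIUM’.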
2.3 EVOLVING RULES
The implementation of the GP algorithm employs many of the techniques used in GAs to overcome some of the problems associated with simple GP systems. For example, this evolutionary algorithm uses a crossover operator designed to minimise the disruption caused by standard GP crossover; it uses a multiobjective fitness-ranking method to allow solutions that satisfy multiple criteria to be evolved; and it uses binary genotypes which are mapped to phenotypes.
2.3.1 Genotypes and Phenotypes
Genotypes consist of variable sized trees, where each node consists of a binary number and a flag defining whether the node is binary, unary or a leaf, see figure 2. At the start of evolution, random genotypes are created. Genotypes are mapped onto phenotypes to obtain fuzzy rules, e.g. the genotype shown in fig. 2 maps onto the phenotype:
“(IS_MEDIUM (Height OR IS_LOW Age) AND IS_MEDIUM Age)”.
Currently the system uses two binary functions: ‘OR’ and ‘AND’, four unary functions: ‘NOT’, ‘IS_LOW’,
‘IS_MEDIUM’, ‘IS_HIGH’, and up to 256 leaves (column labels such as “Date”, “PolicyNumber”, “Age”, “Cost”).
Depending on the type of each node, the corresponding binary value is mapped to one of these identifiers and added to the phenotype. The mapping process is also used to ensure all rules are syntactically correct, see [2].


[Figure 2: an example genotype tree whose nodes each hold an 8-bit binary number and a flag marking the node as binary, unary or leaf (e.g. 11010111 binary, 00010111 unary, 00010011 leaf).]

Figure 2: An example genotype used by the system.
2.3.2 Rule Evaluation
Every evolved phenotype (or fuzzy rule) is evaluated by using the fuzzy expert system to apply it to the fuzzified training data, resulting in a defuzzified score between 0 and 1 for every fuzzified data item. This list of scores is then assessed by fitness functions which provide separate fitness values for the phenotype, designed to:
i. minimise the number of misclassified items;
ii. maximise the difference between the average scores for correctly classified “suspicious” items and the average scores for “normal” items;
iii. maximise the sum of scores for “suspicious” items;
iv. penalise the length of any rules that contain more than four identifiers (binary, unary, or leaf nodes).
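A sketch of these four fitness computations, assuming a 0.5 decision threshold on the defuzzified score and a simple excess-length penalty (both assumptions of this sketch):

```python
def fitness_values(scores, labels, rule_length):
    """Four fitness values for one rule, from the defuzzified score (0..1)
    and the 'suspicious' label of every training item.  Objectives i and iv
    are minimised, ii and iii maximised.  The 0.5 threshold and the
    excess-length penalty form are assumptions."""
    predicted = [s >= 0.5 for s in scores]
    misclassified = sum(p != bool(l) for p, l in zip(predicted, labels))  # i
    susp = [s for s, l in zip(scores, labels) if l]
    norm = [s for s, l in zip(scores, labels) if not l]
    separation = sum(susp) / len(susp) - sum(norm) / len(norm)            # ii
    susp_sum = sum(susp)                                                  # iii
    length_penalty = max(0, rule_length - 4)                              # iv
    return misclassified, separation, susp_sum, length_penalty
```

For instance, a five-identifier rule scoring `[0.9, 0.8]` on suspicious items and `[0.1, 0.2]` on normal ones misclassifies nothing, separates the class averages by 0.7, and incurs a length penalty of 1.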

2.3.3 Rule Generation
Using these four fitness values for each rule, the GP system then employs the SWGR multiobjective optimisation ranking method [4] to determine how many offspring each pair of rules should have.
Child rules are generated using one of two forms of crossover. The first type of crossover emulates the single-point crossover of genetic algorithms by finding two random points in the parent genotypes that resemble each other, and splicing the genotypes at those points. By ensuring that the same types of nodes, in approximately the same places, are crossed over, and that the binary numbers within the nodes are also crossed, an effective exploration of the search space is provided without excessive disruption [3]. The second type of crossover generates child rules by combining two parent rules using a binary operator (an ‘AND’ or an ‘OR’). This more unusual method of generating offspring (applied roughly once in every ten reproductions in place of the first crossover operator) permits two parents that detect different types of “suspicious” data to be combined into a single, fitter individual. Mutation is also occasionally applied, randomly modifying the binary number in a node by a single bit.
The GP system employs population overlapping, where the worst Pn% of the population are replaced by the new offspring generated from the best Pm%. Typically values of Pn = 80 and Pm = 40 seem to provide good results. The population size was normally 100 individuals.
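The overlapping replacement scheme can be sketched as below, with a scalar `fitness` standing in for the SWGR multiobjective rank [4] and `breed` standing in for the crossover operators just described:

```python
import random

def next_generation(population, fitness, breed, pn=80, pm=40):
    """One overlapping-population step: the worst pn% of the population are
    replaced by offspring bred from the best pm%.  `fitness` stands in for
    the SWGR multiobjective rank (higher is better); `breed` makes one
    child from two parents."""
    ranked = sorted(population, key=fitness, reverse=True)
    n_replace = len(ranked) * pn // 100
    parents = ranked[: len(ranked) * pm // 100]
    survivors = ranked[: len(ranked) - n_replace]
    offspring = [breed(*random.sample(parents, 2)) for _ in range(n_replace)]
    return survivors + offspring
```

With the defaults (Pn = 80, Pm = 40) and a population of 100, each generation keeps the best 20 individuals and breeds 80 children from the top 40.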
2.3.4 Modal Evolution
Each evolutionary run of the GP system (usually only 15 generations) results in a short, readable rule which detects some, but not all, of the “suspicious” data items in the training data set. Such a rule can be considered to define one mode of a multimodal problem. All items that are correctly classified by this rule (recorded in the modal database, see figure 1) are removed and the system automatically restarts, evolving a new rule to classify the remaining items. This process of modal evolution continues until every “suspicious” data item has been described by a rule. However, any rules that misclassify more than a predefined percentage of claims are removed from the final rule set by the system.
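The modal-evolution loop can be sketched as follows; `evolve_rule`, `classifies` and the 10% cut-off are placeholders for the GP run, the rule parser, and the predefined misclassification threshold:

```python
def modal_evolution(training_items, evolve_rule, classifies, max_misclass_pct=10):
    """Modal-evolution loop: evolve one short rule, remove the 'suspicious'
    items it covers, and restart on the remainder; rules that misclassify
    too many normal items are dropped from the final set.  evolve_rule and
    classifies stand in for the GP run and the rule parser; the 10% cut-off
    is a placeholder value."""
    remaining = [it for it in training_items if it["suspicious"]]
    normal = [it for it in training_items if not it["suspicious"]]
    rule_set = []
    while remaining:
        rule = evolve_rule(remaining + normal)
        covered = [it for it in remaining if classifies(rule, it)]
        if not covered:
            break  # guard against an unproductive run looping forever
        false_alarms = sum(classifies(rule, it) for it in normal)
        if 100.0 * false_alarms / max(1, len(normal)) <= max_misclass_pct:
            rule_set.append(rule)
        remaining = [it for it in remaining if it not in covered]
    return rule_set
```

Each pass through the loop corresponds to one short GP run of the kind described above, so the final rule set contains one rule per discovered mode.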
2.4 ASSESSMENT OF FINAL RULE SET
Once modal evolution has finished generating a rule set, the complete set of rules (joined into one by disjunction,
i.e., ‘OR’ed together) is automatically applied to the training data and test data, in turn. Information about the system settings, which claims were correctly and incorrectly classified for each data set, total processing time in seconds, how the data was clustered and the rule set are stored to disk.
2.5 APPLYING RULES TO FUZZY DATA
The path of evolution through the multimodal and multicriteria search space is guided by fitness functions.
These functions use the results obtained by the Rule Parser
- a fuzzy expert system that takes one or more rules and interprets their meaning when they are applied to each of the previously fuzzified data items in turn.
This system is capable of two different types of fuzzy logic rule interpretation: traditional fuzzy logic, and membership-preserving fuzzy logic, an approach designed during this research. Depending on which method of interpretation has been selected by the user, the meaning of the operators within rules and the method of defuzzification is different. Complete details of the fuzzy interpretation methods are provided in [2].
2.6 COMMITTEE DECISIONS
As should now be apparent, the evolutionary-fuzzy system has a number of very different parameters that can be used at any one time. What may be a good setup for one data set is not so good for another. In addition, the multiple results generated by multiple different system setups need to be assessed against multiple criteria. To achieve this, the system is equipped with a multi-model decision-aggregation system.
The user can now set up as many as four different versions of the system and have them run in parallel on the same data set. The committee decision maker employs aggregation of weighted normalised values for accuracy and importance [1]. The default weighting values were 0.3 and 1.0 for accuracy and importance, respectively. Once every rule set has been assigned a score, the set(s) with the highest score for each committee member are reported to the user. The committee decision maker then performs the same analysis globally, finding the globally most accurate and intelligible rule set(s), then assigning every rule set a score based on globally aggregated, weighted, normalised values. The best overall rule set(s) are then reported to the user. For full details, see [1].
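A sketch of the committee's scoring, assuming min-max normalisation of the accuracy and importance values (the precise normalisation is given in [1]):

```python
W_ACCURACY, W_IMPORTANCE = 0.3, 1.0   # default weights given in the text

def normalise(xs):
    """Min-max normalise a list of values to [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [1.0 if hi == lo else (x - lo) / (hi - lo) for x in xs]

def committee_scores(accuracies, importances):
    """Aggregate weighted, normalised accuracy and importance values into
    one score per candidate rule set; the highest-scoring set(s) are the
    ones reported to the user."""
    return [W_ACCURACY * a + W_IMPORTANCE * i
            for a, i in zip(normalise(accuracies), normalise(importances))]
```

The same scoring is applied per committee member and then globally across all members' rule sets.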

3. APPLYING THE SYSTEM TO CREDIT CARD DATA
3.1 DATA
The data used in this work was gathered from a domestic credit card company. Even though the company provided real credit card transaction data for this research, it required that the company name be kept confidential.
The data was gathered from January to December of 1995 and a total of 4000 transaction records were provided, each with 96 fields. 62 fields were selected for the experiments.
The excluded 34 fields were regarded as clearly irrelevant for distinguishing credit status (examples include the client code number and the transaction index number). The details of the selected field names were not allowed to be reported. In order to allow the system’s fuzzy rule evolution, the collected data was labeled as “suspicious” or “non-suspicious”. These labels were assigned following the heuristics used by the credit card company: when the customer’s payment is not overdue, or payments have been overdue for less than three months, the transaction is considered “non-suspicious”; otherwise it is considered “suspicious”.
To prepare a training set and a test set, we employed a simple cross-validation method: one-third of the data was held out for testing and the remaining two-thirds used for training. The system executed its rule evolution three times, each time replacing the training set with a different third of the data. This cross-validation was performed to ensure the evolved rule sets were not biased by any particular training set. By comparing the three rule sets evolved from the three different training sets, the final rule set is expected to represent the features of the entire data set. Unfortunately, the class distribution of the collected credit card transaction data was uneven: there were many more examples of the "non-suspicious" class than of the "suspicious" class. The smaller "suspicious" class contained 985 items in total, which is still large enough to be divided into three subsets. Thus, the four committee members, with identical experiment setups, were run three times, once on each data subset. The examples included in each set are shown in Table 1.
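The three-fold hold-out described above can be sketched generically as:

```python
def three_fold_splits(ids):
    """Hold out a different third of the example IDs for testing in each of
    three runs, training on the remaining two-thirds."""
    n = len(ids)
    thirds = [ids[: n // 3], ids[n // 3 : 2 * n // 3], ids[2 * n // 3 :]]
    folds = []
    for i, test in enumerate(thirds):
        train = [x for j, third in enumerate(thirds) if j != i for x in third]
        folds.append((train, test))
    return folds
```

The actual ID ranges used for the “suspicious” and “non-suspicious” classes are those listed in Table 1.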
Exp   “SUSPICIOUS”                    “NON-SUSPICIOUS”
      Training           Test         Training              Test
 1    1-656              657-985      1-2000                2000-3015
 2    329-985            1-328        1001-3015             1-1000
 3    657-985 & 1-328    329-656      2001-3015 & 1-1000    1001-2000

Table 1. Credit card data distribution for the three experiments. The numbers in this table are the IDs of the examples belonging to each set. Exp stands for experiment.

3.2 EXPERIMENTS
Three sets of experiments were performed with the committee decision system and the four different setups of fuzzy rule evolver were run for each experiment:


[Table 2 layout (the individual cell values are scrambled in this copy): for each committee member — [A] Fuzzy Logic with non-overlapping MFs, [B] Fuzzy Logic with overlapping MFs, [C] MP-Fuzzy Logic with overlapping MFs, [D] MP-Fuzzy Logic with smooth MFs — the table gives R and the TP% and FN% rates on the training and test data for experiments 1-3.]

Table 2. Intelligibility (number of rules) and accuracy (number of correct classifications of “suspicious” items) of rule sets for test and training data. R is the number of rules in the generated rule set; TP and FN are given in %.

1. standard fuzzy logic with non-overlapping membership functions
2. standard fuzzy logic with overlapping membership functions
3. membership-preserving fuzzy logic with overlapping membership functions
4. membership-preserving fuzzy logic with smooth membership functions
(Previous work had shown that varying these aspects of the system caused the largest variation in behaviour [2].)
All four committee members were trained on one selected training set and test set. This resulted in different rule sets being generated for this problem, each with different levels of intelligibility and accuracy.
3.3 RESULTS AND ANALYSIS
Table 2 presents the results of the experiments. The accuracy of the system is described by a True Positive (TP) prediction rate and a False Negative (FN) error rate. The TP rate is the proportion of items predicted as "suspicious" when the desired output is the "suspicious" class. The FN rate is the proportion of items predicted as "suspicious" when the desired output is the "non-suspicious" class. A desirable system has a high TP rate and a low FN rate.
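The two rates, as defined in this paper (note that FN here counts “non-suspicious” items flagged as “suspicious”), can be computed as:

```python
def tp_fn_rates(predicted, desired):
    """TP%: fraction of desired-'suspicious' items also predicted
    'suspicious'.  FN% (this paper's definition): fraction of
    desired-'non-suspicious' items predicted 'suspicious'.
    Inputs are parallel lists of 0/1 flags."""
    susp = [p for p, d in zip(predicted, desired) if d]
    norm = [p for p, d in zip(predicted, desired) if not d]
    return 100.0 * sum(susp) / len(susp), 100.0 * sum(norm) / len(norm)
```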
As Table 2 shows, committee member [B] provides the most accurate and intelligible classifications for all experiments with this data. The best accuracy overall is achieved by [B], detecting 100% of the “suspicious” claims on both the training and the test set, while showing a relatively low false negative error of 5.79%. In addition, the most accurate and intelligible rule sets generated by [B] contain just three rules. Overall, the best rule set as reported by the committee decision maker is, for experiment 2:
(IS_LOW field57 OR field50)
IS_MEDIUM field56
(field56 OR field56)

and for experiment 3:

(Field49 OR Field56)
(IS_LOW Field26 OR field15)
IS_MEDIUM field56

These best rule sets are clearly dominated by field56, which suggests that this field is the single best indicator of a “suspicious” case. In summary, the prediction results of these best rule sets are satisfactory in terms of accuracy and intelligibility.
Another interesting observation is that the results change markedly depending on the specific experiment setup. While setup [B] always generated good rule sets, setup [C] produced almost meaningless rule sets with nearly random prediction results. Setup [D] gave consistent results, with the differences in TP and FN between the training and the test sets within 6%, but its best result is not satisfactory.
These results show again the large variance of committee member performance and illustrate the validity of the committee-decision maker approach for this problem.
In addition, the results of [A] and [B] suggest that the data set used in experiment 1 has somewhat different characteristics from the other two data sets. The quite large differences, about 40% for TP in [A] and 80% for FN in [B], underline the importance of data sampling during the fuzzy rule evolution stage.
4. CONCLUSION

This paper has described the application of a committee-decision-making evolutionary fuzzy system to credit card evaluation. The results for this real-world problem confirm previous results obtained in [1] for real home insurance data. They illustrate that the use of evolution with fuzzy logic can enable both accurate and intelligible classification of difficult data. The results also show the importance of committee decision making in helping to ensure that good results will always be generated.
REFERENCES
[1] Bentley, P. J., “Evolutionary, my dear Watson: Investigating Committee-based Evolution of Fuzzy Rules for the Detection of Suspicious Insurance Claims”, in Proceedings of GECCO 2000, July 8-12, Las Vegas, Nevada, USA, pp. ** - **, 2000.
[2] Bentley, P. J., “Evolving Fuzzy Detectives: An Investigation into the Evolution of Fuzzy Rules”, a late-breaking paper in GECCO '99, July 14-17, 1999, Orlando, Florida, USA, pp. 38-47, 1999.
[3] Bentley, P. J. & Wakefield, J. P., “Hierarchical Crossover in Genetic Algorithms”, in Proceedings of the 1st On-line Workshop on Soft Computing (WSC1), pp. 37-42, Nagoya University, Japan, 1996.
[4] Bentley, P. J. & Wakefield, J. P., “Finding Acceptable Solutions in the Pareto-Optimal Range using Multiobjective Genetic Algorithms”, in Chawdhry, P. K., Roy, R., & Pant, R. K. (eds), Soft Computing in Engineering Design and Manufacturing, Springer-Verlag London Limited, Part 5, pp. 231-240, 1997.
[5] Hartigan, J. A., Clustering Algorithms. Wiley, NY, 1975.
[6] Mallinson, H. and Bentley, P. J., “Evolving Fuzzy Rules for Pattern Classification”, in Proc. of the Int. Conf. on Computational Intelligence for Modelling, Control and Automation (CIMCA '99), 1999.

Similar Documents

Premium Essay

Data Mining

...Data Mining 0. Abstract With the development of different fields, artificial intelligence, machine learning, statistic, database, pattern recognition and neurocomputing they merge to a newly technology, the data mining. The ultimate goal of data mining is to obtain knowledge from the large database. It helps to discover previously unknown patterns, most of the time it is followed by deeper manual evaluation to explain and correlate the results to establish a new knowledge. It is often practically used by government, bank, insurance company and medical researcher. A general basic idea of data mining would be introduced. In this article, they are divided into four types, predictive modeling, database segmentation, link analysis and deviation detection. A brief introduction will explain the variation among them. For the next part, current privacy, ethical as well as technical issue regarding data mining will be discussed. Besides, the future development trends, especially concept of the developing sport data mining is written. Last but not the least different views on data mining including the good side, the drawback and our views are integrated into the paragraph. 1. Introduction This century, is the age of digital world. We are no longer able to live without the computing technology. Due to information explosion, we are having difficulty to obtain knowledge from large amount of unorganized data. One of the solutions, Knowledge Discovery in Database (KDD) is introduced...

Words: 1700 - Pages: 7

Premium Essay

Data Mining

...Data mining is an iterative process of selecting, exploring and modeling large amounts of data to identify meaningful, logical patterns and relationships among key variables.  Data mining is used to uncover trends, predict future events and assess the merits of various courses of action.             When employing, predictive analytics and data mining can make marketing more efficient. There are many techniques and methods, including business intelligence data collection. Predictive analytics is using business intelligence data for forecasting and modeling. It is a way to use predictive analysis data to predict future patterns. It is used widely in the insurance, medical and credit industries. Assessment of credit, and assignment of a credit score is probably the most widely known use of predictive analytics. Using events of the past, managers are able to estimate the likelihood of future events. Data mining aids predictive analysis by providing a record of the past that can be analyzed and used to predict which customers are most likely to renew, purchase, or purchase related products and services. Business intelligence data mining is important to your marketing campaigns. Proper data mining algorithms and predictive modeling can narrow your target audience and allow you to tailor your ads to each online customer as he or she navigates your site. Your marketing team will have the opportunity to develop multiple advertisements based on the past clicks of your visitors. Predictive...

Words: 1136 - Pages: 5

Premium Essay

Data Mining

...Data Mining Jenna Walker Dr. Emmanuel Nyeanchi Information Systems Decision Making May 30, 2012 Abstract Businesses are utilizing techniques such as data mining to create a competitive advantage customer loyalty. Data mining allows business to analyze customer information, such as demographics and purchase history for a better understanding of what the customers need and what they will respond to. Data mining currently takes place in several industries, and will only become even more widespread as the benefits are endless. The purpose of this paper is to gain research and examine data mining, its benefits to businesses, and issues or concerns it will need to overcome. Real world case studies of how data mining is used will also be presented for a deeper understanding. This study will show that despite its disadvantages, data mining is an important step for a business to better understand its customers, and is the future of business marking and operational planning. Tools and Benefits of data mining Before examining the benefits of data mining, it is important to understand what data mining is exactly. Data mining is defined as “a process that uses statistical, mathematical, artificial intelligence, and machine-learning techniques to extract and identify useful information and subsequent knowledge from large databases, including data warehouses” (Turban & Volonino, 2011). The information identified using data mining includes patterns indicating trends...

Words: 1900 - Pages: 8

Premium Essay

Data Mining

...Data Mining Professor Clifton Howell CIS500-Information Systems Decision Making March 7, 2014 Benefits of data mining to the businesses One of the benefits to data mining is the ability to utilize information that you have stored to predict the possibilities of consumer’s actions and needs to make better business decisions. We implement a business intelligence that will produce a predictive score for those consumers to determine these possibilities. Predictive analytics is the business intelligence technology that produces a predictive score for each customer or other organizational element. Assigning these predictive scores is the job of a predictive model which has, in turn, been trained over your data, learning from the experience of your organization. (Impact, 2014) The usefulness of predictive scoring is obvious. However, with no predictive model and no means to score your consumer, the possibility of gaining a competitive edge and revenue is also predictable. To discover consumer buying patterns from a transaction database, mining association rules are used to make better business decisions. However because users may only be interested in certain information from this database and do not want to invest a lot of time in searching for what they need, association discovery will assist in limiting the data to which only the end user needs. Association discovery will utilize algorithms to lessen the quantity of groupings of item sets or sequences in each customer...

Words: 1318 - Pages: 6

Premium Essay

Data Mining

...Data Mining Teresa M. Tidwell Dr. Sergey Samoilenko Information Systems for Decision Making September 2, 2012 Data Mining The use of data mining by companies assists them with identifying information and knowledge from databases and data warehouses that would be beneficial for the company. The information is often buried in databases, records, and files. With the use of tools such as queries and algorithms, companies can access data, analyze it, and use it to increase their profit. The benefits of using data mining, its reliability, and privacy concerns will be discussed. Benefits of Data Mining 1. Predictive Analytics: This type of analysis uses the customer’s data to make a specific model for the business. Existing information is used such as a customer’s recent purchases and their income, to create a prediction of future purchases and how much or what type of item would be purchased. The more variables used the more likely that the prediction will be correct. Such variables include the customer ranking, based on the number of and most recent purchases and the average profit made per customer purchase. Without data made available through web access and purchases by the customer, predictive analysis would be difficult to perform. The company, therefore, would not be able to plan nor predict how well they are performing. 2. Associations Discovery: This part of data mining helps the company to discover the “relationships hidden in larger data sets” (Pang-Ning...

Words: 1443 - Pages: 6

Premium Essay

Data Mining

...The increasing use of data mining by corporations and advertisers obviously creates apprehension from the perspective of the individual consumer due to privacy, security and the potential use of inaccurate information. The idea that there are data warehouses that contain customers’ personal information can be rather frightening. However, the use of data mining by these organizations can also lead to numerous benefits for consumers they otherwise would not have realized. Besides the obvious benefit of guiding consumers to products or services they’d be more interested in purchasing, the use of data mining by companies has also benefitted individuals’ health and financial safety. Not long after the use of data mining came into prominence the use of data mining consumer information vs. consumer privacy became a major issue in early 1998 after CVS and Giant entered into an agreement with Elensys, a Massachusetts direct marketing company, to send reminders to customers who had not renewed their prescriptions. However, neither CVS nor Giant explained how the program would work or received their customers' permission to transfer their prescription records to a third party (Pyatt Jr.). Giant and CVS’s rationale for entering into this agreement was “to develop a positive program to benefit consumers, many of whom don't take their medication properly,” (Pyatt Jr.). Even though their primary intention was good, Giant and CVS were not transparent about their agreement with Elensys...

Words: 949 - Pages: 4

Premium Essay

Data Mining

...Data Mining Objectives: Highlight the characteristics of Data mining Operations, Techniques and Tools. A Brief Overview Online Analytical Processing (OLAP): OLAP is the dynamic synthesis, analysis, and consolidation of large volumns of multi-dimensional data. Multi-dimensional OLAP support common analyst operations, such as: ▪ Considation – aggregate of data, e.g. roll-ups from branches to regions. ▪ Drill-down – showing details, just the reverse of considation. ▪ Slicing and dicing – pivoting. Looking at the data from different viewpoints. E.g. X, Y, Z axis as salesman, Nth quarter and products, or region, Nth quarter and products. A Brief Overview Data Mining: Construct an advanced architecture for storing information in a multi-dimension data warehouse is just the first step to evolve from traditional DBMS. To realize the value of a data warehouse, it is necessary to extract the knowledge hidden within the warehouse. Unlike OLAP, which reveal patterns that are known in advance, Data Mining uses the machine learning techniques to find hidden relationships within data. So Data Mining is to ▪ Analyse data, ▪ Use software techniques ▪ Finding hidden and unexpected patterns and relationships in sets of data. Examples of Data Mining Applications: ▪ Identifying potential credit card customer groups ▪ Identifying buying patterns of customers. ▪ Predicting trends of market...

Words: 1258 - Pages: 6

Free Essay

Data Mining

...DATA MINING Generally, data mining is the process of analyzing data from different perspectives and summarizing it into useful information. Data mining software is one of a number of tools for analyzing data. It allows users to analyze data from many different dimensions or angels, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding patterns among dozens of fields in large databases that are similar to one another. Data is any facts, numbers, or text that can be processed by a computer so in general it makes it easier for a company or business to see what the majority of customers want at a time. It’s almost like a survey that we don’t realize we are taking. I think it really can benefit consumers because we can walk into a place of business and see what we want on the shelves because it is in demand. Even better, the things we don’t want to purchase are not there because there is no demand for it. It gives us the choice to be heard and have a say in making decisions on things that impact us most. Information can be converted into knowledge about historical patterns and future trends. For example, summary information on retail supermarket sales can be analyzed in light of promotional efforts to provide knowledge of consumer buying behavior. Thus, a manufacturer or retailer could determine which items are most susceptible to promotional efforts. I don’t think data...


Data Mining

...DATA MINING FOR INTELLIGENCE-LED POLICING The paper concentrates on the use of data mining techniques in the police domain, with associative memory as the main technique. The author argues that structuring the data in this way is easier and thus supports effective decision making, and stresses keeping the process as simple as possible, since few police staff are technically versed in databases. The process follows a step-by-step procedure. The author then explains the advantages of this system in a police environment, mentioning the use of data mining by the Dutch forces and how it makes their work of prediction and scenario analysis easier. The tool is named "Data Detective" and operates over a large body of data stored in a data warehouse. The tool has been developed continuously over the years, making it more efficient than before. It automatically predicts trends and lays out full statistical reports, making it easier for the police to pinpoint criminals and their patterns. The process raises the challenge of developing better predictive models than before. The author argues that understanding the links and then predicting from them is important, and notes that this involves pattern matching, which is achieved by data mining. The tool also helps by automatically predicting whether criminal behavior matches a profile, and this proves to be quite...


Data Mining

...Running Head: DATA MINING
Assignment 4: Data Mining
Submitted by:
Submitted to:
Course:

Introduction
Data mining is also called Knowledge Discovery in Databases (KDD). It is a powerful technology with great potential to help companies focus on the most important information in their databases. Owing to the increased use of technology, interest in data mining has grown rapidly. Data mining can be used to predict future behavior rather than merely describe past events; this is done by analyzing existing information that may be stored in a data warehouse or information warehouse. Companies now use data mining techniques to assess their databases for trends, relationships, and outcomes, to improve their overall operations, and to discover new ways to improve their customer service. Data mining provides multiple benefits to government, businesses, and society, as well as to individual persons (Data Mining, 2011). Benefits of data mining to businesses: from a business point of view, the advantage of data mining is that large volumes of apparently pointless information are filtered into important and valuable business information, which can be stored in data warehouses. Whereas in the past the focus was on products, utilities, services, and marketing, the center of attention is now on customers: their choices, preferences, likes and dislikes; and data mining is possibly one of the most important tools...


Data Mining

...Data Mining 6/3/12 CIS 500 Data mining is the process of analyzing data from different perspectives and summarizing it into useful information. This information can be used to increase revenue, cut costs, or both. Data mining software is a major analytical tool for analyzing data: it allows the user to analyze data from many different angles, categorize the data, and summarize the relationships. In a nutshell, data mining is mostly used to find correlations or patterns among fields in very large databases. What can data mining ultimately do for a company? A lot. Data mining is primarily used by companies with a strong customer focus in the retail or financial sectors. It allows companies to determine relationships among internal factors such as price, product placement, and staff skill set, as well as external factors such as location, economic indicators, and competition from other companies. With data mining, a retailer can look at the point-of-sale records of a customer's purchases and send promotions to certain areas based on the purchases made. An example of this is Blockbuster examining movie rentals to send customers updates about new movies based on their previous rental history. Another example is American Express suggesting products to card holders based on their monthly purchase histories. Data mining consists of five major elements: • Extract, transform, and load transaction data onto the data...
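The first of the five elements named above, "extract, transform, and load", can be sketched as three small functions. This is a toy ETL pipeline under assumed inputs: the record layout (date, store, amount) and the cleaning rules are invented for the example, not taken from the text.

```python
# Hypothetical raw transaction lines, as they might arrive from a
# point-of-sale export (layout is an assumption for this sketch).
raw_rows = ["2024-01-05,STORE-9, 19.99 ", "2024-01-06,STORE-9,5.50"]

def extract(lines):
    """Extract: split each raw line into fields."""
    return [line.split(",") for line in lines]

def transform(rows):
    """Transform: trim whitespace and convert the amount to a number."""
    return [(date, store.strip(), float(amount)) for date, store, amount in rows]

def load(rows, warehouse):
    """Load: append the cleaned rows into the warehouse store."""
    warehouse.extend(rows)

warehouse = []
load(transform(extract(raw_rows)), warehouse)
print(warehouse)  # [('2024-01-05', 'STORE-9', 19.99), ('2024-01-06', 'STORE-9', 5.5)]
```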


Data Mining

...Data Mining
Introduction to Management Information System 04-73-213 Section 5
Professor Mao
March 22, 2011
Group 5: Carol DeBruyn, Jason Rekker, Matt Smith, Mike St. Denis
Odette School of Business – The University of Windsor

Table of Contents
Introduction
Data Mining
Text Mining
Conclusion
References

Introduction
Every day, millions of transactions occur at thousands of businesses. Each transaction provides valuable data to these businesses. This valuable data is then stored in data warehouses and data marts for later reference. The stored data represents a large asset that, until the advent of data mining, had been largely unexploited. As companies attempt to gain a competitive advantage over each other, new data mining techniques have been developed. The most recent revolution in data mining is text mining. Before text mining, companies could only leverage their numerical data; now they are beginning to benefit from the textual data stored in data warehouses as well.

Data Mining
Data mining, also known as data discovery or knowledge discovery, is the procedure that gathers, analyzes, and puts into perspective useful information. This facilitates the analysis of data from...
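The text-mining step the excerpt introduces usually begins by reducing free text to term frequencies before any pattern analysis. A minimal sketch, with an invented sample sentence:

```python
import re
from collections import Counter

# Toy first step of text mining: tokenize free text and count terms.
# The document string is an invented example.
document = "Data mining finds patterns; text mining finds patterns in text."
tokens = re.findall(r"[a-z]+", document.lower())
term_freq = Counter(tokens)
print(term_freq.most_common(3))
```

Real text-mining pipelines go further (stop-word removal, stemming, weighting schemes such as TF-IDF), but all of them start from a term count like this one.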


Data Mining

...Data Mining Assignment 4 "Data mining software is one of a number of analytical tools for analyzing data (Data Mining, para. 1)." We will learn about the competitive advantage, the reliability of such a tool, and privacy concerns for consumers. Data mining tools are used by the majority of companies to increase revenue and to build on relationships with current consumers. Let's explore the world of data mining technology in the following sections. "Data mining is primarily used today by companies with a strong consumer focus - retail, financial, communication, and marketing organizations. It enables these companies to determine relationships among "internal" factors such as price, product positioning, or staff skills, and "external" factors such as economic indicators, competition, and customer demographics. And, it enables them to determine the impact on sales, customer satisfaction, and corporate profits. Finally, it enables them to "drill down" into summary information to view detail transactional data (Data Mining, para. 7)." Data mining is used online to promote business ideas and products, and to market them in other ways. Data mining is used on political websites: when you visit some sites they take your information and then begin to send you material promoting the Republicans' and Democrats' message. This is how your voice counts. "Companies have used powerful computers to sift through volumes of supermarket scanner data and analyze market research...


Data Mining

...Data Mining By: Holly Gildea CIS 500 Dr. Janet Durgin June 09, 2013 Data Mining We learn that data mining is a method of evaluating data from different viewpoints and summarizing it into useful information. Such information can be beneficial, used to increase revenue, cut costs, and so on. There are four categories whose benefits with regard to data mining we will examine: predictive analytics to understand the behavior of customers; association discovery in products sold to customers; web mining to discover business intelligence from web customers; and clustering to find related customer information. To understand the behavior of customers through predictive analytics, we must first understand what predictive analytics is. "Predictive analytics is the process of dealing with a variety of data and applying various mathematical formulas to discover the best decision for a given situation" (ArticleSnatch, 2011). This gives any business a competitive edge and helps remove the guesswork from the decision-making process, therefore helping to find the right solution in a shorter amount of time. To find the solution faster, seven simple steps must first be worked through: define the company's problem; search multiple data sources; take the patterns observed in that data; create a model that contains the problem and the data; categorize the data and find important...


Data Mining

...1. Define data mining. Why are there many different names and definitions for data mining? Data mining is the process through which previously unknown patterns in data are discovered. Another definition would be "a process that uses statistical, mathematical, artificial intelligence, and machine learning techniques to extract and identify useful information and subsequent knowledge from large databases." This includes most types of automated data analysis. A third definition: data mining is the process of finding mathematical patterns in (usually) large sets of data; these can be rules, affinities, correlations, trends, or prediction models. Data mining has many definitions because the term has been stretched beyond its original limits by some software vendors, who use the popularity of data mining to label most forms of data analysis and so increase sales.

What recent factors have increased the popularity of data mining? The following are some of the most pronounced reasons:
* More intense competition at the global scale, driven by customers' ever-changing needs and wants in an increasingly saturated marketplace.
* General recognition of the untapped value hidden in large data sources.
* Consolidation and integration of database records, which enables a single view of customers, vendors, transactions, etc.
* Consolidation of databases and other data repositories into a single location in the form of a data warehouse.
* The exponential increase...
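One concrete instance of the "rules, affinities, correlations" patterns named in the definitions above is the association between two items across market baskets, measured by support and confidence. A minimal sketch with invented basket contents:

```python
# Invented market baskets for illustration.
baskets = [
    {"bread", "milk"},
    {"bread", "milk", "eggs"},
    {"bread"},
    {"milk"},
]

# Measure the affinity rule "bread -> milk".
both = sum(1 for b in baskets if {"bread", "milk"} <= b)
bread = sum(1 for b in baskets if "bread" in b)

support = both / len(baskets)  # how often the pair occurs at all
confidence = both / bread      # P(milk | bread)
print(support, confidence)     # 0.5 0.6666666666666666
```

Association-rule miners such as Apriori compute exactly these two quantities, just over every candidate item pair rather than one chosen rule.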
