Data Mining Classification: Basic Concepts, Decision Trees, and Model Evaluation
Lecture Notes for Chapter 4, Introduction to Data Mining, by Tan, Steinbach, Kumar

© Tan, Steinbach, Kumar, Introduction to Data Mining, 4/18/2004

Classification: Definition

Given a collection of records (the training set), where each record contains a set of attributes and one of the attributes is the class:
– Find a model that predicts the class attribute as a function of the values of the other attributes.
– Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model. Usually the given data set is divided into training and test sets: the training set is used to build the model and the test set to validate it.
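The train/test division described above can be sketched as a random holdout split. A minimal sketch, assuming a simple list of labeled records; the function name and the 70/30 fraction are illustrative choices, not from the notes:

```python
import random

def train_test_split(records, test_fraction=0.3, seed=42):
    """Shuffle the records and split them into training and test sets."""
    rng = random.Random(seed)
    shuffled = records[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (training set, test set)

records = list(range(10))              # stand-ins for labeled records
train, test = train_test_split(records)
```

The model is then built on `train` only, and its accuracy is measured on `test`.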


Illustrating Classification Task

Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

A learning algorithm builds a model from the training set (induction); the model is then applied to the test set (deduction).

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Examples of Classification Task

– Predicting tumor cells as benign or malignant
– Classifying credit card transactions as legitimate or fraudulent
– Classifying secondary structures of proteins as alpha-helix, beta-sheet, or random coil
– Categorizing news stories as finance, weather, entertainment, sports, etc.

Classification Techniques

– Decision Tree based Methods
– Rule-based Methods
– Memory based reasoning
– Neural Networks
– Naïve Bayes and Bayesian Belief Networks
– Support Vector Machines

Example of a Decision Tree

Training Data:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)

Refund?
  Yes → NO
  No → MarSt?
    Single, Divorced → TaxInc?
      < 80K → NO
      > 80K → YES
    Married → NO

Another Example of Decision Tree

A different tree for the same training data, splitting first on Marital Status:

MarSt?
  Married → NO
  Single, Divorced → Refund?
    Yes → NO
    No → TaxInc?
      < 80K → NO
      > 80K → YES

There could be more than one tree that fits the same data!

Decision Tree Classification Task

The same framework as before, with a tree induction algorithm as the learner: induction on the training set produces a decision tree (the model), which is then applied to the test set (deduction).

Apply Model to Test Data

Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree and follow the branch matching the record's attribute value at each node:
– Refund = No → take the No branch to MarSt
– Marital Status = Married → reach the leaf labeled NO

Assign Cheat = "No".


Decision Tree Induction

Many algorithms:
– Hunt's Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT

General Structure of Hunt's Algorithm

Let Dt be the set of training records that reach a node t.

General procedure:
– If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt.
– If Dt is an empty set, then t is a leaf node labeled by the default class, yd.
– If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.
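The three-case procedure above can be sketched directly as a recursive function. A minimal sketch under simplifying assumptions: records are dicts, and the attribute test is a plain equality split on each distinct value of a naively chosen attribute (a real inducer would search for the best split, as discussed later):

```python
from collections import Counter

def hunt(records, attributes, default_class, get_class):
    """Build a tree by Hunt's procedure. Returns a class label (leaf)
    or a tuple (attribute, {value: subtree})."""
    if not records:                              # empty D_t -> default-class leaf
        return default_class
    classes = [get_class(r) for r in records]
    if len(set(classes)) == 1:                   # pure node -> leaf
        return classes[0]
    if not attributes:                           # nothing left to split on
        return Counter(classes).most_common(1)[0][0]
    attr = attributes[0]                         # naive choice of test attribute
    majority = Counter(classes).most_common(1)[0][0]
    tree = {}
    for value in sorted({r[attr] for r in records}):
        subset = [r for r in records if r[attr] == value]
        tree[value] = hunt(subset, attributes[1:], majority, get_class)
    return (attr, tree)

tree = hunt(records=[{'Refund': 'Yes', 'Cheat': 'No'},
                     {'Refund': 'No', 'Cheat': 'Yes'},
                     {'Refund': 'No', 'Cheat': 'Yes'}],
            attributes=['Refund'], default_class='No',
            get_class=lambda r: r['Cheat'])
print(tree)   # ('Refund', {'No': 'Yes', 'Yes': 'No'})
```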

Hunt's Algorithm

Applied to the Cheat training data, the tree grows in stages:

Step 1: start with a single leaf predicting the majority class, Don't Cheat.
Step 2: split on Refund. Refund = Yes → Don't Cheat; Refund = No is still mixed.
Step 3: under Refund = No, split on Marital Status. Married → Don't Cheat; Single, Divorced is still mixed.
Step 4: under Single, Divorced, split on Taxable Income. < 80K → Don't Cheat; >= 80K → Cheat.

Tree Induction

Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.

Issues:
– Determine how to split the records
  How to specify the attribute test condition?
  How to determine the best split?
– Determine when to stop splitting


How to Specify Test Condition?

Depends on attribute type:
– Nominal
– Ordinal
– Continuous

Depends on number of ways to split:
– 2-way split
– Multi-way split

Splitting Based on Nominal Attributes

Multi-way split: use as many partitions as distinct values, e.g. CarType → {Family}, {Sports}, {Luxury}.

Binary split: divides the values into two subsets; need to find the optimal partitioning, e.g. CarType → {Sports, Luxury} vs. {Family}, or {Family, Luxury} vs. {Sports}.

Splitting Based on Ordinal Attributes

Multi-way split: use as many partitions as distinct values, e.g. Size → {Small}, {Medium}, {Large}.

Binary split: divides the values into two subsets; need to find the optimal partitioning, e.g. Size → {Small, Medium} vs. {Large}, or {Medium, Large} vs. {Small}.

What about the split {Small, Large} vs. {Medium}?

Splitting Based on Continuous Attributes

Different ways of handling:
– Discretization to form an ordinal categorical attribute.
  Static: discretize once at the beginning.
  Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering.
– Binary decision: (A < v) or (A ≥ v).
  Consider all possible splits and find the best cut; can be more compute-intensive.

Splitting Based on Continuous Attributes

(i) Binary split: Taxable Income > 80K? Yes / No
(ii) Multi-way split: Taxable Income? < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K


How to determine the Best Split

Before splitting: 10 records of class C0, 10 records of class C1. Three candidate test conditions:

Own Car?     Yes: C0:6, C1:4      No: C0:4, C1:6
Car Type?    Family: C0:1, C1:3   Sports: C0:8, C1:0   Luxury: C0:1, C1:7
Student ID?  c1: C0:1, C1:0  ...  c10: C0:1, C1:0   c11: C0:0, C1:1  ...  c20: C0:0, C1:1

Which test condition is the best?

How to determine the Best Split

Greedy approach: nodes with a homogeneous class distribution are preferred. Need a measure of node impurity:

C0:5, C1:5: non-homogeneous, high degree of impurity
C0:9, C1:1: homogeneous, low degree of impurity

Measures of Node Impurity

– Gini Index
– Entropy
– Misclassification error

How to Find the Best Split

Before splitting, the parent node has counts (N00, N01) and impurity M0. Candidate attribute A splits it into nodes N1 (N10, N11) and N2 (N20, N21) with impurities M1 and M2, whose weighted combination is M12. Candidate attribute B splits it into N3 (N30, N31) and N4 (N40, N41) with impurities M3 and M4, combined into M34.

Gain = M0 − M12 vs. M0 − M34: choose the test with the larger gain.

Measure of Impurity: GINI

Gini index for a given node t:

  GINI(t) = 1 − Σ_j [p(j|t)]²

where p(j|t) is the relative frequency of class j at node t.

– Maximum (1 − 1/nc) when records are equally distributed among all nc classes, implying least interesting information.
– Minimum (0.0) when all records belong to one class, implying most interesting information.

C1:0, C2:6 → Gini = 0.000
C1:1, C2:5 → Gini = 0.278
C1:2, C2:4 → Gini = 0.444
C1:3, C2:3 → Gini = 0.500

Examples for computing GINI

GINI(t) = 1 − Σ_j [p(j|t)]²

C1:0, C2:6.  P(C1) = 0/6 = 0, P(C2) = 6/6 = 1.
Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0

C1:1, C2:5.  P(C1) = 1/6, P(C2) = 5/6.
Gini = 1 − (1/6)² − (5/6)² = 0.278

C1:2, C2:4.  P(C1) = 2/6, P(C2) = 4/6.
Gini = 1 − (2/6)² − (4/6)² = 0.444
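The formula translates directly into a few lines of Python; the function below reproduces the worked examples above from raw class counts:

```python
def gini(counts):
    """Gini index of a node from its per-class record counts."""
    n = sum(counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(round(gini([0, 6]), 3))  # 0.0
print(round(gini([1, 5]), 3))  # 0.278
print(round(gini([2, 4]), 3))  # 0.444
print(round(gini([3, 3]), 3))  # 0.5
```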

Splitting Based on GINI

Used in CART, SLIQ, SPRINT. When a node p is split into k partitions (children), the quality of the split is computed as

  GINI_split = Σ_{i=1..k} (n_i / n) GINI(i)

where n_i = number of records at child i, and n = number of records at node p.

Binary Attributes: Computing GINI Index

Splits into two partitions. Effect of weighting partitions: larger and purer partitions are sought.

Parent: C1:6, C2:6, Gini = 0.500.

Split B? gives N1 (C1:5, C2:2) and N2 (C1:1, C2:4):
Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
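Recomputing the B? example in code, with each child's class probabilities taken over that child's own record count (7 records in N1, 5 in N2):

```python
def gini(counts):
    """Gini index of a node from its per-class record counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def gini_split(children):
    """Weighted Gini of a split; children is a list of per-class count lists."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

# Binary split B?: N1 = (5 C1, 2 C2), N2 = (1 C1, 4 C2)
print(round(gini([5, 2]), 3))                  # 0.408
print(round(gini([1, 4]), 3))                  # 0.32
print(round(gini_split([[5, 2], [1, 4]]), 3))  # 0.371
```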

Categorical Attributes: Computing Gini Index

For each distinct value, gather counts for each class in the dataset, and use the count matrix to make decisions.

Multi-way split:
        Family  Sports  Luxury
  C1    1       2       1
  C2    4       1       1
  Gini = 0.393

Two-way split (find the best partition of values):
        {Sports, Luxury}  {Family}
  C1    3                 1
  C2    2                 4
  Gini = 0.400

        {Family, Luxury}  {Sports}
  C1    2                 2
  C2    5                 1
  Gini = 0.419

Continuous Attributes: Computing Gini Index

Use binary decisions based on one value (e.g., Taxable Income > 80K?). Several choices for the splitting value v:
– Number of possible splitting values = number of distinct values.
– Each splitting value has a count matrix associated with it: class counts in each of the partitions, A < v and A ≥ v.

Simple method to choose the best v:
– For each v, scan the database to gather the count matrix and compute its Gini index.
– Computationally inefficient! Repetition of work.

Continuous Attributes: Computing Gini Index...

For efficient computation, for each attribute:
– Sort the attribute values.
– Linearly scan these values, each time updating the count matrix and computing the Gini index.
– Choose the split position that has the least Gini index.

Sorted Taxable Income values and classes, with candidate split positions between them:

Income:  60   70   75   85   90   95   100  120  125  220
Cheat:   No   No   No   Yes  Yes  Yes  No   No   No   No
Split:  55   65   72   80   87   92   97   110  122  172  230
Gini:  0.420 0.400 0.375 0.343 0.417 0.400 0.300 0.343 0.375 0.400 0.420

The best split is at 97, with Gini = 0.300.
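The sorted linear scan can be sketched as follows; one convention difference to note: this sketch cuts at the midpoint between adjacent distinct values (97.5 here), while the notes place the cut at 97:

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def best_split(values, labels):
    """One sorted pass over a continuous attribute: try the midpoint between
    each pair of adjacent distinct values, updating class counts incrementally."""
    pairs = sorted(zip(values, labels))
    classes = sorted(set(labels))
    left = {c: 0 for c in classes}
    right = {c: 0 for c in classes}
    for _, y in pairs:
        right[y] += 1
    best = (float('inf'), None)
    n = len(pairs)
    for i in range(n - 1):
        v, y = pairs[i]
        left[y] += 1                        # move one record to the left side
        right[y] -= 1
        if pairs[i + 1][0] == v:            # can only cut between distinct values
            continue
        cut = (v + pairs[i + 1][0]) / 2
        g = (i + 1) / n * gini([left[c] for c in classes]) + \
            (n - i - 1) / n * gini([right[c] for c in classes])
        if g < best[0]:
            best = (g, cut)
    return best

income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
cheat = ['No', 'No', 'No', 'No', 'Yes', 'No', 'No', 'Yes', 'No', 'Yes']
g, cut = best_split(income, cheat)
print(round(g, 3), cut)   # 0.3 97.5
```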

Alternative Splitting Criteria based on INFO

Entropy at a given node t:

  Entropy(t) = − Σ_j p(j|t) log₂ p(j|t)

where p(j|t) is the relative frequency of class j at node t.

– Measures homogeneity of a node.
  Maximum (log nc) when records are equally distributed among all classes, implying least information.
  Minimum (0.0) when all records belong to one class, implying most information.
– Entropy-based computations are similar to the GINI index computations.

Examples for computing Entropy

Entropy(t) = − Σ_j p(j|t) log₂ p(j|t)

C1:0, C2:6.  P(C1) = 0/6 = 0, P(C2) = 6/6 = 1.
Entropy = − 0 log 0 − 1 log 1 = − 0 − 0 = 0

C1:1, C2:5.  P(C1) = 1/6, P(C2) = 5/6.
Entropy = − (1/6) log₂ (1/6) − (5/6) log₂ (5/6) = 0.65

C1:2, C2:4.  P(C1) = 2/6, P(C2) = 4/6.
Entropy = − (2/6) log₂ (2/6) − (4/6) log₂ (4/6) = 0.92
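The same examples in code, using the convention 0 log 0 = 0:

```python
from math import log2

def entropy(counts):
    """Entropy (base 2) of a node from its per-class counts; 0 log 0 := 0."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

print(round(entropy([0, 6]), 2))  # 0.0
print(round(entropy([1, 5]), 2))  # 0.65
print(round(entropy([2, 4]), 2))  # 0.92
```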

Splitting Based on INFO...

Information Gain:

  GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) Entropy(i)

where parent node p is split into k partitions, and n_i is the number of records in partition i.

– Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
– Used in ID3 and C4.5.
– Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

Splitting Based on INFO...

Gain Ratio:

  GainRATIO_split = GAIN_split / SplitINFO,  where  SplitINFO = − Σ_{i=1..k} (n_i / n) log (n_i / n)

Parent node p is split into k partitions; n_i is the number of records in partition i.

– Adjusts information gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
– Used in C4.5.
– Designed to overcome the disadvantage of information gain.
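Information gain and gain ratio can be sketched together; the example split is the "Own Car?" test from the earlier best-split slide (a 10/10 parent split into {6,4} and {4,6}):

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

def info_gain(parent, children):
    """GAIN_split = Entropy(parent) - sum_i (n_i/n) * Entropy(child i)."""
    n = sum(parent)
    return entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)

def gain_ratio(parent, children):
    """Information gain normalized by SplitINFO, as in C4.5."""
    n = sum(parent)
    split_info = -sum(sum(c) / n * log2(sum(c) / n) for c in children if sum(c))
    return info_gain(parent, children) / split_info

# "Own Car?" split of a (10 C0, 10 C1) node: {6,4} vs {4,6}
parent = [10, 10]
children = [[6, 4], [4, 6]]
print(round(info_gain(parent, children), 3))   # 0.029
print(round(gain_ratio(parent, children), 3))  # 0.029
```

Here SplitINFO = 1 for the even two-way split, so the two measures coincide; a many-way split like Student ID would have a large SplitINFO and a much smaller gain ratio.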

Splitting Criteria based on Classification Error

Classification error at a node t:

  Error(t) = 1 − max_i P(i|t)

Measures the misclassification error made by a node.
– Maximum (1 − 1/nc) when records are equally distributed among all classes, implying least interesting information.
– Minimum (0.0) when all records belong to one class, implying most interesting information.

Examples for Computing Error

Error(t) = 1 − max_i P(i|t)

C1:0, C2:6.  P(C1) = 0/6 = 0, P(C2) = 6/6 = 1.
Error = 1 − max(0, 1) = 1 − 1 = 0

C1:1, C2:5.  P(C1) = 1/6, P(C2) = 5/6.
Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6

C1:2, C2:4.  P(C1) = 2/6, P(C2) = 4/6.
Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3

Comparison among Splitting Criteria

For a 2-class problem (figure: Entropy, Gini, and Misclassification error plotted against the fraction p of records in one class; all three are maximal at p = 0.5 and zero at p = 0 or p = 1).

Misclassification Error vs Gini

Parent: C1:7, C2:3, Gini = 0.42.

Split A? gives N1 (C1:3, C2:0) and N2 (C1:4, C2:3):
Gini(N1) = 1 − (3/3)² − (0/3)² = 0
Gini(N2) = 1 − (4/7)² − (3/7)² = 0.489
Gini(Children) = 3/10 × 0 + 7/10 × 0.489 = 0.342

Gini improves, while the misclassification error stays the same (3/10 before and after the split)!


Stopping Criteria for Tree Induction

– Stop expanding a node when all the records belong to the same class.
– Stop expanding a node when all the records have similar attribute values.
– Early termination (to be discussed later).

Decision Tree Based Classification

Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown records
– Easy to interpret for small-sized trees
– Accuracy comparable to other classification techniques for many simple data sets

Example: C4.5

– Simple depth-first construction.
– Uses Information Gain.
– Sorts continuous attributes at each node.
– Needs the entire data set to fit in memory; unsuitable for large datasets (would need out-of-core sorting).

You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz

Practical Issues of Classification

– Underfitting and Overfitting
– Missing Values
– Costs of Classification

Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.
Circular points: 0.5 ≤ sqrt(x1² + x2²) ≤ 1
Triangular points: sqrt(x1² + x2²) > 1 or sqrt(x1² + x2²) < 0.5

Underfitting and Overfitting

Underfitting: when the model is too simple, both training and test errors are large. Overfitting: as the model grows more complex, training error keeps decreasing while test error starts to rise.

Overfitting due to Noise

The decision boundary is distorted by noise points.

Overfitting due to Insufficient Examples

Lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region: an insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.

Notes on Overfitting

– Overfitting results in decision trees that are more complex than necessary.
– Training error no longer provides a good estimate of how well the tree will perform on previously unseen records.
– Need new ways of estimating errors.

Estimating Generalization Errors

– Re-substitution errors: error on training set (Σ e(t)).
– Generalization errors: error on test set (Σ e'(t)).

Methods for estimating generalization errors:
– Optimistic approach: e'(t) = e(t)
– Pessimistic approach: for each leaf node, e'(t) = e(t) + 0.5; total errors e'(T) = e(T) + N × 0.5 (N: number of leaf nodes).
  For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances):
  Training error = 10/1000 = 1%
  Generalization error = (10 + 30 × 0.5)/1000 = 2.5%
– Reduced error pruning (REP): uses a validation data set to estimate generalization error.

Occam's Razor

– Given two models with similar generalization errors, one should prefer the simpler model over the more complex one.
– For complex models, there is a greater chance that the model was fitted accidentally by errors in the data.
– Therefore, one should include model complexity when evaluating a model.

Minimum Description Length (MDL)

Setting: person A has a table of records X with known labels y and builds a model (e.g., a decision tree); person B has the same records X but unknown labels. A must transmit the labels to B as cheaply as possible, either directly or by encoding a model plus its mistakes.

Cost(Model, Data) = Cost(Data|Model) + Cost(Model)
– Cost is the number of bits needed for encoding.
– Search for the least costly model.
– Cost(Data|Model) encodes the misclassification errors.
– Cost(Model) uses node encoding (number of children) plus splitting-condition encoding.

How to Address Overfitting

Pre-pruning (early stopping rule):
– Stop the algorithm before it becomes a fully-grown tree.
– Typical stopping conditions for a node:
  Stop if all instances belong to the same class.
  Stop if all the attribute values are the same.
– More restrictive conditions:
  Stop if the number of instances is less than some user-specified threshold.
  Stop if the class distribution of instances is independent of the available features (e.g., using a χ² test).
  Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).

How to Address Overfitting…

Post-pruning:
– Grow the decision tree to its entirety.
– Trim the nodes of the decision tree in a bottom-up fashion.
– If generalization error improves after trimming, replace the sub-tree by a leaf node whose class label is determined from the majority class of instances in the sub-tree.
– Can use MDL for post-pruning.

Example of Post-Pruning

Node before splitting: Class = Yes: 20, Class = No: 10.
Training error (before splitting) = 10/30
Pessimistic error = (10 + 0.5)/30 = 10.5/30

Split on A into four children:
A1: Class = Yes: 8, Class = No: 4
A2: Class = Yes: 3, Class = No: 4
A3: Class = Yes: 4, Class = No: 1
A4: Class = Yes: 5, Class = No: 1

Training error (after splitting) = 9/30
Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30

The pessimistic error increases, so PRUNE the split.
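The pruning decision reduces to comparing the two pessimistic estimates; a minimal sketch of that comparison:

```python
def pessimistic_error(errors, leaves, n, penalty=0.5):
    """e'(T) = (e(T) + penalty * #leaves) / N, the pessimistic estimate."""
    return (errors + penalty * leaves) / n

# Example: a 20 Yes / 10 No node split into four children (errors 4+3+1+1 = 9)
before = pessimistic_error(errors=10, leaves=1, n=30)   # 10.5/30
after = pessimistic_error(errors=9, leaves=4, n=30)     # 11/30
print("PRUNE" if after >= before else "KEEP")           # PRUNE
```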

Examples of Post-pruning

Case 1: a node split into two children with counts (C0:11, C1:3) and (C0:2, C1:4).
Case 2: a node split into two children with counts (C0:14, C1:3) and (C0:2, C1:2).

– Optimistic error? Don't prune in either case.
– Pessimistic error? Don't prune case 1, prune case 2.
– Reduced error pruning? Depends on the validation set.

Handling Missing Attribute Values

Missing values affect decision tree construction in three different ways:
– How impurity measures are computed
– How to distribute an instance with a missing value to child nodes
– How a test instance with a missing value is classified

Computing Impurity Measure

The training data is the Cheat table from before, except that Tid 10 has a missing Refund value. Class counts by Refund value:

            Class = Yes  Class = No
Refund=Yes  0            3
Refund=No   2            4
Refund=?    1            0

Before splitting: Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813

Split on Refund (using the 9 records with known Refund):
Entropy(Refund=Yes) = 0
Entropy(Refund=No) = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
Gain = 0.9 × (0.8813 − 0.551) = 0.2973
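Recomputing the example in code; note the gain is scaled by 0.9, the fraction of records with a known Refund value, so 0.9 × (0.8813 − 0.551) evaluates to 0.2973 rather than the difference 0.3303 itself:

```python
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

# Parent: 3 Yes / 7 No. Known-Refund children: Yes -> (0,3), No -> (2,4).
parent = entropy([3, 7])
children = 3 / 10 * entropy([0, 3]) + 6 / 10 * entropy([2, 4])
gain = 9 / 10 * (parent - children)     # scaled by fraction of known values
print(round(parent, 4), round(children, 4), round(gain, 4))
```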

Distribute Instances

Record Tid 10 (Refund = ?, Single, 90K, Class = Yes) reaches the Refund node. Based on the other nine records, the probability that Refund = Yes is 3/9 and the probability that Refund = No is 6/9, so the record is assigned to the Refund = Yes child with weight 3/9 and to the Refund = No child with weight 6/9:

            Refund=Yes  Refund=No
Class=Yes   0 + 3/9     2 + 6/9
Class=No    3           4

Classify Instances

New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Weighted class counts at the Marital Status node:

            Married  Single  Divorced  Total
Class=No    3        1       0         4
Class=Yes   6/9      1       1         2.67
Total       3.67     2       1         6.67

Probability that Marital Status = Married is 3.67/6.67; probability that Marital Status = {Single, Divorced} is 3/6.67.

Other Issues

– Data Fragmentation
– Search Strategy
– Expressiveness
– Tree Replication

Data Fragmentation

– The number of instances gets smaller as you traverse down the tree.
– The number of instances at the leaf nodes could be too small to make any statistically significant decision.

Search Strategy

– Finding an optimal decision tree is NP-hard.
– The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution.
– Other strategies? Bottom-up; bi-directional.

Expressiveness

Decision trees provide an expressive representation for learning discrete-valued functions, but they do not generalize well to certain types of Boolean functions.

Example: the parity function:
– Class = 1 if there is an even number of Boolean attributes with truth value True
– Class = 0 if there is an odd number of Boolean attributes with truth value True
For accurate modeling, a complete tree is required.

Decision trees are not expressive enough for modeling continuous variables, particularly when each test condition involves only a single attribute at a time.

Decision Boundary

Example tree over two continuous attributes x and y in [0, 1], with leaf class counts:

x < 0.43?
  Yes → y < 0.47?
    Yes → (4 : 0)
    No → (0 : 4)
  No → y < 0.33?
    Yes → (0 : 3)
    No → (4 : 0)

– The border line between two neighboring regions of different classes is known as the decision boundary.
– The decision boundary is parallel to the axes because each test condition involves a single attribute at a time.

Oblique Decision Trees

Test conditions may involve multiple attributes (e.g., x + y < 1): a more expressive representation, but finding the optimal test condition is computationally expensive.

ROC thresholding: any instance whose score exceeds the threshold t is classified as positive. At the threshold t shown: TP = 0.5, FN = 0.5, FP = 0.12, TN = 0.88.

ROC Curve

(TPR, FPR):
– (0, 0): declare everything to be negative class
– (1, 1): declare everything to be positive class
– (1, 0): ideal

Diagonal line:
– Random guessing
– Below the diagonal line: prediction is opposite of the true class

Using ROC for Model Comparison

– Neither model consistently outperforms the other: M1 is better for small FPR, M2 is better for large FPR.
– Area under the ROC curve (AUC): ideal, area = 1; random guess, area = 0.5.

How to Construct an ROC curve

– Use a classifier that produces a posterior probability P(+|A) for each test instance A.
– Sort the instances according to P(+|A) in decreasing order.
– Apply a threshold at each unique value of P(+|A).
– Count the number of TP, FP, TN, FN at each threshold.
– TP rate, TPR = TP/(TP + FN); FP rate, FPR = FP/(FP + TN).

Instance:    1     2     3     4     5     6     7     8     9     10
P(+|A):      0.95  0.93  0.87  0.85  0.85  0.85  0.76  0.53  0.43  0.25
True class:  +     +     −     −     −     +     −     +     −     +

How to construct an ROC curve

Threshold ≥  0.25  0.43  0.53  0.76  0.85  0.85  0.85  0.87  0.93  0.95  1.00
TP           5     4     4     3     3     3     3     2     2     1     0
FP           5     5     4     4     3     2     1     1     0     0     0
TN           0     0     1     1     2     3     4     4     5     5     5
FN           0     1     1     2     2     2     2     3     3     4     5
TPR          1     0.8   0.8   0.6   0.6   0.6   0.6   0.4   0.4   0.2   0
FPR          1     1     0.8   0.8   0.6   0.4   0.2   0.2   0     0     0

ROC Curve: plot of TPR against FPR at these thresholds.
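The threshold sweep can be sketched as one pass over the score-sorted instances; tied scores (the three 0.85s) are emitted as a single point, matching the "threshold at each unique value" rule:

```python
def roc_points(scores, labels):
    """Sweep a threshold over the sorted scores and collect (FPR, TPR) pairs."""
    pos = labels.count('+')
    neg = len(labels) - pos
    pairs = sorted(zip(scores, labels), reverse=True)
    points = [(0.0, 0.0)]                 # threshold above every score
    tp = fp = 0
    for i, (s, y) in enumerate(pairs):
        if y == '+':
            tp += 1
        else:
            fp += 1
        # emit a point only at the last of a run of tied scores
        if i == len(pairs) - 1 or pairs[i + 1][0] != s:
            points.append((fp / neg, tp / pos))
    return points

scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = ['+', '+', '-', '-', '-', '+', '-', '+', '-', '+']
pts = roc_points(scores, labels)
print(pts)
```

The resulting points run from (0, 0) through (0.6, 0.6) at the 0.85 tie to (1.0, 1.0), matching the TPR/FPR rows of the table above.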

Test of Significance

Given two models:
– Model M1: accuracy = 85%, tested on 30 instances
– Model M2: accuracy = 75%, tested on 5000 instances

Can we say M1 is better than M2?
– How much confidence can we place on the accuracies of M1 and M2?
– Can the difference in performance be explained as a result of random fluctuations in the test set?

Confidence Interval for Accuracy

Prediction can be regarded as a Bernoulli trial:
– A Bernoulli trial has 2 possible outcomes; for prediction: correct or wrong.
– A collection of Bernoulli trials has a binomial distribution: x ~ Bin(N, p), where x is the number of correct predictions.
– E.g., toss a fair coin 50 times; how many heads would turn up? Expected number of heads = N × p = 50 × 0.5 = 25.

Given x (the number of correct predictions), or equivalently acc = x/N, and N (the number of test instances), can we predict p (the true accuracy of the model)?

Confidence Interval for Accuracy

For large test sets (N > 30), acc has a normal distribution with mean p and variance p(1 − p)/N:

  P( −Z_{α/2} < (acc − p) / sqrt(p(1 − p)/N) < Z_{α/2} ) = 1 − α

Solving for p gives the confidence interval:

  p = ( 2·N·acc + Z²_{α/2} ± Z_{α/2} · sqrt(Z²_{α/2} + 4·N·acc − 4·N·acc²) ) / ( 2(N + Z²_{α/2}) )

When comparing two models, if the confidence interval for the difference in their errors contains 0, the difference may not be statistically significant.
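The quadratic solution for p can be sketched as follows; the example numbers (acc = 0.8 on N = 100 instances, z = 1.96 for a 95% interval) are illustrative:

```python
from math import sqrt

def accuracy_interval(acc, n, z=1.96):
    """Confidence interval for the true accuracy p, solving the quadratic
    (acc - p)^2 = z^2 * p(1-p)/N  (z = 1.96 gives a 95% interval)."""
    center = 2 * n * acc + z * z
    spread = z * sqrt(z * z + 4 * n * acc - 4 * n * acc * acc)
    denom = 2 * (n + z * z)
    return (center - spread) / denom, (center + spread) / denom

lo, hi = accuracy_interval(0.80, 100)
print(round(lo, 3), round(hi, 3))   # 0.711 0.867
```

As N grows, the interval tightens around the observed accuracy.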

Comparing Performance of 2 Algorithms

Each learning algorithm may produce k models:
– L1 may produce M11, M12, ..., M1k
– L2 may produce M21, M22, ..., M2k

If the models are generated on the same test sets D1, D2, ..., Dk (e.g., via cross-validation):
– For each set, compute d_j = e1j − e2j.
– d_j has mean d_t and variance σ_t².
– Estimate:

  σ̂_t² = Σ_{j=1..k} (d_j − d̄)² / (k(k − 1))
  d_t = d̄ ± t_{1−α, k−1} · σ̂_t
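The paired comparison can be sketched as follows; the per-fold error rates and the t critical value (2.776 for 95% confidence with k − 1 = 4 degrees of freedom) are hypothetical illustration data, not from the notes:

```python
from math import sqrt

def paired_interval(e1, e2, t_crit):
    """Confidence interval for the mean error difference of two algorithms
    over k paired folds: d_bar +/- t * sigma_hat, with
    sigma_hat^2 = sum (d_j - d_bar)^2 / (k (k - 1))."""
    k = len(e1)
    d = [a - b for a, b in zip(e1, e2)]
    d_bar = sum(d) / k
    var = sum((x - d_bar) ** 2 for x in d) / (k * (k - 1))
    half = t_crit * sqrt(var)
    return d_bar - half, d_bar + half

# Hypothetical per-fold error rates for two learners over k = 5 folds
e1 = [0.20, 0.22, 0.18, 0.25, 0.21]
e2 = [0.18, 0.21, 0.20, 0.22, 0.19]
lo, hi = paired_interval(e1, e2, t_crit=2.776)   # t for 95%, 4 d.o.f.
print(lo <= 0 <= hi)   # interval contains 0 -> difference may not be significant
```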


Words: 303 - Pages: 2

Free Essay

Table

...(L) SUK H207 AHCC1113 (P) V207 AHCC1113 (P) M102 (L) FAI DKB AELE0343 (P) SUK CC205 JS AHCC1163 (P) FAI M003 AELE0343 (T) SUK H207 JS AHCC1163 AHCC1153 (L) WYL DKB AHCC1153 (P) WYL LAB 2 AHCC1163 (P) FAI V207 AHCC1153 (P) WYL LAB 3 AELE0343 READING AND WRITING AHCC1153 BASIC SOFTWARE APPLICATION I AEPD1013 STUDY SKILLS AHCC1163 DRAWING BASIC AHCC1103 ART APPRECIATION AHCC1113 GRAPHIC DESIGN BASICS School of Social Science and Humanities Certificate in Graphic Design - Year 1 1st semester 2013/2014 8 9 10 AHCC1103 (T) HAR V102 11 AHCC1113 (L) JS DK 6 Tutorial Group: M1CGD2 12 1 2 3 4 AEPD1013 (L) PRA H209 5 6 7 8 9 Mon Tue Wed Thu Fri Sat AHCC1103 (L) HAR DK AB1 AELE0343 (L) SUK H207 AEPD1013 (T) PRA H207 AHCC1163 (L) FAI DKB AELE0343 (P) SUK CC205 AHCC1153 (L) WYL DKB AHCC1113 (P) V304 AHCC1113 (P) V207 AELE0343 (T) SUK H207 JS JS AHCC1163 (P) FAI V208 AHCC1153 (P) WYL LAB 2 AHCC1153 (P) WYL LAB 2 AHCC1163 (P) FAI V202 AELE0343 READING AND WRITING AHCC1153 BASIC SOFTWARE APPLICATION I AEPD1013 STUDY SKILLS AHCC1163 DRAWING BASIC AHCC1103 ART APPRECIATION AHCC1113 GRAPHIC DESIGN BASICS School of Social Science and Humanities Certificate in Graphic Design - Year 1 1st semester 2013/2014 8 9 AEPD1013 (L) PRA H209 10 11 AHCC1113 (L) JS DK 6 Tutorial Group: M1CGD3 12 1 2 3 AHCC1103 (T) HAR V104 AELE0343 (L) SUK H207 4 5 6 7 8 9 Mon Tue Wed AHCC1103 (L) HAR DK AB1 AHCC1163...

Words: 517 - Pages: 3

Premium Essay

Course Descriptions

...GE117 Composition I | A 4 credit hour Composition course This course covers phases of the writing process, with special emphasis on the structure of writing and techniques for writing clearly, precisely and persuasively. Prerequisite or Corequisite: TB133 Strategies for the Technical Professional or equivalent GE127 College Mathematics I | A 4 credit hour Mathematics course This course will include, but is not limited to, the following concepts: quadratic, polynomial and radical equations, linear functions and their graphs, systems of linear equations, functions and their properties and triangles and trigonometric functions. Activities will include solving problems and using appropriate technological tools. Prerequisite: GE184 Problem Solving or TB184 Problem Solving or GE150 Survey of the Sciences or equivalent; Prerequisite or Corequisite: TB133 Strategies for the Technical Professional or equivalent GE184 Problem Solving | A 4 credit hour Science course This course introduces students to problem solving techniques and helps them apply the tools of critical reading, analytical thinking and mathematics to help solve problems in practical applications. GE192 College Mathematics II | A 4 credit hour Mathematics course This course will include, but is not limited to, the following concepts: exponential and logarithmic equations and functions, graphs of trigonometric functions, trigonometric equations, polar coordinates, oblique triangles, vectors and sequences. Prerequisite:...

Words: 1186 - Pages: 5

Premium Essay

Kickflip Research Paper

...How to perform a kickflip with skateboard You are a novice skater and you want to learn the techniques to do the trick? This guide will explain how to make one of the tricks the best known and the basic art of skateboarding: the kickflip! The kickflip commonly called flip is one of the basic tricks of skateboarding . This number was invented by the famous American skater Rodney Mullen in the mid -eighties. This development will add to the other maneuvers, such as the grind or manual, thus creating trick combos very stylish! Make sure you have at hand: *skateboard *helmet *knee 1.This lesson will teach you how to close a kickflip . Many skater , to learn, trying several times to perform the kickflip standing still, so you can learn the correct...

Words: 417 - Pages: 2

Free Essay

Abacus Aat Pricing & Tktg Quick Ref 073109.Pdf

...Display Basic Entry FT1 Tax Details from List RB2 Specific Tax Code RB2MNLLAX–PR/LAXMIA–AA Passenger Facility Charge Basic Entry Optional Qualifiers Travel Date Multiple Carriers Display All Types of Fares Return Travel Date Fare Display from Segment Continuation Entries Redisplay Fare Tax Breakdown Display Display RBD Conditions Display RBD by Carrier FQHELP FQBKKMNL–PR/USD FQMNLBKK–TG¥QYEE6M FQMNLBWN–BR¥BY FQMNLTPE–PR¥PINF FQMNLTPE–PR¥PSEA/LBR TXN∗BKK TXN∗1 TXN∗∗XA PXC∗SFO Basic Entry Fare Rule by Line Number Redisplay Rule Information Routing Map FARE RULE DISPLAY RDMNLLAX11SEPLEE6M–PR RD2 Quick Reference Page RDHELP RD∗ Rule Menu of Categories RD2∗M RD2∗RTG Specific Categories RD2∗5/15/22 NON-ITINERARY PRICING WQMNLHKGLAX–ACX/VCX Quick Reference Page Basic Entry Optional Qualifiers Currency Code Passenger Type Code Operating Carrier Date & Booking Class Surface Segment Connection City Continuation Entries Fare Details from List Fare Rule Display Rule Display of First Fare Rule Display from Fare List Fare Basis Code WQHELP WQMNLHKG–ACX/VCX/MUSD WQMNLKULMNL–AMH/VMH/PCNN/PINF WQMNL/ASQSIN/APRMNL–VSQ WQMNL29MAR/CYBKK13APR/CSMNL–ATG/VTG WQMNL14APR/APRHKG/–BKK19APR/ATGMNL–VPR WQMNL24APR/XHKG24APRSFO29MAY/XHKG31MAYMNL–ACX/VCX WQ¥1 WQRD∗ WQRD∗L3 WQRD∗QYOX Fare Calculation WQ¥DF2 Rule Menu of Categories WQRD∗L2¥M Specific Categories WQRD∗L2¥C6/7 Quick Reference Page WQRDHELP FAREX PRICING NET FARE LIST Basic Entry Multiple Carriers NET FARE DISPLAY Basic Entry...

Words: 1491 - Pages: 6

Premium Essay

Lab 6

...and logical operators in computer programs. Use compound logical conditions. Required Setup and Tools Standard lab setup Lab Manual Lab Demo Media and Startup Files CD Recommended Procedures Complete Lab 6.2: Flowcharts from the lab manual. Deliverables Submit the following at the end of this lab activity: Corrected variable declarations and initializations using Visio in Step 2 Corrected module calls using Visio in Step 3 Corrected inputOptions() module using Visio in Step 4 Corrected displayProvider() module with case labels and flow lines using Visio in Step 5 Corrected displayChoices() module with logical operators using Visio in Step 6 Completed and workable flowchart using Visio in Step 7 Unit 6 Lab 6.3: Visual Basic Programming Challenge Learning Objectives and Outcomes Use flowcharts and pseudocode to represent Boolean conditions. Use if-then, if-then-else, and case structures in a computer program. Use Boolean variables and logical operators in computer programs. Use compound...

Words: 355 - Pages: 2

Free Essay

Sales and Inventory System

...Rica A. Hernandez BSCS 2101 Start Microsoft Visual Basic 6.0 (VB6) The New Project dialog box will appear. If it doesn't go up to the menu bar and select File -> New Project In the New Project dialog select Standard EXE, and click the Open Button. This will bring up your new Project 1 application with Form1 visible. Already Visual Basic has done a lot for us. As you can see this tutorial isn't very long but already you have a full working application. You can see your new program in action by going up to the menu bar and selecting Run -> Start (Or simply press the F5 key). You should see the Form1 window appear: This is a fully functional application. You can move it around, minimize and maximize it, and close it down. For you to do this same thing in C++ - the original language most of Windows was written in you would have written hundreds of lines of code. You area already getting to see some of the extreme power VB gives you. Now lets continue with the tutorial.  Lets make this program say hello! On the left side of the screen you can see the toolbox (if this doesn't show up go to the top menu bar and select View -> Toolbox). In this toolbox you will see a picture of a button. Double click the button icon and it will create a Command1 CommandButton in the center of your form.    If you run the program now (Press F5) you will see your window now has a button labeled Command1 in the center of it, but if you click the button it doesn't do anything...

Words: 628 - Pages: 3

Free Essay

Pt1420

... city, state, zip Display “Enter your telephone number” Input Telephone number Display “Enter college major” Input college major Input Information Console.Write("Enter your full name: ") name = Console.ReadLine() Console.Write("Enter your address, city, state, and zip: ") addressCityStateZip = Console.ReadLine() Console.Write("Enter your Telephone Number: ") telephoneNumber = Console.ReadLine() Console.Write("Enter your College Degree: ") collegeDegree = Console.ReadLine() Visual Basic Code: Sub Main() 'Declarations for variables Dim name As String Dim addressCityStateZip As String Dim telephoneNumber As String Dim collegeDegree As String 4) Total Purchase A customer in a store is purchasing five items. Design a program that asks for the price of each item, and then displays the subtotal of the sale, the amount of sales tax, and the total. Assume the sales tax is 6%. Visual Basic Code: Console.Title = "Total Purchase" Console.WriteLine("Input the amount of each item purchased") Console.WriteLine("Item 1") Dim Num1 As Double Num1 = Console.ReadLine() Console.WriteLine("Item 2") Dim Num2 As Double Num2 = Console.ReadLine() Console.WriteLine("Item 3") Dim Num3 As Double Num3 = Console.ReadLine() Console.WriteLine("Item 4")...

Words: 290 - Pages: 2

Free Essay

Asp.Net Application for Book Doisplay

...} } public Book1(string p1,string p2,string p3,double p4) { // TODO: Complete member initialization this.isbn = p1; this.title = p2; this.author = p3; this.buyprice = p4; } } public partial class display : System.Web.UI.Page { private ArrayList books; String txt; String bookname; String bookauthor; double price; protected void Page_Load(object sender, EventArgs e) { books = new ArrayList(); BookDetails(); } private void BookDetails() { Book1 b1 = new Book1("978-1449311520", "adoop: The Definitive Guide", "Tom White", 15.99); Book1 b2 = new Book1("978-0735667044", "Microsoft Visual Basic 2013 Step by Step", "Michael Halvoson", 9.50); Book1 b3 = new Book1("978-0993088100", "Fifty Quick Ideas to Improve Your User Stories", "David Evens/Gojko Adzick", 33.00); Book1 b4 = new Book1("978-1428336117", "The Medical Manager Student Edition", "David Fitzpatrick", 99.00); Book1 b5 = new Book1("978-0769302652", "Introduction to Language Development", "Scott McLaughlin", 55.00);...

Words: 773 - Pages: 4

Premium Essay

Unit 3

...Lab 3: Input, Processing, and Output This lab accompanies Chapter 2 (pp. 56-68) of Starting Out with Programming Logic & Design. Chris Garcia Name: ___________________________ Lab 3.1 – Pseudocode This lab requires you to think about the steps that take place in a program by writing pseudocode. Read the following program prior to completing the lab. Write a program that will take in basic information from a student, including their name and how many credits they have taken in Network Systems Administration program. The program will then calculate how many credits are needed to graduate. Display should include the student name and the number of credits left to graduate. This should be based off a 90 credit program, where some courses are half credits. Step 1: This program is most easily solved using just a few variables. Identify potential problems with the following variables declared in the pseudocode. Assume that the college has the ability to offer half credits. (Reference: Variable Names, page 39-40). |Variable Name |Problem (Yes or No) |If Yes, what’s wrong? | |Declare Real creditsTaken |n | | |Declare Int creditsLeft |y | | |Declare Real studentName ...

Words: 1394 - Pages: 6

Free Essay

Management Information System

...酒店管理系统是较为典型的管理信息系统,系统的开发主要包括前端的程序开发和后台数据库的建立和维护。数据库要求具有一致性、完整性、数据安全性好的特点,而前端的程序要求功能完备,使用便捷。 本系统使用MICROSOFT公司的Visual Basic 6.0和ACCESS 2000作为程序开发工具和数据库开发工具。主要包括预订管理,接待管理,收银管理,系统管理,客房管理等功能模块。设计首先在短时间内建立起系统应用的原型, 然后对原型系统进行需求分析, 并不断修正和改进, 直到最终形成用户满意的可行性系统。系统的难点在于数据库的设计和模块之间的动态连接。因为时间和能力的原因,目前本系统的设计为单机版,在论文的第6章有关于网络版的部分构思。 关键字:管理信息系统 BASIC 6.0 ACCESS 2000 窗体 ABSTRACT The system of hotel management is a typical application of management information system(MIS),which mainly includes building up data-base of back-end and developing the application interface of front-end. The former should make the application powerful and easily used. The later required consistency and integrality and well security of data. This system uses Visual Basic 6.0 and the ACCESS 2000 presented by Microsoft Company. Including the pre-arranged management primarily, reception management, system management, guest room management etc. function mold piece. It can give you a short-cut to build up a prototype of system application. The prototype could be modified and developed till users are satisfied with it. The design of this system is a single machine version, there are a outline concerning network in the section six. Key words: Management information system(MIS) VISUAL BASIC 6.0 ACCESS 2000 FORM 目 录 前言 ...

Words: 2237 - Pages: 9

Free Essay

Chapter of My...

...computer. The program is also robust; phpMyAdmin has enough functionality that you can probably create and run a Web site without knowing any SQL. Being free and open-source never hurt anybody, either. For these reasons, most hosting sites include phpMyAdmin in their control panel as the default MySQL administration tool. phpMyAdmin has some nice extras, such as importing and exporting Excel, OpenDocument, and XML files, and a tool that generates a PDF image of your database schema. Visual Basic  According to Laud (2012) Visual Basic is a programming language and integrated development environment (IDE). It derives from the much older BASIC programming language, and so is considered a useful and relatively easy programming language for the beginner to learn. Visual Basic (VB) is now integrated into many different software applications and also web applications. Visual Basic was developed to be easy to learn, with a quick learning curve and a diverse scope of possibilities. Using the Visual Basic software, you can either hard-code or use the developer software to assist you throughout. It's also used to create ActiveX controls (for web usage and other controls), .dll file extensions or executables for standalone operation. LAN (Local Area Network) According to Rouse (2006) a local area network (LAN) is a group of computers and associated...

Words: 1499 - Pages: 6

Premium Essay

Pt1420

...Lab 4.1 – Pseudocode and Modules (“UTP Installed”) Critical ReviewA Module is a group of statements that exists within a program for the purpose of performing a specific task.Modules are commonly called procedures, subroutines, subprogram, methods, and functions.The code for a module is known as a module definition. To execute the module, you write a statement that calls it.The format for a module definition is as follows:Module name()StatementStatementEtc.End ModuleCalling a module is normally done from the Main() module such as:Call name()Generally, local variables should be used and arguments should be passed by reference when the value of the variable is changed in the module and needs to be retained. For example:Module main()Real Integer numberCall inputData(number)Call printData(number)End Module// Accepts number as a reference so that changed value// will be retainedModule inputData(Real Ref number)number = 20End Module// number does not need to be sent as reference because// number is not going to be modifiedModule printData(Real number)Display “The number is “, numberEnd Module | This lab requires you to think about the steps that take place in a program by writing pseudocode. Read the following program prior to completing the lab. Data Communications Corp wants a small program that will calculate the cost of UTP it installs for their clients. Write a program that will ask the user to input the name of the client, the number of feet of cable installed. The program...

Words: 1808 - Pages: 8