Expectation Maximization in Collaborative Filtering
Jonathan Baker, 2010

Abstract

Expectation maximization (EM) is a method of approximating maximum likelihood estimators (MLEs) in models with missing data or latent variables. A straightforward application of EM is collaborative filtering (CF): using data from multiple agents to predict unreported values. In this paper, we show a simple method of applying EM to a large CF problem: predicting ratings in the Netflix Prize dataset.

1  Auxiliary Functions

EM belongs to a general class of optimization algorithms that use successive, locally approximating auxiliary functions. (The Newton-Raphson method, for example, can also be stated in terms of auxiliary functions.) For an objective function f : X → R, call g : X × X → R an auxiliary function for f if

    g(x, x′) ≥ f(x)  for all x, x′ ∈ X,    and    g(x, x) = f(x).

Then for any x_0, define the sequence (x_n)_{n=0}^∞ by

    x_{n+1} = arg min_x g(x, x_n).


This sequence has a non-increasing image under f, which is easy to prove:

    f(x_{n+1}) ≤ g(x_{n+1}, x_n) ≤ g(x_n, x_n) = f(x_n).

The idea may be clearer graphically in figure (1). Each g(·, x_n) dominates f(·) but is equal to it at x_n, so it is easy to see why we might hope that the minimizers of g approach the minimizer of f.

Figure 1: An objective function f and the auxiliary function g on two iterations

Assuming f is bounded below (and it really ought to be, since we are trying to find its minimum value), (f(x_n))_{n=0}^∞ is also bounded below. We have already shown that (f(x_n))_{n=0}^∞ is monotonically non-increasing, so it must converge. However, without more information we cannot guarantee that (x_n)_{n=0}^∞ converges, let alone that it converges to a global minimizer. In specific applications (such as EM) we can say more about convergence.
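As a concrete illustration of this auxiliary-function (majorize-minimize) scheme, consider the following small sketch; it is ours, not part of the original paper. It uses the quadratic majorizer g(x, x_n) = f(x_n) + f′(x_n)(x − x_n) + (L/2)(x − x_n)², which dominates f whenever L bounds f″ from above and touches f at x_n, so each arg min is available in closed form. The names `mm_minimize` and the toy objective are hypothetical.

```python
import math

def mm_minimize(f, grad, L, x0, iters=100):
    """Minimize f via successive quadratic auxiliary functions.

    g(x, x_n) = f(x_n) + grad(x_n)*(x - x_n) + (L/2)*(x - x_n)**2
    satisfies g(x, x_n) >= f(x) when L bounds f'' and g(x_n, x_n) = f(x_n);
    its exact minimizer is x_n - grad(x_n)/L, which is the update below.
    """
    x = x0
    history = [f(x0)]
    for _ in range(iters):
        x = x - grad(x) / L          # x_{n+1} = arg min_x g(x, x_n)
        history.append(f(x))
    return x, history

# Toy objective: f(x) = ln(1 + x^2), whose second derivative is at most 2.
f = lambda x: math.log(1.0 + x * x)
grad = lambda x: 2.0 * x / (1.0 + x * x)

x_star, hist = mm_minimize(f, grad, L=2.0, x0=3.0)
# f(x_n) is non-increasing, exactly as the inequality chain above guarantees.
assert all(a >= b - 1e-12 for a, b in zip(hist, hist[1:]))
```

The final assertion checks the monotonicity property proved above; here the iterates also converge to the global minimizer x = 0.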

2  Expectation Maximization

EM uses a form of this auxiliary-function idea. Specifically, for complete data x (known and unknown values), unknown values z, and probability density (or mass) function f, EM approximates the maximum likelihood estimate of θ through the sequence of estimators

    θ_{n+1} = arg max_θ { E[ ℓ(θ | x, z) | x, θ_n ] }                (1)

where

    ℓ(θ) := ln f(x, z | θ)                                           (2)

is the log-likelihood function. The function

    g(θ, θ_n) = −E[ ln f(x, z | θ) | x, θ_n ]                        (3)

is closely related to an auxiliary function for the likelihood function (details of this relationship appear in the appendix), in the sense that the sequence given by (1) gives convergence of ℓ(θ_n | x). A thorough discussion of convergence conditions on θ_n itself is available in [2]; in all our testing, the estimators appeared to converge without problems.

3  Collaborative Filtering

Collaborative filtering (CF) is the process of analyzing information collected from multiple agents in order to infer further information. CF techniques fall into two basic categories:

• User-based: agents that usually agree would have agreed on the missing data.

• Item-based: items that multiple agents regard similarly have similar missing values.

For example, we see item-based filtering when Amazon tracks which items are similar to one another and, at check-out, suggests items similar to the customer's purchases. Criticker employs user-based filtering to recommend similar users' favorite films to each other. Other applications may call for a mixture of these strategies.

In all missing-data problems, it is often convenient to suppose that the lack of a response from an agent is uncorrelated with its response (no response bias). If this assumption is good, we call the unobserved data missing. If the fact that a data point is unobserved is itself significant, we call it hidden. CF often deals with hidden rather than missing values (for example, customers are likely to use and rate primarily items they expect to like).

4  The Netflix Prize Dataset

In October 2006, Netflix Inc. announced a competition to design an algorithm giving better movie recommendations than Cinematch, Netflix's own algorithm. For this purpose, Netflix released the ratings (1-5 integral stars) given by about 480,000 users for 17,770 of the films Netflix rents. Included with the data were two lists of movie/user pairs:

• Probe set: A subset of the distributed dataset values. Netflix recommended training with this set: hiding the probe values from the algorithm, predicting them, and comparing the predictions to the values actually provided.

• Qualifying set: A set of movie/user pairs whose ratings were provided by the users but withheld by Netflix for testing. Competitors submitted predicted ratings for these pairs.

A submission would win if a randomly selected subset of the submitted ratings, when compared to the actual ratings, had a root-mean-squared-error (RMSE) lower than .8573 (a 10% improvement over Cinematch). (The competition was won in July 2009 by a three-team conglomeration with an RMSE of .8567, just 20 minutes before another team submitted predictions with the same RMSE. Because of the tie, the earlier submission won.)

The dataset consists of 100,480,507 ratings (integers from 1 to 5), each with an associated pair of ID numbers identifying the user giving the rating and the movie to which the rating was assigned. Netflix's suggested interpretation of the stars is

1. "Hated It"
2. "Didn't Like It"
3. "Liked It"
4. "Really Liked It"
5. "Loved It"

The distribution of these ratings is displayed in figure (2). Also included were the date on which each rating was given and the title and release year of each movie.

Most users have not rated most of the movies, so about 98.8% of the roughly 8.5 billion possible movie/user pairs have no reported values. Despite this sparsity, there are still enough values to make computation difficult on a standard private processor. For this reason, we will study random subsets of the users and movies. Figure (3) illustrates the sparsity of the subset we will focus on. However, we will also present the theory generally, so anyone with the necessary computational power could analyze the entire set.
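As an aside from the paper, loading a small slice of these ratings into a sparse matrix might look like the sketch below. The per-movie file layout assumed here (a `movie_id:` header line followed by `user,rating,date` lines) is that of the distributed training files, but `load_ratings` and its parameters are hypothetical names of ours.

```python
import csv
from scipy.sparse import dok_matrix

def load_ratings(paths, user_ids, n_movies):
    """Return a movies x users sparse rating matrix for the chosen users.

    Each file is assumed to hold one movie's ratings: a header such as
    "17:" giving the (1-based) movie ID, then "user,rating,date" rows.
    """
    col = {u: j for j, u in enumerate(user_ids)}   # user id -> column index
    R = dok_matrix((n_movies, len(user_ids)), dtype=float)
    for path in paths:
        with open(path) as fh:
            movie = int(fh.readline().rstrip(":\n")) - 1   # 0-based row
            for user, rating, _date in csv.reader(fh):
                if int(user) in col:
                    R[movie, col[int(user)]] = float(rating)
    return R.tocsc()    # per-user (column) access suits the EM step later
```

A DOK matrix is convenient for incremental construction; converting to CSC at the end makes the per-user column slices cheap.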

5  Model

We will suppose that each user's ratings are distributed normally. That is, for each user u, r_u is a vector of ratings for different movies drawn from a multivariate normal distribution

    r_u ∼ N(µ, Σ),

independent of the missing data.

Figure 2: Distribution of ratings (1-5 stars)

This is item-based filtering, since we are essentially holding users constant and studying the properties of the movies. It differs from supposing that each movie's ratings have a multivariate normal distribution, which would be user-based filtering. We choose to focus on item-based rather than user-based methods because:

1. There are many more users than films: if correlations between users were calculated, we would estimate more parameters than we have data points (an ill-posed problem in general). Indeed, EM's approximated correlation matrix becomes low-rank (singular) after a few iterations, and EM cannot continue. We could use a subset of the data with more films than users, but it would be unrepresentative of the original data.

2. Results of item-based filtering are easier to assess: we may be able to intuitively check the correlation matrix generated by EM if it represents correlations between movies, believing EM worked well if it predicts high correlation between movies of the same genre. We could not examine a user-correlation matrix in this fashion, since we have little information about the users.

Figure 3: Dots show the 893 existing ratings for 60 users on 9000 movies. Column 30 corresponds to Something's Gotta Give (2003): the most rated film in the subset.

We have chosen this model largely for its simplicity, so that we may freely demonstrate the results of using EM in CF problems. Some legitimate concerns with the model as a whole are:

1. We should expect strong selection bias: it is unreasonable to suppose that whether a user has seen and rated a film is independent of how well the user liked it. It seems very likely that users tend to see and rate films they expect to enjoy. We cannot justify this assumption except that it lends simplicity to the model.

2. For each movie, ratings across users appear to be normal, but the reverse appears to be untrue: in fact, many users give the same rating to every movie. We have nevertheless assumed per-user normality, for the reasons mentioned above.

6  Using the EM Algorithm

We now describe our prediction procedure in detail. Other than the EM update rule (which depends on our model), the process should be similar for any application.

First, we removed some known ratings. We will predict these ratings from the remaining data and compare the results to the actual values. We will report results for the probe set specifically, but we considered analysis of this single set insufficient, so we repeated the process on many random subsets of the data.


From here on, when we refer to the data, we will mean the data without the values we hid for testing purposes.

We next determine which movie/user pairs (and which model parameters) it will be possible to predict. If a certain component of a multivariate normal variable has never been observed, it is impossible to make predictions about that component. In our case, if a movie has never been seen, we can make no predictions about how well it will be received (let alone by a specific person). Similarly, if a certain component has been observed only once, it is not reasonable to estimate its variance. The extreme sparsity of the data (made worse by hiding some of it for tests) and our use of only a small portion of the data may mean we cannot predict many values. In fact, out of the 60 × 9000 section selected, only 9 of the 28 values of the probe set could be predicted. (Most users and movies have more than one rating, so nearly all the values should be predictable from the full dataset.)

For movies that have received only one rating (or all ratings of the same value), we may make the obvious predictions of the mean and the rating from other users. We may even predict that the rating variance is 0, but because of the need to invert the covariance matrix, these movies should not be included in the actual algorithm. We will suppose from now on that all pathological users and movies have been handled separately, so that every movie has received at least 2 different ratings and every user has provided at least 1 rating.

For describing the update process, it will be convenient to define notation for partitions of vectors and matrices. For lists of indices

    J = (j_1, j_2, …, j_p),
    K = (k_1, k_2, …, k_q),


define

    ξ_J := (ξ_{j_1}, ξ_{j_2}, …, ξ_{j_p})^T,

    A_{J,K} := the p × q matrix whose (r, s) entry is a_{j_r, k_s},

    A_{J,i} := (a_{j_1,i}, a_{j_2,i}, …, a_{j_p,i})^T  for all i.

For each user u, define K_u as the list of indices (k_1, k_2, …, k_p) such that each r_{k_i,u} is known. Similarly define K_u^c := (k_1^c, k_2^c, …, k_q^c) as the list of indices of r_u corresponding to unknown data.

Recall that we defined the ratings for each user u as a column vector r_u. Let r_u^{(t)} be the ratings for user u with the t-th estimates of the missing values in place. We will call the entire rating matrix

    R^{(t)} := [ r_1^{(t)}  r_2^{(t)}  ⋯  r_n^{(t)} ],

the m × n matrix whose (i, u) entry is r_{i,u}^{(t)}. Notice that R_{K_u,u}^{(t)} never depends on t, since the known values are never altered, so we simply write R_{K_u,u} for the vector of known values for user u.
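The partition notation above maps directly onto NumPy's fancy indexing; the short example below (ours, not the paper's) shows ξ_J, A_{J,K}, and A_{J,i} as index operations, with `np.ix_` producing the submatrix selection.

```python
import numpy as np

# A toy matrix and vector: a_{j,k} = 5j + k (0-indexed), xi_j = 10 + j.
A = np.arange(25.0).reshape(5, 5)
xi = np.array([10.0, 11.0, 12.0, 13.0, 14.0])

J = [0, 2, 3]                      # list of row indices (j_1, ..., j_p)
K = [1, 4]                         # list of column indices (k_1, ..., k_q)

xi_J = xi[J]                       # the subvector (xi_{j_1}, ..., xi_{j_p})
A_JK = A[np.ix_(J, K)]             # the p x q submatrix A_{J,K}
A_Ji = A[J, 2]                     # the column partition A_{J,i} with i = 2
```

`np.ix_` builds the open mesh of row and column indices, so `A[np.ix_(J, K)]` is exactly the matrix with entries a_{j_r, k_s}, as in the definition above.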


For our purposes, we took the initial estimates of the parameters to be

    µ^{(0)} = (µ_1^{(0)}, µ_2^{(0)}, …, µ_m^{(0)})^T,
    Σ^{(0)} = I_m,

where each µ_i^{(0)} is the mean of the known ratings for movie i and I_m is the m × m identity matrix. No arbitrary initial estimates of the unknown ratings need be made (R^{(0)} is determined by µ^{(0)} and Σ^{(0)}).

We are now ready to describe the EM update process for our model:

1. Estimate the unknown values as their expected values given the known data and the current estimates µ^{(t)}, Σ^{(t)}. Specifically, for each user u, update the estimates of the unknown values in r_u by

    R_{K_u^c, u}^{(t)} = E[ R_{K_u^c, u} | R_{K_u, u}, µ^{(t)}, Σ^{(t)} ]
                       = µ_{K_u^c}^{(t)} + Σ_{K_u^c, K_u}^{(t)} ( Σ_{K_u, K_u}^{(t)} )^{-1} ( R_{K_u, u} − µ_{K_u}^{(t)} ).    (4)

2. Obtain new estimates of the parameters µ^{(t+1)}, Σ^{(t+1)} from the known data and the unknown data estimated in step 1:

    µ_i^{(t+1)} = (1/n) ∑_{u=1}^{n} R_{i,u}^{(t)}  for all i,                      (5)

    Σ^{(t+1)} = (1/n) R^{(t)} ( R^{(t)} )^T − µ^{(t+1)} ( µ^{(t+1)} )^T.           (6)

3. Repeat steps 1-2 until the resulting changes in the estimated parameters are small.

The update (4) is simply the expected value of the unknown components of a multivariate normal distribution given the known components (and their estimated means and covariances). The updates (5) and (6) are the MLEs of µ and Σ computed as if the values just estimated in (4) were actual observations.
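The three steps above can be sketched in a few lines of NumPy. This is our illustration, not the paper's code: the names `em_step` and `em` are hypothetical, missing ratings are marked with NaN, and, as in the paper, update (6) treats the imputed values as if they were actual observations.

```python
import numpy as np

def em_step(R, mu, Sigma):
    """One EM pass: fill missing entries via (4), re-estimate via (5)-(6).

    R is an m-movies x n-users array with np.nan marking unknown ratings.
    """
    m, n = R.shape
    Rt = R.copy()
    for u in range(n):
        known = ~np.isnan(R[:, u])       # indices K_u
        miss = ~known                    # indices K_u^c
        if not miss.any():
            continue
        # (4): conditional mean of the missing block given the known block.
        S_kk = Sigma[np.ix_(known, known)]
        S_mk = Sigma[np.ix_(miss, known)]
        Rt[miss, u] = mu[miss] + S_mk @ np.linalg.solve(
            S_kk, R[known, u] - mu[known])
    mu_new = Rt.mean(axis=1)                                 # (5)
    Sigma_new = (Rt @ Rt.T) / n - np.outer(mu_new, mu_new)   # (6)
    return Rt, mu_new, Sigma_new

def em(R, iters=50):
    """Initialize as in the paper: per-movie means and the identity matrix."""
    mu = np.nanmean(R, axis=1)
    Sigma = np.eye(R.shape[0])
    for _ in range(iters):
        Rt, mu, Sigma = em_step(R, mu, Sigma)
    return Rt, mu, Sigma
```

As the text notes, this assumes the pathological movies and users have already been handled: every movie needs at least two distinct ratings, or the inversion of Σ_{K_u,K_u} in step (4) can fail.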

7  Results

Of the 28 probe ratings in the subset, only 9 could be predicted, for the reasons discussed in the previous section. The resulting RMSE is surprisingly low (low enough to have won the Netflix Prize had it been achieved on the qualifying set). Unfortunately, this result is not typical. To illustrate, consider the average RMSE when predicting the same number of randomly selected ratings: 100 trials resulted in an average RMSE more than double that of these 9 probe values.

Another problem is the values that EM simply cannot predict. Failing to predict values is usually unacceptable, so we should provide some means of predicting the difficult values and incorporate the additional errors into the RMSE. For example, we might simply predict the overall mean rating for all the difficult ratings. This increased the RMSE to worse than Cinematch's. The RMSEs, compared to those of significant algorithms, are listed in table (1).

Table 1: EM's and Other Algorithms' RMSEs

    EM: 9 Probe Values                                             0.8013
    EM: 28 Probe Values (naïve predictions for difficult values)   1.1241
    EM: 100 Random Trials                                          1.7695
    Cinematch                                                      0.9525
    BellKor's Pragmatic Chaos (contest winners)                    0.8567
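For reference, the RMSE figures in the table are computed as follows; the helper below is ours, not the paper's.

```python
import numpy as np

def rmse(predicted, actual):
    """Root-mean-squared error between two equal-length rating arrays."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# e.g. rmse([3, 4, 2], [3, 5, 2]) == sqrt(1/3), about 0.577
```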

Finding significance levels of EM estimators is relatively difficult, but we might expect to be able to judge the accuracy of the correlation matrix by its predicted correlations between films' ratings. In the subset of films used in this study, the films with the highest predicted rating correlation were Rudolph the Red-Nosed Reindeer (a 1964 stop-motion Christmas TV special) and Carandiru (a 2003 Brazilian film about a prison in São Paulo). It seems unlikely that these films' ratings should be so correlated, since the films' contents are very dissimilar. Such a dissatisfying result could be a product of our admittedly unreasonable assumptions, but a direct analysis is very difficult since no user rated both films.

This apparently nonsensical prediction is similar to a result found by principal component analysis (PCA). PCA finds latent effects in data by singular value decomposition (SVD), but cannot provide any interpretation of those effects. When each missing value is replaced with the average of the corresponding movie's and user's mean ratings, PCA predicts high similarity (based on similar scores in the most significant effects) between Elmo's World: The Street We Live On (a light-hearted, educational children's film) and Die Hard 2 (an intense action movie). The commonality of these movies is not apparent, but is indicated by the data.

References
[1] Sean Borman. The Expectation Maximization Algorithm: A Short Tutorial. 2004.

[2] Geoffrey McLachlan and Thriyambakam Krishnan. The EM Algorithm and Extensions. John Wiley and Sons, New York, 1996.

Appendix

We follow the demonstration given in [1] that minimizing (3) with respect to θ is equivalent to minimizing an auxiliary function for the negative log-likelihood (2). The auxiliary function will be defined explicitly in (7). Note the necessity of the negative sign in (3), since our auxiliary functions are defined for minimization.

Now, denoting the probability measure by P:

    ℓ(θ) − ℓ(θ_n) = ln f(x|θ) − ln f(x|θ_n)
                  = ln ∫_z f(x|z, θ) f(z|θ) dP − ln f(x|θ_n)
                  = ln ∫_z f(z|x, θ_n) [ f(x|z, θ) f(z|θ) / f(z|x, θ_n) ] dP − ln f(x|θ_n)
                  ≥ ∫_z f(z|x, θ_n) ln[ f(x|z, θ) f(z|θ) / f(z|x, θ_n) ] dP − ln f(x|θ_n)     (Jensen's inequality)
                  = ∫_z f(z|x, θ_n) ln[ f(x|z, θ) f(z|θ) / ( f(z|x, θ_n) f(x|θ_n) ) ] dP
                  =: Δ(θ|θ_n).

We claim that the function

    G(θ, θ_n) := −ℓ(θ_n) − Δ(θ|θ_n)                                                           (7)

is an auxiliary function for −ℓ (and so helps maximize ℓ). We have just demonstrated −ℓ(θ) ≤ G(θ, θ_n). To finish proving the claim, we also need to show that G(θ, θ) = −ℓ(θ):

    G(θ, θ) = −ℓ(θ) − Δ(θ|θ)
            = −ℓ(θ) − ∫_z f(z|x, θ) ln[ f(x|z, θ) f(z|θ) / ( f(z|x, θ) f(x|θ) ) ] dP
            = −ℓ(θ) − ∫_z f(z|x, θ) ln[ f(x, z|θ) / f(x, z|θ) ] dP                            (Bayes' rule)
            = −ℓ(θ) − ∫_z f(z|x, θ) ln(1) dP
            = −ℓ(θ).

Minimizing (3) (as done in each iteration of EM) is equivalent to minimizing (7) (that is, the same sequence of estimates θ_n is generated) because

    arg min_θ G(θ, θ_n) = arg min_θ { −ℓ(θ_n) − Δ(θ|θ_n) }
                        = arg min_θ { −∫_z f(z|x, θ_n) ln[ f(x|z, θ) f(z|θ) / ( f(z|x, θ_n) f(x|θ_n) ) ] dP }
                          (dropping terms that are constant with respect to θ)
                        = arg min_θ { −∫_z f(z|x, θ_n) ln( f(x|z, θ) f(z|θ) ) dP }
                        = arg min_θ { −∫_z f(z|x, θ_n) ln f(x, z|θ) dP }
                        = arg min_θ { −E[ ln f(x, z|θ) | x, θ_n ] }
                        = arg min_θ g(θ, θ_n).

Similar Documents

Premium Essay

Econometrics

...e YOUR ECONOMETRICS PAPER BASIC TIPS There are a couple of websites that you can browse to give you some ideas for topics and data. Think about what you want to do with this paper. Econometrics is a great tool to market when looking for jobs. A well-written econometrics paper and your presentation can be a nice addition to your resume. You are not expected to do original research here. REPLICATION of prior results is perfectly acceptable. Read Studenmund's Chapter 11. One of the most frustrating things in doing an econometrics paper is finding the data. Do not spend a lot of time on a topic before determining whether there is data available that will allow you to answer your question. It is a good idea to write down your ideal data set that would allow you to address your topic. If you find that the available data is not even close to what you had originally desired, you might want to change your topic. Also, remember that knowing the location of your data – website, reference book, etc – is not the same as having your data available to use. It may take a LONG time to get the data in a format that EVIEWS can read. Do not leave this till the last minute. For most data, I enter the data into Excel first. I save the Excel sheet in the oldest version, namely MS Excel Worksheet 2.1 . The reason is that format can be read by most programs whereas newer formats may or may not be read. Eviews easily reads an Excel sheet 2.1 version. You should use...

Words: 2376 - Pages: 10

Premium Essay

Econometrics

...This page intentionally left blank Introductory Econometrics for Finance SECOND EDITION This best-selling textbook addresses the need for an introduction to econometrics specifically written for finance students. It includes examples and case studies which finance students will recognise and relate to. This new edition builds on the successful data- and problem-driven approach of the first edition, giving students the skills to estimate and interpret models while developing an intuitive grasp of underlying theoretical concepts. Key features: ● Thoroughly revised and updated, including two new chapters on ● ● ● ● ● ● panel data and limited dependent variable models Problem-solving approach assumes no prior knowledge of econometrics emphasising intuition rather than formulae, giving students the skills and confidence to estimate and interpret models Detailed examples and case studies from finance show students how techniques are applied in real research Sample instructions and output from the popular computer package EViews enable students to implement models themselves and understand how to interpret results Gives advice on planning and executing a project in empirical finance, preparing students for using econometrics in practice Covers important modern topics such as time-series forecasting, volatility modelling, switching models and simulation methods Thoroughly class-tested in leading finance schools Chris Brooks is Professor of Finance...

Words: 195008 - Pages: 781

Premium Essay

Econometrics

...A Guide to Modern Econometrics 2nd edition Marno Verbeek Erasmus University Rotterdam A Guide to Modern Econometrics A Guide to Modern Econometrics 2nd edition Marno Verbeek Erasmus University Rotterdam Copyright  2004 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England Telephone (+44) 1243 779777 Email (for orders and customer service enquiries): cs-books@wiley.co.uk Visit our Home Page on www.wileyeurope.com or www.wiley.com All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770620. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required,...

Words: 194599 - Pages: 779

Free Essay

Econometrics

...Оценивание отдачи от образования. Данные из учебника Manno Verbeek “A guide to Modern Econometrics” http://www.econ.kuleuven.ac.be/GME/ Файл schooling содержит данные Национального панельного опроса 1976 года молодых мужчин (NLSYM, проживающих в США. Переменные в файле и их описание: smsa66 1 if lived in smsa in 1966 1 Семинары по эконометрике, 2013 г. smsa76 1 if lived in smsa in 1976 nearc2 grew up near 2-yr college nearc4 grew up near 4-yr college nearc4a grew up near 4-year public college nearc4b grew up near 4-year private ed76 education in 1976 ed66 education in 1966 age76 age in 1976 college daded dads education (imputed avg nodaded 1 if dads education imputed momed mothers education nomomed 1 if moms education imputed momdad14 1 if lived with mom and dad sinmom14 1 if single mom at age 14 step14 1 if step parent at age 14 south66 1 if lived in south in 1966 south76 1 if lived in south in 1976 lwage76 log wage in 1976 (outliers trimmed) famed mom-dad education class (1-9) black 1 if black wage76 wage in 1976 (raw, cents per hour) enroll76 1 if enrolled in 1976 kww the kww score iqscore a normed IQ score mar76 marital status in libcrd14 1 if library card exp76 exp762 experience in 1976 exp76 squared 1976 (1 if married) in home at age 14 if missing) at age 14 1.1. Оцените простую линейную модель регрессии: reg lwage76 ed76 exp76 exp762 black smsa76 south76 est store ols 1.2. Проверка мультиколлинеарности: vif 1.3. Проверка гетероскедастичности: ...

Words: 505 - Pages: 3

Free Essay

Econometrics

...Question no. 1 Y1= ∝0+∝1Y2+∝2X1+∈1 Y2= β0+β1Y1+β2X1+β3X3+ϵ2 i. Identification Status: Equation 1: P1=1, P2=1 so that P1=P2 so, equation is Exactly identified. Equation 2: P1=0, P2=1 so that P1<P2 so, equation is Unidentified. ii. Reduced form equations: Putting Y1 in Y2: Y2= β0+β1(∝0+∝1Y2+∝2X1+∈1) +β2X1+β3X3+ϵ2 Y2= β0+β1α0+β1α1Y2+β1α2X1+β1ϵ1+β2X1+β3X3+ϵ2 Y21-β1α1= β0+β1α0+X1β1α2+β2+β3X3+β1ϵ1+ϵ2 Y2= β0+β1α01-β1α1+β1α2+β21-β1α1X1+β31-β1α1X3+β1ϵ1+ϵ21-β1α1 Y1=π20+π21X1+π22X3+ν2 Now putting this reduced form equation of Y2 in Y1 equation: Y1= ∝0+∝1(π20+π21X1+π22X3+V2)∝2X1+∈1 Y1= ∝0+∝1π20+X1α1π21+α2+α1π22X3+α1V2+ϵ1 Y1= π10+π11X1+π12X3+V1 π10= ∝0+∝1π20 α0= π10-π12π22(π20) π11= α1π21+α2 α2= π11- π12π22(π21) π12= α1π22 α1= π12π22 Using STATA the reduced form equation (DATA set 1) Y2= -1.57953 -.37781X1+1.744 X3 Y1= 14.245+ .67809 X1+ .99181 X3 Estimations of structural parameters For equation 1: Run the regression on reduced form equations in STATA and we calculated the following values of structural parameters: 1. α1= π12π22 = .99181 1.744 α1= 0.5688 2. α2= π11- π12π22π21 = .67809 - 0.5688 (-.37781) α2= 0.8930 3. α0= π10-π12π22(π20) = 14.245-0.5688(-1.57953) α0= 15.1437 Question no. 2 Y1=αo+α1Y2+α2X1+α3X2+e1 Y2=βo+β1Y1+β2X1+β3X3+e2 Status Identification of equations: For equation 1: P1=1 P2=1 P1=P2This identifies that the equation 1 is “Exactly-identified”. ...

Words: 1475 - Pages: 6

Free Essay

Most Harmless Econometrics

...Mostly Harmless Econometrics: An Empiricist’ Companion s Joshua D. Angrist Massachusetts Institute of Technology Jörn-Ste¤en Pischke The London School of Economics March 2008 ii Contents Preface Acknowledgments Organization of this Book xi xiii xv I Introduction 1 3 9 10 12 16 1 Questions about Questions 2 The Experimental Ideal 2.1 2.2 2.3 The Selection Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Random Assignment Solves the Selection Problem . . . . . . . . . . . . . . . . . . . . . . . . Regression Analysis of Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . II The Core 19 21 22 23 26 30 36 38 38 44 47 51 51 3 Making Regression Make Sense 3.1 Regression Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.1 3.1.2 3.1.3 3.1.4 3.2 Economic Relationships and the Conditional Expectation Function . . . . . . . . . . . Linear Regression and the CEF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Asymptotic OLS Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Saturated Models, Main E¤ects, and Other Regression Talk . . . . . . . . . . . . . . . Regression and Causality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.1 3.2.2 3.2.3 The Conditional Independence Assumption . . . . . . . . . . . . . . . . . . . . . . . . The Omitted Variables Bias Formula . ....

Words: 114745 - Pages: 459

Premium Essay

Econometrics Book Description

...Using gretl for Principles of Econometrics, 4th Edition Version 1.0411 Lee C. Adkins Professor of Economics Oklahoma State University April 7, 2014 1 Visit http://www.LearnEconometrics.com/gretl.html for the latest version of this book. Also, check the errata (page 459) for changes since the last update. License Using gretl for Principles of Econometrics, 4th edition. Copyright c 2011 Lee C. Adkins. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation (see Appendix F for details). i Preface The previous edition of this manual was about using the software package called gretl to do various econometric tasks required in a typical two course undergraduate or masters level econometrics sequence. This version tries to do the same, but several enhancements have been made that will interest those teaching more advanced courses. I have come to appreciate the power and usefulness of gretl’s powerful scripting language, now called hansl. Hansl is powerful enough to do some serious computing, but simple enough for novices to learn. In this version of the book, you will find more information about writing functions and using loops to obtain basic results. The programs have been generalized in many instances so that they could be adapted for other uses if desired. As I learn more about hansl specifically...

Words: 73046 - Pages: 293

Premium Essay

Econometrics Project

...Project of Econometric Modelling © 2013 CULS in Prague I. One equation model: The following econometric model would like to analyze the impacts of consumption, interest rate and unemployment rate on Gross Domestic Product of China based on the data extracted from National Bureau of Statistics of China.(1992-2011 National Data in 1992-2011 ). 1. Economic model and econometric model 2.1. Assumption * Gross Domestic Product (GDP) depends on the following variables: * Private Consumption * Government spending * Total wage of employees * General model: GDP = f (Private Consumption, Government spending, Total wage of employees) * Dependency between variables based on economic theory: * Increase of private consumption will cause increase in GDP. * Increase of Government spending will cause increase in GDP. * Increase of Total wage will cause increase in GDP. 2.2. Economic and econometrics model * Declaration of variables Variable | Symbol | Unit | Gross Domestic Product | y1 | 100 million yuan | Unit vector | x1 | | Private Consumption | x2 | 100 million yuan | Government spending | x3 | 100 million yuan | Total wage of employees | x4 | 100 million yuan | Stochastic variable | u1t | | * Economic model: y1 = γ1+ γ2 x2 + γ3 x3 + γ4 x4 . Insert stochastic variable- u1t into economic model to form econometric model. * Econometric model: y1t...

Words: 2069 - Pages: 9

Free Essay

Applied Econometrics Individual Assignment

...1. In the population model: cigsi = βo + β1educi + ui a) Interpret the coefficient β1. β1 represents the slope of the regression line and is the change in cigs associated with a unit change in educ. So for one unit increase of educ there will be β1 units increase/decrease (depending on the sign of β1) in the cigs. b) Can you predict the sign of β1 (without doing any estimation)? Explain. The sign of β1 would most probably be minus looking at the information from the surveys in the excel spreadsheet. The education value is always a positive number (educi > 0), βo is also a positive number (the intercept with y, as cigsi >=0). In this way in order for the number of cigarettes to be equal to 0, the β1 value should be negative. 2. Use the data in SMOKE.sav (see Blackboard) to estimate the model from question 1. Report the estimated equation in the usual way. Also, plot a handwritten graph of the estimated equation. cigs = 11,412 – 0,219 educ 3. Does educ explain a lot of the variation in the number of cigarettes smoked? Explain. The regression R2 is the fraction of the sample variance of cigsiexplained by (or predicted by) educi. In this example R2 equals to 0,002 and this amount is closer to 0, which means that the regressor educ is not very good at predicting the value of cigs, thus does not explain a lot of the variation in the number of cigarettes smoked. 4. Find the predicted difference in number of smoked cigarettes for two people...

Words: 372 - Pages: 2

Premium Essay

Nonparametric Estimation and Hypothesis Testing in Econometric Models by A. Ullah

...empec, Vol. 13, 1988, page 223-249 Nonparametric Estimation and Hypothesis Testing in Econometric Models By A. Ullah ~ Abstract: In this paper we systematically review and develop nonparametric estimation and testing techniques in the context of econometric models. The results are discussed under the settings of regression model and kernel estimation, although as indicated in the paper these results can go through for other econometric models and for the nearest neighbor estimation. A nontechnical survey of the asymptotic properties of kernel regression estimation is also presented. The technique described in the paper are useful for the empirical analysis of the economic relations whose true functional forms are usually unknown. 1 Introduction Consider an economic model y =R(x)+u where y is a dependent variable, x is a vector o f regressors, u is the disturbance and R(x) = E ( y l x ) . Often, in practice, the estimation o f the derivatives o f R(x)are o f interest. For example, the first derivative indicates the response coefficient (regression coefficient) o f y with respect to x, and the second derivauve indicates the curvature o f R(x). In the parametric econometrics the estimation o f these derivatives and testing 1 Aman Ullah, Department of Economics, University of Western Ontario, London, Ontario, N6A 5C2, Canada. I thank L Ahmad, A. Bera, A. Pagan, C. Robinson, A. Zellner, and the participants of the workshops at the Universities of Chicago...

Words: 5119 - Pages: 21

Premium Essay

Econometric

...Principles of Econometrics Tips for a Term Paper Topic Your work MUST BE ORIGINAL, but the issue/model/methodology need not be! Money/Macro/International Economics Common Approaches 1. Apply a model or law (e.g., Phillips curve, Okun’s law, etc.) to more recent data. 2. Extend what is known for the U.S. to other countries (emerging, developing or Eastern European). Examples: 1. Outsourcing: Do firms that outsource tend to do better? Or why do they outsource? 2. Trade deficit: What causes the huge US trade deficit? 3. Twin deficits: Is there a link between the trade deficit and the government budget deficit? 4. Foreign exchange: What has caused the recent drop of the US dollar? 5. Oil shocks: Have oil shocks led to recessions in the US or elsewhere? 6. Growth: Why are some countries rich while others are poor? 7. Election: What determines an election outcome? 8. Big Mac Index Finance/Management/Accounting Common Approaches 1. What affects stock performance of different firms or over time? 2. What affects firm performance? Some Issues 1. Any link between the economy and the stock market? 2. How does monetary policy affect the financial markets? 3. Any link between stocks and bonds? Microeconomic/Socioeconomic/Marketing Issues General Approach: Apply any theory, model or concept to firms, people or markets. Some Issues 1. What affects the demand (or price) for a product? 2. Does money buy happiness? 3. Any link between market price (or profit) and quality...

Words: 921 - Pages: 4

Premium Essay

Ningning

...YOUR ECONOMETRICS PAPER BASIC TIPS There are a couple of websites that you can browse to give you some ideas for topics and data. Think about what you want to do with this paper. Econometrics is a great tool to market when looking for jobs. A well-written econometrics paper and your presentation can be a nice addition to your resume. You are not expected to do original research here. REPLICATION of prior results is perfectly acceptable. Read Studenmund's Chapter 11. One of the most frustrating things in doing an econometrics paper is finding the data. Do not spend a lot of time on a topic before determining whether there is data available that will allow you to answer your question. It is a good idea to write down the ideal data set that would allow you to address your topic. If you find that the available data is not even close to what you had originally desired, you might want to change your topic. Also, remember that knowing the location of your data – website, reference book, etc. – is not the same as having your data available to use. It may take a LONG time to get the data into a format that EViews can read. Do not leave this till the last minute. For most data, I enter the data into Excel first. I save the Excel sheet in the oldest version, namely MS Excel Worksheet 2.1. The reason is that this format can be read by most programs, whereas newer formats may or may not be. EViews easily reads a version 2.1 Excel sheet. You should use the...

Words: 2375 - Pages: 10

Premium Essay

Making Decisions Based on Demand and Forecasting

...AND COMMENTS TO Joy de Beyer (jdebeyer@worldbank.org) and Ayda Yurekli (ayurekli@worldbank.org) World Bank, MSN G7-702, 1818 H Street NW, Washington DC, 20433 USA Fax: (202) 522-3234

Contents
I. Introduction 1
Purpose of this Tool 1
Who Should Use this Tool 2
How to Use this Tool 2
II. Define the Objectives of the Analysis 4
The Reason for Analysis of Demand 4
The Economic Case for Demand Intervention 4
Analysis of Demand for the Policy Maker 5
Design an Analysis of Demand Study 6
Components of a Study 6
The Nature of Econometric Analysis 7
Resources Required 7
Summary 8
References and Additional Information 8
III. Conduct Background Research 9
IV. Build the Data Set 11
Choose the Variables 11
Data Availability 11
Data Types 12
Prepare the Data 13
Data Cleaning and Preliminary Examination 14
Preparing the Data Variables 14
References and Additional Information 19
V. Choose the Demand Model 20
Determine the Identification Problem 20
Test for Price Endogeneity 21 ...

Words: 36281 - Pages: 146

Premium Essay

Do Mind Your Mind

...Econometrics (Economics 360) Syllabus: Spring 2015 Instructor: Ben Van Kammen Office: Krannert 531 Office Hours: Friday, 10 a.m.-noon Email: bvankamm@purdue.edu Meeting Location: KRAN G010 Meeting Days/Times: TR 1:30-2:45 p.m. (001), TR 3-4:15 p.m. (002), TR 4:30-5:45 p.m. (003)

Course Description
This is an upper-division economics course required for students pursuing a BS in economics. It is one of the few courses that explicitly covers empirical methods, i.e., the analysis of observed economic behavior in the form of data. Empirics stand in contrast to theory, e.g., micro and macro, about how agents behave. Despite this under-representation, empirical analysis comprises a large part of economists' workload and is one of the most practical skills an economics student can learn.

Course Objectives
In this class students will:
1. perform statistical and practical inference based on the results of empirical analysis,
2. identify useful characteristics of estimators, e.g., unbiasedness, consistency, efficiency,
3. state predictions of theoretical economic models in terms of testable hypotheses,
4. model economic relationships using classical methods, such as Ordinary Least Squares, and derive the properties of estimators related to these methods,
5. perform estimation using methods discussed in class using software,
6. perform diagnostic tests that infer whether a model's assumptions are invalid,
7. evaluate empirical models based on whether their resulting estimators...

Words: 2067 - Pages: 9

Free Essay

Stock Market Relation

...International Conference On Applied Economics – ICOAE 2010 DOES STOCK MARKET DEVELOPMENT CAUSE ECONOMIC GROWTH? A TIME SERIES ANALYSIS FOR BANGLADESH ECONOMY MD. SHARIF HOSSAIN (PH.D.) - KHND. MD. MOSTAFA KAMAL Abstract: The principal purpose of this paper is to investigate the causal relationship between stock market development and economic growth in Bangladesh. To investigate long-run causal linkages between stock market development and economic growth, the Engle-Granger causality and ML tests are applied. We also investigate non-stationarity in the series of stock market development and economic growth using modern econometric techniques. Cointegration tests are applied to determine whether this pair of variables shares the same stochastic trend. Our analysis finds that stock market development strongly influences economic growth in the Bangladesh economy, but there is no causation from economic growth to stock market development; thus unidirectional causality prevails between stock market development and economic growth. It is also found that both variables are integrated of order 1, and that stock market development and economic growth share the same stochastic trend in the Bangladesh economy. JEL Code: C010 Key Words: Stock Market Development, Causal Relationship, Non-stationarity, Unit Root Test, Cointegration Tests ...
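The Engle-Granger procedure the abstract relies on can be sketched in two steps. This is a minimal illustration on simulated data, not the paper's Bangladesh series: two I(1) series are built from a common random walk (so they cointegrate by construction), then a cointegrating regression is run and a Dickey-Fuller regression is applied to its residuals; all names and parameter values here are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two I(1) series sharing one stochastic trend, standing in for
# economic growth (gdp) and stock market development (smd).
T = 500
trend = np.cumsum(rng.normal(size=T))        # common random walk
gdp = 2.0 + trend + rng.normal(0, 0.5, T)
smd = -1.0 + 0.8 * trend + rng.normal(0, 0.5, T)

# Step 1: cointegrating regression of gdp on smd
X = np.column_stack([np.ones(T), smd])
b = np.linalg.lstsq(X, gdp, rcond=None)[0]
resid = gdp - X @ b

# Step 2: Dickey-Fuller regression on the residuals,
# delta(resid_t) = rho * resid_{t-1} + e_t. A strongly negative
# t-statistic on rho indicates stationary residuals, i.e. the two
# series share the same stochastic trend (cointegration).
dr = np.diff(resid)
lag = resid[:-1]
rho = np.sum(lag * dr) / np.sum(lag ** 2)
e = dr - rho * lag
se = np.sqrt(e.var(ddof=1) / np.sum(lag ** 2))
t_stat = rho / se
print(t_stat)
```

Because the residual-based test uses estimated residuals, its critical values are more negative than the standard Dickey-Fuller ones; in practice one would compare t_stat against Engle-Granger critical values rather than ordinary t tables.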

Words: 5712 - Pages: 23