Linear Least Squares


Suppose we are given a set of data points $\{(x_i, f_i)\}$, $i = 1, \dots, n$. These could be measurements from an experiment or obtained simply by evaluating a function at some points. You have seen that we can interpolate these points, i.e., either find a polynomial of degree $\le (n-1)$ which passes through all $n$ points or use a continuous piecewise interpolant of the data, which is usually a better approach. However, it might be the case that we know these data points should lie on, for example, a line or a parabola, but due to experimental error they do not. So what we would like to do is find a line (or some other higher degree polynomial) which best represents the data. Of course, we need to make precise what we mean by a "best fit" of the data. As a concrete example, suppose we have $n$ points
$$(x_1, f_1), \quad (x_2, f_2), \quad \cdots \quad (x_n, f_n)$$

and we expect them to lie on a straight line but, due to experimental error, they don't. We would like to draw a line which is the best representation of the points. If $n = 2$ then the line will pass through both points and so the error is zero at each point. However, if we have more than two data points, then we can't find a line that passes through all of them (unless they happen to be collinear), so we have to find a line which is a good approximation in some sense. An obvious approach is to form an error vector of length $n$ whose $i$th component measures the difference $f_i - y(x_i)$, where $y(x) = a_1 x + a_0$ is the line we fit the data with. Then we can take a norm of this error vector, and our goal is to find the line which minimizes it. Of course this problem is not yet well defined, because we have not specified which norm to use. The linear least squares problem finds the line which minimizes this difference in the $\ell_2$ (Euclidean) norm.

Example

We want to fit a line $p_1(x) = a_0 + a_1 x$ to the data points
$$(1, 2.2), \quad (0.8, 2.4), \quad (0, 4.25)$$

in a linear least squares sense. For now, we will just write the overdetermined system and determine whether it has a solution. We will find the line after we investigate how to solve the linear least squares problem. Our equations are
$$a_0 + a_1 \cdot 1 = 2.2$$
$$a_0 + a_1 \cdot 0.8 = 2.4$$
$$a_0 + a_1 \cdot 0 = 4.25$$

Writing this as a matrix problem $Ax = b$ we have
$$\begin{pmatrix} 1 & 1 \\ 1 & 0.8 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} 2.2 \\ 2.4 \\ 4.25 \end{pmatrix}$$

Now we know that this over-determined problem has a solution if the right hand side is in $\mathcal{R}(A)$ (i.e., it is a linear combination of the columns of the coefficient matrix $A$). Here the rank of $A$ is clearly 2, so $\mathcal{R}(A)$ is not all of $\mathbb{R}^3$. Moreover, $(2.2, 2.4, 4.25)^T$ is not in $\mathcal{R}(A)$, i.e., not in $\mathrm{span}\{(1, 1, 1)^T, (1, 0.8, 0)^T\}$, and so the system doesn't have a solution. This just means that we can't find a line that passes through all three points.
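As a quick numerical check (a minimal NumPy sketch, not part of the original notes), we can confirm that the right hand side is not in $\mathcal{R}(A)$: appending $b$ to $A$ as an extra column increases the rank, so $b$ is not a linear combination of the columns of $A$.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.8],
              [1.0, 0.0]])
b = np.array([2.2, 2.4, 4.25])

# b is in R(A) exactly when appending it to A does not increase the rank.
print(np.linalg.matrix_rank(A))                        # 2
print(np.linalg.matrix_rank(np.column_stack([A, b])))  # 3, so no exact solution
```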

Example

If our data had been
$$(1, 2.1), \quad (0.8, 2.5), \quad (0, 4.1)$$

then would we have had a solution to the over-determined system? Our matrix problem $Ax = b$ is
$$\begin{pmatrix} 1 & 1 \\ 1 & 0.8 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} 2.1 \\ 2.5 \\ 4.1 \end{pmatrix}$$

and we notice that in this case the right hand side is in $\mathcal{R}(A)$ because
$$\begin{pmatrix} 2.1 \\ 2.5 \\ 4.1 \end{pmatrix} = 4.1 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} - 2 \begin{pmatrix} 1 \\ 0.8 \\ 0 \end{pmatrix}$$
and thus the system is solvable and we have the line $4.1 - 2x$ which passes through all three points.

But, in general, we can't solve the over-determined system, so our approach is to find a vector $x$ such that the residual $r = b - Ax$ is as small as possible. The residual is a vector, so we take its norm; the linear least squares method uses the $\ell_2$-norm. Consider the over-determined system $Ax = b$ where $A$ is $m \times n$ with $m > n$. The linear least squares problem is to find a vector $x$ which minimizes the $\ell_2$ norm of the residual, that is,
$$\|b - Ax\|_2 = \min_{z \in \mathbb{R}^n} \|b - Az\|_2 .$$

We note that minimizing the $\ell_2$ norm of the residual is equivalent to minimizing its square, which is often easier to work with because we avoid dealing with square roots. So we rewrite the problem as: find a vector $x$ which minimizes the square of the $\ell_2$ norm,
$$\|b - Az\|_2^2 \quad \text{over all } z \in \mathbb{R}^n .$$
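In practice this minimization is a one-liner. The following sketch (assuming NumPy, which is not part of the original notes) sets up the overdetermined system from the first example and lets `numpy.linalg.lstsq` minimize $\|b - Az\|_2$ directly:

```python
import numpy as np

# Overdetermined system for fitting p1(x) = a0 + a1*x to the data
# points (1, 2.2), (0.8, 2.4), (0, 4.25).
A = np.array([[1.0, 1.0],
              [1.0, 0.8],
              [1.0, 0.0]])
b = np.array([2.2, 2.4, 4.25])

# lstsq minimizes ||b - A z||_2 over all z.
x, residual_sq, rank, svals = np.linalg.lstsq(A, b, rcond=None)
print(x)            # least squares coefficients (a0, a1)
print(residual_sq)  # sum of squared residuals ||b - A x||_2^2
```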

Example

For our example where we want to fit a line $p_1(x) = a_0 + a_1 x$ to the data points $(1, 2.2)$, $(0.8, 2.4)$, $(0, 4.25)$, we calculate the residual vector and then use techniques from Calculus to minimize $\|r\|_2^2$. The residual vector is
$$r = \begin{pmatrix} 2.2 \\ 2.4 \\ 4.25 \end{pmatrix} - \begin{pmatrix} 1 & 1 \\ 1 & 0.8 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 2.2 - z_1 - z_2 \\ 2.4 - z_1 - 0.8 z_2 \\ 4.25 - z_1 \end{pmatrix}$$

To minimize $\|r\|_2^2$ we take the first partials with respect to $z_1$ and $z_2$ and set them equal to zero. We have
$$f = \|r\|_2^2 = (2.2 - z_1 - z_2)^2 + (2.4 - z_1 - 0.8 z_2)^2 + (4.25 - z_1)^2$$
and thus
$$\frac{\partial f}{\partial z_1} = -4.4 + 2z_1 + 2z_2 - 4.8 + 2z_1 + 1.6 z_2 - 8.5 + 2z_1 = -17.7 + 6 z_1 + 3.6 z_2 = 0$$
$$\frac{\partial f}{\partial z_2} = -4.4 + 2z_1 + 2z_2 - 3.84 + 1.6 z_1 + 1.28 z_2 = -8.24 + 3.6 z_1 + 3.28 z_2 = 0$$
So we have to solve the linear system
$$\begin{pmatrix} 6 & 3.6 \\ 3.6 & 3.28 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} 17.7 \\ 8.24 \end{pmatrix}$$
whose solution is $(4.225, -2.125)^T$.

We now want to determine:
1. Does the linear least squares problem always have a solution?
2. Does the linear least squares problem always have a unique solution?
3. How can we efficiently solve the linear least squares problem?

Theorem: The linear least squares problem always has a solution. It is unique if $A$ has linearly independent columns. The solution of the problem can be found by solving the normal equations $A^T A x = A^T b$.
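The calculus can be checked symbolically. This sketch (assuming SymPy, an added illustration rather than part of the notes) forms $f = \|r\|_2^2$, takes the two partials, and solves the resulting linear system:

```python
import sympy as sp

z1, z2 = sp.symbols("z1 z2")

# f = ||r||_2^2 for the line-fitting example above.
f = (2.2 - z1 - z2)**2 + (2.4 - z1 - 0.8*z2)**2 + (4.25 - z1)**2

# Set both partial derivatives to zero and solve.
sol = sp.solve([sp.diff(f, z1), sp.diff(f, z2)], [z1, z2])
print(sol)  # expected: {z1: 4.225, z2: -2.125}
```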

Before we prove this, recall that the matrix $A^T A$ is symmetric because
$$(A^T A)^T = A^T (A^T)^T = A^T A$$
and is positive semi-definite because
$$x^T (A^T A) x = (x^T A^T)(A x) = (Ax)^T (Ax) = y^T y \ge 0 \quad \text{where } y = Ax.$$

Now $y^T y$ is just the square of the Euclidean length of $y$, so it is only zero if $y = 0$. Can $y$ ever be zero? Remember that $y = Ax$, so if $x \in N(A)$ then $y = 0$. When can the rectangular matrix $A$ have something in the null space other than the zero vector? When we can take a linear combination of the columns of $A$ (with coefficients not all zero) and get zero, i.e., when the columns of $A$ are linearly dependent. Another way to say this is that if the columns of $A$ are linearly independent, then $A^T A$ is positive definite; otherwise it is positive semi-definite (meaning that $x^T A^T A x \ge 0$). Notice in our theorem we have that the solution is unique if $A$ has linearly independent columns. An equivalent statement would be to require $N(A) = \{0\}$.

Proof: First we show that the problem always has a solution. Recall that $\mathcal{R}(A)$ and $N(A^T)$ are orthogonal complements in $\mathbb{R}^m$. This tells us that we can write any vector in $\mathbb{R}^m$ as the sum of a vector in $\mathcal{R}(A)$ and one in $N(A^T)$. To this end we write
$$b = b_1 + b_2 \quad \text{where } b_1 \in \mathcal{R}(A), \; b_2 \in \mathcal{R}(A)^\perp = N(A^T).$$
Now the residual is given by
$$b - Ax = (b_1 + b_2) - Ax.$$
Because $b_1 \in \mathcal{R}(A)$, the equation $Ax = b_1$ is always solvable, and for such an $x$ the residual is $r = b_2$. Taking norms, $\|r\|_2 = \|b_2\|_2$; we can never get rid of this term unless $b \in \mathcal{R}(A)$ entirely. So the problem is always solvable, and a solution is a vector $x$ such that $Ax = b_1$ where $b_1 \in \mathcal{R}(A)$.

When does $Ax = b_1$ have a unique solution? It is unique when the columns of $A$ are linearly independent, or equivalently $N(A) = \{0\}$.

Lastly we must show that the way to find the solution $x$ is by solving the normal equations; note that the normal equations are a square $n \times n$ system, and when $A$ has linearly independent columns the coefficient matrix $A^T A$ is invertible with rank $n$. If we knew what $b_1$ was, then we could simply solve $Ax = b_1$, but we don't know what the decomposition $b = b_1 + b_2$ is, simply that it is guaranteed to exist. To demonstrate that the $x$ which minimizes $\|b - Ax\|_2$ is found by solving $A^T A x = A^T b$, we first note that the normal equations can be written as $A^T (b - Ax) = 0$, which is just $A^T$ times the residual vector, so we need to show $A^T r = 0$ to prove the result. From what we have already done we know that
$$A^T (b - Ax) = A^T (b_1 + b_2 - Ax) = A^T b_2$$

Recall that $b_2 \in \mathcal{R}(A)^\perp = N(A^T)$, which means that $A^T b_2 = 0$, and we have
$$A^T (b - Ax) = 0 \;\Rightarrow\; A^T A x = A^T b.$$
The proof relies upon the fact that $\mathcal{R}(A)$ and $N(A^T)$ are orthogonal complements and that this implies we can write any vector as the sum of a vector in $\mathcal{R}(A)$ and one in its orthogonal complement.

Example

We return to our previous example and now determine the line which fits the data in the linear least squares sense; after we obtain the line we will compute the $\ell_2$ norm of the residual. We now know that the linear least squares problem has a solution, and in our case it is unique because $A$ has linearly independent columns. All we have to do is form the normal equations and solve as usual. The normal equations
$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 0.8 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & 0.8 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0.8 & 0 \end{pmatrix} \begin{pmatrix} 2.2 \\ 2.4 \\ 4.25 \end{pmatrix}$$
simplify to
$$\begin{pmatrix} 3.0 & 1.8 \\ 1.8 & 1.64 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} 8.85 \\ 4.12 \end{pmatrix}$$

which has the solution $(4.225, -2.125)^T$, giving the line $y(x) = 4.225 - 2.125x$. If we calculate the residual vector we have
$$\begin{pmatrix} 2.2 - y(1) \\ 2.4 - y(0.8) \\ 4.25 - y(0) \end{pmatrix} = \begin{pmatrix} 0.1 \\ -0.125 \\ 0.025 \end{pmatrix}$$
which has an $\ell_2$ norm of 0.162019.

So far we have only talked about the inverse of a square matrix. However, one can define a pseudo-inverse of a rectangular matrix. If $A$ is an $m \times n$ matrix with linearly independent columns, then a pseudo-inverse (sometimes called a left inverse) of $A$ is
$$A^\dagger = (A^T A)^{-1} A^T$$
which is the matrix in our solution to the normal equations,
$$x = (A^T A)^{-1} A^T b.$$
It is called the pseudo-inverse of the rectangular matrix $A$ because
$$A^\dagger A = (A^T A)^{-1} A^T A = I.$$
Note that if $A$ is square and invertible the pseudo-inverse reduces to $A^{-1}$ because $(A^T A)^{-1} A^T = A^{-1} (A^T)^{-1} A^T = A^{-1}$.
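A short sketch (NumPy assumed; an added illustration, not the notes' own code) confirming that the normal equations and the pseudo-inverse give the same solution for this example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.8],
              [1.0, 0.0]])
b = np.array([2.2, 2.4, 4.25])

# Normal equations: solve (A^T A) x = A^T b.
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)  # expected: [ 4.225 -2.125]

# Pseudo-inverse route: x = A^+ b. np.linalg.pinv uses the SVD internally,
# but agrees with (A^T A)^{-1} A^T when A has linearly independent columns.
x_pinv = np.linalg.pinv(A) @ b
print(np.allclose(x, x_pinv))  # True

# Residual and its l2 norm (about 0.162019 for this data).
r = b - A @ x
print(np.linalg.norm(r))
```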

We can also find a polynomial of higher degree which fits a set of data. The following example illustrates this.

Example

State the linear least squares problem of finding the quadratic polynomial which fits the following data in a linear least squares sense; determine if it has a unique solution; calculate the solution and calculate the $\ell_2$ norm of the residual vector.
$$(0, 0), \quad (1, 1), \quad (3, 2), \quad (4, 5)$$

In this case we seek a polynomial of the form $p(x) = a_0 + a_1 x + a_2 x^2$. Our overdetermined system is
$$\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 3 & 9 \\ 1 & 4 & 16 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 2 \\ 5 \end{pmatrix}$$
So the linear least squares problem is to find a vector $x \in \mathbb{R}^3$ which minimizes
$$\left\| \begin{pmatrix} 0 \\ 1 \\ 2 \\ 5 \end{pmatrix} - A \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix} \right\|_2^2$$
for all $z \in \mathbb{R}^3$, where $A$ is the $4 \times 3$ matrix given above. We see that $A$ has linearly independent columns so its rank is 3, and thus the linear least squares problem has a unique solution. The normal equations are
$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 3 & 4 \\ 0 & 1 & 9 & 16 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 3 & 9 \\ 1 & 4 & 16 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 3 & 4 \\ 0 & 1 & 9 & 16 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ 2 \\ 5 \end{pmatrix}$$

leading to the square system
$$\begin{pmatrix} 4 & 8 & 26 \\ 8 & 26 & 92 \\ 26 & 92 & 338 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 8 \\ 27 \\ 99 \end{pmatrix}$$

Solving this we get $a_0 = 3/10$, $a_1 = -7/30$, $a_2 = 1/3$. Our residual vector is
$$r = \begin{pmatrix} 0 - p(0) \\ 1 - p(1) \\ 2 - p(3) \\ 5 - p(4) \end{pmatrix} = \begin{pmatrix} -0.3 \\ 0.6 \\ -0.6 \\ 0.3 \end{pmatrix}$$
and the square of its $\ell_2$ norm is
$$\|r\|_2^2 = (-0.3)^2 + 0.6^2 + (-0.6)^2 + 0.3^2 = 0.9$$
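As before, the hand computation can be verified numerically. A minimal NumPy sketch of the quadratic fit (an added illustration, not from the notes):

```python
import numpy as np

# Quadratic least squares fit to (0,0), (1,1), (3,2), (4,5).
x = np.array([0.0, 1.0, 3.0, 4.0])
f = np.array([0.0, 1.0, 2.0, 5.0])

# Vandermonde-style matrix with columns 1, x, x^2.
A = np.column_stack([np.ones_like(x), x, x**2])
coeffs = np.linalg.solve(A.T @ A, A.T @ f)
print(coeffs)  # expected: [0.3, -0.2333..., 0.3333...], i.e. 3/10, -7/30, 1/3

r = f - A @ coeffs
print(np.sum(r**2))  # expected: 0.9
```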

Now it seems as if we are done: we know when the solution is unique and we have a method for determining the solution when it is unique. What else do we need? Unfortunately, forming the normal equations works well for hand calculations but is not the preferred method for computations. Why is this? To form the normal equations we must compute $A^T A$. This can cause problems, as the following theorem tells us.

Theorem: Let $A$ have linearly independent columns. Then
$$\kappa_2(A)^2 = \kappa_2(A^T A)$$

where $\kappa_2(A) = \|A\|_2 \, \|A^\dagger\|_2$. Thus when we form $A^T A$ we are squaring the condition number of the original matrix; the sketch below illustrates this. The squaring is the major reason that solving the normal equations is not a preferred computational method. A more subtle problem is that the computed $A^T A$ may not be positive definite even when $A$ has linearly independent columns, so we can't use Cholesky's method.
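Here is that sketch (NumPy assumed; the matrix is an arbitrary illustrative example, not from the notes). Scaling one column by $10^{-4}$ makes $A$ moderately ill-conditioned, and $\kappa_2(A^T A)$ comes out as roughly the square of $\kappa_2(A)$:

```python
import numpy as np

# Condition number of A^T A is the square of the condition number of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ np.diag([1.0, 1.0, 1.0, 1.0, 1e-4])
print(np.linalg.cond(A))        # kappa_2(A)
print(np.linalg.cond(A.T @ A))  # approximately kappa_2(A)**2
```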

Can we use any of our previous results from linear algebra to help us solve the linear least squares problem? We looked at three different decompositions: LU and its variants, QR, and the SVD. We use LU (or its variants) to solve the normal equations. Can we use QR or the SVD of $A$? In fact, we can use both. Recall that an $m \times n$ matrix with $m > n$ and rank $n$ has the QR decomposition
$$A = Q \begin{pmatrix} R \\ 0 \end{pmatrix}$$
where $Q$ is an $m \times m$ orthogonal matrix, $R$ is an $n \times n$ upper triangular matrix, and $0$ represents an $(m-n) \times n$ zero matrix. Now to see how we can use the QR decomposition to solve the linear least squares problem, we take $Q^T r$ where $r = b - Ax$ to get
$$Q^T r = Q^T b - Q^T A x = Q^T b - Q^T Q \begin{pmatrix} R \\ 0 \end{pmatrix} x$$

Now $Q$ is orthogonal so $Q^T Q = I$, so if we let $Q^T b = \begin{pmatrix} c \\ d \end{pmatrix}$, we have
$$Q^T r = Q^T b - \begin{pmatrix} R \\ 0 \end{pmatrix} x = \begin{pmatrix} c \\ d \end{pmatrix} - \begin{pmatrix} Rx \\ 0 \end{pmatrix} = \begin{pmatrix} c - Rx \\ d \end{pmatrix}$$

Now also recall that an orthogonal matrix preserves the $\ell_2$ length of any vector, i.e., $\|Qy\|_2 = \|y\|_2$ for $Q$ orthogonal. Thus we have $\|Q^T r\|_2 = \|r\|_2$ and hence
$$\|r\|_2^2 = \|Q^T r\|_2^2 = \|c - Rx\|_2^2 + \|d\|_2^2$$
So to minimize the residual we must find $x$ which solves $Rx = c$, and thus the minimum value of the residual is $\|r\|_2 = \|d\|_2$. In conclusion, once we have a QR decomposition of $A$ with linearly independent columns, the solution to the linear least squares problem is the solution to the upper triangular system $Rx = c$, where $c$ is the first $n$ entries of $Q^T b$, and the residual is the remaining entries of $Q^T b$.
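A minimal NumPy sketch of the QR route for the line-fitting example (an added illustration, not from the notes):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.8],
              [1.0, 0.0]])
b = np.array([2.2, 2.4, 4.25])

n = A.shape[1]
Q, R = np.linalg.qr(A, mode="complete")  # Q is 3x3, R is 3x2 with a zero last row
qtb = Q.T @ b
c, d = qtb[:n], qtb[n:]   # first n entries and the remainder of Q^T b

# Solve the upper triangular system R x = c.
x = np.linalg.solve(R[:n, :], c)
print(x)                  # expected: [ 4.225 -2.125]
print(np.linalg.norm(d))  # minimum residual norm, about 0.162019
```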

Now we want to see how we can use the SVD to solve the linear least squares problem. Recall that you had a homework problem to use the SVD to compute the pseudo-inverse of an $m \times n$ matrix $A$, so we essentially need this. Recall that the SVD of an $m \times n$ matrix $A$ is given by
$$A = U \Sigma V^T$$
where $U$ is an $m \times m$ orthogonal matrix, $V$ is an $n \times n$ orthogonal matrix, and $\Sigma$ is an $m \times n$ diagonal matrix (i.e., $\Sigma_{ij} = 0$ for all $i \ne j$). Note that this also says that $U^T A V = \Sigma$. Because here $m > n$ we write $\Sigma$ as
$$\Sigma = \begin{pmatrix} \hat\Sigma & 0 \\ 0 & 0 \end{pmatrix}$$

where $\hat\Sigma$ is a square invertible diagonal matrix. The following result gives us the solution to the linear least squares problem.

Theorem: Let $A$ have the singular value decomposition given above. Then the vector $x$ given by
$$x = V \begin{pmatrix} \hat\Sigma^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^T b$$
minimizes $\|b - Az\|_2$, i.e., $x$ is the solution of the linear least squares problem.

We compute our residual and use the fact that $V V^T = I$ to get
$$r = b - Ax = b - A V V^T x$$
Now once again using the fact that an orthogonal matrix preserves the $\ell_2$ length of a vector, we have
$$\|r\|_2^2 = \|b - A V V^T x\|_2^2 = \|U^T (b - A V V^T x)\|_2^2 = \|U^T b - (U^T A V) V^T x\|_2^2 = \|U^T b - \Sigma V^T x\|_2^2$$
Writing $U^T b = (c_1, c_2)^T$ and $V^T x = (z_1, z_2)^T$ we have
$$\|r\|_2^2 = \left\| \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} - \begin{pmatrix} \hat\Sigma & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} \right\|_2^2 = \left\| \begin{pmatrix} c_1 - \hat\Sigma z_1 \\ c_2 \end{pmatrix} \right\|_2^2$$
So the residual is minimized when $c_1 - \hat\Sigma z_1 = 0$; note that $z_2$ is arbitrary, so we set it to zero. We have
$$V^T x = z = \begin{pmatrix} \hat\Sigma^{-1} c_1 \\ 0 \end{pmatrix} \;\Rightarrow\; x = V \begin{pmatrix} \hat\Sigma^{-1} c_1 \\ 0 \end{pmatrix} \;\Rightarrow\; x = V \begin{pmatrix} \hat\Sigma^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^T b$$

because $U^T b = (c_1, c_2)^T$.
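Finally, a minimal NumPy sketch of the SVD route for the same line-fitting example (an added illustration; the reduced SVD is used so that $\hat\Sigma$ is just the vector of singular values):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.8],
              [1.0, 0.0]])
b = np.array([2.2, 2.4, 4.25])

# Reduced SVD: U is 3x2, s holds the singular values, Vt is 2x2.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# x = V Sigma^{-1} U^T b, inverting only the nonzero singular values.
x = Vt.T @ ((U.T @ b) / s)
print(x)  # expected: [ 4.225 -2.125]

# Same answer as QR and the normal equations.
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```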
