The Nature of Risk Preferences:
Evidence from Insurance Choices

Levon Barseghyan† (Cornell University)
Francesca Molinari (Cornell University)
Ted O'Donoghue (Cornell University)
Joshua C. Teitelbaum (Georgetown University)

July 21, 2010

Abstract

We use data on households' deductible choices in auto and home insurance to estimate a structural model of risky choice that incorporates "standard" risk aversion (concave utility over final wealth), loss aversion, and nonlinear probability weighting. Our estimates indicate that nonlinear probability weighting plays the most important role in explaining the data. More specifically, we find that standard risk aversion is small, loss aversion is nonexistent, and nonlinear probability weighting is large. When we estimate restricted models, we find that nonlinear probability weighting alone can better explain the data than standard risk aversion alone, loss aversion alone, and standard risk aversion and loss aversion combined. Our main findings are robust to a variety of modeling assumptions.

JEL classifications: D01, D03, D12, D81, G22
Keywords: deductible, loss aversion, probability weighting, risk aversion
We are grateful to Darcy Steeg Morris for excellent research assistance. For helpful comments, we thank Matthew Rabin as well as seminar and conference participants at Berkeley, UCLA, the Second Annual Behavioral Economics Conference, the Summer 2010 Workshop on Behavioral/Institutional Research and Financial Regulation, FUR XIV, and the 85th Annual Conference of the Western Economic Association International. Barseghyan acknowledges financial support from the Institute for Social Sciences at Cornell University. Molinari acknowledges financial support from NSF grants SES-0617482 and SES-0922330.

† Corresponding author: Levon Barseghyan, Department of Economics, Cornell University, 456 Uris Hall, Ithaca, NY 14853 (lb247@cornell.edu).


1 Introduction

Households are averse to risk; for example, they require a premium to invest in equity and they purchase insurance at actuarially unfair rates. The standard expected utility model attributes risk aversion to a concave utility function defined over final wealth states (diminishing marginal utility for wealth). Research in behavioral economics, however, suggests that the standard account is inadequate. The leading alternative account, offered by prospect theory (Kahneman and Tversky 1979; Tversky and Kahneman 1992), posits that two additional features of risk preferences, loss aversion and nonlinear probability weighting, play important roles in explaining aversion to risk.
In this paper, we use data on households' deductible choices in auto and home insurance to estimate a structural model of risky choice that incorporates "standard" risk aversion (concave utility over final wealth), loss aversion, and nonlinear probability weighting. Our estimates indicate that nonlinear probability weighting plays the most important role in explaining the data. More specifically, we find that standard risk aversion is statistically significant but economically small, loss aversion is nonexistent, and nonlinear probability weighting is statistically and economically significant. When we estimate restricted models, we find that nonlinear probability weighting alone can better explain the data than standard risk aversion alone, loss aversion alone, and standard risk aversion and loss aversion combined.
Section 2 provides an overview of our data. The source of the data is a large U.S. property and casualty insurance company that offers multiple lines of insurance, including auto and home coverage. The full data set comprises yearly information on more than 400,000 households who held auto or home policies between 1998 and 2006. For each household, the data contain, inter alia, the household's deductible choices for three property damage coverages: auto collision, auto comprehensive, and home all perils. The data also include the household-coverage-specific menus of premium-deductible combinations that were available to each household when it made its deductible choices. In addition, the data contain each household's claims history for each coverage, as well as a rich set of demographic information. We utilize the data on claim realizations and demographics to assign to each household a household-coverage-specific predicted claim rate for each coverage.
Section 3 describes our theoretical framework. We first develop an underlying microeconomic model of deductible choice that incorporates standard risk aversion and loss aversion by adopting a variant of the model of reference-dependent preferences proposed by Kőszegi and Rabin (2006, 2007). We then generalize the Kőszegi-Rabin model to allow for rank-dependent nonlinear probability weighting (Quiggin 1982), and we use the one-parameter probability weighting function proposed by Prelec (1998). In specifying our econometric


model, we follow McFadden (1974, 1981) and assume random utility with additively separable choice noise. In addition, we permit each of the utility parameters to depend on observable household characteristics.
Section 4 presents the main estimation results. They suggest that nonlinear probability weighting plays the key role in explaining the households' deductible choices. Under our benchmark specification, the mean and median estimates of the coefficient of absolute risk aversion are 3.0 × 10⁻⁵ and 1.0 × 10⁻⁷, respectively; the mean and median estimates of the coefficient of loss aversion are both zero; and the mean and median estimates of the nonlinear probability weighting parameter (Prelec's α) are both 0.7 (standard linear weighting involves α = 1). Qualitatively, our results imply a small role for standard risk aversion, little to no role for loss aversion, and a large role for nonlinear probability weighting. For example, we show that our benchmark estimates imply that standard risk aversion generates a negligible increase in willingness to pay for lower deductibles (relative to the actuarially fair premium), whereas nonlinear probability weighting generates a substantial increase.
Section 5 contains a sensitivity analysis. Most importantly, we consider other probability weighting functions, including the one-parameter function proposed by Tversky and Kahneman (1992). All in all, we find that our benchmark estimates are quite robust to alternative model specifications. We conclude the paper with a brief discussion in Section 6.
Numerous previous studies structurally estimate risk preferences from observed choices, relying in most cases on nonmarket data (survey and experimental data) and in some cases on market data, including insurance data. The majority of the studies in the literature estimate models that incorporate only standard risk aversion.1 A minority, however, allow for loss aversion or nonlinear probability weighting, or both.2 Cicchetti and Dubin (1994), for instance, take an approach similar to ours, though they reach somewhat different conclusions. They use data on telephone customers' interior wire insurance choices to estimate a random utility model that allows for nonlinear probability weighting. While they find that the average customer has a relatively small degree of absolute risk aversion,3 they find only slight evidence that consumers weight line trouble probabilities nonlinearly. One limitation of their study, however, is that the interior telephone wire insurance market is characterized by extremely low, and tightly dispersed, stakes and claim probabilities.4 More recently, three
1 Two that use data on deductible choices are Cohen and Einav (2007) and Sydnor (forthcoming). The latter discusses, but does not estimate, the Kőszegi-Rabin model.
2 In addition to the studies discussed below, see, e.g., Tversky and Kahneman (1992), Hey and Orme (1994), Jullien and Salanié (2000), Choi et al. (2007), Post et al. (2008), and Tanaka et al. (2010).
3 We should note, however, that this result is a matter of dispute (Rabin and Thaler 2001; Grgeta 2003).
4 The average consumer in their sample faces a price of $0.45 per month to insure against a 0.5 percent chance of incurring a loss of $55. The authors do not report the dispersion in stakes, but they do report that claim rates vary only from 0.3 percent to 0.7 percent.

studies report findings comparable to ours, though each takes a different approach. Bruhin et al. (forthcoming) use experimental data on subjects' choices over binary money lotteries to estimate a mixture model of cumulative prospect theory. They find that approximately 20 percent of subjects can essentially be characterized as expected value maximizers, while approximately 80 percent of subjects exhibit significant nonlinear probability weighting (and small to moderate money nonlinearity). Snowberg and Wolfers (forthcoming) use data on gamblers' bets on horse races to test the fit of two models (a model with standard risk aversion alone and a model with nonlinear probability weighting alone) and find that the latter model better fits their data. Kliger and Levy (2009) use data on call options on the S&P 500 index to estimate a cumulative prospect theory model. Like us, they find that standard risk aversion is small and that nonlinear probability weighting is large, but, unlike us, they find evidence of loss aversion. A limitation of the latter two studies, however, is that they have only aggregate data, which necessitates that they take a representative agent approach and rely on equilibrium "ratio" conditions to identify the agent's utility function. Our paper complements these studies and contributes to the literature principally by utilizing disaggregated, market data in a setting of central interest to economists.

2 Data Description

2.1 Overview and Core Sample

We acquired the data from a large U.S. property and casualty insurance company. The company offers multiple lines of insurance, including auto, home, and umbrella policies. The full data set comprises yearly information on more than 400,000 households who held auto or home policies between 1998 and 2006. For each household, the data contain all the information in the company's records regarding the household's characteristics (other than identifying information) and its policies (e.g., the limits on liability coverages, the deductibles on property damage coverages, and the premiums associated with each coverage). The data also record the number of claims that each household filed with the company under each of its policies during the period of observation.
In this paper, we restrict attention to households who hold both auto and home policies and we focus on three choices: (i) the deductible for auto collision coverage; (ii) the deductible for auto comprehensive coverage; and (iii) the deductible for home all perils coverage.5 In addition, we consider only the initial deductible choices of each household. This is meant to increase confidence that we are working with active choices; one might be concerned that some households renew their policies without actively reassessing their deductible choices. Finally, we restrict attention to households who first purchased their auto and home policies from the company in the same year, in either 2005 or 2006. These restrictions are meant to avoid temporal issues, such as changes in household characteristics and in the economic environment. In the end, we are left with a core sample of 4170 households. Table 1 provides descriptive statistics for the variables we use later to estimate the households' utility parameters.

5 Auto collision coverage pays for damage to the insured vehicle caused by a collision with another vehicle or object, without regard to fault. Auto comprehensive coverage pays for damage to the insured vehicle from all other causes (e.g., theft, fire, flood, windstorm, glass breakage, vandalism, hitting or being hit by an animal, or by falling or flying objects), without regard to fault. If the insured vehicle is stolen, auto comprehensive coverage also provides a certain amount per day for transportation expenses (e.g., rental car or public transportation). Home all perils coverage pays for damage to the insured home from all causes (e.g., fire, windstorm, hail, tornadoes, vandalism, or smoke damage), except those that are specifically excluded (e.g., flood, earthquake, or war). For simplicity, we often refer to home all perils simply as home.
TABLE 1

2.2 Deductibles and Premiums

For each household in the core sample, we observe the household's deductible choices for auto collision, auto comprehensive, and home, as well as the premiums paid by the household for each type of coverage. In addition, the data contain the exact menus of premium-deductible combinations that were available to each household at the time it made its deductible choices.

Table 2 summarizes the deductible choices of the households in the core sample. For each coverage, the most popular deductible choice is $500. Table 3 summarizes the premium menus. For each coverage, it describes, for all households, the premium for coverage with a $500 deductible, as well as the marginal cost of decreasing the deductible from $500 to $250 and the marginal benefit of increasing the deductible from $500 to $1000. (Tables A.1 through A.3 in the Appendix summarize the premium menus with households grouped by their deductible choice.) The average annual premium for coverage with a $500 deductible is $180 for auto collision, $115 for auto comprehensive, and $679 for home. The average annual cost of decreasing the deductible from $500 to $250 is $54 for auto collision, $30 for auto comprehensive, and $56 for home. The average annual savings from increasing the deductible from $500 to $1000 is $41 for auto collision, $23 for auto comprehensive, and $74 for home.
TABLES 2 & 3
As Table 3 suggests, there is considerable variation in premiums across households and coverages. To illuminate the sources of such variation, we provide a generalized description of the plan the company uses to rate a policy in each line of coverage. First, upon observing the household's coverage-relevant characteristics, X, the company determines a benchmark premium p̄ (i.e., the premium associated with a benchmark deductible d̄) according to a coverage-specific rating function, p̄ = f(X). The rating function takes into account, inter alia, the household's risk tier and any applicable discounts. For each coverage, the company has roughly ten risk tiers. Assignment to a lower risk tier reduces the household's benchmark premium by a fixed percentage. These percentages are known in the industry as tier factors.

Second, the company generates a household-specific menu {(p_d, d) : d ∈ D}, which associates a premium p_d with each deductible d in the coverage-specific set of deductible options D, according to a coverage-specific multiplication rule, p_d = (g(d) · p̄) + c, where g(·) > 0 (with g(d̄) = 1) and c > 0. The multiplicative factors {g(d) : d ∈ D} are known in the industry as deductible factors, and c is known as an expense fee. The deductible factors and the expense fees are coverage specific but household invariant. Moreover, the expense fees are fixed markups that do not depend on the deductibles. The company's rating plan, including its rating function and multiplication rule, is subject to state regulation. Among other things, the regulations require that the company base its rating plan on actuarial considerations (losses and expenses) and prohibit the company and its agents from charging rates that depart from the company's rating plan.6 It is safe to assume, therefore, that the variation in premiums is exogenous to the households' risk preferences, once we control for household characteristics.

6 They also prohibit "excessive" rates and provide that insurers shall consider only "reasonable profits" in making rates. See, e.g., N.Y. Ins. Law §§ 2303, 2304 & 2314 (Consol. 2010), N.Y. Comp. Codes R. & Regs. tit. 11, § 160.2 (2010), and Dunham (2009, §§ 26.03 & 43.10).
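To make the multiplication rule concrete, the following sketch (in Python) builds a premium menu from a benchmark premium. The deductible factors and expense fee are invented for illustration (chosen so the $500-deductible premium lands near the sample average of $180 for auto collision); they are not the company's actual values.

    def premium_menu(benchmark_premium, deductible_factors, expense_fee):
        """Multiplication rule p_d = g(d) * p_bar + c, with g(d_bar) = 1 at
        the benchmark deductible and c a fixed markup."""
        return {d: g * benchmark_premium + expense_fee
                for d, g in deductible_factors.items()}

    # Hypothetical factors with a $500 benchmark deductible:
    menu = premium_menu(170.0, {250: 1.3, 500: 1.0, 1000: 0.76}, expense_fee=10.0)
    # -> {250: 231.0, 500: 180.0, 1000: 139.2}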

2.3 Claim Rates

For purposes of our analysis, we need to estimate each household's (latent) claim rate for each coverage. To estimate the claim rates, we use the full data set: 1,348,020 household-year records for auto and 1,265,229 household-year records for home. For each household-year record, the data record the number of claims filed by the household in that year. We estimate a Poisson panel regression model with random effects for each of the three claim processes, regressing the number of claims on a battery of observables. For each household in the core sample, we use the regression estimates to generate a predicted annual claim rate for each coverage, and we treat the predicted claim rates as the household's true claim rates.7

More specifically, we assume that claims follow a Poisson distribution at the household-coverage level. That is, we assume that household i's claims under coverage j in year t follow a Poisson distribution with arrival rate λijt. Under this assumption, the household's claim arrivals are independent within each coverage and across coverages. In addition, we assume that deductible choices do not influence claim rates, i.e., households do not suffer from moral hazard.8 We treat the claim rates as latent random variables and assume that

   ln λijt = βj Xijt + εij,

where Xijt is a vector of observables,9 εij is an unobserved iid error term, and exp(εij) follows a gamma distribution with unit mean and variance φj. On the basis of the foregoing assumptions, we perform standard Poisson panel regressions with random effects to obtain maximum likelihood estimates of βj and φj for each coverage j. The estimates are reported in Tables A.4 and A.5 in the Appendix. For each household i, we then use these estimates to generate a predicted claim rate λ̂ij for each coverage j, conditional on the household's (ex ante) characteristics Xij and (ex post) claims experience.

7 We note that our approach is closely related to the approach taken by Barseghyan et al. (forthcoming).
8 See infra footnotes 12 and 13.
9 In addition to the variables in Table 1, Xijt includes numerous other variables (see Tables A.4 and A.5 in the Appendix).
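The paper does not spell out the prediction step, but under the stated Poisson-gamma assumptions the expected claim rate conditional on a household's claims experience has a standard closed form (gamma-Poisson conjugacy). A minimal sketch, with function and variable names of our choosing:

    import numpy as np

    def predicted_claim_rate(x, beta, phi, past_claims, past_fitted_rates):
        """Predicted annual claim rate for one household-coverage pair.

        With ln(lambda_t) = beta'x_t + eps and exp(eps) ~ Gamma(mean 1,
        variance phi), the posterior mean of exp(eps) given Y total past
        claims and cumulative fitted intensity M = sum_t exp(beta'x_t) is
        (1/phi + Y) / (1/phi + M).
        """
        base_rate = np.exp(x @ beta)           # fitted rate for the prediction year
        Y = np.sum(past_claims)                # total claims observed in-sample
        M = np.sum(past_fitted_rates)          # sum of fitted rates over observed years
        credibility = (1.0 / phi + Y) / (1.0 / phi + M)
        return base_rate * credibility

Households with more claims than their observables predict are shifted up, and vice versa, with the strength of the adjustment governed by the variance φ.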
Table 4 summarizes the predicted claim rates for the core sample. The mean predicted claim rates for auto collision, auto comprehensive, and home are 0.072, 0.021, and 0.089, respectively, and there is substantial variation across households and coverages. Table 4 also reports pairwise correlations among the predicted claim rates and between the predicted claim rates and the premiums for coverage with a $500 deductible. Each of the pairwise correlations is positive, as expected, though none are large.
TABLE 4

3 Theoretical Framework

In this section, we describe our theoretical framework. First, we develop a microeconomic model of deductible choice. We then specify our econometric model, outline our estimation procedure, and discuss identification.

3.1 A Microeconomic Model of Deductible Choice

We assume that a household treats its deductible choices as independent decisions. This assumption is motivated, in part, by computational considerations,10 but also by the literature on "narrow bracketing" (e.g., Read et al. 1999), which suggests that when people make multiple choices, they frequently do not assess the consequences of all choices at once, but rather tend to make each choice in isolation. Thus, we develop a model for how a household chooses the deductible for a single type of insurance coverage. The coverage provides full insurance against covered losses in excess of the deductible. To simplify notation, we suppress the subscripts for household and coverage (though we remind the reader that premiums and claim rates are household and coverage specific).

10 If instead we were to assume that a household treats its deductible choices as a joint decision, then the household would face 180 options and the utility function would have over 350 terms.
The household faces a menu of premium-deductible pairs {(p_d, d) : d ∈ D}, where p_d is the premium associated with deductible d and D is the coverage-specific set of deductible options. In principle, over the course of the policy period, the household may experience zero claims, one claim, two claims, three claims, and so forth. We assume that the number of claims follows a Poisson distribution with arrival rate λ, and, for simplicity, we assume that each household experiences at most two claims.11 Hence, the probability of having zero claims is μ₀ ≡ exp(−λ), the probability of having one claim is μ₁ ≡ λ exp(−λ), and the probability of having two or more claims is μ₂ ≡ 1 − μ₀ − μ₁. In addition, we assume that the household's choice of deductible does not influence λ (i.e., there is no moral hazard),12 and that every claim exceeds the highest available deductible.13 Finally, we assume that the household knows λ (or, alternatively, that its subjective belief about its claim rate corresponds to λ).

Under the foregoing assumptions, the choice of deductible involves a choice among lotteries of the form

   L_d ≡ (−p_d, μ₀; −p_d − d, μ₁; −p_d − 2d, μ₂),

to which we refer as deductible lotteries.
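As a concrete sketch of the truncated claim process and the resulting lottery (helper names are ours):

    import math

    def claim_probabilities(lam):
        """Collapse a Poisson(lam) claim count into the three states used in
        the text: zero claims, one claim, and two or more claims."""
        mu0 = math.exp(-lam)           # Pr(0 claims)
        mu1 = lam * math.exp(-lam)     # Pr(1 claim)
        mu2 = 1.0 - mu0 - mu1          # Pr(2+ claims), treated as exactly two
        return mu0, mu1, mu2

    def deductible_lottery(premium, deductible, lam):
        """L_d = (-p_d, mu0; -p_d - d, mu1; -p_d - 2d, mu2)."""
        mu0, mu1, mu2 = claim_probabilities(lam)
        return [(-premium, mu0),
                (-premium - deductible, mu1),
                (-premium - 2 * deductible, mu2)]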
We allow for the possibility that the household's preferences over deductible lotteries are influenced by standard risk aversion, loss aversion, and nonlinear probability weighting. We incorporate standard risk aversion and loss aversion by adopting the model of reference-dependent preferences proposed by Kőszegi and Rabin (2006, 2007). In the Kőszegi-Rabin (KR) model, the utility from choosing lottery Y ≡ (y_n, q_n), n = 1, ..., N, given a reference lottery Ỹ ≡ (ỹ_m, q̃_m), m = 1, ..., M, is

   U(Y|Ỹ) ≡ Σₙ Σₘ qₙ q̃ₘ [u(yₙ) + v(yₙ|ỹₘ)].

The function u represents standard "intrinsic" utility defined over final wealth states, and standard risk aversion is captured by the concavity of u. The function v represents the "gain-loss" utility that results from experiencing gains or losses relative to the reference point. For v, we follow KR and use the functional form

   v(y|ỹ) = η [u(y) − u(ỹ)]     if u(y) > u(ỹ),
   v(y|ỹ) = η λ [u(y) − u(ỹ)]   if u(y) ≤ u(ỹ).

In this formulation, the magnitude of gain-loss utility is determined by the intrinsic utility gain or loss relative to consuming the reference point. Moreover, gain-loss utility takes a two-part linear form, where η ≥ 0 captures the importance of gain-loss utility relative to intrinsic utility and λ ≥ 1 captures loss aversion. The model reduces to expected utility when η = 0 or λ = 1. But for η > 0 and λ > 1, the household's behavior is influenced by risk aversion (via u) and loss aversion (via v).

KR propose that the reference lottery equals recent expectations about outcomes; i.e., if a household expects to face lottery Ỹ, then its reference lottery becomes Ỹ. However, because situations vary in terms of when a household deliberates about its choices and when it commits to its choices, KR offer a number of solution concepts for the determination of the reference lottery. We assume that the reference lottery is determined according to what KR call a "choice-acclimating personal equilibrium" (CPE). Formally:

Definition (CPE). Given a choice set 𝒴, a lottery Y ∈ 𝒴 is a choice-acclimating personal equilibrium if for all Y′ ∈ 𝒴, U(Y|Y) ≥ U(Y′|Y′).

In a CPE, a household's reference lottery corresponds to its choice. KR argue that CPE is appropriate in situations where the household commits to a choice well in advance of the resolution of uncertainty, and thus it knows that by the time the uncertainty is resolved and it experiences utility, it will have become accustomed to its choice and hence expect the lottery induced by its choice.14 In particular, KR suggest that CPE is the appropriate solution concept for insurance applications.

Under the KR model using CPE, the utility to the household from choosing deductible lottery L_d = (−p_d, μ₀; −p_d − d, μ₁; −p_d − 2d, μ₂) is

   U(L_d|L_d) = μ₀ u(w − p_d) + μ₁ u(w − p_d − d) + μ₂ u(w − p_d − 2d)
                − Λ μ₀ μ₁ [u(w − p_d) − u(w − p_d − d)]
                − Λ μ₀ μ₂ [u(w − p_d) − u(w − p_d − 2d)]
                − Λ μ₁ μ₂ [u(w − p_d − d) − u(w − p_d − 2d)],        (1)

where Λ ≡ η(λ − 1) and w is the household's initial wealth. From equation (1), it is clear that we cannot separately identify the parameters η and λ. Instead, we estimate the product Λ ≡ η(λ − 1).15 We refer to Λ as the coefficient of "net" loss aversion.

11 Because claim rates are small (typically less than 0.1, and almost always less than 0.3), the likelihood of more than two claims is very small. Even for a claim rate of 0.3, for instance, the probability of more than two claims is 0.0036.
12 More specifically, we assume there is neither ex ante moral hazard (deductible choice does not influence the frequency of claimable events) nor ex post moral hazard (deductible choice does not influence the decision to file a claim).
13 For arguments and evidence in support of the latter two assumptions, see Cohen and Einav (2007), Sydnor (forthcoming), and Barseghyan et al. (forthcoming).
14 The assumption that the household commits to its choice is important. Suppose instead that the household has the opportunity to revise its choice just before the uncertainty is resolved. Then even after "choosing" Y and coming to expect it, if U(Y′|Y) > U(Y|Y) the household would want to revise its choice just before the uncertainty is resolved. KR propose alternative solution concepts that are more appropriate in such situations, where a household thinks about the problem in advance but does not commit to a choice until just before the uncertainty is resolved.
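Equation (1) is mechanical to evaluate. A minimal sketch, reusing claim_probabilities from above and treating the intrinsic utility u as a callable:

    def kr_cpe_utility(u, w, premium, deductible, lam, net_loss_aversion):
        """Equation (1): KR utility under CPE for the deductible lottery L_d,
        with net_loss_aversion playing the role of the product eta*(lambda - 1)."""
        mu0, mu1, mu2 = claim_probabilities(lam)
        u0 = u(w - premium)                    # no claim
        u1 = u(w - premium - deductible)       # one claim
        u2 = u(w - premium - 2 * deductible)   # two claims
        intrinsic = mu0 * u0 + mu1 * u1 + mu2 * u2
        gain_loss = (mu0 * mu1 * (u0 - u1)
                     + mu0 * mu2 * (u0 - u2)
                     + mu1 * mu2 * (u1 - u2))
        return intrinsic - net_loss_aversion * gain_loss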
Next, we incorporate nonlinear probability weighting. In their original prospect theory paper, Kahneman and Tversky (1979) suggest that certain choice phenomena are best captured by nonlinear probability weighting, whereby individual probabilities are transformed into decision weights. Their original approach, however, encounters problems, most notably violations of stochastic dominance, which Quiggin (1982) solves by proposing a rank-dependent approach. Instead of transforming individual probabilities into decision weights, the decumulative distribution of each lottery is transformed into a vector of decision weights for that lottery, where the decision weights sum to one. Over the years, several forms of nonlinear probability weighting have been proposed (e.g., Tversky and Kahneman 1992; Lattimore et al. 1992; Prelec 1998). We adopt the rank-dependent approach of Quiggin (1982) and use the one-parameter probability weighting function proposed by Prelec (1998).16

Formally, for deductible lottery L_d ≡ (−p_d, μ₀; −p_d − d, μ₁; −p_d − 2d, μ₂), we assume the decision weights are

   ω₀ ≡ π(μ₀)
   ω₁ ≡ π(μ₁ + μ₀) − π(μ₀)
   ω₂ ≡ 1 − π(μ₁ + μ₀),

where the probability weighting function π is given by

   π(μ) = exp(−(−ln μ)^α),        (2)

with 0 < α ≤ 1. Note that (2) nests standard linearity in the probabilities for α = 1.17

15 The inability to separately identify η and λ applies to any application of CPE, and not just deductible lotteries, because for any lottery Y, η and λ appear in U(Y|Y) only as the product η(λ − 1). For other solution concepts, η and λ become separately identified.
16 In Section 5.1, we confirm that our results are robust to a transformation of the cumulative distribution. We also confirm the robustness of our results to several other probability weighting functions.
17 Figure 1 depicts (2) for α = 0.7 (our benchmark estimate).
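A sketch of equation (2) and the decision weights above; the helper names are ours:

    import math

    def prelec(p, alpha):
        """Prelec (1998) one-parameter weighting function, equation (2)."""
        if p <= 0.0:
            return 0.0
        return math.exp(-((-math.log(p)) ** alpha))

    def decision_weights(mu0, mu1, alpha):
        """Rank-dependent weights from the decumulative distribution, with
        outcomes ordered best (no claim) to worst (two claims)."""
        w0 = prelec(mu0, alpha)
        w1 = prelec(mu1 + mu0, alpha) - prelec(mu0, alpha)
        w2 = 1.0 - prelec(mu1 + mu0, alpha)
        return w0, w1, w2

    # Example: mu1 = 0.02 (so mu0 is about 0.9798) and alpha = 0.7 give
    # w1 of roughly 0.06, the overweighting reported later in Table 6.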
Generalizing the KR model to allow for nonlinear probability weighting requires that we specify the decision weights for both the chosen lottery and the reference lottery. KR offer no guidance on this modeling choice, as they abstract from nonlinear decision weights. To our minds, it seems natural to assume that households treat the chosen lottery and the reference lottery symmetrically. Accordingly, we assume that the decision weights are the same for the chosen lottery and the reference lottery.

Given the foregoing assumptions, the household chooses a deductible lottery to maximize equation (1), except that the claim probabilities μ₀, μ₁, and μ₂ are replaced by the decision weights ω₀, ω₁, and ω₂.

3.2 Econometric Model

To specify our econometric model, we first must account for observationally equivalent households choosing different deductibles. We follow McFadden (1974, 1981) and assume random utility with additively separable choice noise. Specifically, we assume that the utility from deductible d ∈ D is given by

   V(d) ≡ Ũ(L_d|L_d) + ε_d,        (3)

where Ũ(L_d|L_d) ≡ U(L_d|L_d)/u′(w) and ε_d is an iid random variable. In Ũ, we divide U by u′(w) to normalize the scale of utility. The term ε_d represents error in evaluating utility (Hey and Orme 1994). We assume that ε_d follows a type 1 extreme value distribution with scale parameter σ.18 Hence, a household chooses deductible d when V(d) > V(d′) for all d′ ≠ d, or equivalently when

   ε_{d′} − ε_d < Ũ(L_d|L_d) − Ũ(L_{d′}|L_{d′}) for all d′ ≠ d.

The probability that the household chooses deductible d is

   Pr(d) = Pr(ε_{d′} − ε_d < Ũ(L_d|L_d) − Ũ(L_{d′}|L_{d′}) for all d′ ≠ d)
         = exp(Ũ(L_d|L_d)/σ) / Σ_{d′∈D} exp(Ũ(L_{d′}|L_{d′})/σ).        (4)

In the estimation, we construct the likelihood function from these choice probabilities (see Section 3.3).

18 The scale parameter σ is a monotone transformation of the variance of ε_d, and thus a larger σ means a larger variance.
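Equation (4) is a standard logit; a minimal sketch with the usual max-subtraction for numerical stability:

    import numpy as np

    def choice_probabilities(utilities, sigma):
        """Equation (4): type 1 extreme value noise implies logit choice
        probabilities over the deductible menu.  utilities holds
        U~(L_d|L_d) for each d in D; sigma is the noise scale."""
        v = np.asarray(utilities, dtype=float) / sigma
        v -= v.max()                  # guard against overflow in exp
        expv = np.exp(v)
        return expv / expv.sum()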
Next, we must specify intrinsic utility u. In our main analysis, we follow Cohen and Einav (2007) and Barseghyan et al. (forthcoming) and consider a second-order Taylor expansion of the utility function u(w − x) around w. This yields

   [u(w − x) − u(w)] / u′(w) = −x − (r/2) x²,

where r ≡ −u″(w)/u′(w) is the coefficient of absolute risk aversion. Applied to equation (1) with nonlinear probability weighting, this yields

   Ũ(L_d|L_d) − u(w)/u′(w) = −[p_d + ω₁ d + ω₂ 2d]
        − (r/2) [ω₀ p_d² + ω₁ (p_d + d)² + ω₂ (p_d + 2d)²]
        − Λ [ω₀ω₁ d + ω₀ω₂ 2d + ω₁ω₂ d]
        + (Λ r/2) {ω₀ω₁ [p_d² − (p_d + d)²]
                   + ω₀ω₂ [p_d² − (p_d + 2d)²]
                   + ω₁ω₂ [(p_d + d)² − (p_d + 2d)²]}.        (5)

Note that because the term u(w)/u′(w) appears for all n, it does not affect the choice probabilities, and thus the choice probabilities are independent of w.

The first term on the right-hand side of equation (5) reflects an expected value with respect to the decision weights. The second term is due to standard risk aversion; it is the sum of second-order differences in actual payoffs in the three states of the world, weighted by their respective decision weights and scaled by the household's standard risk aversion parameter. The third term arises from loss aversion; because payment of the premium occurs in all states of the world, it is not perceived as a loss under CPE. The last term is the "interaction" term between loss aversion and standard risk aversion; because premium payments do not directly affect the household's utility through loss aversion, it is only the second-order differences in payoffs, scaled by the standard risk aversion and net loss aversion parameters, that are relevant for the household's utility.

Note that, with this specification, we estimate a local approximation of the household's coefficient of absolute risk aversion. This approach is instrumental to our purposes, because even with the scale normalization, u(w − x)/u′(w) can depend on w, which we do not observe (though we note that, in Section 4.2, we endeavor to account for wealth by using home value as a proxy).19 Moreover, this specification provides insight into important classes of utility functions. In particular, it is an exact approximation for quadratic utilities, which are commonly used in finance, and it is an appropriate approximation for plausible CRRA utilities: if u(w) = w^(1−ρ)/(1−ρ), ρ > 0, then for stakes on the order of $1000, ρ in the low single digits, and wealth on the order of $100,000, each term in the full Taylor expansion of u(w − x) around w is roughly 1 percent of the magnitude of the prior term.

19 Alternately, we could assume CARA utility, for which u(w − x)/u′(w) is independent of w. While we view CARA utility as too restrictive, we note that our main conclusions also hold for the CARA specification (see Section 5.2).
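A direct transcription of equation (5), dropping the constant u(w)/u′(w), which cancels out of the choice probabilities; names are ours:

    def taylor_utility(p, d, weights, r, net_loss_aversion):
        """Equation (5): second-order approximation of U~(L_d|L_d), up to a
        constant.  weights = (w0, w1, w2) are the decision weights."""
        w0, w1, w2 = weights
        x0, x1, x2 = p, p + d, p + 2.0 * d   # out-of-pocket loss in each state
        value = -(w0 * x0 + w1 * x1 + w2 * x2)
        value -= (r / 2.0) * (w0 * x0**2 + w1 * x1**2 + w2 * x2**2)
        value -= net_loss_aversion * (w0 * w1 * (x1 - x0)
                                      + w0 * w2 * (x2 - x0)
                                      + w1 * w2 * (x2 - x1))
        value += (net_loss_aversion * r / 2.0) * (w0 * w1 * (x0**2 - x1**2)
                                                  + w0 * w2 * (x0**2 - x2**2)
                                                  + w1 * w2 * (x1**2 - x2**2))
        return value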
Finally, in our main analysis we assume that the household's true claim rate λ corresponds to its predicted claim rate λ̂ (see Section 2.3). Thus, the decision weights are specified as

   ω₀ ≡ π(μ̂₀)
   ω₁ ≡ π(μ̂₁ + μ̂₀) − π(μ̂₀)
   ω₂ ≡ 1 − π(μ̂₁ + μ̂₀),

where μ̂₀ ≡ exp(−λ̂), μ̂₁ ≡ λ̂ exp(−λ̂), and π is defined by equation (2). We note that while this approach captures heterogeneity in claim rates based on observables, it does not account for potential unobserved heterogeneity, which could lead to λ ≠ λ̂. In other words, even if the household knows (or believes) λ to be its true claim rate (as we assume), the predicted claim rate λ̂ may not correspond to λ due to unobserved heterogeneity. Indeed, Cohen and Einav (2007) and Barseghyan et al. (forthcoming) find evidence of unobserved heterogeneity in claim rates, though in both studies the degree of unobserved heterogeneity is relatively small. We endeavor to account for unobserved heterogeneity in an extension of our main analysis (see Section 4.3).

3.3 Estimation Procedure

We observe data {D_ij, x_ij}, where D_ij is household i's deductible choice for coverage j and x_ij ≡ (Z_i, λ̂_ij, P_ij). In x_ij, Z_i is a vector of household characteristics, λ̂_ij is household i's predicted claim rate for coverage j (as described in Section 2.3), and P_ij denotes household i's menu of premiums for coverage j. In our benchmark specification, Z_i comprises a constant and the variables in Table 1, except for home value (see Section 4.1). In all specifications, Z_i is a strict subset of the vector of observables X_ij that we use to generate λ̂_ij.

There are four model parameters to be estimated:

   r, the coefficient of absolute risk aversion (r = 0 means no risk aversion);
   Λ, the coefficient of net loss aversion (Λ = 0 means no loss aversion);
   α, the degree of nonlinear probability weighting (α = 1 means linearity); and
   σ, the scale of choice noise (σ = 0 means no choice noise).

In our main analysis, we assume that σ does not vary across households or coverages. However, we allow the preference parameters to depend on household characteristics Z_i as follows:

   ln r_i = β_r Z_i,   ln Λ_i = β_Λ Z_i,   ln α_i = β_α Z_i.

We estimate the model via maximum likelihood using combined data for all three coverages. For each household i, the conditional loglikelihood function is

   ℓ_i(θ) ≡ Σ_j Σ_{d∈D_j} 1(D_ij = d) ln[Pr(D_ij = d | x_ij; θ)],

where θ ≡ (β_r, β_Λ, β_α, σ), the indicator function selects the deductible chosen by household i for coverage j, and Pr(D_ij = d | x_ij; θ) denotes the choice probability in equation (4). We estimate θ by maximizing Σ_i ℓ_i(θ). We then use θ̂ to assign fitted values of r_i, Λ_i, and α_i to each household i.
As noted above, we assume that households treat their deductible choices as independent decisions, and we also assume no coverage-specific effects. In Section 5.5, we revisit these assumptions by both estimating the model separately for each coverage and estimating the model with coverage-specific choice noise.
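Putting the pieces together, estimation reduces to maximizing the sum of the household loglikelihoods over θ = (β_r, β_Λ, β_α, σ). The sketch below shows the shape of that computation using the helpers defined earlier; the data-record layout (h.Z, cov.menu, and so on) is hypothetical, and the paper's actual implementation may differ (for instance, the log link alone does not impose α ≤ 1).

    import numpy as np
    from scipy.optimize import minimize

    def fitted_parameters(theta, Z):
        """ln r, ln Lambda, ln alpha are linear in the covariates Z."""
        k = len(Z)
        beta_r, beta_L, beta_a = theta[:k], theta[k:2*k], theta[2*k:3*k]
        return np.exp(Z @ beta_r), np.exp(Z @ beta_L), np.exp(Z @ beta_a)

    def negative_loglikelihood(theta, households):
        sigma = theta[-1]
        total = 0.0
        for h in households:
            r, Lam, alpha = fitted_parameters(theta, h.Z)
            for cov in h.coverages:                       # three coverages per household
                w = decision_weights(cov.mu0_hat, cov.mu1_hat, alpha)
                utils = [taylor_utility(p, d, w, r, Lam) for p, d in cov.menu]
                probs = choice_probabilities(utils, sigma)
                total -= np.log(probs[cov.chosen_index])  # -1{D = d} * ln Pr(d)
        return total

    # result = minimize(negative_loglikelihood, theta0, args=(households,))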

3.4 Identification

In this section, we demonstrate that if there is sufficient variation in premiums and claim rates for a fixed array of observables Z, then the preference parameters r, Λ, and α are identified. We then argue that our data indeed contain significant variation in premiums and claim rates even for a fixed Z.

The random utility model in equation (3) comprises the sum of a utility function Ũ(L_d|L_d) and an error term ε_d. Using the results of Matzkin (1991), normalizations that fix scale and location, plus regularity conditions that are satisfied in our model, allow us to identify nonparametrically the utility function Ũ(L_d|L_d) within the class of monotone and concave utility functions. A fortiori, this guarantees parametric identification of Ũ(L_d|L_d).

This in turn allows us to separately identify standard risk aversion (r), net loss aversion (Λ), and nonlinear probability weighting (α). To see the source of identification intuitively, consider the following example. Suppose we observe that a household with a 10 percent claim rate in auto collision chooses to pay $60 to decrease its deductible from $1000 to $500. The household's choice, which implies a lower bound on its maximum willingness to pay (WTP) to decrease its expected loss from $100 to $50, is consistent with numerous combinations of different degrees of standard risk aversion, loss aversion, and nonlinear probability weighting.20 However, different combinations yield different implications for other choices. For instance, different combinations would imply different lower bounds on the household's WTP to further decrease its auto collision deductible. They also would imply different lower bounds on the household's WTP to decrease its deductible in other coverages, for which the household has a different claim rate. In short, different combinations of standard risk aversion, loss aversion, and nonlinear probability weighting have different implications for the observed distribution of deductibles, premiums, and claim rates.

Formally, then, we must demonstrate that the utility differences between deductible choices react in different ways to changes in the three preference parameters. Consider two deductible options, a and b, and suppose that the probability of experiencing two claims is negligible, so that ω₀ = π(μ₀) = π(exp(−λ)) = exp(−λ^α) and ω₁ = 1 − ω₀.21 Applying equation (5) to this case, the difference in the household's utility from choosing deductible lotteries L_a and L_b is given by

   Ũ(L_a|L_a) − Ũ(L_b|L_b) = (p_b − p_a) + ω₁(b − a) + Λω₀ω₁(b − a)
        + (r/2) [ω₀(p_b² − p_a²) + ω₁((p_b + b)² − (p_a + a)²)]
        + (Λr/2) ω₀ω₁ [((p_b + b)² − (p_a + a)²) − (p_b² − p_a²)].        (6)

We can rewrite equation (6) as

   Ũ(L_a|L_a) − Ũ(L_b|L_b) = (p_b − p_a) + Γ(λ)(b − a) + (r/2)(p_b² − p_a²)
        + (r/2) Γ(λ) [((p_b + b)² − (p_a + a)²) − (p_b² − p_a²)],        (7)

where

   Γ(λ) ≡ [Λω₀ + 1] ω₁ = [Λ π(exp(−λ)) + 1] [1 − π(exp(−λ))].

From these equations, it is clear that variation in p and d permits us to separately identify r and Γ(λ), and then variation in λ permits us to separately identify Λ and α.22 Thus, given sufficient variation in premiums and claim rates for a fixed Z, the preference parameters are identified.

We now argue that our data indeed contain significant variation in premiums and claim rates even for a fixed Z. For each coverage, a household's claim rates are determined by factors beyond its vector of household characteristics Z. As described in Section 2.3, the household's predicted claim rate depends on a vector of observables X ⊃ Z. More importantly, the household's menu of premiums is determined by factors beyond those that determine the household's claim rate. As explained in Section 2.2, the household's menu of premiums is a function not only of observables X but also other coverage-specific variables, such as state regulations, the company's tier and deductible factors (which are the same for all households), and various discount programs. Consequently, there is variation in premiums that is not driven by the variation in claim rates or in Z, and the variation in claim rates does not arise solely because of the variation in Z.23 In the case of auto collision coverage, for example, regressions of premiums and predicted claim rates on Z yield coefficients of determination of 0.13 and 0.34, respectively, and the correlation coefficient between benchmark premiums (premiums for coverage with a $500 deductible) and predicted claim rates is 0.35.24

In addition to the significant variation in premiums and claim rates within a coverage, our data also contain significant variation in premiums and claim rates across coverages. A key feature of our data is that for each household we observe deductible choices for three coverages, and even for a fixed Z (and, in fact, even for a fixed X), there is significant variation in premiums and claim rates across the three coverages. Indeed, even if the within-coverage variation in p and λ were insufficient in practice, we still might be able to separately identify r, Λ, and α using across-coverage variation.

20 For the avoidance of doubt, throughout the paper we use WTP to denote maximum willingness to pay.
21 These assumptions are without loss of generality. If the model is identified for the case where households have two deductible options and can experience at most one claim, then it also is identified where households have more than two deductible options and can experience more than one claim.
22 This holds even if r is zero and the right-hand side of equation (7) collapses to (p_b − p_a) + Γ(λ)(b − a).
23 Moreover, it is safe to assume that, for a fixed Z, the variation in premiums and claim rates is exogenous to the households' risk preferences. Indeed, several of the variables in X∖Z (such as distance to hydrant and territory code, which the company bases on actuarial risk factors such as weather patterns and wildlife density), as well as the additional variables that determine premiums (such as state law and the company's rating plan), are undoubtedly exogenous to the households' risk preferences. Even if these variables were not wholly exogenous, it is not clear that this would bias our results in favor of nonlinear probability weighting and against standard risk aversion and loss aversion.
24 The corresponding coefficients for auto comprehensive and home are even lower. In the case of auto comprehensive, the coefficients of determination are 0.07 and 0.31, and the correlation coefficient is 0.15. In the case of home, the coefficients of determination are 0.04 and 0.12, and the correlation coefficient is 0.24.
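The role of Γ(λ) can be checked numerically: (Λ, α) pairs that rationalize the same choice at one claim rate imply different Γ profiles as the claim rate varies, which is the across-lottery variation the argument exploits. A small sketch reusing prelec from Section 3.1:

    import numpy as np

    def gamma_term(lam, net_loss_aversion, alpha):
        """Gamma(lambda) = [Lambda*w0 + 1]*w1 in the two-outcome case, with
        w0 = pi(exp(-lambda)) and w1 = 1 - w0."""
        w0 = prelec(np.exp(-lam), alpha)
        return (net_loss_aversion * w0 + 1.0) * (1.0 - w0)

    for lam in (0.02, 0.07, 0.15):
        print(gamma_term(lam, 0.0, 0.7),   # probability weighting only
              gamma_term(lam, 0.5, 1.0))   # loss aversion only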

4 Estimation Results

This section presents the results of our main analysis, including our benchmark estimates. It also presents extensions in which we endeavor to account for wealth and for unobserved heterogeneity in claim rates.

4.1 Benchmark Results

In our initial specification, we assume no heterogeneity (Zi includes only a constant). We refer to this specification as Model 1. The estimates for standard risk aversion and loss aversion are both effectively zero: the estimate for r is 3.1 × 10⁻¹⁰ (standard error: 8.7 × 10⁻⁹) and the estimate for Λ is 5.8 × 10⁻⁷ (standard error: 1.6 × 10⁻⁵). By contrast, the estimated probability weighting parameter (α) is 0.68 (standard error: 0.0027), which, as we illustrate below, is economically large. While Model 1 is an oversimplification, it provides a clear illustration of our main conclusion: nonlinear probability weighting plays the dominant role in explaining the households' deductible choices.
Table 5 reports the estimates for our benchmark specification, which we label Model 2. Model 2 permits the preference parameters to depend on household characteristics. Specifically, the covariates include a constant and all of the variables in Table 1,25 except for home value. We view home value primarily as a proxy for wealth, and thus we introduce it below when we endeavor to account for wealth. The top panel presents the coefficient estimates for the covariates, β̂_r, β̂_Λ, and β̂_α, as well as the estimate of the scale of choice noise, σ̂. These estimates imply nontrivial heterogeneity in the underlying preference parameters and nonzero choice noise. The bottom panel presents the mean and median of the fitted values for the preference parameters, r, Λ, and α. For r, the median estimate is effectively zero, though the mean estimate is somewhat larger at approximately 3.0 × 10⁻⁵.26 While this implies nontrivial standard risk aversion, it does not imply "absurd" risk aversion in the sense of Rabin (2000). For a household with wealth of $100,000, for example, a coefficient of absolute risk aversion of 3.0 × 10⁻⁵ implies a coefficient of relative risk aversion of 3, a magnitude that many economists would consider plausible. For Λ, the mean and median estimates are both effectively zero, suggesting that loss aversion plays little to no role in explaining the data. For α, the mean and median estimates are both approximately 0.7, which implies pronounced nonlinear probability weighting.

25 Each variable z is normalized as (z − mean(z))/stdev(z).
26 This is because certain types of households, particularly young, unmarried households, have larger estimated standard risk aversion. Nevertheless, of the 4170 households in the core sample, only 8 are assigned r > 0.001 and only 238 are assigned r > 0.0001.
TABLE 5
4.1.1 Statistical Significance

A likelihood ratio test rejects at the 1 percent level both the null hypothesis of standard risk neutrality (r = 0) and the null hypothesis of linear probability weighting (α = 1), suggesting that both standard risk aversion and nonlinear probability weighting play a statistically significant role in deductible choices. By contrast, a likelihood ratio test fails to reject the null hypothesis of net loss neutrality (Λ = 0), which is consistent with loss aversion playing little to no role. To test the relative statistical importance of standard risk aversion, loss aversion, and nonlinear probability weighting, we also estimate restricted models and perform Vuong (1989) model selection tests.27 We find that the model with nonlinear probability weighting alone is "better" (at the 1 percent level) than (i) a model with standard risk aversion alone, (ii) a model with loss aversion alone, and (iii) a model with both standard risk aversion and loss aversion.
4.1.2 Economic Significance

To give a sense of the economic significance of our benchmark estimates for standard risk aversion, loss aversion, and nonlinear probability weighting, we present the following "back-of-the-envelope" calculations in Table 6. For selected claim rates λ, column (1) contrasts the probability of experiencing one claim, μ₁ = λ exp(−λ), with the associated decision weight, ω₁ ≡ π(μ₁ + μ₀) − π(μ₀), for the case where α = 0.7. For instance, when the probability of one claim is 2.0 percent, the decision weight is 6.0 percent; when the probability of one claim is 6.5 percent, the decision weight is 13.0 percent; and when the probability of one claim is 12.9 percent, the decision weight is 19.3 percent.

Columns (2)-(9) display, for selected claim rates and various preference parameter combinations, the dollar amount Δ that would make a household with the utility function in equation (5) indifferent between the following two deductible lotteries:

   L₁₀₀₀ = (−$200, μ₀; −$200 − $1000, μ₁; −$200 − $2000, μ₂), and
   L₅₀₀ = (−($200 + Δ), μ₀; −($200 + Δ) − $500, μ₁; −($200 + Δ) − $1000, μ₂).

Lottery L₁₀₀₀ represents coverage with a $1000 deductible and a premium of $200, and L₅₀₀ represents a policy with a $500 deductible and a premium of $200 + Δ. Thus, Δ corresponds to the household's maximum willingness to pay (WTP), in terms of excess premium above $200, to reduce its deductible from $1000 to $500.

27 Vuong's (1989) test allows one to select between two nonnested models on the basis of which best fits the data. Neither model is assumed to be correctly specified. Vuong (1989) shows that testing whether one model is significantly closer to the truth (its loglikelihood value is significantly greater) than another model amounts to testing the null hypothesis that the loglikelihoods have the same expected value.
As a benchmark, column (2) reports WTP for a standard risk-neutral household, with r = 0, Λ = 0, and α = 1. Column (3) reports WTP for a household with r = 0, Λ = 0, and α = 0.7. It illustrates that the mean estimated degree of nonlinear probability weighting, by itself, generates substantial aversion to risk, in the sense that the household's WTP is approximately two to three times larger than that of a standard risk-neutral household. For a household with a claim rate of 7 percent, for example, moving from α = 1 to α = 0.7 increases the household's WTP from $35 to $79.

Columns (4) and (5) report WTP for a household with r = 0.00003, Λ = 0, and either α = 1 (column (4)) or α = 0.7 (column (5)). Together, they illustrate that the mean estimated degree of standard risk aversion has little per se effect on the household's WTP. For the household with a claim rate of 7 percent, for instance, moving from r = 0 to r = 0.00003 increases WTP by less than one dollar when α = 1 and less than two dollars when α = 0.7. In other words, columns (4) and (5) illustrate that, at our benchmark estimate, standard risk aversion plays a small role in explaining the aversion to risk manifested in the households' deductible choices.

In order to establish certain benchmarks for later results, columns (6) and (7) report WTP when the degree of standard risk aversion is r = 0.0001 and r = 0.001, respectively (i.e., one and two orders of magnitude larger than our benchmark estimate), and column (8) reports WTP when the degree of net loss aversion is Λ = 0.02 (which is as large as we ever find when we also allow for nonlinear probability weighting). In all three columns α = 0.7. Increasing the degree of standard risk aversion to r = 0.0001 marginally increases the household's WTP (for the household with λ = 0.07, WTP increases from $79 to $85), whereas increasing the degree of standard risk aversion to r = 0.001 substantially increases the household's WTP (for the household with λ = 0.07, WTP increases from $79 to $123). Increasing the degree of net loss aversion to Λ = 0.02 has little effect on the household's WTP (for the household with λ = 0.07, WTP increases from $79 to $82).
TABLE 6

4.1.3 Predicting Households' Deductible Choices

For each household i and coverage j, the parameter estimates θ̂ imply a probability that the household's choice D_ij for such coverage corresponds to the deductible d we observe in the data (i.e., Pr(D_ij = d | x_ij; θ̂) from Section 3.3). These choice probabilities provide a sense of how the model performs in terms of predicting the households' deductible choices. Table 7 describes these choice probabilities for each coverage. As a baseline, row (1) reports the choice probabilities assuming households chose their deductibles uniformly at random.28 Row (2) reports the average of the model predicted choice probabilities across all households. Rows (3)-(7) provide a sense of how the model performs for different deductibles. In each row, the table reports the average choice probability among households who chose the indicated deductible. The model performs best in explaining the more common, intermediate deductible choices, while it performs less well in explaining the less common, extreme deductible choices.

Finally, rows (8) and (9) report the average choice probabilities for two restricted models. Row (8) reports the average choice probabilities for a model with only nonlinear probability weighting (i.e., when we estimate the model restricting r = Λ = 0), while row (9) reports the average choice probabilities for a model with only standard risk aversion (i.e., when we restrict Λ = 0 and α = 1). Comparison with row (2) reveals that the model with only nonlinear probability weighting performs almost as well as the full model, whereas the full model comfortably outperforms the model with only standard risk aversion.

28 For auto collision, there are five deductible levels, and so uniformly random choice would yield choice probabilities of 20 percent for each deductible option. For auto comprehensive and home, there are six deductible levels, and so uniformly random choice would yield choice probabilities of 16.7 percent for each deductible option.

TABLE 7

4.2 Accounting for Wealth

As noted in Section 3.2, we do not directly observe the wealth of the households in the data. Economists generally believe, however, that standard risk aversion depends on wealth. In our benchmark results, we deal with this issue by estimating a local approximation of absolute risk aversion. In this section, we endeavor to account for household wealth by using home value as a proxy.

In Model 3, we take a naive, reduced-form approach and merely add home value to the vector of observables, Z_i, upon which a household's preference parameters depend. That is, Model 3 effectively assumes that a household's intrinsic utility function depends on its wealth. However, economists typically do not assume that utility functions depend on wealth, but rather that utility is a function of wealth (i.e., wealth is the domain of the utility function). Hence, Model 3 is perhaps a misspecified model.

In Models 4 and 5, we take a structural approach, in which we assume constant relative risk aversion (CRRA) utility, i.e., u(w) = w^(1−ρ)/(1−ρ), ρ > 0. In the CRRA specification, ρ is the coefficient of relative risk aversion, and thus ρ = w · r. We allow ρ to depend on household characteristics, Z_i, assuming (as above) ln ρ = β_ρ Z_i, and we take home value as a proxy for wealth, to wit r = ρ/(home value) in equation (5). In Model 4, we estimate this specification without also including home value in Z_i; that is, we assume that the preference parameters do not depend on home value other than through the relationship r = ρ/(home value). However, because in addition to being a proxy for wealth, home value might also be a signal of household type, in Model 5 we also include home value in Z_i. Model 5 reflects our preferred approach to accounting for wealth.
Table 8 reports the mean and median of the fitted values for the preference parameters for Models 3, 4, and 5.29 For comparison, the first panel restates the benchmark estimates from Model 2. The second panel reports the estimates from Model 3. They are very similar to the benchmark estimates, except in the case of standard risk aversion, where the mean estimate is an order of magnitude larger, at approximately 3.8 × 10⁻⁴, and the median estimate now is the same order of magnitude as the mean estimate, at approximately 2.5 × 10⁻⁴.30 But again, we believe this is a misspecified model. The third and fourth panels report the estimates for Models 4 and 5. The estimates for both models are nearly identical to the benchmark estimates; the only substantive difference is that the mean and median estimates for standard risk aversion are roughly twice as large as the benchmark estimates, although they have the same order of magnitude. In terms of the direct impact of home value in Model 5, the coefficient estimates (which are reported in Table A.8 in the Appendix) suggest that home value does not have a direct impact on the degree of standard risk aversion (the effect is fully captured by the relationship r = ρ/(home value)) but that it does have a positive and statistically significant relationship with the degree of nonlinear probability weighting, suggesting that owning a more expensive home is associated with being closer to linear probability weighting.
TABLE 8
29 For the sake of brevity, Table 8 does not report the coefficient estimates for the covariates. The complete results, however, are reported in Tables A.6 through A.8 in the Appendix.
30 As reported in Table A.6, we also find that standard risk aversion declines with home value, which is consistent with the usual economic assumption that absolute risk aversion declines with wealth.

4.3

Accounting for Unobserved Heterogeneity in Claim Rates

In our main analysis, we assign to each household in the core sample a predicted claim rate b for each coverage. While this approach allows for heterogeneity in claim rates based on observable characteristics, it does not permit unobserved heterogeneity. Such unobserved heterogeneity is potentially important, however, because it might help explain why observationally equivalent households choose di¤erent deductibles. In order to account for unobserved heterogeneity in claim rates, we expand our approach and assign to each household its predicted distribution of claim rates for each coverage.
More speci…cally, in Section 3 we derive a household’ choice probability as a function s of the household’ (latent) true claim rate . In our benchmark analysis, we assume that, s for each coverage, the household’ true claim rate corresponds to its predicted claim rate b, s which we calculate using the estimates from the claim rate regression for such coverage. We then construct the likelihood function using the choice probabilities for all households; in particular, we use the regression estimates to calculate the expected claim rate conditional on the household’ observables. Of course, the claim rate regressions yield not only the s conditional expectation, but also the conditional distribution of claim rates. Hence, we can use the regression estimates to assign to each household not just a predicted claim rate b, but b also predicted claim rate distribution F ( ). We can then construct the likelihood function b by integrating over F ( ).31
Table 9 reports the mean and median of the preference parameter estimates for Models 2 and 5 (relabeled as Models 2u and 5u) when we allow for unobserved heterogeneity in this way.32 The main message is roughly the same. Loss aversion is nonexistent. Nonlinear probability weighting is statistically and economically significant, although it is somewhat smaller in magnitude: the mean and median of the fitted values of α are approximately 0.8 (rather than 0.7). Standard risk aversion is statistically significant, but now is economically significant as well. The mean and median fitted values of r are approximately 1.0 × 10^-3 and 5.7 × 10^-4, respectively, in Model 2u and approximately 7.3 × 10^-4 and 2.3 × 10^-4, respectively, in Model 5u. As Table 6 suggests, standard risk aversion of this order of magnitude implies appreciable aversion to risk.
TABLE 9

31 We compute this integral using the Gauss-Laguerre quadrature method.

32 The complete results, with the coefficient estimates for the covariates, are reported in Tables A.9 and A.10 in the Appendix.

5 Sensitivity Analysis

Our analysis in Section 4 yields a clear main message: nonlinear probability weighting plays the most important role in explaining the households' deductible choices. More specifically, only nonlinear probability weighting is consistently statistically and economically significant; standard risk aversion is consistently statistically significant but is economically significant only in a subset of specifications, and loss aversion is consistently estimated to be nonexistent. In this section, we investigate the sensitivity of these results to our modeling assumptions. In general, we find that the results are quite robust to a variety of alternative assumptions. The main result that varies across specifications is the economic significance of standard risk aversion. To conserve space, we only summarize the results of the sensitivity analysis below. The complete results are available in the Appendix (Tables A.11 through A.23).

5.1 Form of Probability Weighting
As noted in Section 3.1, we incorporate nonlinear probability weighting into the model by (i) adopting the rank-dependent approach of Quiggin (1982), which contemplates a transformation of the decumulative distribution, and (ii) using the one-parameter probability weighting function proposed by Prelec (1998). In this section, we check the sensitivity of our results to a transformation of the cumulative distribution and to other probability weighting functions.
In their cumulative prospect theory paper, Tversky and Kahneman (1992) propose a rank-dependent approach to nonlinear probability weighting that contemplates a transformation of the decumulative distribution for gains and the cumulative distribution for losses. The point of their approach is that extreme outcomes (the largest gains and the largest losses) are what get overweighted. In the case of our deductible lotteries, L_d = (−p_d, μ_0; −p_d − d, μ_1; −p_d − 2d, μ_2), which involve only losses, their approach implies the following decision weights:

ω_2 = π(μ_2),
ω_1 = π(μ_1 + μ_2) − π(μ_2),
ω_0 = 1 − π(μ_1 + μ_2).
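To make the contrast with our benchmark concrete, here is a small sketch (in Python) computing the decision weights for this three-outcome loss lottery under both forms of rank dependence, with the Prelec function for π. The Tversky-Kahneman weights follow the formulas just above; the decumulative weights are our reading of the Quiggin (1982) benchmark form (transforming from the best outcome down), so treat that function as an assumption rather than the paper's exact specification.

    import numpy as np

    def prelec(p, alpha):
        # Prelec (1998) one-parameter weighting function pi(p) = exp(-(-ln p)^alpha).
        return np.exp(-(-np.log(p)) ** alpha)

    def weights_cumulative(mu1, mu2, alpha):
        # Tversky-Kahneman (1992) weights for losses: transform the cumulative
        # distribution from the worst outcome (two claims) up, per the formulas above.
        w2 = prelec(mu2, alpha)
        w1 = prelec(mu1 + mu2, alpha) - w2
        w0 = 1.0 - prelec(mu1 + mu2, alpha)
        return w0, w1, w2

    def weights_decumulative(mu1, mu2, alpha):
        # Our reading of the benchmark Quiggin (1982) form: transform the
        # decumulative distribution from the best outcome (no claim) down.
        # This is an assumption; see Section 3.1 for the paper's derivation.
        mu0 = 1.0 - mu1 - mu2
        w0 = prelec(mu0, alpha)
        w1 = prelec(mu0 + mu1, alpha) - w0
        w2 = 1.0 - prelec(mu0 + mu1, alpha)
        return w0, w1, w2

    # Example: roughly a 7 percent one-claim rate, alpha = 0.7 as in the benchmark.
    print(weights_cumulative(0.07, 0.003, 0.7))
    print(weights_decumulative(0.07, 0.003, 0.7))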

When we estimate Models 2 and 5 using these decision weights (and the Prelec (1998) one-parameter probability weighting function), the mean and median of the estimated preference parameters are essentially unchanged, except that the mean estimate for standard risk aversion in Model 5 is roughly half the magnitude (and roughly the same as in Model 2).
Tversky and Kahneman (1992) also propose an alternative one-parameter probability weighting function: π(μ) = μ^δ / [μ^δ + (1 − μ)^δ]^(1/δ). When we estimate Model 2 using their probability weighting function (and, as they suggest, the cumulative form of rank dependence), our main message is much the same. Nonlinear probability weighting is statistically and economically significant: the mean and median estimates of δ are approximately 0.44 and 0.56, respectively, both of which are somewhat smaller (more nonlinear) than Tversky and Kahneman's median estimate of 0.69. (Figure 1 depicts the Tversky and Kahneman (1992) function for δ = 0.5 and δ = 0.69, as well as the Prelec (1998) function for α = 0.7.) Standard risk aversion is statistically significant but economically insignificant: the mean estimate is 8.3 × 10^-5 and the median estimate is effectively zero. The only apparent difference is that the mean estimate for the coefficient of net loss aversion is approximately 0.02 (though the median estimate still is zero). As Table 6 illustrates, however, loss aversion of this magnitude is not economically significant.
FIGURE 1
Both the Prelec (1998) and Tversky and Kahneman (1992) probability weighting functions have the feature of implying hypersensitivity to small probability changes near the extremes of the probability scale. It is not clear, however, whether there is good evidence of such hypersensitivity. Because our data contain many observations of small claim probabilities, we also consider a linear probability weighting function, π(μ) = αμ + (1 − α)/e. Note that π(μ) intersects the 45 degree line at μ = 1/e for all values of α (with π(μ) > μ for μ < 1/e and π(μ) < μ for μ > 1/e when α < 1); this makes it comparable to the Prelec (1998) one-parameter specification, which also intersects the 45 degree line at 1/e for all values of α. When we estimate Model 2 using this probability weighting function (and, as in our benchmark analysis, the decumulative form of rank dependence), the preference parameter estimates are effectively identical to the benchmark estimates.
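For reference, a small sketch of the three one-parameter weighting functions discussed so far, verifying that the Prelec and linear functions share the fixed point at 1/e while the Tversky-Kahneman function generally does not; the parameter values are the ones mentioned above.

    import numpy as np

    INV_E = np.exp(-1.0)  # 1/e, approximately 0.368

    def prelec(p, alpha):
        # Prelec (1998): pi(p) = exp(-(-ln p)^alpha); fixed point at p = 1/e.
        return np.exp(-(-np.log(p)) ** alpha)

    def tversky_kahneman(p, delta):
        # Tversky and Kahneman (1992): pi(p) = p^d / (p^d + (1-p)^d)^(1/d).
        return p ** delta / (p ** delta + (1.0 - p) ** delta) ** (1.0 / delta)

    def linear_weight(p, alpha):
        # Linear weighting pi(p) = alpha*p + (1 - alpha)/e; fixed point at p = 1/e.
        return alpha * p + (1.0 - alpha) * INV_E

    # The Prelec and linear functions cross the 45 degree line at exactly 1/e;
    # the Tversky-Kahneman function generally does not.
    for f, arg in [(prelec, 0.7), (linear_weight, 0.7), (tversky_kahneman, 0.69)]:
        print(f.__name__, f(INV_E, arg) - INV_E)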
Finally, each of the foregoing probability weighting functions captures two features, overweighting of small probabilities and insensitivity to probability changes, with a single parameter. For this reason, we also consider the two-parameter probability weighting function suggested by Lattimore et al. (1992), π(μ) = aμ^δ / [aμ^δ + (1 − μ)^δ], where roughly a captures overweighting and δ captures insensitivity.33 When we estimate Model 2 using this specification (and the decumulative form of rank dependence), our main message emerges yet again: nonlinear probability weighting is statistically and economically significant (the estimates for a and δ are roughly 5 and 0.2, respectively), standard risk aversion is statistically significant but economically small (the mean estimate is approximately 8.6 × 10^-5 and the median estimate is zero), and loss aversion is nonexistent.

33 This function was used earlier by Goldstein and Einhorn (1987). As Gonzalez and Wu (1999) demonstrate, it is equivalent to specifying that the log-odds ratio of the weighted probability be a linear function of the log-odds ratio of the true probability.
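A sketch of the Lattimore et al. (1992) function, together with a numerical check of the Gonzalez and Wu (1999) equivalence noted in footnote 33, using the rough estimates reported above (a of roughly 5, δ of roughly 0.2):

    import numpy as np

    def lattimore(p, a, delta):
        # Lattimore et al. (1992): pi(p) = a p^delta / (a p^delta + (1-p)^delta).
        # Roughly, a captures overweighting and delta captures insensitivity.
        num = a * p ** delta
        return num / (num + (1.0 - p) ** delta)

    def log_odds(q):
        return np.log(q / (1.0 - q))

    # Gonzalez-Wu equivalence: the log-odds of pi(p) is linear in the log-odds of p,
    #   log_odds(pi(p)) = ln(a) + delta * log_odds(p).
    p = np.array([0.01, 0.05, 0.25, 0.50, 0.90])
    a, delta = 5.0, 0.2  # rough estimates reported in the text
    print(np.allclose(log_odds(lattimore(p, a, delta)),
                      np.log(a) + delta * log_odds(p)))  # True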
In light of the robustness of our results to the form of probability weighting, the remainder of our sensitivity analysis follows our main analysis and uses the decumulative form of rank dependence and the Prelec (1998) one-parameter probability weighting function.

5.2 CARA Utility

In our main analysis, we account for initial wealth by using a second-order Taylor expansion of the intrinsic utility function. Here we take an alternative approach: we assume constant absolute risk aversion (CARA) utility, u(w) = −exp(−rw), in which case initial wealth is irrelevant. When we estimate Model 3 with CARA utility (which, with CARA utility, is the analogue of our Model 5, our preferred approach to accounting for wealth), our main message is roughly the same. Loss aversion is nonexistent. Nonlinear probability weighting is statistically and economically significant, although it is smaller in magnitude (more linear): the mean and median of the fitted values of α are approximately 0.9 and 0.8, respectively. Standard risk aversion is statistically significant, but now is economically significant as well: the mean and median fitted values of r are 7.1 × 10^-4 and 6.8 × 10^-4, respectively. As between Model 3 with CARA utility and Model 5, however, a Vuong (1989) test decidedly selects Model 5 as the model that best fits the data.
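Since initial wealth drops out under CARA, willingness to pay for a lower deductible (the WTP notion used in Table 6) has a closed form in a simplified setting. A minimal sketch, assuming at most one claim with decision weight ω on the claim outcome; this is an illustration under those assumptions, not the paper's full specification.

    import numpy as np

    def cara_wtp(r, omega, d_high=1000.0, d_low=500.0):
        # Maximum extra premium x a household will pay to lower its deductible
        # from d_high to d_low under CARA utility u(w) = -exp(-r w), with decision
        # weight omega on the (at most one) claim outcome.  The baseline premium
        # cancels, so initial wealth is irrelevant and x has the closed form
        #   x = (1/r) * ln( ((1-omega) + omega*exp(r*d_high))
        #                 / ((1-omega) + omega*exp(r*d_low)) ).
        num = (1.0 - omega) + omega * np.exp(r * d_high)
        den = (1.0 - omega) + omega * np.exp(r * d_low)
        return np.log(num / den) / r

    # Example: r near the CARA estimates above; omega roughly the Prelec weight
    # of a 7 percent claim rate at alpha = 0.7.
    print(cara_wtp(r=7.0e-4, omega=0.14))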

5.3 Maximum Number of Claims

Our main analysis permits that a household may have zero, one, or two claims. Given the importance of nonlinear probability weighting in our results, one might worry that allowing for the low probability event of experiencing two claims is having undue influence on our results. Hence, we estimate Models 2 and 5 permitting households to have at most one claim. The results tell the same basic story. The only noteworthy difference is that the mean estimate for the coefficient of net loss aversion in Model 5 is roughly 0.001 (though the median estimate still is zero), but this is not economically significant.

5.4 Extreme Deductibles

Table 2 reveals that, for each coverage, the vast majority of households in the core sample choose intermediate deductibles: 92.3 percent of households choose a deductible of $200, $250, or $500 in auto collision; 87.1 percent of households choose a deductible of $200, $250, or $500 in auto comprehensive; and 97.5 percent of households choose a deductible of $250, $500, or $1000 in home. Given these choice patterns, one might worry that households do not really consider the more extreme deductible options, which might bias our estimates.34 To address this concern, we estimate Model 2 under the following conditions: (i) we restrict the set of deductible options to {$200, $250, $500} for each of the auto coverages and to {$250, $500, $1000} for home coverage; and (ii) for each coverage, if a household's actual deductible choice is outside the restricted choice set, we assign to the household the deductible option from the restricted choice set that is closest to its actual deductible choice. The results are essentially the same. The only appreciable difference is that the mean and median estimates for standard risk aversion are somewhat larger: roughly 1.1 × 10^-4 (which is borderline economically significant) and 9.4 × 10^-6 (which is not economically significant), respectively.

34 For instance, when a household chooses a $200 deductible in auto comprehensive, we are using the fact that it did not choose a $50 deductible to infer an upper bound on its aversion to risk. But if the household in fact does not even consider the $50 deductible as an option, our inference would be invalid. Similarly, when a household chooses a $1000 deductible in home, we are using the fact that it did not choose a $5000 deductible to infer a lower bound on its aversion to risk. Again, if the household in fact does not even consider the $5000 deductible as an option, our inference would be invalid.
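Condition (ii) is a simple nearest-option rule; a minimal sketch in Python (the tie-breaking convention is ours):

    def nearest_option(choice, options=(200, 250, 500)):
        # Map an actual deductible choice outside the restricted set to the
        # closest option in the set, per condition (ii); ties go to the lower option.
        return min(options, key=lambda d: abs(d - choice))

    assert nearest_option(100) == 200    # extreme low choice mapped up
    assert nearest_option(1000) == 500   # extreme high choice mapped down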

5.5 Coverage-Specific Analysis

As noted in Section 3.3, our main analysis estimates risk preferences using combined data for all three coverages. We believe this is the best approach because it enhances the variation in premiums and claim rates. Nevertheless, we also investigate whether the benchmark results are robust to estimating the model separately for each coverage. When we estimate Model 2 separately for each coverage, the main message is roughly the same. For auto comprehensive coverage, the estimates for loss aversion and nonlinear probability weighting nearly correspond to the benchmark estimates, though there is economically significant standard risk aversion (the mean and median estimates for r are approximately 1.7 × 10^-3 and 1.4 × 10^-3, respectively). For home coverage, the estimates for nonlinear probability weighting are almost identical to the benchmark estimates, while there is a little more standard risk aversion (the mean and median estimates are approximately 7.5 × 10^-5 and 1.7 × 10^-5, respectively) and perhaps some loss aversion (the mean estimate for Λ is approximately 0.006, but the median estimate still is zero), though both are economically insignificant. For auto collision coverage, loss aversion is nonexistent, but there is more (and economically significant) standard risk aversion (the mean and median estimates for r are roughly 1.3 × 10^-3 and 1.2 × 10^-3, respectively) and less nonlinear probability weighting (the mean and median estimates are both roughly 0.9).

Even when we estimate risk preferences using combined data from all three coverages, a second way to allow for coverage-specific effects is to permit coverage-specific choice noise (our main analysis assumes that choice noise (σ) does not vary across coverages). When we estimate Model 2 with coverage-specific choice noise, the results are nearly identical, except that there is a little more standard risk aversion (the mean and median fitted values of r are approximately 1.1 × 10^-4 and 7.0 × 10^-5, respectively).

6 Discussion

We develop a structural model of risky choice that incorporates standard risk aversion (concave utility over final wealth), loss aversion, and nonlinear probability weighting, and we estimate the model using data on households' deductible choices in auto and home insurance.
We find that nonlinear probability weighting plays the most important role in explaining the data, while standard risk aversion plays a small role and loss aversion plays little to no role. Insofar as they are generalizable, our results suggest that risk preferences are shaped first and foremost by how one evaluates risk and only second by how one evaluates outcomes.

Perhaps the main takeaway of the paper is that economists should pay greater attention to the question of how people evaluate risk. Prospect theory incorporates two key features: a value function that describes how people evaluate outcomes and a probability weighting function that describes how people evaluate risk. The behavioral economics literature, however, has focused primarily on the value function, and there has been relatively little focus on probability weighting.35 In light of our work, as well as the work discussed in Section 1 that reaches similar conclusions using different methods (Bruhin et al., forthcoming; Snowberg and Wolfers, forthcoming; Kliger and Levy 2009), it seems clear that the literature ought to reevaluate its focus.36
That said, it is worth highlighting certain limitations of our analysis. An important limitation is that, while our analysis clearly indicates that the main "action" lies in how people evaluate risk, it does not enable us to say whether households are engaging in nonlinear probability weighting per se (i.e., they know the probabilities but weight them nonlinearly) or whether their subjective beliefs simply do not correspond to the objective probabilities. Relatedly, it is not clear that nonlinear probability weighting is the best way to model how people evaluate risk. Indeed, there are a variety of other models that take different approaches; the leading examples include models of ambiguity averse preferences (e.g., Gilboa and Schmeidler 1989; Schmeidler 1989; Klibanoff et al. 2005). An important avenue of future research, therefore, is to investigate different accounts of how people evaluate risk and uncertainty.

35 Two prominent review papers, an early paper that helped set the agenda for behavioral economics (Rabin 1998) and a recent paper that surveys the current state of empirical behavioral economics (DellaVigna 2009), contain almost no discussion of probability weighting. The behavioral finance literature has paid more attention to probability weighting (see, e.g., Barberis and Huang 2008; Barberis 2010).

36 Indeed, Prelec (2000) conjectured that "probability nonlinearity will eventually be recognized as a more important determinant of risk attitudes than money nonlinearity."
A second limitation is that our analysis relies exclusively on insurance deductible choices, and it is unclear to what extent our conclusions generalize to other choices or settings. While we suspect that our main message would resonate in many domains beyond insurance deductible choices, it is evident that our estimated model would not perform well in certain contexts. In particular, people often display aversion to risk in 50-50 positive expected value gambles; e.g., people frequently reject gambles with a 50 percent chance to win $110 and a 50 percent chance to lose $100. It seems clear that nonlinear probability weighting does not explain such aversion to risk.
A third limitation pertains to the way we account for observationally equivalent households choosing different deductibles. As described in Section 3.2, we specify a random utility model with additively separable choice noise. There are alternative approaches, however, including the random error (or tremble) model (Harless and Camerer 1994) and the random preference model (Loomes and Sugden 1995), though neither is clearly superior to ours (Loomes and Sugden 1998; Loomes et al. 2002). It would be useful nevertheless to explore these and perhaps other approaches in future work, particularly in light of recent work on the stability of risk preferences (Barseghyan et al., forthcoming; Einav et al. 2010).
It is also worth clarifying our conclusion that loss aversion plays little to no role in explaining the households' deductible choices. What we find is little to no role for Kőszegi-Rabin loss aversion, that is, loss aversion wherein gains and losses are defined relative to recent expectations, which in turn are determined by the chosen option. We find this result intriguing, because Kőszegi and Rabin (2007) and Sydnor (forthcoming) hypothesize that KR loss aversion is implicated in insurance deductible choices. Nonetheless, our analysis does not contradict the original, "status quo" loss aversion proposed by Kahneman and Tversky (1979), that is, loss aversion wherein gains and losses are defined relative to initial wealth. In the context of insurance deductible choices, because all outcomes are losses relative to initial wealth, status quo loss aversion is inapposite. However, it is probably the best explanation for aversion to 50-50 positive expected value gambles.
Finally, we highlight that our benchmark estimates are immune to the Rabin critique (Rabin 2000). Rabin uses a calibration argument to demonstrate the inability of the standard expected utility model to explain appreciable aversion to gambles with moderate stakes (e.g., rejecting a gamble involving equal chances to win $110 and lose $100), because it implies an "absurd" degree of risk aversion when the stakes are increased by one or two orders of magnitude.37 Ex ante (before confronting the data) our analysis could have yielded absurdly high levels of risk aversion; i.e., in our estimation procedure, the parameter space allowed for any degree of risk aversion. As we demonstrate in Section 4, however, our benchmark estimate for standard risk aversion implies a plausibly small level of aversion to risk (both in terms of the implied coefficient of relative risk aversion and the implied willingness to pay for lower deductibles). At the same time, the degree of probability weighting is independent of stakes, and thus increases in stakes have little effect on risk attitudes.38 We hope to pursue this theme in future research by exploiting the fact that our data set records both deductible choices, which involve moderate stakes, and liability limit choices, which involve stakes that are orders of magnitude larger.

37 Sydnor (forthcoming) applies the Rabin critique to argue that the degree of standard risk aversion implied by the observed deductible choices in his data set is implausibly large.

38 For instance, if there were no standard risk aversion and no loss aversion, then even with probability weighting, the certainty equivalent of a lottery would merely increase proportionally with the stakes (if all outcomes in the lottery are multiplied by m, then the certainty equivalent is simply m times larger).
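A quick numerical check of the claim in footnote 38: with r = 0 and Λ = 0, rank-dependent value is linear in outcomes, so scaling every outcome by m scales the certainty equivalent by exactly m (the decision weights are unchanged). A sketch under those assumptions:

    import numpy as np

    def prelec(p, alpha):
        # Prelec (1998) weighting function pi(p) = exp(-(-ln p)^alpha).
        return np.exp(-(-np.log(p)) ** alpha)

    def certainty_equivalent(outcomes, weights):
        # With no standard risk aversion (r = 0) and no loss aversion (Lambda = 0),
        # utility is linear, so the certainty equivalent is the weighted mean.
        return float(np.dot(weights, outcomes))

    mu = 0.07                                   # claim probability (illustrative)
    w_claim = prelec(mu, 0.7)                   # decision weight on the claim outcome
    weights = np.array([1.0 - w_claim, w_claim])
    outcomes = np.array([-200.0, -700.0])       # premium only vs. premium plus deductible

    m = 100.0                                   # scale the stakes by two orders of magnitude
    ce = certainty_equivalent(outcomes, weights)
    ce_scaled = certainty_equivalent(m * outcomes, weights)
    print(np.isclose(ce_scaled, m * ce))        # True: weights do not depend on stakes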

References

Barberis, N. (2010): "A Model of Casino Gambling," Mimeo, Yale University, http://badger.som.yale.edu/faculty/ncb25/gb23d.pdf.

Barberis, N. and M. Huang (2008): "Stocks as Lotteries: The Implications of Probability Weighting for Security Prices," American Economic Review, 98, 2066–2100.

Barseghyan, L., J. Prince, and J. C. Teitelbaum (Forthcoming): "Are Risk Preferences Stable Across Contexts? Evidence from Insurance Data," American Economic Review.

Bruhin, A., H. Fehr-Duda, and T. Epper (Forthcoming): "Risk and Rationality: Uncovering Heterogeneity in Probability Distortion," Econometrica.

Choi, S., R. Fisman, D. Gale, and S. Kariv (2007): "Consistency and Heterogeneity of Individual Behavior under Uncertainty," American Economic Review, 97, 1921–1938.

Cicchetti, C. J. and J. A. Dubin (1994): "A Microeconometric Analysis of Risk Aversion and the Decision to Self-Insure," Journal of Political Economy, 102, 169–186.

Cohen, A. and L. Einav (2007): "Estimating Risk Preferences from Deductible Choice," American Economic Review, 97, 745–788.

DellaVigna, S. (2009): "Psychology and Economics: Evidence from the Field," Journal of Economic Literature, 47, 315–372.

Dunham, W. B., ed. (2009): New Appleman New York Insurance Law, Second Edition, New Providence, NJ: LexisNexis.

Einav, L., A. Finkelstein, I. Pascu, and M. R. Cullen (2010): "How General are Risk Preferences? Choice under Uncertainty in Different Domains," Working Paper 15686, NBER.

Gilboa, I. and D. Schmeidler (1989): "Maxmin Expected Utility with Non-Unique Prior," Journal of Mathematical Economics, 18, 141–153.

Goldstein, W. M. and H. J. Einhorn (1987): "Expression Theory and the Preference Reversal Phenomena," Psychological Review, 94, 236–254.

Gonzalez, R. and G. Wu (1999): "On the Shape of the Probability Weighting Function," Cognitive Psychology, 38, 129–166.

Grgeta, E. (2003): "Estimating Risk Aversion: Comment on Cicchetti and Dubin 1994," Mimeo, http://www.grgeta.com/edi/grgeta-CD_comment.pdf.

Harless, D. W. and C. F. Camerer (1994): "The Predictive Utility of Generalized Expected Utility Theories," Econometrica, 62, 1251–1289.

Hey, J. D. and C. Orme (1994): "Investigating Generalizations of Expected Utility Theory Using Experimental Data," Econometrica, 62, 1291–1326.

Jullien, B. and B. Salanié (2000): "Estimating Preferences under Risk: The Case of Racetrack Bettors," Journal of Political Economy, 108, 503–530.

Kahneman, D. and A. Tversky (1979): "Prospect Theory: An Analysis of Decision under Risk," Econometrica, 47, 263–291.

Kőszegi, B. and M. Rabin (2006): "A Model of Reference-Dependent Preferences," Quarterly Journal of Economics, 121, 1133–1166.

Kőszegi, B. and M. Rabin (2007): "Reference-Dependent Risk Attitudes," American Economic Review, 97, 1047–1073.

Klibanoff, P., M. Marinacci, and S. Mukerji (2005): "A Smooth Model of Decision Making under Ambiguity," Econometrica, 73, 1849–1892.

Kliger, D. and O. Levy (2009): "Theories of Choice under Risk: Insights from Financial Markets," Journal of Economic Behavior and Organization, 71, 330–346.

Lattimore, P. K., J. R. Baker, and A. D. Witte (1992): "The Influence of Probability on Risky Choice: A Parametric Examination," Journal of Economic Behavior and Organization, 17, 377–400.

Loomes, G., P. G. Moffatt, and R. Sugden (2002): "A Microeconometric Test of Alternative Stochastic Theories of Risky Choice," Journal of Risk and Uncertainty, 24, 103–130.

Loomes, G. and R. Sugden (1995): "Incorporating a Stochastic Element into Decision Theories," European Economic Review, 39, 641–648.

Loomes, G. and R. Sugden (1998): "Testing Different Stochastic Specifications of Risky Choice," Economica, 65, 581–598.

Matzkin, R. L. (1991): "Semiparametric Estimation of Monotone and Concave Utility Functions for Polychotomous Choice Models," Econometrica, 59, 1315–1327.

McFadden, D. (1974): "Conditional Logit Analysis of Qualitative Choice Behavior," in Frontiers in Econometrics, ed. by P. Zarembka, New York: Academic Press.

McFadden, D. (1981): "Econometric Models of Probabilistic Choice," in Structural Analysis of Discrete Data with Econometric Applications, ed. by C. F. Manski, Cambridge, MA: MIT Press.

Post, T., M. J. van den Assem, G. Baltussen, and R. H. Thaler (2008): "Deal or No Deal? Decision Making under Risk in a Large-Payoff Game Show," American Economic Review, 98, 38–71.

Prelec, D. (1998): "The Probability Weighting Function," Econometrica, 66, 497–527.

Prelec, D. (2000): "Compound Invariant Weighting Functions in Prospect Theory," in Choices, Values, and Frames, ed. by D. Kahneman and A. Tversky, Cambridge: Cambridge University Press.

Quiggin, J. (1982): "A Theory of Anticipated Utility," Journal of Economic Behavior and Organization, 3, 323–343.

Rabin, M. (1998): "Psychology and Economics," Journal of Economic Literature, 36, 11–46.

Rabin, M. (2000): "Risk Aversion and Expected-Utility Theory: A Calibration Theorem," Econometrica, 68, 1281–1292.

Rabin, M. and R. H. Thaler (2001): "Anomalies: Risk Aversion," Journal of Economic Perspectives, 15, 219–232.

Read, D., G. Loewenstein, and M. Rabin (1999): "Choice Bracketing," Journal of Risk and Uncertainty, 19, 171–197.

Schmeidler, D. (1989): "Subjective Probability and Expected Utility Without Additivity," Econometrica, 57, 571–587.

Snowberg, E. and J. Wolfers (Forthcoming): "Explaining the Favorite-Longshot Bias: Is It Risk-Love or Misperceptions?" Journal of Political Economy.

Sydnor, J. (Forthcoming): "(Over)insuring Modest Risks," American Economic Journal: Applied Economics.

Tanaka, T., C. F. Camerer, and Q. Nguyen (2010): "Risk and Time Preferences: Linking Experimental and Household Survey Data from Vietnam," American Economic Review, 100, 557–571.

Tversky, A. and D. Kahneman (1992): "Advances in Prospect Theory: Cumulative Representation of Uncertainty," Journal of Risk and Uncertainty, 5, 297–323.

Vuong, Q. H. (1989): "Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses," Econometrica, 57, 307–333.

[Figure 1 appears here: the probability weighting functions π(μ) plotted against μ over the unit interval.]

Figure 1: Probability Weighting Functions
Notes: The black curve is the Prelec (1998) function with α = 0.7. The red and green curves are the Tversky and Kahneman (1992) function with δ = 0.5 and δ = 0.69, respectively. The dashed line is the 45 degree line.

Table 1: Descriptive Statistics
Core Sample (4170 Households)

Variable                              Mean   Std Dev   1st Pctl   99th Pctl
Driver 1 age (years)                  54.5   15.4      26         84
Driver 1 female                       0.37
Driver 1 single                       0.24
Driver 1 married                      0.51
Driver 1 credit score                 766    113       530        987
Driver 2 indicator                    0.42
Home value (thousands of dollars)     191    125       10         619

Note: Omitted category for driver 1 marital status is divorced or separated.

Table 2: Summary of Deductible Choices
Core Sample (4170 Households)

Deductible   Collision   Comp   Home
$50          –           5.2    –
$100         1.0         4.1    0.9
$200         13.4        33.5   –
$250         11.2        10.6   29.7
$500         67.7        43.0   51.9
$1000        6.7         3.6    15.9
$2500        –           –      1.2
$5000        –           –      0.4

Note: Values are percent of households.

Table 3: Summary of Premium Menus
Core Sample (4170 Households)

                                                   Mean   Std Dev   1st Pctl   99th Pctl
Annual premium for coverage with $500 deductible:
  Auto collision                                   180    100       50         555
  Auto comprehensive                               115    81        26         403
  Home all perils                                  679    519       216        2511
Cost of decreasing deductible from $500 to $250:
  Auto collision                                   54     31        14         169
  Auto comprehensive                               30     22        6          107
  Home all perils                                  56     43        11         220
Savings from increasing deductible from $500 to $1000:
  Auto collision                                   41     23        11         127
  Auto comprehensive                               23     16        5          80
  Home all perils                                  74     58        15         294

Note: Annual amounts in dollars.

Table 4: Predicted Claim Rates (Annual)
Core Sample (4170 Households)

                      Collision   Comp    Home
Mean                  0.072       0.021   0.089
Standard deviation    0.026       0.011   0.053
1st percentile        0.026       0.004   0.025
5th percentile        0.035       0.007   0.034
25th percentile       0.053       0.013   0.054
Median                0.069       0.019   0.079
75th percentile       0.087       0.027   0.110
95th percentile       0.120       0.042   0.177
99th percentile       0.150       0.056   0.265

Correlations                                Collision   Comp   Home
Collision                                   1
Comp                                        0.13        1
Home                                        0.27        0.19   1
Premium for coverage with $500 deductible   0.35        0.15   0.24

Table 5: Benchmark Estimates (Model 2)
Core Sample (4170 Households) r Coef
Constant
Driver 1 age
Driver 1 age squared

Λ

α

Std Err

Coef

Std Err

‐16.06 **

0.95

‐12.19 *

6.70

‐0.40 **

0.01

‐0.60 **

0.26

1.45

‐0.04 **

0.01

0.92 **

0.22

‐1.89

7.33

2.91 **

Coef

Std Err

0.01

0.00

Driver 1 female

‐0.18

0.20

‐0.44

1.99

Driver 1 single

0.06

0.27

0.83

1.32

0.00

0.01

0.75

0.48

1.15

0.00

0.01

0.01

0.21

0.14

2.87

‐0.06 **

0.01

‐0.14

1.34

‐1.83

2.11

0.03 *

0.01

Coef

Std Err

Driver 1 married
Driver 1 credit score
Driver 2 indicator

‐4.32 **

σ

2.93 **

‐0.02 **

0.05

Parameter mean

0.0000299

0.0000

0.683

Parameter median

0.0000001

0.0000

0.678

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


0.01

Table 6: Economic Significance of Benchmark Estimates
(1)

(2)

r=0

Λ=0 α=0.7 (3)

r=0

(4)

Λ=0

Λ=0

α=1

α=0.7

α=1

(5)

(6)

(7)

(8)

r=0.0001

r=0.001

r=0.00003

Λ=0

Λ=0

Λ=0

Λ=0.02

α=0.7

α=0.7

α=0.7

α=0.7

r=0.00003 r=0.00003

μ

μ1

0.02

0.020

0.060

10.00

0.05

0.048

0.107

24.99

0.07

0.065

0.130

34.97

79.13

35.75

80.94

84.99

123.10

82.34

0.10

0.090

0.158

49.92

102.02

51.03

104.34

109.54

156.70

106.08

0.15

0.129

0.193

74.74

136.23

76.38

139.31

146.17

205.30

141.52

ω1

WTP

WTP

WTP

WTP

WTP

WTP

WTP

32.59

10.22

33.32

35.00

52.01

33.95

62.31

25.55

63.72

66.92

97.83

64.86

Note: WTP denotes, for a household with claim rate μ, the utility function in equation (5), and the specified preference parameters, the household's maximum willingness to pay, in terms of excess premium above $200, to decrease its deductible from $1000 to $500.


Table 7: Mean Choice Probabilities
Core Sample (4170 Households)

                                                           Collision   Comp    Home
(1) Random choice                                          0.200       0.167   0.167
(2) Full model - all households                            0.333       0.234   0.377
(3) Full model - policies with $50 or $100 deductible      0.194       0.139   0.089
(4) Full model - policies with $200 or $250 deductible     0.235       0.192   0.440
(5) Full model - policies with $500 deductible             0.377       0.279   0.337
(6) Full model - policies with $1000 deductible            0.269       0.439   0.427
(7) Full model - policies with $2500 or $5000 deductible   –           –       0.125
(8) Restricted model (r=Λ=0) - all households              0.332       0.234   0.376
(9) Restricted model (Λ=0, α=1) - all households           0.295       0.205   0.331

Table 8: Accounting for Wealth (Models 3-5)
Core Sample (4170 Households)

                        r           Λ        α
Model 2:
  Parameter mean        0.0000299   0.0000   0.683
  Parameter median      0.0000001   0.0000   0.678
Model 3:
  Parameter mean        0.0003803   0.0000   0.730
  Parameter median      0.0002531   0.0000   0.723
Model 4:
  Parameter mean        0.0000690   0.0000   0.687
  Parameter median      0.0000003   0.0000   0.678
Model 5:
  Parameter mean        0.0000619   0.0001   0.684
  Parameter median      0.0000002   0.0000   0.675

Table 9: Accounting for Unobserved Heterogeneity
Core Sample (4170 Households)

                        r           Λ        α
Model 2u:
  Parameter mean        0.0010460   0.0001   0.817
  Parameter median      0.0005689   0.0000   0.806
Model 5u:
  Parameter mean        0.0007313   0.0000   0.785
  Parameter median      0.0002334   0.0000   0.767

Appendix (Not for Publication) to
The Nature of Risk Preferences: Evidence from Insurance Choices

Levon Barseghyan (Cornell University)
Francesca Molinari (Cornell University)
Ted O'Donoghue (Cornell University)
Joshua C. Teitelbaum (Georgetown University)

July 21, 2010

Table A.1: Summary of Premium Menus - Auto Collision
Core Sample (4170 Households)

                                                       Deductible Choice
                                              $100   $200   $250   $500   $1000   All
Mean annual premium for $500 deductible       110    129    146    189    255     180
  Standard deviation                          54     54     66     96     168     100
Mean cost of decreasing deductible
  from $500 to $250                           33     38     44     57     77      54
  Standard deviation                          17     17     20     29     52      31
Mean savings from increasing deductible
  from $500 to $1000                          24     29     33     43     58      41
  Standard deviation                          12     12     15     22     39      23
Number of households                          42     559    467    2822   280     4170

Note: All values in dollars, except number of households.

Table A.2: Summary of Premium Menus - Auto Comprehensive
Core Sample (4170 Households)

                                                       Deductible Choice
                                              $50    $100   $200   $250   $500   $1000
Mean annual premium for $500 deductible       61     70     92     98     136    258
  Standard deviation                          27     33     43     41     71     247
Mean cost of decreasing deductible
  from $500 to $250                           16     18     24     26     36     68
  Standard deviation                          7      9      11     11     19     66
Mean savings from increasing deductible
  from $500 to $1000                          12     14     18     19     27     51
  Standard deviation                          5      7      9      8      14     49
Number of households                          216    171    1397   440    1795   151

Note: All values in dollars, except number of households.

Table A.3: Summary of Premium Menus - Home
Core Sample (4170 Households)

                                                       Deductible Choice
                                              $100   $250   $500   $1000   $2500   $5000
Mean annual premium for $500 deductible       366    520    631    972     2218    3366
  Standard deviation                          113    218    308    593     2289    1808
Mean cost of decreasing deductible
  from $500 to $250                           31     42     52     80      183     275
  Standard deviation                          6      18     26     48      201     140
Mean savings from increasing deductible
  from $500 to $1000                          41     57     69     107     244     368
  Standard deviation                          8      23     34     64      268     188
Number of households                          36     1239   2166   664     50      15

Note: All values in dollars, except number of households.

Table A.4: Claim Rate Regressions ‐ Auto
Poisson Panel Regression Model with Random Effects
Full Data Set (1,348,020 Household‐Year Records )
Collision

Comprehensive

Coef

Std Err

Coef

Std Err

Constant

‐6.7646 **

0.0616

‐7.9277 **

0.1057

Driver 2 Indicator

‐0.0485

0.0593

‐0.3542 **

0.1022

‐0.1261

Driver 3+ Indicator

0.3215 **

0.0733

Vehicle 2 Indicator

0.5991 **

0.0466

0.1201

0.6502 **

0.0782

0.0596

0.8766 **

0.0937

Young Driver

‐0.0058

0.0296

0.0895 **

0.0453

Driver 1 Age

‐0.0210 **

0.0015

0.0113 **

0.0029

Driver 1 Age Squared

0.0002 **

0.0000

‐0.0002 **

0.0000

Driver 1 Female

0.1040 **

0.0093

‐0.0672 **

0.0168

Driver 1 Married

0.0630 **

0.0111

0.0640 **

0.0201

Driver 1 Divorced

0.0186

0.0141

0.0914 **

0.0247

Driver 1 Separated

0.0392

0.0256

0.0791

0.0428

Vehicle 3+ Indicator

Driver 1 Single
Driver 1 Widowed

0.7312 **

.

.

0.0031

‐0.0170

0.0335

‐0.0286 **

0.0030

‐0.0354 **

0.0019

Vehicle 1 Age Squared

‐0.0006 **

0.0001

Vehicle 1 Farm

.

0.0160

Vehicle 1 Age
Vehicle 1 Business

.

.

.

‐0.2575 **

0.0872

0.0000

0.0002

.

.

0.0206

0.1194

Vehicle 1 Pleasure

‐0.1094 **

0.0306

‐0.1118 **

Vehicle 1 Work

‐0.0831 **

0.0304

‐0.0620

0.0523

Vehicle 1 Passive Restraint

‐0.1087 **

0.0239

‐0.0858 **

0.0352

Vehicle 1 Anti‐Theft

0.0754 **

0.0078

0.0735 **

0.0136

Vehicle 1 Anti‐Lock

0.0581 **

0.0080

0.0729 **

0.0139

Driver 2 Age

0.0115 **

0.0024

0.0181 **

0.0042

‐0.0001 **

0.0000

‐0.0001 **

0.0000

Driver 2 Female

0.1204 **

0.0151

‐0.0376

0.0257

Driver 2 Married

‐0.0835 **

0.0191

‐0.0408

0.0326

Driver 2 Divorced

‐0.1579

0.1027

‐0.1347

0.1636

Driver 2 Separated

0.0254

0.2130

0.1796

0.3226

Driver 2 Age Squared

Driver 2 Single

0.0526

.

.

Driver 2 Widowed

‐0.0802

0.1383

‐1.1835 **

0.3864

Vehicle 2 Age

‐0.0332 **

0.0016

‐0.0229 **

0.0027

0.0004 **

0.0001

0.0002 **

0.0001

Vehicle 2 Age Squared
Vehicle 2 Business
Vehicle 2 Farm

.

.

.

.

.

.

‐0.1703

0.1056

‐0.1345

0.1500
0.0663

Vehicle 2 Pleasure

‐0.1805 **

0.0380

‐0.0563

Vehicle 2 Work

‐0.1670 **

0.0381

0.0119

Vehicle 2 Passive Restraint

‐0.0428 **

0.0201

‐0.0875 **

0.0294

Vehicle 2 Anti‐Theft

0.0547 **

0.0103

0.0385 **

0.0171

Vehicle 2 Anti‐Lock

0.0317 **

0.0105

0.0199

‐0.0017 **

0.0000

Driver 1 Credit Score

0.0664

0.0170

‐0.0013 **

0.0001

Driver 1 Previous Accident

0.0913 **

0.0156

0.0756 **

0.0277

Driver 1 Previous Convictions

0.1476

0.0888

0.0648

0.1670

Driver 1 Previous Reinstated

0.0170

0.0558

0.0003

0.0996

Driver 1 Previous Revocation

‐0.0218

0.1456

0.3156

0.1967

Driver 1 Previous Suspension

0.0463

0.0564

0.0125

0.1026

Driver 1 Previous Violation

0.0827 **

0.0093

0.0577 **

Year Dummies
Territory Codes
Variance (φ)
Loglikelihood

Yes
Yes
0.2242 **

Yes
0.0065

‐399,318

** Significant at 5 percent level.


0.0161

Yes
0.5661

0.0198

‐169,817

Table A.5: Claim Rate Regression ‐ Home
Poisson Panel Regression Model with Random Effects
Full Data Set (1,265,229 Household‐Year Records )
Coef

Std Err

‐7.3642 **

0.0978

Dwelling Value

0.0000 **

0.0000

Home Age

0.0016 **

0.0006

Home Age Squared

0.0000 **

0.0000

Constant

Number of Families

‐0.0021

0.0023

Distance to Hydrant

0.0000

0.0000

Alarm Discount

0.2463 **

0.0195

‐0.1852 **

0.0239

Farm/Business

0.1044 **

0.0242

Primary Home

0.4832 **

0.0819

Owner Occupied

0.2674 **

0.0419

Construction: Fire Resist

0.1525

0.1342

Construction: Masonry

0.0751 **

0.0172

Construction: Masonry/Veneer

0.0755 **

0.0252

Protection Devices

Construction: Frame

.

Credit Score

.

‐0.0026 **

Year Dummies

Yes

Protection Classes

Yes

Territory Codes

0.0000

Yes

Variance (φ)

0.4514 **

0.0086

‐347,278

Loglikelihood
** Significant at 5 percent level.


Table A.6: Model 3 Estimates
Core Sample (4170 Households) r Coef

Λ
Coef

Std Err

α
Std Err

Coef

Std Err

Constant

‐8.61 **

0.21

‐17.00 **

4.24

‐0.34 **

Driver 1 age

‐0.64

0.01

0.11

0.77

3.44

‐0.07

0.02

0.07

‐0.26

2.45

0.01

Driver 1 female

‐0.11

0.07

0.79

1.03

Driver 1 single

0.14

0.09

‐0.81

1.59

0.01

0.01

‐0.09

0.15

‐0.50

1.07

0.00

0.01

Driver 1 credit score

0.03

0.07

‐0.98

1.26

Driver 2 indicator

0.41

0.28

Driver 1 age squared

Driver 1 married

**

**

0.01
0.00

‐0.02 **

‐0.06 **

0.01

0.01

‐1.20 **

σ

0.03

0.02

0.12

0.10

1.15
0.23

0.00

0.00

Coef

Home value

5.71 **

Std Err

2.45 **

0.07

Parameter mean

0.0003803

0.0000

0.730

Parameter median

0.0002531

0.0000

0.723

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.

Table A.7: Model 4 Estimates
Core Sample (4170 Households) r Λ

Coef

Std Err

Constant

‐3.48

2.60

Driver 1 age

‐6.76 **

3.21

Driver 1 age squared

‐1.28

Driver 1 female

‐0.37

Driver 1 single

‐0.12

Driver 1 married

‐0.44

Driver 1 credit score
Driver 2 present

Coef

α
Std Err

‐14.97 **

Coef

Std Err

2.24

‐0.41 **

0.01

‐0.79

6.25

‐0.05 **

0.01

1.02

‐1.12

5.79

0.01 **

0.00

0.16

‐1.24

1.80

‐0.02

0.21

‐2.91

6.25

0.00

0.01

0.32

2.60

4.99

0.00

0.01

0.10

0.17

0.89

3.67

‐0.06 **

0.00

0.43

0.53

‐0.54

3.54

0.03 *

0.01

Coef

Std Err

**

σ

2.89 **

**

0.05

Parameter mean

0.000069

0.0000

0.687

Parameter median

0.0000003

0.0000

0.678

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


0.01

Table A.8: Model 5 Estimates
Core Sample (4170 Households) r Coef

Λ
Coef

Std Err

Constant

‐3.93 **

0.62

Driver 1 age

‐7.09 **

0.58

Driver 1 age squared

‐1.34 **

Driver 1 female

α
Std Err

‐19.65 **

Coef

Std Err

3.01

‐0.41 **

0.01

‐0.89

10.67

‐0.05 **

0.01

0.28

‐9.95

7.02

0.01 **

0.00

‐0.40 **

0.18

1.09

2.04

‐0.02 **

0.01

Driver 1 single

‐0.11

0.24

‐0.47

1.09

0.00

0.01

Driver 1 married

‐0.42

0.34

‐3.96

4.78

0.00

0.01

Driver 1 credit score

0.07

0.17

0.30

1.44

‐0.06 **

0.01

Driver 2 indicator

0.35

0.52

0.15

1.72

0.02 **

0.00

Home value

0.03

0.14

1.34 **

0.35

0.02

0.02

Coef σ Std Err

2.89 **

0.05

Parameter mean

0.0000619

0.0001

0.684

Parameter median

0.0000002

0.0000

0.675

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.


Table A.9: Model 2u Estimates
Core Sample (4170 Households) r Coef

Λ
Coef

Std Err

Constant

‐7.37 **

0.17

Driver 1 age

‐0.98

α
Std Err

‐17.00 **

Coef

Std Err

2.04

‐0.23 **

0.01

‐0.10

0.01

0.13

‐4.42

4.49

Driver 1 age squared

0.11

0.08

‐9.87 **

1.56

0.01 *

0.01

Driver 1 female

0.16 *

0.10

‐5.40 **

2.21

0.01

0.01

Driver 1 single

**

**

‐0.02

0.11

0.31

1.56

0.00

0.01

0.19

0.15

4.75

2.97

0.01

0.01

Driver 1 credit score

‐0.09

0.09

‐0.85

3.24

‐0.07 *

0.01

Driver 2 indicator

‐0.32

0.22

1.07

1.67

0.01

0.02

Driver 1 married

Coef σ Std Err

3.14

0.08

**

Parameter mean

0.001046

0.0001

0.817

Parameter median

0.0005689

0.0000

0.086

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.

Table A.10: Model 5u Estimates
Core Sample (4170 Households) r Coef
Constant
Driver 1 age
Driver 1 age squared

Λ
Coef

Std Err

α
Std Err

Coef

Std Err

3.87 **

0.25

‐22.52 **

3.12

‐0.27 **

0.01

‐1.55 **

0.18

8.70 **

1.84

‐0.10 **

0.01

0.11

‐12.80 **

4.44

0.02 **

0.01

0.00

Driver 1 female

‐0.22

0.10

0.24

2.12

‐0.02

Driver 1 single

‐0.09

0.11

‐0.90

2.85

‐0.01

0.01

Driver 1 married

0.20

0.16

‐2.91

3.12

0.00

0.01

Driver 1 credit score

0.06

0.09

1.87

2.86

**

Driver 2 indicator

‐0.64 **

0.30

‐1.15

1.34

Home value

‐0.02

0.06

‐1.99 **

0.74

Coef σ **

‐0.06 **
0.00
‐0.01 **

0.10

Parameter mean

0.0007313

0.0000

0.785

Parameter median

0.0002334

0.0000

0.767

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.


0.01
0.02

Std Err

3.34 **

0.01

0.00

Table A.11: Model 2 with Cumulative Form of Rank Dependence
Core Sample (4170 Households) r Coef
Constant

Λ
Coef

Std Err

α
Std Err

Coef

Std Err

‐13.99 **

1.98

‐11.67 **

4.14

‐0.37 **

0.01

Driver 1 age

‐3.97 **

0.87

1.43

1.33

‐0.04 **

0.00

Driver 1 age squared

‐0.76 **

0.38

‐5.03 **

Driver 1 female

0.22

0.19

‐0.08

1.06

Driver 1 single

0.01

0.17

‐1.33

1.05

0.00

0.01

‐1.71

1.97

0.69

1.04

‐0.01

0.01

‐0.04 **

Driver 1 married
Driver 1 credit score

0.49 **

2.09

0.22

‐5.05

1.61

1.81

4.69

‐2.07

1.24

Coef

Driver 2 indicator

0.00

0.00

‐0.01 **

0.00

Std Err

σ

3.25 **

0.02

0.00
0.01

0.06

Parameter mean

0.0000298

0.0000

0.698

Parameter median

0.0000000

0.0000

0.695

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.

Table A.12: Model 5 with Cumulative Form of Rank Dependence
Core Sample (4170 Households) r Coef

Λ
Coef

Std Err

α
Std Err

Coef

Std Err

Constant

‐8.06 *

4.70

‐12.45 **

1.32

‐0.38 **

0.01

Driver 1 age

‐4.91 **

2.02

0.27

1.00

‐0.04 **

0.00

Driver 1 age squared

‐0.86

0.74

‐1.35

1.12

Driver 1 female

‐0.12

0.27

‐0.43

1.03

Driver 1 single

‐0.12

0.23

‐0.29

1.01

0.00

0.01

Driver 1 married

‐6.45 **

3.19

0.40

3.71

‐0.01

0.01

‐0.04 **

Driver 1 credit score

0.00
0.01

Home value

0.34

0.14

1.04

‐7.67 **

2.34

0.38

1.01

0.02

0.01

0.46 **

0.13

‐0.12

1.02

0.01 **

0.00

Coef

Driver 2 indicator

0.34

0.00
‐0.01 **

Std Err

3.26 **

σ

0.05

Parameter mean

0.0000242

0.0000

0.695

Parameter median

0.0000000

0.0000

0.693

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


0.00

Table A.13: Model 2 with Tversky and Kahneman (1992) Probability Weighting Function and Cumulative
Form of Rank Dependence
Core Sample (4170 Households) r Coef
Constant

Λ
Coef

Std Err

‐15.63 **

δ
Std Err

Coef

Std Err

1.42

‐21.00 **

4.13

‐0.45 **

0.01

Driver 1 age

‐2.16 **

0.25

15.17 **

4.80

‐0.03 **

0.01

Driver 1 age squared

‐1.40

0.52

‐5.13 **

1.85

‐0.03 **

0.01

Driver 1 female

‐0.80

0.52

‐0.21

0.42

‐0.03 **

0.01

Driver 1 single

‐0.27

0.30

‐0.83

0.97

Driver 1 married

‐3.08 **

1.19

‐1.99 **

2.57 **

0.61

Driver 1 credit score
Driver 2 indicator

‐0.93

0.42

Coef

‐0.01

0.01

‐0.04 **

0.01

3.91

10.01 **

σ

0.01

0.19

0.17

2.35

0.00

‐1.12 **

0.02

Std Err

3.09 **

0.05

Parameter mean

0.0000830

0.0212

0.438

Parameter median

0.0000000

0.0000

0.556

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.

Table A.14: Model 2 with Linear Probability Weighting Function
Core Sample (4170 Households) r Coef
Constant
Driver 1 age
Driver 1 age squared

‐12.03 **

Λ
Coef

Std Err
0.51

‐0.56 **

Std Err

‐11.11 **

0.16

0.97 **

α

0.09

0.01

4.19

‐0.03 **

0.00

2.85

3.90
‐6.49 **

‐0.31 **
0.01 **

0.00

‐0.01 **

0.00

‐0.16

0.17

1.66

1.60

Driver 1 single

0.00

Driver 2 indicator

0.23

0.32

6.30

‐0.47 **

0.42

0.80

3.70

0.05 **

0.18

0.81

3.72

‐6.67 **

1.10

‐2.20

2.80

Coef

Driver 1 married

Std Err

2.81

Driver 1 female

Driver 1 credit score

Coef

Std Err

2.91 **

σ

0.00

0.00

0.00

0.01

‐0.04 **
0.01

0.01

0.05

Parameter mean

0.0000352

0.0000

0.740

Parameter median

0.0000075

0.0000

0.736

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.


0.00

Table A.15: Model 2 with Lattimore et al. (1992) Probability Weighting Function
Core Sample (4170 Households) r Coef
Constant

‐18.91 **

Λ
Std Err
2.18

Coef

a
Coef

Std Err

‐12.73 **

δ
Std Err

1.00

1.42 **

Coef

Std Err

0.08

‐1.39 **

0.07
0.04

Driver 1 age

‐1.87 *

1.07

0.04

0.99

‐0.17 **

0.04

0.08 *

Driver 1 age squared

‐0.20

0.42

‐0.01

1.00

0.11 **

0.03

‐0.09 **

0.03

Driver 1 female

0.26 **

0.10

‐0.03

1.00

0.03

0.05

‐0.07

0.05

Driver 1 single

0.16

0.10

0.02

1.00

0.07

0.05

‐0.06

0.06

0.07

0.08

0.07

0.03

0.08 **

0.03

0.11

0.01

0.11

Driver 1 married
Driver 1 credit score
Driver 2 indicator

2.08

‐0.01

1.00

‐0.12 *

0.03

0.09

‐0.01

1.00

‐0.12 **

‐0.40

1.03

0.01

1.00

‐8.70 **

0.03

Coef σ 3.55

Std Err
**

0.06

Parameter mean

0.0000864

0.0000

4.996

0.235

Parameter median

0.0000000

0.0000

4.954

0.230

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


Table A.16: Model 3 with CARA Utility
Core Sample (4170 Households) r Coef

Λ

α

Std Err

Coef

Std Err

Coef

Std Err

Constant

‐7.34 **

0.05

‐11.40

17.94

‐0.21 **

0.03

Driver 1 age

‐0.07

**

0.02

‐0.55

1.00

‐0.09

**

0.01

Driver 1 age squared

0.18 **

0.02

‐0.20

1.01

0.06 **

0.01

Driver 1 female

0.09 **

0.02

0.00

1.02

0.00

0.01

Driver 1 single

0.05 **

0.02

0.25

1.00

0.03 **

0.01

Driver 1 married

0.07

0.04

‐0.68

1.00

0.04

0.02

Driver 1 credit score

‐0.04

0.03

0.17

1.01

‐0.09 **

0.01

Driver 2 indicator

‐0.35 **

0.07

0.61

1.00

‐0.10 **

0.04

Home value

‐0.06

0.00

‐0.04

1.00

Coef

Std Err

**

σ

6.57 **

0.21

**

0.01

0.24

Parameter mean

0.0007058

0.0000

0.884

Parameter median

0.0006769

0.0000

0.794

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.

Table A.17: Model 2 ‐ At Most One Claim
Core Sample (4170 Households) r Coef
Constant

Λ
Coef

Std Err

‐13.80 **

2.30

Driver 1 age

‐4.62 **

1.25

Driver 1 age squared

‐1.00 **

0.49

α
Std Err

‐19.72 **

Coef

Std Err

2.49

‐0.47 **

‐0.56

1.66

‐0.06 **

‐7.03 *

3.73
2.14

0.00

0.01
0.01
0.01

Driver 1 female

0.24

0.20

‐1.41

Driver 1 single

‐0.02

0.18

‐0.02

Driver 1 married

‐1.54

2.21

4.12

1.54

‐0.01

0.01

0.39

0.24

‐0.86

1.12

‐0.05 **

0.01

‐6.12

6.10

‐3.13

2.74

Coef

Std Err

Driver 1 credit score
Driver 2 indicator σ 1.03
**

3.16 **

‐0.02 **
0.00

0.01

0.02

0.02

0.05

Parameter mean

0.0000461

0.0000

0.637

Parameter median

0.0000000

0.0000

0.633

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


0.01

Table A.18: Model 5 ‐ At Most One Claim
Core Sample (4170 Households) r Coef
Constant

‐7.29

Driver 1 age squared

‐1.44

Driver 1 female

Coef

Std Err

‐3.87

Driver 1 age

Λ

α
Std Err

Coef

Std Err

3.60

‐27.16 **

2.80

‐0.48 **

0.01

4.42

‐11.04

**

1.88

‐0.06

**

0.01

1.38

‐6.41 **

1.96

0.01 **

0.01

‐0.37 **

0.18

9.11 **

1.19

‐0.02 **

0.01

Driver 1 single

‐0.19

0.24

6.32 **

2.11

0.00

0.01

Driver 1 married

‐0.45

0.37

11.60 **

2.13

0.00

0.01

Driver 1 credit score

0.10

0.18

1.94 **

0.88

‐0.05 **

0.01

Driver 2 indicator

0.30

0.56

‐12.63 **

1.71

0.02 **

0.02

Home value

0.02

0.14

2.43

0.41

0.01

0.00

*

**

Coef σ Std Err

3.15 **

0.06

Parameter mean

0.0000644

0.0014

0.637

Parameter median

0.0000002

0.0000

0.630

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.

Table A.19: Model 2 Without Extreme Deductibles
Core Sample (4170 Households) r Coef
Constant

Λ

α

Std Err

Coef

Std Err

Coef

Std Err

‐11.85 **

1.47

‐21.49

14.72

‐0.45 **

0.01

Driver 1 age

‐3.59 *

1.85

0.53

8.25

‐0.09 **

0.01

Driver 1 age squared

‐0.57

0.60

‐15.34

12.03

0.02 **

0.01

Driver 1 female

‐0.08

0.13

1.67

9.88

Driver 1 single

‐0.01 *

0.01

0.24

0.22

1.95

14.84

0.01

0.01

‐0.14

0.30

‐0.94

24.75

‐0.01

0.01

Driver 1 credit score

0.02

0.13

1.58

2.77

‐0.06 **

0.01

Driver 2 indicator

0.21

0.46

10.42

12.90

Coef

Std Err

Driver 1 married

σ

2.46 **

0.02

0.02

0.07

Parameter mean

0.0001114

0.0001

0.668

Parameter median

0.0000094

0.0000

0.653

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


Table A.20: Model 2 ‐ Auto Collision Only
Core Sample (4170 Households) r Coef

Λ

α

Std Err

Constant
Driver 1 age

‐0.43

Std Err

Coef

0.14

‐6.82 **

Coef
‐11.72

39.66

‐0.20 **

1.83

2.89

‐0.01

0.01

2.73

0.01

0.01

5.40

0.02 *

0.01

0.08

**

Driver 1 age squared

0.01

0.06

Driver 1 female

0.15 **

0.07

0.94

Driver 1 single

‐5.92 **

Std Err
0.02

‐0.08

0.07

‐0.39

7.81

‐0.01

0.01

Driver 1 married

0.10

0.09

1.48

1.84

0.00

0.01

Driver 1 credit score

0.15 *

0.08

0.93

8.39

‐0.03 **

0.01

Driver 2 indicator

0.16

0.17

‐1.73

3.33

0.10 **

0.02

Coef

Std Err

σ

3.26 **

0.12

Parameter mean

0.0012839

0.0000

0.869

Parameter median

0.0011647

0.0000

0.867

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.

Table A.21: Model 2 ‐ Auto Comprehensive Only
Core Sample (4170 Households) r Coef

Λ
Coef

Std Err

Constant

‐6.70 **

0.20

Driver 1 age

α
Std Err

‐10.39 **

Coef

Std Err

2.82

‐0.37 **

0.02

‐0.21 **

0.08

1.44

2.54

‐0.09 **

0.01

Driver 1 age squared

0.18 **

0.06

‐2.17

12.66

‐0.01

0.01

Driver 1 female

0.28

0.07

0.11

2.16

Driver 1 single

0.02

0.09

1.16

3.52

‐0.02 *

0.02 **

0.01
0.01

Driver 1 married

‐0.09

0.14

0.02

1.12

‐0.02

0.02

Driver 1 credit score

‐0.01

0.09

0.07

12.33

‐0.03 **

0.01

0.06

0.26

‐0.93

1.18

Coef

Std Err

Driver 2 indicator

**

σ

4.23 **

0.03

0.03

0.20

Parameter mean

0.0016726

0.0000

0.692

Parameter median

0.0014480

0.0000

0.696

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


Table A.22: Model 2 ‐ Home All Perils Only
Core Sample (4170 Households) r Coef
Constant
Driver 1 age

Coef

Std Err

‐11.29 **
‐0.51

Λ
0.53

α
Std Err

Std Err

5.92

‐0.40 **

0.02

0.18

14.98

**

5.79

‐0.07

**

0.01

0.17

‐6.08 **

2.82

0.02 **

0.01

4.08

2.86

‐0.02 **

0.01

‐2.22

**

Driver 1 age squared

0.96 **

Driver 1 female

0.26

‐22.00 **

Coef

0.19

Driver 1 single

‐0.05

0.20

1.70

0.01

0.01

Driver 1 married

‐0.49

0.39

1.41 **

0.70

0.00

0.02

0.17

0.23

‐5.67 **

1.65

‐0.08 **

0.01

‐10.29

26.66

‐6.15

4.72

‐0.02

0.03

Coef

Std Err

Driver 1 credit score
Driver 2 indicator σ 1.83 **

0.05

Parameter mean

0.0000749

0.0064

0.684

Parameter median

0.0000174

0.0000

0.668

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level.

Table A.23: Model 2 with Coverage‐Specific Choice Noise
Core Sample (4170 Households) r Coef

Λ

α

Std Err

Coef

Std Err

Coef

Constant

‐9.21 **

0.26

‐18.51

29.42

‐0.40 **

0.01

Driver 1 age

‐0.72 **

0.15

0.01

0.99

‐0.04 **

0.01

Driver 1 age squared

0.19 *

0.11

‐6.22

21.34

0.00

0.00

Driver 1 female

0.23 **

0.11

‐0.06

1.01

0.00

0.01

0.11

0.53

2.21

0.00

0.01

0.22

1.33

4.99

‐0.01

0.10

1.18

4.26

‐0.04

11.93

‐3.53

12.45

0.00

0.02

0.11

4.79 **

0.16

Driver 1 single

‐0.02

Driver 1 married

‐0.24

Driver 1 credit score
Driver 2 indicator σ 0.37

**

‐9.01
1.64 **

0.05

3.98 **

Std Err

0.01
**

Parameter mean

0.0001132

0.0000

0.678

Parameter median

0.0000699

0.0000

0.676

Note: Each variable z is normalized as (z ‐mean(z ))/stdev(z ).
** Significant at 5 percent level. * Significant at 10 percent level.


0.00

Similar Documents

Free Essay

Student

...Revision of Critical essay *Introduction In today's society there is a lot of pressure on students academically to have a good performance and with that comes a lot of stress. Some students find a way to try to balance their hectic school life style whether it be some kind of recreational activity. One of those activities is sports and whether it can make a better student. I believe that yes it can increase your performance academically because it teaches you skills such as focus, fitness and communication with others. In the article “do athletes make better students, Natalie Gil written for the guardian.com. Natlie Gil claims that studies show that doing both can benefit studies and sports performance, providing motivation and preparation. Natalie Gil also goes on to state that it helps organization and pervents procrastination and that being fit alters students mood in a good way claiming a healthy body is a healthy mind. Lastly, Natalie Gil goes on to show evidence that it also helps with communication and team work whether at school or later in landing a career. Pathos Natalie Gil Appeals to the stress and desire to succeed in today's world as students upcoming in today's society. She also uses the points or appeal to support her view or stance on the subject that athletes do make better students and that this will lead to success not only in their academic life but also in their career choice Logos Natalie...

Words: 616 - Pages: 3

Premium Essay

Student

...are important to be included in the evaluation of teaching effectiveness. These factors are as the criteria for the evaluating of educational effectiveness. Some of these factors still work as a criterion for the evaluation process. While, the other factors have to be excluded from the evaluation and not to be given as much weight. Therefore, the main goal of this study is to ask administrators about which items still valid until the now and have to be included in the evaluation process and which of these items are invalid to be an evaluation criterion. This article also offers the main sources of data for evaluation of faculty performance as one of the important components of evaluation of educational effectiveness. There sources are students’ evaluation tools, teaching portfolios, classroom visitation reports, and scholarship activities. These sources offer significant information about the faculty performance and consequently they will contribute significantly in assessing and evaluating the teaching effectiveness. There are some items of evaluation have to be included and be given more weight in any evaluation process of the educational effectiveness because they have a significant relation to the success of the evaluation process. These items are currency in field, peers evaluation, classroom visits, professors preparations. While, there are some items have to be excluded because they do not contribute in success of evaluation of teaching effectiveness...

Words: 325 - Pages: 2

Free Essay

Student

...SOX testing, I was also assigned to assist building the Compliance Universe for the whole organization. I appropriately allocated my time and energy to these two projects, so that I completed most of my work in a high quality and on a timely basis. I am a dedicated team player who loves communicating with people. I interviewed Hologic’s employees to understand key business processes, joined all the staff meetings and presented my ideas and achievements to the team, collaborated with colleagues to work on other projects to meet the deadline. I am also a person with great research and analytical skills. I used CCH, FASB Codification and some other information sources to finish my cases in academic study. Even though I am an international student, I believe that I am better for this position than anyone else. Companies like Signiant need global perspective people. I majored in International economy and trade during undergraduate study. I have knowledge about foreign currency, international transactions and taxes. All I need is a chance to learn and contribute in a fast-paced company like Signiant. The enclosed resume briefly summarizes my educational background and experiences, I would like to meet with you for an interview during which I can fully express my capacity and desire to work for Signiant. In the meantime, if you need any additional information, please contact me by phone at 781-502-8582 or via e- mal at liulezi2012@hotmail.com Thank you for your time and...

Words: 319 - Pages: 2

Free Essay

Student

...Study of Asia-Pacific MBA Programs Bloomberg Businessweek posted an article on March 17th, 2014 titled "Elite Business Schools Hike Tuition for the Class of 2016." This article draws a comparison between tuition costs for the class of 2015 at selected US MBA programs and those for the class of 2016. Tuition costs are increasing more and more every year; for this reason, looking at other alternatives may be more cost effective. The following study provides an interpretation of tuition costs, both local and foreign, in the Asia-Pacific region. From this study we can see the comparison between tuition costs and starting salaries. We can also see other deciding factors such as admission requirements. Finally, this study provides a recommendation for an MBA program in the Asia-Pacific region. Please note Table 1.1, listing the study's programs with their corresponding graph IDs.

Table 1.1
Business School | Graph ID
Lahore University of Management Sciences | LUMS
Indian Institute of Management (Calcutta) | IIMC
University of New South Wales (Sydney) | UNSW
Indian Institute of Management (Bangalore) | IIMB
Curtin Institute of Technology (Perth) | CIT
Massey University (Palmerston North, New Zealand) | MU
University of Queensland (Brisbane) | UQ
University of Adelaide | UA
Monash Mt. Eliza Business School (Melbourne) | MMEBS
Melbourne Business School | MBS
Royal Melbourne Institute of Technology | RMIT
Macquarie Graduate School of Management...

Words: 3907 - Pages: 16

Free Essay

Student

...THE RATE OF INVOLVEMENT OF KPTM KL'S STUDENTS IN SPORTS AT THE COLLEGE

Prepared by:
MUHAMMAD AEZHAD BIN AZHAR CVB130724387
MUHAMMAD FARHAN BIN ABDUL RAHMAN CVB130724287
RAHMAN MUSTAQIM BIN KHOSAIM CVB130724279
MUHAMMAD AIMAN BIN MOHD HUSNI CVB130724388

Prepared for: Madam Jaaz Suhaiza Jaafar

Submitted in partial fulfillment of the requirements of the 106km course.

TABLE OF CONTENTS
NUMBER | CONTENTS | PAGES
1. | ACKNOWLEDGEMENT | 3
2. | INTRODUCTION | 4
3. | OBJECTIVES | 5
4. | METHODOLOGY | 6-7
5. | GRAPH | 8-11
6. | CONCLUSION | 12
7. | APPENDIX TABLE | 13
8. | APPENDIX | 14-17

ACKNOWLEDGEMENT First of all, we are truly thankful to Madam Jaaz Suhaiza Jaafar for allowing us to do this mini project and seeing it through to successful completion. We are also thankful because madam helped us a great deal, giving instructions on how to do it properly until we finished it. Without her help, it would have been very hard for us to complete it in such a short time. We also want to thank all 50 of our respondents, all of them KPTM KL students at the diploma, degree, or professional level. They were all kind and very friendly to us, and nobody refused to give a little time to fill out our questionnaire. We truly wish to thank them, because without them we could not have finished our mini project. Last but not least, thank you so much to our...

Words: 2116 - Pages: 9

Premium Essay

Student

...playing a basic role in education, and the government was searching for a solution to eliminate this phenomenon. They found that establishing public schools across the states would bring many low-income people into the educational field, and over the years would produce a community with a cultured, educated society. Education varies across all levels, starting from preschool and reaching to postgraduate degrees such as master's and doctoral degrees. To ensure quality, any non-U.S. graduate must pass multiple exams prior to admission, e.g., TOEFL, IELTS, GRE, GMAT. Nowadays there is a gradual increase in the number of international students who want to continue their education in the United States. The improvement of education in the United States is very obvious and attracts students worldwide, and many plans for further progress are underway. All opportunities, social, health, economic, and academic, will depend on the basic structure...

Words: 306 - Pages: 2

Free Essay

Student

...Retention (n), retain (verb, used with object): the continued use, existence, or possession of something or someone: "Two influential senators have argued for the retention of the unpopular tax." "The retention of old technology has slowed the company's growth." water/heat retention
Particularly (adv): especially
Deter (v): to make someone less likely to do something, or to make something less likely to happen [Turkish: caydırmak, vazgeçirmek, yıldırmak]
Perception (n): BELIEF [C] what you think or believe about someone or something [Turkish: algılama, sezgi, görme]: "The public perception of him as a hero is surprising." NOTICE [U] the ability to notice something [Turkish: fark etme, farkına varma, tanıma, görme]: "Alcohol reduces your perception of pain."
Conation (n): impulse
Unanimous: agreed by everyone [Turkish: oy birliği ile üzerinde uzlaşılan; herkesçe kabul edilen; genel kabul gören]: "The jury was unanimous in finding him guilty."
Unanimity /ˌjuːnəˈnɪməti/ noun [U]: when everyone agrees about something [Turkish: genel/toplumsal uzlaşı; oy birliği ile anlaşma; genel kabul; fikir birliği]
Unanimously (adv) [Turkish: oy birliği ile kabul edilmiş]: "The members unanimously agreed to the proposal."
Dissonance noun [U] UK /ˈdɪs.ən.əns/ US /ˈdɪs.ə.nəns/: (specialized, music) a combination of sounds or musical notes that are not pleasant when heard together: "the jarring dissonance of Klein's musical score"; (formal) disagreement
Dissonant adjective UK /ˈdɪs.ən.ənt/ US /ˈdɪs.ə.nənt/ (specialized or formal): a dissonant combination of...

Words: 335 - Pages: 2

Premium Essay

Student

...Student Handbook 2015/2016 www.praguecollege.cz

Table of Contents
Introduction
Message from the Director
Mission, Vision and Values
Why study at Prague College
Admissions
A short guide to Prague College qualifications
English for Higher Education
Foundation Diploma in Business
Foundation Diploma in Computing
Foundation Diploma in Art & Design
Professional Diplomas in Business
Professional Diplomas in Computing
Higher National Diploma
BA (Hons) International Business Management
BA (Hons) International Business Management (Flexible Study Programme)
BA (Hons) Business Finance & Accounting
BA (Hons) Graphic Design
BA (Hons) Fine Art Exp. Media
BSc (Hons) Computing
BA (Hons) Communications & Media Studies
MSc International Management
MSc Computing
Accreditation & Validation
UK/Pearson Credit system
Transfer of credits
Student support
Accommodation
Study Advising and Support
Financial support
Visas for foreign students
Scholarships
Benefits for students
Study abroad
Internships
Assistance in employment
Counselling Centre
Student Resources
Computer labs
Online Learning Centre (Moodle)
Prague College email
Physical library
Digital Library
ISIFA Images
Textbooks and class materials
Graphic Design/Interactive Media/Fine Art materials and costs
Personal computers
Message boards and digital signs
Newsletters
Open lectures, seminars and events
Student ID cards
Centre for Research and Interdisciplinary Studies (CRIS)
Prague...

Words: 27092 - Pages: 109

Free Essay

International Student

...TOPIC: INTERNATIONAL STUDENTS' ATTITUDES ABOUT HIGHER EDUCATION IN THE UK
Student: Pham Trang Huyen My
Student ID: 77142444
10 weeks Pre-sessional course
December, 2013

List of content
Abstract 3
1. Introduction 4
2. Literature review 5
2.1. Higher Education in the UK 5
2.2. Teacher-student relationships and the quality of teaching 5
2.3. Different learning styles 6
2.4. Group work 7
2.5. Financial issues 8
3. Methodology 9
4. Results 10
5. Discussion 14
6. Conclusion 16
List of References 17
Appendix 19

Abstract Higher education is a competitive business which produces huge benefits for the UK economy. This paper reveals international students' attitudes about UK higher education and focuses on the direct factors that can affect students' opinions. Reports of international students' attitudes already carried out at Leeds Metropolitan University are analyzed and the main findings are emphasized. A total of eighteen international students interviewed provided data on their experience of UK education, including the challenges they have faced and what they have achieved. The project concludes that not only UK tuition fees but also the quality of education can affect international students' decision to study in the UK. Therefore measures should be taken in...

Words: 3732 - Pages: 15

Free Essay

Working Student

...INTRODUCTION Many students of HRM at Taguig City University work part-time. Employment during school could improve grades if working promotes traits that correspond with academic success, such as industriousness or time management skills, or it could instead reduce grades by cutting into the time and energy available for school work. Alternatively, working might be associated with academic performance without directly influencing it, if unobserved student differences influence both labor supply and grades. Unmotivated students might neither work for pay nor receive good grades because they put little effort into the labor market or school. In contrast, HRM students uninterested in academics might work long hours that would otherwise have been devoted to leisure. Students might misjudge the link between college achievement and future earnings when making labor supply decisions. If so, obtaining a consistent estimate of how such decisions affect academic performance is important for policy consideration. HRM students at Taguig City University are more likely to work than they are to live on campus, to study full time, to attend a four-year college or university, or to apply for or receive financial aid. Students work regardless of the type of institution they attend, their age or family responsibilities, or even their family income or educational and living expenses. Most HRM students at Taguig City University face many challenges in their already busy everyday lives...

Words: 2898 - Pages: 12

Free Essay

Student Adversity

... Adversity allows an individual to develop a sense of discipline and encourages individuals to exercise their minds to confront a problem or conflict. Specifically, students who encounter hardships are more inclined to try harder, which promotes competition within the school. Although adversity may be beneficial to some students, challenges can be detrimental for students who lack confidence. For instance, some students develop a mentality of despair; they believe that if one has to work hard, then one does not have the natural ability for the assignment. Based on the aforementioned effects of adversity, I believe that students with the proper mentality can benefit from the obstacles faced in school, while for others the effects can be hindering. Students face adversity every day, regardless of how transparent the obstacle may be; some problems may not be as evident as others. According to Carol S. Dweck, author of Brainology, all students face adversities throughout their high-school careers, specifically the challenge of overcoming a fixed mindset. In the excerpt "The belief that intelligence is fixed dampened students' motivation to learn, made them afraid of effort, and made them want to quit after a setback," Dweck portrays the illusion that students have about intuitive intelligence (Dweck 2). Students who share this belief of a...

Words: 1029 - Pages: 5

Free Essay

Student Handbook

...Student Handbook (Procedure & Guideline) for Undergraduate Programmes 2014 Revised: April 2014 UCSI Education Sdn. Bhd. (185479-U)

VISION AND MISSION STATEMENT OF UCSI UNIVERSITY

VISION STATEMENT To be an intellectually resilient praxis university renowned for its leadership in academic pursuits and engagement with the industry and community

MISSION STATEMENT
- To promote transformative education that empowers students from all walks of life to be successful individuals with integrity, professionalism and a desire to contribute to society
- To optimize relationships between industry and academia through the provision of quality education and unparalleled workplace exposure via Praxis Centres
- To spearhead innovation in teaching and learning excellence through unique delivery systems
- To foster a sustainable culture of research, value innovation and practice, in partnership with industries and society
- To operate ethically at the highest standards of efficiency, while instilling values of inclusiveness, to sustain the vision for future generations

Graduate Attributes Getting a university degree is every student's ultimate dream because it opens doors to career opportunities anywhere in the world. A university degree is proof of one's intellectual capacity to absorb, utilize and apply knowledge at the workplace. However, in this current competitive world, one's knowledge and qualifications...

Words: 28493 - Pages: 114

Premium Essay

Student Policy

...Student Academic Policies Computer Usage: Sullivan University Systems (SUS) provides computer networking for all staff, students, and anyone else affiliated with the university community. Sullivan University will provide a platform that is conducive to learning while maintaining and respecting user privacy. Users are authorized to use their own accounts only. Passwords should be protected; please keep them confidential (Computer Usage. (2012) Sullivan University. Student Handbook 2012-2013, pp. 12-14.). While using the SUS, users have a responsibility and are expected to follow some key rules:
1. Do not abuse the equipment
2. Computers must be used for course work
3. No unauthorized downloading
4. At no time will users install software of any kind
Disciplinary actions for violations of the computer usage policy will be enforced and are as follows:
1. Loss of computer privileges
2. Disconnection from the network
3. Expulsion
4. Prosecution
The computer usage policy is standard and pretty straightforward. The statement lets students know what is and is not proper usage. What I would have liked to see is a social media portion in the usage policy. Academic Integrity: Cheating and plagiarism are violations of the University's Academic Integrity Policy. All students are expected to submit their own work. Penalties for those who are found guilty of cheating may include: (Academic Integrity. (2014, January 1) Sullivan University. Sullivan University 2014 Catalog...

Words: 320 - Pages: 2

Premium Essay

Student Satisfaction

...between the quality of school facilities and student...

Words: 2174 - Pages: 9

Premium Essay

Working Students

...performance of hiring working students Introduction While most students have parents who can support them, there are students who need to get what is called a "part-time job" to help parents who cannot support them all the way. However, being employed and being a student at the same time can be too much for a person. The business process outsourcing industry in the Philippines has grown 46% annually since 2006. In its 2013 top 100 ranking of global outsourcing destinations. Significance of the Study There are situations in life when one must do what one can to achieve one's dreams or help one's family, especially when dealing with financial difficulties and the need to work while studying, on top of an everyday busy schedule. This research aims to help understand and discuss the issues and concerns of employed students, to the benefit of the following: Working Students – Being an employee and a student at the same time takes a lot of hard work. It can be rigorous but also rewarding, especially if you help your parents. It can also be good working experience for their future. This study will assist them to see the behaviors that help them achieve their professional skills. Scope and Limitations This study was conducted at LPU-Manila, and the information is viewed only in the light of the particular students and their experience as working students. It does not reflect the view of the general working-student population or that of other...

Words: 606 - Pages: 3