ADVANCED INVESTMENTS

Risk & return
A1. Agents prefer more over less (nonsatiation).
A2. Agents dislike risk (are risk averse).

How should investors, given their preferences, invest their money? (normative)
What can we say about how the market (and its participants) actually operates (and invests)? (descriptive)
Both revolve around the risk/return relationship and interact: information about how markets work influences investment decisions, which influences the market in its turn.

The amount matters (mean and variance) and the relation with other factors (covariance) matters.
The basic pricing equation is p = E[mx], where x = return distribution (magnitude), p = price, E = expectation (which captures and combines the probabilities that different outcomes will happen) and m = the SDF, which captures the relation with other factors and the reward required to bear the risk inherent in x (it indicates how much marginal utility the outcome has; when we like the payoff more, the conditions matter; it captures the premium needed for this specific risk).

The SDF can be derived from the utility function; this gives m_{t+1} = β·u'(c_{t+1})/u'(c_t). The problem with this is determining marginal utility.
In many cases, the SDF is a linear function of a factor (CAPM): m = a + b·f.

That factor f captures when returns in situation A may be more pleasant than the same returns in situation B.

Portfolio theory (Risk & return: theory – empirics)
Uses assumption A1 and A2, and more:
Investors:
A3. Agents maximize utility, and do so for 1 period. (Rationality: agents are capable of finding the very best solution to their problem, and are willing to do so).
A4. Utility is a function of expected return and variance (and nothing else).
Market conditions:
A5. No distortion from costs, transaction fees, inflation or taxes. If trading has costs, the optimum shifts (another allocation becomes optimal as fees eat part of the return): investing a small amount in an asset becomes less attractive, so costs favor investing in fewer assets (assuming fixed fees). Inflation and taxes tend to disrupt the choice between consumption now and investments (for consumption later).
A6. All information is available at no cost.
A7. All investments are completely divisible.
A4 & A5: agents minimize variance at a given return or maximize return at a given variance, if A5 & A7 hold both given the same outcome (the efficient set). The degree of risk aversion determines which efficient portfolio will be chosen.
The return on a portfolio is given by r_p = w'r, its risk is given by σ_p² = w'Vw.
Optimal weights: minimize w'Vw subject to w'μ = r_required and w'ι = 1, which gives w* = V^-1(λμ + γι), with the Lagrange multipliers λ and γ set by the two constraints.

This implies that the optimal portfolio weights are a function of the required return (how much return you want/how much risk you are willing to bear), the relation between the assets (how much you can gain through diversification, captured by V^-1) and the average returns of the assets (μ): w* = f(r_required, V^-1, μ).
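A minimal numerical sketch of this optimisation (hypothetical expected returns and covariance matrix; numpy only), solving min w'Vw subject to w'μ = r_required and w'ι = 1 via the closed form above:

```python
import numpy as np

# Hypothetical inputs: expected returns and covariance matrix of 3 assets
mu = np.array([0.06, 0.09, 0.12])            # expected returns
V = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])           # covariance matrix
iota = np.ones(len(mu))
r_req = 0.10                                 # required portfolio return

# Closed-form solution of: min w'Vw  s.t.  w'mu = r_req and w'iota = 1
Vinv = np.linalg.inv(V)
A = np.array([[mu @ Vinv @ mu,   mu @ Vinv @ iota],
              [iota @ Vinv @ mu, iota @ Vinv @ iota]])
lam, gam = np.linalg.solve(A, np.array([r_req, 1.0]))   # Lagrange multipliers
w = Vinv @ (lam * mu + gam * iota)                      # optimal weights

print("weights:", w.round(4))
print("return :", w @ mu, " variance:", w @ V @ w)
```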

The bullet
The bullet is a collection of optimal portfolios, which yield minimum variance for a given return. Adding more assets will shift the bullet outwards (more diversification). The optimal portfolio will contain a lot of assets (in reality this is countered by transaction costs). The bullet can be replicated with 2 portfolios that are on the efficient frontier (portfolio separation).
Diversification: the first 10 stocks lower the portfolio volatility significantly, 11-100 the effect decreases, >100 close to no diversification effect by adding a stock.
Mean returns and correlations affect the bullet (maintained assumptions, especially A5-A7). Transactions costs, information costs and indivisibility make the problem non-smooth. Optimization techniques fail and some participants could have arbitrage opportunities (in this case the bullet is irrelevant).
To arrive at the optimal portfolio requires you to know the preference structure (utility curves/functions).

The risk free object
The existence of a risk-free object will generally result in portfolios with higher utility, especially for risk averse agents. Incorporating the risk-free object mathematically means we drop the weight restriction (the risky weights no longer need to sum to 1) instead of adding the risk-free object to the formula: E(r_p) = r_f + w'(μ − r_f·ι) and σ_p² = w'Vw.
The straight line created in the figure (with σ instead of variance on the risk axis) is the Capital Market Line in the CAPM.
Portfolio separation still holds but now between 2 special assets, the risk-free object and the tangency portfolio. This also implies that the restriction that all portfolio weights must sum up to 1 (w'ι = 1) no longer has to hold. Each investor can (and will) maximize his utility by combining risky and risk-free assets. If all agents have the same expectations (they agree on the parameters: expected returns and covariance matrix) and A5-A7 hold, they will all have the same tangency portfolio.
The next logical step is the equilibrium model: If everyone needs the same portfolio for their optimum and if all assets are to be held (returns get adjusted if they are so low no-one wants the assets), then the tangency portfolio must contain all assets - it must in fact be the market portfolio.
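A sketch of the tangency portfolio under these assumptions (same hypothetical μ and V as in the earlier sketch, plus an assumed risk-free rate); the weights are proportional to V^-1(μ − r_f·ι):

```python
import numpy as np

mu = np.array([0.06, 0.09, 0.12])            # hypothetical expected returns
V = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
rf = 0.03                                    # assumed risk-free rate
iota = np.ones(len(mu))

# Tangency portfolio: proportional to V^-1 (mu - rf*iota), rescaled to sum to 1
w_unscaled = np.linalg.solve(V, mu - rf * iota)
w_tan = w_unscaled / w_unscaled.sum()

print("tangency weights:", w_tan.round(4))
print("E[r_tan] =", w_tan @ mu, " sigma_tan =", np.sqrt(w_tan @ V @ w_tan))
```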

Restrictions
In reality investors face problems. There are limits on shortselling and it is not easy to get a bank loan for it, but such restrictions limit the possibilities in portfolio optimization, which depends on long and short positions. To obtain returns below that of the asset with the lowest return or above that of the asset with the highest return, one must sell short risky assets or combine with the risk-free object. If there is no risk-free object, one can only get returns between the minimum and the maximum of the risky assets. If there is a risk-free object, returns between r_f and the maximum return are possible; higher than that requires shortselling the risk-free object. (No shortselling is easy to incorporate mathematically: w_i ≥ 0 for all i.)
The economic purpose of shortselling is to obtain leverage, to short stocks with a low reward for their risk, or to create diversification benefits when only positive correlations exist. Technically: the inverse of V will not be completely positive; some assets have a positive correlation while their characteristics are put to best use with a negative one; negative portfolio weights arrange that.
Restrictions on portfolio weights are used to limit exposure to specific sectors, but they limit diversification and can lead to an increase in variance instead of the intended decrease in risk. Implicitly this says there is more to risk than just variance.
No infinite divisibility results in an almost unsolvable problem (many local optima). One could solve for every local optimum, but as the number of assets increases the number of such points explodes and the problem becomes unsolvable. Furthermore, optimization might result in a weight of, for example, 0.734; with no infinite divisibility this has to become 0 or 1, which in both cases gives a worse risk/return tradeoff. (The problem is no longer differentiable.)

CAPM
The CAPM is strictly speaking descriptive, but it is based on such a wealth of assumptions (mostly from normative portfolio theory) that its descriptive qualities are often questionable (low).
The CAPM builds on portfolio theory but goes much further. It is an equilibrium model, describing the entire market. It is more descriptive in nature, as it does not work well enough to warrant using it in the normative way.
Extra assumptions made by the CAPM:
A8. All investors have the same expectations regarding returns and covariances. (Needed to arrive at the same bullet for everyone, and thus the same tangency portfolio for everyone).
A9. All investors can lend and borrow at the risk free rate.
A10. Asset markets are characterized by perfect competition: no-one is big enough to have an effect on asset prices (everyone is a price taker). If this assumption fails, game theory comes into play. This assumption is the least likely to get dropped, because with it returns and prices are given. Dropping it results in a 'chicken-or-egg' problem (everyone's decision will influence prices and everyone else's decisions).
The CAPM does not assume all investors have the same (or similar) preferences; they differ in their utility curves. It just needs them to follow the MV-criterion.
If everyone needs the same portfolio for their optimum and if all assets are to be held (returns get adjusted if they are so low no-one wants the assets), then the tangency portfolio must contain all assets: it must be the market portfolio.

Characteristics of the tangency portfolio
The tangency portfolio is the only efficient portfolio which has zero investment in the risk free object.
w_T = V^-1(μ − r_f·ι) / (ι'V^-1(μ − r_f·ι)),
and for any asset i: E(r_i) − r_f = β_iT·(E(r_T) − r_f), with β_iT = cov(r_i, r_T)/σ_T².

The relation of the return of any asset with the tangency portfolio (and in equilibrium, the market portfolio) makes the importance of diversification even more clear: diversifiable risk is not priced because you can get rid of it.

Security Market Line: E(r_i) = r_f + β_i·(E(r_m) − r_f), where E(r_m) − r_f is the market risk premium.
This formula applies to all assets (efficient or inefficient). It is an equilibrium model (CAPM), so an asset that yields a return too low for its risk will be sold short until its return rises (pay less for the same payoff = higher return).

CAPM as factor model
The CAPM can be written as a one-factor SDF: m = a + b·r_m, with b &lt; 0.

In the CAPM, one invests in a combination of the risk-free asset and the tangency portfolio. Since all assets are held (prices adjust if they are not), the tangency portfolio is the market portfolio, containing every asset there is (this leads to the SDF in the CAPM). Hence, when the market return changes, consumption changes. Marginal utility of consumption was the basis of the SDF. The SDF in the CAPM can be obtained in various ways, all relying on the assumption that people care about mean and variance only.
The market is a factor: there is a perfect negative correlation between marginal utility and the market portfolio. Perfect, because there is nothing you do not invest in. Negative, since the extra utility from each further unit of return (=consumption) decreases as a consequence of risk aversion (marginal utility is decreasing).

Fama & McBeth
Testable implications: the relation between E(r) and β is linear, no other factor than β should explain E(r), and the market risk premium λ_m > 0, because otherwise there would be no return for risk.
They added non-linear terms (Taylor expansions, to test the linearity) and a factor mimicking idiosyncratic risk.
When checking the estimates, not only do you want λ_m > 0, but also intercept = r_f and slope = the market risk premium.
The β's were calculated as β_i = cov(r_i, r_m)/var(r_m). (Errors-in-variables problem and β's are not constant over time; they used portfolios to solve the second problem.)
Estimating risk premia: other factors.
Results: both the Taylor expansion and the factor mimicking idiosyncratic risk were insignificant.
Roll’s critique: Market portfolio is needed to estimate the β and the market portfolio contains every asset. This is utterly impossible, because that data is not available. In reality we have to use a proxy for the market which leads to the testing of two hypotheses: the CAPM is right & the choice of the proxy for the market is right. Some comfort comes from the diminishing benefits of diversification by adding extra assets. But we are likely to forget complete asset classes, especially when the market portfolio only consists of stocks.
Why do we want to test the CAPM? The CAPM is descriptive, how good is this description? Furthermore, violations of the CAPM can point us in interesting directions: return with no risk (golden opportunity for investors), risk/return relations that are different from the CAPM point to gaps in our theoretical understanding.
CAPM as a regression: r_{i,t} − r_{f,t} = α_i + β_i·(r_{m,t} − r_{f,t}) + ε_{i,t}.
What is an anomaly?
Anomalies are on the descriptive route (what is happening out there). An anomaly is indeed anomalous when there is a structural, replicable pattern, that cannot be explained in the framework of existing (mainstream) financial theory, but can (potentially) be economically. This definition is quite loose, but it contains some useful elements:
- There must be a pattern, so random events do not qualify.
- The pattern must be accompanied by economic rationale. Sometimes this can get pretty arcane, for example the fact that companies starting with an A receive closer analyst attention than companies starting with a Z.
- The pattern must go against accepted financial theory. So industries with structural higher returns, but also a higher β are not anomalous.
In fact, an anomaly is only empirically relevant when money can be made from it. If so, the anomaly should reflect a:
- Yet unknown aspect of risk (because of ‘no risk no return’), or
- a market imperfection that prevents arbitrage from eliminating free lunches.
In reality it is often the case that trading restrictions or trading fees eliminate the possibility of making money from possible anomalies. If no money can be made from it the anomaly is not anomalous after all.
Over time, several anomalies have been found in the stock market; some of these are highly dependent on the sample period or market you choose (the January effect disappeared over time). Factors that are relevant in some datasets are irrelevant in others, yet they tend to be correlated with factors that do matter. Also, taking all factors into account will give you a good idea of the interactions between them but a poor view of the total effect. With anomalies we tend to first establish the existence and then check for interfering factors.

Size & value anomalies
According to the size anomaly, small firms earn, after correction for their market risk, somewhat higher returns than large firms. The magnitude of this size anomaly is around 2%-4% per annum, mostly in the 1960s and 1970s. It is quite possible that after the discovery of the anomaly, investors traded on it and hence the magnitude decreased to insignificant levels (statistically and/or economically).
There are 2 economic motivations behind the size anomaly:
- Small firms receive less analyst attention, so their prices update less often. This carries a risk for which compensation would be required.
- Small firms are not much traded, which also means that their prices update less often and that they are less liquid (and that they carry higher transaction costs). Again, this carries a risk for which compensation would be required.
According to the value anomaly, value stocks (stocks with a high B/M-ratio) earn higher returns than growth stocks (stocks with a low B/M-ratio), even after corrections have been made for their market risk. The magnitude of this anomaly is around 4%-6% per annum.
The economic motivation behind the value anomaly starts from the fact that ultimately asset prices are determined by (expected) payouts. If market value is close to book value (high B/M), the firm appears to be in dire shape (no growth opportunities that have any value), and is therefore more risky. This risk raises the required return. (However, it must be said that this explanation is just one of the possible explanations behind the value anomaly.) The problem with the financial distress hypothesis is that the factor associated with the value anomaly has low correlations with measures of distress. However, Lettau & Ludvigson (2001) suggest the value factor may work primarily in times that are already bad.
The problem with the size and the value factors (the F&F factors) is not whether they are relevant, but what they actually represent. This is because the economic rationale behind the F&F factors is still unclear and we tend to have multiple (not watertight) explanations where 1 single solution is still missing. In a theoretical sense, we have problems deciding where to look for improvements. Is assumption A10 violated, which corresponds to the illiquidity premium? Is our understanding of risk (and thus our SDF) flawed (A4: utility is a function of mean and variance, and nothing else)?

Importing the F&F factors into the SML gives:
, where: = compensation for market risk (according to theory: ) = compensation for size risk Problem: No theoretical numbers to compare results to. = compensation for value risk (magnitude not accounted for by theory)
Including the F&F factors into the SML means you automatically incorporate it into the SDF too:
SDF: , where is now a vector of , and .

Momentum anomaly
The momentum anomaly states that, based on middle-long term autocorrelation, assets (stocks) that have performed well in the recent past (say 3-12 months) will outperform ‘losers’ for another year. The magnitude of this anomaly is mostly 4%-6% per annum but can be as high as 2% per month. This outperformance can be obtained by sorting past ‘winners’ and ‘losers’ and then (for example) buy the 20% best performing stocks and finance this by short selling the 20% worst performing stocks. Obviously this requires careful selection and rebalancing because at some time winners stop winning and losers stop losing.
The momentum anomaly is not incorporated into the SML and SDF because, from an economic point of view, it has nothing to do with risk (especially the 'when do we get returns' part, because it is based on the past, which is irrelevant). There is no economic argument to support the idea that returns are less desirable just because the stock value has increased over the past period.
Trading on the momentum anomaly seems easy, why doesn’t it disappear rapidly? 2 possible explanations:
- Trading costs. The short positions are costly to obtain and maintain. There are also transaction costs that can make it expensive to replicate the strategy. Taking this into account there might not be an anomaly after all.
- Illiquidity effects. Especially in smaller stocks, shortselling might be hampered by illiquidity. Illiquid stocks may fall a lot further if you try to sell them in a decreasing market.
Both explanations are based on market imperfections (violations of our assumptions). The momentum anomaly is rather robust (if you correct for risk with the SDF for the other anomalies or macroeconomic factors, it is still there). Furthermore, illiquidity effects are hard to measure, so testing for this is also hard to do.

More anomalies
- Calendar anomalies: January effect (reasons: fiscal effects, bonuses, balance sheet clearing; selling the risky stocks)
- Reversed momentum: after longer timeframes the momentum anomaly reverses itself (losers become winners).
- Macro-economic factors (not captured by the CAPM): oil prices, inflation, interest rates.
The macro-economic factors are often motivated by the fact that our market proxy is not perfect and hence the relation between consumption and marginal utility of consumption is not as perfect as the CAPM suggests. Resolving this can be done in different ways: use commodity prices, e.g. oil prices (there might be some truth in here, but the factor is probably a proxy for something itself) or use consumption information (this can be plugged into the SDF). The problem for both is that the frequency of the available data is too low.

Testing for anomalies: sorted portfolios (Fama & MacBeth)
A convenient method of magnifying the effects of an anomaly is to use sorted portfolios (not single stocks, because of unstable β's; taking portfolios averages out the β's). When you want to test the size anomaly for example, you rank all assets on size, construct portfolios by size (for example 10 deciles going from largest to smallest), buy the decile with the smallest firms and finance this by short selling the decile with the largest firms. Taking this into a regression gives the following:
1. A time-series regression: r_{i,t} = α_i + β_i·f_t + ε_{i,t} (using normal returns) or r_{i,t} − r_{f,t} = α_i + β_i·f_t + ε_{i,t} (using excess returns), where β_i = the factor β (the factor in this case is size) and f_t = the return on the portfolio sorted on that factor (here the small-minus-big portfolio).
2. A cross-sectional regression: E(r_i) = λ_0 + λ_m·β_{i,m} + λ_f·β_{i,f} + η_i (with normal or excess returns), where λ_m = the risk premium on the market, the β's are the estimated β's from regression 1, and λ_f = the coefficient on the factor β's; λ_f ≠ 0 when the anomaly exists.
However, a problem with this way of testing is that the independent variables used in regression 2 (the estimated β's) are estimated values and therefore carry uncertainty. This leaves regression 2 with an errors-in-variables problem.
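A sketch of the two-pass procedure on simulated data (the factor, betas and return-generating process are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 240, 10                                 # months and number of sorted portfolios (simulated)
f = rng.normal(0.005, 0.04, T)                 # simulated factor returns (e.g. a size factor)
true_beta = np.linspace(0.5, 1.5, N)
r = 0.002 + np.outer(f, true_beta) + rng.normal(0, 0.02, (T, N))   # simulated portfolio returns

# Pass 1: time-series regression per portfolio to estimate alpha_i and beta_i
X = np.column_stack([np.ones(T), f])
coef = np.linalg.lstsq(X, r, rcond=None)[0]    # shape (2, N): row 0 = alphas, row 1 = betas
betas = coef[1]

# Pass 2: cross-sectional regression of returns on the estimated betas, period by period,
# then average the estimated premia over time (Fama-MacBeth standard errors)
Z = np.column_stack([np.ones(N), betas])
lams = np.linalg.lstsq(Z, r.T, rcond=None)[0]  # shape (2, T): lambda_0,t and lambda_f,t
lam_mean = lams.mean(axis=1)
lam_se = lams.std(axis=1, ddof=1) / np.sqrt(T)
print("lambda_0, lambda_f:", lam_mean.round(4), " t-stats:", (lam_mean / lam_se).round(2))
# Caveat: the betas in pass 2 are estimates, so there is an errors-in-variables problem.
```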

Testing for anomalies: GRS test
The big advantage of the Gibbons-Ross-Shanken (GRS) test compared to Fama & MacBeth is that GRS only needs the time-series regression (regression 1). In this test the emphasis is on (the joint significance of) the α's, which are also called the pricing errors (obviously only when using excess returns). The regression equation, using 2 factors, looks as follows:
r_{i,t} − r_{f,t} = α_i + β_{i,1}·f_{1,t} + β_{i,2}·f_{2,t} + ε_{i,t}
In theory, the α's of all (for example) 10 regressions should be equal to zero. In practice, randomness will cause some slight deviations. GRS showed that if we take the sum of all squared pricing errors (α'Σ^-1·α) and weigh/standardize it properly, that sum follows an F-distribution. This means that we can test whether this sum of squared pricing errors significantly differs from some threshold value and conclude whether or not we have found an anomaly. This all looks easy, but the difficulty lies in weighing/standardizing the squared errors:
GRS = [(T − N − K)/N] · [α'Σ^-1·α] / [1 + μ_f'Ω^-1·μ_f] ~ F(N, T − N − K),
where T = number of observations in the time-series, N = number of cross-sections (assets/portfolios), K = number of factors, μ_f = the mean of the factors f, Ω = covariance matrix of the factors & Σ = covariance matrix of the residuals.
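A sketch of the GRS statistic on simulated excess returns (one common finite-sample form; the data-generating process is assumed for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, N, K = 240, 10, 2                            # observations, portfolios, factors (simulated)
F = rng.normal([0.005, 0.002], [0.04, 0.03], (T, K))      # simulated factor returns
B = rng.uniform(0.3, 1.5, (N, K))
R = F @ B.T + rng.normal(0, 0.02, (T, N))       # simulated excess returns (true alphas = 0)

# Time-series regressions: R_it = alpha_i + beta_i' f_t + eps_it
X = np.column_stack([np.ones(T), F])
coef = np.linalg.lstsq(X, R, rcond=None)[0]
alpha = coef[0]
resid = R - X @ coef
Sigma = resid.T @ resid / T                     # residual covariance matrix (MLE)
mu_f = F.mean(axis=0)
Omega = (F - mu_f).T @ (F - mu_f) / T           # factor covariance matrix (MLE)

# GRS statistic ~ F(N, T - N - K) under the null that all alphas are zero
grs = ((T - N - K) / N) * (alpha @ np.linalg.solve(Sigma, alpha)) \
      / (1 + mu_f @ np.linalg.solve(Omega, mu_f))
pval = 1 - stats.f.cdf(grs, N, T - N - K)
print("GRS =", round(grs, 3), " p-value =", round(pval, 3))
```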

Empirical return distributions
There is a direct link between portfolio weights and return distributions; when doing research there are several choices to make concerning the data. Because of the fact that portfolio theory uses a 1-period model (A3), the frequency of the data also implicitly sets the investment horizon considered.
In practice, investment horizons are longer or shorter than the model assumes. It makes no sense to optimize for a month and then still have the other 11 months left in which the money has to be invested as well. The implicitly set horizon would not be much of a problem if return distributions were the same regardless of the frequency one uses. Sadly, means, variances, correlations and higher moments (skewness and kurtosis) all change with the frequency: the distribution of daily data can be entirely different from the distribution of weekly data. Actually, this is to be expected because returns are multiplicative (the product of normally distributed returns is itself not normally distributed).
Solutions: multiperiod model (but mathematics are too difficult) & high frequency (monthly returns seem reasonable).
Also when working with lognormal distributions (meaning ln(1+r) is normally distributed) we see that reality does not cooperate. (A true solution does not exist, but the problem is dwarfed by 'the problem of expectations'.)

Role of normal distributions
The normal distribution plays an important role in theory because of the fact that normal distributed returns are one way to frame the investment problem as a balance between mean (return) and variance (risk). The assumption of normally distributed returns is used in portfolio theory, CAPM and the vast majority of scientific and empirical literature.
Since risk has to express itself in the return distribution, and a normal distribution can be reconstructed solely from its mean and variance, a normal distribution directly justifies the MV-framework. So, only the mean and the variance can influence the risk-return relationship, which is exactly the same as the construction of the normal distribution. Variance is the only risk factor, which makes risk nothing more than probabilities on returns different from the average. (The other way to arrive at the MV-framework is assumptions on the utility function, mainly quadratic utility).

Testing for normality
Testing for normality is done for 2 reasons:
- Does the problem exist?
- Is it worthwhile to look at the ‘straightforward’ expansions which might solve the issue?
To judge if a distribution is normal one has to look at the skewness and the kurtosis (higher moments).

Skewness is a measure of asymmetry; a normal distribution is symmetric around its mean. If the left tail (the lower/negative returns) is more pronounced than the right tail (the higher/positive returns), the distribution has negative skewness; if vice versa, positive skewness. Skewness has everything to do with risk because, for example, more negative returns relative to positive returns implies more downside risk (more chance of negative returns & less chance of positive returns).
Formula: skewness = E[(x − μ)³]/σ³.
Kurtosis is the degree of peakedness of a distribution, meaning that if the kurtosis is higher than 3 (the value for a normal distribution) the tails are fatter, giving much higher probabilities to extreme returns. This is typical of financial data, especially at higher frequencies.
Formula: kurtosis = E[(x − μ)⁴]/σ⁴.
The standard test for normality combines the skewness and the kurtosis into the Jarque-Bera test. This test has a χ² distribution with 2 degrees of freedom, so one can use standard testing procedure here.
Formula: JB = (T/6)·[skewness² + (kurtosis − 3)²/4] ~ χ²(2).
All moments of the distribution above the second moment should have specific values under a normal distribution (skewness = 0 and kurtosis = 3).
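A small sketch computing skewness, kurtosis and the Jarque-Bera statistic on simulated fat-tailed returns:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r = rng.standard_t(df=4, size=1000) * 0.01      # simulated fat-tailed daily returns

T = len(r)
z = (r - r.mean()) / r.std(ddof=0)
skew = np.mean(z ** 3)
kurt = np.mean(z ** 4)
jb = T / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)  # Jarque-Bera statistic, chi2 with 2 df
pval = 1 - stats.chi2.cdf(jb, df=2)
print(f"skewness = {skew:.2f}  kurtosis = {kurt:.2f}  JB = {jb:.1f}  p = {pval:.4f}")
```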

Incorporating higher moments
As mentioned, non-normality can be 'solved' using higher moments (using Taylor expansions). Taking higher moments into the model means that one also has to adjust the SDF. It can, for example, look like this:
m = b_0 + b_1·r_m + b_2·r_m² + b_3·r_m³,
where the successive terms correspond to the mean, variance, skewness and kurtosis (co-variance, co-skewness and co-kurtosis with the factor).
The major downside of this approach lies in the fact that risk is not only based on variance anymore. We now need to balance return with variance, skewness and kurtosis, and we do not know the relative importance of these factors in advance.
This is the reason why this approach works easily in explaining returns, where we can employ several factors, but is more troublesome in the context of optimisation. Also, with the SDF we can determine if a portfolio is ex post efficient given our risk attitudes (relative importance), but an expression for the efficient set is more troublesome (we need to balance return with variance, skewness and kurtosis). Mathematically, we can expand the objective function, but we also need information on the importance of skewness and kurtosis relative to mean, variance and each other. Simply minimizing variance does not work anymore.

No rounding up the usual suspects but incorporate personal views
Until now we assumed that historical means and (co)variances (the usual suspects) offer a good description of the asset market in the future and that every agent is equally well informed. Both are wrong in practice. Therefore it makes sense to incorporate personal views (superior knowledge) into the portfolio decision (active portfolio management). Testing this against a portfolio created from historical data alone shows whether personal views are valuable in the portfolio decision. (The strong EMH: personal views do not matter.)

The Black-Litterman model gives weight to both expectations and historical data. It roughly works as follows:
- Get the historical or CAPM predicted returns as a starting point,
- Incorporate personal views and their uncertainty,
- Contrast these with historical data/baseline estimates and get a weighted average,
- Use these to perform classical portfolio optimisation (a numerical sketch follows below).
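A minimal sketch of these steps with hypothetical numbers, using the standard Black-Litterman 'master formula' for the posterior expected returns (the asset classes, the view and τ are illustrative assumptions):

```python
import numpy as np

# Hypothetical example with 3 asset classes (say stocks, bonds, commodities)
Sigma = np.array([[0.0400, 0.0060, 0.0080],
                  [0.0060, 0.0100, 0.0020],
                  [0.0080, 0.0020, 0.0250]])    # assumed covariance matrix
pi = np.array([0.07, 0.03, 0.05])               # baseline (historical or CAPM-implied) returns
tau = 0.05                                      # assumed uncertainty scaling of the baseline

# One personal view: stocks will outperform bonds by 5%, held with some uncertainty
P = np.array([[1.0, -1.0, 0.0]])                # the view picks out stocks minus bonds
Q = np.array([0.05])
Omega = np.array([[0.001]])                     # variance of the view (confidence)

# Posterior expected returns: precision-weighted average of the baseline and the views
A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ Q
mu_bl = np.linalg.solve(A, b)
print("posterior expected returns:", mu_bl.round(4))
# These posterior returns (together with Sigma) then feed classical portfolio optimisation.
```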
This is a relatively easy model to apply with a few assets. However, the model is of limited use at the individual stock level where there are (for example) 500 assets, so 500 means and roughly 125,000 (co)variances. As a consequence, the model works well in only 2 situations:
- When comparing classes of assets. For example stocks, bonds, commodities, currencies.
- When focusing on a limited number of stocks. For example, focusing on a single industry or a single country.
However, this leads to the problem of hierarchical portfolio management: when every analyst focuses on another subset of the market (the combination of their results is never optimal).

Excess returns
The next stage is to judge if your strategy did indeed give a superior risk return tradeoff. Therefore you have to check whether the realized return didn’t come from taking more risk. The return exceeding the return required for the level of risk is called the excess return, or alpha, or tracking error.

If α > 0 there is an outperformance of the market (corrected for the risk).
Tracking error , where is the return of a benchmark.
Actually, one can use these techniques to judge in advance whether a particular stock is worth it or not: one can translate target prices into α's, which can be combined with the Black-Litterman approach as well.

The returns can be distinguished into 3 types:
- Holding period return. The return over the period under consideration: r_hp = (1 + r_1)(1 + r_2)...(1 + r_T) − 1.
- Arithmetic return. The average return per period t: r_A = (1/T)·Σ r_t.
- Geometric return. Time-weighted average return per period t: r_G = [(1 + r_1)(1 + r_2)...(1 + r_T)]^(1/T) − 1.
If we want to compare the performance of portfolios over different time periods, the geometric returns are normally the better choice, as they track what actually happened ((1 + r_G)^T reproduces the holding period return).
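A quick sketch of the three return definitions on a hypothetical return series:

```python
import numpy as np

r = np.array([0.10, -0.05, 0.08, 0.02])         # hypothetical periodic returns

holding_period = np.prod(1 + r) - 1             # return over the whole period
arithmetic = r.mean()                           # average return per period
geometric = np.prod(1 + r) ** (1 / len(r)) - 1  # time-weighted average return per period

print(f"holding period {holding_period:.4f}  arithmetic {arithmetic:.4f}  geometric {geometric:.4f}")
# (1 + geometric)**len(r) - 1 reproduces the holding period return; the arithmetic mean does not.
```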

The actual performance measure should correct for risk too. The following are risk-adjusted performance measures (all based on the MV framework):
- Sharpe ratio. Measures reward (average excess return) per unit of total risk: SR = (E(r_p) − r_f)/σ_p.
- Treynor ratio. Measures reward per unit of systematic risk: TR = (E(r_p) − r_f)/β_p.
- Jensen's alpha. Measures returns above those predicted by the CAPM: α_p = E(r_p) − [r_f + β_p·(E(r_m) − r_f)].
- Information ratio. Measures abnormal returns per unit of diversifiable risk: IR = α_p/σ(ε_p), where α_p is Jensen's alpha or the average tracking error, and σ(ε_p) is the standard deviation of the residuals (tracking error volatility).
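A sketch of these measures on simulated portfolio and market returns (the return process and the constant risk-free rate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 60
r_m = rng.normal(0.008, 0.045, T)                 # simulated market returns
r_p = 0.001 + 1.1 * r_m + rng.normal(0, 0.02, T)  # simulated portfolio returns
r_f = 0.002                                       # assumed constant risk-free rate

ex_p, ex_m = r_p - r_f, r_m - r_f
beta = np.cov(r_p, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)
alpha = ex_p.mean() - beta * ex_m.mean()          # Jensen's alpha
resid = ex_p - alpha - beta * ex_m                # diversifiable (residual) part

sharpe = ex_p.mean() / r_p.std(ddof=1)            # reward per unit of total risk
treynor = ex_p.mean() / beta                      # reward per unit of systematic risk
info_ratio = alpha / resid.std(ddof=1)            # abnormal return per unit of residual risk

print(f"beta={beta:.2f}  alpha={alpha:.4f}  Sharpe={sharpe:.3f}  "
      f"Treynor={treynor:.4f}  IR={info_ratio:.3f}")
```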

Somewhat more flexible is the M² measure, which allows us to make direct comparisons of returns: M² = [r_f + (σ_B/σ_X)·(E(r_X) − r_f)] − E(r_B).
The rationale behind the M² measure is to (de)lever a portfolio so that its variance is equal to that of the benchmark and then compare returns. The purpose is to eliminate the difference in risk, so that the comparison is fair. Merely comparing returns does not work because the risk is different.
The first term stands for the return on a ‘managed portfolio’ (X’), which is a combination of the real evaluated portfolio X and the risk-free object. The managed portfolio (X’) is created in such a way that its standard deviation is the same as the standard deviation of the benchmark portfolio (making direct comparisons of returns possible). Disadvantage of this way of measuring is the fact that you have to ‘leave’ your level of risk (which might be the desirable level) to compare portfolio X to the benchmark portfolio (which can for example be the tangency/market portfolio).

A second way of making returns directly comparable is to use the risk-free object in such a way that the benchmark portfolio has the same standard deviation as the real evaluated portfolio X. This way offers a closer analogy to Modern Portfolio Theory (MPT). The matched portfolio can be taken to be the optimal portfolio from MPT (assuming MPT holds).
An advantage of this second way is that it is a natural comparison of active and passive investment. When the real evaluated portfolio X has the risk (std. dev.) you find desirable, you can easily check whether or not you are doing better (or worse) than the benchmark portfolio.
In a traditional way, the M² measure is linked to the Sharpe-ratio.

Hedge funds
Hedge funds are 'exotic' investment structures where measuring the excess returns is a difficult job. Hedge funds often have strategies which are 'market neutral', meaning that they have a low or zero correlation with r_m, while making returns well above r_f (often by taking highly leveraged positions). A low or zero correlation with the market does not make the strategy less risky; they are plays on arbitrage, default risks, convertibility, currency relations, etc. The CAPM assumes this risk is diversifiable and hence not priced, but it is certainly there. So we need to look for risk measures that are appropriate, especially as returns from these strategies are extremely non-normal. Hedge funds are often secretive regarding their holdings (legal reasons, competition). But even if we have the proper returns, the risk adjustment may have to include nonlinearities and alternative risk measures (upside vs. downside β's, etc.).

Hierarchical portfolio management
Finally, we have to distinguish between the performance evaluation of a complete portfolio and the performance evaluation of a part of the invested capital. Each manager in (for example) a company has his own field of expertise in which he is likely to invest. However, if each manager builds his portfolio according to the data in his own remit, the total portfolio of the company might in fact not be optimal. This is the problem of hierarchical portfolio management.
The combination of portfolio A and portfolio B lies inside the 'total' bullet, which is the bullet obtained by using all assets at the same time. Because the combination is inside this bullet, AB is not an efficient portfolio, whereas it could have been efficient when optimizing over all assets at the same time.
Solutions: assume each sub-portfolio is well diversified, then treat them as separate assets and optimize again at this level. Or recognize that the sub-portfolios may not be well diversified and coordinate among the different sector managers; this means looking at total risk again.

Marginal utility and risk
The implied risk attitude (descriptive) and the preferences/marginal utility (normative) are basically the same. Until now, we devoted little attention to preferences. The assumption that preferences are only a trade-off between mean and variance is an oversimplification of reality, and in fact we have little insight into the true nature of risk. One crucial point, however, is the relation between risk and marginal utility at t+1:
m_{t+1} = β·u'(c_{t+1})/u'(c_t)
The time preference β is constant and current consumption is known, so marginal utility at t+1 determines m_{t+1}, and therefore risk is directly related to future marginal utility of consumption. In the optimum an extra unit of return is (almost) worth the extra unit of risk. We compare incremental changes, so it is logical that marginal utility is the driving factor.
In the CAPM the SDF is: m = a + b·r_m, with b &lt; 0. Again, we see the importance of the assumption that everyone invests in everything, otherwise the relation between r_m and either wealth or consumption would be weak.
But it might be unrealistic to judge risk solely as a relation to the market portfolio. In fact, all investors have some sort of poorly diversified holdings (house, human capital, business ownership), which means their utility is influenced by other factors than the market.
Still, even in a more general setting we can say something about the SDF, which is based on marginal utility which must make economic sense. This will lead us to Stochastic Dominance.

Stochastic dominance
The orders of stochastic dominance are a framework that categorizes utility functions according to the assumptions they require. Rather than finding factors, we allow the SDF to be a set of numbers and see which restrictions should be imposed on that vector for it to make economic sense.

The first economic assumption we make is that of nonsatiation (one prefers more over less), which means that marginal utility (and the SDF) should be positive at all times. This corresponds to the assumptions of first order stochastic dominance (FSD); if a portfolio is better for all investors who adhere to nonsatiation, it is said to FSD dominate the alternatives. This is the case when an asset has a better payoff in all states. Similarly, a portfolio for which no such dominating asset/portfolio exists is FSD efficient, and belongs to the FSD efficient set (no better risk/return tradeoff).
The second assumption is that of risk aversion: one prefers a certain alternative over a variable one with the same expectation. Risk aversion requires marginal utility, and hence the SDF, to be decreasing (concave utility). The assumption of risk aversion brings us to second order stochastic dominance (SSD). SSD merely adjusts FSD by replacing the nonsatiable investor with the nonsatiable and risk averse investor, so SSD needs FSD. If an asset is FSD efficient it does not mean it is SSD efficient too, but the opposite is true: if an asset is SSD efficient it is FSD efficient as well.

SD versus CAPM
The SD approach is far less restrictive than the MV approach. In the CAPM (m = a + b·r_m) risk aversion is present as long as b &lt; 0. But if the market portfolio is not the only relevant factor, we still need to concern ourselves with the question whether all elements of the vector b (in m = a + b'f) are negative. Nonsatiation can be violated in the linear CAPM; a downward-sloping straight line will at some point become negative, which implies investors prefer less over more.
Also, the CAPM, when constructed from preferences, implies quadratic utility. This is because U' is a straight line, implying U to be a parabolic/quadratic function. This can only be valid over a limited range, namely the part of the function which has a positive slope. Furthermore, when returns are normally distributed, the CAPM and SD give the same result, but in reality this assumption does not hold.
The MV-framework (CAPM) can create another unforgivable error: if an asset has a better return in every state than another asset but also has a greater variance, the MV-framework cannot choose between them, even though the first FSD dominates the second.
The MV-framework is clearly not a good description of reality, but as an approximation it can still be valuable. Tsiang (1972) argues that over a limited range the Taylor theorem is applicable and that the differences between quadratic utility and real utility will be small. Furthermore, FSD dominance is rare in practice.

Beyond SSD and the usefulness of the SSD
A CAPM/MV approximation performs very weakly in explaining extreme returns (for example, just extrapolate the straight line in the graph above), while those can be very interesting. Therefore we turn back to SD. We impose very little structure on the SDF under SD rules, which is fortunate for a descriptive approach: we can see if a vast number of risk/return tradeoffs will capture the market. But perhaps we want to narrow it down a bit more.
A preference for positive skewness can be added. Skewness is related to asymmetries anywhere in the distribution. Suppose one buys fire insurance; one does so because one severely dislikes the asymmetric nature of the risk (a burnt-down house is a big loss, and the situation has no corresponding upside), even though insurance gives a negative expected return (the insurer makes a profit). The same is true for lotteries; the expected return is usually between -30% and -60%, but people still buy tickets because of the attractive upside potential (positive skewness). There is probably a real skewness preference. The corresponding SD rule is third order stochastic dominance (TSD), which says the SDF is positive, decreasing and decreasing at a decreasing rate (U'>0, U''&lt;0, U'''>0).
The fourth order of stochastic dominance is based on kurtosis (fat tails) aversion (U””<0). Beyond kurtosis, economic arguments are difficult to make. Also, the difference between orders becomes smaller as the new restrictions eliminate fewer portfolios (the new restrictions are less restrictive).
It is possible to build descriptive models based on SD principles: let the computer calculate which SDF best fits the data; instead of imposing a parametric risk/return tradeoff we let the data speak. The results are mixed; some anomalies are easier to explain than others (momentum remains a hard one).
Furthermore, when excess returns are used one can construct pricing errors, e_i = E[m·(r_i − r_f)], and minimize the sum of weighted (squared) pricing errors (GMM, asset pricing tests). Post & Versijp (2007): strong risk aversion to big losses (-12% and more) is found in the data, and the CAPM seems to perform badly for large negative returns.
What is not possible with SD is portfolio optimization. Instead of a single utility function or a 2 parameter MV-model we have a whole collection of possible utility functions. Each dataset will potentially give a (completely) different SDF. Of course, if investors are similar enough that their SDF should be alike, those differences should not be a problem, but we cannot prove they are similar enough.
Hence, we cannot construct an optimal portfolio, or even a tangency portfolio. We can only draw normative conclusions if we have a parametric characterization of the SDF (or hard numbers).
For this reason we look at traditional measures of risk and risk aversion that work with a more explicit SDF (at minimum the form of the line should be specified).

Traditional measures of risk
- Constant Relative Risk Aversion (CRRA) utility functions assume that risk aversion stays the same as a percentage of wealth, i.e. an asset with either -10% or -15% return is equally risky regardless of the underlying amount (3 euros or 3 million): −x·U''(x)/U'(x) = constant, x = wealth or payoff.
The x is needed for the relative element; the second derivative enters since the shape of risk aversion is determined at that level (decreasing marginal utility). With this function type you can look at returns instead of wealth.
Example functions: power utility (U(x) = (1/α)·x^α) and log utility (U(x) = ln(x)).

- Constant Absolute Risk Aversion (CARA) utility functions assume that risk aversion towards a fixed amount at stake stays the same regardless of the underlying wealth: a millionaire will find it equally bad to lose 100 euros as a broke student does. This may only work better for big risks. −U''(x)/U'(x) = constant.
Example function: (negative) exponential utility (U(x) = −e^(−bx)). Quadratic utility has increasing ARA.

- Decreasing Absolute Risk Aversion (DARA) utility functions are supported by the economic argument that a rich person will care less about losing a fixed amount than a poor person (they can still have CRRA): −U''(x)/U'(x) is declining in x.
Example: U(x) = ln(x). This setup is reasonably popular.

Lower partial moments
The idea is that returns below a certain threshold constitute risk. The advantage here is that large positive returns do not make the same contribution to risk as large negative returns. Various thresholds can be used and the intensity of riskiness can be altered with the order (n). The most appealing characteristic of LPM is that under certain conditions it can be expanded to an equilibrium model like the CAPM.
If returns are normally distributed, LPM and MV optimisation will give the same results, but if returns are not normally distributed LPM's can still be valid and offer a more promising description of risk.

In LPM investors are risk neutral above τ and risk averse below τ. Returns above τ do not add any risk. If τ = 0 and n = 2, the risk measure is called semi-variance (and if the distribution is symmetric it plays the same role as variance does in the CAPM, but using only the negative returns). This makes most economic sense if one uses excess returns, which is effectively the same as using r_f as your threshold (the mean is the natural alternative).

The M-LPM nests the MV model if the latter is obtained from assumptions on normal returns. The M-LPM problem is as follows: minimize LPM_n(τ) = E[max(τ − w'X, 0)^n] subject to w'E(X) = r_required and w'ι = 1 (X = returns).
This will give us an efficient frontier not unlike that of modern portfolio theory (though it will look different in mean, σ space if that is not the correct risk measure).
A tangency portfolio can be constructed if there is a risk-free object present, and portfolio separation still applies. Actual portfolio optimization is more sophisticated due to the discontinuity of the objective function at τ. Furthermore, the sensitivity to the threshold can be large, and a threshold that is set too low might suggest that risk can be eliminated entirely. Also, the importance of a representative sample increases, and portfolio weights are no longer a linear function of the required return.
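A small sketch of an LPM calculation on simulated returns (here the semi-variance case, τ = 0 and n = 2):

```python
import numpy as np

rng = np.random.default_rng(4)
r = rng.normal(0.006, 0.05, 1000)               # simulated monthly returns

def lpm(returns, tau=0.0, n=2):
    """Lower partial moment of order n around threshold tau."""
    shortfall = np.maximum(tau - returns, 0.0)
    return np.mean(shortfall ** n)

semi_variance = lpm(r, tau=0.0, n=2)            # order 2, threshold 0 (semi-variance style)
variance = r.var(ddof=0)
print(f"semi-variance = {semi_variance:.6f}   variance = {variance:.6f}")
# Large positive returns do not enter the LPM at all; for a distribution symmetric around
# the threshold the order-2 LPM is about half the variance.
```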

Lastly, one can create a β that acts just like the β in the CAPM (instead of covariances one uses co-LPM’s).
In SDF terms, the equilibrium model based on an LPM of order 2 and excess returns (τ = r_f) looks as follows: m = a + b·min(r_m − r_f, 0), with b &lt; 0.
The assumption that the investor is risk neutral above the threshold is doubtful, but as an approximation LPM might be satisfactory, and one can combine different thresholds to create a more realistic description. In fact, any positive (nonsatiation) and decreasing SDF (risk aversion, so at least SSD applies) can be constructed this way (LPM will not violate the nonsatiation assumption by constructing SDFs with negative values). However, the equilibrium model which nests the CAPM cannot be easily obtained this way.

Assets pricing models: not just stocks
One can also invest in bonds, real estate, derivatives, commodities and abroad. Currencies may themselves also qualify as a separate asset class. All these asset classes have their own return distributions and correlations. In theory, this should not make a difference because portfolio theory and the CAPM claim to be valid for all asset classes. The CAPM even requires an investment in all assets/asset classes (tangency portfolio). But in practice, these asset classes have their own factors, which drive the risk/return tradeoff. Apart from the theoretical claims, stocks are popular because they are the easiest to analyze.
The picture becomes even more complex if we realize that various asset classes can be bundled. When a European investor buys a mortgage-backed security in the US he actually buys a combination of: a bond (lends money to a bank), real estate (call option on the collateral), currency (the dollar) and derivatives (mostly short positions, depending on exact terms; for instance early repayment, repurchase, convertibility). So 1 asset can contain multiple others.

Bonds
Bonds are fixed income, and the bond market is mostly bigger and more liquid than the stock market. 3 main characteristics of the bond market (which are at the same time 3 reasons why the bond market is hard to model):
- The upside potential is limited because profits arise from a decrease in the interest rate, but interest rates can only decrease to zero at the lowest. The downside potential is -100%, so the distribution is (negatively) skewed.
- There is a nonlinear relation between risk-factors and returns.
- Coupon payments need to be reinvested. This is also true for stocks (dividends).

Risk factors for stocks are sometimes hard to identify, while for bonds they are obvious (only these 2 factors can change bond prices):
- Interest rate risk. The demand for credit tightens/loosens, affecting interest rates and with this prices of all bonds
- Default risk. The chance that full repayment is not forthcoming has changed.

Again the SDF is useful, because it should be applicable to all assets:
p = E[mx] = E[m]·E[x] + cov(m, x)
The covariance shows that the situation in which you receive the return matters. If the interest rate does not change, or is completely unrelated to the SDF, and the same goes for the probability of default, then the covariance between the SDF and the returns equals 0 (cov(m, x) = 0). This will lead us to: p = E[m]·E[x] = E[x]/(1 + r_f).

In reality the covariance is not zero, because there is a relation between interest rate and default risk and the other risk factors we identified for stocks.

Interest rate risk
The effects of a change in the interest rate are:
- Not linear: the bond price P = Σ_t C/(1+i)^t + F/(1+i)^T is a convex function of i.
- Not symmetric: take a bond with face value 1000, a 6% coupon, i = 6% and T = 5 (priced at par). If i changes to 7% the price changes by -41.0, but if i changes to 5% it changes by +43.3.
Another source of nonlinearity is time to maturity, as long-term bonds are more sensitive to interest rate changes than short-term bonds. Also, high-coupon bonds are less sensitive because you get a larger part of your cash earlier (lower duration). This means that the biggest part of the cash flows is discounted at lower values of t (whereas lower-coupon bonds have a larger part of their cash flows later, discounted at higher values of t), and therefore a change in the interest rate has relatively less impact. These are all reasons which make it more difficult to incorporate bond factors into the SDF.
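A sketch reproducing the asymmetry in the example above, assuming an annual-coupon bond with a 6% coupon priced at par:

```python
def bond_price(face, coupon_rate, ytm, T):
    """Price of a bond with annual coupon payments (an assumption of this sketch)."""
    coupon = face * coupon_rate
    return sum(coupon / (1 + ytm) ** t for t in range(1, T + 1)) + face / (1 + ytm) ** T

p0 = bond_price(1000, 0.06, 0.06, 5)            # priced at par: 1000.0
up = bond_price(1000, 0.06, 0.07, 5) - p0       # interest rate rises to 7%
down = bond_price(1000, 0.06, 0.05, 5) - p0     # interest rate falls to 5%
print(f"i -> 7%: {up:+.1f}   i -> 5%: {down:+.1f}")   # approximately -41.0 and +43.3
```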

The interest rate risk can be approximated using Taylor expansions: duration (linear) and convexity (quadratic). Duration is the average time to maturity (weighted by discounted cash flows):
D = Σ_t [t · CF_t/(1+y)^t] / P
Duration incorporates time to maturity and can be used as a linear approximation of price changes due to changes in the yield. In fact, this is even a proxy for short-term returns on a bond (in the long term default probabilities shift too): ΔP/P ≈ −[D/(1+y)]·Δy.

Convexity is a second-order approximation: C = (1/P) · Σ_t [t(t+1) · CF_t/(1+y)^(t+2)],
giving ΔP/P ≈ −[D/(1+y)]·Δy + ½·C·(Δy)².
Duration is a bad approximation when Δy is large; convexity is a lot better but is still off when Δy is large.
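A sketch comparing the exact price change with the duration and duration-plus-convexity approximations for the same 5-year 6% coupon bond (annual coupons assumed):

```python
def cashflows(face, coupon_rate, T):
    coupon = face * coupon_rate
    return [(t, coupon + (face if t == T else 0.0)) for t in range(1, T + 1)]

def price(cfs, y):
    return sum(cf / (1 + y) ** t for t, cf in cfs)

def macaulay_duration(cfs, y):
    p = price(cfs, y)
    return sum(t * cf / (1 + y) ** t for t, cf in cfs) / p

def convexity(cfs, y):
    p = price(cfs, y)
    return sum(t * (t + 1) * cf / (1 + y) ** (t + 2) for t, cf in cfs) / p

cfs, y, dy = cashflows(1000, 0.06, 5), 0.06, 0.01
D, C = macaulay_duration(cfs, y), convexity(cfs, y)

exact = price(cfs, y + dy) / price(cfs, y) - 1  # true relative price change
approx1 = -D / (1 + y) * dy                     # first-order (duration) approximation
approx2 = approx1 + 0.5 * C * dy ** 2           # adding the convexity term
print(f"exact {exact:.4%}   duration {approx1:.4%}   duration+convexity {approx2:.4%}")
```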
However, yields are an artificial construction: a yield combines several future interest rates (spot rates), and it does matter which of the underlying expectations changes, especially when we look at bonds that mature in between. Furthermore, yields and spot rates are only the same for zero-coupon bonds (a source of errors in the literature). Fixed income analysts look at the term structure instead of yields (very difficult).

Default risk
Bonds are rated. The main difference between government bonds and corporate bonds is that the first are on average higher rated and are more liquid (governments can raise taxes which lowers the default probability, but a company cannot raise profits directly). The exact default probability is needed for the value of a bond while ratings only give a range which may not have been updated for a while. Empirically, we see that many big investors have the restriction to solely invest in investment grade bonds (BBB or higher), so an adjustment from BBB to BB can trigger a sell-off. Bond prices and returns can therefore adjust more than implied by merely the default probabilities.
Default probabilities are unknown; if the value of a bond is 90% of what it should be when discounted at the risk-free rate the default probability is much lower than 10% because there also is a risk-premium. This risk premium may depend on the credit rating, but the true premium is unknown. Looking at historical defaults and calculating probabilities from there may solve this chicken-egg problem. This is not merely chasing the usual suspects because rating companies often have a lot of detailed information on the rated company, which is hard to get for outsiders. Results: the lower the grade the higher the default probability and the longer the maturity the higher the default probability (see table). Furthermore, we need to keep an eye on the recovery rates because default and -100% return do not have to be equal.
Even so, investors have a hard time correctly judging risk; the figure shows the risk premia they demanded (the eventual result). These were only default risk premia, because interest rate risk was the same for all. This dataset was affected by 2 recessions, causing a large number of defaults. Maybe a general economic factor does make sense…

5 Factor model
However, a general economic factor and r_m are – in reality – two different things. Research that tries to explain both stocks and bonds (like F&F 1993) uses a term-structure factor and a proxy for defaults next to the market and the F&F factors. In total, that leaves us with a 5 factor model for both the SDF and the Security Market Line.
E(r_i) − r_f = β_i·(E(r_m) − r_f) + s_i·λ_SMB + h_i·λ_HML + t_i·λ_Term + d_i·λ_Credit, where
- HML = high-minus-low (value factor),
- SMB = small-minus-big (size factor),
- Term = long term minus short term government bond return and
- Credit premium = government bond index minus corporate bond index.
The bond related factors also have solid explanatory power when stock returns are taken as dependent variable. In fact, stocks and bonds are related (stock and bonds are competing assets). Interest rates in general determine the time value of money. With stocks dividends and proceeds from the sale of the stock are often far in the future (they have long duration). Empirical proof comes not only from regressions; a major unexpected interest rate shift by the ECB or FED can also be seen at the stock market. Furthermore, equity duration is interesting on its own; duration is not only used to gauge interest rate risk, but also to immunize it (zero net duration).
In fact, stocks and bonds are directly related. When a company defaults the stock is (nearly) worthless. The stock is a long call option on the value of the company, with an exercise price equal to the (present) value of the debt. Bondholders have a short put option on the same underlying value; if all goes well, they get a fixed amount and if not, they get less and risk losing their investment.
This also means the hierarchical portfolio construction (separate bond and stock positions) is not a good idea. Portfolio theory claims it works regardless of the asset type and it is clear that the correlation between the two classes is not zero. The link with descriptive research is especially strong: bond returns cannot be explained by stock factors only (and vice versa) and a comprehensive model works better than 2 smaller ones.
However, placing the 5 factors in the SDF does not yield a transparent (portfolio formation) nor overly accurate (nonlinear interest rate risk) model. Some factors are nonlinear (for example, interest rate risk) and can result in poor portfolio formation or optimization.

Derivatives
The SDF approach is also valid for derivatives since their returns are based on those of the underlying assets, so only the risk factors needed to explain the underlying assets are required. Derivatives can be nonlinear instruments (options), but in a multiperiod model one can replicate a derivative with a risk-free object and the underlying asset. We only need to adjust the proportions along the way (time to maturity is also a cause of nonlinearity for options).
One can replicate this option (in the example) by buying 5/9th of a share and borrowing $36.86 to finance this. (Note: this says nothing about the probabilities of price changes nor one's risk attitude.) A problem here is the continuous rebalancing required, but when rebalanced often enough one can create an entire return distribution from a binomial tree.
The entire argument is made from arbitrage; any non-satiable investor will jump at the opportunity bringing the option prices at the correct level again.
Of course, this only works when we know the future return distribution; rounding up the usual suspects (backward-looking) will not help much because derivatives are much more sensitive to changing expected returns (forward-looking). This is best shown with the CAPM-β of derivatives, which is the weighted average of the β's of the replicating positions (the stock and the risk-free object). (Put options have negative β's.) The problem here is that the weights change every time the stock price (and the probabilities) changes. This means the option β's change all the time, and perhaps quite drastically, because the risk profile of an option can change very fast. Conclusion: the SDF does not work well with derivatives.
The relation between the SDF approach and derivatives is still useful for 2 reasons:
- Risk neutral probabilities: q = (1 + r_f − d)/(u − d) for an up-factor u and a down-factor d (as in the option example above). The real probabilities will be different, but the risk-neutral ones still might be valuable information, especially when you have no idea what the real probabilities are. At least these probabilities are forward-looking, and shifts in risk-neutral probabilities presumably indicate some shift in the real probabilities as well.
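A one-period binomial sketch showing both the replication argument and the risk-neutral probability (the stock, strike and up/down factors are hypothetical, not the course example):

```python
# One-period binomial replication of a call (hypothetical numbers, not the course example)
S0, u, d = 100.0, 1.20, 0.90       # stock price now, up factor, down factor
rf, K = 0.05, 100.0                # one-period risk-free rate and strike price

Cu, Cd = max(S0 * u - K, 0.0), max(S0 * d - K, 0.0)

# Replication: delta shares financed partly by borrowing B reproduce the payoff in both states
delta = (Cu - Cd) / (S0 * (u - d))
B = (d * Cu - u * Cd) / ((u - d) * (1 + rf))    # amount borrowed today
C = delta * S0 - B                              # option price by no-arbitrage

# Risk-neutral probability of the up state (says nothing about the real probability)
q = (1 + rf - d) / (u - d)
C_q = (q * Cu + (1 - q) * Cd) / (1 + rf)

print(f"delta = {delta:.3f}  borrow = {B:.2f}  C = {C:.2f}  q = {q:.3f}  C(risk-neutral) = {C_q:.2f}")
```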

- Option bounds: a positive SDF will lead to the following restrictions:
1. The put-call parity (C + X/(1+r_f)^T = P + S),
2. C > 0 (a call can always result in a payoff when time to maturity > 0),
3. C > S − X/(1+r_f)^T (a call is worth more than its intrinsic value, since the underlying stock can always rise even further),
4. C &lt; S (a call is worth less than the stock because you have to pay the strike price).
These 4 restrictions result in option bounds on the call price.

In a single period we could treat derivatives as additional assets, but this is rarely done; either the multiperiod justification is used or derivatives are ignored. Also, derivatives are generally poor choices in the traditional portfolio model: their price depends on tail-risk, which is usually much larger than variance suggests (hence they are too costly for long positions). Derivatives are more suited for risk management and trading strategies based on information/expectations (not limited to expectations about returns, they can also be used to bet on risk).

Foreign assets
When investing abroad, currency risk can become important (foreign assets = asset risk + currency risk). Returns and covariances of assets can be very different in the home currency. Usually, it is best to redo the whole portfolio optimisation process in the home currency.
Diversification benefits should be present in almost all cases (the non-dollar and non-euro parts of financial markets are simply too big to ignore, both for stocks and bonds). International correlations tend to be lower than those between assets in the same market, even though globalization may push correlations upwards and these correlations are especially high in bad states (less correlation = more diversification benefits). Furthermore, emerging markets offer opportunities not easily found elsewhere, and one can invest in industries that are not available domestically (or only without large growth opportunities). Also, emerging markets have investment possibilities that offer a risk/return ratio that is unobtainable in developed markets without employing a lot of leverage (so foreign assets might give higher returns without the need for shortselling).
Restrictions play a major role in international investment too. In general only developed markets allow for easy short selling. Developing markets have short-selling restrictions, or even restrictions on long positions held by foreign entities (a cause of the home bias). After accounting for these restrictions, the (diversification) benefits may decline notably. Investing in foreign assets also opens the door to exchange rate risk: one invests in 2 assets simultaneously, the foreign asset and the currency (foreign assets = asset risk + currency risk).
The CAPM does poorly in describing the international context and its assumptions are even less plausible there. Foreign investment may benefit from local/specialized knowledge. Here the dangers of hierarchical portfolio management surface again.
We have to look at risk and return again to find out whether currency itself is a good investment. Pinning down factors is tricky. In theory, interest rate differences should explain most long-term currency movements. In reality interest rate differences are not the complete story; expectations regarding central banks' actions can be influential too (and central banks themselves are more focused on inflation). Furthermore, currency markets can be quite focused on the future rather than the present (they are probably the most forward-looking markets), and exchange rates may in some cases be managed to a certain degree. Foreign assets pose a problem in the SDF approach, both due to changing correlations and the issue of hedging (normative) and due to the choice of appropriate SDF factors (descriptive).

Role of information
Information (descriptive) affects preferences (normative) and leads to differences in investments (differences in return distributions) (normative).
The optimal portfolio weights are very sensitive to a change in expectations.
Until now we assumed that all information is available to all at no cost. However, collecting information takes effort and has costs (fees etc.), and some information may not be available at all. The consequences for a single investor who has superior information are approached in the Black-Litterman model.
Now we look at the consequences of many investors with superior information at hand (it alters the description of the market):
* Not all investors will invest in the same tangency portfolio anymore (the bullet shifts with different information).
* Demand (X) for an asset will no longer be solely a function of asset characteristics (the risk/return relationship); the information available plays a role too.
* Trading on this information will affect the price (returns and return movements become a piece of information themselves, not just regarding the underlying asset payoffs, but also regarding what others think those payoffs will be). Therefore prices are not only based on fundamentals (dividends etc.).

Grossman model
The Grossman model incorporates the role of information and is based on several assumptions:
* One risky asset (think of a well diversified portfolio; this makes reasonable sense if the information relates to the economy as a whole, or to the risk factors instead of a specific company); prices are normally distributed and total supply is (supply is fixed), which is a reasonable assumption.
* The risk-free asset is available in unlimited quantity at a fixed price.
* N traders maximize their end-of-period wealth with a CARA utility function.
* At the beginning of the period, each trader gets a signal , indicating the distribution of the end-of-period value with some uncertainty (capturing the quality of the information): a high variance of indicates poor quality of the information; itself is random to account for expected occurrences not related to information.
Under these assumptions, wealth is maximized (using for example: ):
; (which is sort of a budget restriction, assuming all wealth is invested in the risk free object, and that the investment in the risky asset is financed with borrowed money).
The optimal portfolio can then be expressed in terms of ; the amount of the risky asset that is purchased.
,
where the numerator tells us that better investment opportunities lead to investing more in the risky asset and the denominator shows that more expected risk leads to investing less in the risky asset.
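Since the formula itself is missing from the notes, here is a hedged sketch of the demand expression this describes, in the usual CARA/normal form: expected excess payoff given the signal in the numerator, risk aversion times conditional variance in the denominator. All symbol names (P1, y, P0, Rf, a) are my assumptions.

```python
# Sketch of CARA demand for the risky asset in the Grossman-style setup.
# Assumed notation: P1 end-of-period value, y private signal, P0 current price,
# Rf gross risk-free rate, a coefficient of absolute risk aversion.

def risky_demand(exp_P1_given_y, var_P1_given_y, P0, Rf, a):
    # Better prospects (numerator) raise demand; more conditional risk or
    # more risk aversion (denominator) lowers it.
    return (exp_P1_given_y - Rf * P0) / (a * var_P1_given_y)

print(risky_demand(exp_P1_given_y=105, var_P1_given_y=25, P0=98, Rf=1.02, a=0.1))
```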

The properties of the signal can be made more explicit:
;
Resulting in:
.
If the correlation is high, the information is good. The demand for the asset is adjusted depending on the square of the correlation coefficient: a good signal gives a high correlation and hence a big adjustment of the portfolio holdings. Other investors are assumed to respond to their private signals in the same way, affecting their demand. These signals therefore affect demand and change the price. If we have an expression for each investor's portfolio, we can equate supply () and demand () to find prices.
The price itself is an average of the effects of preferences, initial wealth and expectations. The first 2 are modeled in such a way that they are inconsequential (2 small investors can act the same as 1 bigger investor, provided the risk aversion coefficient () is chosen properly).
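A sketch of that equilibrium step, under the same assumed notation as the demand sketch above: summing each investor's CARA demand, setting the total equal to the fixed supply, and solving for the price makes the price a precision-weighted average of the investors' conditional expectations, less a discount for bearing the fixed supply of risk.

```python
# Sketch: equate total demand to the fixed supply X_bar and solve for P0.
# Each investor's demand is (E_i - Rf*P0) / (a * var_i); rearranging gives
# P0 as a precision-weighted average of expectations minus a risk discount.

def equilibrium_price(expectations, variances, a, Rf, X_bar):
    precisions = [1.0 / (a * v) for v in variances]
    weighted_expectation = sum(p * e for p, e in zip(precisions, expectations))
    return (weighted_expectation - X_bar) / (Rf * sum(precisions))

# Two investors with different signals; total supply fixed at 1.
print(equilibrium_price([104, 108], [20, 30], a=0.1, Rf=1.02, X_bar=1.0))
```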
Consequence: one can see the price as a combination of all private signals, and investors adjust their portfolios accordingly: if the market has information that the risky asset is a much better investment than one personally believes, it is logical to adjust one's own portfolio accordingly (demand is proportional to the information).
This only works if everyone has information of the same quality, or if you have no idea about the relative credibility of the information. That is not very realistic. (Different tangency portfolios for different investors mean the CAPM breaks down.)

But in this rational expectations equilibrium, any private information will first be incorporated in the portfolio choice of the investor with that information, but then spread out to all investors. In this way market efficiency is once again reached: current prices reflect the best estimate of future performance, as weighted by the risk/return tradeoff (SDF) (a way for the strong EMH to hold).
However, this fully revealing rational expectation equilibrium is very dependent on its assumptions. It offers an argument to defend market efficiency, but certainly it does not end the debate.
If we adjust the model to allow for enough uncertainty that the price cannot be used to determine other investors' information, we end up with a model that cannot be explicitly solved (because there are also people that act on no information). However, it does give some insights:
* Investors can and will use private information; to what extent depends on the quality of the signal.
* Portfolios do change according to the signal (private information, expectations); the signal will affect the price.
* There must be some coefficient indicating the aggressiveness of the response of the investors to the signal (responsiveness to the signal); some investors will be more aggressive in responding to the information (price change) than others.
* Investors will still try to extract information from the prices (they will try to determine whether price changes are random or due to changes in the expectations/information of other investors).

Market efficiency in the Grossman model: Investors get a private signal (), on which they trade. However, the price itself also acts as a signal since the quantity demanded is influenced by the private signals (of all investors). From this publicly available signal (the price) investors will draw their conclusions, so that the price effect of all private information is incorporated.

--------------------------------------------
[ 1 ]. Outperformance relative to the CAPM is the same as a return without corresponding risk, and is therefore a measure of superior performance.
