Shared Flashcard Set

Details

MGT171M1
M1 Material
40
Economics
Undergraduate 3
01/23/2011

Cards

Term
Decisions: Maxmin Criterion
Definition

Choose the act whose worst outcome is largest. This decision criterion looks at each act's worst-case outcome and bases the choice on those worst cases.

 

Omelette decision
State             Rotten   Fresh
Break egg            0       3
Throw away egg       2       2
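A minimal sketch of the criterion in Python, using the omelette payoffs above (act and state names are just labels):

```python
# Maxmin: find each act's worst outcome, then choose the act whose
# worst outcome is largest. Payoffs are the omelette example above.
payoffs = {
    "break egg":      {"rotten": 0, "fresh": 3},
    "throw away egg": {"rotten": 2, "fresh": 2},
}

def maxmin_choice(payoffs):
    # max over acts of (min over states)
    return max(payoffs, key=lambda act: min(payoffs[act].values()))
```

Throwing the egg away wins here: its worst outcome (2) beats breaking's worst outcome (0).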

Term
Decisions: Maxmax Criterion
Definition

For each possible act, we first find the best possible outcome. Then we choose the act whose best outcome is best.

 

Opposite of MaxMin Criterion

 

Omelette decision
State             Rotten   Fresh
Break egg            0       3
Throw away egg       2       2
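The mirror image of the maxmin sketch, again using the omelette payoffs above:

```python
# Maxmax: find each act's best outcome, then choose the act whose
# best outcome is largest. Payoffs are the omelette example above.
payoffs = {
    "break egg":      {"rotten": 0, "fresh": 3},
    "throw away egg": {"rotten": 2, "fresh": 2},
}

def maxmax_choice(payoffs):
    # max over acts of (max over states)
    return max(payoffs, key=lambda act: max(payoffs[act].values()))
```

Here breaking the egg wins: its best outcome (3) beats the best outcome of throwing it away (2).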

Term
Decisions: Regret Matrix
Definition

Minimizes the maximum ex-post regret.

 

Regret here is what I would have gotten had I made the best choice for the state that occurs, minus what I actually get.

 

Omelette decision
State             Rotten   Fresh
Break egg         2-0=2    3-3=0
Throw away egg    2-2=0    3-2=1

Omelette decision
State             Rotten   Fresh
Break egg           2        0
Throw away egg      0        1
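The regret matrix above can be built mechanically; a sketch with the same omelette payoffs:

```python
# Regret matrix: regret(act, state) = best payoff achievable in that state
# minus the act's payoff. Choose the act minimizing its maximum regret.
payoffs = {
    "break egg":      {"rotten": 0, "fresh": 3},
    "throw away egg": {"rotten": 2, "fresh": 2},
}
states = ["rotten", "fresh"]

best_in_state = {s: max(payoffs[a][s] for a in payoffs) for s in states}
regret = {a: {s: best_in_state[s] - payoffs[a][s] for s in states}
          for a in payoffs}
choice = min(regret, key=lambda a: max(regret[a].values()))
```

Max regret is 2 for breaking the egg and 1 for throwing it away, so the minimax-regret choice is to throw it away.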

Term

Discrete Probability Distribution

 

example

ex// 5 notes pg. 19-20

Definition

Discrete probability distribution:

   A discrete probability distribution assigns probabilities to a countable number of possible outcomes of an event.


ex//
When a coin is tossed, we get one outcome (either head or tail) out of 2 possible outcomes.

Types of discrete probability distribution:

    1) Finite discrete distribution: a finite discrete distribution assigns probabilities to a finite number of values.

ex//
   When a die is rolled, there are 6 possible outcomes, and the total number of outcomes is the finite number 6.

Term
Continuous Probability Distribution
Definition
Continuous probability distribution:

   A continuous probability distribution has an infinite set of possible values and is used to find the probability that an outcome falls in a continuous range of values.

ex//
Consider students' marks in a class: we might want the probability that a student scored above 35% and below 80%. Here probability is assigned to a range of values, not to a single point.
Term
Probability: Discrete Density Function (PDF)
Definition

Represents a probability distribution: the "probability" of a given point (a function which tells you the probability of certain outcomes occurring).

A discrete density function puts weight only on a finite number of outcomes, {x1, ..., xk}. In this case, a density function is a function f for which

Σ_{i=1}^{k} f(xi) = 1,

and for which f(xi) ≥ 0 for all i.

 

PDF Properties:
the prob. that X takes the specific value x is f(x):

1) f(x) ≥ 0

2) the sum of f(x) over all possible values of x is 1

Term
Probability: Continuous Density Function (PDF)
Definition

The "probability" of a given point. This is a distribution over the real numbers. It is specified by a density function f : R → R for which f(x) ≥ 0 for all x and

∫_{-∞}^{∞} f(x)dx = 1

We have to be careful here: f(x) is not the probability of x; rather, ∫_{a}^{b} f(x)dx is the probability of the interval [a, b] → the probability that the outcome lies between a and b

 

PDF Properties:
f(x) is a density, not the probability of the point x:

1) f(x) ≥ 0

2) the integral of f(x) over all possible values of x is 1

Term
Probability: Cumulative distribution functions (CDF)
Definition

The cumulative distribution function F is given by
F(x) = Pr(X ≤ x)
That is, for any real number x, it gives the probability that we get at most x. (F is a function which tells you the probability that X is less than or equal to x.)

 

CDF Properties
1) CDF's are always increasing
2) CDF's are always bounded between 0 and 1
3) CDF's always go to 0 (as x → -∞) and to 1 (as x → ∞) in the limit
4) They are "continuous" from the right, so there are no downward jumps

 

Discrete CDF:

F(x) = Σ_{i: xi ≤ x} f(xi)

Continuous CDF:

F(x) = ∫_{-∞}^{x} f(y)dy
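The discrete formula can be computed directly; a sketch using an assumed fair-die density:

```python
# Discrete CDF built from a density: F(x) = sum of f(xi) over xi <= x.
# The density here is an assumed fair-die example.
die = {x: 1/6 for x in range(1, 7)}

def F(x, density):
    return sum(p for xi, p in density.items() if xi <= x)
```

F is increasing and bounded between 0 and 1, as the CDF properties above require.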

Term
Expected Value: E[x]
Definition

The expected value is just a weighted average of the values that the distribution could take, where the weights are given by the density function.

 

Discrete probability distribution with density p:

Σ_{i=1}^{k} p(xi) xi

Continuous probability distribution with density p:

∫_{-∞}^{∞} x p(x)dx
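The discrete weighted average is a one-liner; a sketch with an assumed fair-die density:

```python
# Discrete expected value: weighted average of the values, with weights
# given by the density. Assumed fair-die example.
die = {x: 1/6 for x in range(1, 7)}
ev = sum(p * x for x, p in die.items())   # (1 + 2 + ... + 6) / 6 = 3.5
```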

Term
Expected Value of a function
Definition

expected value of a function of a distribution

 

Σ_{i=1}^{k} g(xi)f(xi) in the discrete case

∫_{-∞}^{∞} g(x)f(x)dx in the continuous case

Term

Expected Value E[x] for continuous cdf F

 

examples

ex// 8 notes pg. 26

Definition
For an arbitrary continuous cdf F, the expected value is given by

∫_{0}^{∞} (1 - F(x))dx - ∫_{-∞}^{0} F(x)dx

For a distribution that takes only positive values, the expected value is given by

∫_{0}^{∞} (1 - F(x))dx.

For a distribution that takes only negative values, the expected value is given by

-∫_{-∞}^{0} F(x)dx.
Term
Variance: E[(X-E(X))²]
Definition

Expectation of (X-E(X))², where E(X) denotes the expected value of the distribution X.

 

Examples:

ex//10 notes pg. 28 → coin flips

ex//11 notes pg. 29 → uniform
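A sketch of the definition with an assumed fair coin paying 0 or 1:

```python
# Variance as E[(X - E[X])^2] for an assumed fair-coin lottery over {0, 1}.
coin = {0: 0.5, 1: 0.5}
ev = sum(p * x for x, p in coin.items())                # E[X] = 0.5
var = sum(p * (x - ev) ** 2 for x, p in coin.items())   # E[(X - E[X])^2] = 0.25
```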

Term

Binomial Distribution


C(n, x) = n! / (x!(n-x)!)

 

For a set of size n, this gives the number of subsets of size x
p = prob. of success on any given trial

ex// sets of size 2 in 3 flips → C(3, 2) → 2 heads in 3 flips

Definition

p= prob. of success of any given trial

 

Pr(x) = C(n, x) · p^x · (1-p)^(n-x)

 

Expected Value with parameters n and p: E[X]= np

Variance with parameters n and p: Var= np(1-p)
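The pmf, mean, and variance formulas can be checked against each other; a sketch for 3 fair flips:

```python
from math import comb

def binom_pmf(x, n, p):
    # Pr(x) = C(n, x) * p^x * (1-p)^(n-x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

# 2 heads in 3 fair flips: C(3, 2) * (1/2)^3 = 3/8
n, p = 3, 0.5
mean = sum(x * binom_pmf(x, n, p) for x in range(n + 1))                # np
var = sum((x - mean) ** 2 * binom_pmf(x, n, p) for x in range(n + 1))   # np(1-p)
```

The brute-force mean and variance agree with the closed forms np = 1.5 and np(1-p) = 0.75.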

Term
First Order Stochastic Dominance (FOSD)
Definition

For any value x, the probability of getting at least x is at least as high under q as under p → q FOSD p (for discrete or continuous distributions)


        0     1     2
p     .25   .25   .50
q     .20   .10   .70

Prob. of getting 1 falls under q relative to p, but q is still better b/c the prob. of getting at least 1 has gone up (0.75 under p vs. 0.80 under q), and the prob. of getting at least 2 has gone up (0.50 vs. 0.70).

 

The first order stochastic dominance concept tells us when one distribution is intuitively better than another.
But there is a problem: in general, it is possible that for two distributions, neither first order stochastically dominates the other.

 

        0     1     2
p     .25   .25   .50
q     .30   .10   .60

Prob. of getting at least 2 is greater under q (0.50 vs. 0.60), but prob. of getting at least 1 is greater under p (0.75 vs. 0.70) → neither dominates the other.

 

For two distributions X and Y with cdf's F and G, X first order stochastically
dominates Y if for every x, F(x) ≤ G(x) → X is better if its cdf is everywhere less than (or equal to) that of Y
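The cdf condition can be checked mechanically; a sketch using the first example's densities on support {0, 1, 2}:

```python
# FOSD check via CDFs: q dominates p iff q's CDF is everywhere <= p's CDF.
support = [0, 1, 2]
p = {0: 0.25, 1: 0.25, 2: 0.50}
q = {0: 0.20, 1: 0.10, 2: 0.70}

def cdf(density, support):
    total, values = 0.0, []
    for x in support:           # running sum of the density
        total += density.get(x, 0.0)
        values.append(total)
    return values

def fosd(f, g, support):
    # f dominates g iff F(x) <= G(x) at every point of the support
    return all(a <= b + 1e-12 for a, b in zip(cdf(f, support), cdf(g, support)))
```

Here q dominates p (cdf 0.20, 0.30, 1.0 vs. 0.25, 0.50, 1.0), and p does not dominate q.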

Term
Probability Mixtures

examples
ex// 13 notes pg. 37
Definition

Combination of two density functions, for some weight a in between 0 and 1. In either the continuous or discrete case, it is given by, for all x,
af(x) + (1-a)g(x)

 

Probability mixture is useful for explaining lotteries over lotteries (or distributions
over distributions) → decisions whose payoffs are themselves lotteries

You first face the lottery (a, f ; (1-a), g); you then face the residual lottery, as in this table:

 

State               1       2
Prob.               a     (1-a)
ap + (1-a)r         p       r
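A sketch of mixing two discrete densities pointwise (the example densities are assumed):

```python
# Probability mixture af + (1-a)g of two discrete densities, for weight a.
def mixture(f, g, a):
    support = set(f) | set(g)
    return {x: a * f.get(x, 0.0) + (1 - a) * g.get(x, 0.0) for x in support}

f = {0: 0.5, 1: 0.5}     # fair coin
g = {1: 1.0}             # sure thing
m = mixture(f, g, 0.25)  # weight 0.25 on f, 0.75 on g
```

The result is itself a density: the weights at each point sum to 1 across the support.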

Term
Conditional Probability
Definition

How one updates probabilities upon the revelation of information. Define the conditional probability of event A given event B:

 

Pr(A|B) = Pr(A∩B) / Pr(B)

 

Conditional probability of A given B tells us the probability that A is true given
that we are informed that B is true.

For any B, Pr(B|B) = 1. So when B is known to be true, B occurs with probability one.

Term

Conditional Probability: Bayes' Rule

 

example:

ex// 15 notes pg. 39 → fair/weighted coin

Definition

Pr(A|B) = Pr(A∩B) / Pr(B)

        = [Pr(A∩B) / Pr(A)] · [Pr(A) / Pr(B)]

        = Pr(B|A) · Pr(A) / Pr(B)

 

→ Pr(A|B) = Pr(B|A) Pr(A) / Pr(B)
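A numeric sketch of the rule; the fair/weighted-coin numbers below are assumed, not taken from the notes:

```python
# Bayes' rule: Pr(A|B) = Pr(B|A) Pr(A) / Pr(B). Assumed setup: pick the
# fair coin (event A) with prob 0.5, otherwise a weighted coin that lands
# heads with prob 0.75. B = "the flip came up heads".
pr_A = 0.5
pr_B_given_A = 0.5            # heads from the fair coin
pr_B_given_notA = 0.75        # heads from the weighted coin

# total probability of heads
pr_B = pr_B_given_A * pr_A + pr_B_given_notA * (1 - pr_A)

# posterior probability the coin was fair, given heads
pr_A_given_B = pr_B_given_A * pr_A / pr_B
```

Seeing heads lowers the posterior on the fair coin from 0.5 to 0.25/0.625 = 0.4.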

Term
Concavity
Definition

concave: f is concave if every straight line joining two points on the function lies below the function

 

line connecting two points: af(x) + (1-a)f(y)
function: f(ax + (1-a)y)

 

f is concave if for all a in between 0 and 1 and for all x and y,
af(x) + (1- a) f(y) ≤ f (ax + (1-a)y).


→ line is always under the function

→ the average of the function values is always less than (or equal to) the function evaluated at the average
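A numeric check of the concavity inequality, using f = sqrt as an assumed standard concave example:

```python
# Concavity at sample points: a*f(x) + (1-a)*f(y) <= f(a*x + (1-a)*y).
from math import sqrt

a, x, y = 0.5, 1.0, 9.0
chord = a * sqrt(x) + (1 - a) * sqrt(y)   # average of function values: 2.0
func = sqrt(a * x + (1 - a) * y)          # function at the average: sqrt(5)
```

The chord value 2.0 sits below sqrt(5) ≈ 2.236, as concavity requires.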

Term
Convex
Definition

convex: f is convex if every straight line joining two points on the function lies above the function

 

line connecting two points: af(x) + (1-a)f(y)
function: f(ax + (1-a)y)

 

f is convex if for all a in between 0 and 1 and for all x and y,
af(x) + (1- a) f(y) ≥ f(ax + (1-a)y).


→ line is always above the function

→ the average of the function values is always more than (or equal to) the function evaluated at the average

Term

Concavity Twice Differentiable

 

example

ex// 18 notes pg. 43

Definition
f is concave if and only if f''(x) ≤ 0 for all x
Term

Convexity Twice Differentiable

 

example

ex// 17 notes pg.43
ex// 18 notes pg. 43 --> neither

Definition
f is convex if and only if f''(x) ≥ 0 for all x.
Term

Decisions: Value of Information

 

examples

ex// 19-23 notes pg. 44-49

Definition
Finally, for a given decision problem, one can compute the value of information. To find this, we follow a few simple steps:

1. Calculate the value of the problem without information: that is, the value of the optimal decision.
2. Modify the original problem by subtracting a variable x from each element of the payoff matrix.
3. Calculate the value of obtaining information (given x).
4. Find x for which the value of obtaining information is equal to the value of the optimal decision without information.
Term

Utility Function → Expected Utility


examples

ex// 25 notes pg. 52

 

Concave utility function: risk aversion

 

Convex utility function: risk loving

Definition

Dictates how choices should be made, or how they are made → the object with the highest utility is the one that should be chosen

 

utility index: function whose expectation we compute
expected utility: utility of a lottery, p → linear functions of p (value of utility changes linearly as p changes linearly)

 

Discrete:

Σ_{i=1}^{k} u(xi)p(xi)

Continuous:

∫_{-∞}^{∞} u(x)p(x)dx
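The discrete formula computed directly, with an assumed lottery and the concave index u(x) = sqrt(x):

```python
# Discrete expected utility: sum of u(xi)p(xi) over the lottery's support.
# Lottery and utility index are assumed for illustration.
from math import sqrt

lottery = {0: 0.5, 100: 0.5}
expected_utility = sum(prob * sqrt(x) for x, prob in lottery.items())
```

With this index, E[u(p)] = 0.5·0 + 0.5·10 = 5, well below u applied to the expected value u(50) ≈ 7.07 — a first glimpse of risk aversion.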

Term

Indifference Notation

 

p > q → pick p over q

p ~ q → indifferent between picking p and q

p ≥ q → either pick p over q or we are indifferent between p and q

Definition
Term

Expected Utility: Independence Axiom

 

p > q → pick p over q

p ~ q → indifferent between picking p and q

p ≥ q → either pick p over q or we are indifferent between p and q

Definition

All expected utility functions generate choices which have the following property:

Independence axiom: For all lotteries p, q, and r and all 0 < a ≤ 1, p ≥ q if and only if ap + (1-a)r ≥ aq + (1-a)r
→ if p is ranked as high as q, then ap + (1-a)r is ranked as high as aq + (1-a)r

 

State              1        2
Prob.              a      (1-a)
ap + (1-a)r        p        r
aq + (1-a)r        q        r

Term

Utility Indices

 

examples

ex// notes pg. 60

Definition

* just b/c utility indices are diff. fxns doesn't mean they lead to diff. rankings

 

Two utility indices u and v make the same choices if and only if there exist a > 0 and b for which u(x) = a·v(x) + b
→ one is a positive linear transformation of the other, so they produce the same ranking

 

two indices u & v
For every p & q

E[u(p)] ≥ E[u(q)] ↔ E[v(p)] ≥ E[v(q)] → shape is the same, diff. scaling

Term

Partition

 

examples

ex// 30 updated notes pg. 62

Definition

Given a state space S, a partition is a collection of events {I1, ..., Im} with the following two properties:

1. Every state is in some event in the partition
2. No two events overlap
A partition is simply a list of events which is mutually exclusive and exhaustive.

 

Partition P refines partition P' if every event E in P is a subevent of an
event in P' → P has more events → provides more info

 

Thm: If P' refines P, then for any expected utility maximizer, the value of P' is higher than that of P

Term
Value of Decision with Partitions
Definition

Value of decision problem D with partition P = {I1, ..., In} is given by the following:

Σ_{i=1}^{n} p(Ii) · (max_{j=1,...,m} E[u(aj) | Ii])

E[u(aj) | Ii]: utility of choosing aj given information Ii

max_{j=1,...,m}: the maximal utility you could get given Ii

Σ_{i=1}^{n} p(Ii) · (max_{j=1,...,m} E[u(aj) | Ii]): your expected utility looking ahead, before any info. is revealed
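A sketch of this computation using the omelette payoffs as utilities and an assumed uniform prior over the two states:

```python
# Value of a decision problem under a partition:
# V = sum over events Ii of p(Ii) * max over acts of E[u(act) | Ii].
prior = {"rotten": 0.5, "fresh": 0.5}          # assumed prior
utility = {
    "break egg":      {"rotten": 0, "fresh": 3},
    "throw away egg": {"rotten": 2, "fresh": 2},
}

def value(partition, prior, utility):
    total = 0.0
    for event in partition:
        p_event = sum(prior[s] for s in event)
        # best conditional expected utility achievable given the event
        best = max(
            sum(prior[s] * utility[a][s] for s in event) / p_event
            for a in utility
        )
        total += p_event * best
    return total

full_info = value([["rotten"], ["fresh"]], prior, utility)   # finest partition
no_info = value([["rotten", "fresh"]], prior, utility)       # coarsest partition
```

Full information yields 0.5·2 + 0.5·3 = 2.5 versus 2.0 with no information, illustrating the theorem that a finer partition is worth at least as much.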

Term
Risk Averse
Definition

Decision maker is risk averse if she always prefers the expected value of a lottery to the lottery itself → u(E[X]) ≥ E[u(X)] → E[p] ≥ p

 

Expected utility maximizer with utility index u is risk averse if u is concave

 

Lottery p: payoff x with prob. a, payoff y with prob. (1-a)

How does a risk averse person behave?
Expected value is ax + (1-a)y
Expected utility is au(x) + (1-a)u(y)
So risk aversion implies
u(ax + (1-a)y) ≥ au(x) + (1-a)u(y) → the definition of concavity

Term

Jensen's Inequality: For any distribution X and any concave function u, E[u(X)] ≤ u(E[X]) (exactly the requirement for risk aversion)

 

→ u is concave if the utility of the expected value is higher than the expected utility of the lottery

Definition
Term
Risk Lover
Definition

Someone who would choose a lottery over its expected value

U is convex

Term
Risk Neutral
Definition

Expected value maximizers are a special kind of expected utility maximizers, with utility index u(x) = x. And in fact, these decision makers are both risk averse and risk loving → risk neutral

 

u is linear

Term

Risk and Expected Value

 

risk averse:  always ranks E[p] over p --> u is concave

risk loving: always ranks p over E[p] (for all p) --> u is convex

risk neutral: E[p] and p are indiff. for all p --> u is linear

Definition
Term

Certainty Equivalents

 

examples

ex// 30-31 notes pg. 65

Definition

Certainty Equivalent (ce(p)): If we have a lottery p and a ranking, the certainty equivalent gives the value x ∈ R for which x ~ p; that is, for which x and the risky lottery p are indifferent → how much money for sure is a lottery worth?

 

For an expected utility maximizer, it is therefore the number c for which u(c) = E[u(p)].

 

ce(p) = u⁻¹(E[u(p)]) → ce(p) is the amount of money an indiv. would accept in lieu of p itself

 

 

Term
Certainty Equivalents and Risk
Definition

Risk averse iff ce(p) ≤ E[p] for all p
Risk loving iff ce(p)  ≥ E[p] for all p

Risk neutral iff ce(p) = E[p] for all p

 

Risk aversion says E[u(p)] ≤ u(E[p]) → applying u⁻¹ to both sides gives ce(p) = u⁻¹(E[u(p)]) ≤ E[p]

Term
Attitudes towards risk
Definition
                utility index    ce(p)            risk premium
Risk Averse     concave          ce(p) ≤ E[p]     Π(p) ≥ 0 (nonnegative)
Risk Loving     convex           ce(p) ≥ E[p]     Π(p) ≤ 0 (nonpositive)
Risk Neutral    linear           ce(p) = E[p]     Π(p) = 0
Term
Risk Premium
Definition

risk premium for a given lottery is just the amount that an individual would be willing to pay to get rid of the risk. In other words, it is E[p] -ce(p).

 

Π(p) = E[p] - ce(p) for risk premia

 

Note:
If an individual is risk averse, the risk premium is nonnegative.
For a risk loving individual, the risk premium is nonpositive, so you would have to give money to someone to get rid of the risk.
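A sketch tying ce(p) and the risk premium together, with an assumed 50/50 lottery and the concave index u(x) = sqrt(x):

```python
# Certainty equivalent ce(p) = u^{-1}(E[u(p)]) and risk premium
# E[p] - ce(p), for the assumed index u(x) = sqrt(x).
from math import sqrt

lottery = {0: 0.5, 100: 0.5}
eu = sum(p * sqrt(x) for x, p in lottery.items())   # E[u(p)] = 5
ce = eu ** 2                                        # u^{-1} is squaring
ev = sum(p * x for x, p in lottery.items())         # E[p] = 50
premium = ev - ce                                   # nonnegative: risk averse
```

This individual would accept 25 for sure in place of a lottery worth 50 in expectation, so the risk premium is 25.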

Term
Comparing Risk Aversion
Definition

Individual 1 is more risk averse than individual 2 if every risk that individual 1 is willing to take, individual 2 is also willing to take
In other words, p ≥₁ c → p ≥₂ c for all lotteries p and sure things c

 

For expected utility, u corresponds to a more risk averse individual
than v if and only if for all lotteries p and all sure things c,
E[u(p)] ≥ u(c) → E[v (p)] ≥ v(c)

 

 

Term
Comparing Risk Aversion Thm
Definition
For expected utility maximizers with utility indices u and v, the
following are equivalent:
1. u is more risk averse than v
2. For all p, ce_u(p) ≤ ce_v(p)
3. For all p, Π_u(p) ≥ Π_v(p)
4. There exists a strictly increasing concave j for which u = j ∘ v.
Furthermore, if both u and v are twice differentiable, the preceding
are equivalent to

-u''(x)/u'(x) ≥ -v''(x)/v'(x)     for all x.