Stanford GSB Sloan Study Notes, Week 9, Summer quarter done
Pages assigned for reading: who cares now? 🙂
And this is it, one quarter of our year here is done already, with an awesome Ethics crash-course finale competing with TechCrunch Disrupt SF (not part of the official curriculum, but stealing a lot of attention from most of the class anyway, especially with their sweet 90% student discount) and all culminating with our last exam, Microeconomics, this morning. In which, as you can see from the picture of the scoring table below, I am quite confident I'll bring home at least 5 points:
Yes, we have been warned about this year passing (too) fast, but I still didn’t expect this blink… And here’s what we learned in the last summer week, before the MBAs stormed the campus for their Week Zero and took most of the spots in the lunch line at Arbuckle:
GSBGEN259 – Ethics (prof Krehbiel)
- Attribution theory in experimental psychology: people are inclined to suppose that what someone does reflects her underlying character
- even when you explain to them that the person is just putting on a performance
- think of the obsessions with actors & pop stars based on the roles they play?
- Aristotle – a person with good character is virtuous.
- Test: write down 3 words describing a person you admire. If these are virtues (ex: "selfless") you should be able to derive a deficiency ("selfish") or an excess ("doormat") for each.
- Hursthouse:
- The right thing to do is what a virtuous agent would do in the circumstances
- A virtuous person is one who has and exercises the virtues
- A virtue is a character trait that a person needs in order to have eudaimonia, in order to have a good life
- Most stolen books in libraries are philosophy books. And among them the most stolen subset… ethics books! (research by Appiah; Schwitzgebel)
- Situationists counter virtue ethics: behaviour is contextual rather than "agent-centered"
- e.g. the same people do good and bad deeds based on situation, not character
- Many replicated experiments show how finding a dime a moment before, or proximity to a good-smelling bakery, dramatically increases the likelihood that a person will help a stranger on the street
- 19th-century hope that logic could be reduced to a "physics of thought", and a rival view that it is the "ethics of thought" (sources?)
- Nonconsequentialist (deontological) ethics: what is on the list is intrinsically right
- 10 commandments for Christians
- human rights
- negative rights (life, privacy, property): ones that usually impose nothing more than restraints on [other] people’s conduct
- positive rights (education, shelter, health care): require active effort by other people / government
- issue: conflicts between items on the list require subjective prioritisation (e.g. up to breaking which other points are "white lies" OK?)
- Consequentialist (teleological; "telos" = goal in Greek) ethics: actions' observable _consequences_ to people are what make them right or wrong
- right if an act produces greater sum of desired value in the world than its alternatives
- depending on goal: aesthetic, hedonistic, utilitarian teleology
- Milgram experiment: read about it here, or much better, watch the original 1961 experiment on YouTube
- Question in ethical “cost-benefit” analysis – how wide to draw the net for impacted constituencies?
- concentric circles: self -> family -> business -> other businesses -> society -> global economy
- benefits are concentrated on self, costs dispersed
- “things immediately good for us, with no cost in sight” could be a yellow flag of a potentially unethical behaviour
- consequentialist ("should") ethics: always take the assessment global
- Even if an act is legal, it is unethical if deception is involved
- Ex: a certain bookkeeping mechanism is fully legal, but the reason for using it is to avoid another method that would expose problems
- Estonian JOKK concept
- Paradox: an unethical first step is often reinforced by good virtues
- Ex: a person helps a friend commit fraud because she is compassionate, helpful, trusting. And the fraudster gets even better buy-in by being honest.
- situational context outweighs the virtues
- Utilitarianism: an act is right if it produces the greatest amount of happiness aka utility
- ("…to the greatest number of persons" is often added, but wrong: with the absolute highest total utility, the number of people doesn't matter)
- happiness – state of life when all of one’s most important desires are satisfied
- utilitarian decision making in practice often can lead to conclusions that defy our moral “common sense”
- ex: nothing prohibits a utilitarian society from mistreating a small ethnically different subset if that increases the "happiness" of the vast majority (if the perceived benefit, even if minor per capita, across a large crowd outweighs the intense suffering of a few people)
- counter-question: does the fear and terror spread in the entire society as a result prohibit such acts by "lessening overall happiness"?
- can create situations where acts are morally improved if kept hidden!
- Jack Bauer of 24 – an example of the most hardcore utilitarian 🙂
- Rule utilitarianism (labelling the above “act utilitarianism”): right acts are those that conform to a basic rule whose existence produces the greatest amount of happiness for the greatest number of persons
- E.g, once agreed that “telling the truth” makes a society happy in general, the rule doesn’t need to be weighed again for each individual case
- RUs presume the world shares their moral standards – if not, very un-utilitarian things can immediately happen (the RU loses, total utility goes down)
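The act-utilitarian aggregation above can be made concrete with a toy calculation (my own hypothetical numbers, not from the lecture): summing utilities can endorse an act that hurts a minority whenever the majority's dispersed gains outweigh the minority's concentrated suffering.

```python
# Hypothetical act-utilitarian comparison: pick the act with the highest
# *total* utility, regardless of how it is distributed across people.
acts = {
    "respect_minority": [0] * 10,            # nobody gains, nobody suffers
    "exploit_minority": [3] * 8 + [-9, -9],  # 8 gain a little, 2 suffer a lot
}

def total_utility(per_person):
    return sum(per_person)

best_act = max(acts, key=lambda a: total_utility(acts[a]))
print(best_act)  # exploit_minority: total 24 - 18 = 6 beats total 0
```

This is exactly the "defies moral common sense" point: the aggregate number hides who pays the cost.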
- Kant’s logical exercise:
- word a maxim – “to get money, I’ll lie”
- generalize – “everyone lies for money”
- test if such a world is logically conceivable – “world without truth”
- If no: can’t be ethical
- universalisation roots out exceptionalism via logic (not empirics)
- Risk: understanding ethical frameworks creates a toolkit for justifying self-serving biases!
- Rawls's Theory of Justice: summary: http://bibliobrevity.blogspot.com/2012/02/summary-of-rawls-theory-of-justice.html
- Main interest: to design social institutions to give everyone a fair chance
- Framework for evaluating current, not creating new civilisations
- Self-interested people behind the "veil of ignorance" will maximise outcomes for the worst-off (because they are mitigating for a situation where they'd happen to be among the worst-off one day)
- Snickers division example – if you let 2 kids, S & E, divide a snack bar:
- A: [    S    ] [  E  ]   <- if S gets to cut
- B: [  S  ] [    E    ]   <- if E gets to cut
- C: coin flip: one cuts, the other chooses (Nash equilibrium)
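The "one cuts, the other chooses" mechanism can be sketched numerically (my own illustration, not from class): since the chooser takes the bigger piece, the cutter is left with the smaller one, and the smaller piece is maximised by an even split.

```python
# Cut-and-choose: the cutter picks a split x of a bar of size 1;
# the chooser then takes the larger piece, leaving min(x, 1 - x)
# for the cutter. The cutter's best cut is therefore 50/50.
def cutter_payoff(x):
    return min(x, 1 - x)  # chooser grabs the bigger piece

cuts = [i / 100 for i in range(101)]       # candidate cut points
best_cut = max(cuts, key=cutter_payoff)
print(best_cut)  # 0.5
```

No fairness norms needed: the incentive structure alone produces the fair outcome.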
- In ethics, utilitarianism is to justice what in economics efficiency is to distribution
- Importance of motives
- people will always inevitably try to assess motives
- yet they are impossible to observe
- they matter in anticipating non-market reactions
- Contradiction: “this space is intentionally left blank” 🙂
- CONCLUSION: Ethics is fundamentally about tensions between
- Strong gravitational pull to the left (self)
- Imperfect antidotes: ethical push to the right (social interest)
- Bumper sticker summary:
- Defy gravity. Do what’s right. (Krehbiel)
MGTECON209 – Statistics & Economics (prof Oyer)
- Risk is a situation in which the likelihood of each possible outcome is known or can be estimated, and no single possible outcome is certain to occur
- if it is not quantifiable, it is just uncertainty
- Fair bet is a wager with expected value of zero
- Risk preference defined through fair bet: risk averse (doesn’t want to bet), neutral (indifferent), preferring (willing to make a fair bet)
- Fair insurance is a fair bet (EV=0) from policy holder’s view
- In real-life examples, you can virtually always ignore the risk-neutral / risk-seeking people and focus on risk aversion (just a different utility function)
- comparing utility functions
- cannot compare absolute utilities: one person is not "happier"
- slopes matter, not values
- common utility function: natural log of wealth/income as a conservative (risk-averse) view
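A quick sketch of the two definitions above together (my own numbers, assuming the ln-wealth utility just mentioned): a fair bet has expected value zero, yet its expected *utility* under log utility is below the utility of just keeping your wealth, so the agent is risk averse and declines.

```python
import math

wealth = 100.0
# Fair bet: win or lose 50 with probability 1/2 each, so EV = 0.
outcomes = [(0.5, wealth + 50), (0.5, wealth - 50)]

ev_of_bet = sum(p * w for p, w in outcomes) - wealth  # expected gain from betting
eu_bet = sum(p * math.log(w) for p, w in outcomes)    # expected utility if betting
eu_keep = math.log(wealth)                            # utility of declining

print(ev_of_bet)         # 0.0  -> the bet is fair
print(eu_bet < eu_keep)  # True -> a log-utility agent rejects the fair bet
```

The concavity of ln(.) is doing all the work: the utility lost on the downside outweighs the utility gained on the upside, even though the dollar amounts match.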
- Reasons for not diversifying risk inside a portfolio
- incentives (employer stock plans)
- information (or rather, lack of)
- value of large holdings (control, voting rights…)
- high transaction costs
- Moral hazard (aka “hidden action”) – incentives for people to take inefficient actions
- ex: fixed salary = less risk, but muted incentive to perform
- est. 20-50% of US healthcare cost
- you pay $10 in insurance, insurance pays $1000 for a simple procedure – but you don't care, because you are insulated from seeing that bill directly
- Adverse selection (aka “hidden information”) – people distort or hide details that they would not hide if they bore the risk
- Cool chronology of Game Theory history: http://www.econ.canterbury.ac.nz/personal_pages/paul_walker/gt/hist.htm
- Game Theory terms definitions
- common knowledge: a piece of information that is known by all players, and it must be known by all players to be known by all players, and it must be known to be known to be known, … etc
- complete information: situation where the payoff function is known to all players
- perfect information: situation where the player who is about to make a move knows the full history of the play of the game to this point, and that information is updated with each subsequent action
- rules of the game: define both the actions players can make at each move and the timing of those moves
- static game: each player acts only once and the players act simultaneously (or at least without knowing their rivals' actions). Firms have complete information about the payoff function, but imperfect information about rivals' moves.
- dynamic game: players move either sequentially or just repeatedly
- pure strategy: each player chooses an action with certainty
- mixed strategy: action choices based on probability
- Predicting outcomes of games:
- rational players will avoid strategies that are _dominated_by other strategies
- instead of focusing on dominant strategies, the approach is to eliminate dominated strategies iteratively
- dominant strategy (if it is available): A’s payoff is higher no matter which strategy B chooses
- in the prisoner's dilemma all players have dominant strategies, yet the sum of payoffs is inferior to what co-operative strategies would yield
- cannot calculate expected values across rows/columns of options!
- that would assume the other side plays randomly, but this is a strategy game
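The prisoner's dilemma point can be checked mechanically (a sketch with standard textbook payoffs, not numbers from the lecture): "defect" strictly dominates "cooperate" for each player, yet mutual defection yields a worse total than mutual cooperation.

```python
# Prisoner's dilemma, row player's view. payoff[s][opp] is the payoff
# for playing strategy s against an opponent playing opp.
# Strategies: 0 = cooperate, 1 = defect (illustrative numbers).
payoff = [[3, 0],   # cooperate: 3 vs a cooperator, 0 vs a defector
          [5, 1]]   # defect:    5 vs a cooperator, 1 vs a defector

def dominant_strategy(payoff):
    """Return a strictly dominant strategy index, or None if none exists."""
    n = len(payoff)
    for s in range(n):
        others = [o for o in range(n) if o != s]
        if all(payoff[s][opp] > payoff[o][opp]
               for o in others for opp in range(n)):
            return s
    return None

print(dominant_strategy(payoff))  # 1 -> defect dominates (game is symmetric)
# Yet mutual defection sums to 1 + 1 = 2, versus 3 + 3 = 6 for mutual cooperation.
```

This is also why computing expected values over the opponent's options is wrong here: the opponent isn't random, they run the same dominance logic.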
- Pure strategy with multiple Nash equilibria: two companies considering building a gas station on a road where there is enough demand for just one.
- Either firm entering is an equilibrium where the other one doesn’t want to enter any more
- Nash (1950) proved that every static game with a finite number of players and a finite number of actions has at least one equilibrium (possibly in mixed strategies)
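The two-station entry game can be brute-forced (hypothetical payoffs of my own): a pure-strategy profile is a Nash equilibrium when neither player gains by deviating unilaterally, and here exactly the two "only one enters" profiles qualify.

```python
# Entry game: each firm chooses Enter ("E") or Stay out ("S").
# Payoffs (firm1, firm2); the road supports only one profitable station.
payoffs = {
    ("E", "E"): (-5, -5),  # both enter: both lose money
    ("E", "S"): (10, 0),
    ("S", "E"): (0, 10),
    ("S", "S"): (0, 0),
}
actions = ["E", "S"]

def is_nash(a1, a2):
    u1, u2 = payoffs[(a1, a2)]
    no_dev1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)  # firm 1 can't improve
    no_dev2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)  # firm 2 can't improve
    return no_dev1 and no_dev2

equilibria = [(a1, a2) for a1 in actions for a2 in actions if is_nash(a1, a2)]
print(equilibria)  # [('E', 'S'), ('S', 'E')]
```

With multiple equilibria the theory alone doesn't say which one happens; that's where timing (who moves first) starts to matter.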
- First mover advantage is changing the game from static to dynamic
- In dynamic games per the Stackelberg model, players' strategies form a Nash equilibrium in every subgame (subgame-perfect equilibrium)
- Solved through backward induction, i.e. finding the best option for the last player to move and working back from there
- Payoffs are lower in equilibria where players cannibalize each other's payoffs
- Ex: firms advertising in a market where they attract new customers vs. one where they can only "steal" customers from each other
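Backward induction can be shown on a toy Stackelberg quantity game (my own illustrative setup: inverse demand P = 12 − Q, zero costs; none of these numbers are from the lecture): first solve the last mover's best response, then let the leader optimise against it.

```python
# Stackelberg leader-follower duopoly solved by backward induction.
# Assumed demand: price = 12 - (q1 + q2); marginal cost = 0.
quantities = range(13)  # leader's candidate integer quantities

def profit(q_own, q_other):
    return q_own * (12 - q_own - q_other)

def follower_best_response(q1):
    # Step 1, last mover first: maximising q2 * (12 - q1 - q2) over q2 >= 0
    # gives the textbook best response q2 = (12 - q1) / 2.
    return max(0.0, (12 - q1) / 2)

# Step 2: the leader optimises, anticipating the follower's reaction.
q1_star = max(quantities, key=lambda q1: profit(q1, follower_best_response(q1)))
q2_star = follower_best_response(q1_star)
print(q1_star, q2_star)  # 6 3.0 -> the first mover produces twice as much
```

The leader's advantage comes purely from moving first: committing to a large quantity forces the follower to scale back.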
- Game visualisation types:
- Normal form: table of payoffs based on static game combinations between players
- Extensive form: decision tree diagram of a dynamic game (players take turns and have information about the other party’s move)
- Credible threat / commitment example playing against yourself:
- To get over a writer's block, an author sets a due date & if he doesn't deliver by then, has a friend transfer a $10,000 donation to an organisation he hates the most (neo-Nazi party, KKK, …)
- Cheap talk (before making simultaneous moves) can only solve coordination games
- Golden Balls – a UK TV show example of how to play the "split or steal" game
For more posts on the Stanford GSB Sloan life – click here to search by tag “sloan”.