Stanford GSB Sloan Study Notes, Week 5-6 (15-16), Autumn quarter
This post consolidates my notes from two weeks instead of the normal one, yet is also a bit more concise than usual, for a few reasons: I was down with the flu for several days and had to miss a few classes, and the midterm exams in Financial Accounting and Organizational Behaviour changed the normal scheduling.
Also, the first session of the latest addition to our core timetable, STRAMGT 259: Generative Leadership by Dan Klein, yesterday was too… experiential to take any notes, really. Basically, we did three hours of improv theatre. It was a lot of fun, but instead of getting into the theory here, get the book: Improv Wisdom: Don’t Prepare, Just Show Up by Patricia Madson. And say “yes” more often to whatever life throws at you, go with the flow and see what happens.
For additional entertainment, here is an experiment shared by my classmate Marc, who is lucky enough to take a Behavioral & Experimental Economics class with the freshly Nobel-prized Al Roth: primatologist Frans de Waal showing how even monkeys reject unequal pay (see especially from the 2nd minute).
And now on to the regular programming. Covered in this issue:
- Why people suck at predicting when they finish a task
- How overdiversification, and especially uncontrolled acquisitions, leads to dysfunctional conglomerates
- Lemmings following lemmings, but not sheep
- Predicting future divorces
- Research from surveying 10,000 founders that quantifies the impact of common “gut decisions” like picking investors or sharing stock between co-founders
- Guest speakers explaining how they’ve used creative incentive schemes to get more out of porn site classification crowdsourcing and VAT payments in China
- The impact of investment lags on IP value creation in startups and established companies
Stanford GSB Sloan Study Notes, Week 2, Summer quarter
Pages assigned for reading: 310
- Reminder: think of normal values not in absolute terms, but as “how many standard deviations from the mean” (Statistics)
- After going through the theory and visualisations behind probabilities of the standard normal distribution (Z) and t-distributions, I have a growing suspicion that in 95% (pun intended) of real-life business cases needing confidence estimates, we’ll be dealing with a simple constant: 2. (For 95% probability on the standard normal distribution, Z = 1.96; and when the sample size is n < 30, you should technically use t, but in reality it tends to be so close that the other uncertainties around sampling and data collection would rarely outweigh the simplicity of just multiplying by two.) (Statistics reading + class discussions)
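A quick numeric sanity check of the “just multiply by two” shortcut, using only Python’s standard library. The sample data here is made up purely for illustration; the point is how close the exact Z = 1.96 interval sits to the rule-of-thumb one:

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# Hypothetical sample (illustrative numbers only, e.g. weekly sales figures)
sample = [102, 98, 110, 95, 105, 99, 108, 101, 97, 104,
          100, 103, 96, 107, 102, 99, 105, 98, 101, 106]

n = len(sample)
m = mean(sample)
s = stdev(sample)
se = s / sqrt(n)                      # standard error of the mean

# "How many standard deviations from the mean" for a single observation:
z_score = (sample[0] - m) / s

# Exact two-sided 95% critical value from the standard normal: ~1.96
z = NormalDist().inv_cdf(0.975)

ci_exact = (m - z * se, m + z * se)   # textbook 95% confidence interval
ci_rule = (m - 2 * se, m + 2 * se)    # the "constant 2" shortcut

print(f"z-score of first observation: {z_score:.2f}")
print(f"exact critical value:  {z:.4f}")
print(f"exact 95% CI:          ({ci_exact[0]:.2f}, {ci_exact[1]:.2f})")
print(f"rule-of-thumb CI:      ({ci_rule[0]:.2f}, {ci_rule[1]:.2f})")
```

The two intervals differ by about 2% in width, which is usually far below the noise introduced by sampling and data collection in the first place.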