This post appeared first on Closest Point of Approach in May 2024.
How do sophisticated investors behave in financial markets?
The professional acts on her assessment of the tradeoff between risk and reward.
She acts knowing that she won’t be right every time. Her edge, if she has one, comes from at least two things.
One, she needs a good method for estimating, in any individual situation, the risk of permanent loss and the gain she could lock in.
Two, she must have the discipline to invest only when the tradeoff skews predominantly in her favor, i.e., when the potential gain far exceeds the permanent loss she may incur if things don’t work out.
We could add other factors, such as her assessment of the probability of a favorable outcome versus an unfavorable one. She needs the discipline to sell when the valuation hits her target or when contrary evidence emerges that invalidates her original view. Or we could consider her willingness to withstand the market’s ups and downs so that she isn’t forced into selling into weakness.
These and other considerations only modify the two core dimensions of edge. Her evaluation of the risk/reward in any individual situation, and for the portfolio as a whole, is a continuous process. Her buying and selling is an act of constant, conscious discipline.
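To make the skew concrete, here is a minimal arithmetic sketch in Python. The hit rate and payoff sizes are made-up numbers for illustration, not anyone’s actual estimates:

```python
# Illustrative only: a 40% chance of a +60% gain against a 60% chance of a
# -10% loss still has positive expected value, because the payoff is skewed
# in her favor even though the odds of winning are not.
def expected_return(p_win: float, gain: float, loss: float) -> float:
    """Expected return per position, as a fraction of the capital at risk."""
    return p_win * gain - (1.0 - p_win) * loss

print(expected_return(p_win=0.40, gain=0.60, loss=0.10))  # about 0.18, i.e. +18% per bet
```

The discipline is in refusing to act when that number isn’t comfortably positive, not in being right more often than not.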
It sounds easy, but it’s difficult.
For example, some people may have a good ability to estimate risk and reward, even as they lack the patience to act only on the right kind of setup.
Warren Buffett compares investing to a game of baseball in which there are no called strikes. You can only strike yourself out by swinging at the wrong pitch. In Buffett’s game, you wouldn’t swing without a strong belief that you would hit a homer. You would wait for the fat pitch.
It’s a Law of Large Numbers exercise. This means investors have to be confident enough in their edge to be able to stick with their system even when they endure a losing stretch. Their edge will work over a long enough period of time, if it’s real. Of course, you won’t know if you have a genuine edge until you have been in the game for a while. If you have a bad run at the beginning, you might abandon a winning approach before it starts to pay off.
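A quick simulation shows why this takes nerve; the 55% win rate and the bet count are assumptions for illustration, not a claim about any real strategy:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def longest_losing_streak(p_win: float = 0.55, n_bets: int = 1000) -> int:
    """Longest run of consecutive losses for a bettor with a genuine edge."""
    longest = current = 0
    for _ in range(n_bets):
        if random.random() < p_win:
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

# Even with a real 55% edge, a thousand bets typically include a run of seven
# or more consecutive losses, which is long enough to shake anyone's faith.
print(longest_losing_streak())
```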
It’s incredibly difficult.
People are weak and impatient and lack faith in themselves. It’s rare to have someone with an authentic, disciplined edge.
There is an edge to taking risk in an organization just as there is in investing in capital markets. The objective is different, though. The risk/reward tradeoff isn’t about maximizing profit; it’s about generating political capital.
Large organizations make decisions every day. They have problems to solve. Some of these choices are big and some of them are small. Some of them work out and some of them fail. For example, a large organization may elect to implement an Enterprise Resource Planning system. ERPs (even just individual modules) are notoriously difficult to implement, not to mention extraordinarily expensive. Or organizations may try to replace a system that works because, as the kids say, “reasons.” The ongoing, botched FAFSA rollout is a good example of what can go wrong.
In large organizations, people do things to advance their own interests. If Alice can lead the successful implementation of a system that improves the output of the firm, makes her boss look good, or responds to a demand from the Board of Directors, then Alice increases the size of her political capital account. This puts her in a better position to be promoted or to see her compensation increase. Maybe she can use her win to get a better job elsewhere. That’s the reward.
Alice is like an investor who plunges into the stock market with borrowed money. If she wins, her bank balance goes up. If she loses, the institution that lent her the money loses. Alice moves on to the next trade. If she loses a lot of money, nobody will lend to her anymore. Or it might cost her more to finance the next trade.
Except that in the large-organization context, the currency is Alice’s political capital, which is made up of her power and her compensation.
The risk is that the project doesn’t go well and the firm suffers increased costs or foregone revenue. These may or may not translate to a depletion of Alice’s political capital account. Perhaps she’ll be lucky and it blows up after she moves to her next role (or, ideally, to another company). But if she’s still in the seat when it fails, she will suffer. She may even lose her job over it.
In both cases, the risk/reward calculus focuses on Alice. Alice can decide to sit on the opportunity or she can foist it on someone else, say a rival peer.
Alice, if she is a canny operator, will take on only projects with a high likelihood of success, in which the consequences of failure are small (or deniable) and the upside from success is personally large and tangible. People have made their careers from leading these kinds of large projects.
Naturally, this risk/reward calculus is a function of the organization.
Culturally, if there have been a number of heralded previous successes, Alice’s evaluation of the risk/reward tradeoff may be biased in favor of overestimating the reward and underestimating the risk. There are many examples of people who have benefited personally from leading a successful project while there is scant evidence of people who suffered from project failure. Alice swings at every pitch. This is like the momentum investor who chases the stock market higher.
On the other side, if the corporate history is littered with high-profile project failures that have killed or maimed individual career trajectories, Alice’s sense for the risk/reward will tend towards the pessimistic, overemphasizing the risk while underestimating the reward. She won’t swing at any pitches, fat or skinny. She just hopes that there is some miracle way for her to walk to first base. If there is an uncertain benefit from propelling change, but the consequence of failure is perceived as career death, no one will do anything requiring initiative. This is a bear market in initiative.
An organization with a recent history of winning takes more risks. One with a recent history of disaster is risk averse. The principals in an organization have a flawed edge because their willingness to swing is either manic in a winning culture or depressed in an unlucky one.
As they say on Wall Street, “when they’re yelling, you should be selling, and when they’re crying, you should be buying.”
These kinds of distortions create opportunity, as any value investor will tell you.
Assuming there is nothing structurally wrong with the organization that makes it prone to failure, the valiant Alice should prefer setups where the organization is coming off a run of failed projects. If Alice can succeed where others have failed, she may reap orders of magnitude more plaudits than she would in an organization where success is seen as automatic. She can argue credibly that her project succeeded because of her leadership, not because of the organization’s culture or resources or history. She will be seen as having made things happen in spite of all that.
What else can Alice do to tilt the odds in her favor?
She can take a scientific approach, starting with small experiments designed to gather data with which she can build the project from the ground up.
Managing execution risk like this is not easy.
“Acting like a scientist is difficult for leaders because it can challenge their legitimacy. Undoubtedly, that’s because someone’s position in the corporate hierarchy is often assumed to be the result of experience and a track record of successful moves and ideas. Senior executives live in a feedback loop of positive reinforcement that makes them unlikely to question the foundations of their decisions. The scientific method, in contrast, requires intellectual humility in the face of difficult problems and relies on an objective, evidence-based process, rather than predominantly personal insight, to frame and address decisions.”
It means questioning the assumptions that underpin the status quo. Just because everyone else is buying System X doesn’t mean it’s the right choice for Alice’s organization (or any organization, for that matter). It means thinking independently. It means being open to different possibilities.
“When business leaders adopt this mindset, their biases and errors won’t get in the way of finding the truth. They will employ reason, demand evidence, and be open to new ideas. In scientific practice this means seeking independent confirmation of facts, placing more value on expertise than on authority, and examining competing hypotheses. Above all, skeptics question assumptions. They ask, ‘Why do we believe this?’ or ‘What is the evidence that this is true?’ History is full of examples where such skepticism helped overturn commonly held ideas and led to important scientific advances.”
Maybe the small vendor that is willing to build a product around Alice’s problem set is a better solution than a large company with a pre-built approach. Maybe it’s sufficient to solve a small set of problems. Maybe Alice’s organization is thinking about the problem the wrong way.
Alice should get information wherever she can.
“In business, ideas for hypotheses can come from multiple sources. A good starting point is customer insights derived from qualitative research (focus groups, usability labs, and the like) or analytics (data collected from calls to customer support, for example). As we have seen, hypotheses can also be inspired by anomalies, which can be found in everything from overheard conversations to successful practices that deviate from the norm at other companies.”
Alice can do three more things to improve her personal risk/reward profile.
One, she can enlist others into her project. Let’s say that her colleagues Bob and Carol each have related projects. If the three of them pool their risk and jointly run the three projects, they can spread the downside should any individual project go wrong, even as they improve their chances of being part of a winning project. In fact, the lessons they learn from each project can make the portfolio more likely to succeed. Risk pooling is a winning strategy; a back-of-the-envelope sketch of why follows the third item below.
Two, Alice should break the project into bite-sized pieces that build on one another. She proceeds to the next step only when the previous one succeeds. If any individual step fails, she can abandon the project or redo the step until it works. Each individual step is contained and reversible, even though the scale of the entire project is large. There’s a reason the all-at-once alternative is called the Big Bang.
Three, Alice can change the definition of success so that the project is more likely to be deemed a win in the internal competition for political capital among peers within her organization. She can frame the project as an information-gathering exercise (at least initially). Combine this with outsourcing execution (and political risk) as much as possible, and Alice has transformed the project into a Shark Tank in which she is the investor and the risk of failure (at least politically) falls on the heads of the outside contractors.
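To put rough numbers on items one and two, here is the back-of-the-envelope sketch in Python. The figures (a 60% chance each pooled project succeeds, five steps at 80% each for the chunked project) are assumptions for illustration, not estimates from this post:

```python
# Assumptions for illustration: each pooled project succeeds independently
# with probability 0.6; the chunked project has 5 steps, each landing with
# probability 0.8 on any given attempt.
p_project, p_step, n_steps = 0.6, 0.8, 5

# Item one, risk pooling: Alice, Bob, and Carol jointly run three projects.
p_at_least_one_win = 1 - (1 - p_project) ** 3  # 0.936, vs. 0.600 going it alone
p_all_three_fail = (1 - p_project) ** 3        # 0.064: rare, and the blame is shared

# Item two, chunking: a Big Bang needs every step to land on the first try,
# while the staged version risks only one small, redoable step at a time.
p_big_bang_succeeds = p_step ** n_steps        # about 0.33

print(p_at_least_one_win, p_all_three_fail, p_big_bang_succeeds)
```

Independence is doing a lot of work in that arithmetic; projects that share a sponsor, a vendor, or a budget cycle pool less risk than they appear to.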
Alice can increase the likelihood of a fat pitch.
If Alice succeeds, she is seen as an operating hero. If she fails, well, we learned something, and it was really the outside vendor’s issue. But now we know what approach not to take, and we can experiment with another.
The best setup for taking risk in a large organization is one with a history of loud, public failure, because the risk/reward calculus is distorted. People underprice reward and exaggerate risk. There is less competition for the best projects. Even then, the leader can manage her exposure by pooling her risk with others, chunking the work, outsourcing execution, and redefining success. Eventually, enough of these experiments strung together will constitute massive progress. Failure at any intermediate sub-step is something to blame on the contractor.
No risk, no reward.