In 1961, during an Antarctic expedition, Leonid Rogozov came down with appendicitis. While the condition is typically treated with relatively minor surgery, there was just one problem: the expedition's only doctor was Rogozov himself. Faced with a choice between certain death and an audacious attempt at performing his own appendectomy, Dr. Rogozov chose to live. With a small amount of Novocain for local anesthesia, he spent the next two hours cutting himself open, suturing an errant nick to his large intestine along the way, before finally removing his appendix and sewing himself back up.
While quite a feat, it was possible only because a highly qualified professional happened to face a relatively minor procedure. Had Dr. Rogozov instead suffered a blocked coronary artery, thrown a clot into his lung, or developed any of a myriad of other medical maladies, he would undoubtedly have died. The story calls to mind the adage that a doctor does not perform his own surgery, or its variation, "a man who is his own lawyer has a fool for a client." Yet it would seem there is a growing belief that we soon shall be doing just that.
With the growth of large language model AI software over the past two years, bolder and bolder claims are being made about the ability of AI to do just about anything. Entire companies are being built on the back of AI-generated code, marketing copy, and automation. Yet the data on AI's ability to perform complex tasks reliably is mixed at best. A multitude of examples, from AI-operated bots on social media to dedicated AI models pitted against standardized exams such as the MCAT and the bar exam, are being "stress tested" in the pursuit of an AI-dominated landscape. With ideas such as Moore's Law (the observation that the number of transistors on a chip doubles roughly every two years) being transposed onto the growth of AI, it has become a commonplace belief that soon any thinking professional can be replaced by an adequately trained AI. Yet, let us question that assumption.
Let’s Start Simple
Everyone knows what a cookbook is: a collection of ingredients and step-by-step instructions that, if followed, should produce a dish resembling the well-photographed example on the glossy page beside them. It is a figurative paint-by-numbers exercise: explicit directions about what is required, how to make it, how long it should take, and what the result should be. Yet we are all familiar with the experience of baking a cake only to cremate it, or serving some undercooked or oversalted nightmare to our friends and family. While practice makes perfect, there are literally years of experience between a home cook and a Michelin-starred chef.
Bento box from the first course at The Wolf's Tailor, a Michelin-starred restaurant in Denver.
So, herein lies the simple question: have cookbooks replaced restaurants and chefs? Of course not. The US restaurant business accounts for about 4% of US GDP, roughly $1.01 trillion annually, and employs 15.5 million workers, not all of whom are chefs. We might argue that this spending is for convenience or luxury, and fair enough. Yet our demand for convenience and luxury clocks in at over a trillion dollars a year and shows no sign of slowing down.
Perhaps a Bit More Complex
While the stakes of a ruined meal might be a disappointed family or a date gone wrong, the costs are relatively nominal. Perhaps you order takeout for convenience or to avoid giving your family food poisoning; either way, it's arguably harmless. So what happens when the stakes are raised a bit? Imagine something as innocuous as a speeding ticket. When you're pulled over, you sign a citation promising to appear in court, or you simply pay the fine. Most people pay the ticket, and some will go to court to see if they can negotiate it down a bit, but it's a rare case in which someone hires an attorney, because, as with the meal, the potential costs are rather nominal. Of course, if this is your third speeding ticket in the past year and you're at risk of losing your license, you might engage that attorney.
Ratchet up the risk a bit further, say to a misdemeanor charge, or step out of the legal realm and into the material one and consider doing some of our own plumbing, and we enter the territory of "is this a bad idea?" While the penalties for a misdemeanor are typically measured in hundreds rather than thousands of dollars, some carry jail time, and many would rather avoid even the slightest risk of spending a night incarcerated. While tightening a pipe or unclogging a drain can be relatively simple at face value, we might balk at ripping out a whole plumbing section in our home. Yet some might argue that an AI could get you through it. Simply prompt "how to negotiate down a careless driving ticket" or "how to install a toilet" and you'll get a full step-by-step guide from any number of AI tools. From there, all you need is confidence! And maybe a really good water-damage insurance policy.
How High Do The Stakes Have to Be?
There's a joke at my expense that I still chuckle at: "Lord, give me the confidence of this mediocre man." Despite the jest, a degree of overconfidence is almost a necessity to accept the premise that AI is going to replace the value of trained professionals in every corner of the world, white-collar and blue-collar alike. Fundamentally, while the "illusion" of an AI responding to you personally may improve with time, shifting from a text box that promptly returns a wall of text or a generated image to something akin to "I, Robot," we will still be asked to accept what it tells us as accurate and then to execute its instructions to the best of our ability. So, how high do the stakes have to be?
Are you going to self-diagnose cancer based on your text inputs and an AI's response? What about treating it? If there is a world in which an AI can qualify as a licensed medical provider and prescribe treatments, will you administer your own chemo? What if there is a tumor that needs to be excised? If it's in a physically reachable spot, will you try to be the first AI-led Rogozov? Or would you rely on a surgeon following an AI's directions on how to perform the operation? What about a layperson with that same hypothetical "medically licensed" AI telling them how much anesthetic to push, where to cut, and so on? After all, if the AI is that good one day, why pay for the surgeon?
What of less life-and-death matters? You might argue your speeding ticket with advice from an AI, but what about a white-collar criminal charge? An assault charge? A murder? Will you try self-reliance in the civil domain for the merger of your business or the sale of your home? What about buying out the company from your family, trying to file a patent, or suing someone for violating yours? What about contesting a parent's will that leaves you with nothing and your misbegotten sibling with everything?
For those of us in the financial domain, we've been told for over a hundred years that this generation of computing will replace all the accountants, financial advisors, bankers, and so on. First it was the calculator, then the spreadsheet, then the internet, then cloud computing and software as a service, and now AI. Yet at every stage of technological innovation the result has been the same: few jobs are eliminated; instead, the existing jobs become substantially more productive. AI is no different. Despite a plethora of guides on how to perform a Roth conversion or why you should or shouldn't invest in this or that, the marketplace for financial planners is underserved at a ratio of one planner for every 7,302 households, and many planners stop taking clients after about fifty or so. While AI might help a small number of planners serve more clients, at face value there is little risk that AI will take anyone's job in finance, and even less that it will replace those who do more than crunch numbers, who help real people make their way through challenging life events, both financial and non-financial. Because, as the observation goes, there is a world of difference between knowledge and execution. Otherwise, we'd all be stunningly healthy, incredibly fit, and fabulously wealthy, no? After all, how to do those things is well within public knowledge, and I'm sure AI can help you make it happen.
Comments
Ah, job security for financial planners. Good. But, but, can we get rid of the lawyers?