Illustration: Shutterstock
Haymaker Introduction
Aloha from Hawaii, Haymaker readers! That’s a big clue that yours truly is on a much-needed vacation, or what passes for one in my life these hectic days.
Team Haymaker plans to keep up the publishing flow while I’m in paradise, with a bit of a change of content. Today’s note, authored by my creative colleague Mark Mongilutz, is an example of this, with its focus on Artificial Intelligence (AI). It’s not typical Haymaker fare, but it is a vitally important topic.
The rapid shift to an increasingly AI-dominated world holds both enormous promise and peril. The benefits are nearly endless, with one example being the potential for it to reduce the massive waste that characterizes the U.S. healthcare system. Companies that develop proprietary technology which actually manages to rein in America’s out-of-control healthcare outlays should make outstanding long-term investments. Of course, this has been an area where the promise has greatly exceeded the reality. Still, a series of breakthroughs seems inevitable, promising enormous societal benefits someday.
On the perilous side, AI lends itself terrifyingly well to autocratic states seeking to monitor and intimidate their citizens to an even greater degree. Sadly, that would be all of them, with China in the lead. Worryingly, Western governments are also increasingly relying on AI to keep closer tabs on their citizens.
Returning to my playground of the investment world, AI has made considerable inroads, though with mixed success. Growing amounts of capital are now at least partially AI-driven. In the past, I’ve chided these strategies by pointing out their less-than-impressive results. Somewhat irreverently, I’ve quipped that AI-reliant programs and algorithms often seem more artificial than intelligent. So far, anyway, they seem better at exploiting fleeting and microscopic pricing inefficiencies (frequently at the expense of retail investors) than coming up with long-term, market-beating investment ideas. Undoubtedly, they will improve over time.
Yet, I believe human supervision, based on deep experience, research, and, perhaps most essential of all, common sense, will remain critical. In a way, this reminds me of another application of AI, with a similar acronym: AVs, or Autonomous Vehicles.
We’ve long been told that AVs are right around the corner, ready for prime time. Unfortunately, though, they seem to struggle with recognizing actual corners and other driving challenges… like not hitting pedestrians. There is no question that advanced technology is a great boon to drivers, but letting cars run on autopilot is, for now, a leap too far. That’s likely to be the case with myriad AI technologies far into the future, possibly forever. Consequently, regulators and policymakers need to be extremely vigilant to ensure AI applications are used to enable human flourishing and not oppression.
-David Hay
Chatty Automatons & Expendable Hominids
“And your very flesh shall be a great poem.” - Walt Whitman
The narrative genius of Steven Spielberg’s Jurassic Park is in its ability to captivate the imagination while articulating a dose of philosophical insight in just the right measure for a summer blockbuster. The film’s heart lies (as we all know and feel) in the first instance of its protagonists laying eyes on living, breathing dinosaurs (the Williams score helps quite a bit). Its intellectual core is, of course, Dr. Ian Malcolm’s critical assertion that Hammond’s “... scientists were so preoccupied with whether or not they could, that they didn’t stop to think if they should.” The “should” here applies, as the central characters assert, to the arrogant resuscitation of long-dead lifeforms whose biochemistry, instincts, and behavioral expressions might be immutably incompatible with the present-day earth and with humanity itself.
There’s more real-world relevance in the fictional Malcolm’s damning assessment now, some 30 years later, than perhaps Michael Crichton (book’s author), David Koepp (film’s screenwriter), or Steven Spielberg could have foretold. Today, technology and science have become exalted in ways that extend their roles beyond the mere utilitarian, taking on a sort of fetishized mantle in the popular discourse. If I sound in any way jaded, don’t interpret me entirely as such. After all, I drafted this work in Google Documents, using a laptop, then shared it with you using Substack’s mass-email functionality. Without modern computing/communication technology, this diatribe’s audience would be limited to whomever I could convince to hear me out at my local Starbucks.
No, I am not so much jaded as I am perplexed by a strange inversion of priority, and perhaps with the rate at which technological advances are landing on the face of humanity with scarcely a moment to spare for deliberative review as to their potential harms. In other words, who’s asking the techies, as Ian Malcolm might, if they should? “Could” is all that seems to matter.
The specific inversion to which I am referring is that of emphasizing the tool itself over the tool’s function. Obsessive technologists and the fervent celebration of all things mechanical have confounded me over the years. Our earliest tool-making human forebears did not fashion cutting instruments from rock because they wanted to marvel at the cutting instruments themselves, but because they needed (just imagine) to cut something. I didn’t download Microsoft Word to marvel at Microsoft Word; I downloaded it because I needed a word processor. The fetishizing of tech seems to have the relationship backwards. The need becomes secondary; the tool takes on a starring role for its own sake. Where does such a mentality lead?
Michael Johnston of Evergreen Gavekal, the Haymaker’s firm, recently penned a piece on ChatGPT and whether or not AI’s “iPhone moment” had just manifested. The result is inconclusive, even after Michael asked ChatGPT for its own input, but the question the piece raises is nevertheless deserving of analysis. Was, as Michael rhetorically inquires, “2022 a Turning Point for AI?”
To learn more about Evergreen Gavekal, where the Haymaker himself serves as Co-CIO, click below.
It is often argued that humanity divorced itself from a conventional evolutionary path the moment we began manipulating the environment around us. No longer did we rely on slow-to-manifest biomechanical and neural adaptations to rescue us from the perils of living in an inhospitable world. We tamed fire itself, we shaped materials into shelter, we cultivated crops, and we invented internet memes. Okay, there were some other items on the list ahead of internet memes. (The Haymaker was sure to note in his review of this piece: “Don’t forget plastics and, arguably, the most amazing invention ever: fiat currencies, enabling the financialization of almost everything.”) But the progression has been effectively along those lines; from gaining mastery over fundamental elements, to refining civilized living into something of consistent comfort and endless trivia.
In his remarkable novel Blindsight, biologist and author Peter Watts re-works the “survival of the fittest” maxim to “survival of the least inadequate”, essentially (and cynically) arguing that every lifeform is at the mercy of forces against which no combination of strengths and defense mechanisms can hold out indefinitely. We trot along for a bit until a predator or microscopic life-ender of one sort or another catches up to us; maybe it’s a sabretooth tiger, maybe it’s a plague. Either way, inadequacy is the hallmark of all living creatures; but, per this argument, some are just a little less inadequate, and can therefore live a day or two longer than the rest. Peter’s not necessarily the sunniest of biologists.
In humanity’s case, we’ve allowed for the inventiveness of our brain matter to make up for various physical shortcomings. No claws? Who cares? Here’s a spear. No fur? Whatever. Have a sweater. Children take years to become even remotely self-sufficient? So what? Here’s a civilization, complete with laws, and roads, and food surpluses, and walls to make sure subsequent generations of slow-developing primates will survive to one day create the Internet and the Roomba.
Okay. That all sounds good, at least for the first few thousand generations. But what if, as some futurists and concerned philosophers have asked, that pathway leads away from extinction at the hands of an unforgiving natural world and toward an extinction brought on by ourselves? What if we brainy bipeds do ourselves in by allowing the technology we once used to survive to eventually surpass… as in surpass us?
Well, that’s probably a bit overblown at present, but there are still some chilling hypotheticals in need of attention. If the technologists of today, many of whom seem to actively despise human existence in its extant form, can’t be troubled with the “... if they should” question, if they’re going to fanatically work away at rendering their own abilities obsolete, then the rest of us need to be ready for whatever it is they plan (even unintentionally) to “bestow” upon us.
The beautifully plaintive Neil Oliver gets it mostly right in his recent video, with the subtitle of “… is this really how we want to live?”. He is hypercritical of the “transhumanism” movement, and rightly so; I likewise find abhorrent the notion that all our bodies need is a few mechanical implants and we’ll be as nature intended us to be all along. And at the risk of laying bare my sensitive inclinations, I metaphorically weep each time I see a member of my species donning a Metaverse helmet/headset or the like – the world teems with lands to see, creatures to marvel at, waters to wade in, and skies to overwhelm even the least poetic among us. Whatever Zuckerberg has concocted in his reality-loathing laboratory, it can’t hold a candle, or even a digitally rendered candle, to the majesty of earth’s bounty which is, by virtue of our presence in the natural order, nothing short of an inheritance, one we share with all other terrestrial life.
Where I find some difference with Neil is in his argument that instead of “…populations [being] slaves to technology … technology should be the slave to the people”. Of course, if forced to choose, I would prefer the latter option. But where I would encourage a still more impassioned maxim is in voicing the reminder that in creating a world of enslaved technology, we might also create a world of utter human dependence upon technology. Dependence too often leads to uselessness, to mental decline, to physical atrophy (a real problem these days) and to spiritual indifference. Perhaps rather than viewing machines as slaves to our species, we should view them as optional companions, the sort with which we can accomplish more, but without which we could perhaps still find our way…
This diatribe would be insufficiently dimensional if it were devoid of good-faith concessions as to what AI technology might yield for humanity. One field stands out: medicine. Indeed, if AI can assist human minds in unraveling any and all medical mysteries, we would be acting immorally in not accepting such assistance for the health- and vitality-preserving blessing it would surely be.
But what if that blessing comes with a price so high as to marginalize the benefits to the point of irrelevance? Imagine walking through a corridor in which, on one side, we see dozens of patients recently restored to full health via the AI-blueprinted, robotically performed implantation of new organs, all perfectly compatible with their hosts’ respective biochemistry. Their vitality is maximized, their longevity ensured. Brilliant, indeed. On the other side, we see dozens of human beings mindlessly taking in whatever streaming garbage the all-knowing algorithm has decreed humans should visually ingest in the interest of uniformity and social acceptability. Nobody thinks, nobody questions, nobody reads, nobody changes the channel. The Wall-E analogy is all too obvious. Life will have been “optimized” by machines that know better than we do – but, hey, at least our bodies are intact. Now imagine that both sides of the corridor belong to one unified realm of AI-governed, 21st-century American living.
The question then becomes: do we have the right sociocultural safeguards in place to avert the zombifying of our species without forfeiting the biological, intellectual, and technical boons AI might bring to the table?
You see, in the case of just about every other technological milestone throughout the unusual human project, the one thing we could count on was that human creativity and thought would remain the sole dominion of humans themselves. Nobody ever imagined that their Game Boy would start conversing with them on matters of existential importance. Nope – it was just crappy graphics with a spinach-green color scheme. Enjoyable in its way, but no hint of AI therein.
With ChatGPT, it’s different. The singular apparatus with which our species sidestepped the evolutionary game, that of ingenuity and creative capacity, is very possibly on the chopping block. That’s an eventuality that I don’t actually think will lead to a swift end to our species, but it could lead elsewhere, and not necessarily to a place you, or I, or your Roomba will want to live. A place where lethargy, nihilism, and a bankruptcy of purpose could well be the orders of the day. In many ways, I feel we’ve already been living a dress rehearsal for that outcome.
And yes, I could be wrong, just as ChatGPT is apparently wrong about quite a bit, at least at this early stage. Perhaps AI functionality will help us to perfect the mechanics of civilization and engineer for us an idyllic world of fruitful social discourse and fulfilling personal endeavors. Perhaps what machine technology did to compound our physical capacities, AI will do for our cerebral potential.
A distinction here, and one that should worry you, is that when we built the first wrecking ball, we knew what we wanted to do with it… and why. Can the same necessarily be said of ChatGPT, or of AI at all? We’re barreling towards this end without a good braking mechanism (talk about a good tool), and with our windshield fogged up by a steamy concoction of blind ambition and fetishized innovation for the sake of fetishized innovation. We’ve seen what happens with SPACs – their indefinite nature too often leads to pain and loss. AI might well be the ultimate SPAC, only it’s one in which we’re all invested, whether or not we wish it so.
Anyway, this is probably a useless exercise. Privacy concerns have done nothing to limit our use of the Internet, Chinese labor concerns have done nothing to diminish Apple’s profits, and existential risks to human wellbeing will likely do nothing to halt the advances of AI – techno fetishism is proving more attractive than techno cautionism. After all, did Ian Malcolm win the day? We’ll surely be at 15 films in the Jurassic Park/World franchise by decade’s end, assuming ChatGPT can crank out enough passable scripts in that time to keep said franchise viable. If not, its algorithm will surely tell you what to watch instead. “Enjoy it, or else…” they might say.
For my part, I’ll simply wish you a good day.
-MJM