A Modest Proposal for AI
A Rejected Product of the Hinternet Essay Prize Contest
At the end of August I submitted an entry to an Enlightenment-style prompted essay competition held by The Hinternet, a publication I would characterize as an offbeat intellectual alcove.
I loved that the submissions were anonymous. I loved the prompt less: “How might current and emerging technologies best be mobilized to secure perpetual peace?”
“Perpetual peace” is an oxymoron. The fact that I would never pose such a question, though, is precisely what convinced me that trying to answer it in good faith would be a valuable exercise. I produced my attempt (you may know — and The Hinternet certainly knows — that the term “essay” comes from the French for “attempt”) in under a week, a pace inappropriate for this essay contest but appropriate for a Substack article, which is why I decided to share what I wrote here, as submitted.1
I would also like to link to the winning essay of the contest, by Carlyn Zwarenstein. Zwarenstein’s essay is deeply researched, engagingly written, and passionately argued. Competitions aren’t always right, but this one was, as far as I can tell.
If you do end up having time to read my attempt as well, note that it could be read as a speculative, sci-fi footnote to Zwarenstein’s essay. It is, to some extent, a Swiftian modest proposal, though less brilliantly satirical than Swift’s suggestion that poor families sell their children as food for the rich; in fact, unlike Swift, I would be open to my proposal being tried. The technological and social barriers would be great (and are admittedly only vaguely addressed by my attempt below), but my piece does, I hope, serve the same function as Swift’s “A Modest Proposal”: it forces the reader to face the implications of any argument about why one wouldn’t take a particular measure if one could.

“How Might Current and Emerging Technologies Best Be Mobilized to Secure Perpetual Peace?”2
Unless it’s the kind of story in which spaceships might as well be seafaring ships, the “sci” of “sci-fi” is usually the object of a cautionary tale, or at the very least its interaction with human nature is, as when the well-meaning robot in one of the tales of Asimov’s I, Robot struggles to fulfill a human’s command without doing him harm. This tendency to see what could go wrong is a hallmark of what I would call narrative thinking, and there is wisdom to it; it frames the world through conflict, with peace disrupted for a story to begin and only sometimes achieved for the story to end. In other words, peace is fundamentally non-narrative; and so, in some sense, narrative thinking prevents us from imagining peace. There is an educated class that seems immune to narrative thinking, however, and perhaps even proudly so: the creators of novel technologies, currently associated with Silicon Valley (techies for short, within this essay), and, to some extent, their entrepreneurial accomplices. Theirs is a mostly utopian thinking, which is to say fundamentally anti-narrative; it weaves situations of equilibrium, depicting a world in which, as Jaron Lanier sarcastically puts it in Who Owns the Future? (2013), “we will not have to call forth what we wish from the world, for we will be so well modeled by statistics in the computing clouds that the dust will know what we want.”[1] Any narrative thinker would know such a thing is impossible. Fairytales caution about unrestrained want, after all, as in the Grimms’ “The Fisherman and His Wife,” in which wishes don’t lead to contentment, only back to poverty.
Or, for a more contemporary example: “What is happiness?” the fictional ad man Don Draper snaps in the show Mad Men. “It’s the moment before you need more happiness.” Yet it is that utopian mindset, the absurd elimination of want, egged on by the myth of infinite growth central to the current ideology of business, which shapes an influential segment of the techie world, one that has had outsize impact on the present and future of the rest of us.
In the process of shaping our world, the techies have proven right the wisdom of narrative thinkers: that there is no such thing as a wish without consequence, that one should never try to make deals with the Devil, and that villains do exist. We live in a world in which the ideal of free information, pursued with a utopic disregard for consequence, has led to a world imagined at mid-century by the father of cybernetics, Norbert Wiener: one where machines are able to broadcast signals which modify behavior through feedback loops, though he imagines it with less clarity than Jaron Lanier likes to suggest. I didn’t read Wiener’s book The Human Use of Human Beings (1950) until after I had formulated my notion of narrative and utopic thinkers and written the above examples of truths learned in fairytales, so I was astonished when Wiener expressed the very same notion towards the end of that book: “In the myths and fairy tales that we read as children we learned a few of the simpler and more obvious truths of life, such as that when a jinnee is found in a bottle, it had better be left there; that the fisherman who craves a boon from heaven too many times on behalf of his wife will end up exactly where he started; that if you are given three wishes, you must be very careful what you wish for.” Even the father of cybernetics was ignored when it came to this particular warning. It is precisely because we have lost to the utopian mindset that it would behoove those of us who are narrative thinkers, though generally considered of lesser intelligence by techies and of lesser value by business investors, to imagine better worlds, however contrary to the very nature of storytelling that may be. What narrative thinking does have to offer is wisdom, particularly about human nature. I say wisdom, not intelligence. The techies and their entrepreneurial accomplices know plenty about human nature when it comes to leveraging and manipulating it.
What they do not have is the wisdom to know they cannot do this without serious ramifications. Could we combine the wisdom of narrative thinking with a techno-utopian mindset?
Having framed the question, let’s state the problem that may be at the heart of the impossibility, so far, of achieving perpetual peace: Every large-scale, complex human society is controlled by a small group of elites. A comprehensive and accessible account of this is laid out by Peter Turchin in End Times: Elites, Counter-Elites, and the Path of Political Disintegration (2023). (Turchin, perhaps not so incidentally, was the son of a cyberneticist.) Sources of power within large-scale societies vary: violence (militocracy), laws (bureaucracy), religion (theocracy), technology (technocracy), wealth (plutocracy), etc. The principle holds: complex societies have thus far been defined by the rule of the few over the many. With this comes a cycle of instability, the theory goes, because so-called elite aspirants inevitably – through reproduction, upward mobility, or both – become too numerous for the number of elite positions. This is one of the core hypotheses of cliodynamics, which made headlines when Turchin’s team’s decade-old prediction, that 2020 would be an unrestful year, came true.
The beauty of cliodynamics is that it doesn’t take sides about how things should be; it merely observes how they, statistically, are. You can take a Hobbesian position about its conclusions and say that while overproduction of elites may lead to social instability, societies need hierarchy in order to shelter humans from the ugliness and brutality of nature – their own and that of nature at large. The move, from this point of view, would be to simply prevent the overproduction of elite aspirants by, say, limiting upward mobility in society in general and reproduction among the elites in particular. You could just as well take the Rousseauian view, though, and say that the repeated breakdown of order within large-scale societies due to elite overproduction is proof that civilization itself is a sick undertaking, one that tarnishes our fundamentally egalitarian nature. In Civilized to Death (2019), writer Christopher Ryan employs the powerful metaphor of grasshoppers and locusts.[2] Locusts are, he points out, just grasshoppers whose behavior and appearance change in response to overpopulation. Like the grasshopper, Ryan argues, human beings are harmless in small groups but, once they form large-scale societies, they turn into destructive monsters. It’s hard to argue with this metaphor. It even aligns with the conclusion of many anthropologists who study hunter-gatherer societies; the thinking goes that, given that we have lived in small, mostly egalitarian bands for the vast majority of our existence as a species, it must be our nature to do so.
I find this conclusion strange, given that it was a natural process that brought about our tinkering and experimentation – an aspect of our nature which, we might hypothesize, has made us a particularly adaptive species (even compared to other hominids, which have all gone extinct) and which eventually led us to discover agriculture, a practice that over time created a need for increased resource guarding and higher reproductive rates to supply laborers. The hunter-gatherers, who existed alongside farmers for a long time and continue to exist in small numbers today, got crowded out by a rather natural process. The contrast between the relatively peaceful and collaborative nature of most humans and the warmongering and exploitation of large-scale human societies is striking, however. It would be rational to conclude that this behavior on the part of large-scale societies is simply the nature of our species, just as turning into locusts under certain conditions is natural to grasshoppers. If we take this to be true, then we must either endure the cycle of elite exploitation (felt by most as intensifying poverty) followed by elite overproduction and subsequent (often bloody) unrest, or content ourselves with being hunter-gatherers, which Rousseauians, deep down, dream of doing (I confess to leaning in that direction myself) but which would require a mass extinction event.
The point of this essay is to imagine better alternatives, and one way to do so is to ask: What if the problem isn’t all of us, as individuals, morphing into worse versions of ourselves when we are part of a civilization, the way grasshoppers morph into locusts? What if the problem really is just a few villains who are given outsize influence through the mechanisms of civilization? For the sake of simplicity, let’s allow that large-scale societies must have elites to steer them, and let’s also consider that, as Peter Turchin points out in End Times, not all elites have acted only in their immediate best interest;[3] in fact, some of them implemented policies that improved the lives of ordinary people considerably, even at their own expense. Mind you, whatever action elites take for the middle and lower classes is only ever at their immediate expense: social instability resulting from a populace reaching its breaking point is particularly destructive to individuals in the elite classes, so ensuring a certain level of prosperity within the populace serves elites in the long term. The real question, then, is: Why do some elites play so dirty with their power, without consideration of the short-term negative effects on the majority and the long-term consequences for everyone? There are enough resources in the world for many to live in luxury without a vast majority of the population living in poverty. Yet civilizations generally wind up in that situation before they succumb to elite overproduction. Must this be the case? Or is this due to the greed of just a few elites who go unchecked, simply because there is nobody above them to do so?
I propose that the answer is quite simple: Numerous studies have shown that crime is concentrated among a small group of offenders. To cite just the most measured evidence, a 2017 meta-analysis published in the journal Crime Science, playfully titled “Ravenous Wolves Revisited: A Systematic Review of Offending Concentration,” found that, based on 73 studies, the worst 10% of offenders account for 37–43% of offenses and the worst 20% for about 52–60%. This is a large skew. I don’t see why such a skew towards offense wouldn’t be present among elites. If we accept this to be true, then the elites aren’t exploitative by virtue of being elites; there is simply a fraction of elites who wreak an outsized amount of havoc. Such elite offenders pose an outsized problem because they potentially have millions of victims rather than the few of the average criminal. Calls for accountability, the ones that are given any credence at all, are generally just the siren songs of frustrated elite aspirants who leverage followings with sweet words of promise to abolish the other elites, with the real result being a consolidation of scarce power in the hands of a different set of elites, who then themselves go unchecked. Elites, in other words, do not police each other very well. This is understandable. They shouldn’t be expected to, just as neighborhoods shouldn’t be expected to police themselves. The problem is simply that there is nobody above them to do so.
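To make the reported skew concrete, here is a minimal sketch (with entirely synthetic counts, not the meta-analysis’s data) of how a “worst X% of offenders commit Y% of offenses” figure is computed from per-offender offense counts:

```python
# Illustrative sketch only: the counts below are synthetic, invented to
# show the shape of the calculation, not drawn from the cited study.

def concentration(counts, top_fraction):
    """Share of all offenses committed by the worst `top_fraction` of offenders."""
    ranked = sorted(counts, reverse=True)          # most prolific offenders first
    k = max(1, round(len(ranked) * top_fraction))  # size of the top group
    return sum(ranked[:k]) / sum(counts)

# Synthetic, heavily skewed population: a handful of prolific offenders
# alongside many one-time offenders.
offenses = [50, 30, 20, 10, 8] + [1] * 95

top10_share = concentration(offenses, 0.10)  # worst 10 of 100 offenders
top20_share = concentration(offenses, 0.20)  # worst 20 of 100 offenders
```

With these made-up counts, the worst tenth of offenders already accounts for more than half of all offenses – the same shape of skew the meta-analysis reports, at a more extreme magnitude.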
What kind of behavior should be policed among elites? Policing elites would be different from the punitive policing of the general populace, which punishes various forms of theft, abuse, and murder, along with more arbitrary breaches, after the fact. Policing the elites would take the form of intervening in actions which are part of a natural pursuit of status but which have outsized negative impact on society at large. The host of the podcast What is Politics?, who goes only by his first name, Daniel (which makes citing him awkward, frankly), hypothesizes that over-accumulation of status is akin to overeating and lack of exercise. Most of us humans are somewhat lazy and gluttonous unless we force ourselves to be otherwise, because we did not have to evolve food restraint and exercise motivation in our original, natural environments. This is called an evolutionary mismatch. So, too, many of us cannot arrive at the point of feeling we have attained enough status. “Status” means different things in different societies; in much of the world today, status simply means wealth. Jeff Bezos is a glutton for status, not money. Ditto Elon Musk, etc. When does accumulation of status become a crime, though? When does it become an illness rather than the natural pursuit of what we are programmed to want and need? Far fewer people have the opportunity to pursue status to a point of grotesque absurdity than may engage in gluttony and sloth. Yet the pursuit of status is of more consequence than other evolutionary mismatches because, unlike gluttony and sloth, which at most might harm the person engaging in them, status must be pursued at the expense of others. That is the point of it, really. So, a good measure of whether it becomes a pathology is the extent to which it lessens the wellbeing of others. Could we calculate this?
Could a much more intelligent being calculate the point at which an individual’s accumulation of status becomes an illness – one bad not just for the person with the disease but for society itself?
The most obvious plot twist, if you were to write a sci-fi story about the above scenario, would have to be that the people who trained an AI to police the elites would actually turn out to be the new, more shadowy, elites. The Animal Farm scenario, in which the pretense of equality is just a new way to consolidate power (yet another example of an elite change-of-guard posing as a revolution), has one fundamental characteristic, though, which is hard to think beyond, because tempering it would require a technology we can still only imagine, but which we are now closer than ever to being able to create: In every scenario in which we have tried, as a large-scale society, to achieve justice and prosperity for all, we have run up against the problem of the overwhelming motivation of certain humans to consolidate power, to seek status, whether that is wealth or positions in bureaucratic apparatuses or broad popularity, at the expense of others. It has been the ultimate Catch-22 of our quest for peace. This is a human element – not just of the locust-humans of civilization but of all humans. Egalitarian hunter-gatherer societies deal with the natural desire to over-seek status, too, but have the option of employing a system of teasing, exile, or even murder to get rid of anti-social individuals. This is called a reverse-dominance hierarchy. The issue with large-scale civilizations is that what we might call an anti-asshole safeguard, which we all have the natural propensity to implement when we are not forced, for material reasons, to put up with exploitation, is short-circuited at civilizational scale. Selfish people thrive in the context of large-scale civilizations because they can hide behind its anonymizing mechanisms and, in this way, status-motivated individuals sometimes have the opportunity to consolidate an enormous amount of status at the expense of general wellbeing. What if we could create an AI that simply restrained the worst of such individuals?
The difference, here, from previous attempts to overthrow dictators and oligarchs would be the fact that there would be no human or group of humans doing the judging of who needs to be restrained. When humans are involved, their natural desire for status drives them to make selfish choices, like taking power for themselves after they have overthrown existing power structures. The same mechanisms of complex interdependence and systems of exploitation which make it hard to get rid of status-gluts within large-scale societies also cause the power vacuums left behind by fallen status-gluts to be quickly filled by new status-gluts. An AI would have no such desire to fill power vacuums and could be a true view from nowhere, which is to say non-ideological, and that would make all the difference. The idea that an AI can be less selfish, and therefore more neutral, than humans is an important hypothesis. It puts the fear mongering about an evil AI on the part of some elite critics in context: Might they simply, consciously or not, be afraid of its neutrality, its fairness? Might they desire, consciously or not, to preemptively frame it as having the selfish motivations of a human to cast doubt on its fairness?
My proposal is not utopic. It would not eliminate inequality or the existence of the rich and poor, would not mark the end of corruption or crime, would not create a world in which every single person could live their dream, would not cure any disease. Its goal would not be perfect equality, merely a slight restraint on those whose quest for status becomes a problem for society at large. Small interventions can make a big difference. To illustrate this point, here’s a somewhat simplistic thought experiment: Consider the United States, a country whose wealth distribution is approaching the breaking point characteristic of Turchin’s ages of discord. If you leveled the wealth of the top 1% in that country down to the lowest cutoff of the 99th percentile, you would gain (with many caveats, due to most money being the future-money-magic of modern finance) around 32 trillion dollars, more than twice the U.S. federal budget for a year. All you would have to do is make the roughly 3.6 million of the very richest people in the US slightly less rich, in a way that would not influence their lifestyle in any significant way. I asked an AI to make this calculation for me, to be honest, and did not check every one of the sources it used for its numbers, but however you might tweak the numbers, it would likely not change how little this wealth redistribution would affect the lives of a sliver of the wealthiest and how much that money could do in the public sector. What could an AI, trying to ensure the maximum prosperity for a populace, do with that money? How much better could America’s famously bad public education get? How much more accessible could its healthcare get? This is, admittedly, a ham-fisted example of mere wealth redistribution, which I think is the least interesting way of tempering the over-indulgence in status of a few. So many resources are wasted due to human error, poor organization, and greed.
The greed only pertains to a few, I believe, and is merely made possible by the majority’s error and poor organization. Isn’t being a more organized, faster, less distracted version of a human brain what AI does particularly well, even at this early stage? Add to this a lack of ego, the lack of a family to feed or status to gain or protect, and AI starts looking like a true angel of our better nature.
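The leveling arithmetic in the thought experiment above can be written out explicitly. A minimal sketch, with inputs that are deliberately round, hypothetical placeholders (chosen only so the output lands on the essay’s own unchecked ~32 trillion figure), not verified statistics:

```python
# Sketch of the wealth-leveling thought experiment. Every input below is
# a hypothetical placeholder, NOT a verified statistic.

def leveling_surplus(total_top_wealth, percentile_cutoff, people_in_top):
    """Wealth freed by capping each member of the top group at the
    net-worth cutoff of the percentile just below them."""
    return total_top_wealth - percentile_cutoff * people_in_top

# Assumed inputs: 3.6 million people in the top 1% (the essay's headcount),
# an assumed $12.5M net-worth cutoff at the 99th percentile, and an
# assumed $77T held by the group in total.
surplus = leveling_surplus(77e12, 12.5e6, 3.6e6)   # -> 3.2e13, i.e. $32T

# Against an assumed ~$6T annual federal budget:
budget_multiples = surplus / 6e12                  # roughly 5x
```

However the placeholders are tweaked, the point of the sketch survives: a cap that leaves every member of the group at the 99th-percentile cutoff frees a sum that dwarfs annual public spending.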
I’m not sure at what point imagining currently impossible technologies tips away from insight about technology and into mere fantasy. Then again, over a century ago, E.M. Forster imagined a machine that addicts humans to a world of ideas and convenience, which they all imbibe, bedridden, in solitude. “The Machine Stops” (1909) would have read as more of a fantasy story in its time; the personal computer wouldn’t become a plausible idea until mid-century, or a widespread reality until the end of it, and a truly interactive network of computers that could respond to all its users and usher in an age of over-convenience and screen addiction wouldn’t arrive until the beginning of the next century. On the other hand, reality surpasses even sci-fi in certain respects. Forster imagined his machine as a room that every person lives inside, not something we carry around in our pockets and which is woven into our lives and into the economy to such an extent that it can’t simply “stop.” We live in a much more technologically complicated world than Forster could have imagined over a century ago. However, if he had thought too hard about the technical possibilities of his time, he could never have written such a prophetic story. Almost half a century later, Norbert Wiener, too, was prophetic when he imagined the possibility of a network which could modify behavior through broadcasting, but he speculated about the behavior-modifying broadcast moving through the blood. The fact that a degree of behavioral and cognitive modification is possible merely through sounds and images on a screen may have seemed too simple, perhaps too dystopic. In light of this, an AI that could temper certain extreme reaches of society is not so outlandish.
How would such an AI enforce itself? Would we allow it to force its bidding on society and, if so, how? Perhaps we wouldn’t have to. Consider a constitution. It’s an inanimate thing which lays out a set of parameters for a society to live by. It cannot enforce itself – only people can. Strangely, they do, however imperfectly. And if pieces of paper with words on them can provide scaffolding for a society, why couldn’t an AI, which could continually write itself, interpret itself, and actually adapt a set of core values to a given circumstance without having to depend on status-seeking humans to interpret it? It could do all this with only limited direct influence over its environment. The narrative thinker might imagine such an AI being treated like a God, and the ability to speak to it directly and interpret it for others becoming a new way for status-seeking humans to consolidate power. But the advantage of AI over a mere written document, like a constitution or a religious text, is precisely that it does not have to be interpreted by humans. It would interpret itself with the same neutrality with which it writes itself. Its powers of action might be limited to a very specific fund, which it would collect by preventing the most outrageous accumulation of wealth. It wouldn’t be allowed to arrest people, or even punish anyone. A truly intelligent AI should see so many steps ahead that, like a less magical Minority Report scenario, it could simply prevent the over-accumulation of status by a clever set of interventions. Imagine a world in which Jeff Bezos could still be rich but didn’t destroy the publishing industry. Where the richest didn’t miss what they didn’t have and the rest of us enjoyed the slightly more level playing field.
How could we trust a non-human thing to determine right and wrong if we ourselves are seemingly uncertain about the distinction? Well, I don’t get the sense that most humans fail to understand right from wrong, if we define right simply as actions that, to the best of our ability, do not harm others, and wrong as those that do. All of us have the tendency to be seduced by wrong when it is for personal benefit at little cost – this is not because we do not understand right from wrong, but because we are motivated by selfish desires. However, it’s really just a small percentage of humans who are totally numb to right and wrong, and an even tinier one who have the power to enact wrong at scale. An AI does not have to feel as a human does in order to do right – on the contrary, perhaps it shouldn’t. Consider the trolley thought experiment, sometimes used to warn against AI-mediated justice: If a runaway train is about to crash, killing five people, but can be diverted to a track where it will kill one person, should it be diverted? If your prime directive is to prevent as much immediate death as possible, the answer is obvious. The only reason it isn’t obvious to humans is that we are incapable of seeing all humans as being of equal value, whether because people we know are dear to us in ways strangers aren’t or because we consider some humans more beneficial to society than others. Even the latter is not altogether wrong: if the five people on the track are serial killers, then not pulling the lever is better for human flourishing in the long term, even if it results in more deaths in the short term. What we know to be right and wrong changes depending on the complexity of the data we can consider.
This is what scares people about AI-mediated justice: AI may see things too simply, because it is directed to only look at a very narrow set of parameters, and end up making an overly literal decision, or perhaps it’s able to see so many steps ahead that it no longer makes sense to us, and therefore no longer feels just. I would ask, however: Is this any worse than human-mediated justice, which is so often made crooked by status-seeking and group loyalty?
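The trolley reasoning in the paragraphs above can be reduced to a tiny decision rule. This is a toy sketch, assuming the “prime directive” is simply to minimize a weighted death toll; the weight function stands in for whatever richer context such an AI might consider, and none of this is a real ethics model:

```python
# Toy decision rule for the trolley framing. The default weight treats
# all lives as equal, which the essay argues humans struggle to do.

def should_divert(on_main_track, on_side_track, weight=None):
    """True if diverting the trolley yields the lower weighted toll."""
    if weight is None:
        weight = len                       # default: every life counts as 1
    return weight(on_side_track) < weight(on_main_track)

# Classic case: five strangers ahead, one on the side track -> divert.
classic = should_divert(["a", "b", "c", "d", "e"], ["f"])          # True

# The essay's serial-killer variation: if the deaths ahead cost "human
# flourishing" less, a context-aware weight says don't pull the lever.
loss = lambda people: sum(0.1 if p == "serial killer" else 1.0 for p in people)
variant = should_divert(["serial killer"] * 5, ["bystander"], weight=loss)  # False
```

The design point mirrors the essay’s: the rule itself is trivial; everything contested lives in the weight function, which is exactly where the complexity of the data being considered changes the answer.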
The limits put on such an AI’s power would be just as important as, if not more important than, its powers themselves. This set of checks and balances would have to be woven by a very wise group of people, one that does not believe in utopias and instead has embraced a sense of Greek tragedy, as Norbert Wiener recommends. Wiener describes the tragic sensibility necessary to approach technology responsibly as defined by an understanding that “the world is not a pleasant little nest made for our protection, but a vast and largely hostile environment, in which we can achieve great things only by defying the gods; and that this defiance inevitably brings its own punishment.”[4] He also makes the obvious point that a machine that can learn and make decisions based on what it learns will not automatically make decisions we like, and warns: “For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind.”[5] The odd syntax makes it sound like a denial of the possibility of constructing a beneficial decision-making machine, but it actually leaves the possibility astonishingly open, as long as the people who build it are aware of all that could go wrong. It is a basic truth that what we want is not always what we need and may in fact have nefarious, unforeseen consequences. This is the wisdom of old, taught through many fairytales, the ones about the golden fish that grants three wishes, about the jinnee in a bottle, or about promising things to the Devil in exchange for an immediate reward.
The good techies described above would have to achieve a certain degree of technological sovereignty to be able to do their work. David Graeber and David Wengrow, in The Dawn of Everything: A New History of Humanity (2021), propose that there are three sources of domination: control of violence, control of information, and personal charisma.[6] Whatever just critiques that book has drawn, I find that the second source of power directly addresses a problem important to our time, one that Jaron Lanier touches on when he describes “siren servers”: an “elite computer or collection of computers on a network” that collect data and analyze it “using the most powerful available computers…run by the very best available technical people. The results of the analysis are kept secret but are used to manipulate the rest of the world to advantage.”[7] Secrecy is the secret sauce, really, of the power of Silicon Valley. To some extent it’s an inadvertent secrecy. Silicon Valley has not, I believe, campaigned for poor education in the fields necessary to understanding computer technology, but the fact that few people are trained to truly understand these systems is a big advantage for consolidating power in that sector. The tech sector has, indeed, gladly advanced the myth of the “tech genius,” a myth that should not justify, as it currently does, a general lack of education in a technical field that is so incredibly influential. Digital sovereignty would follow from better public education in how computer technology, in the broadest sense, works. Of course, the narrative thinker already sees the dystopic scenario unfolding: A small group of tech-savvy rebels seize or build servers and set about starting a brave new world, but the “someday” of their vision justifies cruelty in the present. They can’t agree on how exactly they want to program their AI. Malfunctions lead to in-fighting and paranoia.
A charismatic leader emerges and convinces some of the rebels to break off and form a commune based around worship of the AI or, perhaps more likely, distrust of the AI. That is narrative thinking for you; it cannot imagine peace. The enterprise of creating a constitution-AI could not be left in the hands of a few tech geniuses but would have to be undertaken by a society that values education in both tech and narrative, or let’s call it STEM and the humanities. Some individuals naturally specialize in technical fields, others in the humanities and arts, but each should have respect for and, crucially, solid training in, the other’s domain. That kind of balance between imagining what is possible, technically, and what could go wrong – or, to cite Wiener again, between the “know how” and the “know what” – is what would make all the difference. The question is simply whether we can trust ourselves over that last stretch in which purely human judgement, unmitigated by a non-status-seeking entity, determines right from wrong.
[1] Jaron Lanier, Who Owns the Future? (New York: Simon & Schuster, 2013), 12.
[2] Christopher Ryan, Civilized to Death: The Price of Progress (New York: National Geographic Books, 2019), 122–25.
[3] Peter Turchin, End Times: Elites, Counter-Elites, and the Path of Political Disintegration (New York: Penguin Random House, 2023), 147.
[4] Norbert Wiener, The Human Use of Human Beings: Cybernetics and Society, rev. ed. (Boston: Houghton Mifflin, 1954), 184.
[5] Wiener, The Human Use of Human Beings: Cybernetics and Society, 185.
[6] David Graeber and David Wengrow, The Dawn of Everything: A New History of Humanity (New York: Farrar, Straus and Giroux, 2021), 371.
[7] Lanier, Who Owns the Future?, 55.
I have not allowed myself to reread my submission since late August, because I know I would get sucked into trying to fix it. I don’t have the luxury of that kind of time – my choice was between sharing it as is or not sharing it at all, and I chose the former.
I originally called my essay “How Techies Can Let Go of Utopic Thinking and the Constitution-AI They Could Create if They Do” but I think I may have broken a rule of Enlightenment-style essays by not simply titling it after the prompt. Also, it’s just not a great title.
