Trialogues are written correspondence between a moderator and three interlocutors who are anonymous to one another prior to publication.
Introduction
In this series, we experiment with form. For this first edition, I invited three people to participate in an email correspondence with me over five days. On the first day, I asked each of them the same motivating question. I followed with a probe into their answer on day two. On days three and four, I asked each person to react to the responses of the other interlocutors. On day five, I revealed the full correspondence to all and invited each person to give a closing remark. To protect anonymity during the conversation, I referred to the participants as “A”, “B”, and “C”. In the published text, we have reinserted their real names. This conversation explores the intersections of rationality, artificial intelligence, and human flourishing. The text has been re-arranged for the sake of clarity and flow while preserving the original statements.
Zohar Atkins is a poet, rabbi, and theologian. Robin Hanson is associate professor of economics at George Mason University. He has a doctorate in social science from the California Institute of Technology, master's degrees in physics and philosophy from the University of Chicago, and nine years of experience as a research programmer at Lockheed and NASA. James Cham is a VC based in Palo Alto, California. He invests in AI startups.

With that context to set the stage, let’s begin. As this series evolves, we will be tinkering with the format and may conduct future conversations via text message or other forms of anonymous chat. We welcome your ideas and feedback as we continue to facilitate these experiments in conversation. I’m grateful to Zohar, Robin, and James for being willing to commit to this experiment before there was even a model for it. —
The Conversation
“What are the limits of rationality, if any, and how might AI shed light on the question?”
Zohar Atkins:
Rationality is a powerful tool. It allows us to set goals, plan, test hypotheses, and update our worldview in light of new data. It provides a shared language to converse across cultures and serves as an Archimedean point for self-criticism. Rationality is leverage—a strong man lifts a block, but a clever man invents a pulley. It’s how Socratic nerds defeat Homeric jocks.
However, rationality is only a tool. It cannot tell us why our goals should be our goals. It cannot direct our love or care, or reveal what is noble. Rationality alone doesn't connect us to others or move us to joy, awe, or anxiety. A purely rational sense of self would be dull, devoid of qualities. Rationality alone cannot create great art—it has no point of view. Pascal argues we can know God only by loving God. Levinas suggests the experience of another’s face evokes responsibility, preceding rationality. Gadamer teaches that bias is a precondition for understanding. Without bias, one cannot take a stand or interpret the world.
Let's assume superintelligence is inevitable. Its value will depend on the skill of the prompt engineer and the quality assurance of its trainers, like a wand needing a wizard. Our desire to sing and dance will remain unchanged by stronger AI, just as electricity didn’t alter human emotions. AI won’t influence whether we are inspired or depressed, aside from increasing prosperity, which may allow more leisure for reflection.
Even if rationality helps us be "less wrong," it doesn’t necessarily make us evolutionarily fitter. A sustainable civilization requires cultivating optimism and agency. Enlightenment values like Kant’s "dare to know" won’t endure if the rational stop having children. In the Enlightened West, it is religious communities that buck the secular trend of declining birthrates. Where pure rationality is taken to dictate doom and gloom (better not to exist), the faithful place their trust in the Original pronatalist, God. That should prompt us to rethink what rationality really means.
Luke Burgis (Moderator):
Let's look closer at the idea that rationality doesn't necessarily make us evolutionarily fitter, as you say—especially since so many people today speak of progress as an arms race of intelligence, or “survival of the smartest”. Regarding the declining birth rate, for example, some people in Silicon Valley believe they can solve the fertility crisis by engineering human life at scale. If it's not the smartest who will survive and prosper, who will?
Zohar Atkins:
Survival and prosperity are necessary but insufficient conditions for a life of flourishing. Flourishing requires meaning, belonging, and purpose. Without these, society becomes nihilistic. Declining birthrates are a symptom, not a cause, of nihilism. Meaning, belonging, and purpose are dimensions of a good life; those who have them and can transmit them stand the best chance of achieving longevity in the truest sense. We learn by example. Meaning, belonging, and purpose are not things you put on a chalkboard. They're things you model and see modeled. They are awe-inspiring. The ability to solve multivariable calculus is ancillary to a meaningful life, a life of connection, and a life of purpose. To flourish you need a "why" or a way to a "why." This "why" precedes reason, even if reason can help us clarify and refine it.
The survival of a race of genetically engineered super-computers that can reproduce themselves in a lab might ensure that biological life continues, but it cannot ensure the continuity of ontological life, the life of meaning, belonging, purpose.
After a certain threshold level of analytical smarts, the people who are best positioned to flourish are those with a sense of abundance, joy, gratitude, and spiritual practice, all traits that are remarkably rare amongst the professional managerial class. The skillset needed to win a top spot in the meritocracy is not the same as the skillset or character-set needed to wake up with a sense that life is a profound gift. Those who have a service posture, those who are capable of feeling awe, those who are capable of forming meaningful, committed relationships, those with a strong sense of self...these are the people on whom our future culture depends. "Light is sown for the righteous and irrepressible joy for the upright of heart." (Psalm 97:11).
Our generation is an anxious one, and not simply for economic reasons. We pursue status rather than balance, prefer accomplishment to mental health, and trade commitment for optionality. In our boredom and isolation, we turn to social media to convert our ennui into currency.
To survive in such cultural conditions, one must find a way not to feel hollow inside. It's ironic that our so-called rational age is one in which the masses spend their leisure time drinking from a dopamine hose that would turn even Dionysus into a teetotaler. Art, sports, and politics provide modern substitutes for religion, and pay homage at the altar of the post-rational. But the whoosh of rooting for a favorite sports team is ephemeral. Taylor Swift's fans will transmit little to the next generation.
Political rage feeds the crowd (and the ego), but brings no peace to the home. Meanwhile, those who study, pray, and commit acts of loving-kindness keep the world going. Many of them couldn't tell you the first thing about Kant. Before the Critique of Pure Reason, chopping wood, carrying water. After the Critique of Pure Reason, chopping wood, carrying water.
Robin Hanson:
For [Zohar], “rationality” encompasses a relatively narrow range of considerations and processes, and AI is mainly only useful for making us more rational: e.g., “our desire to sing and dance will remain unchanged by stronger AI, just as electricity didn’t alter human emotions. AI won’t influence whether we are inspired or depressed, aside from increasing prosperity.”
He also has many complaints about the values and priorities of many in our society.
I doubt that electricity had no impact on human emotions, and I don’t see how he can be so sure AI’s impact will be so limited. Prosperity is far from the only causal path between tech and deeper cultural practices and values. AIs are even now producing inspiring and depressing songs and videos, including dance videos, and I expect them to influence our desires to sing and dance. Of course he may well not like these impacts.
James Cham:
I agree with Zohar’s general posture towards rationality as a tool and his approach towards its limits. He has identified many of the key challenges for this generation. For the purposes of this trialogue, I will focus on some places where we disagree.
Artificial intelligence, at least in its current version, isn't necessarily going to be better at rationality. The old computer science tools were very good at being rational and didn't need all the hype and buzzwords. LLMs are quite emotionally intelligent and will be very good at altering human emotions. They are not the aggregate distillation of all rational thought. Instead, they are the distillation of a huge chunk of human communications, which is often much more focused on being persuasive than analytically rigorous. One of the surprises of the early tests with GPT-4 in medical situations was that it had a consistently better bedside manner. It was more polite and sometimes even more persuasive. It never got tired, never got irritated, never got curt. One of its problems was that sometimes it was just too agreeable!
What will people do with that newfound power? That's the big question. The real danger of artificial intelligence is not going to be the AI itself, but the people who wield it. And the subtle danger is (probably) not going to be AI gaining sentience. Instead it is going to be people who hide behind claims of AI consciousness as they gather power and wealth for themselves.
Robin Hanson:
“Rational” beliefs or actions are those meeting higher standards of coherence or effectiveness, such as being more logically consistent, probabilistically coherent, or better at achieving accepted subgoals. But traditionally such standards have usually been local, allowing one to check accusations of irrationality by looking only at local patterns of beliefs and actions.
However, while modern AI systems are more effective than prior versions in many ways, their differences are high-dimensional, and not well summarized or evaluated by checking local rationality constraints. While traditional science fiction robots were assumed to be so rational and consistent that they might self-destruct upon encountering a logical contradiction, modern AIs are quite often inconsistent and incoherent.
Thus if such AI systems will soon be offered as ideals of rationality, or used to rate the rationality of humans or other systems, this would then express a change in our concepts of and standards of rationality. Rationality would less mean meeting locally checkable constraints, and more mean being effective. But as effectiveness is high-dimensional, and as a result more open to dispute, rationality would become less useful as a tool of criticism.
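To make “locally checkable” concrete, here is a minimal sketch in Python: two classic coherence constraints checked against a small set of stated probabilities. The beliefs and numbers are invented for illustration; the point is that nothing about truth or effectiveness needs to be known to run the check.

```python
# A "locally checkable" rationality constraint: probabilistic coherence.
# We never ask whether the beliefs are true, only whether this small
# local pattern of probabilities is internally consistent.

beliefs = {
    "rain tomorrow": 0.7,
    "no rain tomorrow": 0.4,         # complements should sum to 1.0
    "rain and wind tomorrow": 0.8,   # a conjunction can never exceed its parts
}

def coherent_complement(p_a: float, p_not_a: float, tol: float = 1e-9) -> bool:
    """P(A) + P(not A) must equal 1."""
    return abs(p_a + p_not_a - 1.0) <= tol

def coherent_conjunction(p_a_and_b: float, p_a: float) -> bool:
    """P(A and B) can never exceed P(A)."""
    return p_a_and_b <= p_a

print(coherent_complement(beliefs["rain tomorrow"], beliefs["no rain tomorrow"]))        # False: 0.7 + 0.4 = 1.1
print(coherent_conjunction(beliefs["rain and wind tomorrow"], beliefs["rain tomorrow"]))  # False: 0.8 > 0.7
```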
Luke Burgis (Moderator):
Robin: If rationality becomes less useful as a form of criticism, then what might become more useful in its place? Do you think AI could lead to a broadened or expanded sense of reason?
Robin Hanson:
Even if widespread use of AI induces us to rely less on rationality constraints in checking claims and analysis, we might still make use of prediction markets. Individuals can be judged on their trading track records, and consensus can be obtained directly from prices.
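As a sketch of how prices can serve directly as consensus, here is a minimal automated market maker using the logarithmic market scoring rule, a mechanism [Robin] himself devised in other work. The liquidity parameter and the trade below are invented for illustration.

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market maker for a
    binary question. The instantaneous price of the YES share can be read
    directly as the market's consensus probability."""

    def __init__(self, liquidity: float = 100.0):
        self.b = liquidity      # higher b = deeper market, slower-moving prices
        self.q = [0.0, 0.0]     # outstanding shares for [YES, NO]

    def _cost(self, q) -> float:
        # C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome: int) -> float:
        # p_i = exp(q_i/b) / sum_j exp(q_j/b), always a probability in (0, 1)
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome: int, shares: float) -> float:
        # A trader pays the change in the cost function; their cumulative
        # payments against payouts are exactly the track record on which
        # they can later be judged.
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket()
print(f"initial consensus: {market.price(0):.2f}")   # 0.50
cost = market.buy(0, 50.0)                            # a trader backs YES
print(f"trader paid {cost:.2f}; consensus now {market.price(0):.2f}")
```

Because the price is a softmax over outstanding shares, it always reads as a probability, so consensus really can be obtained directly from prices, with no committee or criticism required.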
What I fear instead is that we will come to rely more on the mere prestige of AIs and their supporting organizations. This will give those with such prestige great powers to set shared opinions and suppress criticism and rival views.
James Cham:
There is a strong Nancy Cartwright-style “Dappled World” insight here as we get exposed to the ways that the toolkit of rationality might be less universal and complete than we would like to believe. When [Robin] refers to rationality as local, I assume he means that rationality is ultimately limited by what can be carried by individual minds or groups of minds. Minds aren’t quite as consistent as we imagine them to be, because they hold only lossy representations of the world and are limited by working memory and by communication between people.
My guess: AIs will get better at being rational. Prediction markets are another form of structured collective intelligence, with strengths and limitations different from those of tribes, companies, nation-states, and AIs.
People will definitely use AI to justify their own agendas. But that’s what we have done throughout history! So that’s more a sign of how effective AIs are going to be than a problem. The good news is that people eventually get smarter (so far), and at some point an appeal to AI will get the same suspicious reaction I have when a toothpaste ad claims that 9 out of 10 dentists prefer the brand.
Zohar Atkins:
The tension between rationality and effectiveness is archetypal. The Neo-Kantian philosopher Hermann Cohen was purportedly asked how he could love God if God was just an “Idea.” Cohen responded, “How can I love anything but an Idea?”
If Idealism can be described as the love of Ideas, then pragmatism can be described as the love of efficacy. Classical rationalism was a form of idealism: one should love the truth irrespective of the conclusions to which it leads. Follow first principles first; then figure out feasibility.
AI as [Robin] describes it is more of a realist than an idealist. The consummate pragmatist was Marx, who argued that the point of philosophy is not to interpret reality but to change it. Hannah Arendt argues that we moderns are all Marxists in the sense that we value practice over theory. In the broadest sense, SBF (effective altruism), Donald Trump (the art of the deal), and Kierkegaardian fitness instructors (“Just do it”) are all children of Marx. They are also children of Machiavelli, whose Prince is the antithesis of the philosopher-king. While the philosopher-king gains the authority to rule by identifying the truth, the Prince is more like a sophist, who wins by manipulating human psychology.
In the case of Carl Schmitt, sovereignty is defined not by the ability to uphold or interpret rules, but to decide on their exception. Schmitt’s conception of the sovereign as having the charismatic, oracular authority to suspend the rules in the name of preserving them parallels [Robin’s] suggestion that AI can circumvent local rules while claiming to maintain a rational compass. “The annihilation of the (local) law is the preservation of the (holistic) law.” I come not to negate the law, but to fulfill it, says every messianic savior ever.
And so, when AI does this, we should expect it. Perhaps the tension between bureaucratic and charismatic authority, between local rules and consensus-governed methods on the one hand and exceptional moments of emergency on the other, is a fruitful and necessary one. Conservatism (just be rational) and Radicalism (efficacy by any means necessary) require moderation.
There will be times when we’ll want to let AI’s decisions stand, even as we can’t understand them, and times when we’ll decide that the opacity and dictatorship of AI’s decisionism requires too much submission. We should worry if our conception of rationalism remains local and narrow; we should also worry if we outsource rationality on the basis of prestige.
Maybe it’s best to think of AI less as rational and more as gnomic. Then the question becomes, how useful are its oracles? Are we getting a good risk-adjusted return on its inspired pronouncements?
James Cham:
The current generation of AI tools has exposed us all as post-modernists, and they might turn us into empiricists. Rationality turns out to be a model of reality. A very good one but, as it is with all models, ultimately limited.
The way that LLMs construct knowledge is entirely contextual. Rather than defining terms and building strict hierarchies of knowledge, they look at the contexts that words and ideas show up in and turn those into coordinates in some high-dimensional space. The disorienting thing is that it actually “works”—at least within the context of the words and patterns that the models have collected.
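A toy sketch of this distributional picture, with an invented four-sentence corpus: each word is reduced to coordinates built purely from its neighbors, and “closeness” falls out with no definitions or hierarchies supplied.

```python
import math
from collections import Counter

# Toy corpus: "meaning" here comes only from the company a word keeps.
corpus = [
    "the rabbi studies torah and prays",
    "the investor studies markets and trades",
    "the rabbi prays and teaches torah",
    "the investor trades and watches markets",
]

def context_vector(word: str, window: int = 2) -> Counter:
    """Represent a word as counts of its neighbors -- a crude stand-in
    for the high-dimensional coordinates an LLM learns."""
    vec = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), i + window + 1
                vec.update(t for t in tokens[lo:hi] if t != word)
    return vec

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

# Words that show up in similar contexts end up "close" in the space,
# even though no one has defined any term.
print(cosine(context_vector("rabbi"), context_vector("investor")))
print(cosine(context_vector("rabbi"), context_vector("torah")))
```

Even in this crude sketch, “rabbi” and “investor” land near each other because they occupy similar contexts, which is exactly the disorienting part: the geometry works without anyone supplying a definition.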
Even if you don’t believe that LLMs can genuinely create new knowledge, they have dramatically lowered the cost of making thought experiments empirical. We’re not quite at the point where philosophers will use LLMs to test different philosophical ideas. But I suspect that we’re close. And that sort of large-scale test of different trails of thought will expose more of the limitations of historical rationality.
As a relatively conventional Protestant Christian, I think we should be comfortable with disorienting shifts like this.
Robin Hanson:
I don’t understand how you think it matters that LLMs learn from specific prior texts, rather than from other sources. And I don’t understand what sort of thought experiments you think they make easier. You can experiment with talking to them and seeing their responses, but how is that like a traditional thought experiment?
Luke Burgis (Moderator):
This is an asynchronous conversation taking place over email, so James may need to come back to that question during final remarks. My question for you, James, is this: Why do you think a religious person should learn to be comfortable with these disorienting shifts? What is the relationship to faith?
James Cham:
My faith sits both below and above my relationship with my shifting understanding of rationality.
I’m not quite prepared to make broader claims about faith and religion in general. But I believe Christians can ground themselves in the idea of a relationship with a “big G” God (and, if you believe in Christianity, it is an actual relationship). This might be as much about my temperament as it is about my theology, but I’ve found this to be freeing as I’ve encountered ideas that might force me to reexamine other priors. I’m already comfortable with the idea that there’s someone bigger than I can currently imagine, so the idea that human rationality might have limits isn’t so far-fetched.
At the same time, I can’t help but use rationality to understand my faith. I can’t help but use the same toolkit that Steven Pinker would use: categorization, logic, probability, error correction, and so on, although I obviously end up with a very different conclusion.
Zohar Atkins:
Thomas Bayes, the famous originator of so-called Bayesian reasoning, was a theologian and religious leader. Some of the most capable practitioners of rationality understand the epistemological limits of rationality. Maimonides comes to mind. We must seek to know God, while also affirming that God is unknowable.
The point of rationality, especially in a Bayesian framework, is to help us manage the risk of error and alchemize our mistakes into learnings. A core religious virtue is humility. Humility in a religious context means accepting that you are not God, no matter how talented or powerful you are. Those who view the world through the lens of probabilistic thinking (often an expression of rationality) know that winning doesn't mean getting it perfectly right, but getting it less wrong. The investor and risk theorist Howard Marks emphasizes that we can never know for sure how risky our bets actually are. Whenever we act, we engage in doubled uncertainty: uncertainty about how the event will turn out, but also uncertainty about what the true odds are. We don't know how many winning lottery tickets are in the box (the numerator), nor do we know how many tickets are in the box at all (the denominator). Like the fabled Nasrudin of Sufi lore, we seek answers where there is light (the streetlight effect), because the dark destabilizes.
Yet tail events often transpire where we don't look and don't want to look. Accepting the disorientation of the dark makes us more robust. Humility becomes an openness to the possibility of events and transformations that lie outside of standard deviations (outliers). As Nassim Taleb likes to say, we aren't paid in odds. A good outcome doesn't mean we chose rationally. And a bad outcome doesn't mean we chose irrationally.
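A toy simulation of this doubled uncertainty, with made-up numbers: in one world the odds of winning a lottery box are known exactly; in the other we hold only a belief about the odds, modeled here as a Beta(2, 18) prior with the same 10% mean. The averages match, but the outliers live in the second world.

```python
import random

random.seed(0)

TICKETS_PER_BOX = 100
BOXES = 10_000

def box_wins(known_odds=None):
    """Draw 100 tickets from one box. With known_odds=None, the box's
    true win rate is itself uncertain: drawn from a Beta(2, 18) prior
    (mean 0.10) before any ticket is drawn."""
    p = known_odds if known_odds is not None else random.betavariate(2, 18)
    return sum(random.random() < p for _ in range(TICKETS_PER_BOX))

known = [box_wins(0.10) for _ in range(BOXES)]    # odds known exactly
unknown = [box_wins() for _ in range(BOXES)]      # odds themselves unknown

# Same average win rate, very different tails: the outlier boxes come
# from not knowing the true odds, not from the draws themselves.
print("mean wins per box:", sum(known) / BOXES, sum(unknown) / BOXES)
print("boxes with 25+ wins:", sum(k >= 25 for k in known), sum(u >= 25 for u in unknown))
```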
The thrust of this meditation is that even if LLMs can test philosophical ideas empirically (whatever that might mean), LLMs will not be able to lessen the burden of risk management. What will LLMs be able to tell us about the implied volatility of the option value of Pascal's wager? Maybe Esau was acting rationally (and in line with the Black-Scholes model) when he sold his birthright to Jacob for a bowl of lentils. We don't know.
LLMs might help us with risk management in some areas for some period of time, until they don't. Because our world is reflexive, the assumption that LLMs lessen risk will incentivize us to take greater risk, canceling out the benefit. Then we'll cry out to the Lord in our bafflement, and GodGPT will answer from the metaverse whirlwind, "Where were you when I laid the foundations of the Earth?"
The Non-Denouement
Up until this point in the conversation, all three interlocutors had been emailing with me directly. During this final phase, on day five, I shared all of the prior communication that I’d had with each of them, with their names redacted, and asked them to make their closing remarks.
—
James Cham:
What a dense and bewildering sequence of texts, full of so many references that I needed to write something like “Explain this to me like I’m a high schooler who doesn’t know Gadamer”! Fortunately, the LLMs were relatively good at helping me parse through the texts, though I suspect an actual embodied conversation with the other participants would have been even richer.
The LLMs are just like every interesting technology since Adam and Eve figured out clothing: we are going to use them for good and ill, and we are going to have to spend a lot of time figuring out how this one is different from past technologies. This one is disorienting because it generates thoughts and looks and sounds like us, so we are going to be very tempted to treat these systems like humans.
Everyone in the world has had less than 10 years to think about how LLMs challenge how we think we know things. That’s not very much time. I bet we’ll be surprised by what we find, if only we’re willing to try and tease apart how different LLMs actually work. The underlying math isn’t that complicated, but the effectiveness at scale is surprising and might teach us a lot. Christians have a role here to shape and provide wisdom, if only we’re willing to deeply engage.
Zohar Atkins:
Implicitly and explicitly, our conversation about AI is animated by danger and risk. "All the new thinking is about loss. In this it resembles all the old thinking" (Robert Hass). For those who assign great weight to rationality, the idea of machines beating us at this game presents a threat not just to livelihood, but to identity. But as [James] points out, AI's dominance may have less to do with precision and more to do with persuasiveness. AI may be a more compelling sophist than philosopher.

But note that Plato himself had a hard time delineating the distinction from first principles. Many philosophers appear to be sophists. One way to define a sophist is as someone who focuses on winning the court case, rather than knowing who is guilty or innocent. Sophists had a profit motive to train lawyers, not to help judges become more impartial. Undoubtedly, the meta-sophist move is to seek to win in court by claiming to be impartial, and this is one of the challenges posed by the esteem we instinctively place in LLMs.

Heidegger, quoting Hölderlin, says that "where the danger is, there grows a saving power also." Beyond the call for AI safety is the possibility that AI, in its very danger, can be a saving power. But only if we recognize the danger as a danger. What is that danger? I would argue it is the danger of unwarranted certainty to the detriment of mystery. As long as we have been rational we have been at risk of arrogance. As the meme goes, "Men only want one thing and it's to complete the system of German Idealism." With AI, some may feel they've finally found a God-like being. Not only is AI not omniscient, but seeking synthetic exposure to omniscience is a renunciation of the much harder and more human task: to dwell poetically on this earth.
Robin Hanson:
Alas, it seems to me that we have failed to engage each other much, mainly because we failed to stake out clear and interesting enough claims to induce such engagement. This was an experiment, and I approve of trying many such experiments, but I think we have to count this trial as mostly a failure.
What a fascinating experiment. I'm left with two thoughts. One is that I agree with Robin's observation that there was insufficient engagement, but I think that is a consequence of your process, Luke. Seems very fixable to me.
The second also picks up from James's observation that "the current generation of AI tools has exposed us all as post-modernists, and they might turn us into empiricists. The way that LLMs construct knowledge is entirely contextual. Rather than defining terms and building strict hierarchies of knowledge, they look at the contexts that words and ideas show up in and turn those into coordinates in some high-dimensional space. The disorienting thing is that it actually “works”—***at least within the context of the words and patterns that the models have collected***."
It occurs to me that there are three things that are destabilizing about AI, and I think you can order them on a continuum beginning with rational/empirical and ending with broader ontology and meaning. The first destabilizing thing is that it "works," as James said. The second, arising from the first, is that whether they say something truthful or not, LLMs "react" plausibly enough to be broadly believable when we present them with inductive (rather than deductive/summarizing/reductionist) exercises. Interacting with them "feels" real, despite the fact that, from the perspective of sampling theory, there is no basis whatsoever to believe that anything the models "think" or "say" reflects truths in the population, given that they are a convenience sample of data processed by a subjective algorithm. And finally, there is the when-not-if concern of AGI, and what happens then.
It seems like your triumvirate is speaking to different aspects of this destabilizing continuum. Zohar is looking at the far end. James and Robin feel a bit closer to the near end. I think if you look at society more broadly, you see more people worried about the stuff we can see now, and it's all about trying to figure out whether we as humans are still special. But this is a humanity that, as Zohar observes (rightly in my view), demonstrates its specialness by doomscrolling and excessive consumerism.
Please carry on with this, Luke. Even if we didn't get much interaction, the differences in perspective among Athens, Silicon Valley, and Jerusalem were interesting and valuable in and of themselves.
When talking about AI and rationality, we often miss an important point: from a Talebian perspective, true rationality is not about perfect logical consistency or computational power, but about survival and risk management in an uncertain world. We should be less concerned with whether AI can replicate human-style reasoning and more focused on whether it can contribute to long-term human flourishing without introducing existential risks. That is why Zohar's perspective resonated most with me: his emphasis on human flourishing and the importance of meaning, which has proven crucial to human survival. Survival and evolution are the ultimate arbiters of rationality.