Discussion about this post

JD Deitch

What a fascinating experiment. I'm left with two thoughts. One is that I agree with Robin's observation that there was insufficient engagement, but I think that is a consequence of your process, Luke. Seems very fixable to me.

The second also picks up from Robin's observation that "the current generation of AI tools has exposed us all as post-modernists, and they might turn us into empiricists. The way that LLMs construct knowledge is entirely contextual. Rather than defining terms and building strict hierarchies of knowledge, it looks at the context that words and ideas show up in and turns that into coordinates in some high dimensional space. The disorienting thing is it actually “works”—***at least within the context of the words and patterns that the models have collected***."
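As an aside, Robin's "coordinates in some high dimensional space" can be made concrete with a toy sketch. The vectors below are invented for illustration, not taken from any real model: the idea is simply that words appearing in similar contexts land near each other, and "nearness" is measured with cosine similarity.

```python
import numpy as np

# Toy illustration (invented vectors, not a real model): words as points
# in a high-dimensional space, where proximity reflects shared context.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.7, 0.2, 0.1]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity as the cosine of the angle between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words used in similar contexts end up close together.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.99
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.12
```

No definitions, no hierarchies, just geometry over observed context, which is exactly why it "works" only within the patterns the models have collected.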

It occurs to me that there are three destabilizing things about AI, and I think you can order them on a continuum beginning with the rational/empirical and ending with broader ontology and meaning. The first destabilizing thing is that it "works," as Robin said. The second, arising from the first, is that whether it says something truthful or not, LLMs "react" plausibly enough to be broadly believable when we present them with inductive (rather than deductive/summarizing/reductionist) exercises. Interacting with them "feels" real, despite the fact that, from the perspective of sampling theory, there is no basis whatsoever to believe that anything the models "think" or "say" reflects truths in the population, given that they are a convenience sample of data being processed by a subjective algorithm. And finally, there is the when-not-if concern of AGI and what happens then.

It seems like your triumvirate is speaking to different aspects of this destabilizing continuum. Zohar is looking at the far end. James and Robin feel a bit closer to the near end. I think if you look at society more broadly, you see more people worried about the stuff we can see now, and it's all about trying to figure out whether we as humans are still special. But this is a humanity that, as Zohar observes (rightly in my view), demonstrates its specialness by doomscrolling and excessive consumerism.

Please carry on with this, Luke. Even if we didn't get much interaction, the differences in perspective between Athens, Silicon Valley, and Jerusalem were interesting and valuable in and of themselves.

Michael

When talking about AI and rationality, we often miss an important point: taking a Talebian perspective, true rationality is not about perfect logical consistency or computational power, but about survival and risk management in an uncertain world. We should be less concerned with whether AI can replicate human-style reasoning and more focused on whether it can contribute to long-term human flourishing without introducing existential risks. That is why Zohar's perspective resonated most with me, given his emphasis on human flourishing and the importance of meaning, which has proven crucial to human survival. Survival and evolution are the ultimate arbiters of rationality.
