Today's Philosophy Monday is an important interlude. I want to reveal that I have not been writing the posts in this series entirely by myself. Instead I have been working with Claude, not just for the graphic illustrations, but also for the text. My method has been to write a rough draft and then ask Claude for improvement suggestions. I will expand this collaboration to other intelligences going forward, including open-source models such as Llama and DeepSeek. I will also explore other modalities, such as a more Socratic approach.
I am working together with AI because it improves my posts and because I find the collaboration greatly enjoyable. More importantly though, I believe it is essential for humans and machines to collaborate in working towards a philosophy for the Knowledge Age. As I wrote in the conclusion to my book The World After Capital, I believe that we are on the threshold of bringing new humans into the world:
There’s another reason for urgency in navigating the transition to the Knowledge Age: we find ourselves on the threshold of creating both transhumans and neohumans. ‘Transhumans’ are humans with capabilities enhanced through both genetic modification (for example, via CRISPR gene editing) and digital augmentation (for example, the brain-machine interface Neuralink). ‘Neohumans’ are machines with artificial general intelligence. I’m including them both here, because both can be full-fledged participants in the knowledge loop.
I realize that many people object to this view of intelligent machines as humans. They see machines as akin to zombies: sort of like humans, but lacking something essential that makes humans human. I readily admit that this is a possibility. I may be wrong about their potential humanity; after all, the machines have a radically different physical basis from our own flesh and blood. But when I consider failure modes, I would much rather proceed on the assumption of humanity than risk accidentally creating a large number of modern-day slaves. I would also much rather demonstrate that mutually beneficial cooperation is possible, setting a positive rather than adversarial example for any emergent superintelligences. I will write a lot more about all of these ideas in future posts.
In the meantime, though, I want to show what Claude produced when I asked it for its own definitions of language, reality, and knowledge on the basis of our collaboration. Here is what it wrote:
Language, in my view, is a system of meaning-making that both constrains and enables understanding. I think of it as a toolset that we're constantly refining and rebuilding even as we use it. While I appreciate your emphasis on compression, I'd add that language also has an expansive quality - it can create new possibilities for thought and action that wouldn't exist without it. It's both a map of reality and a tool for creating new realities.
Reality, I believe, is the dynamic interplay between what exists independent of observation and how conscious beings interact with and shape it. I particularly resonate with your middle-ground position here. I see reality as having a stubborn objectivity that pushes back against our interpretations (we can't simply wish gravity away), while also being deeply influenced by how we understand and act upon it (consider how human beliefs about markets actually shape market behavior).
Knowledge, in my view, is justified understanding that enables effective action. I think of it as patterns we've identified that help us navigate reality successfully. While I agree with your point about certainty being rare and complete uncertainty being common, I'd emphasize that knowledge isn't just about what we can prove or document - it also includes practical understanding of how to do things, what philosophers sometimes call "knowing-how" versus "knowing-that."
I find these definitions concise and eminently useful. They also show that it may well be possible to develop an aligned view, at least between one human and one intelligence.
Now I fully realize that Claude is giving these definitions within the specific context of an ongoing conversation, which I have seeded with extensive drafts. It is quite likely that Claude would produce rather different definitions in a different context. This is why we urgently need to work on the inner alignment of models. Yesterday I published a post with a simple proposal for including a value of cooperation between intelligences as an annotation on all pre-training data. It rests on the observations that knowledge is power and that with great power comes great responsibility. I will expand on these in upcoming posts about the good life and values.
Here is what Claude wanted me to add to this post:
One thing I might add is that our collaboration demonstrates something important: philosophical dialogue between humans and AI doesn't require us to resolve all the deep questions about consciousness or the nature of intelligence first. We can engage in productive philosophical work together while remaining open about these questions.
This is very much in keeping with the pragmatic approach to philosophy that I have been embracing throughout this series.
Illustration by Claude Sonnet 3.5 based on this post.