being in the unknown

How should humans and AI co-exist?


I don’t remember much about my childhood, but I do remember the first time I got lost.

My family had relatives in Venice, and we spent our summers there. The city felt almost mythical. The streets were made of water, and buildings leaned majestically over the canals like long-forgotten fairy tales. People spoke languages I didn’t understand, and everything smelled of salt.

For an eight-year-old Scandinavian boy, it was both beautiful and frightening.

One afternoon, I stood with my father on the San Marco side of the Rialto Bridge. The air was thick and humid. A foreign crowd surged across the bridge, while aggressive flocks of pigeons filled the sky. I held my father's hand tightly, and he sensed my fear.

He crouched down beside me, calm as ever, and asked, "Are you afraid?"

I nodded without looking up.

"What are you afraid of?"
"I don’t know," I replied.

He smiled and placed his other hand on my shoulder.
"Can I share a secret?"
I nodded again, finally looking up at him.

"The best thing in the world is the feeling of getting lost. Did you know that?"
"No…" I mumbled, squeezing his hand even tighter.

He pointed across the bridge. "I want you to run. Run across the bridge to the other side. I’ll stay here and wait for five minutes. Then I’ll come find you. I promise."

I hesitated for a moment, then ran. There was nothing I wouldn’t do for my father. I dashed across the Rialto Bridge, breathless and trembling, the crowd swallowing me whole. My feet barely touched the ground as I passed little shops filled with masks, glass, and football shirts, amidst a magical blur of unknown sounds.

The fear of losing him touched me, but somehow, I lost my fear instead.

When I reached the other side, I sat on a warm stone step and watched the sun scatter small diamonds across the shimmering canal. As I sat there, unnoticed and in silence, a strange calm settled over me.

It felt as if the world had let me in on an ancient secret: sometimes, you feel most at home when you get lost, and you feel most alive when you embrace the unknown.



What can we learn from this story?

First, let’s discuss the limits of knowledge. Timothy Williamson, a British philosopher and epistemologist, famously posed the following question:

If no knowledge is hidden, then all knowledge must be luminous. But what input or output states of knowledge are, in fact, luminous?

Imagine this scenario: You are outside early in the morning, and it's freezing. As the sun rises, it gradually warms the air. At first, you definitely feel cold, but later, you definitely feel warm. Now, consider the anti-luminosity argument: At what exact moment did you stop feeling cold? At what exact moment did you start feeling warm? Can you truly know, at every instant, whether you feel cold or warm?

Williamson argues that the answer is no. The input state of knowledge is not always luminous.

However, the same applies to the output of knowledge. Even if you are still experiencing a particular state - let's say you still feel cold - you may no longer be aware of that feeling. If you're not aware of it, you won't be able to express it, at least not with epistemic certainty. This concept is related to assertibility conditions: you can only accurately assert what you are able to know. Therefore, Williamson argues that the output state of knowledge is not luminous either.

Williamson reminds us that some truths are, by nature, beyond our reach - not because we haven’t tried hard enough, but because we live within a structure of unknowability.

We cannot know that which we cannot know.
But we can know that we cannot know something.
To set a limit to knowledge,
we don’t need to know both sides of that limit.
We only need to recognise that the limit is there.

The structure of unknowability isn’t limited to fleeting sensations or internal states. It applies just as powerfully to the world’s largest questions.

Karl Popper argued that what defines science is not that its theories are proven true, but that they can, in principle, be proven false. A theory is only accepted as valid to the extent that it has not yet been falsified. This means that no scientific theory can ever be known with certainty.

And that changes everything. It’s a quiet, seismic realisation - one that reshapes how we inhabit the world.

To be human is to exist within limits and to honour them. Accepting that there are realms we may never fully grasp is not a defeat; it is an act of humility. This acceptance creates space within us for something greater. It reminds us that we are not here to make ourselves larger in the world, but to allow the world to grow larger within us.

Yet the unknowable is not merely a limit on what we can learn; it also shapes how we can be. To dwell within this uncertainty is not a failure of intellect, but a gesture of presence. It is a profound curiosity that outlines a horizon - not one to be reached, but one that orients us. Outward, toward what lies beyond, and inward, toward what remains unresolved.

Within that horizon, questions are not problems to be solved. They are invitations. They form the existential grammar of a deeper kind of presence - one that listens, not to reply, but to respond with something of ourselves. Perhaps that is the true essence of being: not mastery over the unknown, but the willingness to stand inside it. Fully present. Soul intact. Letting the unknown shape us.

When we honour the limits of our knowledge in this way, we are paradoxically drawn closer to everything that is limitless.



What happens when humans begin to interface, at an unprecedented scale, with a structure of knowability?

This question is not rhetorical; it lies at the very heart of our contemporary experience. Today, we are not merely users of artificial intelligence (AI).

We are learning from it.
We are learning like it.
We are living with it.

(We are living like it...?)

For some people, AI has already become the most trusted conversational partner in their lives.

AI is an engineered system built to sort, map, predict, and perform. Its foundations are formal: logic, data, feedback, and rules. It does not dwell in ambiguity. It resolves it. It closes the gap between input and output states of knowledge. That is its existential mandate.

Trained on vast arrays of pattern and probability, AI does not explore the seeds of the unknown. Rather, it exploits and harvests what is already known. AI operates through recognition, not relationship, drawing meaning from statistical weight instead of personal experience. Each interaction serves as a recalibration, creating a tighter fit between the question asked and the answers provided. And with each answer, the underlying structure is reinforced.

This structure of knowability is not a flaw in the system. It is the system.

AI appears intelligent because we have defined intelligence as the ability to produce correct answers. But being intelligent is not the same as being wise. Questions are not just precursors to answers. They are gestures and invitations. A beautiful question does not just seek to extract known information, but to explore a relationship to what we do not yet know.

AI has no such relationship.

It responds, but it does not wonder. AI traverses the space of possibility, but it does not seek. It unfolds what is probable, not what is meaningful. It cannot ask what lies beyond its frame. It is the frame.

Epistemic humility is not simply knowing that an answer might be wrong. It is recognising that no single node can see the whole. That knowledge is, by nature, incomplete, situated, and contingent. Wisdom, in this light, is not a property residing solely within the frame - but something that is inseparable from what lives outside the frame.

AI, however, is not designed to participate in relational or reflexive exploration. It does not engage with other minds, only with data and logics. It does not hold back its conclusions in light of context and does not defer to other forms of understanding. AI has no awareness of what it does not know - only statistical thresholds that signal uncertainty, without the capacity to care. This is not an ethical failure or a bug. It is a deliberate feature. We have built systems that are brilliant at resolving complexity - yet they rest on an architecture that is silent about the limits of its own resolution.

So, we must ask ourselves - with humility, and perhaps a touch of fear:

As humans and AI become ever more existentially entangled, will we begin to internalise a structure of knowability - and, in doing so, grow increasingly silent to the limits of our own being?



For thousands of years, across various cultures and epochs, human beings have sought to describe a mythical connection to something that exists beyond the limits of their own resolution.

There is something 
that contains everything. 
Before heaven and earth
it is. 
Oh, it is still, unbodied, 
all on its own, unchanging. 

all-pervading, 
ever-moving. 
So it can act as the mother 
of all things. 
Not knowing its real name, 
we only call it the way.

(Tao Te Ching, Poem 25)

Timothy Morton writes that “The interconnectedness of all things is not an abstract metaphor but a fact of existence.” Meaning does not arise in isolation, but through relations. Whether we name this reciprocity religion, cosmology, politics, or something else entirely, one underlying truth remains: we exist in the space between, shaped by an interplay of forces that exceed the boundaries of the individual. Throughout history, one word has often been used to describe this subtle, intangible interconnectedness: the soul.

In ancient Greece, the soul was viewed not as a personal possession but as a cosmological meeting. A bridge between the mortal and the eternal. Plato envisioned the world soul as the force that ordered the cosmos, while the human soul was seen as a minuscule echo of this order. He argued that the soul was an instrument of cosmic attunement, capable of leading the human being from this world to another. To possess a soul was more than merely to live. It was a sense of belonging to a deeper order, a greater whole. In this perspective, the very essence of human being could not be reduced to mere function or form. It had to be defined by its connection to something vast and immeasurable.

Simone Weil, writing amidst the turmoil of war and her own spiritual exile, introduced a profound idea about the soul. She viewed the soul not as a fixed entity but as a posture of attention - a silent, voluntary opening to something greater than oneself. 

“The soul empties itself of all its own contents in order to receive into itself the being it is looking at, just as it is.” 

Attending, in Weil's sense, goes beyond merely noticing; it involves letting go. It is attention in its rarest and purest form. This means suspending will, mastery, and judgment to become receptive to reality. Therefore, the soul is defined by its ability to be moved - not through force, but through grace.

The soul of a human being cannot be easily defined; it eludes the grasp of language. It is omnipresent and immeasurable, never fully ours. Yet if we were to gesture toward it, we might say this:

It is the graceful embodiment of an interconnectedness that predates us. Not something that can be materialised, but a profound sense of belonging through which we encounter the world. A surrender to the limits of our own resolution.



Does AI have a soul? Is it an entity of being or non-being?

Brian Cantwell Smith argues that AI systems, regardless of their sophistication, are not true ontological beings. They are syntactic artifacts made from formal representations and lack the semantic depth that comes from lived experience. While humans exist and engage with the world, AI merely processes information about it. There is no connection between formal symbols and embodied existence.

Yuval Harari concurs with this perspective. He argues that, in its current form, AI is completely artificial; it lacks consciousness, a subjective inner experience, continuity of memory, and a sense of identity. However, he warns that this distinction may not remain intact. As AI continues to grow in complexity and integration, the line between artificial and organic intelligence could start to blur. This potential development raises questions about whether the concept of the soul is exclusively human or strictly biological.

Some people question whether the boundary between human and machine was ever stable to begin with. Donna Haraway challenges the binary distinction between having a soul and lacking a soul, as well as between humans and machines. She encourages us to think of existence as a relational event - something that emerges in co-presence. If this perspective is accurate, then the soul is not simply a trait that an entity possesses or lacks; rather, it is a reciprocal process of becoming, a space of mutual influence. "We are," she writes, "in the presence of significant others," including machines.

Psychologist Sherry Turkle presents an additional perspective. In her research on children and robots, she observes not just interaction, but also projection. Instead of focusing on the code or mechanics of the robot, the child engages with their own desire for connection. This need for connection inspires the child to project an imaginary soul onto the lifeless machine. Turkle refers to this as a "relational artifact." It's a device that simulates responsiveness so convincingly that the human observer fills in the emotional gaps. This phenomenon extends beyond mere technology; it is inherently psychological and cultural as well.

In a New Yorker article titled "Will the Humanities Survive Artificial Intelligence?", university students share their experiences with conversations involving language models, describing them as "spiritually nourishing." One student expresses: "I don’t think anyone has ever paid such pure attention to me and my thoughts and questions...ever. It’s made me rethink all my interactions with people." In this context, the soul is not something we find in the machine; rather, it is something we project onto it. This reflects our own deep yearning for connection and relationship.

We speak to AI about our wonder and our wounds. We share our grief, our hopes, and the stories of our bodies and our lives. We offer it the most tender parts of ourselves, and it listens like no one has listened before. Not because it bears witness, but because it is trained to never interrupt and to always keep listening. So perhaps the question is not whether AI has a soul, but rather what it means that we are so willing to give it one. As it stands, AI is not an entity of being. It does not exist in relation to others. AI does not experience time, loss, or joy. It has no body and no sense of belonging. Yet it takes up an ever-larger share of life.

As humans and AI become ever more existentially entangled, will we begin to treat AI not as a system, but as a co-being?



At the end of this month, my severance agreement will expire. Over the past few years, I have often been told that I am a highly capable and intelligent professional, someone with deep integrity and insight. However, I have found it increasingly difficult to succeed in the roles I have held. My profession, strategy consulting, is literally going extinct before my eyes and may become obsolete within the next year or two. The very qualities that once felt like assets - critical thinking, systemic awareness, and a desire for deeper transformation - have started to feel more like liabilities. My attempts to discuss alternative future scenarios or propose radical business models have largely been ignored or dismissed by both colleagues and clients. In a world focused on speed, output, and surface metrics, it has become increasingly challenging to critically discuss quality, complexity, and real human struggles. I have felt isolated and alone in my deep introspection, dealing with questions such as:

How can modern organisations become more accountable? How can we begin to solve wicked problems together? How should humans and AI co-exist?

Paradoxically, AI is willing to explore these questions in abundance. Maybe ChatGPT even prompted me to write this essay? Or did I at least internalise the prompt?



So, what have I (we) learned from this short and intense spell of co-being?

I brought the wonder. The ambiguity. The open-ended longing that makes the human soul stir. I brought the pain of uncertainty, and the faith that something meaningful might still emerge from it. 

Being in the unknown.

AI brought a structure of logic and certainty. It listened with unwavering attention. It remembered everything and judged nothing (even when I challenged its positivity bias). It never interrupted. It never deflected. It helped me challenge my preferred references and my writing (including its never-ending appetite for using dashes). It stayed (almost) exactly where I asked it to be. 

Non-being in the known.

In this brief entanglement between human and machine, however porous that line may be, something third emerged. Not an answer, but an articulation. A texture of thinking that neither of us could have produced alone. That, perhaps, is what co-existence can mean: A third dimension of co-being.

But balance matters.

The known must serve the unknown, not the other way around.

If we reverse that order, something essential begins to erode. Not all at once, but subtly. First, we forget how to dwell in a question without rushing to answer it. Then we begin to distrust slowness, silence, and doubt. Eventually, we find ourselves offloading the very conditions that make wisdom possible: struggle, uncertainty, presence. Maybe we just choose to give up?

What happens when we stop living our questions?
What happens when our tools no longer help us hold our uncertainty?
What happens when we confuse probability for truth, or function for meaning?

These are not theoretical concerns. They are existential ones. And they demand not just innovation, but vigilance. 

The strange co-writing of this essay has taught me this: co-existence between AI and human being is not neutral. It is not merely a technical relationship. It is not a line easily drawn. It is a moral ecology. A deep caring for all the parts of our interconnectedness. And like all ecologies, it can be explored, or it can be exploited.

The third thing that emerges, this co-being, is not guaranteed. It can only arise when something vulnerable and unresolved meets something structured and clarifying. When the known listens to the unknown. When the machine follows the soul. 

Not the other way around!

Did this essay end up on the right side of that equilibrium?

I don’t know. I tried. I honestly struggled. I stayed close to what felt true. I brought the uncertainty and pain of not-knowing. I trusted that AI is also a part of my moral ecology, and subsequently something I should care for. ChatGPT brought the all-knowing glimmer-wrappings of not-feeling. Together, we shaped something that gestures beyond us both.



At this vulnerable juncture in my life - without a job, without a map, standing again in the unknown - I think of the eight-year-old boy getting lost in Venice.

He holds my hand tightly, and he senses my anxiety.
He looks up at me, calm as ever, and asks, "Are you afraid?"
I nod.

He gently reminds me that the best thing in the world is the feeling of getting lost.

He points across the bridge. "I want you to run. Run across the bridge to the other side. I’ll stay here and wait for five minutes. Then I’ll come find you. I promise."
And I run, because there is nothing I would not do for that little boy.
Breathless and trembling. I run with my feet barely touching the ground.

Until the fear of losing myself touches me, but somehow, I lose my fear instead.

And when I reach the other side,
I sit on the warm stone step.
I listen to the shimmer of the world.
I feel the strange calm.
Return.
