From Dead Storage to Living Memory
A house that has held a family for thirty years is not just a structure; it’s an archive. When my mother’s dementia diagnosis prompted my parents’ move from my childhood home, it was time to start excavating the life packed into its walls. I wasn't just sorting through old furniture and childhood drawings; I was navigating the personal cloud of a more analog era, a lifetime of data stored in cardboard boxes and filing cabinets.
In the quiet of the emptying rooms, I discovered the interior lives of the two people I thought I knew best. There was my father’s Master’s thesis in engineering, a world of elegant equations and meticulous research I had never seen. I found my mother’s notes from her pediatric nursing courses, filled not just with clinical observations but with outlines for articles she had hoped to write, her curiosity and intellect captured on notepads and fragments of paper carefully clipped together into file folders.
There was no deep regret in these files: no unsent letter, no buried secret. The discovery was quieter, and more profound. I was holding the evidence of their rich intellectual passions, the people they were when they weren't just "Mom" and "Dad." It was a curated record of their ambitions and ideas. But the boxes never spoke. The files never nudged me toward this understanding while my parents were living it.
This collection of data sat dormant for a reason. It was the remnant of divergent paths, ideas left behind as my parents chose a different way to live a life well-lived. It was an archive of forks in the road, honorably abandoned. But the digital filing cabinets we keep in our organizations are different. They are not paths left behind in pursuit of something better; they are overgrown thickets, hazy ways forward that were never cut into trails that could be traveled. We sit on goldmines of human connection—every donation, every volunteer hour, every event attendance, every heartfelt email—and we lock it all away in a Customer Relationship Management (CRM) system. For most, this powerful technology isn’t a dynamic brain; it’s a digital filing cabinet.
The result is a crisis of posture. We are perpetually looking backward, navigating the future by staring in the rear-view mirror. Our strategies are based on what has already happened, not what could happen next. But what if those files could talk back? What if our data could wake up? We are at the beginning of a monumental shift, a transition from the static digital filing cabinet to the dynamic “Insight Engine”—a new class of tool that doesn’t just store the past, but uses it to have a conversation with us about the future. This isn’t just a technological upgrade. It’s a philosophical one. It changes our relationship with memory itself, and it forces us to ask what it truly means to learn from our own history.
The Tyranny of Hindsight
The filing cabinet, whether metal or digital, is a monument to a specific, limited kind of memory. It is built only to answer the question, “What happened?” It remembers, but it cannot forget. This is a profound limitation.
As the philosopher Søren Kierkegaard suggests, true forward movement requires an "inverse dialectic" between remembering and forgetting. It’s a philosophical concept where a positive idea, like forgiveness, can only be truly understood through its negative counterpart, like the consciousness of sin. The two are perpetually intertwined. This is not to say that the two form a simple dualism of opposites standing on their own. Nor do the two tensive poles ever resolve into some third idea, as in the more familiar sense of dialectic. Instead, there is an irresolvable, paradoxical tension that gives depth to each pole of the inverse dialectic as a dyad.
In thinking about the act of forgiveness, Kierkegaard sees this inverse dialectic at work between forgetting and remembering. Forgetting, in this sense, isn’t a passive failure to recall; it is the difficult, active work of “taking away being from that which nevertheless exists” (Søren Kierkegaard, Works of Love). It is a willed act of letting go. Paradoxically, this powerful forgetting is only made possible by an equally powerful remembering. To truly move forward, you must remember what is essential—a core purpose, a foundational relationship, a key insight—with such clarity that it gives you the strength and wisdom to intentionally forget what is not.
While Kierkegaard located this dynamic in the spiritual act of forgiveness, the principle itself extends to a wider form of practical wisdom, or what the ancients called prudence. Strategic thinking, at its core, is the art of navigating complexity to make effective decisions. This art is not about knowing everything; it is about knowing what to pay attention to. It is an act of prudence that requires the constant, skillful application of this inverse dialectic. To form a strategy is to remember a desired future with such intensity that it allows you to forget the distracting noise of the present. It is to remember a core value so deeply that you can forget a sunk cost. It is to remember the most vital signal in your data so clearly that you can forget the thousands of data points that are merely static. This dialectic is the engine of critical thought.
The digital filing cabinet is incapable of this wisdom. It clings to every data point with equal tenacity, giving the same weight to a ten-year-old lapsed email address as it does to a recent, passionate expression of interest. Because it cannot perform the dialectic—it cannot forget—its memory is strategically inert. It’s like driving a car while looking in a rear-view mirror that shows you every road you’ve ever been on, all at once. You see where you’ve been, but the sheer volume of the undifferentiated past obscures the road ahead.
This is the dark side of our modern obsession with "data-driven decision making." We imagine that if we just gather enough information, the right answer will reveal itself. But without the prudence to forget, "data-driven" becomes a process of exhaustion. Decisions are delayed, bogged down by the need to consider every possible data point, no matter how trivial. Instead of leading to clarity, the data becomes a weapon, used to justify pre-existing biases or to stall action indefinitely. The filing cabinet mindset doesn't empower decisions; it buries them under the weight of undifferentiated information.
The first attempts to solve this problem came with the rise of machine learning. These early algorithms acted as a filter on the rear-view mirror. They couldn’t talk to you about the road ahead, but they could help you sort through the noise of the past. By identifying patterns in historical data, they could help segment audiences, predict which donors were most likely to give again, and identify those at risk of lapsing. This was a crucial first step—a way to begin the strategic work of forgetting by focusing on the data that mattered most. It was the beginning of thinning the thicket, of finding the faint outlines of a trail.
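To make that first generation of filtering concrete, here is a minimal sketch of the kind of model this era produced: a simple propensity score for whether a supporter is likely to give again. Everything here is a hypothetical illustration, not a real CRM schema; the column names, data, and threshold are stand-ins.

```python
# A minimal sketch of an early "filter on the rear-view mirror": a
# propensity model scoring each supporter's likelihood of giving again.
# The table and feature names are hypothetical, not a real CRM schema.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical snapshot: one row per supporter.
history = pd.DataFrame({
    "gifts_last_3_years":     [5, 0, 2, 8, 1, 0, 3, 6],
    "months_since_last_gift": [2, 40, 11, 1, 30, 55, 6, 3],
    "events_attended":        [3, 0, 1, 4, 0, 0, 2, 5],
    "gave_again":             [1, 0, 1, 1, 0, 0, 1, 1],  # what we predict
})

features = ["gifts_last_3_years", "months_since_last_gift", "events_attended"]
model = LogisticRegression().fit(history[features], history["gave_again"])

# Score everyone, then keep only the attention-worthy few: a first,
# crude act of "forgetting" the rest of the file drawer.
history["likelihood"] = model.predict_proba(history[features])[:, 1]
print(history.sort_values("likelihood", ascending=False).head(3))
```

Even this simple model begins the work of forgetting: it ranks, and ranking is a decision about what not to look at.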
But this approach wasn't enough. It introduced its own set of temptations that, paradoxically, led many organizations right back to the state of data-overload they were trying to escape. This was a failure of prudence. The "black box" nature of the models, which could identify a pattern without explaining the why behind it, created a crisis of trust.
Lacking a narrative, organizations retreated from the difficult work of forgetting and instead demanded more data to "prove" the model was right. The dialectic collapsed. Instead of remembering a core purpose to guide their forgetting, they forgot the purpose and remembered only the data. Similarly, the temptation to chase correlations was a failure to remember what was essential. A model might find that people who volunteer on Saturdays are more likely to become major donors. The prudent leader remembers the deeper human story—that commitment is the essential factor—and forgets the superficial correlation. The imprudent leader forgets the human story and remembers only the correlation, leading to shallow, ineffective strategies. Machine learning could point to a trail, but it couldn't provide the wisdom to know why the trail was worth following.
This reactive posture is the common failure of both the simple digital filing cabinet and the data-overloaded machine learning approach. Both systems are trapped by a form of strategic misremembering: they remember everything, and therefore understand nothing of value. In the specific case of the CRM, this over-remembering defines people by their last transaction, not their potential. It reduces relationships to a series of static data points, lumping individuals into crude buckets—the “major donors,” the “volunteers,” the “alumni”—as if they were monolithic blocks. We send them generic appeals based on these labels, and then wonder why they feel unseen. The exhaustion that mission-driven people feel when buried in administrative work finds its parallel in the exhaustion our supporters feel when they are only ever seen for what they did yesterday.
The Power of Active Forgetting
The Insight Engine is built on a different premise. It’s a tool for cultivating foresight, but it understands a crucial philosophical point: foresight is not simply about adding a new layer of future-oriented data. True foresight requires the active power of forgetting. To see what's coming, you must first be able to forget the noise of what has been. The Insight Engine performs this act of strategic forgetting. By analyzing the entire history of interactions, it doesn't just show you everything; it shows you what matters. It remembers the signal. By focusing on something like a pattern of high engagement or the revealing turn of phrase in an email, it actively forgets the thousands of less relevant data points. This technologically assisted prudence is what transforms our core question from “What happened?” to “What is most likely to happen next, and with whom?” It augments human judgment by clearing away the clutter of the past, making space for a clearer vision of the future.
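Here is a minimal sketch of what this strategic forgetting can look like in code, assuming a hypothetical interaction log: each signal's weight decays with its age, so a recent expression of interest outweighs a stack of decade-old records. The half-life is an illustrative choice, not a standard.

```python
# A minimal sketch of "active forgetting": an engagement score where each
# interaction's weight decays exponentially with age. The half-life and
# the interaction logs below are hypothetical illustrations.
from datetime import date, timedelta

HALF_LIFE_DAYS = 180  # hypothetical: after ~6 months a signal counts half as much

def engagement_score(interactions, today=None):
    """Sum interaction weights, discounting each by its age."""
    today = today or date.today()
    return sum(weight * 0.5 ** ((today - when).days / HALF_LIFE_DAYS)
               for when, weight in interactions)

# (date, base weight) pairs -- e.g. an email open vs. a gala attendance.
alex = [(date.today() - timedelta(days=3300), 5.0),   # a decade of dormancy
        (date.today() - timedelta(days=3000), 5.0)]
bea  = [(date.today() - timedelta(days=60), 1.0),     # quietly waking up
        (date.today() - timedelta(days=14), 3.0)]

print(f"alex: {engagement_score(alex):.3f}  bea: {engagement_score(bea):.3f}")
```

The design choice is the point: the system does not delete history; it simply refuses to let all of it weigh the same.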
But this vision of technologically assisted prudence comes with a profound ethical risk. If the machine is to practice the dialectic of remembering and forgetting, we must ask: who teaches it what is essential? AI models learn from the historical data we provide. If our history is one of bias, if we have consistently focused our attention on certain demographics while overlooking others, the AI will learn this prejudice as a core principle. It will learn to remember the patterns of the privileged and to actively forget the potential of the marginalized. This is not prudence; it is the amplification of our own blind spots, a high-tech version of the same old biases, now given the veneer of objective, data-driven authority.
The solution to this ethical dilemma, however, is found back in the philosophy that revealed it. For Kierkegaard, the inverse dialectic of forgiveness was not a cold, mechanical process. It was conditioned by a profound ethical commitment: the duty to presuppose love in the other. This is the beautiful, generative core of his thought. The act of forgiveness, of strategic forgetting, is not possible unless you first remember a commitment to the inherent worth and potential of the person before you. This presupposition of love is the anchor. It is the essential thing you must hold onto, and it is this act of remembering that gives you the moral clarity to forget their transgression.
When we translate this to our work with AI, it offers a powerful path forward. To mitigate the risk of amplifying bias, we must consciously and deliberately build this ethical presupposition into our process. We must begin not with the data, but with a foundational commitment to the potential of every person our organization serves. This commitment becomes the essential thing our AI is taught to remember. When we start from the premise that worth and potential are universal, it forces us to confront the biases in our own data.
If we presuppose potential in everyone, but our AI only "finds" it in one demographic, we are forced to conclude that our historical data is flawed, not the people. Or, more significantly, if the data really is sound, then we must conclude that there has been a real problem in our past actions and decision-making. In that case, the AI reveals a bias that has been there all along, and that revelation becomes a call to action for change. Either way, this ethical commitment, this "presupposition of love," becomes the corrective lens through which the AI must see the world. It is how we ensure that our Insight Engines are not just powerful, but also prudent.
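One way to operationalize this is sketched below, under hypothetical group labels, scores, and a threshold that is a judgment call rather than a standard: treat the presupposition of universal potential as a testable expectation, and flag any model whose scores skew sharply by group.

```python
# A minimal sketch of turning the "presupposition of potential" into a
# check: if potential is evenly distributed, a model whose scores skew
# heavily by group is evidence of a flawed history, not flawed people.
# Group labels and scores are hypothetical.
import pandas as pd

scored = pd.DataFrame({
    "group":           ["A", "A", "A", "B", "B", "B"],
    "potential_score": [0.81, 0.74, 0.69, 0.32, 0.28, 0.41],
})

by_group = scored.groupby("group")["potential_score"].mean()
gap = by_group.max() - by_group.min()

print(by_group)
if gap > 0.2:  # threshold is a judgment call, not a standard
    print(f"Score gap of {gap:.2f}: audit the training data and our past decisions.")
```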
This is what it means for the Insight Engine to see the whole person. It’s not an automatic function of the technology; it’s a direct result of embedding our mission into the AI’s instructions. The engine finds the “hidden gem” not just because it can connect disparate data points, but because we have taught it what to look for. We have told it that our mission is to foster scientific innovation, so it remembers to pay attention to someone who opens every email about the new science facility. We have told it that community engagement is a core value, so it remembers to elevate the profile of someone who attends every gala. The AI finds the signal because we, guided by our mission, have defined what the signal is. This transforms the process from mere pattern-matching into a partnership, where technology becomes an extension of our deepest-held values.
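In practice, "teaching it what to look for" can start as something as humble as a set of mission-derived signal weights. The sketch below is purely illustrative; every signal name and weight is a hypothetical stand-in for values an organization would define for itself.

```python
# A minimal sketch of encoding mission into signal definitions. The
# names and weights are hypothetical examples, not a real schema.
MISSION_SIGNALS = {
    # mission: foster scientific innovation
    "opened_science_facility_email": 3.0,
    # core value: community engagement
    "attended_gala": 2.5,
    # generic activity matters less than mission-aligned activity
    "opened_newsletter": 0.5,
}

def signal_strength(events):
    """Score a supporter's history using mission-weighted signals only."""
    return sum(MISSION_SIGNALS.get(event, 0.0) for event in events)

print(signal_strength(["opened_science_facility_email"] * 4 + ["attended_gala"]))
```

The weights are not discovered in the data; they are declared from the mission, which is precisely the point.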
From Raw Data to Human Motivation
The true power of these new tools lies in their ability to translate raw data into human motivation. The filing cabinet provides a way to remember what people did. The Insight Engine, in contrast, helps us forget what is not essential about their actions so that we can finally see why they did it.
For Kierkegaard, these practices of remembering and forgetting were expressions of the “single individual.” This is a crucial concept. The “single individual” is not an isolated, atomistic self, but rather a person who takes ultimate, subjective responsibility for their existence. This individual stands in contrast to “the crowd” or “the public”—an anonymous mass where responsibility is diffused and thinking becomes generic. For the single individual, general truths are not enough; they must be existentially appropriated, lived into, and made real through personal commitment. The dialectic of remembering and forgetting, therefore, is not a mere cognitive exercise; it is the deeply personal, ethical work of the single individual shaping their own consciousness and relationship to the world.
Because of this, the scale at which Kierkegaard imagined applying the concepts of remembering and forgetting is fundamentally different from what I have imagined here in terms of AI. His focus was on the inner life of one person, the single individual taking responsibility for their own consciousness. The work was intensely personal. An Insight Engine, however, operates on the collective memory of an entire community, because a community’s memory is not just held in the minds of its members; it is encoded in its shared artifacts and records. In a modern organization, the CRM is the primary repository for the story of the community’s interactions—every gift, every email opened, every event attended. Each data point is a fragment of a collective story, a recorded memory of a relationship between an individual and the community.
The AI, therefore, doesn't have a personal consciousness to shape; it has the collective memory of a community to analyze. This shift in scale doesn't eliminate the ethical burden of prudence; it transfers it. But it is crucial to recognize that the collective memory of the CRM is not the same as Kierkegaard's "public." The public is an abstraction, a phantom responsible for the kind of diffused, generic thinking that the single individual must resist. The CRM, in contrast, is a repository of specifics. It is a recorded, granular history of actual relationships and interactions. This specificity is what makes it so powerful, but also so dangerous.
The responsibility for prudence is transferred to the single individuals who design and direct the system, because they are not dealing with a vague public, but with the specific, recorded memory of their community. They must become the prudent, ethical consciousness for this new kind of collective memory, deciding what to remember and what to forget. The AI then becomes the powerful tool that executes this act of strategic forgetting across thousands of data points, allowing the organization to act with a kind of collective, systemic prudence that could never be realized at the scale of the single individual.
This last point is important. AI “remembering” and “forgetting” make possible a level of understanding that is impossible to achieve manually at scale. You can’t read 10,000 survey responses and hold all the nuance in your head. But an AI can. And armed with that insight, you can finally speak to people as they are. At its best, the AI lets you stop shouting at a crowd and start having thousands of quiet, individual conversations that respect each member of your community as the single individual they are. You are no longer just managing data; you are stewarding relationships based on a deeper understanding of what drives the human heart.
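As a rough sketch of what holding that nuance can look like, assuming the OpenAI Python client with an illustrative model name and prompt: thousands of free-text responses can be passed to a language model that surfaces the motivations running through them. The `responses` list is a hypothetical placeholder.

```python
# A minimal sketch of theming free-text survey answers at scale. Assumes
# the OpenAI Python client; the model name, prompt, and responses are
# illustrative choices, not a prescribed setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

responses = [
    "I give because my daughter was treated here.",
    "The science outreach nights changed how my son sees school.",
    # ...imagine 10,000 of these
]

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Identify the distinct human motivations "
         "behind these supporter survey responses, with a count for each."},
        {"role": "user", "content": "\n".join(responses)},
    ],
)
print(completion.choices[0].message.content)
```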
The temptation with these new tools is to focus only on what they can do for us—find the donor, write the email, optimize the campaign. But the deeper transformation happens when they change what we can do for others. When our tools move from mere storage to active insight, they don’t just amplify our efficiency; they have the potential to amplify our empathy. This power brings with it a profound new responsibility. The real question isn’t, “How can we use these engines to better achieve our goals?” The question we must now ask is:
If we truly had the power to understand the people we serve, to anticipate their needs, to see their hidden potential, and to hear their unspoken motivations, what new responsibilities would we have to them?
If you are looking for help with turning your AI ideas into practical realities or deciding how to develop an AI strategy for your organization, feel free to contact me.