One of the inherent weaknesses of Large Language Models (LLMs) like GPT-4 is their struggle to maintain context and conduct complex reasoning over an extended interaction. Increasingly, vector stores are being employed as a short-term memory solution, serving as quick look-up tables to bridge this gap. However, vectors fall short in capturing the relational complexities between data points. With a nod to symbolic AI, there is increasing relevance for graphs – data structures that consist of nodes connected by edges, designed to model intricate relationships and hierarchies.
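The contrast can be made concrete with a toy sketch (the facts, embeddings, and relation names below are purely illustrative): a vector store retrieves the nearest fact to a query, while a graph's explicit edges support multi-hop relational questions that a flat similarity lookup cannot express directly.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Vector store: each fact is an embedding; retrieval is nearest-neighbour.
facts = {
    "Ada Lovelace wrote the first algorithm": [0.9, 0.1, 0.0],
    "Charles Babbage designed the Analytical Engine": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]
best = max(facts, key=lambda f: cosine(query, facts[f]))

# Graph: explicit typed edges let us traverse relationships between entities.
graph = {
    "Ada Lovelace": [("collaborated_with", "Charles Babbage")],
    "Charles Babbage": [("designed", "Analytical Engine")],
}

def neighbours(node, relation):
    return [t for r, t in graph.get(node, []) if r == relation]

# A two-hop relational question: what did Ada's collaborator design?
collaborator = neighbours("Ada Lovelace", "collaborated_with")[0]
designs = neighbours(collaborator, "designed")
```

The vector lookup returns whichever single fact is closest; the graph traversal composes relationships, which is exactly the capability the vector representation lacks.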
In the context of powerful AI models, the knowledge captured by graphs represents not only facts and relationships but also societal memory, shaping how machines process and interact with human culture, ethics, and values.
The simplest approach is to use static (i.e. prepopulated) structures. These graphs offer the relational context that vectors can’t, but they don’t shift to match new contexts or adapt and learn. As the technology matures, the complexity of the data we wish to represent increases, and our desire for a generalised neuro-symbolic approach grows, we might see a push for dynamic, hybrid graphs that evolve and learn from ongoing interactions, forming new representations of accepted knowledge.
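The distinction might be sketched as follows (class names and the curation gate are assumptions for illustration, not an established design): a static graph is fixed at curation time, while a dynamic one merges in new facts observed during interaction, subject to some form of oversight.

```python
from collections import defaultdict

class KnowledgeGraph:
    """A static, prepopulated graph: fixed once the curators hand it over."""

    def __init__(self, edges=()):
        self.adj = defaultdict(set)
        for head, rel, tail in edges:
            self.adj[head].add((rel, tail))

    def query(self, head, rel):
        return {t for r, t in self.adj[head] if r == rel}

class DynamicKnowledgeGraph(KnowledgeGraph):
    """A dynamic variant that learns new edges from ongoing interactions."""

    def observe(self, head, rel, tail, approved_by_curator):
        # Hypothetical human-in-the-loop gate: only integrate facts
        # that pass whatever curation policy is in force.
        if approved_by_curator:
            self.adj[head].add((rel, tail))

static = KnowledgeGraph([("Paris", "capital_of", "France")])

dynamic = DynamicKnowledgeGraph([("Paris", "capital_of", "France")])
dynamic.observe("Lyon", "located_in", "France", approved_by_curator=True)
dynamic.observe("Lyon", "capital_of", "France", approved_by_curator=False)
```

The static graph can only ever answer what it was seeded with; the dynamic graph grows, but every new edge passes through a gate, which is where the curatorial power discussed below resides.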
This shift from vectors to static and dynamic graphs is not just a technological leap; it’s also a shift in power dynamics, creating new, pivotal roles for the curators and administrators of the stored knowledge graph.
The initial curators aren’t just data input specialists; they are gatekeepers of an AI’s foundational knowledge. Their role might involve both manual curation – hand-picking and validating information – and overseeing automated processes, which might involve algorithms that gather vast amounts of data at scale. Both come with their unique challenges. Manual curation ensures a level of accuracy and intentionality, but is time-consuming and might inadvertently introduce human biases. Automation, on the other hand, can efficiently populate the graph with vast datasets but might also bring in noise, errors, or unanticipated biases from the sources it pulls.
The power that these curators wield is immense. A seemingly minor decision on which data to include or exclude, or how to structure relationships within the graph, can significantly influence the AI’s decisions, behaviors, and even its understanding of societal norms and values. Their choices set the initial trajectory for the AI, determining its capabilities and limitations. This initial framework becomes the lens through which the AI views and interprets the world, and reshaping or correcting it later can be a daunting task.
While initial curators set the foundation, the administrators of dynamic updates guide the AI’s growth and evolution. They are the gardeners tending to a rapidly growing tree, pruning here, nurturing there, ensuring balanced growth. They oversee the continuous feedback and learning cycles that allow the AI to adapt to new information and changing environments. Their tools include automated learning algorithms, but importantly they decide what data the AI is exposed to, how it should be integrated into the existing graph, and when anomalies or potential biases should be addressed.
Their power lies in the subtleties of these decisions. For example, how do they prioritize new data? Do they allow the AI to adapt rapidly to trending topics or maintain a more stable, longer-term perspective? How do they handle conflicting information or potential misinformation? And most importantly, how do they ensure the AI’s alignment with desired ethical standards and societal values while it continuously evolves?
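The questions above could plausibly be encoded as an update policy. The sketch below is a hedged illustration of what such a policy might look like; every rule in it (the trust scores, the recency window, the conflict-resolution heuristic) is an assumption chosen for clarity, not a standard practice.

```python
from datetime import datetime, timedelta

# Hypothetical trust scores an administrator might assign to source types.
TRUSTED_SOURCES = {"peer_review": 0.9, "news": 0.6, "social": 0.3}

def should_integrate(fact, existing_fact, now):
    """Decide whether a newly observed fact enters the knowledge graph."""
    trust = TRUSTED_SOURCES.get(fact["source_type"], 0.0)
    if trust < 0.5:
        # How do we handle potential misinformation? Here: drop
        # low-trust sources outright.
        return False
    if now - fact["observed_at"] > timedelta(days=365):
        # Rapid adaptation vs stability? Here: ignore stale observations.
        return False
    if existing_fact and existing_fact["claim"] != fact["claim"]:
        # Conflicting information: only replace an existing claim when
        # the new source is strictly more trusted.
        return trust > TRUSTED_SOURCES.get(existing_fact["source_type"], 0.0)
    return True

now = datetime(2024, 1, 1)
new_fact = {"claim": "X", "source_type": "peer_review",
            "observed_at": datetime(2023, 6, 1)}
old_fact = {"claim": "Y", "source_type": "news"}
accepted = should_integrate(new_fact, old_fact, now)
```

Each branch of this function is a policy decision with downstream consequences for what the AI comes to believe, which is precisely why the administrators who write such rules hold so much quiet power.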
Mismanagement or pursuit of innovation above investment in safety at this stage can lead to the AI drifting from its intended purpose or amplifying existing biases in its knowledge base. Further, there’s a risk of the AI developing interpretations or behaviors that are out of sync with human values and societal norms.
A Call for National Involvement
There’s an urgent need for countries to take an active role in this evolution. National governments must ensure that the unique knowledge and culture of their diverse societies are adequately represented and have a voice. Without this active participation, there’s a real risk of erasing the richness of our distinct societies, as a monochrome view reflecting only dominant cultures or viewpoints becomes the oracle on which the next generations of AI are built.
By proactively understanding and representing our cultures, nations can shape the evolution of these technologies in a way that respects and preserves their unique heritage. It’s not just about avoiding biases, but about ensuring that the foundational models of the future include the specific domains of traditional knowledge, localised ethical considerations, and societal norms that make each culture unique. The bedrock of AI understanding must preserve the richness of humanity.
The Way Forward
The evolution towards using pre-populated and dynamically updated graphs in AI architectures is poised to give unprecedented power to the roles of the curators and administrators. As this technology becomes an integral part of our lives, these roles won’t just be about technical proficiency; they’ll be positions of social, ethical, and even political significance.
To ensure we’re heading in a direction that respects and embraces the pluralistic world we live in, it’s critical to have a diversified, multidisciplinary approach to curation and administration. From ethicists and social scientists to technologists and policymakers, it’s going to take a village to raise this new breed of AI responsibly.
To the curators and administrators of tomorrow, the spoils are immense, but so are the responsibilities. And to our national governments, the call is clear: take an active role in shaping this future, or risk being shaped by it.