Our Post-AI Future 3/4

Navigating Our Societal Multiverse

The previous articles highlight the economic, governance and information integrity challenges emerging in our post-AI era, along with some possible solutions. To chart a path toward a positive outcome we need a framework for thinking about how these factors combine into whole systems, and for identifying system archetypes: the different combinations of economic model, governance model and information environment that could emerge as AI transforms society. We can imagine a 3-dimensional space with three axes:

·       Economic model: Ranging from private/market-driven at one end to public/commons-driven at the other. Do we double down on free-market capitalism, or move to more collective ownership and provision?

·       Governance model: Ranging from representative democracy, with elected officials and top-down decision-making, to a more deliberative or participatory democracy, with bottom-up, frequent citizen deliberation and direct involvement.

·       Information integrity: Ranging from low integrity, with an information space that is polluted, manipulated and “post-truth”, to high integrity, with an information space that is trustworthy, transparent, and broadly disseminated without manipulation.

If you treat each of these three dimensions as binary, you get a cube with eight possible combinations: eight archetypal “post-AI society” scenarios. To make them memorable, let’s give each corner of the cube a name:

Societal End-State Cube
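The cube can be enumerated mechanically. Here is a minimal sketch in Python that maps each corner, i.e. each (economy, governance, information) combination, to the archetype named in this article; the axis labels themselves are shorthand chosen for illustration:

```python
from itertools import product

# The three binary axes of the Societal End-State Cube
ECONOMY = ("private/market", "public/commons")
GOVERNANCE = ("representative", "deliberative")
INFO = ("low integrity", "high integrity")

# Archetype names keyed by (economy, governance, information) corner
ARCHETYPES = {
    ("private/market", "representative", "low integrity"): "Manipulated Marketocracy",
    ("private/market", "representative", "high integrity"): "Liberal Dividend State",
    ("private/market", "deliberative", "low integrity"): "Illusory Participation",
    ("private/market", "deliberative", "high integrity"): "Chartered Pluralism",
    ("public/commons", "representative", "low integrity"): "Technocratic Paternalism",
    ("public/commons", "representative", "high integrity"): "Wellbeing Welfare Republic",
    ("public/commons", "deliberative", "low integrity"): "Commons Captured",
    ("public/commons", "deliberative", "high integrity"): "Democratic Commons",
}

# Walk every corner of the 2x2x2 cube and print its archetype
for corner in product(ECONOMY, GOVERNANCE, INFO):
    print(f"{ARCHETYPES[corner]}: {', '.join(corner)}")
```

The point of the sketch is simply that the eight scenarios are not an arbitrary list: they are the exhaustive set of corners generated by three binary choices.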

So, what does each of these alternative post-AI societies look like, and would we want it to be a world we and our children inhabit? Let’s explore each in a bit more detail:

Manipulated Marketocracy:

Imagine an exaggerated version of today, where Big Tech and corporate giants run a highly privatised economy dominated by winner-take-all AI platforms. Politics remains formally democratic, but this is largely illusory: because information integrity is low, public opinion is steered by constant algorithmic manipulation and propaganda. The government might still be nominally representative and elections will happen, but as one analysis put it, “all that will be left is the ritual and theatre of voting”. Meanwhile, social stability is maintained by a basic universal income or frequent “bread-and-circus” distractions. This scenario is frighteningly plausible if we do nothing. It is essentially a plutocracy-by-algorithm. While it is economically productive, since AI churns out plenty of goods, it tramples on human agency and is potentially politically unstable in the long term. Citizens may eventually realise the emperor has no clothes, that there is no real democracy, and rebel; or society may slide into authoritarianism. Innovation could continue, but directed to whatever maximises the oligarchs’ power, not public wellbeing.

Liberal Dividend State:

In this scenario, the market economy stays mostly private and competitive. We don’t nationalise Amazon or ban private enterprise; innovation and entrepreneurship continue, but the state heavily redistributes AI’s bounty via public dividends, e.g. a robust UBI, public investments, etc. Think of it as a “Norwegian oil fund” model enabled by the profits from AI: government taxes the tech sector or holds equity in AI firms and sends cheques to citizens. Governance remains the familiar representative democracy, and, importantly, the information sphere is healthy, with a high degree of integrity, strong journalism, an educated populace and rules to keep deepfakes at bay. So people get accurate information, vote in accountable governments and hold those governments to account. This tries to ensure everyone benefits from AI-driven wealth, preventing a huge inequality spiral; it is basically an attempt to update the 20th-century welfare state for the 21st. It is quite realistic in the near term: politically, many countries might take this path as pressure builds, and we already see calls for extra taxes on robot taxis. It would alleviate poverty and keep consumer economies working; however, it might not solve the purpose problem. If people have income but no opportunity to contribute meaningfully, social cohesion and individual fulfilment could erode. In turn, this risks “moral drift”: freed of the economic consequences of adopting unproductive moralities, society sets the conditions for the most viral moralities to spread and come to dominate; imagine the rise of narcissistic influencers on steroids, unconstrained by the fact that today most are not economically viable. There is also the risk of a calcified status quo, with a small elite who permanently own the productive AI assets while everyone else is locked into being a dependent consumer; an almost feudal regime.
This is much better than a “Marketocracy”, but perhaps not the end state we aspire to.

Illusory Participation:

Here we have a society that, on paper, has moved towards deliberative governance: perhaps there are town hall apps, lots of referenda or citizens’ assemblies, so it is more participatory than old representative systems. However, information integrity is low, which means those participatory processes are constantly derailed or co-opted by misinformation, troll armies and polarisation. Economically, this scenario stays on the market/private side; as the name “Illusory Participation” implies, we haven’t really changed who owns what. It might still be big companies and global markets calling the shots on production and jobs; we’ve just slapped a layer of public engagement on top. This is a cautionary tale. Imagine a government launches an online platform for citizens to weigh in on policy (like Decidim or vTaiwan) but doesn’t pair it with any disinformation countermeasures. Very quickly, bots and extremist groups could flood it, or conspiracy theories could dominate the discourse, leading to policy outcomes that are ill-informed, or majorities swayed by demagogues. It might give the illusion of democratic empowerment while actually furthering chaos, or populist capture by powerful corporates or political cliques. In the long run, this could discredit participatory democracy: “Oh, we tried involving citizens, it turned into a dustbin fire”. So this archetype highlights that deliberation without information integrity leads to trouble. Real-world examples might be some of the social media “town halls” we’ve seen: lots of noise, little signal. The positive is that at least there is an impulse to involve citizens; the fix would be to implement mechanisms that drive this type of society towards the higher information integrity corner of the cube.

Chartered Pluralism:

In this scenario we move to the private-economy, deliberative-governance, high-integrity corner of our 2x2x2 cube. It could be described as a reimagined liberal democracy that keeps a market economy but deeply integrates participatory governance, all within a high-integrity information space. The term “chartered” reflects the creation of new institutions, perhaps chartered citizen councils or social charters that set responsibilities for corporations, while “pluralism” implies multiple centres of power offering decentralised and diverse voices. Picture a strong civil society with platform cooperatives, neighbourhood assemblies feeding into city decisions, and multi-stakeholder governance of data or AI. The economy is still largely market-driven, but these markets are shaped by values through democratic input; for example, perhaps there is a “charter” for AI companies that binds them to community-drafted ethical rules, reminiscent of how medieval charters granted rights and set duties. Potentially this is a very healthy scenario. It is like current liberal democracy but upgraded: power is more distributed, with not just elected politicians and CEOs but citizens and workers having channels to influence decisions regularly, and the information environment is sane enough that debate is about facts and trade-offs, not QAnon fantasies. It is also pragmatic in terms of the steps needed to create it: many of the governance innovations discussed in the earlier article push in this direction, e.g. citizens’ assemblies and participatory budgeting. Its long-term value is rooted in its potential to preserve the dynamism of markets and personal freedoms while addressing democracy’s accountability problem and the need for tech oversight.
The challenge is whether it can tackle huge inequalities and potential mass joblessness, since it doesn’t inherently change ownership; however, it could rely on tax and redistribution to support those without work. Alternatively, it could be combined with something like the Liberal Dividend State’s UBI to yield an enlightened “digital social charter”; a new Magna Carta for the digital age. This is a corner many reformers would be quite happy with: it is incrementalist but in a radical way, layering deliberation and commons governance onto our existing systems.

Technocratic Paternalism:

Now we shift to the commons/public economy side but keep representative governance and low information integrity. This archetype might emerge if the threat of AI-driven collapse pushes a strong central authority to take over much of the economy “for the public good,” but without really fixing the flow of information or citizen empowerment. Think of a scenario where, as jobs disappear, governments nationalise major industries or AI platforms and provide lots of welfare such as free services, basic income, etc. Here power concentrates among a small elite of experts or bureaucrats who say, “trust us, we’ll take care of you.” The public might even accept this initially, out of fear or because misinformation has made public deliberation too toxic, so they cede control to “the adults in the room.” It’s “technocratic” because decisions are made by supposed experts supported by AI systems, and “paternalism” because citizens are treated a bit like children who need looking after, not co-creators. This feels like a dangerous dead-end. While a commons-based economy consisting of co-ops, nationalised assets, etc. could be non-despotic under transparent, democratic control, in a society with low information integrity and weak accountability it is likely to slide into authoritarianism. A real-world analogue might be China’s system, albeit China is more authoritarian than representative. They have a state-driven economy in many sectors and rely on heavy-handed control of information through censorship and propaganda. Such a society can be “efficient” in building infrastructure or rolling out tech, like AI surveillance, and arguably ensures material needs are met, but it utterly lacks the liberal values of free inquiry and individual rights. In a crunch, even democratic countries could lean this way e.g., a state of emergency where government takes over AI firms and implements strong censorship to combat disinformation, saying it’s for society’s own good. 
Short-term, it might stabilise things, but it sacrifices liberty, and with it innovation, in the long run. A society of passive recipients and censored media is unlikely to be truly happy, as there is only the weakest of mechanisms to align the actions of the ruling elite with the true interests of those they govern. And if information integrity is low, even the technocrats may make bad decisions, because they, too, consume a distorted information diet. So Technocratic Paternalism is a false promise: security and order at the cost of freedom and truth; it sounds like a Black Mirror episode.

Wellbeing Welfare Republic:

This combination has a commons/public-leaning economy, representative democracy and high information integrity. You might think of it as a kind of Nordic social democracy on steroids, or perhaps New Zealand in a decade or two: the state and cooperatives/non-profits play a big role in providing services and maybe even owning key assets, like nationalised AI utilities. The goal of the economy is explicitly public wellbeing. It is a “Republic” in the sense that it remains largely a representative democracy, not one running citizens’ assemblies every week, but thanks to a healthy information ecosystem, quality journalism, an educated citizenry and transparent algorithms, the electorate is well-informed and holds leaders accountable. Policies geared toward welfare include support for art, sport, health, education, the environment and community action. This is a scenario where something like Bhutan’s Gross National Happiness or New Zealand’s Wellbeing Budget is fully realised at scale: success is measured in median wellness, not shareholder value. In many ways this is quite an attractive scenario, especially for those who favour a strong social safety net and communal provision. It might emerge in countries that already have robust welfare states, as they incorporate AI productivity: essentially expanding public services to include free basic housing, free transport, universal healthcare and education. People might work less in formal jobs, since the economy’s productive core might be partially automated and publicly owned, but they benefit from rich public goods. High information integrity means government and media are transparent, mitigating corruption and propaganda. The realism of this scenario is perhaps medium: it requires high trust in government and a culture of solidarity. Scandinavia, perhaps, could evolve this way. Long-term, such a society could be stable and happy, though critics might worry about whether it remains dynamic.
Innovation could actually thrive with the right state support, but there’s a risk of stagnation if bureaucracy is high and the representative welfare state becomes paternalistic. Still, among the eight archetypes, this is one many would prefer over the status quo, as it doubles down on the common good and preserves individual agency.

Commons Captured:

This scenario is a heartbreaker: a commons-based economy with deliberative governance, but low information integrity. It is basically the nightmare of idealism gone awry. Imagine we push power to the people, with lots of cooperative ownership, community-run services and citizen assemblies making local decisions, but the information space is riddled with disinformation and manipulation. What happens? Likely, those commons institutions get captured by demagogues or special interests who exploit the confusion. It could start with the best intentions: say a country replaces big corporations with community co-ops and sets up frequent referendums and forums. If those communities are then flooded with conspiracy theories, or influenced by a charismatic misinformation peddler, they could vote or decide in ways that undermine the very spirit of the commons. You could end up with local oligarchs, as people in different areas fall for different false narratives. This underscores that democratising ownership and governance is not sufficient if you don’t also safeguard truth and knowledge. Historical revolutions sometimes followed this path: power was given to “the people” but soon seized by a faction or strongman who controlled propaganda. Exactly this effect was seen in the alternative communities that arose in California in the 1960s. This scenario is a failed utopia. It might be short-lived, or transition into something like Technocratic Paternalism or even authoritarian rule, as the chaos wrought by bad information forces someone to “restore order”. The lesson is that high citizen power requires high civic education and reliable information; otherwise it can blow up.

Democratic Commons:

Finally, the shimmering star on the hill: public/commons economy, deliberative governance, high information integrity. This is arguably the “Democratic Commons” vision that many futurists and activists strive for. But is it a chimera? It means the core economic resources, whether data, AI, land, or capital, are managed as commons, not necessarily all state-owned, but owned by communities, cooperatives, or held in trust for the public. Governance is deeply democratic at all levels, with citizens actively participating in shaping policies and maybe even enterprise decisions; think workers’ co-ops, participatory planning. And our information environment is enlightened: journalism flourishes, education enables critical thinking, digital spaces are moderated for truth and civility, and AI is used to assist fact-checking and reasoned debate rather than to manipulate. It sounds utopian. This is basically the opposite of what we have now: instead of a few billionaires owning half the economy, wealth might be distributed or socialised; instead of voters tuning out only every four years, civic engagement is frequent and substantive; instead of doomscrolling through junk information, people have trustworthy feeds and open data at their fingertips. However, is this achievable? Possibly in slices. We see glimpses: Wikipedia is a global knowledge commons run democratically with largely accurate information; a tiny but real democratic commons in information. Some cities, like those using participatory governance or cooperatives widely, are inching toward it locally. Scaling it up is the challenge. It would require a cultural shift towards cooperation and high social trust, and likely new tech tools to coordinate complex decisions among millions of people. However, if AI can help with anything, it might be managing this complexity. Personal AI assistants could help citizens understand issues and solutions, making widespread deliberation feasible. 
The Democratic Commons is high in long-term value, but it is sustainable only if people feel ownership and purpose, which reduces the risk of revolt or despair; and it naturally aligns innovation with what people actually want. Unfortunately, it is the furthest of the scenarios from the status quo, though not impossible as an evolutionary endpoint if many of the positive experiments discussed in the earlier articles, around UBI, cooperatives, digital democracy and anti-disinformation regimes, were to merge over time.

These eight archetypes aren’t destiny; think of them as eight corners of a maze we are collectively living in. The purpose of this 3D model is to help us name our choices. We can ask: do we want more private competition or more commons collaboration in our economy? Do we want to stick with just voting for politicians, or involve citizens more directly? And how much are we willing to invest in protecting the truth and an informed citizenry? Each choice nudges us toward one of these corners, and the cube makes it more obvious toward which corner we are being nudged. Importantly, not all corners are equal morally or practically. Some, like the Democratic Commons, sound aligned with liberal values of freedom, equality and community, whereas others, such as the Manipulated Marketocracy, sound like liberalism’s nightmare. But by sketching even the nightmares, we are reminded that without conscious effort we could slide there. The cube also reveals some counter-intuitive combinations: a high-tech welfare state that is quite paternalistic (Technocratic Paternalism), or a noisy quasi-anarchy of participatory decision-making gone wrong (Illusory Participation). It underscores that information integrity is the z-axis we can’t ignore: it elevates or undermines any economic/governance mix.

Where are we today in this cube? Many would argue we are drifting in the direction of the Manipulated Marketocracy in many countries, with a tug toward the Liberal Dividend State as awareness of inequality grows. But participatory governance is rising in some places, and misinformation is being tackled in others. The end state is up for grabs. Which brings us to the final article: how do we deliberately steer toward the better outcomes, the ones that keep the liberal spirit alive and well in the age of AI?
