Our Post-AI Future 2/4
Rewiring Democracy for the 21st Century
Today’s model of representative democracy is visibly straining under the pressures of the AI era. Filter bubbles, echo chambers, political manipulation such as Cambridge Analytica, and the politicisation of the media all seem to be making society more polarised and less deliberative. We’ve trapped our politicians in the attention economy, and are now asking why they focus on emotive slogans rather than analysing solutions to our collective problems. This has led many innovators to ask: can new tools make democracy more open, participatory, and deliberative? Around the world, cities and nations have begun to experiment with digital platforms, sortition-based councils, and novel voting methods to deepen citizen engagement and counteract increasingly centralised power. In this article we’ll take a look at some of these innovations in governance.
Digital Democracy: Grassroots, Not Astroturf
We’ll start in Estonia, as no country has embraced digital governance at its scale, earning it the nickname “e-Estonia.” Every Estonian has a secure e-ID card enabling access to virtually all government services online. You can vote, pay taxes, sign contracts, and check medical records, all with a few clicks. Crucially, the system is backed by blockchain-like data integrity and transparency: citizens can see who has accessed their information. This digital backbone builds trust and saves an estimated 2% of GDP in bureaucratic costs each year. Estonia’s e-voting, introduced in 2005, allows citizens to vote from anywhere in the world; in the 2019 election, 44% of votes were cast online. The convenience is obvious, but the larger point is how digital public infrastructure can empower citizens and streamline participation. In our AI future, a trusted digital identity and platforms for continuous interaction, not just periodic voting, will be key to evolving our governance at scale.
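To make the “blockchain-like data integrity” idea concrete, here is a minimal, purely illustrative sketch of a hash-linked audit log in Python. It is not Estonia’s actual KSI infrastructure, and the record fields are invented for the example; the point is only that chaining each entry’s hash to the previous one makes any later tampering detectable.

```python
# Minimal sketch of a hash-linked audit log; illustrative only,
# not Estonia's actual KSI infrastructure.
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditLog:
    def __init__(self):
        self.entries = []      # list of (record, hash) pairs
        self.head = "0" * 64   # genesis hash

    def append(self, record: dict) -> None:
        self.head = entry_hash(self.head, record)
        self.entries.append((record, self.head))

    def verify(self) -> bool:
        # Recompute the chain; a tampered record breaks every later hash.
        h = "0" * 64
        for record, stored in self.entries:
            h = entry_hash(h, record)
            if h != stored:
                return False
        return True

log = AuditLog()
log.append({"who": "Dr. Tamm", "accessed": "medical_record_42"})
log.append({"who": "Tax Board", "accessed": "income_2024"})
assert log.verify()
log.entries[0][0]["who"] = "someone else"  # tamper with history
assert not log.verify()                    # tampering is detectable
```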
Meanwhile, cities like Barcelona and Taipei are using open-source technology to crowdsource policy. Barcelona, under Mayor Ada Colau, built Decidim (“We Decide” in Catalan), a free, open-source platform for citywide participatory democracy. Through Decidim, tens of thousands of citizens have proposed and debated ideas for the city’s strategic plan and even allocated part of the budget. For example, Barcelona’s recent participatory budgeting process let residents decide how to spend €75 million, about 5% of the investment budget, on local projects. Citizens submitted nearly 10,000 proposals, discussed them on the platform and in assemblies, and then voted on which parks, schools, or services to fund. Decidim provides transparency, as every comment and vote is open, and a degree of deliberation rarely seen in conventional politics. It’s not perfect: online participation must be balanced with inclusiveness for the less tech-savvy. But it shows how open digital forums can complement representative councils.

In Taiwan, the government tapped a civic tech community called g0v (pronounced “gov-zero”) to create vTaiwan, an online consultation process that has tackled contentious issues from Uber regulation to online alcohol sales. The magic ingredient is an AI-powered tool called Pol.is. Pol.is allows mass participation but cleverly avoids flame wars: people vote on each other’s statements but cannot directly reply or dunk on each other. The system then maps out clusters of opinion and highlights consensus positions that bridge divides. In the Uber case, hundreds of drivers, users, and citizens converged on a surprising point of agreement: both sides supported a level playing field with fair regulations for ride-sharing, and that consensus informed the government’s changes to the law. Taiwan’s use of vTaiwan and Pol.is has been called “crowdLaw”; it’s not direct democracy overruling legislators, but a new way to generate legitimacy and collective intelligence for policymaking. A government minister in Taiwan described it as a way to “hear from citizens at scale without the conversation devolving into chaos,” thus improving trust.
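The clustering idea behind Pol.is can be sketched in a few lines. The toy example below (invented votes, simple k-means) is only an approximation of Pol.is’s real pipeline, which uses dimensionality reduction and more careful statistics, but it shows how “consensus” statements, those every opinion group agrees with, can be surfaced from a raw vote matrix.

```python
# Toy sketch of Pol.is-style opinion clustering: participants vote
# agree (+1), disagree (-1), or pass (0) on statements; opinion groups
# are found by clustering the vote matrix, and "bridging" statements
# are those with positive support inside every group.
import numpy as np
from sklearn.cluster import KMeans

# rows = participants, columns = statements (invented data)
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [-1,  1,  1,  0],
    [-1,  1,  1, -1],
])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# A statement bridges the divide if its mean vote is positive in
# every opinion group, not just in the overall majority.
for s in range(votes.shape[1]):
    means = [votes[groups == g, s].mean() for g in np.unique(groups)]
    if all(m > 0 for m in means):
        print(f"statement {s}: consensus across groups {means}")
```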
A big concern in any form of digital democracy is: who owns the data and the platforms? Barcelona’s “Decentralised Citizen-Owned Data Ecosystems” (DECODE) project tackled this by giving citizens tools to control their personal data. In one pilot, Barcelona integrated DECODE with Decidim, enabling residents to decide how to share data from social media and IoT sensors for the public good, under privacy rules they set themselves. The broader “digital commons” approach, championed by Barcelona’s former CTO Francesca Bria, imagines cities building open-source, privacy-friendly alternatives to Big Tech platforms. For example, city-run sharing platforms for transportation or housing could ensure data isn’t monopolised by Uber or Airbnb, and civic data trusts could let people pool their information, say on energy usage or neighbourhood conditions, for community benefit. By reclaiming digital infrastructure as a commons, cities aim to empower citizens rather than cede control to corporate “monarchs” of data. This work is nascent, mostly at pilot stage, but it’s potentially important if we want democracy to extend into the data-rich domain of smart cities and AI-driven services.
Some democracies are turning to an ancient idea with a modern twist: selecting citizens by lottery to form deliberative councils. The logic is straightforward: random selection, technically called sortition, can create a group that is more diverse and representative than one that self-selects or gets elected through money and party structures. Ostbelgien, the small German-speaking region of Belgium, has institutionalised a permanent Citizens’ Council. Established in 2019, this body of 24 randomly chosen citizens serves 18-month terms; its job is to pick topics of public concern and convene larger Citizens’ Assemblies to discuss them and propose solutions. These Assemblies are also randomly selected, each focused on a specific topic. Crucially, the parliament of Ostbelgien agreed to formally respond to and debate the recommendations of these citizens’ panels. In effect, it’s a “third chamber” of the legislature, one comprised of ordinary people deliberating in depth (with expert input) on issues like climate policy or healthcare. The Ostbelgien model has inspired similar moves in France, the UK, and elsewhere, where we’ve seen one-off Citizens’ Assemblies on climate change or constitutional questions. The results so far are promising: given time and information, randomly selected citizens come up with sensible, nuanced recommendations that are often bolder than politicians’ yet surprisingly consensus-driven. They don’t fall prey to the partisan posturing we see in elected chambers. The challenge is turning those recommendations into action, but at least in Ostbelgien there’s a mechanism to feed them into the normal law-making process. In a future where AI might handle much of the administration, we could see more use of sortition to handle the value judgments and community choices that technology can’t decide. Who knows, perhaps a “House of Citizens” will one day complement elected parliaments elsewhere, keeping governance grounded in everyday people’s perspectives.
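For the curious, here is a rough sketch of how a civic lottery might implement stratified sortition, the standard way to keep a randomly drawn panel demographically representative. The strata, population shares, and candidate pool are all invented for illustration; real processes such as Ostbelgien’s stratify on several attributes at once.

```python
# Rough sketch of stratified sortition: draw panel members at random,
# but in proportion to population strata (here just age bands) so the
# panel mirrors the public. All numbers are invented for illustration.
import random

random.seed(42)

# Hypothetical candidate pool of volunteers: (name, age_band)
pool = [(f"citizen_{i}", random.choice(["18-34", "35-54", "55+"]))
        for i in range(1000)]

population_share = {"18-34": 0.25, "35-54": 0.375, "55+": 0.375}
panel_size = 24

panel = []
for band, share in population_share.items():
    candidates = [p for p in pool if p[1] == band]
    panel += random.sample(candidates, round(panel_size * share))

print(len(panel), "members drawn:",
      {b: sum(1 for _, x in panel if x == b) for b in population_share})
```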
Beyond who gets to make decisions, how we make decisions is ripe for innovation too. One intriguing concept is Quadratic Voting (QV). In a normal vote, majority rules and intensity of preference isn’t counted; 51% beats 49% no matter how passionate or lukewarm people are. QV instead gives voters a budget of vote credits to spread among options, with the cost of each additional vote on an option increasing quadratically. In plain terms, if you care a little about every issue you can sprinkle votes around cheaply, but if you care deeply about one issue you can pile votes on it at a steeply rising cost. This allows a group to find outcomes that maximise overall satisfaction, not just majority rule. It sounds wonky, but Colorado’s state legislature actually tried a version of QV in 2019 to prioritise bills for funding. Lawmakers anonymously allocated votes to their most valued proposals, which helped reveal which bills had broad but mild support versus intense backing from a few. The experiment got mired in a transparency controversy since the votes were secret, but once made public it offered insights into collective priorities beyond party lines. A related idea, Quadratic Funding, has been used in philanthropic and community grant programmes; for instance, the platform Gitcoin uses it to distribute matching funds to open-source software projects. The formula amplifies donations that come from many people (signalling broad support) more than those that come from a single wealthy donor. In essence, $1 each from 100 people unlocks far more matching funds than $100 from one person, as the sketch below shows. Imagine city budgets or national participatory budgeting using quadratic funding to decide, say, which local arts projects or neighbourhood improvements get grants; it would encourage organisers to gather wide grassroots support, not just woo the rich or cater to the loudest activist base. These novel voting mechanisms are still experimental, but they point to a future toolkit for democracy in which we can better measure preference strength, encourage consensus, and break out of binary yes/no choices.
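Both quadratic mechanisms are easy to state in code. The sketch below uses the standard formulas (QV cost = votes squared; QF match proportional to the square of the summed square roots of contributions) and reproduces the $1-from-100-people versus $100-from-one-person comparison above; it is a worked illustration, not any particular platform’s implementation.

```python
# Worked sketch of the two quadratic mechanisms described above.

def qv_cost(votes: int) -> int:
    """Quadratic Voting: casting v votes on one option costs v^2 credits."""
    return votes ** 2

# 9 credits buy 3 votes on a single cherished issue,
# or 9 separate single votes sprinkled across many issues.
assert qv_cost(3) == 9
assert 9 * qv_cost(1) == 9

def qf_total(contributions: list[float]) -> float:
    """Quadratic Funding total: (sum of sqrt(c_i))^2 over contributions."""
    return sum(c ** 0.5 for c in contributions) ** 2

broad = qf_total([1.0] * 100)  # $1 each from 100 people
single = qf_total([100.0])     # $100 from one person
print(broad, single)           # 10000.0 vs 100.0: broad support wins
```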
From digital platforms to citizens’ assemblies to funky maths-based voting, these governance innovations seek to make democracy more deliberative and distributed, acting as an antidote to the centralising tendency of today’s AI. They aim to give citizens meaningful roles beyond being passive voters or data points, whether by co-creating policies online, being randomly enlisted for civic duty, or having more expressive voting power. In doing so, they not only counter the “centralised influence” problem, where a few actors with big data could otherwise manipulate outcomes, but also inject fresh legitimacy into policymaking. A representative legislature might fear tackling a divisive issue, but a citizens’ assembly can find a creative compromise that the public trusts because people like them crafted it. We are essentially updating the software of democracy for the post-digital age: patching bugs, adding features, and strengthening the security against manipulation for the challenges ahead.
Information Integrity Innovation: Safeguarding the Truth Commons
In the kingdom of AI, information is power, but also a major vulnerability. We’ve seen how social media algorithms, political bots, and deepfakes can undermine shared reality. If liberal democracy is a marketplace of ideas, then protecting the integrity of information, the quality of news, the authenticity of content, and the transparency of algorithms is as crucial as protecting property rights was in the old economy. Around the world, a variety of actors are experimenting with ways to ensure trustworthy information reaches citizens and to blunt the effect of disinformation and propaganda. Think of this as innovation in our “epistemic infrastructure.” This infrastructure, combined with the innovations in democracy above, lies at the heart of the improvements we need to make to preserve our democracies through the era of AI. Let’s take a look at some of the most noteworthy developments.
Governments and civil society are getting more organised in fighting fake news. Taiwan, for example, after being barraged by hostile disinformation, with rumours ranging from vaccine safety to election conspiracies, often traced to external actors, set up a nimble system dubbed “humour over rumour.” When a damaging false rumour emerges, say about toilet paper shortages, within hours government agencies push out a meme or funny clarification on social media to debunk it, using the same viral channels the rumour travels. This works because a joke or catchy graphic spreads faster among the public than a dry press release. During Covid-19, Taiwan’s approach prevented panic; one viral meme featured the Premier winking and saying “We only have one butt each, don’t hoard toilet paper,” which sharply cut panic-buying. On the more institutional side, the UK government during the pandemic quietly ran a Counter Disinformation Cell that received daily reports from social media companies and fact-checkers on trending falsehoods, helping it target public information campaigns. The UK is also home to startups like Logically, which uses AI to monitor misinformation online at scale and provides analysis to governments and platforms. Logically’s analysts and algorithms work to identify fake narratives, from QAnon to extremist propaganda, and to trace bot networks, acting as a kind of private intelligence service for truth. They reportedly flagged thousands of false Covid posts, which platforms then removed or labelled. These are early attempts to institutionalise defences against disinformation, akin to public health departments for the information ecosystem. However, without transparency and robust public oversight, such systems raise concerns about infringement of freedom of speech. This is a necessary public conversation that politicians of neither the right nor the left want to have, as there’s plenty of potential for bad PR but few votes in it; yet it remains critical to our flourishing in the age of AI.
The big social media and search companies are the unwitting monarchs of our information realm, their algorithms deciding what billions see. Innovation here is coming via policy: the EU’s Digital Services Act (DSA), which took effect in 2024, forces the largest platforms to assess and mitigate systemic risks like disinformation. It requires transparency: users will finally get some insight into why they are shown certain content. It mandates quick removal of illegal content, and there are provisions to restrict micro-targeting in political ads. Essentially, Europe is beta-testing the regulation of algorithms in the public interest, which, if successful, could become a template globally. Another idea gaining ground is requiring “source indications” or authenticity labels on AI-generated media. For instance, developers are working on cryptographic content signing, where a photo or video carries a secure certificate of origin. If widely adopted, your device could tell you whether an image claiming to be from CNN actually came from CNN, or whether it’s a deepfake. Adobe, Microsoft, and the BBC have been piloting this. Meanwhile, browser extensions and apps like NewsGuard or Botometer act as “nutrition labels” for news, flagging known misinformation sites or identifying whether a Twitter account behaves like a bot. One can imagine such schemes being broadened into an “information source trust score”, which you could instruct your personal AI to use to filter out untrustworthy information before it reaches you, based on criteria you set. This would be a form of decentralised self-censorship; after all, who wants to be lied to? None of these are silver bullets, but together they start to create an ecosystem where lies have a harder time masquerading as truth.
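Cryptographic content signing is easier to grasp with a tiny example. The sketch below signs raw image bytes with an Ed25519 key using Python’s `cryptography` package; the real provenance standards Adobe, Microsoft, and the BBC are piloting embed a richer signed manifest in the file itself, so treat this as the bare idea only.

```python
# Minimal sketch of cryptographic content signing. A newsroom signs the
# bytes of a photo; anyone with the published public key can verify the
# photo came from that newsroom and was not altered afterwards.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The newsroom generates a keypair; the public key is published.
newsroom_key = Ed25519PrivateKey.generate()
public_key = newsroom_key.public_key()

photo = b"...raw image bytes..."
signature = newsroom_key.sign(photo)  # distributed alongside the image

def verify_origin(image: bytes, sig: bytes) -> bool:
    """Check the image really came from the holder of the private key."""
    try:
        public_key.verify(sig, image)
        return True
    except InvalidSignature:
        return False

assert verify_origin(photo, signature)             # authentic
assert not verify_origin(photo + b"x", signature)  # altered after signing
```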
Other innovations are decidedly low-tech, focusing on educating the public. Finland is often cited for its national push on media literacy; after facing Russian disinformation, Finland added a curriculum in schools to teach students how to identify fake news, check sources, and understand bias. By sowing “inoculation” in the population, the Finns aim to build herd immunity to fake narratives. Early evidence shows Finnish adults are among the most resistant to false news in Europe. Other countries are copying this: Sweden and Taiwan, for example, have run nationwide media-literacy ad campaigns, including comic strips showing grandma how to verify that shocking Facebook post before sharing it. There’s also a revival of public-service journalism models: some cities and philanthropists are funding local non-profit newsrooms to fill the void left as local newspapers die. This ensures communities have at least one reliable source of factual reporting on local affairs, which also reduces susceptibility to rumours. Experiments in community content moderation are happening too. Reddit’s community moderation system, imperfect as it is, often manages quality in niche forums better than AI alone. Twitter (now X) has introduced Community Notes, a feature where approved volunteer contributors can add contextual notes to misleading tweets; the notes are only shown if contributors from different viewpoints rate them as helpful. It’s a kind of crowdsourced fact-check attached to virality, and interestingly, it has cut down on some political misinformation by adding a layer of peer accountability.
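The “different viewpoints must agree” rule can be illustrated with a toy gate like the one below. The real Community Notes algorithm infers viewpoints from rating history via matrix factorisation rather than using explicit labels; the labels and threshold here are invented to show the bridging intuition.

```python
# Toy version of the bridging rule: a note is shown only if raters from
# *every* viewpoint group find it helpful, so agreement must cross, not
# merely reinforce, the partisan divide. Data and labels are invented.

# (rater_viewpoint, found_helpful) ratings for one candidate note
ratings = [
    ("left", True), ("left", True), ("left", False),
    ("right", True), ("right", True), ("right", True),
]

def show_note(ratings, threshold=0.6):
    groups = {viewpoint for viewpoint, _ in ratings}
    for group in groups:
        group_votes = [h for v, h in ratings if v == group]
        if sum(group_votes) / len(group_votes) < threshold:
            return False      # one camp doesn't find it helpful: hide it
    return len(groups) > 1    # require agreement across multiple camps

print(show_note(ratings))  # True: both camps rate the note helpful
```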
It's notable that some jurisdictions are enacting laws specifically addressing AI-manipulated media. China now requires deepfake content to carry a clear label. California has an election law that makes it illegal to distribute deceptive deepfakes of candidates within 60 days of an election. Bots are another target: several countries are mulling a “bot or not” rule, stipulating that automated social accounts should be identified as such. The idea is to let people know whether they’re engaging with a human or a script. Even if the bot isn’t malicious, the transparency helps users calibrate trust.
Broadly, the goal of all these information-integrity efforts is to strengthen the immune system of the body politic. Open democracies depend on a shared baseline of reality and good-faith debate. If AI systems can produce infinite misinformation, we’ll need human institutions, powered by AI, to counter it with infinite vigilance. The promising news is that the first antibodies are forming, in the shape of new fact-checking alliances, laws for algorithmic transparency, and digital literacy drives. The balance between curbing harmful lies and preserving free expression is delicate, but inaction would leave society “drowning in disinformation”. Like the experiments in new economic models and governance we’ve reviewed in this and the previous article, these innovations in information integrity are about ensuring technology supports an open society rather than undermining it.