Our Post-AI Future 4/4
Towards a New SOCIETAL Model
In the previous articles we explored the challenges the emergence of AI will pose for society, and some of the innovation happening around the globe that may furnish us with solutions. We then developed a cube of possible societal end-states, defined by their approaches to economics, governance, and information integrity. In this article we build on that foundation by exploring a societal model that may combine the best of those experiments, and consider how we might journey toward it.
Markets with a Social Purpose
To forge a society conducive to human wellbeing and agency in the era of AI, we will need to build upon the foundations of liberalism. We will explore combining ideas from the multiple experiments described in the previous articles into a new synthesis. This vision draws on the idea of “Markets with a Social Purpose,” creating a two-tier economy: one tier of “economic labour” and one of “virtue labour.” The idea is this: let AI and robots handle the economic labour, producing the goods and services and generating the wealth, while humans shift increasingly into virtue labour, focused on activities that machines cannot replace (or that we do not want them to) and that enrich human life and society. These are things like caregiving, learning, artistic creation, mentoring, community leadership, and environmental restoration: the kind of “work” that traditional markets undervalue because it doesn’t yield big profits, yet is incredibly valuable in terms of social well-being.
In the “Markets with a Social Purpose” framework, individuals would be rewarded not for “economic labour” but for “virtue labour”. Think of a parent teaching their child to read, a neighbour helping an elderly person with errands, or a citizen organising a local tree planting. Today, these acts get you at best a thank-you, not a pay-check. In a virtue economy, you could earn credits or income for them, if they can be verified and measured in a fair way. That’s where AI and data come in positively: personal AI assistants and sensors could help quantify and certify these contributions without infringing privacy. For example, your personal AI, by securely accessing your private personal data vault, might confirm that you spent 5 hours this week tutoring underprivileged kids, or that you reduced your carbon footprint by X by biking to work, or that you completed an advanced course in caregiving skills. These achievements could translate into social purpose credits that have real monetary value or privileges attached.
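To make this concrete, here is a minimal illustrative sketch, in Python, of how verified activity attestations might be converted into credits. All names, rates, and the schema are hypothetical, and the hard part (privacy-preserving verification by the personal AI against the data vault) is represented simply by an `attested` flag:

```python
from dataclasses import dataclass

# Hypothetical rates, imagined as agreed through democratic goal-setting
CREDIT_RATES = {"tutoring": 10, "caregiving": 8, "tree_planting": 5}  # credits per hour

@dataclass
class VerifiedActivity:
    kind: str        # e.g. "tutoring"
    hours: float     # duration attested by the personal AI / data vault
    attested: bool   # True if proof was verified without exposing raw data

def credits_earned(activities):
    """Sum credits for activities whose proofs were verified."""
    return sum(CREDIT_RATES.get(a.kind, 0) * a.hours
               for a in activities if a.attested)

week = [VerifiedActivity("tutoring", 5, True),
        VerifiedActivity("caregiving", 2, False)]  # unverified, so not counted
print(credits_earned(week))  # 50
```

The point of the sketch is the separation of concerns: the rates are set democratically, the attestation happens privately in the individual’s vault, and only the verified claim, not the underlying data, enters the credit system.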
We already see prototypes of this, such as an early-stage company called Dataswyft enabling the reward of exercise by those at risk of diabetes in South Korea, or local time banks where helping a neighbour earns you hours you can spend on someone else’s help. An existing analogous example is the carbon credit market, which prices an emissions reduction (a social good) and trades it. The proposal is to extend this to personal action. Essentially, “free markets” are repurposed to trade in positive outcomes, where the currency is not just money for widgets, but tokens for verified good deeds. Crucially, it’s not a top-down government awarding gold stars; it’s a market, meaning people and organisations can sponsor or buy these tokens. For instance, a city might pay residents virtue credits (convertible to rent or food) for neighbourhood clean-up work. Or a foundation fighting illiteracy might reward people who mentor teens with tokens funded from its endowment. Because these credits are tokenised and tradable, they create a flexible incentive system, potentially as dynamic as traditional markets but aligned with social purpose.
To make this work while preserving liberal freedoms, several ingredients we saw earlier must be blended:
· Personal Data Control (à la DECODE): People need ownership over their data and the ability to share proofs of their activities without Big Brother. Projects like DECODE and personal data stores give a blueprint: you keep detailed data on yourself (from your health app, learning app, volunteer log, etc.) in an encrypted vault.
· Co-op Platforms and Data Commons: The tech to run this virtue economy could itself be cooperatively owned or publicly governed. For instance, instead of Facebook gamifying our attention for ad clicks, a civic network could be built on a decentralized protocol and used to gamify civic virtue. The “likes” and “shares” would be replaced by community endorsements of helpful acts; a form of reputation system owned by users, with robust safeguards.
· AI as Personal Coach (à la Apple’s on-device AI): Rather than AI being a centralised overlord, each person could have an AI assistant, within their personal data vault, that helps them achieve their self-chosen goals; a bit like a life coach or guardian angel for personal development. For example, it nudges you to stick to that French practice if your goal is learning French, or it suggests nearby community events if you’ve been isolated. Crucially, the AI can also verify when you hit milestones. You choose which virtue activities matter to you, within some democratic social consensus on which broad goals to reward. You’re free to opt out and live modestly on UBI. The virtue economy is an opt-in game, not a forced march.
· Democratic Goal-Setting (à la Polis/vTaiwan): Who decides what counts as a “virtuous” activity worthy of reward? This must be a democratic conversation, or else it becomes tyranny of whoever sets the metrics. Here, digital deliberation tools like Polis, Decidim, or citizens’ councils could be used to define and update the menu of incentivised goals. The key is that it’s bottom-up and expert-informed. This ensures the virtue market reflects plural values and can evolve. It also preserves liberal pluralism: there isn’t one monolithic “score” everyone must chase; there could be a diversity of tokens and credits for different good acts, and people pursue those aligned with their own vision of the good life.
· Sardex-style Currency Networks: To actually make virtue credits spendable, one can create credit networks parallel to fiat currency. Think of local Sardex-like systems where, for example, credits earned by helping seniors can be spent on other community services. In a virtue economy, mutual credit could mobilise idle human capacity: people’s time and skills that currently go unused because there is no paid demand for them. Credit rewards could be earned either by doing communally designated tasks, or simply by doing things others are willing to pay their credits for. Such a system could sit on top of a basic UBI to ensure that activity is freely chosen.
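The mechanics of such a mutual credit network are simple enough to sketch. The following illustrative Python (class and parameter names are hypothetical, not Sardex’s actual system) shows the core property: credit is created at the moment of trade, balances always sum to zero, and a credit limit bounds how far any member can go negative:

```python
class MutualCreditNetwork:
    """Minimal mutual credit ledger, in the spirit of Sardex (illustrative sketch).

    Every member starts at zero; credit is created at the moment of trade
    by debiting the buyer and crediting the seller. Total balances always
    sum to zero, and a credit limit caps how far anyone can go negative.
    """
    def __init__(self, credit_limit=100):
        self.credit_limit = credit_limit
        self.balances = {}

    def join(self, member):
        self.balances.setdefault(member, 0)

    def pay(self, buyer, seller, amount):
        if self.balances[buyer] - amount < -self.credit_limit:
            raise ValueError("credit limit exceeded")
        self.balances[buyer] -= amount
        self.balances[seller] += amount

net = MutualCreditNetwork(credit_limit=100)
for m in ("alice", "bob"):
    net.join(m)
net.pay("alice", "bob", 30)        # alice buys 30 credits of bob's help
print(net.balances)                # {'alice': -30, 'bob': 30}
print(sum(net.balances.values()))  # always 0
```

Because no fiat money is required to start trading, such a ledger can put idle capacity to work immediately; the credit limit is the only policy lever needed to keep the system honest.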
This new social model, call it a Virtue Economy, directed by ground-up participatory democratic governance and embedded in a high-integrity information space, essentially means no one is left without meaningful things to do or means of support. It should be seen as complementary to a minimal UBI. People are free to choose from a wide array of socially constructive activities (or create new ones) and get recognised and rewarded for them. It reframes “work” as any effort that builds personal or societal well-being, not just what a market employer will pay for. By making these new markets voluntary and diverse, it respects liberal freedom: you pursue your own path to flourishing, but society gives you a nudge (incentives) and a safety net (UBI or services) to ensure you can do so without starving. And crucially, it keeps the innovation flywheel spinning: people and organisations can innovate new ways to solve social problems and be rewarded, much as entrepreneurs today innovate commercially. In fact, linking this with open innovation challenges or quadratic funding could turbocharge solutions to things like climate change or education inequality. If you create a big positive impact, you earn dividends, all funded by taxes on, or dividends from, the AI-driven “real” economy.
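Quadratic funding, mentioned above, has a well-defined mechanic worth illustrating: a matching pool is allocated in proportion to the square of the sum of the square roots of individual contributions, so broad support attracts more matching than a single deep pocket giving the same total. A minimal sketch (project names and figures are hypothetical):

```python
import math

def quadratic_match(projects, matching_pool):
    """Allocate a matching pool across projects using quadratic funding.

    Each project's raw match is (sum of sqrt(contributions))^2 minus the
    contributions themselves; the pool is then split pro rata across
    projects' raw matches.
    """
    raw = {name: sum(math.sqrt(c) for c in contribs) ** 2 - sum(contribs)
           for name, contribs in projects.items()}
    total = sum(raw.values())
    return {name: matching_pool * r / total for name, r in raw.items()}

projects = {
    "literacy_mentoring": [1, 1, 1, 1],  # four small donors, total 4
    "single_backer":      [4],           # one donor, same total
}
print(quadratic_match(projects, 1000))
```

Here the four-donor project captures the entire pool while the single-backer project, despite raising the same amount, gets no match: the formula deliberately rewards breadth of support, which is exactly the property that makes it attractive for funding virtue-economy projects.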
Such a model tries to fulfil the original liberal promise, maximising individual potential and freedom, in the new era of AI. It says you are free to choose if and how you contribute, but we, as a local community, county, or nation state, value all sorts of contributions, not just those the old market did. It also helps prevent the new elite capture that AI could cause, by socialising the gains of AI and by decentralising decision-making on what matters via citizen input. It uses AI to empower individuals, via their personal AI assistants, rather than to control them. Far from a top-down “social credit” system, it’s a bottom-up social capital system, growing the real capital of communities: trust, knowledge, care, and creativity.
Of course, this is ambitious. Sceptics will ask: won’t people game it? There are risks; any system of incentives can be gamed. There would need to be checks such as AIs spotting cheating, community oversight boards, etc. If done right, engaging in the virtue economy should feel like play or a calling, not drudgery. It should amplify the natural human drive to learn, help, and connect by removing the economic barriers to doing so. We’re essentially gamifying societal improvement, in a non-dystopian way. What is the payoff if we succeed? A society where technology’s bounty frees us not for mere leisure, but for purposeful activity. One where innovation doesn’t stagnate after automating jobs but pivots to improving quality of life. A society where the liberal values of freedom and pluralism are preserved, because people have options and voice, and enhanced by a greater sense of common fraternity.
We stand between two eras, an interregnum, with the future in our hands. To avoid the Hobbesian nightmare and seize the renaissance of human thriving, we must be proactive in building these new institutions. That means policy can’t wait for the AI tsunami to hit full force; it means piloting and learning now.
Policy Roadmap: Human-centred Societies in the Era of AI
Governments that wish to navigate the impending AI disruption and steer toward a flourishing future must start acting today. Waiting until the social fabric is in crisis, with mass unemployment, democratic breakdown, etc., will be too late. So, what could a policy roadmap for the 2020s to early 2030s look like?
Enabling Policies
Firstly, governments need to encourage the adoption of self-sovereign personal data stores, data portability, and data-sharing frameworks. These are the foundations of a new information and decision ecosystem controlled by users rather than Big Tech. It is only when individuals have control of their data that trustworthy AI services can be built which are less likely to exploit individuals. Much of the government action required is outlined in the EU Data Strategy and implemented in a raft of associated legislation. This framework needs to become global, and investments should be made, through tax breaks and grants, in services that create value for individuals from this new data ecosystem.
Secondly, government needs to act as a catalyst for the new economic model: the “Virtue Economy”. This can begin with simple steps: incorporate wellbeing metrics into budgeting, as New Zealand did. If every government department must justify spending in terms of how it improves, say, mental health or community cohesion, they’ll start funding programs that generate those outcomes. Governments can reinforce this by issuing social impact bonds that pay the private sector for achieving societal goals, effectively paying for virtue. This can then, in combination with local participatory democracy, be migrated to personal goals, such as rewarding learning and community engagement. These are proto-markets for virtue. A range of standards needs to be defined to enable this: common measures for things like a “learning hour” or a “care hour”, so that credits are interoperable. Additionally, consider piloting a national service/earning program: not mandatory, but guaranteeing every young person (18-25) the option to participate in a year of paid community or environmental work, earning credits or wages. This normalises virtue work and builds institutions for it. By 2035, such programs could smoothly transition millions of youths from formal employment pathways to blended paths of learning and civic contribution.
Thirdly, government needs to promote participatory democracy leveraging digital tools, and roll out this policy in stages. The first stage would focus on access and trust, ensuring every citizen has a simple, secure digital identity and a single platform where they can propose ideas, give feedback, and take part in citizen assembly consultations. The second stage would introduce deliberation at scale, using tools that cluster opinions, highlight common ground, and let people see where their voice sits in relation to others, helping to build consensus rather than amplify division. The third stage would move into decision-making, with participatory budgeting and new voting methods that let people express not just yes/no choices but the intensity of their preferences, so that collective priorities are clearer and more legitimate. Finally, the policy would evolve into continuous civic participation, where citizens receive tailored updates on the impact of their input, can track progress through public dashboards, and occasionally gather in local assemblies to complement the digital process. The goal is to make taking part in democracy as easy as shopping online.
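The consensus-finding step in the second stage can be illustrated with a deliberately simplified sketch. Real tools like Polis derive opinion groups from the vote matrix itself; here the groups are given in advance, and we merely flag statements that clear a majority threshold in every group (all names, votes, and the threshold are hypothetical):

```python
def common_ground(votes, threshold=0.7):
    """Flag statements that a clear majority in *every* group agrees with.

    votes maps statement -> {group_name: list of +1 (agree) / -1 (disagree)}.
    A simplified stand-in for Polis-style consensus detection: a statement
    counts as common ground only if each group's agreement rate meets the
    threshold.
    """
    consensus = []
    for statement, groups in votes.items():
        rates = [sum(1 for v in vs if v == 1) / len(vs) for vs in groups.values()]
        if all(r >= threshold for r in rates):
            consensus.append(statement)
    return consensus

votes = {
    "Fund more park maintenance":    {"group_a": [1, 1, 1, -1], "group_b": [1, 1, 1, 1]},
    "Pedestrianise the high street": {"group_a": [1, 1, 1, 1],  "group_b": [-1, -1, 1, -1]},
}
print(common_ground(votes))  # ['Fund more park maintenance']
```

Surfacing statements that win support across otherwise divided groups, rather than the statements each side shouts loudest, is what lets such tools build consensus instead of amplifying division.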
Finally, to avoid falling into the low information integrity trap, we must invest in the civic machinery of democracy. Government policy should focus firstly on promoting the adoption of tools that establish and signal the truth of information, and secondly on public education that equips citizens to navigate the digital sphere with confidence. On the tools side of the equation, content provenance systems, authenticity labels, and trusted algorithms should make it clear when news or media is verified, AI-generated, or disputed, so that people (or their personally controlled AI filters) can judge information at a glance. Alongside this, a major investment in media literacy is essential, starting in schools but extending through public libraries, lifelong learning programmes, and community workshops. Citizens should be taught not only how to fact-check and evaluate sources, but also how to participate responsibly in digital forums. The combination of truth-marking tools and broad-based civic education would create an environment where reliable information circulates more widely, disinformation is easier to spot, and citizens are empowered to take part in democratic life, free of manipulation and with independent agency.
Implementation Policies
One of the toughest challenges in building new virtue markets will be how to fund them. There are two main levers. The first is to capture savings created by a more efficient data ecosystem. As better data-sharing cuts costs across the “real” economy, then with the right business models, part of those gains can be redirected to support virtue markets without dipping into the public purse. The second is taxation, but taxing capital directly often drives it abroad. A smarter alternative would be to rebase corporate tax on the opportunity cost of not doing business in a given nation, rather than on the profits declared in the company’s accounts. Until recently such calculations would have been impossibly complex, but advances in data and AI make them achievable. This approach would seem preferable to an AI tax, which could be more easily gamed. At the same time, intellectual property and tax rules should ensure that when AI draws value from public datasets, society receives a return, perhaps through a “data dividend” paid into a sovereign wealth fund, akin to an idea California has toyed with. Establishing these frameworks by, say, 2030 is critical, because by then AI will be pervasive and we need the legal spine in place before the wild west becomes entrenched.
A coherent policy to manage workforce transition and seed a virtue economy could unfold in phases. In the first phase, government would cushion workers as automation erodes traditional jobs by encouraging shorter working weeks without cutting pay, using tax breaks and social security credits to make this viable. In parallel, local councils would help displaced workers form cooperatives and win public contracts, and public institutions would direct procurement to local suppliers, following models like Preston to keep value circulating in communities. In the final phase, a minimal UBI would be introduced alongside a virtue economy. With the standards for the virtue economy developed and tested, and a national/local service program piloted, the virtue economy could be rolled out at scale and implemented at the community, local and county council, and national levels, all within one coherent, interoperable framework. By 2035, these steps could converge into a mature framework where millions transition smoothly from traditional employment into a blended economy of paid work, civic contribution, and socially recognised virtue labour.
To pilot the above, policymakers should urgently establish regional “Societal Labs” in regional cities to act as exemplars, trialling and refining these ideas; akin to the model villages pioneered by the industrialists Cadbury and Rowntree in the 19th and 20th centuries to test new ways of mass living. This will enable us to see how the components interact and can reinforce one another. Such pilots not only provide valuable data, vital for learning how to scale the initiatives, but also acclimatise the public to new ideas and create constituencies for scaling what works.
There’s Urgency
Governments should prepare for when the need becomes acute. Many studies suggest the 2030s will be the crunch time. By then, automation could displace nearly half of current jobs. So set milestones: e.g., by 2026, have a national AI task force produce scenarios and action plans. By 2028, aim to have a basic income or negative income tax mechanism legislated (even if initially at a small amount), ready to scale if unemployment shoots up. By 2030, have at least one city in each region operating as a “Post-AI City” showcase. Keep an eye on key tipping points: if labour force participation drops rapidly or if a particular sector automates en masse, be ready to intervene with sector-specific support and retraining. Likewise, monitor democratic health indicators: if online misinformation or polarisation gets worse, that’s a sign to double down on information integrity policies. Essentially, use the 2020s to build resilience so that by the time AI is as ubiquitous as electricity, our societies are not caught off-guard. International cooperation will help, as these challenges are global. Democratic nations should share best practices on digital democracy and economic tools, and collectively put pressure on tech companies to adhere to standards, through something like a “Digital Democracies Charter”.
The urgency is real. AI is progressing on an exponential curve; public policy moves more slowly. But we have a window now, before the 2030s storm, to make choices that avoid the worst-case “Marketocracy” outcomes and instead empower individuals to preserve our open liberal democracies. The post-AI future can be bright if we act with foresight. Liberalism has faced down tyrants, upheavals, and revolutions before by reinventing itself. Now is the time to reinvent once more, so that free markets broaden to encompass markets with a social purpose, and representative democracy becomes participatory democracy with robust information integrity. By experimenting, learning, and scaling what works, we can ensure that AI and robotics liberate humanity, not just from toil, but for higher pursuits. In doing so, we will demonstrate the liberal promise that technology and freedom, together, can uplift all. The monarchs of the tech age need not rule unchecked; the innovators and reformers of our time are already at work. Let’s join them, with both humility and boldness, to build an open future where humans and our AI creations thrive side by side, in service of life, liberty, and happiness, and in the best interests of the people.