Public-Private Partnerships Seen as Key to Building Trust in AI

The integration of artificial intelligence into businesses has grown dramatically, surging by 115% between 2023 and 2024. Despite this rapid adoption, confidence in responsible deployment remains fragile: only 62% of executives and 52% of workers believe AI is deployed responsibly within their organizations. This gap highlights a pressing need for coordinated efforts that go beyond what any single sector can achieve independently.

Public-private partnerships (PPPs) are emerging as a vital mechanism to bridge this trust deficit. By combining government legitimacy, private-sector capability, and civic oversight, these collaborations can turn abstract notions of trust into concrete controls, audits, accountability measures, and redress mechanisms. Without such governance, the global economy could forfeit an estimated $4.8 trillion in potential gains by 2033, largely the value lost to a widening digital divide between nations and communities with AI access and those without.

Across industries, skepticism continues to slow progress. A KPMG survey found that just 35% of corporate decision-makers trust the AI and analytics used in their own operations. The result is stalled initiatives, efficiencies that go unrealized, and innovations left on the table. According to MIT research, 95% of AI pilot programs fail, largely because of inaccurate model outputs, security vulnerabilities, and ineffective change management.

Conversely, broader and more trusted use of AI could significantly boost global trade. World Trade Organization projections suggest a universal adoption approach could increase trade growth by an additional 14 percentage points by 2040—double the outcome under current fragmented models. However, IMF findings indicate that high-income countries currently gain twice the productivity benefits from AI compared to developing economies, exacerbating global inequality.

To counter this imbalance, targeted collaborations among multilateral financiers, national regulators, and industry consortia offer the fastest route to extend AI capabilities, data access, and skills training to underserved regions. These joint efforts address core concerns, such as algorithmic bias, data privacy, system safety, and accountability, that span technological, societal, and legal domains.

Such multi-stakeholder cooperation enhances credibility. For instance, Estonia’s digital services, including 99% online tax filing and EU-leading collection efficiency, demonstrate how institutional trust drives widespread technology adoption.

Currently, while 75% of CEOs acknowledge the importance of trustworthy AI governance, only 39% report having effective frameworks in place. Closing this gap requires a repeatable stack: one that turns policy into controls, controls into evidence, and evidence into incentives.

The TrustworthyAI Index, whose methodology builds on the OECD AI Principles, Stanford HAI benchmarks, and the NIST AI Risk Management Framework, finds that merely 20% of leading AI models meet high thresholds for transparency and accountability. Meanwhile, only 30% of countries have used AI in policymaking, revealing a systemic disconnect between principles and real-world implementation.

As Microsoft CEO Satya Nadella noted at the World Economic Forum Annual Meeting 2024 in Davos: “I don’t think the world will put up anymore with something not thought through on safety, equity and trust.”

PPPs help operationalize trust through three key mechanisms:

First, governance led by public institutions includes risk classification, procurement rules, model registries, and regulatory sandboxes. This ensures democratic legitimacy aligns with technical feasibility.

Second, shared assurance mechanisms (independent testing labs, certification and audits, incident reporting, and post-deployment monitoring) shift reliance from claims to evidence, enabling cross-border recognition and regulatory alignment; a minimal sketch of such an incident record appears below.

Third, inclusive data practices, co-managed by civic and private actors, involve secure data-sharing frameworks like data trusts and privacy-preserving technologies. These also support skills development and access initiatives in sectors and regions most affected by AI.
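To ground the assurance lever, here is a minimal sketch, in Python, of what a machine-readable incident record might look like when filed by a deployer and verified by a testing lab or regulator. Everything in it is an illustrative assumption: the field names, severity tiers, and registry identifiers are not drawn from any existing registry or database.

```python
"""Illustrative sketch of a shared AI incident-reporting record.

All field names and the severity scale are assumptions made for this
example; they do not reproduce any existing registry's schema.
"""
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum
import json


class Severity(Enum):
    """Hypothetical risk tiering, echoing the governance lever above."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class IncidentReport:
    """One post-deployment incident, filed by a deployer or auditor."""
    incident_id: str
    system_name: str          # entry in a hypothetical national model registry
    deployer: str
    severity: Severity
    summary: str
    harm_categories: list[str] = field(default_factory=list)  # e.g. bias, privacy
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize for exchange between jurisdictions or testing labs."""
        record = asdict(self)
        record["severity"] = self.severity.value
        return json.dumps(record, indent=2)


# Example: a deployer files a report that a certification body can review.
report = IncidentReport(
    incident_id="INC-2025-0001",
    system_name="loan-scoring-v3",      # hypothetical registered model
    deployer="ExampleBank",
    severity=Severity.HIGH,
    summary="Disparate approval rates detected in post-market monitoring.",
    harm_categories=["algorithmic bias"],
)
print(report.to_json())
```

Serializing such records to a common shape is what would allow regulators in different jurisdictions to recognize one another’s filings, which is precisely the cross-border comparability the assurance lever aims at.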

An example is the Partnership on AI (PAI), which convenes 129 technology companies, media organizations, and civil society groups to develop practical governance standards. Its Responsible Practices for Synthetic Media, supported by organizations including Adobe, BBC, OpenAI, TikTok, and WITNESS, establish provenance norms that help counter misinformation.
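To illustrate what a provenance norm means in practice, the toy sketch below binds a signed manifest to the exact bytes of a generated image so that downstream platforms can verify its origin and detect tampering. Production systems rely on standards such as C2PA with certificate-based signing; the HMAC secret, field names, and model name here are stand-in assumptions to keep the example self-contained.

```python
"""Toy sketch of media provenance: a signed manifest bound to content.

Real deployments use the C2PA standard with certificate-based signatures;
the shared HMAC key below is a stand-in so the example stays runnable.
"""
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # assumption: shared secret


def make_manifest(media: bytes, generator: str) -> dict:
    """Bind a provenance claim to the exact bytes of the media."""
    manifest = {
        "generator": generator,                # e.g. name of the AI model
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "disclosure": "AI-generated",          # synthetic-media disclosure
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(media: bytes, manifest: dict) -> bool:
    """Check both the signature and that the media was not altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["media_sha256"] == hashlib.sha256(media).hexdigest()
    )


image = b"\x89PNG...fake image bytes"          # placeholder content
m = make_manifest(image, generator="example-image-model")
print(verify(image, m))                        # True: intact and signed
print(verify(image + b"tamper", m))            # False: content was altered
```

The key design point is that the hash binds the disclosure to the content itself: alter a single byte and verification fails, which is what makes provenance more robust than a detachable label.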

Additionally, the World Economic Forum’s AI Governance Alliance, launched through its Centre for the Fourth Industrial Revolution, unites governments, companies, academia, and civil society to promote transparent and equitable AI systems. Complementary tools like PAI’s guidance on safe foundation model deployment and its AI Incident Database provide shared infrastructure for managing risks.

Closing the digital divide isn’t just ethical—it’s economically strategic. The projected $4.8 trillion in unrealized value hinges on building trust and expanding access. UN Secretary-General António Guterres has urged governments and tech firms to collaborate on risk frameworks while ensuring developing economies benefit from AI’s transformative power.

The G20 has already advocated for interoperable governance, safety assurance, and inclusive digital infrastructure. PPPs serve as the delivery vehicle, harmonizing standards, audits, and deployment across borders.

A clear roadmap includes: establishing a national public-private task force on trustworthy AI in every G20 economy within 12 months, adopting a common assurance baseline (independent audits, incident reporting, and provenance standards), and piloting AI-dividend programs for workers in high-exposure industries, co-funded by industry.

Following this path advances Sustainable Development Goals 9 (Industry, Innovation, and Infrastructure) and 16 (Peace, Justice, and Strong Institutions), turning ethical principles into inclusive economic growth. With the opportunity valued at $4.8 trillion, the strategy is defined—the next step is collective action.
— news from The World Economic Forum
