A Social Contract for AI

Responsibility, Competence and Infrastructure — In Practice.

TL;DR

AI can propose, but only people can decide and own consequences.
Explanations must fit each audience — citizens get reasons they understand; experts get verifiable logs.
Security mov…


This content originally appeared on DEV Community and was authored by Riku Lauttia

Responsibility, Competence and Infrastructure — In Practice.

TL;DR

  • AI can propose, but only people can decide and own consequences.
  • Explanations must fit each audience — citizens get reasons they understand; experts get verifiable logs.
  • Security moves “left”: design it into data, models, supply chains, and operations.
  • Capacity must be layered and portable (exit rights tested, not only promised).
  • Curated data and guardrails against “model collapse” are non-negotiable.
  • Ethically, pursue a dual strategy: partner where power lives, but fund open alternatives to keep freedom to move.

AI now threads through the economy, government, and everyday life — and shifts power as it goes: who steers development, whose voice counts, and whose risks are tolerated. This essay offers a practical framework any organization can adopt to make defensible, auditable AI decisions today. The core claim is simple: a democratic path requires that we operationalize six ideas — human responsibility, audience-specific explanation, security-by-design, layered and portable capacity, curated data, and an ethics dual strategy — across contracts, architectures, and daily routines. If we don’t, decision power quietly migrates to whoever controls the fastest lane: compute, data, and contracting leverage.

Machines can support work and decisions; responsibility must remain human. Below I show how to turn principles into checkable routines — procurement clauses, architectural patterns, and training programs — so the “social contract for AI” isn’t just a strategy slide, but something you can verify.

Why Only Humans Decide

Large language models feel fluent, but they do not share human commitments. A model can predict words; only a person can promise, be accountable, and correct a decision. That boundary is crucial in public power, healthcare, and due process.

Design rule: in every critical application, separate suggestion from decision. Name the human approver. Make the chain auditable. Without this, fluency gets mistaken for understanding, and responsibility blurs.

Practice checklist:

  • Name the decision owner in the UI and in logs.
  • Require a justification field (what evidence, which data versions, which tests passed).
  • Show citizens the decision and appeal path in the same view.

Competence, Reframed: From “Code Writer” to “Verifier”

Generative tools shift software work: less manual writing, more problem framing, testing, verification, and safe use. Quality does not emerge by accident.

What changes in teams:

  • Prompts are artifacts. Keep them in version control.
  • CI for generations. Treat output like code: unit tests, policy tests, red-team suites.
  • Named approver. A human must gate releases and risky actions.

Operational metric ideas:

  • Escapes per release (errors past tests into production).
  • Percentage of critical actions with human approval logged.
  • Mean time to correction after citizen/agent appeal.

Minimal “Gen-AI Gate” in CI (concept):
checks:

  • unit-tests
  • policy-tests (jailbreaks, PII, safety rails)
  • eval-bench (task-specific accuracy/latency)
  • human-approval (required for risk >= medium; role: Service Owner)
  • artifacts:
  • prompts/ (versioned prompts)
  • evals/ (reproducible eval sets)
  • provenance/manifest.json (model + data snapshot)

Explainability That Matters (to Each Audience)

One diagram rarely justifies a decision. In public use, we need reasons a citizen understands, plus deep logs for auditors. Explanation isn’t a bolt-on; it’s part of the system.

Two-tier model:

  • Citizen layer: plain-language rationale, key factors, uncertainty, and an appeal button.
  • Expert layer: versioned data/model snapshot, feature contributions, policy rules invoked, and evaluation traces.

Make it measurable:

  • Maintain a justification budget alongside latency budgets.
  • Track comprehension with user tests (do non-experts correctly paraphrase the reason?).
  • Version explanation artifacts so they update as data/models change.

Security as the Default

Attackers’ reconnaissance is fast and automated; defenders must raise costs before the first exploit. AI also lowers the attacker’s skill threshold.

Architecture patterns:

  • Strict train/test/prod separation; zero standing privileges.
  • Minimal metadata retention; role-based access with time-boxed tokens.
  • Supply-chain provenance: models, libraries, datasets (SBOM, dataset lineage, signed attestations).
  • Continuous LLM red-teaming (prompt injection, data exfiltration, tool abuse).
  • Practiced incident drills: backup integrity, failover paths, and clear communications.

Quarterly security routine (example):

  1. Purple-team exercise (prompt injection + data exfiltration).
  2. Restore from backup and switch to warm-standby.
  3. Rotate keys/tokens; verify blast radius limits.
  4. Publish a short, de-identified internal postmortem.

Compute Is Political: Layered and Portable Capacity

Specialized accelerators concentrate performance in few places. That’s not just technical — it’s geopolitical and economic. If critical functions depend on a single vendor, you inherit their pricing and disruptions.

Capacity strategy:

  • Layering: national/regional cloud where needed, with local edge for continuity.
  • Exit rights you test: data and model export in usable formats; like-for-like performance tests on alternates.
  • Procurement points: open interfaces, portability scoring, energy use in total cost of ownership.

Business-continuity goals:

  • RTO (recovery time objective) and RPO (data loss window) defined and tested twice a year.
  • Simulate provider lockout and prove you can run elsewhere.

Data Quality, Synthetic Data, and Model-Collapse Risk

If systems start learning from their own outputs, distributions drift and quality decays. Prevent recursive self-feeding unless a human-in-the-loop review clears it.

Data governance:

  • Dataset cards: origin, rights, bias notes, update history.
  • Synthetic data controls: document generators, share, and purpose; cap its proportion; validate with real-world probes.
  • Pre-deployment quality gates — don’t wait for incidents.

Data Registry (illustrative excerpt):

  • Dataset: claims_2024Q3
  • Sources: municipal systems A/B, OCR pipeline v2.1
  • Known risks: under-representation of non-native speakers
  • Synthetic share: 12% (generator v0.9, style constraints on)
  • Last audit: 2025–09–15 (pass)

Ethics Between Power and Freedom: The “Dirty Hands” Dual Strategy

We often face a choice: influence from inside partnerships (where decisions are made) or from outside by building alternatives. Both carry risks; the practical answer is to do both.

What it looks like:

  • Work with major providers and fund open models, test beds, and standards in parallel.
  • Independent ethics boards with real stop authority for large procurements.
  • Public conflict-of-interest and influence logs.
  • Annual impact reports and third-party audits.
  • Whistleblower channels that actually protect people.

The Social Contract, Operationalized: Six Principles → Six Routines

Responsibility stays human

  • Named approver; time-stamped decision log; RACI table.

Explanation by audience

  • Citizen rationale and appeal; expert trace logs; comprehension tests.

Security by design

  • Minimized metadata; supply-chain provenance; practiced drills.

Layered and portable capacity

  • Tested exits; portability score in procurement; energy in TCO.

Curated data, synthetic under control

  • Registry and quality gates; recursion guard; bias and drift monitors.

Ethics dual strategy

  • Partner plus open alternatives; independent board; public reports.

Public dashboard suggestion:
Publish quarterly: portability test results, data-quality grade, audit findings (de-identified), and time-to-correction for appeals.

Case Example: Municipal Social Services

A city uses LLMs to draft assessments and summaries.

  • Responsibility: each suggestion requires a named caseworker’s approval; UI records reasons and data versions.
  • Explanation: citizens see plain-language reasons and an appeal link; staff see model/data/version logs.
  • Security: separate train/test/prod; metadata minimized; honeytokens detect exfil attempts; regular drills switch to edge capacity during outages.
  • Data: documented sources; limited, labeled synthetic share; pre-launch quality gates.
  • Ethics and portability: publish de-identified quarterly metrics; in parallel, pilot an open model to keep exit options real.

Result: faster service without sacrificing due process or trust.

Making Hidden Power Visible

AI’s strongest effects hide in contracting, data curation, update cadence, and architecture choices. That’s where “quiet power” accumulates. Counter it with routines: decision logs, impact assessments, version histories, public changelogs, and test results. These create a learning loop, not just a compliance tick-box.

  • Portability is a drill, not a slogan. Practice data/model exports and failovers with real costs attached.
  • Curation is a routine, preventing silent decay and keeping models grounded.
  • Explanation by audience keeps citizens informed and auditors effective.
  • Security by design raises attacker costs and shrinks blast radius.
  • Independent oversight verifies the drills and keeps everyone honest. In Finland and across Europe, this is also about sovereignty: layered capacity (regional cloud plus edge) so we’re not captive to a single vendor or geography.

Conclusion: Machines Propose, Society Decides

Models predict words; people carry duties. A workable social contract for AI hard-wires that reality. When explanation, security, capacity, data, and ethics are embedded in routines, AI strengthens democracy: decisions are justifiable today and correctable tomorrow, without undue delay or cost.

Two movements, in parallel:

  1. Institutionalize verifiability — traceable suggestions, reproducible evidence, accountable approvals.
  2. Build sovereign capacity — tested exit rights, portable stacks, and real security.

These reinforce one another. Without institutions, sovereignty is a slogan. Without sovereignty, institutions are fragile. Measure progress through regular audits and public exercises; otherwise capability remains on paper. An ethics dual strategy keeps influence where power sits while preserving the freedom to leave.

Bottom line: machines can suggest; we set direction and terms. That’s how innovation becomes fixable — and fair.

Appendix: Ready-to-Use Artifacts

RACI for Critical Decisions (template)
Activity | Responsible | Accountable | Consulted | Informed

  • Model suggestion accepted/rejected | Caseworker | Service Owner | Legal, DPO | Citizen, Team
  • Data update to training set | Data Steward | CDO | Security, Domain Lead | Audit Board
  • Portability drill execution | SRE Lead | CTO/CIO | Vendor, Risk | Public report

Procurement Clauses (excerpt)

  • Portability & Exit: Vendor must support export of data, prompts, embeddings, and fine-tuned weights in documented formats; provide performance baselines for alternative environments; participate in semiannual failover drills.
  • Security & Provenance: Provide SBOM for models/libs, dataset lineage, and signed attestations; pass red-team tests twice yearly.
  • Explanation: Deliver citizen-facing rationales and expert trace logs via APIs; support appeal integration.
  • Data Governance: Maintain dataset cards; cap and document synthetic shares; prevent recursive training on system outputs without human review.

Policy Budgets (keep next to latency SLOs)

  • Justification budget: p95 ≤ 500 ms to render citizen rationale and appeal link.
  • Correction budget: ≤ 5 business days from appeal to adjudication.
  • Portability budget: ≤ 24 h to restore service in alternate environment (RTO), ≤ 1 h data loss (RPO).

References

P.J. Denning & B.S. Rousse, “Can Machines Be in Language?”, Communications of the ACM, 67(3):32–35, 2024.

S. Greengard, “AI Rewrites Coding,” Communications of the ACM, 66(4):12–14, 2023.

A. Malizia & F. Paternò, “Why Is the Current XAI Not Meeting the Expectations?”, Communications of the ACM, 66(12):20–23, 2023.

W. Mazurczyk & L. Caviglione, “Cyber Reconnaissance Techniques,” Communications of the ACM, 64(3):86–95, 2021.

N. Savage, “The Collapse of GPT,” Communications of the ACM, 68(6):11–13, 2025.

H. Skaug Sætra, M. Coeckelbergh & J. Danaher, “The AI Ethicist’s Dirty Hands Problem,” Communications of the ACM, 66(1):39–41, 2023.

N.C. Thompson & S. Spanuth, “The Decline of Computers as a General Purpose Technology,” Communications of the ACM, 64(3):64–72, 2021.


This content originally appeared on DEV Community and was authored by Riku Lauttia


Print Share Comment Cite Upload Translate Updates
APA

Riku Lauttia | Sciencx (2025-10-25T18:35:33+00:00) A Social Contract for AI. Retrieved from https://www.scien.cx/2025/10/25/a-social-contract-for-ai/

MLA
" » A Social Contract for AI." Riku Lauttia | Sciencx - Saturday October 25, 2025, https://www.scien.cx/2025/10/25/a-social-contract-for-ai/
HARVARD
Riku Lauttia | Sciencx Saturday October 25, 2025 » A Social Contract for AI., viewed ,<https://www.scien.cx/2025/10/25/a-social-contract-for-ai/>
VANCOUVER
Riku Lauttia | Sciencx - » A Social Contract for AI. [Internet]. [Accessed ]. Available from: https://www.scien.cx/2025/10/25/a-social-contract-for-ai/
CHICAGO
" » A Social Contract for AI." Riku Lauttia | Sciencx - Accessed . https://www.scien.cx/2025/10/25/a-social-contract-for-ai/
IEEE
" » A Social Contract for AI." Riku Lauttia | Sciencx [Online]. Available: https://www.scien.cx/2025/10/25/a-social-contract-for-ai/. [Accessed: ]
rf:citation
» A Social Contract for AI | Riku Lauttia | Sciencx | https://www.scien.cx/2025/10/25/a-social-contract-for-ai/ |

Please log in to upload a file.




There are no updates yet.
Click the Upload button above to add an update.

You must be logged in to translate posts. Please log in or register.