UNSPOKEN FUTURES 2026
Unspoken Futures is a large-scale analysis of global 2026 trend reports: a project about the ideas from which we construct our image of the future, about where we disagree, and about what remains unsaid.
The future is described in hundreds of reports, charts, and forecasts. Yet almost no one reads them in full. We did.

We assembled a corpus of key 2026 trend and foresight reports, evaluated their quality, selected the most significant ones, broke each of them down into concrete claims about the future, and transformed this mass of material into visual maps.

This is how Unspoken Futures emerged — a project that examines the ideas shaping our collective image of the future. Through visual frameworks such as the Trend Radar, Consensus vs. Contention, and Pressure Roadmaps, we mapped areas of agreement and zones of tension across institutions, industries, and perspectives.

Unspoken Futures is a project about the ideas from which our shared future is constructed — about where alignment holds, and where it quietly fractures.

Here you can explore the methodology in more detail. Below, you’ll find the visualizations and our key findings.
A map of “signals of the future.” Each point represents a distinct claim about what lies ahead. Its position reflects the theme and time horizon, while the color indicates the source’s level of confidence.
A map of agreement and disagreement across sources. It reveals which themes most reports converge on, where assessments diverge, and how sharply.
Claims about the future grouped by time horizon and type of dynamic. This map shows what is changing, when, and for whom — distinguishing between impacts on individuals and on organizations.
00.
METHODOLOGY
01. REPORT SELECTION
We began with a curated folder of trend and foresight reports assembled by independent researchers — Amy Daroukakis, Ci En Lee, Gonzalo Gregori, and Iolanda Carvalho. We used it as a foundation, then audited the corpus for completeness and added missing reports identified through our own search.

The result was a catalog of 184 analytical reports and forecasts for 2026, structured with standardized metadata. For each report, we documented the publisher, geographic scope, industry focus, and thematic tags.

This catalog became our point of departure. From there, we conducted a quality assessment of sources and selected the reports used for claim extraction and visualization.

📎 Folder of reports (Amy Daroukakis, Ci En Lee, Gonzalo Gregori, Iolanda Carvalho)

📎 Metadata table for the corpus
We began by describing the corpus itself.
First, we mapped its basic distributions: geographic coverage, types of institutions, industries represented, and the most frequent thematic tags.

We then moved from frequencies to relationships — building heatmaps of industry co-occurrence and matrices linking institutional types to industries and tags.
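The co-occurrence counts behind those heatmaps can be sketched as follows; the catalog rows and field names here are illustrative, not our actual schema:

```python
from collections import Counter
from itertools import combinations

# Hypothetical catalog rows: each report is tagged with one or more industries.
catalog = [
    {"title": "Report A", "industries": ["finance", "technology"]},
    {"title": "Report B", "industries": ["technology", "energy"]},
    {"title": "Report C", "industries": ["finance", "technology", "energy"]},
]

# Count how often each pair of industries appears in the same report;
# sorting each pair makes (a, b) and (b, a) the same key.
cooccurrence = Counter()
for report in catalog:
    for pair in combinations(sorted(set(report["industries"])), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("finance", "technology")])  # 2
print(cooccurrence[("energy", "technology")])   # 2
```

The resulting counter is exactly the upper triangle of a co-occurrence matrix, ready to be rendered as a heatmap.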

This stage of analysis informed our selection methodology. Since we did not observe strong structural biases across industries or themes, we chose to select reports for in-depth analysis based on AACODS scoring — rather than artificially balancing representation across institutional types.
02. REPORT SELECTION FOR IN-DEPTH ANALYSIS
All reports in the catalog were evaluated using the AACODS framework — a tool designed to assess the quality of grey literature. Each letter in AACODS corresponds to a specific criterion: Authority, Accuracy, Coverage, Objectivity, Date, and Significance.

In our study, we assessed Authority, Accuracy, Coverage, and Objectivity on a three-point scale (0 / 1 / 2), then calculated an average score for each report. The criteria Date and Significance were not applied: Date, because the corpus was intentionally composed of current trend reports; and Significance, because at this stage all sources were treated as equally relevant.

For further analysis, we selected only reports with an average score of 2.0 or 1.8.
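A minimal sketch of the scoring and selection step, with hypothetical reports. The four criteria and the 0/1/2 scale follow the description above; note that with four 0/1/2 criteria the achievable average closest to the reported 1.8 cut-off is 1.75 (one criterion at 1, three at 2):

```python
# Hypothetical AACODS scores: Authority, Accuracy, Coverage, Objectivity,
# each on a 0 / 1 / 2 scale per report.
reports = {
    "Report A": {"authority": 2, "accuracy": 2, "coverage": 2, "objectivity": 2},
    "Report B": {"authority": 2, "accuracy": 2, "coverage": 2, "objectivity": 1},
    "Report C": {"authority": 2, "accuracy": 1, "coverage": 1, "objectivity": 1},
}

def aacods_average(scores):
    """Mean of the four applied criteria (Date and Significance omitted)."""
    return sum(scores.values()) / len(scores)

averages = {name: aacods_average(s) for name, s in reports.items()}
# Report A -> 2.0, Report B -> 1.75, Report C -> 1.25

# Keep only the top band (the reported "2.0 or 1.8" cut-off).
selected = sorted(name for name, avg in averages.items() if avg >= 1.75)
print(selected)  # ['Report A', 'Report B']
```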

📎 AACODS Methodology Checklist
03. CLAIMS ABOUT THE FUTURE
We then took each selected report and decomposed it into claims — clear statements about the future, formulated as “what will change / what will happen,” each tagged with an expected time horizon.

From every report, we extracted 10–20 key claims (depending on length and density). When more potential claims were present, we retained those that were formulated most concretely and supported by stronger internal argumentation.

We first captured the raw values as they appeared in the text — including thematic labels, actors, geographies, and types of evidence — using Gemini 3 Flash and Heptabase. We then manually normalized the data through a set of predefined normalization dictionaries (rules and mappings developed in advance and expanded during the process).

This allowed us to consolidate different formulations of the same concept into unified categories and to accurately calculate frequencies, clusters, and intersections.
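In code, such a normalization dictionary is simply a mapping from raw formulations to canonical categories; the entries below are illustrative, not our actual dictionaries:

```python
# Hypothetical normalization dictionary: raw labels as they appear in
# reports, mapped to canonical categories (expanded during the process
# as new variants surfaced).
NORMALIZE_ACTOR = {
    "large language models": "AI systems",
    "LLMs": "AI systems",
    "genai": "AI systems",
    "EU": "European Union",
    "the european union": "European Union",
}

def normalize(raw, mapping):
    """Return the canonical category, falling back to the raw value
    so unmapped labels are preserved rather than dropped."""
    return mapping.get(raw, mapping.get(raw.lower(), raw))

print(normalize("LLMs", NORMALIZE_ACTOR))        # AI systems
print(normalize("EU", NORMALIZE_ACTOR))          # European Union
print(normalize("regulators", NORMALIZE_ACTOR))  # regulators (unmapped)
```

The fallback to the raw value matters: it keeps unrecognized formulations visible, so they can be reviewed and added to the dictionary rather than silently lost.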
We extracted claims in a way that made them easy to trace and verify in the original source: not in abstractly parsed text, but directly within the original PDF report.
For each report, we created a dedicated working document in Heptabase containing the extracted claims.

This allowed us to return to the source at any point, check the context, refine wording, and precisely track where each claim originated.

This approach ensured that verification remained fast, transparent, and reliable throughout all subsequent stages of the research.
04. VISUALIZATIONS
We developed the visualizations iteratively.

We began with rough prototypes in Google Colab using Python — calculating the necessary aggregations and metrics, testing point and cluster placement rules, and validating that the visualization logic accurately reflected the underlying data.

Once the logic was finalized, we translated it into a web format and built the interactive versions in collaboration with Claude Code.
01.
TREND RADAR MAP
A map of “signals of the future.” Each point represents a distinct claim about what lies ahead. Its position reflects the theme of the claim and its time horizon, while the color indicates the source’s level of confidence.
HOW TO READ THE TREND RADAR
The Trend Radar is a map of claims extracted from the reports.
One point represents one claim.

The position of each point shows what the claim is about and when it is expected to unfold. The circular sector indicates the primary topic (economy, AI, security, etc.), while the distance from the center represents the time horizon — from “now / 0–1 years” to 1–3, 3–5, and 5–10+ years. The denser the points within a sector and ring, the more frequently sources address that theme within that timeframe.

The color of each point reflects the source’s level of confidence. Below the map, filters allow you to narrow the view by claim type (forecast, warning, recommendation, etc.), secondary topic, actor, or affected entity — making it possible to see which specific groups of claims shape the overall picture of the future.
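The placement rule can be sketched in polar coordinates: the primary topic selects the angular sector, the time horizon selects the ring. The topic list, ring radii, and within-cell jitter (omitted here) are assumptions:

```python
import math

# Hypothetical placement rule: each topic gets an equal angular sector,
# each time horizon a concentric ring; a claim's (x, y) position comes
# from polar coordinates.
TOPICS = ["economy", "AI", "security", "society", "climate", "energy"]
HORIZON_RADIUS = {"0-1y": 1.0, "1-3y": 2.0, "3-5y": 3.0, "5-10y+": 4.0}

def radar_position(topic, horizon):
    """Map (topic, horizon) to cartesian coordinates on the radar."""
    sector = 2 * math.pi / len(TOPICS)
    # Place the point at the middle of its topic sector.
    angle = TOPICS.index(topic) * sector + sector / 2
    r = HORIZON_RADIUS[horizon]
    return (r * math.cos(angle), r * math.sin(angle))

x, y = radar_position("AI", "1-3y")
# The point sits on the radius-2 ring, inside the second sector.
assert abs(math.hypot(x, y) - 2.0) < 1e-9
```

In the real visualization each claim would also receive a small random offset within its sector-and-ring cell, so that dense themes read as clouds rather than a single stacked point.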
02.
CONSENSUS VS. CONTENTION MAP
A map of agreement and disagreement across sources. It shows which themes most reports converge on, where assessments diverge, and how sharply.
HOW TO READ THE CONSENSUS VS. CONTENTION MAP
This map shows which future developments analytical reports are pointing to and how much agreement exists between sources. Each point represents a cluster of semantically similar claims about the future.

The X-axis reflects the level of consensus: on the left are themes where sources diverge in direction, speed, or expected consequences; on the right are themes where stable agreement emerges. The Y-axis reflects expected impact: the higher the point, the stronger the potential influence of these ideas on the economy, technology, society, or institutions.

The size of a point indicates the volume of the cluster (the number of claims it contains), while color marks the zone of the map: Mainstream Futures (high impact and high consensus) and Contested Futures (high impact combined with low consensus).

The Consensus Score measures agreement across sources through four independent components combined into a weighted index normalized from 0 to 1.

The first component, directional alignment (50% weight), tests whether claims within a cluster point in the same direction of change: if 80% of claims predict growth while 20% predict decline, agreement is low; if 95% indicate the same direction, agreement is high.

The second component, source diversity (20%), measures the variety of independent sources referenced in the cluster. We analyze the support_sources field, count unique references, and assess whether one institution dominates the cluster: if thirty claims reference twenty-five different studies, diversity is high, whereas if all thirty rely on a single consultancy, diversity is low. If a single source internally supports contradictory directions within the same cluster, the score is penalized.

The third component, confidence convergence (15%), evaluates whether authors express similar confidence levels: uniform “high confidence” or uniform “low confidence” signals convergence, while evenly split confidence signals divergence. The fourth component, temporal consistency (15%), assesses whether claims share similar time horizons; projections focused on 2026–2027 differ structurally from those aimed at 2030 and beyond, and clusters that mix short- and long-term expectations receive lower consensus scores.

A high Consensus Score (>0.6) indicates that institutional actors are aligned in direction, timing, and evaluative stance, but it does not guarantee correctness, as expert communities have historically converged around flawed assumptions.
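As a sketch, the weighted index reduces to a dot product of the four component scores with their weights; the component values and the directional-alignment helper below are simplified stand-ins for the fuller logic described above:

```python
from collections import Counter

WEIGHTS = {"direction": 0.50, "diversity": 0.20, "confidence": 0.15, "temporal": 0.15}

def directional_alignment(directions):
    """Share of claims pointing in the cluster's dominant direction."""
    counts = Counter(directions)
    return max(counts.values()) / len(directions)

def consensus_score(direction, diversity, confidence, temporal):
    """Each component is pre-normalized to [0, 1]; the weighted sum
    therefore also lies in [0, 1]."""
    components = {"direction": direction, "diversity": diversity,
                  "confidence": confidence, "temporal": temporal}
    return sum(WEIGHTS[k] * v for k, v in components.items())

# 19 of 20 claims predict growth -> alignment 0.95, i.e. high agreement.
alignment = directional_alignment(["growth"] * 19 + ["decline"])

# Strong alignment, moderate diversity, converging confidence, mixed horizons.
score = consensus_score(direction=alignment, diversity=0.6,
                        confidence=0.8, temporal=0.4)
# score is approximately 0.775, above the 0.6 consensus threshold
```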

The Impact Score addresses a different question: how strong is the signal about the future, independent of consensus. It is calculated through five components grounded in evidence-based foresight principles. Source reliability (40%) is derived from the AACODS rating of the publishing institution. Within the same weight, evidence type differentiates between claims based on quantitative data (full weight), economic models (0.75), expert opinion without data (0.5), and general assumptions (0.25), assigning greater weight to more concrete evidence. Author confidence (25%) directly uses the confidence_signal field, with high confidence weighted at 1.0, medium at 0.6, and low at 0.3. Temporal urgency (20%) evaluates time_horizon, assigning maximum weight to forecasts for 2025–2027, medium weight to 2028–2030, and lower weight to projections beyond 2030, reflecting the operational relevance of near-term developments. Scale of impact (15%) assesses geographic scope (geo_scope), giving highest weight to global claims and progressively lower weight to regional, national, and local projections.
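A parallel sketch for the Impact Score, using the stated weights and per-level values; the exact aggregation, in particular how evidence type shares the 40% reliability weight, is an assumption:

```python
# Per-level values taken from the description above.
EVIDENCE = {"quantitative": 1.0, "model": 0.75, "expert_opinion": 0.5, "assumption": 0.25}
CONFIDENCE = {"high": 1.0, "medium": 0.6, "low": 0.3}

def impact_score(aacods, evidence, confidence, urgency, scale):
    """aacods, urgency, and scale are assumed pre-normalized to [0, 1]."""
    # Assumption: evidence type scales the AACODS-based reliability,
    # so both share the 40% weight.
    reliability = aacods * EVIDENCE[evidence]
    return (0.40 * reliability
            + 0.25 * CONFIDENCE[confidence]
            + 0.20 * urgency
            + 0.15 * scale)

# A global, near-term claim backed by quantitative data from a top-rated
# source scores at the maximum.
top = impact_score(aacods=1.0, evidence="quantitative",
                   confidence="high", urgency=1.0, scale=1.0)

# A regional, mid-term claim resting on expert opinion scores far lower.
mid = impact_score(aacods=0.75, evidence="expert_opinion",
                   confidence="medium", urgency=0.6, scale=0.5)
```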

Across 420 extracted claims, the map concentrated in two high-impact zones. Eleven clusters fall into Mainstream Futures (upper-right quadrant), representing themes where reliable sources converge around near-term expectations; these include AI-driven transformation of business processes, synthetic data accounting for up to 80% of training datasets, and large-scale ($100B+) investments in AI infrastructure. Nine clusters fall into Contested Futures (upper-left quadrant), where the signal of significant change is strong but direction or magnitude remains disputed. Autonomous AI agents, Scope 3 emissions reporting across supply chains, and the surge in data-center energy demand exemplify such contested domains, where institutional actors diverge between narratives of structural transformation, incremental evolution, or systemic risk. No clusters appear in the lower quadrants (Settled Truths or Noise & Speculation), which reflects a sampling effect: the corpus consists of flagship reports from major institutions in 2025–2026 that, by design, focus on high-significance trends.
03.
PRESSURE ROADMAPS
Claims about the future organized by time horizon and type of dynamic. The map captures what is expected to change and on what timeline — distinguishing between impacts on individuals and on organizations.
HOW TO READ PRESSURE ROADMAPS
These are two maps — one for individuals and one for organizations. They show which forms of “pressure” and structural shifts recur most frequently across trend reports, and when, according to the sources, these changes are expected to materialize.

The data were aggregated according to the following principle: for each combination of stakeholder × time horizon × topic, we calculated the number of claims, identified the dominant direction of change, and generated a short summary based on the content of the clustered statements.
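That aggregation principle amounts to a group-by over (stakeholder, horizon, topic); the records and field names below are illustrative, and the generated summaries are replaced here by the dominant direction alone:

```python
from collections import Counter, defaultdict

# Hypothetical claim records.
claims = [
    {"stakeholder": "workers", "horizon": "1-3y", "topic": "AI skills", "direction": "increase"},
    {"stakeholder": "workers", "horizon": "1-3y", "topic": "AI skills", "direction": "increase"},
    {"stakeholder": "workers", "horizon": "1-3y", "topic": "AI skills", "direction": "shift"},
    {"stakeholder": "organizations", "horizon": "0-1y", "topic": "AI governance", "direction": "emergence"},
]

# Group claims per stakeholder x horizon x topic cell.
cells = defaultdict(list)
for c in claims:
    cells[(c["stakeholder"], c["horizon"], c["topic"])].append(c["direction"])

# For each cell, record the claim count and the dominant direction of change.
summary = {
    key: {"count": len(dirs), "dominant": Counter(dirs).most_common(1)[0][0]}
    for key, dirs in cells.items()
}

print(summary[("workers", "1-3y", "AI skills")])
# {'count': 3, 'dominant': 'increase'}
```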

The visualization is structured as a temporal unfolding. Five columns represent time horizons, ranging from the immediate future (0–1 year) to the long term (10+ years). Within each column, the top five most significant themes for the respective stakeholder group are displayed.

Topic cards are color-coded by direction of change (red for increase, yellow for shift, green for emergence, etc.). The size of the indicator reflects the number of claims within that topic cluster, while the colored bar on the left identifies the stakeholder subgroup. Interactivity allows switching between an “All” mode (where all subgroups are visible simultaneously with color markers) and a focused view on a specific group. Hovering over a card reveals a short summary describing the dominant pattern of change.

The primary analytical value of the method lies in its ability to reveal temporal waves of transformation and concentrations of pressure. When multiple cards cluster within the same time horizon, this signals a potential period of heightened adaptation. Limitations include dependence on the completeness of the underlying corpus (the maps reflect only the claims present in the reports), the loss of nuance inherent in aggregation, and a degree of subjectivity in the generation of summaries.
04.
UNSPOKEN FUTURES: SYNTHESIS AND KEY INSIGHTS
UNSPOKEN FUTURE 01: CORPORATE STRATEGIES SHAPED BY RECURRING NARRATIVES RATHER THAN DATA
The institutional discourse reveals a structural asymmetry: corporate transformations are discussed 3.6 times more frequently than impacts on individuals (162 vs. 45 claims), with 1.6 times higher consensus (consensus score 0.841 vs. 0.514), yet on a 1.3 times weaker evidentiary base (evidence score 0.562 vs. 0.750).

The cluster with the highest consensus (0.84), “AI pivots from cost-cutting to growth driver,” is built on eight claims from five institutions: McKinsey, PwC, WTW, EY, and the Internal Audit Foundation. All are consulting organizations serving corporate clients. Of the eight claims, six are grounded in expert opinion, one provides no explicit evidence, and only one is supported by empirical data.

By contrast, the cluster “AI-driven workforce cuts” (consensus 0.51) contains 28 claims from more than fifteen institutions of diverse types — including governments (OECD, European Commission), academia (University of Sydney), and international organizations. Fourteen of the twenty-eight claims are based on concrete data: surveys conducted by McKinsey, statistics from the Bureau of Labor Statistics, and OECD research.

The consulting sector produces knowledge about corporations within a relatively narrow circle of similar institutions, generating high consensus on comparatively weak evidence. Governments and academic organizations describe impacts on individuals through a broader range of stakeholders, producing lower consensus but stronger evidentiary grounding. The former cluster is categorized as “Mainstream Futures,” the latter as “Contested Futures.” Yet it is precisely within the contested zone that we observe a greater density of data, a wider diversity of sources, and a more complex representation of reality.

The Pressure Maps reinforce this pattern through a contrast between discourse on organizations and on individuals: organizations exhibit 48% positive claims with 68% marked as high confidence, whereas workers show 35% positive claims with only 38% high confidence.
UNSPOKEN FUTURE 02: THE FUTURE OF PARALLEL LABOR REALITIES
The discourse on the future of work splits into two incompatible realities, divided not by objective data, but by the structures through which that data is produced and interpreted. On one side stands a corporate narrative of positive transformation: 48% positive claims with 68% marked as high confidence, grounded predominantly in expert opinion from consulting firms (Deloitte, McKinsey, PwC account for 40 of 75 claims concerning organizations). On the other side lies a considerably less optimistic picture, yet one supported by a stronger evidentiary base (Evidence Score 0.750 vs. 0.562), produced by governments and academic institutions (OECD, European Commission, University of Sydney account for 29 of 76 claims concerning workers).

The paradox deepens in the cluster with the greatest data density — “AI-driven workforce cuts” — which simultaneously exhibits the lowest consensus (0.514). The 28 claims within this cluster, half of which are grounded in concrete data (McKinsey surveys, Bureau of Labor Statistics data, OECD research), fail to converge into a coherent narrative. Mirror forecasts coexist within the same cluster: 32% of McKinsey respondents anticipate workforce reductions of 3% or more, while 13% from the same source predict growth of 3% or more. Sixty-nine percent of Australian executives state that AI will absorb entry-level roles, while parallel claims assert that AI-related skills will increase wages by 29%. The Consensus Score of 0.514 statistically averages these opposing projections, masking a genuine interpretive conflict behind the appearance of moderate disagreement.

Both realities also terminate at approximately the same temporal horizon. Fifty-three percent of claims concerning organizations and 58% of claims concerning workers concentrate within the 1–3 year window, while the 5–10 year horizon remains largely unpopulated for both. The future appears to end where current investment cycles and budgetary models conclude — around 2028–2029. Institutional imagination rarely extends beyond financial planning horizons.

Geographic framing introduces a third dimension of divergence. Corporate transformations are described in abstract, global terms: the “AI strategy shift” cluster consists entirely of global claims, reinforcing a narrative of inevitability. By contrast, impacts on workers are articulated through specific labor markets: only 39% of worker-related claims are marked as global, with the remainder fragmented across the United States (32%), the European Union (11%), Australia (7%), and OECD countries (7%). The global framing of corporate action contrasts with the territorial specificity of human consequences. Workers experience transformation within concrete national labor regimes, shaped by local regulation, skill gaps, and hiring freezes.

The silence surrounding this divide is not a gap in knowledge, but a structural feature of institutional discourse. Consulting firms cannot readily acknowledge the weakness of their evidentiary base; governments cannot impose a single interpretation of workforce data without disregarding the plurality of stakeholders. Between these regimes of knowledge production emerges a zone of structural blindness. We know that 77% of professionals report increased workloads following AI adoption, that entry-level vacancies are shrinking, and that U.S. unemployment is projected to reach 4.5% by mid-2026. What remains unspoken is that these signals complicate, and potentially contradict, the dominant corporate narrative of productivity-led growth. Parallel labor realities persist not because truth is unknown, but because certain truths remain institutionally inexpressible.
UNSPOKEN FUTURE 03: A FUTURE THAT BEGINS IN THREE YEARS
Of the 420 claims about the future, the overwhelming majority (over 60%) are concentrated within the 1–3 year horizon. The 5–10 year range is nearly empty, populated only by isolated claims, predominantly marked with low confidence.

The institutional community — McKinsey, Gartner, OECD, European Commission, Fidelity, J.P. Morgan — collectively refrains from projecting beyond 2028–2029. At the same time, these institutions confidently invoke “transformation,” “revolution,” and “fundamental restructuring.” Yet concrete forecasts tend to end precisely where current investment cycles conclude.

Few actors attempt to describe what the world might look like once AI systems operate at full structural scale. Investment trajectories are modeled in detail, but the systemic consequences of those investments — once deployed and normalized — remain largely unspecified.
UNSPOKEN FUTURE 04: THE FUTURE OF STRUCTURAL RESPONSIBILITY DISPLACEMENT
One of the most revealing findings of the study is the cluster related to responsible AI. Formally, it falls within the Contested Futures zone: consensus is moderate (0.565), expected impact is high (0.758), and source confidence is exceptionally strong (0.880).

This configuration signals not uncertainty, but transition. Within the same temporal window, two opposing movements are documented: corporations are scaling back voluntary initiatives — reducing Responsible AI teams and narrowing ESG-related commitments — while regulators are simultaneously strengthening mandatory requirements. Notably, a significant share of the claims in this cluster refer not to distant projections but to already materialized changes: half of the claims are situated in the 0–1 year horizon.

The evidentiary structure of the cluster is equally symptomatic. Approximately one quarter of the claims rely on direct data — legislative acts, regulatory decisions, and formal rule changes. The remainder are largely grounded in expert assessments. However, unlike corporate AI clusters, here expert opinion does not construct an aspirational narrative; it records institutional retreat. The high confidence expressed by authors reflects not interpretive alignment, but the perceived factuality of the shift itself: the era of voluntary momentum toward responsible AI is closing.

The most consequential aspect remains implicit. According to the corpus, only about 52% of companies have fully developed responsible AI programs. The gap between rhetoric and implementation is not framed as a systemic risk. This creates a temporal window: between 2025 and 2028, AI deployment accelerates faster than durable institutional constraints and oversight mechanisms are established. Voluntary initiatives are already being dismantled; binding regulatory regimes are not yet fully operational; yet the discourse continues to speak of “responsible transformation” without integrating these elements into a coherent model. This constitutes the unspoken future — a period in which responsibility becomes structurally unassigned, at least temporarily.
UNSPOKEN FUTURE 05: CLIMATE IN EXCHANGE FOR AI
The cluster describing the rise in data-center energy consumption emerges as one of the most contested in terms of consensus (0.518) and simultaneously one of the most systemically significant. This indicates that the issue cuts across multiple core systems — the economy, climate governance, and labor markets — yet fails to consolidate into a coherent narrative. Different institutional actors capture only partial dimensions of the problem: technology analysts focus on infrastructural feasibility, financial institutions on capital scale and expected returns, regulators on disclosure requirements and compliance frameworks. No actor integrates these elements into a unified analytical model.

The underlying facts are well documented. Investments in AI infrastructure already amount to hundreds of billions of dollars annually; data centers contribute a substantial share of the current growth in global electricity demand. At the same time, climate regulation intensifies within the same temporal window: Scope 3 emissions are increasingly recognized as the dominant component of corporate carbon footprints, and mandatory disclosure regimes are being introduced in both the United States and the European Union. These two dynamics — large-scale expansion of AI infrastructure and tightening climate regulation — unfold synchronously and are described by many of the same institutions, yet remain analytically disconnected. Within the corpus, there is no single claim that explicitly links AI data-center investments to their contribution to Scope 3 emissions.

The geographic dimension deepens the tension. AI energy demand is framed as a global phenomenon, while the capacity to respond varies dramatically across regions. China continues to expand generating capacity and secures structural advantages in energy cost. The United States must modernize aging grid infrastructure, requiring historically large capital expenditures. Europe faces the simultaneous pressures of energy security, climate commitments, and fiscal constraint. These differences directly shape the global deployment of AI, yet they are rarely articulated as part of a shared strategic problem. AI continues to be framed as a seamless global process, abstracted from regional energy constraints and climate trade-offs.

The silence surrounding this cluster does not stem from ignorance, but from the absence of synthesis among known facts. Hyperscalers — the primary investors in AI infrastructure — publicly commit to carbon-free or carbon-negative targets within the coming years, while simultaneously expanding infrastructure whose electricity demand rivals that of entire nations. Reports regularly reference renewable energy procurement and power purchase agreements, yet rarely present the underlying arithmetic: how much new generation capacity is required, how quickly it can be deployed, and how grid stability will be maintained. The problem is visible in the near term — most claims fall within the 1–3 year horizon — yet solutions are not modeled beyond current investment cycles. The energy “elephant in the room” is not concealed; it is described in detail. It remains unspoken because acknowledging it would require synthesis — and synthesis would challenge the prevailing narrative of AI as a frictionless technological inevitability, independent of political and climatic trade-offs.