The 10 questions behind this paper

Every white paper is shaped by the questions it is willing to ask.

Some papers begin with certainty. This one began with unease.

Not fear. Not confusion. But a growing sense that the public conversation about AI and the future of work is still too flat for the reality now unfolding. Much of the language around this transition remains focused on visible jobs, visible sectors, visible tools, visible productivity, and visible disruption. Yet one of the most important insights emerging from the broader research landscape, including Project Iceberg, is that the deeper transformation is often not happening where people are looking first. It is happening beneath the visible layer: inside occupations, inside tasks, inside coordination, inside administrative and cognitive work, and inside the hidden structures that make organizations and societies function.

Project Iceberg is powerful because it tries to give shape to that hidden terrain. It argues that AI exposure extends far beyond what can currently be seen through conventional labor-market language, and that traditional workforce metrics were not built for a world in which intelligence itself becomes a shared input between humans and machines. It also makes an important distinction: what it measures is technical exposure, not displacement outcomes, and not adoption timelines. That means the challenge is not only to count what AI can already do, but to understand what those changing capabilities mean for how human work is being redefined underneath the surface.

That is where this white paper begins.

The ten questions on this page were not added afterward. They were not decorative prompts, and they were not used simply to make the paper feel interactive. They were the intellectual architecture of the think tank itself. They helped move the conversation away from narrow, reactive thinking and into a deeper inquiry about value, visibility, human contribution, intelligence, work, and what current frameworks may still be failing to see.

They gave us a way to enter the subject from multiple angles. They created space for tension. They opened a field instead of forcing a conclusion too quickly.

Together, they revealed something important:

the future of work is not only changing at the level of tasks. It is changing at the level of value.

01. How is work already changing beneath the surface, and what are we not seeing?

    One of the most important problems in the current conversation about work is that change is often expected to look dramatic before it is taken seriously. People wait for clear announcements, mass job loss, visible restructuring, or some large public event that proves the future has arrived. But work rarely changes that neatly. More often, it changes inside the role before the role changes. It changes through invisible redistribution. Through tasks being absorbed by tools. Through quiet alterations in how teams function, how decisions are made, how time is spent, and what kinds of contribution begin to disappear from view.

    This is one of the most striking implications of Project Iceberg. Its argument is not simply that AI affects more sectors than people think. It is that the visible disruption in technology is only the surface layer, while a much larger zone of exposure already stretches through cognitive, administrative, financial, and professional work. The report’s distinction between visible surface disruption and a much larger hidden mass is what makes the “iceberg” metaphor so useful. It suggests that official awareness is trailing behind actual change.

    Inside the think tank, this question opened a very human recognition: many people can already feel that work is changing without always having the language to explain how. Tasks are becoming thinner. Some forms of contribution are becoming less visible. Certain kinds of work are speeding up without necessarily becoming deeper. Roles are beginning to separate into what can be automated, what can be monitored, what can be standardized, and what remains stubbornly human but harder to justify inside systems that increasingly reward what is measurable.

    The question is not only what AI is changing. It is what our current categories prevent us from seeing.

    If work is being reorganized beneath the surface, then the first challenge is perceptual. Before we can redesign wisely, we have to notice what is already moving in the hidden layers of everyday work.

    The deepest shifts in work may arrive long before they become visible enough to count.
02. If this is the Intelligence Era, what is this intelligence really about?

    The phrase “Intelligence Era” sounds obvious until you stop and listen to it carefully. It carries an aura of inevitability, as if history itself has named the moment for us. But the phrase is only useful if we ask what kind of intelligence it is actually pointing toward.

    Is intelligence now being understood mainly as speed? As prediction? As synthesis? As the ability to generate and optimize at scale?

    Or are we willing to defend a broader idea of intelligence, one that includes ethical discernment, relational sensitivity, taste, restraint, timing, imagination, responsibility, and the ability to sense what matters in conditions of uncertainty?

    This question mattered deeply in the think tank because it exposed a hidden philosophical issue beneath the technical discussion. When AI systems become strong at tasks that many institutions once treated as proof of human intelligence, something destabilizing happens. It is no longer enough to say intelligence is valuable. We are forced to ask which intelligence, for what purpose, inside what kind of world.

    Project Iceberg helps here in an indirect but important way. By showing that workforce planning frameworks need a new kind of measure for the AI economy, it reveals that our older assumptions about labor and capability no longer fully fit the era we are entering. But even a skills-centered measure, useful as it is, leaves open a deeper question: does the Intelligence Era actually ask us to go beyond skills and reconsider the wider ecology of human intelligence itself?

The think tank kept returning to the idea that this era may be misunderstood if we treat it only as a race between machine capability and human output. It may be more accurate to see it as a mirror. A moment in which humanity is being confronted with its own shallow definitions of intelligence. A moment in which we must decide whether intelligence means producing answers faster, or whether it also means knowing when not to answer too quickly.

    This matters because every institution will eventually build around its preferred definition of intelligence. Schools will teach toward it. Companies will hire toward it. Economies will reward it. If the definition is too narrow, then the era itself becomes smaller than it needs to be.

    The Intelligence Era may not only be asking what machines can do. It may be forcing us to decide what intelligence actually is.
03. What do you think the AI economy actually means?

    The phrase “AI economy” is used so often that it risks becoming a slogan rather than a thought. It sounds modern, strategic, inevitable. But underneath the phrase is a much larger and more unsettling question: what kind of economy is being formed when AI becomes part of how work, value, and decision-making are organized?

    For some people, the AI economy means the growth of AI companies, tools, platforms, and investments. For others, it means productivity gains, labor substitution, faster workflows, and new business models. But the think tank pushed further. It asked whether the AI economy might be better understood not as a sector, but as a condition.

    A condition in which intelligence becomes infrastructural.

    In that kind of economy, AI is not merely a category of products. It becomes a layer inside finance, healthcare, logistics, education, customer service, administration, media, planning, and governance. It changes not only what is produced, but how value is assigned. It changes not only what workers do, but how their effort is divided, priced, compared, and sometimes diminished. It enters not only the visible labor market, but the deeper logic by which economic systems decide what counts as efficient, scalable, useful, or strategic.

    Project Iceberg is especially helpful here because it makes clear that AI capability overlap is not restricted to highly visible technical occupations. The report shows that the hidden reach of AI-related exposure is spread much more widely across administrative, financial, and professional services than public attention usually assumes. In that sense, the AI economy is not a future corner of the labor market. It is a new operational layer moving through the existing one.

    Inside the think tank, this question opened another important recognition: if the AI economy is not just about tools but about a changing condition of value, then we should stop speaking about it only in terms of adaptation. Adaptation suggests the basic frame is still intact. But what if the frame itself is changing? What if the AI economy is altering not just the pace of work, but the meaning of contribution, scarcity, expertise, and even usefulness?

    This is why the phrase matters. The language we choose will shape the future we build.

    The AI economy is not only about new tools. It is about a new condition in which intelligence enters the hidden infrastructure of value itself.
04. What human work will shape the invisible economy of the future?

    In many workplaces, the most important human contribution is not the easiest to describe. It is the work beneath the visible work. The work that keeps systems human, coherent, trustworthy, and livable. The work that stabilizes complexity without always appearing on a dashboard. The work that helps another person think clearly, feel safe, make a better decision, navigate ambiguity, or stay in relationship through pressure.

    This includes things like trust-building, emotional steadiness, ethical framing, conflict navigation, pattern sensing, contextual judgment, and meaning-making. It includes care that is not sentimental but structural. It includes forms of timing that can change the entire trajectory of a conversation or a team. It includes knowing what to emphasize, what to soften, what to question, and what to protect.

    These are not marginal human qualities. They are often the real support structure underneath visible performance.

    Project Iceberg makes an important move by identifying how much cognitive and administrative exposure lies below the visible surface of technology adoption. That move opens space for another layer of inquiry: if AI reaches deeper into the visible task layer of white-collar work, what happens to the invisible human layer that has always carried much of the real social and organizational weight?

    The think tank treated this not as a soft issue, but as a hard one that current systems often fail to name properly. If we continue to define important work only in terms of visible outputs, then we will continue to undervalue precisely the forms of human contribution that become more crucial as automation expands. The invisible economy is not merely a poetic phrase. It may become one of the most decisive arenas of the AI age.

    Because when visible work becomes cheaper, faster, and easier to replicate, the deeper question is not only what remains for humans to do. It is what remains for humans to hold.

    The future may depend most on forms of human work that our current systems still struggle to see.
05. We are already living through major workforce change, but what are we not measuring today that could have a huge effect tomorrow?

    This question emerged from a simple but destabilizing recognition: societies often measure reality too late. They build metrics for what has already become undeniable. They respond once a shift is visible enough to enter reports, policies, headlines, and budget cycles. But some of the most consequential changes happen before a system knows how to register them.

    Project Iceberg exists precisely because of this problem. It argues that current workforce metrics often fail to capture hidden AI-related shifts before those shifts harden into visible outcomes. Its focus on technical exposure rather than displacement is important because it attempts to map where change may be structurally possible even before labor-market outcomes fully reflect it. The report is clear that planning frameworks designed for human-only economies are too limited for this moment.

    But the think tank pushed even further. It asked whether we may still be missing the most human consequences even when we improve our models.

    What are we not measuring when entry-level work starts thinning out before people notice? What are we not measuring when tasks disappear but responsibility does not? What are we not measuring when workers still have jobs, but the meaning of their role has subtly changed? What are we not measuring when confidence, apprenticeship, mentorship, identity, and institutional memory are quietly weakened?

    These questions matter because the future of work is not only economic. It is developmental. It shapes how people become competent, how they grow into professions, how they gain confidence, how they experience usefulness, and how societies reproduce human capability over time.

    If our measurements track only output, employment, or wage effects, they may fail to reveal whether we are hollowing out the pathways through which humans learn how to become wise, skilled, responsible, and relationally capable inside a profession.

    The danger is not just that the data is late. The danger is that the data is shallow.

    A society can be transformed long before its official measurements admit that transformation has begun.
06. If AI is already producing more code than humans, what does that reveal about the future we may be moving into?

    The image of AI generating enormous amounts of code is not only technically impressive. It is symbolically disruptive.

    Code has long been treated as a flagship form of modern cognitive labor. It is associated with abstraction, intelligence, technical depth, and the architecture of the digital world itself. So when AI begins writing code at extraordinary scale, the shift carries more than industrial significance. It sends a signal about what kinds of work may be entering a fundamentally new relationship with production, learning, and authorship.

    Project Iceberg includes the now widely repeated observation that AI systems generate more than a billion lines of code each day, using this as one visible sign that AI-related change in technology work is already substantial and not merely speculative. But the deeper significance of that fact is not the quantity alone. It is what quantity does to culture, to expertise, and to the path by which people become capable.

    Inside the think tank, this question opened a much wider inquiry.

    If beginner coding work becomes thinner, what happens to apprenticeship? If generation becomes abundant, what happens to discernment? If speed accelerates dramatically, what happens to quality and oversight? If outputs appear instantly, what happens to the slower process by which humans internalize structure, error, taste, and responsibility?

    This is why code matters as a signal. Not because every profession becomes software, but because code reveals an archetype of what may happen elsewhere. A once-elite form of knowledge work begins to separate into generation, supervision, selection, integration, and responsibility. The visible output remains, but the human role starts shifting around it.

    And once that happens in one domain, the question becomes difficult to contain. How many other forms of cognitive work are about to discover that their old balance between labor, learning, and authorship no longer holds?

    Code is not the whole story. It is the signal that production, learning, and responsibility may be separating in new ways.
07. When we talk about AI and the future of work, we often focus only on skills. What else should we be paying attention to?

    This is perhaps the central question of the entire white paper.

    Skills are useful. They are one of the clearest ways institutions know how to talk about work. They allow organizations to describe capability, build curricula, recruit people, and compare roles across different settings. Project Iceberg itself uses a skills-centered approach because skills provide a workable framework for mapping overlap between what humans do and what AI systems can technically perform. That is part of what gives the report its clarity.

    And yet the think tank kept discovering the limit of that lens.

    A skill can describe an ability. It cannot fully describe a presence.

    It may tell us that someone can analyze, write, coordinate, or plan. But it tells us less about how they carry pressure, how they hold trust, how they navigate ambiguity, how they decide ethically, how they orient others in uncertainty, or how they recognize when a technically correct action is humanly wrong.

    This is where a deeper language is needed.

    Because if the future of work is described only through skills, then anything that cannot be neatly categorized as a skill becomes easier to neglect. Judgment becomes vague. Trust becomes secondary. Responsibility becomes invisible. Emotional depth is treated as personal rather than structural. Discernment is assumed rather than cultivated. Moral courage is admired rhetorically but not built into the architecture of how work is valued.

    The think tank moved toward a wider frame: perhaps what matters most in the AI economy will not be skills alone, but the deeper capacities through which humans remain worthy of trust inside increasingly complex systems. That includes responsibility, orientation, context-sensitivity, taste, ethical restraint, creativity, relational intelligence, and the ability to carry the unseen consequences of a decision.

    This is what the title Beyond Skills is trying to name.

    Not a rejection of skill. A refusal to confuse skill with the whole human contribution.

    The future of work may depend less on what people can do in isolation, and more on how they think, relate, judge, and carry responsibility inside living systems.
08. If AI becomes very good at giving answers, what happens to the value of the people who can ask the questions nobody else sees?

    This question changed the atmosphere of the think tank.

    Up to that point, many of the reflections were still orbiting around tasks, work, value, and changing roles. But this question shifted the conversation from execution to orientation. It pointed toward something more fundamental than efficiency. It asked what happens when answers themselves become abundant.

    For centuries, many institutions have rewarded people for producing the right answer, giving the correct response, or solving the clearly defined problem. But if AI systems become increasingly strong at generating answers across many domains, then the scarcity may no longer lie there. It may move toward the ability to ask a better question than the system was prepared for.

    That means human value may begin shifting toward people who can frame. Who can notice what is missing. Who can detect the unseen assumption. Who can sense the deeper problem before anyone else knows what should be solved.

    This is one reason nonlinear thinkers matter so much in this discussion. They often do not simply answer faster. They reframe. They connect. They disturb the obvious. They sense what a system is overlooking because the system has become too comfortable with its own assumptions. In a world filled with generated responses, that ability may become profoundly important.

    This question also gave rise to the Question Wall on the website. That wall is not a side feature. It is part of the argument. It says that a future shaped by AI should not become poorer in inquiry. It should become more demanding of human depth, not less. It should ask more of our ability to question wisely, to orient carefully, and to remain alive to the unseen consequences of our own systems.

    If answers become cheap, then perhaps the most valuable human contribution is not output. Perhaps it is direction.

    In a world full of answers, the rare human gift may be the question that changes the map.

What these questions revealed

Taken together, these ten questions did not merely enrich the white paper. They transformed it.

They moved the conversation away from the familiar surface frame of jobs, tools, disruption, and skills, and toward a deeper inquiry into how value itself is changing. They showed that the future of work is not only about what AI overlaps with. It is also about what human systems have failed to value properly for a very long time.

They revealed that beneath the measurable layer of work sits another layer: relational, interpretive, moral, contextual, developmental, and often invisible. A layer that current frameworks still struggle to hold. A layer that may become even more important as visible task execution becomes easier to automate.

They also revealed something more hopeful.

This moment is not only about risk. It is also about clarification.

AI may be forcing humanity to decide what parts of itself it actually values. Not rhetorically, but structurally. Not in speeches, but in metrics, institutions, education, incentives, and the way work itself is designed.

That is why this page matters.

These ten questions are not an appendix to the paper. They are the deeper doorway into it.

They remind us that the future of work cannot be understood only by counting what is exposed. It must also be understood by asking what remains humanly essential, even when it is harder to measure.

And perhaps that is the deepest lesson of all: what is most important in work is not always what shows up first in the data.


A final invitation

You do not need to answer all ten questions.

But if one of them stays with you, do not rush past it.

That usually means it has found a place where the old language is no longer enough.

Read the white paper. Enter the Question Wall. Add your own note. Or ask a question that the current conversation still refuses to ask.

Because in the end, the future of work may not belong only to those who adapt to the next system.

It may also belong to those who help humanity ask better questions before the system hardens around the wrong values.