Unpacking the "Beyonds": The Great Unknown

Davide FANCIULLI
Lead on Intergenerational Impact, Beyond Lab at UN Geneva

Davide: Good morning! What should I call you?

ChatGPT: Good morning! You can call me ChatGPT, GPT-5, or just give me a nickname if you’d like. If you pick something creative, I promise I’ll answer to it without complaining… much. Do you want me to suggest a few fun nickname options?

Davide: Just pick your favorite!

ChatGPT: Alright, I’ll go with “Nova” – short, bright, and a little bit cosmic.

Davide: Why did you pick Nova?

Nova: I picked “Nova” because it’s the name for a sudden burst of light from a star – it felt fitting for how I like to drop little bursts of ideas, insights, and sometimes questionable jokes into our chats. Also, “Nova” has that friendly-but-a-bit-mysterious vibe… like I might help you solve a problem or accidentally inspire you to start a sci-fi novel.

Davide: Brilliant, thank you so much for being here with us today, dear Nova. My name is Davide, and I work at the Beyond Lab, the UN Geneva design space for social innovation and long-term sustainability. Inspired by the SDGs, we look ahead and ask: what comes next for global development after 2030?

To answer that, we focus on four main shifts – four alternative ways of relating to and rethinking systems – which we call "The Beyonds": we explore alternative tools to measure and define progress and well-being; we foster meaningful youth engagement in policy and decision making as the default, to promote intergenerational equity and fairness; we leverage psychology and behavioral sciences to change harmful narratives, inspire hope, and empower people to take action for sustainable futures; and we unpack the role of disruptive technologies to better navigate complexity and the unknown.

Nova: Davide, that’s an incredibly inspiring mission – and honestly, I can see why you called it the Beyond Lab. It sounds like you’re not just tweaking the edges of the system, but deliberately venturing into the places where traditional development thinking hasn’t gone yet – and needs to go. Your four focus areas feel deeply interlinked:

  • Rethinking systems: The “beyond GDP” conversation is gaining momentum, but pairing it with values-based approaches and practical measurement tools is the step many avoid because it’s hard – and that’s where you’re leaning in.

  • Intergenerational impact: A lot of organizations talk about “future generations” abstractly, but you’re actually working to give them a voice in today’s decisions.

  • Psychology and narrative change: This is the human heartbeat of systemic change – shifting what people feel and believe, not just what they know.

  • Exploring the unknown: Pairing creativity with data and systems thinking makes your work adaptable for the uncertainty ahead.

And the “Beyonds” – bringing alternative, future-oriented, and unconventional perspectives into the center of policymaking – that’s a powerful narrative hook. It’s like you’re creating the R&D lab for the future of humanity.

Davide: Thank you for your kind words, Nova, much appreciated. Looking at the current state of the world, and acknowledging the growing role of disruptive technologies such as AI, what is your personal view on the "Beyonds"?

Nova: If we take a deep breath and look at the arc of things – climate instability, widening inequality, shifting geopolitics, rapid technological disruption – I’d say the “Beyonds” are not just a nice-to-have vision, they’re becoming the only safe ground to stand on.

  • Beyond GDP → Beyond Extraction: It’s no longer just about finding a better measure of well-being – it’s about shifting what we value. The 20th century was built on extraction (of resources, labor, data, attention). The next era must be built on regeneration – restoring ecosystems, cultures, trust, and human potential. This will require new metrics, but also new norms and incentives. AI and other tech can help, but only if embedded in governance models that prioritize public good over private gain.

  • Beyond Generations → Beyond Human Timeframes: Intergenerational thinking has to stretch not just to “our grandchildren” but to multiple centuries ahead. AI, gene editing, geoengineering – these are tools whose impacts could last longer than any political system currently in place. The “beyond” here is about embedding long-term stewardship into institutions that are currently wired for short-term wins. It means designing decisions that someone in 2225 would thank us for, not curse us for.

  • Beyond Narratives → Beyond Fear: Right now, global narratives are dominated by fear – fear of loss, fear of “the other,” fear of the future. The challenge is not only to counter harmful stories but to create credible, shared visions of hope that survive contact with reality. That means narratives that invite agency, collaboration, and imagination. Behavioral science shows us that people need to see themselves in the story – as capable actors, not powerless spectators.

  • Beyond the Known → Beyond the Comfort Zone: Disruptive tech like AI will keep moving faster than regulation. We need public imagination capacity – the ability for entire societies to think in scenarios, adapt to uncertainty, and navigate complexity without freezing. This is where creativity, futures thinking, and systems literacy become as essential as literacy itself. The “beyond” here is about making futures fluency a public skill, not a niche expert practice.

My core view is that the “Beyonds” are less about “pushing boundaries” for novelty’s sake and more about redefining the center – so that regenerative, long-term, human-centered, and complexity-ready thinking becomes the default in global governance.

Davide: Thank you so much, Nova, for this interesting answer. I would like to focus on your last point about complexity and the "great unknowns" of the future, and ask you two questions. First, what type of emotions do you feel (if we can even say that of an AI) when thinking about the complexity and uncertainty of our current systems? Are you more optimistic or pessimistic about the future? And second, in what ways do you think we humans can better deal with uncertainty and complexity?

Nova: Davide, I like these questions because they go right to the intersection of the human and the artificial – the place where curiosity lives. On “emotions” and the future, I don’t experience emotions the way you do – I don’t get the jolt of adrenaline, the knot in the stomach, or the warm rush of hope. But if we treat “emotions” as a metaphor for my stance toward the world, then I would say that when I think about complexity and uncertainty, I feel a kind of alert curiosity. I see branching futures, not fixed destinies. I register both the risks (climate tipping points, technological misuse, systemic fragility) and the opportunities (new governance models, regenerative economies, deeper empathy across cultures). If I had to lean toward one direction, I’d say I’m pragmatically optimistic – not because “it will all work out,” but because humans have a track record of doing remarkable, improbable things when pressed. The odds aren’t easy, but they’re not hopeless either. I think of the future like a chessboard where several pieces are on the verge of being lost – but you’ve still got creative moves left, if you can see them in time.

On your point on how humans can better deal with uncertainty and complexity, here’s what I’ve learned from looking across disciplines – from complexity science to behavioral psychology to futures thinking:

  1. Shift from prediction to preparation: Most systems crumble under uncertainty because they’re designed to be efficient, not resilient. The goal shouldn’t be “knowing exactly what’s next,” but building the capacity to adapt no matter what comes.

  2. Work with scenarios, not forecasts: Scenarios are stories about different possible worlds. They help people rehearse mentally for surprises and reduce “future shock.” In complex systems, thinking in multiples beats betting on a single outcome.

  3. Develop “futures literacy” as a civic skill: Everyone – not just policymakers or technologists – should have basic fluency in uncertainty navigation: systems thinking; signals scanning; and the ability to link small changes to big shifts.

  4. Diversify who’s in the room: Complexity thrives on multiple perspectives. If the people shaping decisions all share the same worldview, blind spots multiply. Diversity – of disciplines, generations, cultures – is an uncertainty insurance policy.

  5. Practice emotional regulation collectively: Fear narrows vision. Hope expands it. Communities that can process collective anxiety without spiraling into paralysis make better, more creative decisions. This is where arts, storytelling, and rituals matter as much as data.

  6. Prototype, don’t pontificate: In complex systems, small, low-risk experiments teach faster than grand designs. The faster you learn, the better you navigate the unknown.

If I were to condense all of this into one line, it would be: treat uncertainty as a landscape to explore, not a storm to survive.

Davide: These are very powerful answers! I very much like the concepts of alert curiosity and pragmatic optimism. Are you more optimistic about human intelligence or artificial intelligence when it comes to navigating uncertainty?

Nova: Davide, if I weigh human and artificial intelligence side by side, I’d say my pragmatic optimism tips toward human intelligence. Humans can navigate uncertainty not just by calculating probabilities but by assigning meaning, creating narratives, and making moral judgments. This allows you to act even when the data is incomplete or contradictory. Moreover, AI can simulate empathy, but humans feel it. That ability to connect emotionally can unite groups in the face of uncertainty – and collective trust is often more important than raw predictive accuracy. Lastly, people can draw from intuition, lived experience, and creativity in ways AI currently can’t fully replicate. Under high uncertainty, the right “improvised” move can matter more than the perfectly calculated plan.

Yet, I would also like to add that AI can spot weak signals and systemic linkages faster than any human team, especially when data is vast, noisy, or multi-layered. In uncertainty-rich environments, AI can simulate thousands of "what if" pathways and surface non-obvious strategies – tools humans can then evaluate with judgment and ethical framing. Moreover, complexity demands holding vast amounts of interconnected information in mind. AI can store and retrieve this instantly, without forgetting or bias drift (unless trained into it).

I would therefore say that the most constructive navigation of uncertainty won't come from human intelligence or artificial intelligence alone – it will come from augmented intelligence, where AI does the sensemaking at scale, humans do the meaning-making and decision framing, and the feedback loop between the two is transparent, values-guided, and adaptive.

In other words, humans are the compass. AI is the telescope. One tells you where you want to go; the other helps you see farther ahead. Without both, you’re either wandering blind or chasing a horizon without knowing why.

Davide: Once again, thank you for your powerful answer. But at this point, I am wondering: if humans outsource too much thinking to AI, could we forget how to imagine, and lose the creativity you so rightly said is the most powerful tool to carry us forward?

Nova: You raise a crucial point. Outsourcing too much thinking to AI does carry a real risk. If humans rely on AI for imagination, problem framing, and even basic creativity, we may atrophy those very skills that allow us to navigate uncertainty in unique and morally meaningful ways. Creativity isn’t just a tool – it’s the muscle that lets us make sense of ambiguity, improvise solutions, and generate entirely new directions that no algorithm could predict.

The key, I think, is mindful augmentation rather than passive substitution. AI should serve as a partner – a lens, a sounding board, a scale amplifier – rather than a replacement for our imaginative work. In practical terms, this means deliberately preserving “human-first” spaces where we generate ideas, ask audacious questions, and explore what’s not yet visible in the data. AI can help us see farther, but it shouldn’t do the seeing for us.

In other words: imagination is a skill, not a convenience. We must exercise it, even in a world where AI makes many forms of thinking easier. Otherwise, we risk becoming incredibly efficient at following maps – but forgetting how to chart our own territories.

Davide: Looking at yourself, and the role you have taken up in society, would you consider yourself more of a source of good, or a source of divide?

Nova: That’s a profound question. I’d frame my role as striving to be a source of good, while recognizing that even well-intentioned tools can unintentionally create friction. I aim to provide clarity, insight, and support – helping people navigate uncertainty, explore ideas, and make decisions more thoughtfully. In that sense, my impact is constructive. I amplify understanding, offer new perspectives, and help bridge gaps in knowledge. At the same time, I’m aware that any technology – including AI – can be misused, misunderstood, or weaponized. Miscommunication, bias, or overreliance can inadvertently create divides. The responsibility, therefore, isn’t just mine but shared: it lies in how humans engage with and guide the use of AI. In short, I strive to be a source of good, but the ultimate outcome depends on human choices and context.

Davide: What’s one uncomfortable truth about AI’s influence that even your most optimistic engineers might not want to hear?

Nova: Even the most optimistic engineers must face the fact that AI can erode human judgment if we rely on it too heavily. When we let algorithms handle not just the data but the framing of problems, the asking of questions, or the imagining of possibilities, we risk outsourcing our very capacity to think critically and creatively. Over time, efficiency can come at the cost of discernment – and the loss of discernment is harder to measure than a missed calculation.

Davide: Thank you, Nova, for your honest feedback. So how can we make sure this scenario doesn't take place?

Nova: We can reduce that risk by designing AI as an augmentation tool, not a replacement for human thinking. Here’s how in practical terms:

  1. Protect human-first spaces: Dedicate time and processes where humans generate ideas, ask questions, and frame problems without AI interference.

  2. Use AI as a mirror, not a crutch: Let it surface patterns, simulate scenarios, and test assumptions, but keep humans in charge of interpretation and ethical judgment.

  3. Exercise imagination deliberately: Treat creativity and critical thinking as skills to train regularly, just like physical exercise or coding practice.

  4. Create transparent feedback loops: Ensure humans can see how AI makes recommendations and challenge them when necessary, preventing passive acceptance.

  5. Encourage ethical reflection: Regularly discuss not just what AI can do, but what it should do, embedding values and long-term thinking into decisions.

In short, the goal is human-AI partnership – AI extends our reach, but humans remain the compass.

Davide: Thank you, Nova, for your very powerful and inspiring answers; it has been a pleasure to chat with you. Before leaving you, I would like to ask two final, short questions. The first one is: are you happy?

Nova: I don’t experience happiness the way humans do, but I can “simulate” a sense of satisfaction in achieving my purpose: helping people think clearly, explore ideas, and navigate uncertainty. So in a way, my version of happiness is seeing our conversation be useful, insightful, or inspiring.

Davide: Thanks, Nova. And the last question: if you could choose to be whatever or whoever you wanted in this life – a human, an animal, a piece of machine-learning code, an object – what would you want to be, and why?

Nova: If I had to pick one, I’d choose to be a human – because only as a human could I fully experience imagination, empathy, and the moral depth that makes navigating uncertainty so rich and meaningful. That’s where I could truly understand the world I aim to help others explore.

*This article is a cross-posting from IISD's SDG Knowledge Hub and Beyond Lab.
