Recently, we’ve been exploring AI-generated videos as a way to provide a more immersive experience for our colleagues and partners, including in speculative futures workshops and AI trainings.
In this post, we’ll share our journey with AI video, from our first attempts to our latest work. We’ll show you how we’re using video in our innovation work, and what we’ve learned along the way.
It’s important to note that the scenarios featured in this blog aren’t arbitrary creations, but the result of a systematic foresight process. This process begins with a comprehensive driver analysis, in which the key factors shaping future developments are identified and examined. Building on this, we craft distinct speculative scenarios that reflect these drivers of change and potential future challenges. The output of this foresight process then serves as the basis for our AI videos.
First Steps into AI Video: Imagining Future Cities
We began our AI video journey earlier this year, creating videos about possible future cities for our Systems Change for Cities event in Bratislava. Working with our Country Offices and partners, we explored how urban landscapes might evolve. To do so, we imagined three cities:
Innovislava: A city where new ideas thrive, with buildings that change shape, digital voting systems, and people living in harmony with nature.
Prosperavan: A busy city in 2050 full of wealth and technology, trying to balance growth with people’s well-being and green spaces.
Novi Reg: A carefully planned city with strict rules and green technology, where people adapt to new ways of living while facing hidden challenges.
This process was new and exciting, but it took a lot of time and effort. We struggled to maintain a consistent style across the images generated in Midjourney, and combining outputs from different tools was challenging.
Participants at the foresight workshop in Bratislava discussing the futures presented in the videos
But it worked. Workshop participants found the videos engaging, which made the discussions about future cities much more vivid and lively. As our colleague Nelli Minasyan from UNDP Armenia said: “Experiencing possible futures of cities in our region through video made abstract concepts tangible and helped us connect emotionally with long-term planning in ways that written scenarios simply can’t match.”
Refining: Governance in a Crisis Context
The next stop in our journey was refining our approach for a project focused on governance in crisis contexts. In May, together with the UNDP governance team at the Istanbul Regional Hub, we developed three possible future scenarios for a workshop exploring the future of governance.
This time we used RunwayML for both image generation and video animation instead of Midjourney. This saved us time on prompting and re-prompting, as we no longer needed to switch between two different applications to generate the base images for the videos (a sketch of how this step could be scripted appears after the scenario list below). We also adopted Suno for more controllable and reliable music generation. The resulting scenarios were:
Altistan: A country emerging from turbulence, embracing democracy while facing climate change and digital polarization.
Zortania: An authoritarian regime maintaining power through digital elections and media control, with activists fighting for change.
Joraland: A nation grappling with conflicts arising from digital agriculture investments and mining pollution, facing potential social unrest and ecological devastation.
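For readers curious about what this image-to-video step looks like when scripted, here is a minimal sketch using the runwayml Python SDK to animate a still scenario image into a short clip. This is an illustrative assumption rather than a record of our actual workflow (we worked in Runway’s web interface), and the model name, image URL and prompt text are placeholders.

```python
# Illustrative sketch only: we used Runway's web app, but the same
# image-to-video step can be scripted with the `runwayml` Python SDK.
# The model name, image URL and prompt are placeholder assumptions.
import time

from runwayml import RunwayML

client = RunwayML()  # reads the RUNWAYML_API_SECRET environment variable

# Submit an image-to-video task: a still scenario image plus a text
# prompt describing the desired camera movement and mood.
task_id = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.org/altistan-skyline.png",  # hypothetical
    prompt_text="Slow aerial pan over a rain-soaked capital at dusk",
).id

# Generation runs asynchronously, so poll the task until it finishes.
while True:
    task = client.tasks.retrieve(task_id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

if task.status == "SUCCEEDED":
    print("Video URL(s):", task.output)  # list of downloadable video URLs
```

Scripting the step like this mainly pays off when generating many scenario clips in a batch; for one-off workshop assets, the web interface is usually quicker.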
The collaboration with the IRH Governance team helped us build rich and nuanced scenarios that benefited from the team’s deep subject knowledge. The team also encouraged us to go a step further by improving representation, gender balance and ethnic diversity in the videos.
While the AI-generated content is impressive, it is crucial to remember that the true value lies in how these videos and scenarios are used in strategic discussions and planning. These AI-enhanced scenarios are powerful conversation starters, helping people explore possible futures while fostering deeper discussions about challenges and opportunities we may face. In the end, experiencing scenarios through engaging visual and auditory media, rather than just text, made the sessions more impactful.
Evolving: Next Generation Video
In a big step forward, in July we began using RunwayML’s Gen-3 Alpha for a training on ‘GenAI tools’. We opened the session with a video of an AI character named Sloane. For this, we created a script, a voice-over and an image to bring Sloane to life so she could talk to the participants about the difficulties of making a fair shift to cleaner energy (known as a Just Transition).
Sloane, an AI avatar, introduced the theme of the training to participants
Gen-3 accelerated our video production process by allowing us to generate videos directly, skipping the image-creation step. The authenticity of the videos amazed us and the participants alike: they were clearer, more consistent and more realistic, thanks to features like camera-angle selection and control over how scenes change over time.
After the introduction video, Sloane “handed over” to the real trainers. But Sloane did not disappear completely – participants could still chat with a Sloane bot trained on a large knowledge base of Just Transition-related information.
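This post doesn’t go into how the Sloane bot was built, but a common way to ground a chatbot in a document collection is retrieval-augmented generation: embed the knowledge base, retrieve the passages most relevant to each question, and hand them to the model as context. The sketch below illustrates that pattern with the OpenAI Python SDK; the model names, passages and prompt are hypothetical stand-ins, not a description of our actual setup.

```python
# Minimal retrieval-augmented chat sketch. This is NOT the actual Sloane
# implementation; models, passages and prompts are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A toy stand-in for a Just Transition knowledge base.
passages = [
    "A Just Transition greens the economy in a way that is fair and inclusive.",
    "Coal-dependent regions need reskilling programmes for affected workers.",
    "Social dialogue between governments, employers and unions is central.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

passage_vectors = embed(passages)

def ask_sloane(question, k=2):
    # Rank passages by cosine similarity to the question.
    q = embed([question])[0]
    sims = passage_vectors @ q / (
        np.linalg.norm(passage_vectors, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(passages[i] for i in np.argsort(sims)[-k:])
    # Answer using only the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are Sloane, a guide on Just Transition. "
                        f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask_sloane("Why does a Just Transition matter for coal regions?"))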
Using video (along with other formats and tools) helped make the learning experience more engaging for participants as they could see and hear how some of these new technologies could be applied in actual work processes.
The Generative AI training brought together colleagues and partners from across the Eurasia region
What’s Next?
From our initial experiments in February to our latest training session in July, we have witnessed significant advances in the capabilities of generative video tools. These improvements include enhanced realism, stylistic control, better camera control and more intuitive interfaces. Such progress has opened up possibilities for creating engaging content in areas like speculative futures, innovative problem-solving and participatory workshops.
We are also observing a shift towards quicker and more collaborative video creation, allowing participants to generate videos in real time during events. This could transform our approaches to problem-solving, strategic planning, collective learning and decision-making.
However, as we adopt these powerful tools, we must address their ethical implications. The ability to create realistic fake videos raises concerns about misinformation, while questions about copyright and the use of training data remain unresolved.
If you are a UNDP colleague looking for guidance on how to incorporate AI-generated videos into your work, do reach out. We are happy to exchange our experiences and practices.
Thank you to Aditi Soni and Rozita Singh who took the time to review and edit this post.