
FORWARD - European Perspectives on Tech & Society - No. 01 (IE)



European Perspectives on Tech & Society

Shaping the Future with AI

04 Bold, responsible, together.

Vanessa Hartley, Head of Google Ireland, on why AI offers a once-in-a-generation opportunity for Europe – and what it requires to realise its benefits.

06 What’s your hope for AI?

How do leading figures view artificial intelligence? What opportunities do they see in it? Hear from influential European voices in politics, business, science and society.

10 The next great scientific instrument

Pushmeet Kohli, Vice President of Research at Google DeepMind, discusses how AI can unlock a new wave of groundbreaking scientific discoveries.

14 AI made in Europe

Meet ten businesses, institutions and NGOs that are already leveraging AI with a positive impact.

20 »Smart regulation enables broader benefits«

Cecilia Bonefeld-Dahl from DIGITALEUROPE and Joëlle Barral from Google DeepMind discuss the European Artificial Intelligence Act and its effects on European competitiveness.

24 Vast potential

Martin Thelle, Senior Partner at Implement Consulting Group, explains the economic potential that AI offers – and what Europe needs to do to really capitalise on it.

28 Reducing emissions

Contrail prevention is an efficient lever to mitigate aviation’s climate impact. Google and EUROCONTROL are working on AI-based air traffic control to identify optimal flight trajectories to achieve this goal.

30 Improving capabilities and safety

Anca Dragan from Google DeepMind is convinced that AI offers a vast array of benefits – and that it comes with a spectrum of near- and long-term risks. Both must be addressed today.

34 It’s about autonomy

Around 1.3 billion people worldwide have severe disabilities. AI-based tools – such as Google’s Live Transcribe and Live Caption – can help them live more independently.

AI explainer

AI can help humans solve tasks faster and more easily – and with generative AI, machines can now understand and interact in language, sound and images.

Artificial Intelligence (AI)

AI is a general term for anything that allows computers to carry out human-like behaviours, including rule-based programmes.

Deep Learning (DL)

DL is a subset of ML where computers learn in a way that mimics the human brain. With DL, machines build layers of knowledge that are increasingly complex. These AI models are typically trained on specific datasets and used within a given field or industry.

[Diagram: nested subsets – Generative Artificial Intelligence within Deep Learning, within Machine Learning, within Artificial Intelligence]

Machine Learning (ML)

ML is a subset of AI where machines do not need to be explicitly programmed. They use algorithms to identify and learn patterns in data, apply what they have learned and improve themselves to make better and better decisions.
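To make the contrast with rule-based programmes concrete, here is a minimal, hypothetical sketch in Python (using the scikit-learn library, which is not mentioned in this magazine): instead of hand-writing rules, the program learns a pattern from a handful of example data points and then applies it to new data.

# Illustrative sketch only: machine learning as pattern-learning rather than explicit rules.
from sklearn.linear_model import LogisticRegression

# Toy training examples: [hours of daylight, temperature in °C] -> was ice cream sold that day?
X = [[14, 25], [15, 28], [9, 5], [8, 3], [13, 22], [10, 8]]
y = [1, 1, 0, 0, 1, 0]  # 1 = yes, 0 = no

model = LogisticRegression()
model.fit(X, y)                   # the model identifies the pattern in the data itself
print(model.predict([[12, 20]]))  # applies what it has learned to a new, unseen day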

Generative AI (GenAI)

GenAI is a form of AI that can understand text, code, images, sound and video and use these inputs to generate or synthesise new content. It can generally be used across almost any field or industry. Large Language Models (LLMs) such as Google Gemini (see box below) are a subset of GenAI. They are specialised applications within the broader field of GenAI, focused on generating high-quality text.

Introduction to Gemini

The Gemini app is an interface to Google’s multimodal LLM (handling text, audio, images and more) and allows people to collaborate directly with generative AI to supercharge their creativity and productivity. Key capabilities include:

Productivity

Chat with Gemini to start writing, planning, learning and more. You can chat using text, voice or even photos. »Help me write an email to my landlord Teresa about fixing the dishwasher«

Extensions

Pull together the information you need from Google services such as Maps, Gmail, Docs and YouTube, orchestrating across them quickly and easily to bring your ideas to life. »Summarise recent emails from John Miller«

Image generation

Create captivating images bursting with vivid details and realism – for fun, inspiration, work or anything you need – simply by describing what you imagine. »Create an image of a ball gown made of paper napkins in a showroom«

Gemini Live

Enjoy voice conversations with Gemini Live on your phone to brainstorm ideas, simplify topics and rehearse for important moments – all with real-time, spoken responses. »I’m applying for a job, help me practice interview questions«
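The capabilities above describe the consumer Gemini app. As an illustration of how the same kind of prompt can be sent to a Gemini model programmatically, here is a hedged sketch using Google’s generative AI Python SDK; the model name, API-key handling and this particular SDK are assumptions for illustration and are not described in this magazine.

# Illustrative sketch only: sending one of the example prompts above to a Gemini model.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # assumes a key from Google AI Studio
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

response = model.generate_content(
    "Help me write an email to my landlord Teresa about fixing the dishwasher"
)
print(response.text)  # the generated draft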

Bold, responsible, together.

Dear Reader,

Fresh out of college, I took my first job working on the rollout of broadband in Ireland. Our mission? To introduce broadband to Ireland, back when most people didn’t know what the internet was. Over time, I’ve watched the internet evolve from its clunky, dial-up days with sparse content to today’s lightning-fast, information-packed powerhouse.

When each of these technologies was new, it was hard to predict how far it would change the world, the risks it would bring and the huge net benefit that would result. Today, 73 per cent of the world’s population owns a mobile phone and 66 per cent has internet access. These technologies have changed lives exponentially. Artificial intelligence (AI) now presents another paradigm shift, offering unprecedented opportunities to transform businesses and lives in ways we haven’t even imagined. Like any new technology, AI comes with inherent risks. People are right to talk about the risks of misinformation, misuse and mistakes. But for me, one risk outweighs them all: the risk of Ireland missing out. This magazine takes into account both sides: the risks of AI and the many opportunities it offers.

Contrary to perception, AI is not new – and it’s not just chatbots. Autopilot systems for planes have used AI since the early 2000s – as have spellcheck and grammar tools and spam filters. At Google, we’ve been working on AI for over a decade. In 2012, we started training computers to recognise cats on YouTube – a seemingly unimportant feat, but really our first venture into machine learning. 2016 saw us create AlphaGo, and in 2017 we introduced the Transformer – the foundation of almost all large language models to date. In 2018, we published our AI Principles for the first time – our public outlining of how we pursue AI technology boldly, responsibly and with others, and to what end. We updated these principles annually between 2019 and 2023, reflecting our ongoing commitment to the responsible development of AI. And we’ve continued to progress since, with AI-powered Google tools now helping people and businesses across the world every single day.

But for me, all of these developments go back to one Google product: Google Translate. Built in 2006, it originally had just two languages to help government officials translate documents for the UN. Now it has nearly 250, and we’re working on a thousand. (In fact, Google Translate also helped professional translators create Polish, Spanish, Dutch, Swedish and Danish versions of this magazine.) It translates 150 billion words a day – and do you know what the three most frequently searched terms are? »Hello«, »How are you?« and »I love you«. It’s a constant reminder to me in a sometimes complicated world that people want to build bridges, not walls. It’s also why I think that AI will empower us to become more human. Much of AI has been born from trying to better understand humans and how we express ourselves – and work on tools like Translate that seek to better understand language is what’s led to amazing breakthroughs in technology, paving the way for much of AI and large language models.

AI has captured the imagination of people across the world in the last few years, and the speed of technological advancement is accelerating. At Google, we are pursuing the extraordinary and the everyday opportunities of AI boldly, as they can improve lives exponentially. At the same time, we are building this technology in a way that is safe and responsible, mitigating the very real risks of AI while still realising its benefits (see our interview with Anca Dragan, Director of Safety and Alignment at Google DeepMind, on page 30). And finally, we’re working together with governments, NGOs, academics and the private sector, as making the most of technology has always been a team sport and AI will be no exception.

Once-in-a-generation opportunity

Getting this right is important. For Europe, AI offers a once-in-a-generation opportunity to accelerate economic growth and increase competitiveness. For Ireland, generative AI has the potential to boost GDP by up to 45 billion euros (8 per cent) in ten years. In this magazine, you will find success stories of companies from all over Europe benefiting from AI (see our interview with Martin Thelle, who conducted a report on the economic opportunity of AI, on page 24, and our story on European case studies of all sizes that benefit from AI on page 14) and solving some of the most complex challenges we face (see our essay by Pushmeet Kohli, VP of Research at Google DeepMind, on page 10). But these benefits are not automatic; they will require a regulatory framework that allows innovation and breakthroughs, investment in research and development, the right infrastructure and an AI-ready, skilled workforce. Of course, »anything goes« is not an option – AI must be developed ethically and responsibly, and must be regulated. But Europe needs to strike a careful balance: mitigating the risks of new technology while enabling innovation. For our cover story, AI research expert Joëlle Barral and Cecilia Bonefeld-Dahl, Director General of DIGITALEUROPE, discuss how to navigate this balancing act.

Twenty years ago, when I began my career rolling broadband out in Ireland, I had no idea of the impact it would have on my life and the world. The opportunities of AI go far beyond that – fighting disease (with tools such as Google DeepMind’s AlphaFold, as described on page 13), supporting decarbonisation (see our article about contrail avoidance on page 28) and aiding accessibility (see our text about AI and accessibility on page 34) – ultimately offering a better, fairer society. In this magazine, you’ll find many inspiring examples of these and other benefits, as well as a diverse range of perspectives and dialogues on AI. I hope you enjoy reading this first issue of Forward.

Yours,

Vanessa Hartley

What’s your hope for AI?

»AI can help in the resolution of the biggest challenges of our time. Companies can help Europe in becoming a driving force in terms of industrial capabilities and strategic autonomy, to showcase that digital transition can be democratic and put workers at the center. We count on Google to do so.«

»Generative AI is as transformative as the early internet. It will reshape our economy and society. We cannot afford not to invest fully in the development and application of this technology but also need to make sure to responsibly guide its development.«

Prince Constantijn of the Netherlands

»Artificial intelligence promises to power a digital renaissance in the public sector. AI tools can improve the quality and impact of public services by boosting productivity, freeing up resources and enabling new ways to drive citizen engagement and empower local communities. Europeans should welcome rather than resist this transformative potential.«

How do leading figures view artificial intelligence? What opportunities do they see in it? Hear from influential European voices in politics, business, science and society.

»The significance of AI lies in its potential to contribute to achieving the United Nations’ Sustainable Development Goals. If AI can accelerate our progress towards these goals, then it becomes a vital tool. The ultimate purpose of AI should not be centred around itself but rather focused on the well-being of people, the preservation of our planet and the pursuit of prosperity.«

»Collaborating with industry stakeholders on AI development is crucial to ensuring that technology advances in a way that genuinely benefits consumers. By working together, we can advocate for transparency, fairness and ethical practices in AI, empowering consumers with tools that are innovative and enhance their lives while ensuring that they are safe, inclusive and respectful of their rights.«

»I envision a future where AI tools are widely adopted, empowering individuals and businesses to be more efficient, productive and innovative. But we need to ensure everyone has access to the training and tools needed to thrive in this new landscape. It’s also crucial to shift mindsets, fostering a society that embraces AI not with fear, but with curiosity and enthusiasm.«

»AI offers the potential to deliver a step change in productivity and growth in business, including for our SMEs. Government is acting now to seize the opportunity in order to harness AI technology for the good of both our economy and society.«

Peter Burke

Minister for Enterprise, Trade and Employment, Ireland

»We’ve been using AI throughout our value chain for the past few years to develop new medicines and build new digital products. With predictive analytics and the responsible use of AI, we can process vast quantities of generated patient data in a secure and compliant way to uncover insights into which treatments are most effective or why some people are predisposed to certain diseases. In the area of radiology, we’re making use of AI tools with the aim to further improve medical imaging and to accelerate medical solutions that can be critical in aiding diagnostics.«

»Used effectively, AI tools can be a boost to the European employment scene. They can be used to help workers to grow their skills and organisations to reach wider pools of qualified talent. But delivering these social goods will require building a balanced ecosystem. There is a sweet spot, where citizens have the protection they need and innovators have the freedom to deliver the promise they have raised.«

»AI’s transformative power is pushing societies and economies to an inflection point. Organisations developing AI must demonstrate accountability to foster digital trust by design. Governments need a targeted approach to AI regulation. This means evolving existing laws and interpretations to address AI development and adopting outcome-based rules for high-risk areas.«
Bojana

»AI is an important part of my work as a content creator, especially in animation and photo editing. AI tools support me daily in brainstorming and creating graphics and backgrounds, as well as text corrections and translations. The combination of my own creations and AI-generated elements helps me realise my creative visions.«

»The world has in its hands the potential to apply AI to drive forward the net-zero transition and give us a chance to stay within 1.5C. AI and machine learning can contribute massively to the pace of processes, drive higher productivity and help run better systems. They can unlock new sustainable, resilient and equitable growth – whilst managing the immense and urgent risks of climate change, biodiversity loss and pollution.«

Institute on Climate Change and the Environment, London School of Economics

Gemini helped extract some of these statements from podcast interviews.

THE NEXT GREAT SCIENTIFIC INSTRUMENT

Artificial intelligence is already helping researchers around the world find answers to the big and pressing questions of our time. But this is only the beginning. Pushmeet Kohli, VP of Research at Google DeepMind, discusses how AI can unlock a new wave of groundbreaking discoveries.

We stand at a pivotal moment at which the pace of scientific discovery has slowed. Yet the challenges we face – from pandemics and climate change to economic stagnation – are increasingly complex and urgent. The solution to this innovation bottleneck – and the key to unlocking scientific advancements – lies in the interplay between cutting-edge scientific research and artificial intelligence. Today, AI systems are already an important tool for analysing scientific data and are used in almost every discipline – from structural biology to quantum chemistry, and meteorology to materials science. But there is much more to follow. AI is a powerful tool that enables researchers to look at the complexities of the world in new ways and get answers faster. At Google, we see AI as the ultimate multiplier for human ingenuity, enabling us to accelerate science and innovation in almost every field: whether it’s improving our understanding of biology to develop new medicines, unlocking limitless clean energy from fusion research or finding sustainable materials or solutions to the climate crisis.

AlphaFold, one of Google DeepMind’s core projects, is a fascinating example of how AI has had a major impact on a scientific challenge that researchers have been working on for decades: decoding and better understanding the three-dimensional structure of proteins. Proteins consist of complex, folded chains of amino acids and are at the heart of all cells. The big challenge is that every protein has a different structure. The shape determines its properties and functions in the organism. Describing a single protein and identifying its functions in the body can be the work of an entire doctoral thesis and take years. With AlphaFold, we have helped speed up this process: by predicting the structures of 200 million proteins – almost all proteins known to science – AlphaFold has potentially given us the equivalent of hundreds of millions of years of progress. And with the latest generation, AlphaFold 3, its structure prediction capabilities have been extended to other biomolecules such as DNA and RNA. When we first developed AlphaFold, we were able to build on the great preliminary work of countless researchers.

By solving some of the hardest scientific and engineering challenges of our time, we’re working to create breakthrough technologies.

Whenever they decoded the structure of a protein, this 3D structure was stored in a database. This database with more than a hundred thousand protein structures served as a training basis for AlphaFold. Given its huge potential for impact, we decided to make our AlphaFold 2 technology available to research groups around the world as an open-source application. For the last two years, the AlphaFold database has provided open access to our 200 million protein structure predictions, including 20,000 from the human body. This free database of predictions has helped democratise scientific research, and has now been used by over 2 million researchers from 190 countries. In Europe, the database has been utilised by renowned institutions such as the University of Oxford, the University of Copenhagen and the Leibniz Institute for Tropospheric Research. Importantly, it also empowers teams in emerging countries that would otherwise not have the necessary research infrastructure to decipher protein structures experimentally.

Predictions accelerate workstreams

Research involving AlphaFold addresses a wide range of health challenges, from malaria vaccine development (next page) to better understanding Covid-19, cancer and other diseases. It is also being used for non-medical research, such as developing plastic-eating enzymes (next page) or tackling pathogens that threaten the crops we eat. Many scientists using AlphaFold have commented that its predictions have allowed them to accelerate their workstreams by a matter of years – accelerating real-world solutions to some of the biggest challenges facing humanity. With AI and machine learning, we finally have the tools to comprehend the very sophisticated system that we call life.

But we’re not stopping at AlphaFold and protein structures. We’ve also developed an AI tool that could help revolutionise materials research. It dramatically increases the speed and efficiency of discovery by predicting the stability of new materials, and has discovered 2.2 million new candidates. Amongst the 380,000 stable predictions are candidate materials that have the potential to make technologies greener – from efficient and sustainable batteries for electric cars to superconductors for more efficient computers. By solving some of the hardest scientific and engineering challenges of our time, we’re working to create breakthrough technologies that could advance science, transform the way we work, serve diverse communities and improve billions of people’s lives.

Given the enormous challenges facing humanity, AI is no longer a »nice to have« but an essential when it comes to accelerating scientific progress. But this mission is too big and too important to be accomplished alone. We need to work together to maximise the potential of AI, and that’s why at Google we’re partnering with scientists and research institutions across Europe and engaging with policymakers and society at large. At the same time, we’re mindful of the need to act responsibly and mitigate the very real risks of AI, including mistakes or misuse. Guided by the scientific method, and adopting a holistic approach to responsibility and safety, we’re working to ensure AI benefits everyone and helps to change the world for the better.

Summary by Gemini

PROMPT: Help me summarise the following article on scientific breakthroughs by Pushmeet Kohli in 100 words.

Pushmeet Kohli highlights AI’s pivotal role in accelerating scientific discovery. By analysing data and providing new perspectives, AI tools like AlphaFold have significantly advanced fields such as protein structure prediction and materials research. AlphaFold, in particular, has democratised scientific research by making its predictions widely accessible, enabling researchers to tackle complex challenges like malaria and plastic pollution. As AI continues to evolve, its potential to revolutionise various scientific domains and improve human lives is immense. However, responsible development and deployment are crucial to ensure its benefits are maximised and its risks are mitigated.

AI in practical use

Scientists use AlphaFold to work on a wide range of challenges. Below are two examples:

01. FIGHTING MALARIA

Every year, more than 600,000 people die from malaria. Researchers around the world are looking for effective vaccines. One challenge is the mutability of the malaria pathogens. After entering the bloodstream through the bite of infected mosquitoes, they evade the immune system by constantly changing their own structure and that of their host cells. Researchers at the University of Oxford have used AlphaFold to find out what parts of a protein they can use for a new generation of vaccines that attack the parasites throughout their life cycle. Previously, researchers had to examine thousands of molecules individually to find the most promising ones. AlphaFold can help accelerate the search, leading to promising treatments to help prevent the outbreak and spread of malaria.

02. EATING PLASTIC

Plastic recycling has been difficult so far because industry predominantly uses mixed plastics. Separating them into their individual components is time-consuming, making complete recycling virtually impossible. As a result, a large proportion of the 400 million tons of plastic waste produced each year is incinerated or ends up in the world’s oceans. Developing enzymes to break down these polluting plastics could be a solution. Researchers at the University of Portsmouth used AlphaFold to identify 100 enzyme structures in a matter of days, allowing the team to engineer faster, more stable and cheaper enzymes for plastic recycling. This approach is already working very well on a small scale: they almost completely degraded one tonne of plastic waste in just ten hours. On an industrial scale, this type of work could revolutionise plastic recycling and significantly reduce marine litter.

AI MADE IN EUROPE

AI is not just the future. It already provides benefits right here and now: for young, small startups and traditional corporations, for cybersecurity and radiology firms, for companies in Dublin, Copenhagen and Madrid. Here are ten businesses, institutions and NGOs that profit from AI day in and day out – for customers, patients and employees.

SPOTIFY

Catering to music fans around the world

Spotify is a true European tech success story, with more than 600 million people worldwide using the company’s music-streaming services every month. In 2016, Spotify migrated 1,200 online services and data processing structures to Google Cloud, enhancing operational efficiency and gaining access to Google’s AI capabilities.

With data privacy at the core of its development activities, Spotify uses Google’s large language models to help identify individual listening patterns and tailor personal recommendations to users. Features such as Discover Weekly and Spotify Wrapped use models like these to help deliver curated playlists and data-driven storytelling that boost user loyalty. With the help of AI, Spotify also identifies and removes potentially harmful content, leading to a safer environment for music lovers.

Harnessing the power of AI, Spotify has been able to engage its users and display healthy double-digit growth for premium subscribers.

CROWDSEC

Making the internet a safer place

In the 2010s, Philippe Humeau and Thibault Koechlin worked in high-security hosting and designed a system which blocked bad actors’ IP addresses. One day, one of their clients was attacked by a hacker who used more than 3,000 IP addresses. While the attack was dealt with, it made Humeau and Koechlin think: »What if we share those IPs with our peers and colleagues in the industry? That would cripple this cybercriminal’s operations even further, right?«

In 2020, the two Frenchmen founded CrowdSec, a cybersecurity startup based in Nantes that offers an open-source, collaborative and proactive intrusion-prevention system and web-application firewall. Every time an IP address exhibits suspicious behaviour, it is blocked and shared. The twist is that, if this IP address is reported by enough users, it is redistributed in the real-time blocklist, enabling users to protect each other. AI plays a crucial role in avoiding false positives and identifying complex behaviours, sudden changes of behaviour, low-signal-to-noise attacks, IP addresses working in cohorts or even IP addresses of residential proxies. Recent AI breakthroughs allow CrowdSec to transition from a classic expert system to a machine-learning approach for analysing 12 million signals daily. Now other AI systems are starting to learn from CrowdSec’s unique dataset to propose their own solutions. This unique approach made CrowdSec – which participated in the inaugural Google for Startups Growth Academy: Cybersecurity programme last year and uses Google tools and products – an instant success. Today, CrowdSec is used by more than 100,000 active users in more than 190 countries and protects against over 10 classes of attacks in 500 different scenarios.
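To picture the consensus mechanism described above, here is a minimal, hypothetical Python sketch of the core idea; the threshold and data structures are illustrative assumptions, not CrowdSec’s actual implementation.

# Illustrative sketch only: an IP address reported by enough independent users is
# promoted to a shared, real-time blocklist that protects every participant.
from collections import defaultdict

CONSENSUS_THRESHOLD = 5        # assumed value, purely for illustration

reports = defaultdict(set)     # IP address -> set of user IDs that reported it
shared_blocklist = set()

def report_ip(ip: str, reporter_id: str) -> None:
    """Record a report and promote the IP once enough distinct users agree."""
    reports[ip].add(reporter_id)
    if len(reports[ip]) >= CONSENSUS_THRESHOLD:
        shared_blocklist.add(ip)

# Example: five different users observe the same suspicious address
for user in ["u1", "u2", "u3", "u4", "u5"]:
    report_ip("203.0.113.7", user)

print("203.0.113.7" in shared_blocklist)  # True – redistributed to all users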

CLEVERGY

Saving money while protecting the climate

Analyse, optimise and control your energy consumption via an AI-powered app: that’s a clear value proposition. But Clevergy, a Madrid-based company that took part in the Google for Startups Growth Academy: Sustainability programme in 2022, aims higher than mere cost savings. At the core of the company’s solution is Google Cloud providing Clevergy with the architecture to analyse millions of data points, such as hourly energy consumption, weather patterns and household information, to create personalised recommendations for individual consumers – tips for anything from boosting the performance of your solar energy installation to identifying the best times to charge your electric car or run your dishwasher. »What we want to do is create a platform that is going to unlock the energy transition for all households,« says CEO and co-founder Beltran Aznar. To accomplish this, Clevergy partners with energy providers such as Spanish utility company Naturgy or solar panel installers. Clevergy allows them to offer an app to customers that provides actionable insights into their energy consumption. By leveraging these insights, businesses can identify cross-selling opportunities and help consumers reduce energy waste and costs. This approach is mutually beneficial: not only does it boost customer satisfaction and revenues, it also contributes to a more sustainable future. As energy efficiency is a cornerstone of Europe’s decarbonisation goals, AI-powered solutions like Clevergy can play a vital role in driving positive environmental impact.

SOCIALSPACE

Empowering young people to take control of their mental health

According to a Danish government survey from 2021, one out of three women and one out of five men between the ages of 16 and 24 struggle with mental health issues – an increase of more than 7 per cent in a decade. Copenhagen-based NGO SocialSpace wants to help reverse this trend by helping young people to take control of their psychological well-being.

SocialSpace was founded in 2021 by Freja Sangild Boysen, who had personally experienced how difficult it was to navigate the complex tangle of more than 70 free counselling services available in Denmark. Her NGO designed an app that includes an AI-powered chatbot to guide young people through this ecosystem. »We believe digital tools can be part of the solution,« SocialSpace states.

The chatbot, built on Google Gemini and trained on SocialSpace’s user data, aims to create an interactive, seamless experience for individuals seeking advice. It offers personalised, anonymous support, using empathetic language to encourage young people to open up about their experiences. Based on these conversations, the chatbot suggests a relevant selection of services derived from keywords frequently found on service providers’ websites. With the AI chatbot, SocialSpace expects to double its user count in two years and hopes to create a blueprint for similar apps elsewhere.

DEEPHEALTH

Harnessing AI to fight lung cancer

According to the European Commission (European Cancer Information System), lung cancer is the leading cause of cancer-related deaths in Europe. But when found at an earlier stage, patients have more treatment options and a greater chance of survival. While low-dose computed tomography (CT) scans offer hope for earlier-stage detection, a major obstacle has been the complex and time-consuming analysis needed to accurately identify and diagnose cancerous nodules in the scans.

DeepHealth, a global leader in AI-powered health informatics, has developed a cutting-edge AI solution designed to assist radiologists in identifying cancer. By automatically detecting and segmenting lung nodules, the AI allows radiologists to interpret lung CT scans more confidently and, according to a study by Hempel et al., up to 42 per cent faster. Today, DeepHealth’s AI-powered solution is being used to assist radiologists in one of the world’s largest lung cancer screening efforts, NHS England’s targeted lung health check programme, where data from the UK government shows that 76 per cent of cancers detected were found at earlier stages.

The targeted lung health check programme is a remarkable example of AI’s potential to not only improve clinical outcomes, but to unlock the power of screening to detect disease at the earliest stages. By harnessing AI for large-scale population screening programmes – in their end-to-end solutions for lung, breast and prostate health – DeepHealth is pushing the boundaries of what’s possible with imaging technology.

UKSH

Optimising hospital processes with AI

Professor Jens Scholz puts it plainly: »A hospital that doesn’t actively explore artificial intelligence today hasn’t made it into the 21st century.« The CEO of the University Hospital Schleswig-Holstein (UKSH) knows that the healthcare sector is in urgent need of modernisation. »Last year alone, we treated half a million people at the UKSH,« says Scholz. »Artificial intelligence could improve processes for everyone involved – for example in the A&E department.« A pilot phase will see AI optimise operational processes with a view to providing patients with even faster care, reducing the workload on hospital staff and using resources more efficiently.

One of the advantages of AI, as Scholz outlines, is that it can already link patient information with existing data from previous treatments while the patient is being transported to the hospital. Even before arriving, AI could use various items of data, such as the patient’s vital signs, to work out which treatments are likely to be required. In this way, AI could help to better coordinate the necessary resources. »The technology has tremendous potential,« says Scholz. When it comes to AI, the UKSH plays a pioneering role among German hospitals. It is working intensively on a wide range of possible applications for AI. For example, employees can use AI language models to draft doctor’s letters. To be able to deploy AI successfully even in conjunction with sensitive data, the UKSH is now using the sovereign cloud offered by T-Systems and Google, which is subject to particularly high security requirements. What’s more, the data is only hosted in Germany. The main goal behind every use of AI: by having AI take over as many administrative tasks as possible, medical staff have more time to care for their patients.

CARREFOUR

Cutting costs and minimising food waste

Modern shoppers expect more than just well-stocked aisles; they demand seamless digital experiences, from convenient online ordering with in-store pickup to personalised inventory recommendations. Carrefour, a leading European retailer with a rich history, has embraced this shift and is undergoing a significant digital transformation to meet consumers’ expectations and stay ahead of the competition.

For a company of Carrefour’s size, such a move is no small feat. To reduce costs and increase profitability, the company has partnered with Google Cloud to leverage AI-powered solutions. In its Belgian stores, Carrefour is using AI to optimise inventory management. Sophisticated algorithms, fuelled by vast amounts of shopping data, are now accurately forecasting demand, enabling the retailer to maintain optimal stock levels and predict reordering needs.

The results are striking: since leveraging Google Cloud, Carrefour has reduced operating costs by 40 per cent and energy consumption by 45 per cent in its information technology unit. The AI system also serves as a powerful tool for reducing food waste. By proactively identifying items nearing their expiration dates and recommending price reductions, Carrefour minimises food waste and financial losses – a win-win for the company and the environment.

ELEVENLABS

Giving a voice to the digital world

AI startup ElevenLabs is on a mission to make content universally accessible in any language and voice. Born out of its Polish founders’ frustration with emotionless dubbing, the company developed its own model capable of generating human-like speech from text. ElevenLabs models are contextually aware and can realistically reproduce intonation, pacing and emphasis as well as the emotional nuances of the text. This means that a crime novel will have a different tone to a love poem, and voices can be tailored by gender, age or accent. By partnering with Google Cloud, ElevenLabs has the infrastructure it needs to reliably serve customers around the world at large scale and with consistent uptime. Their technology is used to voice audiobooks and news articles, animate video game characters, help with film preproduction, automate localisation processes in entertainment, create dynamic audio content for social media and advertising, voice automated support agents and train medical professionals.

ElevenLabs’ mission goes beyond business and entertainment. The company’s founders Piotr Dąbkowski and Mateusz Staniszewski strive to democratise access to written content for people with visual impairments and to help those who have lost their voice to reclaim it. The company focuses on both building top-notch audio AI products and enhancing safeguards against misuse. Committed to responsible AI and AI content transparency, they utilise advanced moderation systems, partner on content provenance and have developed proprietary voice authentication mechanisms.

AA IRELAND

Improving the digital experience for customers

Founded in 1910, The AA (Ireland) is a household name in the country. Although it is best known for emergency roadside assistance, it is also a leading broker for car insurance and provides home and travel insurance as well. Working with the design agency Granite, The AA (Ireland) started to transform its website in 2021, using AI tools along the way – with a focus on making it more convenient for users to get car insurance quotes. »Our redeveloped car insurance journey benefits from AI-driven recommendations for placement, journey flow and section optimisation, significantly accelerating the process, reducing user cognitive load and enhancing overall user satisfaction,« The AA (Ireland) states. In addition, generative AI models such as Google Gemini have proven to be useful in refining creative output and developing digital ad campaigns that resonate with different core audiences. »Gemini assists in the creation and tweaking of ad copy and finding creative stock we can use. But Gemini also helps research the competitive landscape,« the company says, adding that the technology helps the organisation swiftly adapt and meet evolving customer needs. »At The AA (Ireland), we see AI as an extension of our team, continuously transforming a business with over 100 years of heritage and trust.«

BOWER

Towards the circular economy

In 2021, the European Union produced more than 84 million tonnes of packaging waste – an increase of 6.0 per cent compared with 2020 – or around 190 kilograms per inhabitant. Less than half of the municipal waste is recycled. Suwar Mert and Berfin Roza Mert, two siblings from Stockholm, Sweden, want to help people recycle their trash better than before. In 2018, they created Bower, an AI-powered mobile app that identifies any material or packaging and directs users to a nearby waste or recycling place to dispose of the item properly.

Users can either scan a barcode or take a photo of the item. Thanks to a newly developed open-source computer-vision model trained on Google Gemini, the app now works across the world and can identify most household items with an accuracy of more than 90 per cent. It will recognise both the object type and material type, match it with local recycling guidelines and help users dispose of the item correctly. To encourage users to engage, the app developers have gamified recycling with challenges, competitions, cash rewards and CO2-emission tracking.

Bower currently has more than 650,000 users and is available globally. In 2023, Bower received 1.75 million euros in funding through Google.org and support from a team of Google Fellows in order to accelerate the app’s development. The siblings’ goal is ambitious: to save 500,000 tonnes of CO2 annually through recycling.

»Smart regulation enables broader benefits«

Cecilia Bonefeld-Dahl, Director General of Europe’s leading digital technology association DIGITALEUROPE, and Joëlle Barral, Senior Director of Research and Engineering at Google DeepMind, discuss the European Artificial Intelligence Act and its effects on European competitiveness.

Cecilia, a few months ago the European Artificial Intelligence Act (AI Act) entered into force as the world’s first comprehensive regulation on AI. What’s your perspective on the EU taking the lead in AI regulation?

CBD It’s a historic event to try to regulate an area that is so brand new. For some points, it is good, for others not that good. Last year, only around 6 per cent of venture capital on AI went to startups in the EU, which is very little. We have a large number of small AI companies. The lack of consolidation makes it challenging to attract investment and compete globally.

How do you foresee the AI Act impacting European competitiveness?

CBD It has definitely not made companies headquartered in Europe more competitive. Many have already gone to the US. European consumers will not have the first benefits of the products that will come to market. We need to speed up our agility, and we won’t win through regulation.

Joëlle, you are the Director of AI Research at Google DeepMind. How do you anticipate the AI Act influencing your work and the development of AI technologies?

JB It’s already influencing everything we do. We have an AI Act Readiness Programme internally. Our teams are getting ready to embrace the documentation, testing and monitoring requirements. We need to make sure that the way we put that into practice will adapt as the technology evolves.

Google operates globally, encountering diverse regulatory landscapes. How does the company navigate these complexities to ensure responsible AI development and deployment across different regions?

JB Many countries recognise that AI is a tremendous opportunity for society, while also acknowledging the challenges of ensuring it is ready for widespread use. Bringing many countries together is critical to develop a common layer of global governance. To ensure consistent and responsible AI development and deployment worldwide, we work closely with governments, regulators and industry experts to understand local nuances and adapt our products and services accordingly.

Do you think it might be possible to develop a set of AI rules that is applicable globally?

JB I’m hopeful. The situation will look different decades down the road if we manage to have that common framework versus if we each look at it separately.

CBD Tech companies are not national. Having national or even regional rules creates difficulties. Everybody will benefit from having a harmonised framework.

DIGITALEUROPE recently published a study according to which the EU is behind global rivals in most critical technologies including AI. What are the main reasons for this?

CBD That we are still 27 countries! I’m a former CEO, and I chose to scale outside Europe first. That’s the case for many tech companies. There cannot be scalability in Europe, unless it’s truly one market. Removing those barriers is the biggest job for the next term of this Commission.

What can or should politics do to foster Europe’s competitiveness in AI?

JB The next EU term opens a window of opportunity to develop a multi-year strategy and policy agenda that leverages AI and other digital technologies to the benefit of EU competitiveness, against the backdrop of strong transatlantic cooperation. Policymakers’ focus should be on the implementation and enforcement of enabling policies to foster scientific breakthroughs, innovation with digital tech and broad adoption of digital tech across all sectors by organisations large and small. Furthermore, the tech ecosystem should be a place where stakeholders strengthen each other. People should be able and willing to move easily between startups, big companies and academia.

Regulating AI involves a delicate balance between fostering innovation, enabling societal benefits and managing inherent risks. How can policymakers achieve this equilibrium, avoiding over-regulation?

CBD Ask the companies: »What’s difficult for you?« We know that if you are a 50-person company, you will have a direct cost of around 300,000 euros just to implement the AI Act. Who’s going to pay that? We need an investment programme. If not, we’ll slow companies down further, and they will be hiring lawyers instead of coders. And I think it’s fair to say: if a company submits all the required paperwork to roll out a product that pertains to one of the risk categories, they should expect an answer within the next three months. If no red flags are reported during that time, the product should be considered compliant by default.

JB We need to make sure AI is regulated in a way that encourages and enables the broader societal and economic benefits AI can bring, while at the same time addressing the risks and complexities of the technology as well as the concerns people have. With the AI Act already adopted, this will be particularly important to keep in mind as the code of practice is developed and implemented over the next few months. This code is a tool to bridge the gap between obligations for general-purpose AI model providers coming into effect and the adoption of harmonised European standards.

»There cannot be scalability in Europe, unless it’s truly one market. Removing those barriers is the biggest job for the next term of this Commission.«
CECILIA BONEFELD-DAHL

Which other general conditions are critical to foster AI innovation and competitiveness in Europe?

JB Investment in research and development is critical. At present, the US spends more than twice as much per capita on AI R&D as the EU. R&D is of paramount importance to better understand AI’s benefits and risks and how to manage them. Furthermore, building the right underlying infrastructure is key. We should also double down on Europe’s efforts to upskill and re-skill workers and businesses, putting AI skills at the centre.

The study conducted by DIGITALEUROPE also points out the importance of strategic collaboration. How can industry, academia and governments work together to accelerate AI innovation and strengthen Europe’s position in the global AI landscape in the coming five years?

CBD We need to have more cross-fertilisation. I don’t see why we should not have a call for industry to come and share knowledge. There are a lot of lawyers looking at regulations. They will need knowledge on how AI is built or how to document. Also, there are the real threats, like cyberattacks by malicious stakeholders and many other things that we need to learn and teach each other.

Joëlle, Google has its own AI principles. What motivated their development, and how do they guide responsible AI development and use within the company?

JB We were one of the first to put those AI principles in place in 2018. The first one emphasises that AI must be socially beneficial. And then additional principles make sure we don’t deploy technology if it will bring harm. Our AI principles also spell out applications we don’t support, like mass surveillance or weapons. From early research to product launch, our teams use these principles as an anchor. They are real principles we live by.

Let’s talk about the public sector. How do you think that AI can help governments and politicians improve efficiency in their daily work, as well as in the public sector overall?

CBD They should use AI to find out how they can spend less public money and create more value out of it. Whether that is in curing cancer or making sure that they fight pandemics or cybercrime.

JB AI brings personalisation. A recent example is the city of Padova in Italy, which was receiving many calls because the air quality was challenging. Through AI, they were able to provide better answers to their citizens and free up resources that were put to better use. There are gazillions of examples like this one.

To wrap up, what is your vision for the future of AI in Europe?

»Europe should realise that it missed the tech transition 20 years ago and it now has the opportunity to catch up because it has the talent.«
JOËLLE BARRAL

JB Europe should realise that it missed the tech transition 20 years ago and it now has the opportunity to catch up because it has the talent. I think there is a big opportunity for Europe to be a major player in this new era.

CBD Would it be unrealistic to say 25 per cent of public spending in Europe goes to the digital transformation of society? We could become specialists in some of the areas that are not addressed enough today, like healthcare, education or fighting climate change. We’re not punching to our full strength in some of the areas that are crucial for Europe’s citizens.

VAST POTENTIAL

Artificial intelligence can give the European economy a powerful boost, according to a new study commissioned by Google. Martin Thelle, Senior Partner at Implement Consulting Group and co-author of the study, explains the specific potential that AI offers – and what Europe needs to do to really capitalise on it.

You just finished a report dealing with generative AI’s impact on the EU economy, commissioned by Google. What are the most important findings?

We estimate that generative AI has the potential to boost the EU’s gross domestic product (GDP) by 8 per cent, or by between approximately 1.2 and 1.4 trillion euros, over the next decade. Adoption rates are expected to increase rapidly, following an S-curve pattern, with a peak in marginal adoption in about ten years. This pace is significantly faster than that of previous technologies, which often took decades to reach widespread use. More than 60 per cent of jobs will be complemented by AI, enhancing productivity by automating time-consuming tasks, allowing workers to focus on more value-creating activities. To capture these benefits and help people adapt to these changes, investing in skills training is key.
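The S-curve the study refers to is essentially a logistic adoption curve: uptake starts slowly, accelerates, and then levels off once most potential users have adopted. Below is a minimal sketch with assumed, purely illustrative parameters (a ten-year midpoint, where marginal adoption peaks); none of the numbers are taken from the report itself.

# Illustrative sketch only: a logistic (S-curve) adoption path with assumed parameters.
import math

def adoption_share(year: float, midpoint: float = 10.0, steepness: float = 0.6) -> float:
    """Share of eventual adopters reached after `year` years."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in range(0, 21, 5):
    print(f"Year {year:2d}: {adoption_share(year):.0%} of eventual adoption")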

Some, like Google CEO Sundar Pichai, say that AI will change the world more quickly than the invention of electricity. Do you agree?

Absolutely. While Thomas Edison invented the light bulb nearly 150 years ago, even today hundreds of millions of people lack access to electricity. AI is expected to achieve widespread adoption in just 20 to 30 years. The world is far more interconnected now than it was in 1879, enabling faster dissemination of new technologies.

How exactly can AI strengthen European productivity and competitiveness?

[Infographic: 1.4 per cent annual productivity potential from generative AI in the peak year; over 60 per cent of jobs expected to be augmented by generative AI]

AI has the potential to significantly enhance productivity by automating routine, repetitive and time-consuming tasks, allowing them to be completed faster and more accurately. Additionally, AI can help people speed up work processes by assisting with complex cognitive tasks like writing, analysing large datasets and synthesising information. This increased productivity is essential for the European economy to remain competitive and could lead to positive impacts on the economy and science alike.

Speaking of science, how can AI help accelerate research breakthroughs in Europe?

AI can help experts analyse and manage vast datasets required for scientific innovation, speeding up processes. It can also automate and simulate experiments, reducing the costs and time for scientific discoveries. For example, in drug discovery, AI can predict molecular interactions and potential side effects, significantly supporting scientists and shortening the research phase. AI can also facilitate collaboration across research institutions by enabling more efficient data sharing and analysis.

Which sectors of the economy may benefit the most from AI, and which might see less of an impact?

Sectors with high potential for automating repetitive cognitive tasks, such as information technology, finance and public administration, will benefit the most. For example, AI can streamline financial transactions and compliance checks, significantly reducing operational costs. Conversely, sectors relying heavily on human-to-human interaction or manual tasks, like personal care and construction, will see less of an impact from AI. In construction, while AI can assist with project planning and safety monitoring, the manual labour aspect of building structures remains less likely to be automated.

Historically, the service sector has struggled to boost productivity. In your report, you mention that service companies could benefit disproportionately from AI. Can you give some concrete examples?

Indeed, as economies shift towards services, productivity has stagnated. In the service sector, AI can greatly enhance productivity through improved customer service, for example by means of responsive chatbots, and by streamlining processes like legal document or contract processing. For instance, law firms can use AI to help lawyers review contracts and legal documents quickly, reducing the time they spend on these tasks. Additionally, customer service departments can deploy AI chatbots to handle routine inquiries, freeing up human time to deal with more complex issues.

Does this also apply to public services in Europe?

Yes, AI can streamline citizen services through automated systems, reducing waiting times. For example, AI chatbots in Portugal and Denmark provide 24/7 services, personalising responses based on extensive public databases. The European Commission already uses AI for machine translation in its processes, improving communication and efficiency. AI can also assist experts in managing public health data, predicting disease outbreaks and optimising resource allocation in healthcare.

Small and medium-sized enterprises (SMEs) are the backbone of many European economies. How can they leverage AI to increase productivity despite their limited capacities?

For SMEs, the implementation of AI can be challenging due to limited IT resources and digital skills. However, generative AI is much more intuitive to use due to natural language interaction and less expensive than previous technological innovations. With lower or no data requirements, SMEs can more easily adopt AI.

Find the whole report for Ireland here.

What do you think is needed for Europe to fully capture the benefits of AI and avoid falling behind other global economies? What are the main challenges that need to be overcome?

Europe is trailing the initial wave of AI innovation, so catching up now involves maximising the use of existing technologies.


Upskilling for the future of work

Google AI Essentials is a self-paced course designed to help people across roles and industries get essential AI skills to boost their productivity, with zero experience required. In under 10 hours, AI experts will teach you about AI — and show you how to actually use it in the real world. After you complete the course, you’ll earn a certificate from Google to share with your network and potential employers. AI Essentials is part of the Google Career Certificates programme.

Learn more at: grow.google/intl/europe/google-career-certificates/

The EU must establish conducive AI regulations, invest heavily in AI infrastructure and computing power, and promote widespread adoption and accessibility. This requires reskilling and upskilling the workforce to harness AI’s benefits fully. The impact on jobs will be comparable to that of other technological advances, and there won’t be as many job transformations as some might fear.

Can you elaborate on this? Right now, it seems that many people are optimistic about AI’s potential to improve efficiency in their work, while others fear job losses. What do you say to those who are concerned about their jobs?

The labour market disruption will most likely be no greater than with past technological shifts. Few jobs are at risk because current AI tools cannot perform all tasks within a job. Jobs requiring human interaction, like baristas or surgeons, won’t be as impacted. Only about 7 per cent of workers are estimated to be highly exposed to AI, meaning that more than half of their job-related tasks can potentially be done by generative AI some time in the future. An AI-powered economy is expected to create new job opportunities that we don’t know of today, such as prompt engineering. Most jobs, over 60 per cent, will be augmented by AI rather than replaced by it, allowing humans to focus on critical tasks while AI handles repetitive tasks.

What are the skills that workers need to develop to benefit from AI?

Workers need a range of skills, from basic digital literacy and AI knowledge to advanced cognitive abilities like critical thinking and analytical skills. Generative AI will also create new job types that we cannot yet envision. For instance, understanding how to interpret and leverage AI-generated insights will be crucial. Workers in all sectors will need to be comfortable with continuous learning and adaptation as AI technologies evolve.

One of the challenges the EU faces is related to demographic shifts compared to other economic superpowers. What makes you optimistic that AI can help address this challenge?

By freeing up time, AI enables the European workforce to address labour shortages in critical areas such as healthcare, engineering and education. For instance, AI can significantly increase efficiency among healthcare workers, potentially alleviating up to 40 per cent of the current shortage. In education, AI can provide personalised learning experiences and assist teachers with administrative tasks, allowing them to focus more on teaching and student engagement. Additionally, AI-driven solutions in engineering can optimise project management and resource allocation, addressing skills shortages in this sector.

Despite AI’s vast potential for Europe, your report highlights a widening gap between the EU and the US in AI research and development. What implications does this have for the European economy and what steps can be taken to bridge this gap in the future?

If the current trend continues, the US and possibly China will lead in AI development, while the EU will remain an adopter. This could mean that the US will achieve increasingly bigger economic gains from AI than the EU. To bridge this gap, the EU should foster local AI R&D and build on existing models to create new applications or tools. Collaborative efforts leveraging Europe’s vast resources can be more fruitful than pursuing individual R&D breakthroughs. For instance, the EU could establish joint AI research centres and incentivise public-private partnerships to drive innovation.

Speaking of innovation, where in Europe do you see the brightest future for AI adoption?

Digitally developed markets like Estonia, the Netherlands and Nordic countries are expected to lead in AI adoption due to their digital strength. For example, Estonia’s e-Residency programme and digital government services demonstrate how innovative policies can drive AI adoption. However, there are also promising developments in France, Spain and Portugal. France’s investment in AI startups shows how targeted funding can support AI ecosystem growth. These examples demonstrate that the potential of AI has been recognised and is being utilised in many parts of the EU. Now it’s important to make sure that Europe can collectively keep pace with other global regions.

REDUCING EMISSIONS

Contrail prevention is an efficient lever to mitigate aviation’s climate impact. Google’s AI tools can support air traffic control in identifying optimal flight trajectories to achieve this goal.

Remember the first time you gazed up at the sky as you approached an airport? The white trails you sometimes see behind aeroplanes may seem beautiful, but they can also have a surprisingly large environmental effect. Research suggests that these condensation trails, or »contrails« for short, account for roughly one-third of aviation’s global warming impact, and controlling them is an important part of the battle against climate change.

Google Research and its Climate AI team have developed an algorithm to identify areas in the sky where contrail formation is likely to occur. By integrating this data into aviation planning systems, flight paths can be optimised and contrail-related climate effects substantially reduced. »We are focused on generating the best forecasts possible to have the largest climate impact. We’ll ultimately measure the success of this work by the amount of warming we can mitigate,« says John Platt, Fellow at Google Research.

Contrails form when aeroplanes fly through humid layers at high altitude and ice crystals form around soot particles from the engine exhaust. They can persist as cirrus clouds for minutes or even hours, and these human-made clouds can trap heat within the Earth’s atmosphere. Over the course of 2023, Google Research’s Climate AI team worked alongside Bill Gates’ sustainable energy initiative Breakthrough Energy to analyse a vast dataset of satellite imagery, weather and flight path data to suggest routes and altitudes that might reduce contrails. Pilots at American Airlines flew 70 test flights over six months to assess the accuracy of the predictions. The result was a 54 per cent reduction in contrail generation for 2 per cent additional fuel use, suggesting the potential for a cost-effective and scalable climate solution.
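To make the screening idea concrete, here is a deliberately simplified sketch: flag forecast grid cells that are cold enough and sufficiently ice-supersaturated for persistent contrails to form. The thresholds, array shapes and field names are assumptions for illustration only; the Google Research system learns its predictions from satellite imagery, weather and flight data and is far more sophisticated than this heuristic.

```python
import numpy as np

def flag_contrail_prone_cells(temperature_k, rel_humidity_ice,
                              max_temp_k=233.0, min_rhi=1.0):
    """Flag forecast grid cells where persistent contrails are likely.

    Crude heuristic: persistent contrails need very cold air (roughly
    -40 °C or below) that is supersaturated with respect to ice. The
    thresholds are illustrative assumptions, not values from Google's
    model.
    """
    cold_enough = temperature_k <= max_temp_k
    ice_supersaturated = rel_humidity_ice >= min_rhi
    return cold_enough & ice_supersaturated

# Toy forecast grid: flight level x latitude x longitude
rng = np.random.default_rng(0)
temperature_k = rng.uniform(215, 245, size=(5, 10, 10))
rel_humidity_ice = rng.uniform(0.6, 1.2, size=(5, 10, 10))

prone = flag_contrail_prone_cells(temperature_k, rel_humidity_ice)
print(f"{prone.mean():.0%} of cells flagged as contrail-prone")
```

A flight planner could then treat the flagged cells as regions to route around, weighing the warming avoided against extra fuel burn – the trade-off explored in the American Airlines trial.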

»Contrail prevention is so important as it is one of the few tools that the aviation industry has to mitigate its impact on the climate in the short to medium term,« says Ilona Sitova, Senior Aviation Sustainability Expert at the EUROCONTROL Maastricht Upper Area Control Centre (MUAC). MUAC manages the upper airspace over Belgium, the Netherlands, Luxembourg and north-west Germany – one of the busiest aviation corridors in the world, with more than 5,400 flights on a busy summer day.

MUAC is a pioneer in supporting research into contrail prevention, and it was the first to conduct an operational contrail prevention trial in 2021. Google has been working with MUAC since 2022 by sharing outputs from its AI programme, including specific information on predicted contrail layers. These predictions enable air traffic controllers to issue tactical clearances that keep aircraft outside contrail-prone areas and thereby prevent contrails from forming. The trade-offs can be complex, as potential climate mitigation has to be balanced against flight times and other operational considerations. »For example, if in an extreme case, the prediction says not to use certain flight levels, it will inevitably reduce the airspace available for flights, so very likely induce delays. Our goal is to maintain high capacity to safely service the airspace, while mitigating contrails,« Sitova says.
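As a rough illustration of the trade-off Sitova describes, the sketch below picks a flight level for a route segment by balancing a penalty for flying through predicted contrail-prone levels against a penalty for deviating from the requested level, a stand-in for fuel, delay and capacity costs. The cost weights and level sets are assumptions; real MUAC clearances involve safety and workload considerations this toy model ignores.

```python
def suggest_flight_level(requested_fl, prone_levels, available_levels,
                         deviation_cost=1.0, contrail_cost=10.0):
    """Choose a flight level that balances contrail avoidance against
    deviation from the requested level (illustrative cost model only)."""
    def cost(fl):
        deviation = deviation_cost * abs(fl - requested_fl) / 10
        penalty = contrail_cost if fl in prone_levels else 0.0
        return deviation + penalty
    return min(available_levels, key=cost)

# Example: FL360 and FL380 are forecast to be contrail-prone on this segment.
best = suggest_flight_level(
    requested_fl=360,
    prone_levels={360, 380},
    available_levels=[320, 340, 360, 380, 400],
)
print(best)  # 340 – the closest level that avoids the contrail-prone layers
```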

AI will improve processes for contrail mitigation

Logistics and, of course, safety are also central considerations. MUAC plans its capacity and rosters air traffic controllers up to a year in advance, coordinating with military units to determine which altitudes are available to civil airspace users. But contrail predictions this far in advance are less accurate. On the day of operations, the trajectory for individual aeroplanes can be altered in real time according to Google’s AI-generated suggestions, but rerouting flights after takeoff increases complexity.

»Issuing route deviations during the flight is more precise, however, it is not the preferred way of working because it can impact network planning and the workload of the air traffic controllers,« Sitova explains. Nevertheless, she is confident that AI can play a crucial role in solving the puzzle of integrating contrail prevention into air traffic control: »AI will improve many local processes for contrail mitigation for stakeholders such as airlines, the network manager or air traffic control, and will deliver benefits.«

MUAC remains committed to harnessing the potential of new technologies as part of its contrail prevention efforts. »We recognise that high automation support for operational contrail mitigation processes is required, and that is why the ongoing trials with Google, the most recent of which began in August 2024, are such an important step,« Sitova says.

»Improvements in capabilities must go hand in hand with improvements in safety«

Anca Dragan, Director of Safety and Alignment at Google DeepMind, is convinced that AI offers a vast array of benefits to mankind – and that it comes with a spectrum of risks. She argues that both near- and long-term risks of AI must be addressed today.

Professor Dragan, you’ve been working with artificial intelligence for many years: you started coding as a fourth-grader growing up in Romania and moved to Germany as a young adult to study computer science. Today, you’re an associate professor at UC Berkeley, currently on leave to head AI Safety and Alignment at Google DeepMind. Given your extensive experience, what’s your perspective on AI’s development so far?

For decades, researchers have developed AI systems that are capable of achieving previously impossible feats – from winning against chess champion Garry Kasparov back in 1997 to helping humans better understand Mars. Now »impossibilities« are happening everywhere, every day. AI is driving people around major American cities (I myself worked with the driverless car company Waymo for six years and it was just amazing to see the progress we made there) and helping doctors detect cancers more often. But AI development won’t stop at solving specific challenges. Even more advanced and »generally intelligent« AI systems are on the horizon.

How do these systems differ from what we’ve experienced so far?

We expect that Artificial General Intelligence (AGI) systems, as they are usually called these days, will be able to achieve almost human-level performance at cognitive tasks across the board. Developing AGI is a long-held goal of Google DeepMind and other AI researchers around the world, and for good reason. In science alone, if artificial intelligence can already solve specific, complex scientific challenges like mapping the entire universe of proteins (page 10), the benefits of creating a cross-disciplinary AI research assistant and tool could be enormous. And the hypothesis many of us believe is that it takes a general-level, across-the-board intelligence to unlock certain breakthroughs – that narrow AI can only go so far. Beyond science, the upsides for education, healthcare and, more generally, for human well-being can be huge – such as identifying cures for serious diseases or helping us combat climate change. We aren’t there yet, but with the significant improvements across increasingly intelligent, general-purpose AI happening today, it’s possible we could achieve AGI and see its benefits in the coming decade.

What does this development mean with regard to potential risks of AI?

The promise of increasingly advanced AI systems comes with a spectrum of risks, from present-day harms caused by the current systems to more extreme and potentially catastrophic risks from AGI systems.

What types of risk do you have in mind?

Today’s systems might serve as a new surface for cyberattacks; they might propagate stereotypes or lack the nuance required to balance competing concerns on contentious subjects; they might contribute to affective polarisation, give bad medical advice or amplify injustices. Some future systems might raise bigger risks – e.g. very capable cyberweapons, aiding in bioterrorism or being used as powerful tools of persuasion. Even further down the road, some of the most powerful systems might become capable of self-awareness, self-proliferation, sophisticated reasoning and deception. As a consequence, these improvements in capabilities must go hand in hand with improvements in AI safety.

»Throughout our safety work, we’re often finding the solutions to near- and long-term risks are closely linked and mutually beneficial.«

Should we address these challenges in order of appearance, focusing on present-day AI risks first and then moving on to future risks? How should they be addressed?

The reality is all of these risks matter, and it’s important to work on addressing all of them today – this is a no-brainer for present-day harms, but, importantly, we also must start addressing those that we foresee further down the road. Going back in time, let’s remember that air pollution was understood relatively quickly as a long-term severe risk resulting from widespread adoption of the internal combustion engine, but it still took nearly sixty years from when the first Ford Model T rolled off the assembly line until the U.S. implemented national motor vehicle air pollution standards. Historically, humanity has been slow to address the risks of pollution because of how deeply intertwined some causes are with the global economy.

Can you elaborate on that?

This pollution analogy teaches us something about AI: if human advancement tells us anything, it should be that the sooner we discover and mitigate possible risks posed by new technology like AI – both present-day and longer-term – the better. Yet today, there is a common narrative that researchers focusing on present-day harms from AI and those focusing on its longer-term risks are in conflict, and that the long-term risks themselves are distractions from addressing today’s harms. But it doesn’t have to be that way. I find this framing really counter-productive and think we should stop blaming each other and get on with the very important work ahead on the full spectrum of risks.

Your job at Google DeepMind focuses on AI safety. How are you dealing with current and future risk?

Industry labs like Google DeepMind are working on forecasting, evaluating and mitigating the whole spectrum of AI risks. In fact, many of us, from my ethics research colleagues to AGI alignment researchers like myself, believe that prioritising one type of risk over the other is a false choice. Throughout our safety work, we’re often finding the solutions to near- and long-term risks are closely linked and mutually beneficial.

Can you please provide an example?

Sure thing, I’ll even provide two. It’s widely understood that reducing generative AI models’ factuality errors – or »hallucinations« in industry parlance – is important for reducing the spread of inaccurate information. It’s less well-known, however, that many of the strategies we use to mitigate these hallucinations are also instructive for how people might be able to provide oversight to AGI models in the future. Similarly, work that accounts for value pluralism in alignment can help mitigate against amplifying polarisation today, while also potentially enabling AGI systems to understand and defend against misuse, balancing user and societal goals. This is why our sociotechnical and ethics researchers work side by side with our AI safety engineers and scientists.
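One widely used family of hallucination-mitigation strategies checks a model’s statements against trusted source text before they reach the user. The sketch below is a deliberately crude version of that idea using lexical overlap; it illustrates the general approach, not Google DeepMind’s methods, and real systems rely on much stronger checks such as entailment models and citation verification.

```python
def flag_unsupported_claims(claims, source_passages, min_overlap=0.5):
    """Flag claims that share too few words with the retrieved sources.

    Purely illustrative: lexical overlap is a weak proxy for factual
    support, and the threshold is an assumption for the sketch.
    """
    source_words = set(" ".join(source_passages).lower().split())
    flagged = []
    for claim in claims:
        words = set(claim.lower().split())
        overlap = len(words & source_words) / max(len(words), 1)
        if overlap < min_overlap:
            flagged.append(claim)
    return flagged

sources = ["contrails account for roughly one third of aviation's warming impact"]
claims = [
    "contrails account for roughly one third of aviation's warming impact",
    "contrails have no measurable effect on the climate",
]
print(flag_unsupported_claims(claims, sources))  # flags the second claim
```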

Speaking of cooperation, how can or should policymakers, scientists and other stakeholders be involved in making AI safe?

Much as we have to work together on the full spectrum of safety risks within each AI lab, it’s also critical that AI companies, academic researchers, civil society and governments around the world reject the false choice between present-day harms and catastrophic risks, and consider the full array of possibilities that could arise as AI becomes increasingly sophisticated, autonomous and ubiquitous. Establishing common practices for AI safety will help mitigate both present-day harms and longer-term risks. By coming together, the international community of governments, civil society groups and industry can develop stronger evaluations for measuring model capabilities and alignment, and create common practices around how to securely conduct these tests, as well as what mitigations need to be in place at increasing levels of risk.

Are we moving fast enough to handle these risks?

To be clear, today’s systems do not pose catastrophic risks – the latest chatbot isn’t going to suddenly be an existential threat. But we need to improve our ability to measure dangerous capabilities and understand how these capabilities change as AI models grow in size and complexity – we need the scaling laws of dangerous capabilities – as well as be better prepared to mitigate the risk that a system would engage in such dangerous behaviours. We also still need advances in amplified oversight from humans and in how to use it to prevent optimised misalignment from occurring. And we need to better understand how humans are likely to use these models, through which applications, and what the broader implications of their widespread adoption are. It could take years of research to understand the second- or third-order effects of AI systems, as well as to make the necessary advancements in alignment research, and we should all be making urgent use of that valuable time now – something we’re already doing at Google DeepMind.

Anca Dragan is an associate professor at UC Berkeley. Currently, she is on leave to head AI Safety and Alignment at Google DeepMind, working mainly in San Francisco.

IT’S ABOUT AUTONOMY

Around 1.3 billion people worldwide have severe disabilities. AI-based tools — such as Google’s Live Transcribe and Live Caption — can help them live more independently.

Hearing aids alone don’t work for Matthew Johnston, a Londoner in his 50s who was born deaf. Instead, he mostly relies on lip-reading when talking to people. When his youngest son Harry moved to Hong Kong in 2020, they tried to keep in touch via video calls. Unfortunately, Johnston found that lip-reading was more difficult on screens than in face-to-face chats. When he found out that the latest Google Pixel phone had a feature called Live Caption that transcribed conversations in real time, he bought the device. The first person he called was his son. They talked for half an hour, then Harry interrupted him: »Dad, it’s the first time we have ever phoned each other.« Johnston was struck: »I’m 55 years old, and for the first time in my life, I was able to call my son.«

Live Caption is one of many digital tools that Google has designed with and for the disabled community. According to the World Health Organisation, 16 per cent of the world’s population, or 1.3 billion people, have significant disabilities. Making the world easier to navigate for everyone, and especially for people with disabilities, fits well with Google’s mission to make the world’s information universally accessible. »It’s about autonomy; it’s about giving people the opportunity to live the life they want, barrier-free, in a world that is often not designed with them in mind,« says Christopher Patnoe, Head of EMEA Accessibility and Disability Inclusion at Google.

Artificial intelligence offers new ways of building products more inclusively. For example, people who are deaf or hard of hearing can benefit from the ability of large language models to analyse speech. Using automatic speech recognition and sound detection technology, Google engineers built the Live Transcribe app in 2018 and Sound Notifications a year later. While Live Transcribe delivers real-time transcriptions of conversations in over 120 languages, Sound Notifications identifies sounds in the world and at home, sending notifications to people who are deaf or hard of hearing if a fire alarm goes off or a doorbell rings.
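To illustrate the mechanism behind a feature like Sound Notifications, the sketch below maps the output of an ambient sound classifier to a user-facing alert. The classifier itself is assumed to exist elsewhere; the labels, threshold and message format are illustrative assumptions, not Google’s implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Sounds a deaf or hard-of-hearing user might choose to be alerted to.
ALERT_SOUNDS = {"fire_alarm", "doorbell", "baby_crying", "dog_barking"}

@dataclass
class SoundEvent:
    label: str          # e.g. "doorbell", as produced by an audio classifier
    confidence: float   # classifier score between 0 and 1

def notification_for(event: SoundEvent, min_confidence: float = 0.7) -> Optional[str]:
    """Return an alert message for relevant, high-confidence sound events."""
    if event.label in ALERT_SOUNDS and event.confidence >= min_confidence:
        return f"{event.label.replace('_', ' ').capitalize()} detected nearby"
    return None

print(notification_for(SoundEvent("doorbell", 0.92)))    # Doorbell detected nearby
print(notification_for(SoundEvent("dishwasher", 0.95)))  # None
```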

Lidia Best, President of the European Federation of Hard of Hearing People, has experienced the power of AI herself and through countless examples in her community: »One social media post left a profound impression on me. A colleague shared an image of her smartphone with the Live Transcribe app open, positioned next to a TV monitor in Greece. For the first time, she could follow Greek news broadcasts, courtesy of the Live Transcribe app!«

One of the people who developed Live Transcribe was Dimitri Kanevsky, a Google research scientist working on speech recognition technology who lost his hearing completely at age one. When he joined Google in 2014, he had a dedicated device to display speech-to-text captions. While the solution was useful, it required multiple parts and advance preparation. With Live Transcribe, Kanevsky was able to use speech-to-text transcriptions more spontaneously than before – for example when ordering a coffee. »I’ve been dreaming about this my whole life,« Kanevsky says. Live Transcribe has since become one of the most popular assistive technologies on Android.

Dimitri Kanevsky was one of the creators of Live Transcribe.
Matthew Johnston was 55 when he was able to have his first phone call – thanks to AI.

