
Open AI research is the only way forward

Recently, the US non-profit Future of Life Institute called on all ‘AI labs’ around the world to take a six-month break in the development of advanced AI technology. The purpose of the appeal, according to the initiators, supported by more than 50,000 researchers, politicians and business leaders, was to allow some breathing space to consider how technological developments that could fundamentally change our society should be regulated. How good an idea is it to pause the development of AI, and what should this break involve?

We asked IVA Fellow Sara Mazur, Chair of the Swedish AI initiative WASP.


It is great to see the subject being properly debated and that sound technologies that can benefit humanity are continuing to be developed as a result.

Sara Mazur

What was your first reaction when you read about this call?

– It is always good to have academic debates, and of course we welcome reflections on the development of AI technology and its use. In general, it is difficult to slow down technological advances, and in this case the success of this approach is doubtful. I don’t even know if it would be possible.

You chair the Wallenberg AI, Autonomous Systems and Software Programme (WASP), Sweden’s largest research initiative to date, covering AI, autonomous systems and software. What does the programme do?

– WASP is a huge research programme with a budget of more than SEK 6 billion, where we conduct basic research into new technologies in AI, autonomous systems and software. Long-term basic research in WASP’s areas is vital for the development of business and society, and is an enabling factor in the work to achieve a sustainable future.

– WASP is now strengthening Sweden’s competitiveness in areas where global advances are being made at tremendous speed. We have attracted a large number of leading researchers to Sweden, who have had to build up new research teams, and we aim to produce more than 600 new doctoral graduates, at least 150 of whom will have been employed in industry during their doctoral studies. We have also established research arenas in partnership with Swedish industry, where knowledge, infrastructure, systems and technology are shared in order to jointly research areas of interest in fields such as AI. In addition, we are now investing SEK 200 million in cyber security research within WASP, with AI playing a major role.

How would a pause in all advanced AI development, as proposed in the call, affect what you do, and even in the long run what is done in the AI field across Sweden?

– The call actually indicates a need to expand research to increase our understanding of AI and what its possibilities and limitations are. In addition to technical research within WASP, there is also WASP-HS, which aims to develop knowledge of the ethical, economic, labour market, social, cultural and legal implications of new technologies. In light of the rapid technological advances, this research should really be accelerated and expanded.

Among other things, the call proposes pausing all training of advanced AI systems, such as future generations of ChatGPT, that is, language models that learn to interpret and produce advanced text. One such project is under way in Sweden, GPT SW3, in which WASP is working with AI Sweden and RISE to develop an AI-driven language model for the Nordic languages. Are you concerned that a pause, if it happened, would affect that work?

– No, the language model training that we are doing is part of our Research Arena for Media & Language and is focused on basic research. So it is not about commercialising AI products. It is about academic research. Unlike most of the generative models now available worldwide, our work is open and transparent, both the research outcomes and the resulting language model. This will form the basis for many of the studies and research projects that are needed to continue to develop AI safely and responsibly.

The apparent aim of the break is to allow time to develop rules and frameworks that reflect the rapid pace of current AI development. Some people think it makes sense that, based on some kind of precautionary principle, this would also include research. Do you agree with them?

– Of course, it is important to discuss technologies and how to regulate their use. Knowledge is required to establish the right laws and draft the right regulations. We need to have sound knowledge of how new technologies affect our society. And this is achieved by conducting open research.

– But there must be a balance between these regulatory and precautionary considerations and the opportunities created by open and free, ground-breaking research. It would be a shame to prevent harmless research that could produce solutions to the world’s major challenges and save people’s lives.

The appeal also says that decisions about how we deal with a technology such as AI, which could affect our entire future social development, are a democratic issue, something to be decided not by CEOs of major tech companies, but by elected politicians. Do our politicians really have the expertise to determine how these developments should best be regulated?

– Politicians always retain the right to make decisions, and the democratic responsibility that comes with it, but it is important to have researchers involved in conducting research and generating knowledge in this field, to ensure that expertise is available. It can be very difficult for a layperson to form an opinion about how to regulate this area well. This is why experts have to be involved in the process. And they are at the moment.

Do you think a global six-month pause in all advanced AI development will actually happen?

– No, I have a very hard time believing that, but it is great to see the subject being properly debated and that sound technologies that can benefit humanity are continuing to be developed as a result.


What counts as AI?

Artificial Intelligence, or AI for short, is a generic term for computer programs designed to mimic human intelligence and cognitive abilities in various ways, such as learning from past experiences, understanding everyday language, and solving problems.

What is AI used for today?

AI development has been ongoing since at least the 1950s, with early applications including games and expert systems. Since then, huge advances have been made and AI is now an integral part of everything from internet search engines, self-driving vehicles and financial systems management, to tools for making medical diagnoses, estimating risks, identifying threats and predicting the weather.

What different types of AI are there?

The term AI covers many different kinds of systems with different abilities and degrees of competence. Here are a few of them:

Rule-based AI is the oldest form of AI. It follows predefined rules for carrying out tasks and solving simple problems.
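The idea of predefined rules can be sketched in a few lines. The function and its thresholds below are purely illustrative, not taken from any real system: behaviour is fixed by hand-written if/then rules rather than learned from data.

```python
def triage(temperature_c, has_rash):
    """Toy rule-based 'expert system' for a hypothetical triage task.

    Every outcome is determined by hand-written rules; nothing is
    learned from data. (Illustrative example only.)
    """
    if temperature_c >= 40.0:
        return "urgent"
    if temperature_c >= 38.0 and has_rash:
        return "see a doctor"
    if temperature_c >= 38.0:
        return "rest and monitor"
    return "no action"

print(triage(40.5, False))  # urgent
print(triage(38.2, True))   # see a doctor
```

Such systems are transparent and predictable, but they can only handle situations their authors anticipated when writing the rules.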

Machine learning is a branch of AI that is widely used today, for example in industrial processes, search engines and decision support systems. The technology is based on algorithms that, to some extent, improve their own performance by processing data.
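The contrast with rule-based AI can be shown with a minimal sketch, assuming a made-up data set: instead of following hand-written rules, the program adjusts a parameter as it processes examples. This is a one-parameter least-squares fit by gradient descent, far simpler than any production system.

```python
# Toy machine learning: fit y ≈ w * x by repeatedly processing data
# and nudging the parameter w to reduce the error. (Illustrative data.)
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (x, y) pairs, roughly y = 2x

w = 0.0  # the model's single learned parameter
for _ in range(200):           # repeated passes over the data
    for x, y in data:
        error = w * x - y      # how wrong the current model is
        w -= 0.01 * error * x  # gradient step on the squared error

print(round(w, 1))  # close to 2.0: learned from the data, not programmed
```

The program was never told that the answer is "multiply by two"; it recovered that relationship from the examples, which is the essence of learning from data.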

Generative AI is a type of artificial intelligence that, by studying and analysing large quantities of data, has in recent years become able to produce texts, images and 3D animations that are beginning to resemble what a human can create. ChatGPT is an example of generative AI.
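The underlying idea, learning statistics from example data and then sampling new sequences from them, can be sketched with a character-level bigram model. This is a deliberately tiny stand-in: systems like ChatGPT use vastly larger models and data, but the learn-then-generate loop is the same in spirit.

```python
import random

# Learn which character tends to follow which in a (tiny) corpus,
# then generate new text by sampling from those statistics.
corpus = "banana bandana"

follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)           # fixed seed so the sketch is reproducible
out = ["b"]
for _ in range(9):
    nxt = random.choice(follows.get(out[-1], list(corpus)))
    out.append(nxt)

print("".join(out))      # new text in the style of the corpus
```

Even at this scale the output mimics the statistics of its training data rather than copying it verbatim, which is the core property that scales up into modern generative models.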

General AI is a form of AI that does not yet exist, but which researchers and engineers are working to develop. It is hoped that in the future we will be able to develop software that can perform tasks and solve problems in much the same way as a person can. For example, it could learn things it wasn’t originally created to do.

The singularity is the scenario that typically occurs in dystopian tales of the future: a theoretical state in which AI software begins to develop and learn new things at such a speed that it becomes more intelligent than us humans and impossible to control.

What is WASP?

The Wallenberg AI, Autonomous Systems and Software Programme (WASP) is a 12-year research programme aimed at positioning Sweden as an internationally recognised and leading nation in the fields of artificial intelligence, autonomous systems and software. The initiative’s partners include Chalmers University of Technology, KTH Royal Institute of Technology, Linköping University, Lund University, Umeå University and a large number of Swedish industrial companies.

The call published on 22 March
Pause Giant AI Experiments: An Open Letter

About Sara Mazur

Sara Mazur chairs the Wallenberg AI, Autonomous Systems and Software Programme (WASP), and is deputy Executive Director of the Knut and Alice Wallenberg Foundation. Sara previously worked at Ericsson, including as Head of Research, and her academic background includes a degree in electrical engineering, as well as a PhD and associate professorship in plasma physics at KTH Royal Institute of Technology.

Sara Mazur has been a Fellow of the Royal Swedish Academy of Engineering Sciences (IVA) since 2007.