When algorithms decide - opportunities and risks
Artificial intelligence (AI) is becoming ever more powerful and is being used for ever more complex tasks. This raises ethical questions, for example when AI is used to make decisions about or pass judgment on people. A new study by TA-Swiss has examined the opportunities and risks of AI for society.
Artificial intelligence is a very powerful tool for solving complex problems and dealing with huge amounts of unsorted data. Its use makes it possible to translate languages far better than before or to defeat human opponents in strategy games of all kinds. AI is constantly being improved and used for more and more activities that were previously reserved for humans, such as identifying tax fraud or diagnosing diseases.
Trust is good, control is better
But rapidly growing technical capabilities require a watchful eye for the risks that can accompany them. Could AI cost masses of jobs? How will consumer behavior change if more and more people follow the purchase recommendations of an intelligent search engine? What happens to the media if AI contributes to the fabrication of "fake news" or does not dissolve ideological filter bubbles, but actually expands and strengthens them? What might happen if the state uses AI for proactive policing, to issue regulations, or to reduce the workload of the courts? How should research and education respond to the opportunities and risks of AI, and which competencies are particularly relevant for today's researchers and future decision-makers to make the best use of AI for society?
These and similar questions were examined in the TA-Swiss study, which was carried out by an interdisciplinary project team led by Markus Christen of the Digital Society Initiative at the University of Zurich, together with researchers from Empa and the Institute of Technology Assessment of the Austrian Academy of Sciences. The scientists developed their findings using methods such as targeted literature reviews, workshops and surveys of more than 300 international experts.
Instructions for policy makers
This work resulted in nine recommendations for different areas such as work, education and research, consumption, media and administration. In the area of education, for example, it is important not only to train experts to develop and implement AI systems, but also to foster the ability to judge the legal, ethical and social implications of AI. In areas where the risks are unclear, research to identify them should be intensified, the study authors urge. To this end, dedicated funding from universities or third-party sources would be desirable.
In the TA-Swiss study, the experts also comment on the lack of transparency of AI and its possible discriminatory properties. Possible control mechanisms for these systems are discussed, as are legal aspects arising from the use of AI, such as liability or data protection.
Press release Empa
See also the technical article "AI also changes risk management"