AI is also changing risk management

The development of artificial intelligence (AI) is progressing rapidly. Numerous breakthroughs have been achieved in recent years, particularly in machine vision, natural language processing and strategy games. Ultimately, AI is a general-purpose technology with the potential to change virtually all areas of life. Its use also offers a wide range of opportunities in risk management.


Artificial intelligence is a field of research that emerged in the 1950s and deals with the development of intelligent machines. Intelligence is usually understood as the ability to think or act rationally by human standards. The "holy grail" of many AI researchers is the creation of a "strong AI" that is cognitively equal or even superior to humans. Today, however, only "weak AI" exists: static models that have been trained for a narrowly defined application area and are of little use outside it.

The most important sub-field of AI today is machine learning. In simplified terms, this involves algorithms that identify latent relationships between features and outcomes in large data sets. Three central approaches are supervised learning, which links input variables to output variables on the basis of manually labeled input-output pairs; unsupervised learning, which recognizes clusters or other structures in data sets without any prior labels; and reinforcement learning, which maximizes a reward function corresponding to the desired behavior.
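The following sketch illustrates the three learning paradigms in a few lines of Python, using scikit-learn and synthetic toy data; the data, model choices and reward values are purely illustrative assumptions.

```python
# A minimal sketch of the three learning paradigms on synthetic data.
import numpy as np
from sklearn.datasets import make_classification, make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Supervised learning: learn a mapping from labeled input-output pairs.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: find clusters/structure without any labels.
X_unlabeled, _ = make_blobs(n_samples=500, centers=3, random_state=0)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_unlabeled)

# Reinforcement learning: maximize a reward signal by trial and error.
# A tiny epsilon-greedy bandit chooses among three actions whose hidden
# expected rewards are 0.2, 0.5 and 0.8 (illustrative values).
true_rewards = np.array([0.2, 0.5, 0.8])
estimates, counts = np.zeros(3), np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.integers(3) if rng.random() < 0.1 else int(np.argmax(estimates))
    r = rng.normal(true_rewards[a], 0.1)            # reward from the environment
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]  # incremental mean update
print("learned action values:", estimates.round(2))
```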

AI will change numerous sectors and areas

The current AI boom, which began just over five years ago, is primarily due to three developments: first, cheaper computing power; second, larger data sets; and third, deep learning algorithms, which use many intermediate layers between input data and results. This has led to significant breakthroughs in machine vision (e.g., superhuman performance in object recognition and skin cancer classification), natural language processing (e.g., human parity in speech recognition, English-Chinese translation, and the GLUE text comprehension test), and strategy games (e.g., superhuman performance in Go, poker, and Dota 2), among others.

As a general-purpose technology, AI is expected to strongly change, if not revolutionize, numerous economic sectors and policy areas in the coming years. This is because AI enables a wide range of complementary innovations, such as autonomous vehicles, unmanned aerial vehicles and industrial robots, and thus has considerable application potential in all major industries. In the following, we outline some key opportunities and challenges that the growing spread of AI applications creates for risk management.

What opportunities does AI offer risk management?

In the coming years, AI applications can be expected to be used in all phases of risk management, from risk prevention to crisis management. AI can already make an important contribution to hazard prevention and preparedness. In critical infrastructure protection, for instance, machine learning can be used for predictive maintenance, inspection and the visual detection of infrastructure damage. Machine learning has been used, for example, to predict which water mains in Sydney are at high risk of failure, or where in U.S. cities building inspections are most likely to be worthwhile. Likewise, various studies have used machine learning to detect and quantify corrosion or small cracks in concrete and steel structures. Such methods could soon be used in the inspection of nuclear power plants, roads, bridges and buildings.
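To make the idea of such failure prediction concrete, the following sketch trains a simple classifier on a hypothetical asset register of pipe segments and ranks them by predicted failure risk. The column names, data and model choice are illustrative assumptions, not the actual Sydney study.

```python
# Hypothetical predictive-maintenance sketch: rank pipe segments by failure risk.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative asset register: one row per pipe segment (invented data).
pipes = pd.DataFrame({
    "age_years":        [12, 55, 80, 30, 65, 8, 47, 90],
    "diameter_mm":      [100, 150, 100, 300, 150, 200, 100, 100],
    "material":         [0, 1, 1, 0, 1, 0, 1, 1],   # 0 = PVC, 1 = cast iron (encoded)
    "past_breaks":      [0, 2, 4, 1, 3, 0, 2, 5],
    "failed_next_year": [0, 1, 1, 0, 1, 0, 0, 1],   # label from maintenance records
})

X = pipes.drop(columns="failed_next_year")
y = pipes["failed_next_year"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Rank all segments by predicted failure probability to prioritize inspections.
pipes["risk_score"] = model.predict_proba(X)[:, 1]
print(pipes.sort_values("risk_score", ascending=False)[["age_years", "past_breaks", "risk_score"]])
```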

AI also promises more precise and, above all, faster processes in risk analysis and early detection. Because the expert-driven risk analysis that prevails today is very resource-intensive, it can usually only be carried out at longer intervals. AI supports a shift away from subjective, expert-driven risk analysis toward machine-based processes. On the one hand, such approaches are used to model complex, longer-term challenges such as climate change. On the other hand, machine learning and weather data can be used, for example, to update flood or landslide forecast models on a daily, hourly or even real-time basis in order to optimize early warning systems.
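The following sketch shows, in strongly simplified form, how such a data-driven forecast update might look: a regression model maps recent rainfall and gauge readings to the expected river level a few hours ahead, and a threshold on the prediction triggers a warning. All feature names, values and thresholds are illustrative assumptions rather than an operational forecasting system.

```python
# Hypothetical early-warning sketch: predict river level from weather/gauge data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative training data: rainfall over the past 24h/6h (mm), current gauge
# level (m), soil moisture index -> observed gauge level 6 hours later (m).
X_hist = np.array([
    [5, 1, 1.2, 0.3], [40, 20, 1.8, 0.7], [80, 35, 2.5, 0.9],
    [10, 2, 1.3, 0.4], [60, 30, 2.2, 0.8], [0, 0, 1.1, 0.2],
])
y_hist = np.array([1.2, 2.4, 3.6, 1.4, 3.0, 1.1])

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_hist, y_hist)

def update_forecast(latest_obs, warning_level_m=3.0):
    """Re-run the forecast whenever new weather or gauge data arrive."""
    predicted = model.predict([latest_obs])[0]
    if predicted >= warning_level_m:
        return f"WARNING: predicted level {predicted:.1f} m exceeds {warning_level_m} m"
    return f"OK: predicted level {predicted:.1f} m"

# Example: a new hourly observation arrives from sensors and the weather service.
print(update_forecast([75, 32, 2.4, 0.85]))
```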

Also useful for cybersecurity

Advances in machine vision support situational awareness and critical infrastructure surveillance in particular. Among other things, intelligent security systems enable the recognition of biometric characteristics, emotions, human actions and atypical behavior in a surveillance area. They also allow video footage to be automatically searched for objects or people in a given time period based on specific features such as size, gender or clothing color. Similarly, machine learning can be used to detect anomalies and intrusions in cybersecurity.
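As an illustration of the last point, the sketch below trains an Isolation Forest, one common anomaly-detection technique, on a few hypothetical features of normal network connections and flags deviating ones. The feature set and values are illustrative assumptions; operational intrusion detection relies on far richer data and tuning.

```python
# Anomaly detection sketch: flag unusual network connections with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative "normal" traffic: [bytes sent, bytes received, duration (s), distinct ports]
normal = np.column_stack([
    rng.normal(2_000, 300, 1_000),
    rng.normal(10_000, 1_500, 1_000),
    rng.normal(1.0, 0.2, 1_000),
    rng.integers(1, 3, 1_000),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New connections: the last one touches many ports and transfers little data.
new = np.array([
    [2_100, 9_800, 1.1, 2],
    [1_900, 10_300, 0.9, 1],
    [150, 80, 0.1, 60],
])
print(detector.predict(new))  # 1 = looks normal, -1 = flagged as anomalous
```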

Last but not least, AI can also support crisis management. For example, machine learning can be used to automatically extract the extent of local damage and the need for support from social media posts. The success of AI in strategy games is an indication that it could well be used in the future to support decision-making in crisis management. In the longer term, there is also potential in the area of resilience engineering: AI could be used in important infrastructure systems, for example, to build up generic adaptive capacity and thus help them adjust to changing environmental conditions.
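As a simplified illustration of the social media example, the sketch below trains a small bag-of-words classifier to separate damage-related posts from unrelated ones. The posts and labels are invented; operational systems would rely on large annotated corpora and modern language models.

```python
# Sketch: classify social media posts as damage reports vs. unrelated content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Bridge on Main Street collapsed, people trapped, need rescue",
    "Basement flooded, power is out in the whole neighborhood",
    "Lovely sunny afternoon in the park today",
    "Roof torn off by the storm, we need tarps and water",
    "Great concert last night, what a show",
    "Road to the hospital blocked by a landslide",
]
labels = [1, 1, 0, 1, 0, 1]  # 1 = reports damage / need for support, 0 = unrelated

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(posts, labels)

new_posts = ["School gym is taking in evacuees, water level still rising",
             "Looking forward to the weekend match"]
print(clf.predict(new_posts))
```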

Risks and challenges

Even though AI is an extremely dynamic field and expectations of the technology are often very high, certain limits to its application will remain for the foreseeable future. AI systems depend heavily on the quality and quantity of data: biases present in the training data are later reflected in a model's predictions. Likewise, while AI systems capture statistical correlations from enormous amounts of data, this does not give them an understanding of causal relationships. Where there is little or no data, as with emerging and future technological risks, current AI cannot match human expertise.

In addition, the widespread use of AI systems also poses new risks, especially when algorithms support or make momentous decisions, as in medicine, transportation, financial markets or critical infrastructure. In such cases, compliance with criteria such as fairness, accuracy and robustness must be ensured, for example by checking how strongly a model weights different inputs in its decisions, so that those decisions meet ethical standards and do not discriminate on the basis of origin or gender. Another danger that needs to be prevented, especially in markets, is cascading interactions between algorithms, as in the "flash crash" (a sudden, sharp price drop) on Wall Street in 2010. In addition, AI systems are vulnerable to so-called "adversarial examples": manipulated images or physical objects that deliberately confuse the AI. For example, researchers at the Massachusetts Institute of Technology 3D-printed a plastic turtle that Google's object recognition AI consistently classified as a firearm. Another American research team used inconspicuous stickers to make the image recognition systems of (semi-)autonomous vehicles classify a stop sign as a speed limit sign.
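To illustrate the mechanism behind adversarial examples, the sketch below applies the fast gradient sign method (FGSM), a common way of crafting such inputs, to a stand-in classifier: each pixel is nudged slightly in the direction that increases the model's loss. The tiny untrained network and random "image" are placeholders for a real, trained vision model.

```python
# FGSM sketch: perturb an input slightly so that a classifier's decision shifts.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier and "image" (a real attack would target a trained vision model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32, requires_grad=True)

# Use the model's own prediction as the label to push away from.
logits = model(image)
original_pred = logits.argmax(dim=1)
loss = nn.functional.cross_entropy(logits, original_pred)
loss.backward()

# FGSM step: nudge every pixel in the direction that increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# With a trained model and a suitable epsilon, the prediction typically flips
# even though the perturbation is barely visible to a human.
print("original prediction:   ", original_pred.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```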

Hackers can abuse AI

Lastly, it should be noted that AI tools can be used not only to protect against risks but also with malicious intent, particularly for cyberattacks. Advances in text processing, text understanding and natural language generation, for example, could allow malicious actors to automate fraudulent phone calls ("vishing") and to tailor e-mails and mobile messages ("spear phishing") in order to extract credentials on a new scale. In addition, smarter malware could better mimic human click behavior and spread more autonomously. With the rapidly growing interconnection of technological devices and systems, from household appliances to critical infrastructure (keyword: Internet of Things), the potential damage from AI-based cyberattacks is also increasing.

Conclusion

AI is a general-purpose technology and is making increasing inroads into risk management. The practice of risk analysis and monitoring, for example, continues to change with advances in machine vision and natural language processing. At the same time, however, today's AI systems should not be overestimated. Forecasting extreme events with AI, for instance, is often difficult due to a lack of training data. The rapid and not always linear development of AI makes it difficult to realistically estimate future AI capabilities, and there is no expert consensus on the time frame in which "strong AI" could become a reality. AI is advanced statistics: it is not inherently neutral, nor does it currently possess a human-like understanding of concepts. Public and private stakeholders should therefore invest first and foremost in the training and qualification of their employees so that they can properly train, use and assess AI tools.

Finally, the transformative potential of AI systems in many areas also means that policymakers need to pay more attention to them. For example, the new EU Commission presented its white paper on AI in February 2020, which envisages the development of legally binding requirements for high-risk applications such as medical decisions or biometric identification. In Switzerland, the interdepartmental AI working group presented its report in December 2019. This report considers the current legislation to be sufficient, but highlights a need for clarification in the areas of international law, public opinion formation, and administration.

Authors:

  • Dr. Florian Roth, Senior Researcher, Risk and Resilience Research Team, Center for Security Studies (CSS), ETH Zurich
  • Kevin Kohler, Research Assistant, Risk and Resilience Research Team, CSS, ETH Zurich

 

Reading tips

  • ACLU (2019). The Dawn of Robot Surveillance: AI, Video Analytics, and Privacy. www.aclu.org/sites/default/files/field_document/061119-robot_surveillance.pdf
  • Brundage, M. et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. https://arxiv.org/pdf/1802.07228.pdf
  • Shoham, Y. et al. (2019). The AI Index 2018 Annual Report. https://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf
  • World Bank (2018). Machine Learning for Disaster Risk Management. https://documents.worldbank.org/curated/en/503591547666118137/pdf/133787-WorldBank-DisasterRiskManagement-Ebook-D6.pdf

 
