Smart AI is not enough - join the discussion
The ITEA project CyberFactory#1 aimed at designing, developing, integrating and demonstrating a set of key enabling capabilities to foster optimisation and resilience of the Factories of the Future (FoF). During the project, Finnish project partner Houston Analytics was seeking ways to utilise AI for anomaly detection to secure processes in the factory environment. According to Seppo Heikura, Senior Advisor at Houston Analytics, it is not enough for AI to be merely smart; AI systems should also be robust and safe. In this article he elaborates on this topic and invites you to join the discussion.
In general, companies tend to focus primarily on ROI and functionality, while security and robustness, although equally important, may get less attention. The same applies to AI solutions: the primary focus tends to lie on creating a smart AI system, while two further key questions should also be addressed:
- Is it solid: is the AI solution always delivering what it was planned to deliver?
- Is it safe: have vulnerabilities been assessed, and are the needed security measures in place?
Robustness and security of AI participate in multidimensional trade-offs with other requirements, including ethics, regulation, bias, explainability and governance. Ultimately, AI solutions represent business decisions rather than simply technical ones. That is why AI security and robustness should also be considered primarily from a business perspective.
Robustness means that the AI system can perform in a consistent way under different environmental circumstances. There are several ways to test AI robustness:
- Are results repeatedly the same with the same data?
- Are results as expected with similar data?
- Is the model sensitive to extreme border values?
Security means that the AI system is protected from abuse. AI systems are usually part of decision-making processes. Abusers could exploit vulnerabilities in the system by working out the rules of play: entering exactly the criteria the AI system is looking for in order to "play" the system and obtain the desired decisions. In designing the interaction with AI, detection approaches and thresholds for such inputs need to be considered.
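One common detection approach is to gate inputs before they reach the model: inputs that sit far outside the distribution the model was trained on may indicate someone probing the decision boundaries. The class below is a hypothetical sketch of such a gate (the z-score rule and the threshold of 3.0 are illustrative assumptions, not a method from the project):

```python
import numpy as np

class InputGate:
    """Flags suspicious inputs before they reach the AI model.

    Hypothetical sketch: compares incoming feature vectors against
    per-feature statistics of the training data and flags outliers
    that may indicate an attempt to "play" the system.
    """

    def __init__(self, training_data: np.ndarray, z_threshold: float = 3.0):
        self.mean = training_data.mean(axis=0)
        self.std = training_data.std(axis=0) + 1e-12  # avoid division by zero
        self.z_threshold = z_threshold

    def is_suspicious(self, x: np.ndarray) -> bool:
        # An input is flagged if any feature deviates too far from training data.
        z_scores = np.abs((x - self.mean) / self.std)
        return bool(np.any(z_scores > self.z_threshold))
```

Flagged inputs need not be rejected outright; routing them to human review is one way to combine such a threshold with the human intervention discussed below.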
As all AI-related decisions are primarily business decisions, decision makers must weigh the balance between potential benefits and potential security and robustness issues. Having just smart AI is not enough. It also has to fulfil the demands related to robustness and security in order to serve the desired business needs properly. In some cases, even human intervention in the process can be useful to increase the adaptivity of the solution.
If you are interested in this topic and would like to join the discussion, Houston Analytics invites you to join the LinkedIn group: 'Building Smarter Businesses: Guidance for company leaders on adopting and succeeding with AI'.
Related projects
CyberFactory#1
Addressing opportunities and threats for the Factory of the Future (FoF)