The company I retired from was very forward-thinking, and we were playing with neural network AI for business purposes some thirty years ago. Management was impressed but sceptical, and a frequent comment was that we should not just bring them a clever Black Box, but also explain the basis for its answers, i.e. what the critical factors were in making the Black Box arrive at an answer.
That requirement proved a very important hurdle in a world of 50:50 decisions, and one which resulted in the AI being rejected most of the time.
Times move on, and Covid has prompted a big leap forward in explaining what makes a Black Box give out its answers; sometimes, it turns out, an AI is relying on spurious features in its data to reach a decision.
This paper in Nature explains how Washington University used techniques to identify which factors most influenced the AI's response. In the case of a Covid 'detector', they found that the model was using spurious data arising from the way the X-rays had been labelled: it had 'learnt' from the training data that Covid-positive patients' X-rays generally came from particular sources, i.e. it was not really identifying valid lung structures at all.
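The general idea behind such probing can be illustrated with occlusion-based attribution: blank out each part of the input in turn and see how much the model's score drops. This is a minimal toy sketch, not the method from the Nature paper; the "classifier", image size, and corner marker are all invented for illustration. The toy model deliberately keys on a source-specific corner marker (standing in for a hospital label on an X-ray) rather than the central "lung" pixels, and the occlusion map exposes that shortcut.

```python
# Hedged sketch: occlusion-based attribution on a toy "X-ray" classifier.
# The model, image size, and marker position are all invented for illustration.

def toy_classifier(image):
    """Toy model that, like the flawed Covid detector described above, keys
    on a source-specific marker in the top-left corner, not the 'lungs'."""
    # Score is dominated by the corner 'label' pixel plus a tiny lung term.
    return 5.0 * image[0][0] + 0.1 * sum(image[4][2:6])

def occlusion_map(model, image, baseline=0.0):
    """Blank out each pixel in turn and record how much the score drops.
    Large drops mark the pixels the model actually relies on."""
    base_score = model(image)
    rows, cols = len(image), len(image[0])
    importance = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            saved = image[r][c]
            image[r][c] = baseline              # occlude one pixel
            importance[r][c] = base_score - model(image)
            image[r][c] = saved                 # restore
    return importance

# An 8x8 'X-ray': lung structure in the middle, source marker in the corner.
img = [[0.0] * 8 for _ in range(8)]
for c in range(2, 6):
    img[4][c] = 1.0                             # 'lung' pixels
img[0][0] = 1.0                                 # hospital label marker

imp = occlusion_map(toy_classifier, img)
peak = max(((r, c) for r in range(8) for c in range(8)),
           key=lambda rc: imp[rc[0]][rc[1]])
print(peak)  # → (0, 0): the corner marker, not the lungs, drives the score
```

Running this shows the most influential pixel is the corner marker, the toy analogue of the spurious labelling cue the researchers uncovered; a model genuinely reading lung structure would light up the central pixels instead.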