‘Explainable Artificial Intelligence’: Cracking open the black box of AI
Source: George Nott


At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant.

“It is very clever, it can do some amazing things but it needs a lot of hand holding still. AI is almost like a toddler. They can do some pretty cool things, sometimes they can cause a fair bit of trouble,” said Gore, AWS’ chief architect, in his day-two keynote at the company’s summit in Sydney.

Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and ask them why. That’s not so easy with AI.

Artificial intelligence – in its application of deep learning neural networks, complex algorithms and probabilistic graphical models – has become a ‘black box’ according to a growing number of researchers.

And they want an explanation.

Opening the black box

“You don’t really know why a system made a decision. AI cannot tell you that reason today. It cannot tell you why,” says Aki Ohashi, director of business development at PARC (Palo Alto Research Center). “It’s a black box. It gives you an answer and that’s it, you take it or leave it.”

For AI to be confidently rolled out by industry and government, he says, the technologies will require greater transparency and the ability to explain their decision-making process to users.

“You need to have the system accountable,” he told the AIIA Navigating Digital Government Summit in Canberra on Wednesday. “You can’t blame the technology. They have to be more transparent about the decisions that are made. It’s not just saying – well that’s what the system told me.”

PARC has been working with the Defense Advanced Research Projects Agency, an agency of the U.S. Department of Defense, on what is being called Explainable Artificial Intelligence, or XAI.

The research is working towards new machine-learning systems that can explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future. Importantly, they will also translate their models into explanations that are understandable and useful to end users.

In current models, nodes effectively decide for themselves which features to base decisions on; in image recognition, that might mean minuscule dots or shadows.

“They focus on whatever they want. The things they focus on are not things that tend to be intuitive to humans,” Ohashi says.


One way to change this, being explored by PARC, is to restrict the features that nodes in a neural network consider to ‘concepts’ such as colour, shape and texture.

“The AI then starts thinking about things from a perspective which is logically understandable to humans,” Ohashi says.
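
PARC has not published code for this approach, but it is broadly in the spirit of what the research literature calls a concept-bottleneck model: the network is forced to route every prediction through a small set of human-readable concept scores before committing to an answer. The Python (PyTorch) sketch below is purely illustrative, with made-up concept names and a toy encoder; it is not PARC’s system.

    # Illustrative sketch of a concept-bottleneck-style classifier. The final
    # decision can only be made from human-readable concept scores (colour,
    # shape, texture), so each prediction can be traced back to concepts a
    # person can inspect.
    import torch
    import torch.nn as nn

    CONCEPTS = ["red", "green", "round", "elongated", "smooth", "leafy"]  # hypothetical

    class ConceptBottleneckClassifier(nn.Module):
        def __init__(self, num_classes: int):
            super().__init__()
            # Any image encoder could sit here; a tiny CNN keeps the sketch self-contained.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Bottleneck: the class prediction is computed from concept scores only.
            self.to_concepts = nn.Linear(16, len(CONCEPTS))
            self.to_classes = nn.Linear(len(CONCEPTS), num_classes)

        def forward(self, image: torch.Tensor):
            concept_logits = self.to_concepts(self.encoder(image))
            class_logits = self.to_classes(torch.sigmoid(concept_logits))
            # Returning both lets a caller show which concepts drove the prediction.
            return class_logits, concept_logits

    model = ConceptBottleneckClassifier(num_classes=4)
    class_logits, concept_logits = model(torch.randn(1, 3, 64, 64))
    print({c: round(s.item(), 2) for c, s in zip(CONCEPTS, torch.sigmoid(concept_logits[0]))})

Because the classifier only ever sees the concept scores, a prediction can be explained in terms a person can argue with (for example, “it was labelled a plant because ‘green’ and ‘leafy’ scored highly”) rather than in terms of arbitrary pixel patterns.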

Others are working towards the same goal. While “humans are surprisingly good at explaining their decisions,” said researchers at University of California, Berkeley and the Max Planck Institute for Informatics in Germany in a recent paper, deep learning models “frequently remain opaque”.

They are seeking to “build deep models that can justify their decisions, something which comes naturally to humans”.

Their December paper, Attentive Explanations: Justifying Decisions and Pointing to the Evidence, primarily focused on image recognition, takes a significant step towards AI that can provide natural-language justifications of its decisions and point to the evidence behind them.


Being able to explain its decision-making is necessary for AI to be fully embraced and trusted by industry, Ohashi says. You wouldn't put a toddler in charge of business decisions.

“If you use AI for financial purposes and it starts building up a portfolio of stocks which are completely against the market. How does a human being evaluate whether it’s something that made sense and the AI is really really smart or if it’s actually making a mistake?” Ohashi says.

There have been some early moves into XAI among enterprises. In December Capital One Financial Corp told the Wall Street Journal that it was employing in-house experts to study ‘explainable AI’ as a means of guarding against potential ethical and regulatory breaches.

UK start-up Weave, which is now focused on XAI solutions, has been the target of takeover talks in Silicon Valley, the Financial Times reports.

Since December, Amazon has offered three artificial intelligence based services on its cloud platform: Lex, a deep learning speech recognition and natural-language tool; Polly, a text-to-speech tool; and Rekognition, for image recognition.

Speaking to Computerworld on Thursday, Gore hinted that there would eventually be some ‘explainable’ element to the offering.

“Right now no. You just put data in and get attributes out,” he said. “As it evolves over time, being able to understand that decision making process to a certain level will be there. So you can try and work out why it may be making a recommendation.”
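
For context, a typical call to Rekognition through the AWS SDK for Python (boto3) shows that “data in, attributes out” pattern: the response is a flat list of labels with confidence scores and no accompanying rationale. The file name and region below are placeholders.

    # Minimal sketch of an image-label request to Amazon Rekognition via boto3.
    import boto3

    rekognition = boto3.client("rekognition", region_name="ap-southeast-2")  # region is illustrative

    with open("speaker_photo.jpg", "rb") as f:  # hypothetical local image
        response = rekognition.detect_labels(
            Image={"Bytes": f.read()},
            MaxLabels=5,
            MinConfidence=70,
        )

    # Attributes out, but no reasons: each label is just a name and a confidence score.
    for label in response["Labels"]:
        print(f'{label["Name"]}: {label["Confidence"]:.1f}%')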

Trust in the system

As well as the potential commercial benefits of AI that can explain, in human terms, how it has reached its decisions, there is also a societal need.

Our lives will be increasingly influenced by deep learning algorithms, from those with immediate consequences for human safety, such as medical diagnosis systems or driverless car autopilots, to AI built into larger systems that could determine our credit rating, insurance premium or opportunity for promotion.

“It’s incredibly easy to be seduced by the remarkable nature of the technology that is coming. And it is remarkable,” ANU anthropologist and Intel senior fellow Genevieve Bell told Wednesday’s AIIA summit.

“What is coming is amazing. Some of that tech is provocative and remarkable and delightful. Having humans in the middle both as the objects and subjects and regulators of that technology is the most important and in some ways the hardest thing to do.”

The Institute of Electrical and Electronics Engineers is considering the issue with its Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Its Ethically Aligned Design standards guide suggests that systems must be accountable and transparent.

“For users, transparency is important because it builds trust in the system, by providing a simple way for the user to understand what the system is doing and why,” it reads.

One notable suggestion – set out in the standards for physical robots – is exactly what AI needs: that they all be fitted with a button marked 'why-did-you-do-that?'.

