AI’s big problem: Can we trust a system we don’t understand?
Artificial intelligence is steadily taking on more complex tasks, from driving cars to playing games and making loans. But the technology has an unusual attribute: we don’t always understand how artificial neural networks reach their decisions, as Will Knight argues in MIT Technology Review. A deep-learning project at a New York hospital, for example, appeared successful at the difficult task of predicting schizophrenia, yet offered no rationale for its findings. Such opacity is a problem, Knight writes, because results that can’t be explained can’t be checked. Rather than trusting those results blindly, experts suggest, the answer may be to teach AI systems to explain themselves. “It’s going to introduce trust,” says Ruslan Salakhutdinov, director of AI research at Apple.

Read more in MIT Technology Review >