With polls showing that more than 70 percent of people in the U.S. remain wary of autonomous machines, the amount of research going into transparency in artificial intelligence (AI) is no surprise. In February, Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias, and in May Microsoft launched a solution of its own. Now, Google is following suit.

The Mountain View company today debuted the What-If Tool, a new bias-detecting feature of TensorBoard, the web dashboard for its TensorFlow machine learning framework. With nothing more than a trained model and a dataset, users can generate visualizations that explore the impact of algorithmic tweaks, no custom code required.
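That "model plus dataset" workflow is easiest to see in code. Below is a minimal, hypothetical sketch of handing both to the tool through its companion witwidget notebook package; the toy features (age, income) and the stand-in predict function are invented for illustration and are not part of Google's announcement.

```python
import numpy as np
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def make_example(age, income, label):
    """Pack one row into a tf.Example proto, the record format the tool reads."""
    return tf.train.Example(features=tf.train.Features(feature={
        "age": tf.train.Feature(float_list=tf.train.FloatList(value=[age])),
        "income": tf.train.Feature(float_list=tf.train.FloatList(value=[income])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }))

# A tiny stand-in dataset (values are hypothetical).
examples = [make_example(a, i, l) for a, i, l in
            [(25, 40000, 0), (38, 72000, 1), (52, 55000, 1), (29, 31000, 0)]]

def predict(examples):
    """Stand-in for a real model: returns [p(neg), p(pos)] per example."""
    probs = []
    for ex in examples:
        income = ex.features.feature["income"].float_list.value[0]
        p = 1.0 / (1.0 + np.exp(-(income - 50000) / 10000))  # toy logistic score
        probs.append([1.0 - p, p])
    return probs

# Hand the model and dataset to the What-If Tool; the rest is interactive.
config = WitConfigBuilder(examples).set_custom_predict_fn(predict)
WitWidget(config, height=600)
```

Everything beyond these few lines, the editing, plotting, and probing, happens in the rendered widget rather than in code.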

“Probing ‘what if’ scenarios [in AI] often means writing custom, one-off code to analyze a specific model,” Google AI software engineer James Wexler wrote in a blog post. “Not only is this process inefficient, it makes it hard for non-programmers to participate in the process of shaping and improving ML models.”

[Image: Exploring scenarios on a data point within TensorBoard. Credit: Google]

Using the What-If Tool, TensorBoard users can manually edit examples from datasets and see the effect of those changes in real time, or generate plots that illustrate how a model’s predictions vary with any single feature.
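Conceptually, each interactive edit is just a rerun of the model with one feature changed, and each plot a sweep over a single feature. Here is a toy sketch of both operations; the logistic stand-in model and the feature names are assumptions for illustration only.

```python
import numpy as np

def predict(age, income):
    """Toy stand-in model: probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(income - 50000) / 10000 - (age - 40) / 20))

# Manual "what-if" edit: change one feature and compare predictions.
base = {"age": 38, "income": 72000}
edited = dict(base, income=45000)
print(predict(**base), predict(**edited))

# Partial-dependence-style sweep: vary a single feature, hold the rest fixed.
for income in np.linspace(20000, 100000, 5):
    print(f"income={income:>8.0f} -> p(pos)={predict(base['age'], income):.3f}")
```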

Key to this process are counterfactuals and algorithmic fairness analysis. With a button click, the What-If Tool can show a comparison between a data point and the next-closest data point for which the model predicts a different result. Another click…
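That counterfactual comparison amounts to a nearest-neighbor search restricted to points the model scores differently. A minimal sketch of the idea, assuming a scaled L2 distance as one possible metric and an invented threshold model:

```python
import numpy as np

def nearest_counterfactual(x, dataset, predict, scale):
    """Find the closest point (by scaled L2 distance) whose predicted
    class differs from that of x."""
    own_class = predict(x)
    best, best_dist = None, np.inf
    for other in dataset:
        if predict(other) == own_class:
            continue  # only points the model classifies differently qualify
        dist = np.linalg.norm((x - other) / scale)
        if dist < best_dist:
            best, best_dist = other, dist
    return best, best_dist

# Toy 2-feature dataset and threshold model (all values hypothetical).
data = np.array([[25, 40000], [38, 72000], [52, 55000], [29, 31000]], float)
model = lambda p: int(p[1] > 50000)   # classify on the income feature alone
scale = data.std(axis=0)              # normalize the feature ranges

cf, dist = nearest_counterfactual(data[1], data, model, scale)
print(cf, dist)  # the closest example the model scores the other way
```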
