An Interview with Elizabeth M. Adams, Affiliate Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Topics: AI ethics, Interview, Responsible AI, AIbrains
Five practical steps to make Artificial Intelligence interpretable
If AI is to gain people’s trust, organisations must be able to account for the decisions that AI makes and explain them to the people affected. Making AI interpretable can foster trust, enable control and make the results produced by machine learning more actionable. So how can AI-based decisions be explained?
Topics: AI, Data, Machine Learning, Responsible AI