2019 Enterprise Analytics Trends: The Need for Explainable AI | MicroStrategy


With IDC’s Data Age 2025 report predicting that the global datasphere will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025 (with enterprise organizations creating and managing 60% of this data), it’s imperative that enterprises focus now on how to leverage artificial intelligence in their data and analytics initiatives.

According to Deloitte Insights’ State of AI in the Enterprise, 82% of the more than 1,000 professionals surveyed said their organizations have already gained a financial return from their AI investments, with the top three benefits being:

  • Enhancing current products (44%)
  • Optimizing internal operations (42%)
  • Making better decisions (35%)

Relative to competitors, respondents say the adoption of AI has allowed their organization to either catch up (16%), stay on par (20%), edge slightly ahead (27%), widen a lead (28%), or leapfrog ahead (9%). But concerns are top of mind as well. Some of the potential AI risks companies worry about include cybersecurity vulnerabilities (51%), making the wrong strategic decisions based on AI (43%), legal responsibility for decisions/actions made by AI systems (39%), regulatory non-compliance risk (37%), and ethical risks (32%).

In 10 Enterprise Analytics Trends to Watch in 2019, Borba Consulting founder and Boulder BI Brain Trust (BBBT) member Marcus Borba says organizations are right to worry. “As artificial intelligence becomes more sophisticated, concern grows around the fear of artificial intelligence systems acting as black boxes. There are several concerns encompassing various themes:

  • Fairness: How can I check that the decisions were made fairly?
  • Bias: How do I know that my AI application does not have a biased view of the world?
  • Security: Can I have confidence in my artificial intelligence application without an explanation of how it makes decisions and reaches conclusions?”

Borba notes multiple reasons to invest in explainable AI (an artificial intelligence whose actions can be explained and understood by people), including those surfaced in Deloitte’s survey above: regulations, ethical use of data, transparency, compliance requirements, and risk.

“When it comes to the ethical use of data, the explanation of model outputs will drive successful adoption, revealing if sensitive data is causing similar exclusions and avoiding negative ethical outputs, and providing a provable way to show how decisions are ethical. In compliance, explainable AI can provide an auditable record, including all parameters associated with the prediction, enabling the business to meet compliance requirements whenever necessary.

“The challenge of explainable AI,” notes Borba, “is to produce more explainable models while maintaining a high level of prediction accuracy, enabling users to understand, trust, and manage their artificial intelligence applications.”
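As a concrete illustration of the transparency and auditability Borba describes, consider a fully interpretable model that exposes every parameter behind each decision. The sketch below is a simplified, hypothetical linear scorer (the feature names, weights, and approval threshold are invented for illustration and do not represent any specific XAI tool or product):

```python
# A minimal sketch of an "explainable" prediction: a transparent linear
# scorer that returns, alongside each decision, an auditable record of how
# every input feature contributed to the score.

def explain_prediction(weights, bias, features):
    """Score one input and break the score down per feature."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return {
        "score": round(score, 4),
        "decision": "approve" if score >= 0.5 else "decline",
        # Per-feature contributions answer "why was this decision made?"
        "contributions": {k: round(v, 4) for k, v in contributions.items()},
        # Full parameter record supports the audit trail Borba mentions
        "parameters": {"weights": weights, "bias": bias},
    }

# Hypothetical credit-style example
weights = {"income": 0.3, "debt_ratio": -0.5, "tenure_years": 0.1}
record = explain_prediction(
    weights, bias=0.2,
    features={"income": 2.0, "debt_ratio": 1.0, "tenure_years": 3.0})
print(record["decision"], record["contributions"])
```

Because every prediction carries its contributing parameters, a reviewer can check whether a sensitive attribute drove an exclusion and reproduce any past decision for compliance purposes. Real deployments face the harder trade-off Borba names next: complex models are usually more accurate but far less transparent than this.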

Read more insights on explainable AI and augmented analytics from this well-known BBBT member, and see what Forrester analysts, Constellation Research’s Ray Wang and Doug Henschen, Ventana Research’s Mark Smith and David Menninger, IDC’s Chandana Gopal, Ronald van Loon, and other top thought leaders say requires your organization’s attention now. Download the eBook: 10 Enterprise Analytics Trends to Watch in 2019.
