Bridging the Trust Divide to Expand AI Implementation

Tuesday, Dec 10, 2024

The introduction of artificial intelligence sparked significant excitement and widespread integration, but its momentum is currently tapering off.

Businesses continue to express enthusiasm for AI, citing McKinsey's estimate that generative AI could save companies up to $2.6 trillion across various sectors. Despite such interest, there's hesitance to implement the technology. A survey of top analytics and IT executives found that just 20% of generative AI applications are operational.

What accounts for the disparity between enthusiasm and execution?

The causes are many. Prominent issues include security and data privacy concerns, compliance risks, and data management complexities. Additionally, there's unease over AI's transparency and worries about return on investment, costs, and skills shortages. We'll explore these hindrances and propose strategies businesses can employ to counter them.

Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds, commented that “high-quality data is critical for the accuracy and reliability of AI models, which results in better decision-making and outcomes.” He emphasized that trustworthy data enhances AI confidence among IT professionals, which expedites broader adoption and incorporation of AI technologies.

Currently, only 43% of IT specialists are confident in meeting AI's data requirements, and given data's importance, it’s clear that data issues are often the bottleneck in adopting AI.

Addressing this involves revisiting foundational data processes. Organizations must develop robust data governance plans from scratch, ensuring strict controls that uphold data quality and integrity.
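Strict controls of this kind are often enforced as automated data-quality gates that quarantine bad records before they ever reach a model. The following is a minimal sketch of that idea; the field names and the completeness rule are illustrative assumptions, not a governance standard.

```python
# Minimal sketch of an automated data-quality gate a governance plan
# might enforce before records are used to train or prompt an AI model.
# Field names and rules here are illustrative assumptions.

def check_record(record, required_fields):
    """Return a list of quality issues found in a single record."""
    issues = []
    for field in required_fields:
        if field not in record or record[field] in (None, ""):
            issues.append(f"missing value for '{field}'")
    return issues

def audit(records, required_fields):
    """Split a batch into clean records and a quarantine report."""
    clean, report = [], {}
    for i, rec in enumerate(records):
        problems = check_record(rec, required_fields)
        if problems:
            report[i] = problems  # quarantined for review, not fed to the model
        else:
            clean.append(rec)
    return clean, report

records = [
    {"customer_id": "C1", "region": "EMEA"},
    {"customer_id": "", "region": "APAC"},  # fails the completeness rule
]
clean, report = audit(records, ["customer_id", "region"])
```

In practice such checks would cover schema drift, duplicates, and freshness as well, but the pattern is the same: validate first, quarantine failures, and only pass vetted data downstream.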

With rising regulatory demands, compliance poses a challenge for many firms. Incorporating AI only amplifies risk, regulatory exposure, and ethical governance concerns, with security and compliance risk highlighted as a major worry in Cloudera's AI and data architecture report.

Though AI regulations can initially seem daunting, executives should embrace these frameworks for support, as they offer a foundation on which to construct effective risk management and ethical guidelines.

Creating compliance policies, forming AI governance teams, and maintaining human oversight in AI-driven decisions are key in establishing comprehensive AI ethics and governance systems.

Security and data privacy pose significant challenges to businesses, and justifiably so. Cisco’s 2024 Data Privacy Benchmark Study showed that 48% of employees have admitted to entering non-public company information into generative AI tools, prompting 27% of companies to bar such technologies.

Mitigating these risks requires restricting access to sensitive information. This means reinforcing access controls, preventing privilege creep, and keeping data away from public large language models (LLMs). Avi Perez, CTO of Pyramid Analytics, noted that his firm's business intelligence software was deliberately designed so that its AI never shares underlying data with the LLM, passing only metadata to facilitate analysis. He noted, "Data privacy and associated issues can be severe barriers... With our system, the LLM creates the solution without accessing data or performing calculations, removing about 95% of data privacy risks."
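The "metadata only" pattern Perez describes can be sketched generically: the LLM sees the table's schema, never its rows, and returns a query that is executed locally. This is a hedged illustration of the general architecture, not Pyramid's actual implementation; `llm_generate_sql` and `run_locally` are hypothetical stand-ins for a real model call and a local query engine.

```python
# Sketch of the metadata-only pattern: the LLM receives schema
# information, never row data, and returns a query to run in-house.
# llm_generate_sql and run_locally are hypothetical stand-ins.

def schema_metadata(table_name, columns):
    """Describe a table's structure without exposing any row data."""
    return {"table": table_name, "columns": columns}

def build_prompt(question, metadata):
    cols = ", ".join(f"{c['name']} ({c['type']})" for c in metadata["columns"])
    return (f"Table {metadata['table']} has columns: {cols}. "
            f"Write a SQL query to answer: {question}")

def answer(question, metadata, llm_generate_sql, run_locally):
    prompt = build_prompt(question, metadata)  # metadata only, no rows
    sql = llm_generate_sql(prompt)             # the model writes the query
    return run_locally(sql)                    # data never leaves the firewall

meta = schema_metadata("sales", [{"name": "region", "type": "TEXT"},
                                 {"name": "amount", "type": "REAL"}])
prompt = build_prompt("total sales by region", meta)
```

The key design choice is that only the prompt crosses the trust boundary, and the prompt contains structure, not content.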

A major roadblock to AI acceptance is skepticism over its outcomes. The well-known instance of Amazon’s AI hiring tool, which exhibited gender bias, serves as a deterrent for many. Increasing AI explainability and transparency is key to alleviating these anxieties.

Adnan Masood, chief AI architect at UST and a Microsoft regional director, put it this way: "AI transparency revolves around clearly explaining the reasoning behind the outcomes, making decision-making processes open and comprehensible." Unfortunately, many executives underestimate transparency's significance: an IBM survey found that only 45% of CEOs deliver on their promises of openness. AI advocates must therefore develop thorough AI governance protocols to avoid creating 'black boxes' and invest in explainability tools such as SHapley Additive exPlanations (SHAP), Google's Fairness Indicators, and the automated compliance checks in the Institute of Internal Auditors' AI Auditing Framework.
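The idea behind SHAP can be shown in miniature: attribute a prediction to each input feature by averaging that feature's marginal contribution over all feature orderings. Production SHAP libraries approximate this efficiently; the brute-force version below, with a made-up two-feature scoring model, is only meant to build intuition.

```python
# Toy illustration of the Shapley-value idea behind SHAP: for a tiny
# model, attribute the prediction exactly to each feature by averaging
# its marginal contribution over all feature orderings. Real SHAP tools
# approximate this; the brute force here is for intuition only.
from itertools import permutations

def shapley_values(model, x, baseline):
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        z = list(baseline)
        prev = model(z)
        for i in order:
            z[i] = x[i]           # reveal feature i in this ordering
            cur = model(z)
            phi[i] += cur - prev  # its marginal contribution
            prev = cur
    return [p / len(orders) for p in phi]

# Hypothetical scoring model used only as an example.
def model(features):
    income, debt = features
    return 2.0 * income - 1.0 * debt

phi = shapley_values(model, x=[3.0, 1.0], baseline=[0.0, 0.0])
```

A useful sanity check is that the attributions always sum to the difference between the model's output at `x` and at the baseline, which is exactly what makes them auditable explanations rather than heuristics.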

As with most new technologies, cost remains a significant barrier to AI implementation. According to a survey by Cloudera, 26% of respondents described AI tools as overly expensive, and Gartner cites 'unclear business value' as a reason for AI project failure. However, the same Gartner report finds that generative AI has delivered average revenue growth and cost reductions of over 15% among its users, showcasing AI's potential financial benefits when properly deployed.

It's imperative to approach AI with the same strategic mindset as any other business venture: target areas promising quick returns on investment, outline expected benefits, and set measurable KPIs to validate outcomes. Michael Robinson, Director of Product Marketing at UiPath, suggests the first step should be identifying high-value, transformative AI use cases.
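Outlining expected benefits and validating them against KPIs ultimately reduces to a back-of-envelope ROI check. The sketch below shows that arithmetic; every figure in it is an illustrative assumption, not a benchmark.

```python
# Back-of-envelope ROI check of the kind the KPI step calls for.
# All figures below are illustrative assumptions, not benchmarks.

def simple_roi(annual_benefit, annual_cost, upfront_cost, years):
    """Return net gain over the period as a fraction of total cost."""
    total_benefit = annual_benefit * years
    total_cost = upfront_cost + annual_cost * years
    return (total_benefit - total_cost) / total_cost

# Hypothetical pilot: $500k/yr in projected savings against $150k/yr
# in running costs and a $200k upfront build, over a three-year horizon.
roi = simple_roi(annual_benefit=500_000, annual_cost=150_000,
                 upfront_cost=200_000, years=3)
```

Even a crude model like this forces the conversation Robinson recommends: if the projected `annual_benefit` cannot be tied to a measurable KPI, the use case probably is not the high-value one to start with.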

Lack of skills is a persistent hurdle to AI use, yet little is being done to bridge the gap. Worklife's report highlights that the initial wave of AI adoption was championed by early adopters. Now it's the laggards, characteristically doubtful and less optimistic about AI and emerging technology in general, who are holding back progress.
