No one doubts that artificial intelligence is a strategic boardroom issue, though diginomica revealed last year that much of the initial buzz was individuals using free cloud tools as shadow IT, while many business leaders talked up AI in their earnings calls just to keep investors happy.
In 2024, those caveats remain amidst the hype. As one of my stories from KubeCon + CloudNativeCon last week showed, the reality for many software engineering teams is the C-suite demanding an AI ‘hammer’ with little idea of what business nail they want to hit with it.
Or, as Intel Vice President and General Manager for Open Ecosystem Arun Gupta put it:
"When we go into a CIO discussion, it’s ‘How can I use Gen-AI?’ And I’m like, ‘I don’t know. What do you want to do with it?’ And the answer is, ‘I don’t know, you figure it out!’"
So, now that AI Spring is in full bloom, what is the reality of enterprise adoption? Two reports this week unveil some surprising new findings, many of which show that the hype cycle is ending more quickly than the industry would like.
First up is a white paper from PagerDuty, the $2 billion cloud incident-response provider. According to its survey of 100 IT leaders at Fortune 1,000 companies, 100% are concerned about the security risks of the technology, and 98% have paused Gen-AI projects as a result.
Those are extraordinary figures. However, the perceived threats are not solely about cybersecurity (with phishing, deepfakes, complex fraud, and automated attacks on the rise), but are rooted in what PagerDuty calls the “moral implications”. These include worries over copyright theft in training data and any legal exposure that may arise from that.
As previously reported (see diginomica, passim), multiple IP infringement lawsuits are ongoing in the US, while in the UK, the House of Lords’ Communications and Digital Committee was clear, in its inquiry into Large Language Models, that copyright theft had taken place. Peers arrived at that conclusion after interviewing expert witnesses from all sides of the debate, including vendors and lawyers.
According to PagerDuty, unease over these issues keeps more than half of respondents (51%) awake at night, with nearly as many concerned about the disclosure of sensitive information (48%), data privacy violations (47%), and social engineering attacks (46%). They are right to be cautious: last year, diginomica reported that source code is the most common form of privileged data disclosed to cloud-based AI tools.
The white paper adds:
"Any of these security risks could damage the company’s public image, which explains why Gen-AI’s risk to the organization’s reputation tops the list of concerns for 50% of respondents. More than two in five also worry about the ethics of the technology (42%). Among the executives with these moral concerns, inherent societal biases of training data (26%) and lack of regulation (26%) top the list."
Despite this, only 25% of IT leaders actively mistrust the technology, adds the white paper – cold comfort for vendors, perhaps. Even so, it is hard to avoid the implication that, while some providers might have first- or big-mover advantage in generative AI, any that trained their systems unethically may have stored up a world of problems for themselves.
However, with nearly all Fortune 1,000 companies pausing their AI programmes until clear guidelines can be put in place – though the figure of 98% seems implausibly high – the white paper adds:
"Executives value these policies, so much so that a majority (51%) believe they should adopt Gen-AI only after they have the right guidelines in place. [But] others believe they risk falling behind if they don’t adopt Gen-AI as quickly as possible, regardless of parameters (46%)."
Those figures suggest a familiar pattern in enterprise tech adoption: early movers stepping back from their decisions, while the pack of followers is just getting started.
Yet the report continues:
"Despite the emphasis and clear need, only 29% of companies have established formal guidelines. Instead, 66% are currently setting up these policies, which means leaders may need to keep pausing Gen-AI until they roll out a course of action."
That said, the white paper’s findings are inconsistent in some respects, and thus present a confusing picture – conceivably, one of customers confirming a security researcher’s line of questioning. Imagine that: confirmation bias in a Gen-AI report!
For example, if 98% of IT leaders say they have paused enterprise AI programmes until organizational guidelines are put in place, how are 64% of the same survey base able to report that Gen-AI is still being used in “some or all” of their departments?
One answer may be that, as diginomica found last year, ‘departmental’ use may in fact be individuals experimenting with cloud-based tools as shadow IT. That aside, the white paper confirms that early enterprise adopters may be reconsidering their incautious rush.
For full post, please visit:
https://diginomica.com/ai-two-reports-reveal-massive-enterprise-pause-over-security-and-ethics