A blog by Richard Horton (Vice Chair for itSMF International)
What are we looking at here?
What's going on with AI? It seems to be a question everyone is asking. So much so that people don't feel the need to explain that "AI" stands for Artificial Intelligence. The opportunities seem to be boundless ... but are they opportunities for exploitation as much as for public good?
We recently started a 4-part webinar series looking at the place of AI in ITSM, arranged by itSMF International for the benefit of itSMF Chapters around the world. For the first of these sessions, we were delighted to welcome James Finister. James will be known to many as an industry luminary who sits on the committees that set standards for AI use. We asked him to talk with us about the challenges AI poses and the answers to some of those challenges. James gave us a wide-ranging overview of the subject and highlighted a number of things to think about when looking at AI use.
Challenges in AI use
Here are my potted highlights, in no particular order.
· AI poses us with an ethical transparency challenge. Are we going to pretend that it is a human interaction, or be honest that we are using AI to help us? Particularly where there is a chatbot interface, being uncannily human without admitting that there is no human involved can lead to trust issues when the truth dawns.
· There is an old saying in the IT world: "Garbage In, Garbage Out". No matter how good your IT system, the end result can only be as good as the quality of your starting data allows. It's the same with AI. AI algorithms have been shown to be biased towards a white male viewpoint when fed information by white males, for example. Another way of looking at this is that current AI decisions are driven by human decisions.
· There is an equality challenge - does AI serve all of us equally, or does it accentuate the sort of polarization of opportunity and wealth that we have seen with other technology trends? The risk of this is very real. Or maybe it's an opportunity to turn this on its head by making IT more accessible.
· There is a lot of excitement and hype about AI, but it is a lot less clear how you measure the benefit that it brings. Before we invest significantly in it we should be clear what we expect to get out of it. And that is a time and data investment as well as a financial one.
· There are good opportunities for using AI to improve operational efficiency for background tasks where the risks are lower, and your suppliers are probably already doing this even if you don't realize it.
· This is a new world where everyone is working out what is possible. There will be underexploited and unexplored possibilities where AI can help us.
· We are responsible for what we use AI for. We will be held to the same standards for our AI use as for everything else we do, and as such our policies also need to cover how our suppliers use AI.
· ISO 42001 addresses AI Management Systems, but over 150 standards have significant AI implications.
When AI processes our data, it lacks the context of what that data means. It has no consciousness of subtexts, and no conscience about what the consequences might be; it has no sense of an appropriate way of behaving other than what it has learned from the data fed to it. It inherently amplifies any dataset that it is given. As such it makes you more of what you are already. If you have poor internal controls and use AI, it will amplify them rather than prove your saviour.
What happens now?
This is an area of huge and rapid change. International standards are evolving quickly, and new legislation is taking shape in response. While there is new legislation to contend with, the reality check is that most prosecutions will probably come under existing laws that AI has led us into accidentally breaking.
In short,
· AI is here and will still touch us even if we ignore it.
· We should think about how those we work with use it as well as how we use it ourselves - if they are our suppliers, how they process our data is our responsibility.
· AI can cut through obscurities and resolve all sorts of data challenges.
· AI has a natural tendency to amplify whatever it is fed.
· We still need to comply with the law, and it's our responsibility to ensure that our AI use does so.
So, AI can and should help us. But let's be mindful about how we use it and share our experience so we can learn together.
AI and Information Security
What's the Information Security connection here? Risks include accidentally leaking data that shouldn't be shared and compromising the integrity of our data. Opportunities include the ability to spot unusual patterns of behavior more easily and be more effective in combating cyber-attacks. But that is just scratching the surface for both risks and opportunities.
James' reaction to this was that the biggest threat to Information Security is an attack by an AI agent, but that an AI agent might also provide our best defense. As in the film Terminator 2, there is good AI and bad AI. Which one will win?
The session was the first in a series, so to quote from Terminator, "I'll be back". We're looking forward to homing in on the human perspective next time.
The AI version of this webinar
But before I go, what, you may be wondering, did AI make of James' session? Well, here it is, reproduced below. See if you find it or my summary of it more helpful!
AI thought the Key Questions were:
* What are the key ethical and governance issues surrounding AI in IT service management?
* Are there any specific standards or regulations that organizations should be aware of when adopting AI?
* What are the potential risks associated with shadow AI in organizations?
And this was the text it provided (with a couple of very minor tweaks - I think you are more interested in the ISO standard for AI than the one for plain end steel tubes!)
"Summary:
The first webinar in a series focused on the role of AI in IT service management featured insights from James Finister, who discussed the current landscape of AI adoption within organizations. He noted that 42% of organizations have formally integrated AI, with 78% of those finding it beneficial. Finister highlighted the challenges of trust in AI decision-making, comparing it to the scrutiny faced by autonomous vehicles. He also addressed the phenomenon of "shadow AI," where employees utilize AI tools without organizational approval, and emphasized AI's effectiveness in IT operations, particularly in early error detection and enhancing problem management.
In his session, James also referenced the survey by Atomic Works People Cert regarding AI adoption in organizations.
Finister explored the ethical implications of AI, particularly concerning chatbots and virtual agents, and raised concerns about the potential for AI to develop negative traits. He pointed out the difficulties in maintaining proactive communication during major incidents, which can hinder problem-solving efforts. Additionally, he expressed skepticism about AI's role in recruitment, suggesting it may lead to the selection of average candidates. He identified underutilized applications of AI, such as contract rationalization and real-time customer feedback, while stressing the importance of accurately measuring AI's impact and the need for a supportive cultural environment for successful AI integration.
The discussion also covered the rapid pace of AI adoption, and the associated governance challenges organizations face. Finister emphasized the necessity of adhering to evolving standards and regulations, highlighting the importance of corporate governance shaped by external legislation. He referenced the OECD principles for AI and the Bletchley Declaration, noting concerns about inclusivity and the need for international standards. Finister concluded by outlining strategies for managing AI in supplier relationships, advocating for transparency in procurement contracts and ethical governance at the board level, while encouraging organizations to educate stakeholders about AI's implications.
Chapters & Topics:
Introduction to AI in IT Service Management Webinar Series
Richard Horton opened the webinar series on AI in IT service management. James Finister followed with a brief introduction of his background and the significance of AI in the industry, emphasizing both its potential benefits and the ethical issues that need to be considered. He outlined the session's focus on the current state of AI, its applications, and the challenges it presents.
* Ethical and governance considerations in AI adoption.
Understanding AI Adoption and Its Impact in ITSM
James Finister presented findings on AI adoption, revealing that while 42% of organizations have formally integrated AI, a significant number are using AI tools like ChatGPT informally. He pointed out the skepticism towards AI decision-making, drawing parallels to the higher standards expected of autonomous vehicles. Finister advocated for the use of AI in IT operations, particularly for early error detection and integration of various tools, as well as its role in improving problem and knowledge management.
* Challenges and risks associated with shadow AI in organizations.
Ethical Considerations in AI for IT Service Management
James Finister addressed the growing use of AI in IT service management, emphasizing the ethical dilemmas surrounding chatbots and virtual agents. He pointed out issues of trust and the rapid adaptation of AI, which can lead to crises. Additionally, he raised concerns about AI's role in agent recruitment, suggesting it may favor average candidates over exceptional ones.
Exploring AI Applications and Challenges in IT Service Management
James Finister highlights various areas where AI can enhance IT Service Management, including contract analysis, experience management, and proactive communication during incidents. He cautions that many organizations struggle to measure the benefits of AI effectively and that subjective assessments may not reflect true ROI. Additionally, he points out the limitations of AI, such as its lack of context and consciousness, which can lead to misunderstandings and ethical concerns.
* The importance of measuring AI benefits and outcomes.
AI Governance and Ethical Considerations in ITSM
James Finister addressed the challenges of AI adoption in organizations, particularly the need for robust corporate governance and internal controls to mitigate risks. He pointed out that many organizations struggle to keep pace with the numerous standards and regulations emerging in the AI landscape. Additionally, Finister raised ethical concerns about AI's impact on society, including the need for transparency and the potential exclusion of vulnerable groups.
* The role of standards and regulations in AI implementation.
Overview of AI Legislation and Principles
James Finister addressed the OECD principles for AI and the Bletchley Declaration, expressing concerns about their development being primarily influenced by Western governments. He pointed out the difficulties in reconciling the upcoming UK AI Act with the existing EU AI Act, both of which are based on the OECD principles. Finister also mentioned the significance of international standards from ISO and IEEE, as well as the UN report on AI for Humanity.
Discussion on AI Standards and Their Implications in ITSM
James Finister discussed the growing number of AI standards, noting that the UN identified over 150 standards with implications for AI. He pointed out three key standards, including ISO 42001 for AI management systems, which organizations should be familiar with to ensure sensible adoption and operation of AI. Finister also stressed the importance of performance evaluation and continuous improvement in AI implementation.
Best Practices for AI Integration in Supplier Management
James Finister discussed the critical role of understanding AI's impact on supplier management and the necessity of integrating AI considerations into procurement processes. He advised organizations to proactively address AI in contracts, ensuring ethical standards are upheld. Additionally, he warned against the pitfalls of blindly trusting vendors and encouraged the establishment of robust measurement systems to track AI's influence."
Click here to watch the recording of the webinar
If you want, you can register for the next session in this series, where Simone Moore considers "Robots Can Cry, But Should They?"