The Challenges of AI Deployment and Responsible AI

The opportunity for artificial intelligence (AI)-enabled capabilities in IT service management (ITSM) and other business operations is already here. However, adopting organizations face two key challenges. The first relates to the actual AI deployment, and the second relates to responsible AI use. This blog looks at both challenges, highlighting the key issues IT organizations must address to succeed with AI. But first, here are some statistics that show the rapid rate of AI adoption.

AI-enabled capabilities are already here

Recent AI adoption survey data from ITSM.tools found that 36% of survey respondents already used corporate AI capabilities. An even more significant 66% were using free AI tools such as ChatGPT to help them at work. Earlier survey data (2023) found that three-quarters of ITSM tools had already added AI-enabled capabilities. So, AI-enabled capabilities are already available in organizations, and the two challenges, if left unaddressed, are a significant obstacle to future operational improvements.

In the same way that the age-old adage "a fool with a tool is still a fool" applies to automation, unprepared IT organizations can deploy AI-enabled capabilities that are of little benefit to the business. In fact, they could make business operations and outcomes worse if the various AI challenges aren't addressed.

The challenge of AI deployment

The AI deployment challenge has two parts for IT organizations. The first is how best to apply AI-enabled capabilities to IT operations, and the second is how best to deliver and support new AI-enabled capabilities in the wider business. While these are different AI opportunities, many of the issues overlap:

  • Data-related challenges – poor data quality leads to inaccurate AI models and unreliable results, but even high-quality data isn't sufficient; it must also be available in adequate volume, consistently formatted, and free of bias.
  • Security challenges – robust security measures are needed to protect sensitive data against breaches and to ensure compliance with regulations such as the General Data Protection Regulation (GDPR).
  • Technical challenges – complex AI models can be difficult to interpret and explain, and ensuring they can handle large volumes of data and high user demand is critical to success.
  • Integration issues – integrating AI with legacy systems can be technically challenging, but ensuring that AI-based capabilities can seamlessly interact with other enterprise systems and data sources is crucial for successful AI deployment.
  • Organizational challenges – there's not only the resistance to change from employees and other stakeholders that's prevalent in most change projects, but often also a shortage of skilled professionals to develop, deploy, and manage the AI.
  • Compliance issues – this includes regulatory compliance (related to data privacy, security, and AI ethics) and keeping up with evolving standards and best practices for AI deployment.

The ethical implications of AI use could be included in this list, but they’re covered in more detail shortly.

An example AI deployment challenge – data

The IT phrase “garbage in, garbage out” applies to AI-enabled capabilities as much as it does to more traditional IT systems. Incomplete datasets can lead to inaccurate AI models and predictions. Inconsistencies in data formats and units of measurement can cause errors in AI model training and validation. Unfortunately, combining data from multiple sources that use different formats, structures, and standards can be complex and time-consuming.

Insufficient data availability hinders the training of AI models in some domains. Data containing errors, outliers, or irrelevancies can also degrade the performance of AI models. Plus, there's the issue of training data bias (covered in more detail below), where the AI model can learn and perpetuate the bias, leading to incorrect and unfair outcomes.
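
To make this challenge concrete, a lightweight pre-training data audit can catch several of these problems early. The following is a minimal sketch using pandas; the column names, thresholds, and sample data are hypothetical illustrations, not a complete data-quality framework.

    import pandas as pd

    def audit_training_data(df: pd.DataFrame, max_missing_ratio: float = 0.05) -> list[str]:
        """Flag common data-quality problems before model training."""
        issues = []

        # Incomplete datasets: columns with too many missing values
        missing = df.isna().mean()
        for col in missing[missing > max_missing_ratio].index:
            issues.append(f"'{col}' is {missing[col]:.0%} missing (limit {max_missing_ratio:.0%})")

        # Duplicate records can skew both training and validation
        duplicate_rows = df.duplicated().sum()
        if duplicate_rows:
            issues.append(f"{duplicate_rows} duplicate rows found")

        # Inconsistent formats: columns mixing numeric and non-numeric values
        for col in df.columns[df.dtypes == object]:
            coerced = pd.to_numeric(df[col], errors="coerce")
            if coerced.notna().any() and (coerced.isna() & df[col].notna()).any():
                issues.append(f"'{col}' mixes numeric and non-numeric values")

        return issues

    # A hypothetical ITSM ticket export with typical quality problems
    tickets = pd.DataFrame({
        "resolution_hours": ["4", "6.5", "2 days", None],  # mixed units plus a gap
        "priority": ["P1", "P2", "P2", "P2"],
    })
    for issue in audit_training_data(tickets):
        print("WARNING:", issue)

Running checks like these before every model training or retraining run turns "garbage in, garbage out" from a slogan into a testable quality gate.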

The challenge of responsible AI

The subject of “AI ethics” is a hot topic. This industry term for responsible AI use covers various areas, including:

  • The potential for AI bias – training data can contain biases that are then reflected in AI models, leading to discriminatory decisions and outcomes. There are, however, techniques to minimize the bias – for example, ensuring diversity in data, using statistical methods to detect and mitigate bias in data sets (the first sketch after this list shows one such check), and using fairness-aware algorithms and metrics.
  • Ethical considerations – deciding what constitutes ethical AI behavior can be subjective and vary across contexts and cultures. For example, AI can disrupt job markets by automating tasks, raising ethical concerns about job displacement and economic inequality. In terms of specific use cases, allowing AI to make autonomous decisions in areas like healthcare and criminal justice raises ethical questions about responsibility and accountability. A fundamental way to address this challenge is to establish ethical guidelines for AI use and multidisciplinary ethics committees to oversee AI projects and provide guidance on ethical issues.
  • Transparency of AI models – advanced AI models are inherently complex and challenging to interpret. However, ensuring end-users and stakeholders understand how AI-enabled capabilities work is crucial for gaining AI acceptance and trust. Model explainability techniques help address this challenge – for example, using interpretable models or post-hoc explainability techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations); the second sketch after this list shows a related, simpler technique.
  • Governance – responsible AI requires the development of robust governance frameworks to oversee AI deployment, including corporate ethical guidelines, standards, and accountability mechanisms. Creating this governance framework must involve a spectrum of stakeholders, including AI ethicists, legal experts, and the affected communities.
  • Social and cultural implications – AI-enabled capabilities must be designed to respect and adapt to diverse cultural norms and values, which is extremely difficult in a global context. There’s also the risk that AI-enabled capabilities could exacerbate existing social inequalities. AI team diversity, the provision of cultural sensitivity training, and the above ethical actions all help minimize this challenge’s impact.
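
As a concrete illustration of the bias point above, one of the simplest statistical checks compares positive-outcome rates across groups (often called demographic parity). This is a minimal sketch with hypothetical decision data; the 0.8 threshold borrows from the "four-fifths rule" heuristic used in US employment selection, and a real fairness audit would combine several metrics.

    import pandas as pd

    def selection_rate_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Ratio of the lowest to the highest positive-outcome rate across groups.

        A ratio near 1.0 suggests similar treatment; values below ~0.8
        (the "four-fifths rule" heuristic) warrant investigation.
        """
        rates = df.groupby(group_col)[outcome_col].mean()
        return rates.min() / rates.max()

    # Hypothetical approval decisions produced by an AI model
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    ratio = selection_rate_ratio(decisions, "group", "approved")
    print(f"Selection-rate ratio: {ratio:.2f}")  # 0.33 here, so worth investigating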
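
On the transparency point, and before reaching for SHAP or LIME, a lightweight model-agnostic starting point is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn on synthetic data; the feature names are hypothetical, and per-prediction explanations would still call for tools like SHAP or LIME.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for real data (e.g. ticket-routing features)
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["priority", "category", "age_hours", "reassignments"]  # hypothetical

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in accuracy:
    # a large drop means the model leans heavily on that feature
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
        print(f"{name:>15}: {score:.3f}")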

An example responsible AI challenge – ethical considerations with respect to people

An organization must consider many non-technical consequences when choosing where to use AI-enabled capabilities. One of the highest-profile ethical issues is job displacement: AI and automation can lead to significant job losses, particularly in industries with a high proportion of routine, repetitive tasks. For example, AI-driven robots might replace human workers in manufacturing operations.

There’s also a second ethical perspective to this – economic inequality. This is because the benefits of AI are unevenly distributed, creating a divide between the people who can leverage AI in their roles and those who cannot (and might lose their jobs to AI). For example, knowledge workers will benefit from AI advancements, but low-skilled workers might face unemployment.

Are you facing the challenges of AI deployment and responsible AI in your organization? Please let me know in the comments!


Posted by Joe the IT Guy

Native New Yorker. Loves everything IT-related (and hugs). Passionate blogger and Twitter addict. Oh...and resident IT Guy at SysAid Technologies (almost forgot the day job!).