Prompt Engineering in ITSM Scenarios
The opportunity that artificial intelligence (AI) capabilities offer IT service management (ITSM) has been widely written about. Generative AI has added to this and accelerated real-world AI adoption by ITSM teams. But how can you ensure that prompt-based generative AI capabilities are used to best advantage in your ITSM scenarios? To help, this blog explains the basics of “prompt engineering” before sharing the most common prompt engineering mistakes to avoid so you get better results from generative AI tools.
What is prompt engineering?
Prompt engineering is the activity of designing and refining input prompts to interact with AI models most effectively, particularly large language models (LLMs) like ChatGPT. Prompt engineering aims to elicit the most accurate, relevant, and valuable responses from the AI model by better communicating what’s needed.
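To make this concrete, here’s a minimal sketch of what a prompt looks like in code, assuming the OpenAI Python client is used; the model name and prompt wording are illustrative assumptions, not recommendations:

```python
# A minimal sketch of sending a prompt to an LLM, assuming the OpenAI
# Python client (pip install openai); the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A vague prompt: the model has to guess the context and format.
vague = "Please share some problem management guidance"

# A more engineered prompt: role, context, and expected output are explicit.
engineered = (
    "You are an experienced ITSM consultant. "
    "Summarize ITIL problem management guidance for a mid-sized IT team, "
    "as five short bullet points."
)

for prompt in (vague, engineered):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

Running both prompts side by side is a quick way to see how much the added context and formatting instructions change the response.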
But what do you need to get the best from your prompts? For example, what should you do if you want to use a generative AI tool to help create problem-management guidance for your organization?
Getting the prompt engineering basics right – 5 key steps
When crafting prompts for generative AI tools, five key steps will help you get the responses you need:
- Create clear and specific prompts – use precise language in your prompts to reduce ambiguity. The more specific the prompt, including the context, the more targeted the response. For the problem-management guidance example, while the prompt “Please share some problem management guidance” will elicit a response, there are better ways to formulate the prompt (as covered below).
- Structure your prompt appropriately – use explicit prompt instructions like “Explain,” “Summarize,” or “Describe,” and provide examples (within the prompt) to illustrate the type of response expected. This helps the model understand the format and content requirements. For the problem-management guidance example, add context to your prompt by using the words “ITSM” or “ITIL” and perhaps give an example piece of guidance such as “For example, separate the responsibilities of incident management and problem management.”
- Refine your prompt for better results – experiment with different prompt phrasings and structures to see what works best. You can use the generative AI tool’s responses as feedback to adjust and improve your prompts. In the case of the problem-management guidance example, the initial response might be too high-level to be helpful. However, these high-level areas can be turned into separate prompts that dig deeper into the available guidance. For example, if a piece of guidance is to invest in problem management tools and techniques such as Kepner-Tregoe, use a secondary prompt to learn more about that technique and the popular alternatives.
- Look out for inaccuracies – the generative AI tool’s responses are only as good as the data it was trained on and the quality of your prompt. The tool might not be able to provide a suitable response for a given subject area. Or, as with the problem-management guidance example, if your prompt hasn’t provided context (by including “ITSM” or “ITIL”), the response will likely include guidance related to what ITSM best practice calls incident management rather than the problem management guidance you need.
- Further refine prompts as needed – for example, in the case of the problem-management guidance, the prompt could stipulate that the response should be written in the style of an ITSM professional (the sketch after this list pulls these refinements together).
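Putting the five steps together, the sketch below shows one way to assemble such a prompt programmatically. It is only an illustration: the build_prompt helper, the wording, and the example guidance are assumptions rather than a prescribed template.

```python
# A sketch of assembling a prompt that follows the five steps above:
# specific wording, an explicit instruction, context, an in-prompt example,
# and a stated audience/style. The helper name and wording are illustrative.

def build_prompt(topic: str, audience: str, example: str) -> str:
    """Assemble a context-rich prompt for an ITSM guidance request."""
    return (
        f"Describe practical {topic} guidance, in the context of ITSM and ITIL, "
        f"written in the style of an ITSM professional for {audience}. "
        f"Return five concise recommendations. "
        f"For example: '{example}'"
    )

prompt = build_prompt(
    topic="problem management",
    audience="a newly formed problem management team",
    example="Separate the responsibilities of incident management and problem management.",
)
print(prompt)
# If the first response is too high-level, refine: reuse a point from the
# response (e.g. a named technique) as the topic of a narrower follow-up prompt.
```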
Finally, it’s essential to appreciate that using prompt engineering to elicit the responses you need from a generative AI tool will take trial and error. This is reflected in the following list of common prompt-creation mistakes.
The main mistakes people make with prompt creation
In some ways, the common prompt-engineering mistakes people make are the flip side of the above basics. But it’s worth describing them in more detail to help people avoid them:
- Not ensuring that the generative AI tool is suitable – if the tool’s training data hasn’t included sources related to the area being questioned, it will not be able to provide a helpful response (but it still might try, so be careful).
- Using vague prompts – these simply result in ambiguous or even irrelevant responses. For example, “Tell me about problem management” could bring back almost anything! This prompt is a great example of the need for context so that, in this case, the generative AI tool knows it’s ITIL or ITSM problem management that’s meant.
- Not appreciating that long prompts can confuse the generative AI tool – for example, a prompt such as “Explain how best to adopt problem management in a new ITSM practice for a global organization operating in different time zones” is liable to produce an unfocused, and perhaps confused, response as it tries to address every element of the prompt at once.
- Settling for the first response – as per the earlier example, creating usable problem management guidance will likely require a multi-tier prompt approach, digging deeper to get more granular guidance as needed (a minimal multi-turn sketch follows this list).
- Not providing examples in your prompt – if you have a specific vision for how your problem management guidance should look, include an example or template to help ensure that the responses are in the desired format.
- Not specifying the audience – for example, failing to include that the problem management guidance will need to be used by ITSM professionals (or even dedicated problem managers).
- Assuming the AI’s responses are 100% correct – for something as technical as problem management, there’s a need for a subject matter expert to verify the information. Failing to do this could result in misguided activities and poor decisions.
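To avoid settling for the first response, follow-up prompts can be sent in the same conversation so the model builds on its earlier answer. The sketch below again assumes the OpenAI Python client; the model name and prompt text are illustrative.

```python
# A sketch of a multi-tier (follow-up) prompt, assuming the OpenAI Python
# client; earlier messages are passed back so the follow-up has context.
from openai import OpenAI

client = OpenAI()
model = "gpt-4o-mini"  # illustrative

messages = [
    {"role": "user", "content": (
        "As an ITSM professional, summarize ITIL problem management guidance "
        "for a global organization, as five bullet points."
    )},
]
first = client.chat.completions.create(model=model, messages=messages)
first_answer = first.choices[0].message.content

# Dig deeper into one area of the first answer rather than stopping here.
messages += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": (
        "Expand on the problem investigation techniques you mentioned, "
        "including Kepner-Tregoe and popular alternatives, for problem managers."
    )},
]
second = client.chat.completions.create(model=model, messages=messages)
print(second.choices[0].message.content)
# As noted above, a subject matter expert should still verify the output.
```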
How are you using prompt engineering in the context of ITSM? Let me know in the comments!