
Designing for AI in Service: 4 Keys to Smarter Support Experiences

Almost everyone is using—or at least talking about using—generative AI.


The potential impact for individuals and organizations is huge. A well-designed generative AI experience can feel like magic: quick answers, fewer repetitive tasks, happier customers, and more confident employees.


But “can” is the key word.


Generative AI isn’t magic. Like anything else, it takes time, energy, and intentional design to deliver the experience you're promised.


You need to thoughtfully plan, meticulously clean your data, and clearly define the expectations you have of your AI—and what it should expect from you.


So how do you actually do that?


If your organization is planning to leverage generative AI in service, these four areas are essential to consider.


 

1. Know What You Want

If you don’t know what you want from AI, it won’t magically become clearer after you enable a feature or start paying for a tool.


A tool doesn’t create value. Your vision creates value. The tool just helps you get there.


You need a clear—and realistic—understanding of the value AI can bring to your business. You’re not going to leap from manual processes to fully autonomous agents overnight. And that’s okay.


What to do:

  • Define a clear pilot use case. Focus on a specific, achievable task—like summarizing cases or incidents, or generating draft responses—that’s tied to a real business need.

  • Select a manageable pilot area. Choose a team or workflow with lower risk and strong internal support, so you can test without high stakes.

  • Identify internal champions. Engage a few people who will try it, offer feedback, and advocate for what’s working (and what isn’t).

  • Track results and lessons learned. What worked, what didn’t, and what does AI need to do better?

  • Refine your strategy. Use pilot feedback to adjust goals, expectations, and use case design before expanding AI across teams.


You’re not flipping a switch. You’re designing for scale—one step at a time.


💡ODNOS Insight: Consider one of these use cases as a pilot (a sample summarization prompt follows the list):

  • Case or Incident Summarization - internal summaries of tickets for handoffs, escalations, or wrap-ups

  • Reply Suggestions - if you’re using chat or email, consider having AI draft responses to your customers for lower-complexity cases
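
If you go the summarization route, the prompt itself can stay simple. Here’s a minimal sketch in Python—the word limit, structure, and placeholder name are illustrative assumptions, not a prescription:

# A hypothetical internal-handoff summarization prompt (illustrative only).
SUMMARY_PROMPT = (
    "Summarize this case for an internal handoff in under 120 words.\n"
    "Cover: the customer's issue, what has been tried, current status,\n"
    "and the single next action for the receiving agent.\n"
    "No greetings or customer-facing language.\n\n"
    "Case details:\n{case_details}"
)

# Fill the placeholder with your structured case record and interaction
# notes before sending it to whatever AI service you use.
prompt = SUMMARY_PROMPT.format(case_details="Printer jam, second occurrence...")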


 

2. Get Your House in Order

Don’t set up anything without first determining your use cases—and evaluating whether your data and structure can support them.


If you expect AI to pull from a knowledge base or prior communications, those sources need to be accurate, relevant, and up to date.


It won’t know that a knowledge article from 2018 is obsolete unless you tell it.


And if all of your case data lives in loosely written notes or free-form fields with inconsistent categorization, that’s going to lead to generic responses.


What to do:

  • Review free-form fields. Can you replace open text with structured picklists, checkboxes, date fields, or number fields to create clear signal fields AI can reference? (See the sketch after this list.)

  • Audit your knowledge base. Are articles current? Are they categorized by product, solution, topic, or case reason? 

  • Standardize where you can. Predictable structures help AI understand how case fields are used, what values are available, and what should be filled in when.
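
To make that concrete, here’s a minimal before-and-after sketch—the field names are hypothetical, but the point is that structured values give AI unambiguous signals instead of prose to interpret:

# Before: everything AI needs is buried in a free-form note.
case_before = {
    "subject": "Printer issue",
    "notes": "Cust called again, same jam as last month, wants replacement, VIP acct",
}

# After: the same facts captured as structured signal fields (hypothetical names).
case_after = {
    "subject": "Printer issue",
    "case_reason": "Hardware - Paper Jam",   # picklist
    "is_repeat_issue": True,                  # checkbox
    "requested_resolution": "Replacement",    # picklist
    "account_tier": "VIP",                    # picklist
    "notes": "Customer called again about the same paper jam as last month.",
}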


 

3. Context is Key 

If someone told you, “This customer is upset,” and nothing else—how would you help them? Would you send them a replacement? Refund them? Offer a discount for a future order? All are possible, and none are wrong—without context.


The same goes for AI.


Your AI isn’t just looking for what happened—it needs to know why it matters, who it impacts, and what’s already been tried.


The most effective prompts and copilots pull from:

  • Record details

  • Product and asset data

  • Account history

  • Entitlements and SLAs

  • Previous interactions or outcomes
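
As a rough illustration of what “pulling from” those sources can mean in practice—every object and field name below is a hypothetical placeholder, not any particular CRM’s API—the goal is to assemble them into one grounding block the prompt can reference:

def build_case_context(case, account, asset, entitlement, history):
    """Combine related records into one context block for a prompt.
    Every field referenced here is a hypothetical placeholder."""
    recent = "; ".join(h["summary"] for h in history[-3:]) or "none"
    return (
        f"Case: {case['subject']} (reason: {case['case_reason']}, priority: {case['priority']})\n"
        f"Account: {account['name']} (tier: {account['tier']})\n"
        f"Asset: {asset['product_name']}, purchased {asset['purchase_date']}\n"
        f"Entitlement: {entitlement['support_level']}, response SLA {entitlement['sla_hours']}h\n"
        f"Recent interactions: {recent}"
    )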


What to do:

  • Review your data model. If your agents are manually clicking through five different records to understand the situation, AI will struggle too.

  • Make connections between objects. Ensure related records are linked logically—Assets to Products, and so on.

  • Consider external sources. Sometimes your CRM alone isn’t enough. Look at integrations with external systems—but build with long-term strategy in mind, not point solutions.


Good AI needs full context. Build a data model that delivers it.


 

4. Onboard Your AI

Think back to your first day on the job. You had questions. You needed training. You weren’t expected to perform like a 10-year veteran. You needed to learn not just the systems—but the expectations your boss, team, and customers had of you.


AI is no different. Yet we often skip this part entirely.


If you want AI to represent your brand, guide your customers, or support your team, you need to teach it how—and why.


What to do:

  • Define expectations. What tone, voice, or level of formality should it use? Should it be empathetic, direct, reassuring?

  • Explain the “why.” What’s the business goal? What does success look like? What does the customer care about in this moment?

  • Be specific. If you expect AI to draft replies or summarize interactions, outline how they should be structured. Use logic to guide and tailor outputs (see the sketch below).
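
As one illustration—a sketch only, with hypothetical priority values and a structure you’d adapt to your own brand—those three points can translate directly into how a prompt is built:

def draft_reply_prompt(case_context, priority):
    """Sketch of a reply-drafting prompt that sets tone, states the 'why',
    and uses simple logic to tailor the output. Illustrative only."""
    # Logic: adjust tone based on case priority (hypothetical values).
    tone = "empathetic and reassuring" if priority == "High" else "friendly and concise"
    return (
        f"You are a support agent for our brand. Write in a {tone} tone and avoid jargon.\n"
        "Goal: resolve the issue on the first reply and show the customer we value their business.\n"
        "Structure the reply as: 1) acknowledge the issue, 2) state the next step and who owns it,\n"
        "3) give a realistic timeframe.\n\n"
        f"Context:\n{case_context}"
    )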


 

A Mini Readiness Checklist

Before you hand anything over to AI—make sure your foundation is ready. Here’s a quick gut-check based on the four essentials:


Know What You Want 

◻️ Have we defined clear business goals for using AI (e.g., reduce handle time, improve first-response quality)?

◻️ Have we identified a realistic pilot use case that’s meaningful, manageable, and measurable?

◻️ Do we have internal buy-in from key teams or champions to test and provide feedback?

◻️ Are we prepared to capture lessons learned from our pilot to inform broader rollout?

◻️ Have we established what success looks like—for both the business and the end users?


Get Your House in Order

◻️ Are free-text fields replaced or complemented by structured picklists or signal fields?

◻️ Is our knowledge base accurate, current, and categorized by product/solution/topic?

◻️ Are fields used consistently across teams?

◻️ Do we have a plan to phase out outdated fields or clean up legacy values?

◻️ Are there clear data entry guidelines in place so new records follow expected structure?

◻️ Have we identified and documented which fields are most important for AI to reference—and ensured their consistency through validation rules or automation?


Context is Key

◻️ Does our data model bring together everything needed to understand and resolve an issue—across related objects like Accounts, Assets, Entitlements, etc.?

◻️ Are agents currently piecing together context manually—or relying on experience instead of structured data? If so, is there a plan to structure that information in a way AI can reliably access and use?

◻️ Have we mapped out where external data may be needed? And do we know what sources, which data points matter, and how that data will be connected or integrated into our system?


Onboard Your AI

◻️ Have we defined the tone, voice, and formality AI should use?

◻️ Do our prompts include the “why”—what we want the AI to achieve?

◻️ Are we using logic to adjust responses based on case type, priority, or persona?

◻️ Are we testing and refining prompts like we would a new employee or process?


If you're checking most of these boxes—you're on your way to smarter service.

If not? That’s okay. This isn’t a one-and-done setup. 


 

TL;DR

Generative AI can elevate your service experience—but it won’t fix bad data, disconnected systems, or unclear processes.


If you want it to work, you need:

  • A clear vision of what you want it to do

  • Clean, structured, and current data

  • A connected context that AI can follow

  • Thoughtfully designed prompts and logic that reflect your brand


AI isn’t a switch you flip. It’s a system you design—for scale, for trust, and for impact.


Want help walking through this with your team? Let’s talk.
