
What is Intent AI and How do I Use It?

Bright uses large language models (LLMs) to analyze learner statements and determine whether they match the tone, meaning, and substance of your standards.

The 3 statements below are excerpts from a software sales manager's discussion with a prospect. Read each statement, then consider what the common thread is across the 3 statements.

  1. Thanks for letting me know that you already have a CRM provider in place. Have you ever considered getting a price quote from an alternative provider?
  2. I’m glad you’re happy with Salesforce. For what it’s worth, we’ve had quite a few companies similar to yours switch from Salesforce in the last few months. Do you have a minute to hear why they made that choice?
  3. Thanks for your willingness to share a little about your current platform. I hope you don’t mind me asking, but how is your current platform using generative AI? I ask because this feature has saved our customers millions this year compared to platforms that don’t have this feature.

In each of these statements the sales manager: 

  • Is trying to overcome an objection from the prospect
  • Is using open-ended questions
  • Is first responding kindly to the objection before making the attempt

Each statement uses VERY different words and language, but they all share a similar intent. And it just so happens that each one is correctly applying the training they received from their company. 

With this in mind, let's talk about Bright Intent AI. 


How it Works

Elsewhere we've outlined how Term and Phrase AI matching works. The concept there is simple: we look for key phrases in the learner's response that match the acceptable phrase alternatives entered in your moment or conversation simulation node. 
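Bright's implementation isn't public, but as a rough mental model (the function and phrase list below are hypothetical), phrase matching behaves something like a case-insensitive check against your list of acceptable alternatives:

```python
def phrase_match(learner_statement: str, acceptable_phrases: list[str]) -> bool:
    # Hypothetical sketch: a statement "hits" if it contains any
    # acceptable phrase alternative, ignoring case.
    text = learner_statement.lower()
    return any(phrase.lower() in text for phrase in acceptable_phrases)

# Accept either wording of the compliance line
alternatives = ["recorded line", "call may be recorded"]
print(phrase_match("Just so you know, we're on a recorded line today.", alternatives))  # True
print(phrase_match("Thanks for calling City Bank!", alternatives))  # False
```

The key limitation this sketch makes visible: a learner who conveys the right meaning in different words still scores False. That gap is exactly what Intent AI addresses.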

Intent AI works differently. 

With Intent AI, we don't limit learners to specific words or phrases; we let them speak in their own natural tone and style. But we DO expect them to match the meaning and tone of the approaches and standards we've trained them on. To do this in the system:

  • Step 1: Provide acceptable statements that both meet your expectations AND share a similar approach. We've found it usually takes at least 3 intent samples to get good results: 1 is too narrow, and 2 may not provide enough diversity of language, which can cause good learner submissions not to hit as expected. 3 to 5 samples is the sweet spot. 
  • Step 2: Click the Search icon in the Explanation section to bounce your intent statements off the Bright LLM. The result is a generated 3-4 sentence summary of your intent, based on your sample statements. 
  • Step 3: Make sure you AGREE with the generative AI summary. If you do, you'll get great results from the Intent Match feature. If you don't, add more sample intent statements with a little more diversity of style/language that still meet your expectations. 
  • Step 4: Enter the coaching, stars, and other elements of the experience just as you do for the other types of AI, then Save. 
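To build intuition for how intent matching differs from phrase matching, here's a toy sketch. It scores a learner statement against your sample statements by word overlap; this is purely illustrative, since Bright's actual system uses LLMs rather than bag-of-words similarity, and every name below is hypothetical:

```python
from collections import Counter
import math

def cosine_sim(a: str, b: str) -> float:
    # Toy word-overlap similarity; the real system compares meaning
    # with an LLM, not raw word counts.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def intent_score(statement: str, samples: list[str]) -> float:
    # Score against the best-matching of your 3-5 intent samples.
    return max(cosine_sim(statement, s) for s in samples)

samples = [
    "Have you ever considered getting a price quote from an alternative provider?",
    "Do you have a minute to hear why similar companies made the switch?",
    "How is your current platform using generative AI?",
]
score = intent_score("Would you be open to a quick comparison quote?", samples)
```

Even in this toy version, a statement is scored against the samples as a whole rather than checked against an exact phrase list, which is why diverse samples matter: they widen the range of language that can score well.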

You can reference the node view where you'd execute these 4 steps below.

Intent


Tips for Designing Intent AI 

The steps for entering Intent AI are super easy. But there is definitely an art and a science to making this approach work. Here are our top tips for getting good results, fast.

  1. Use Both Full and Partial Intent in Your Rules:
    To design this feature we partnered with amazing industry experts in generative AI. While working, we realized that sometimes the LLM detected SOME fit between a learner statement and the intent, but not a FULL match. 

    For example, suppose the intent is to 1) provide a warm greeting, 2) note the company name, 3) note a recorded line, and 4) ask for the customer's name. A full match according to your samples might be something like "Hello, thank you for calling City Bank. My name is Rob and we're speaking on a recorded line. With whom do I have the pleasure of speaking?"

    This is clearly a better fit for the intent than a learner who says 'Thanks for calling City Bank. We're on a recorded line. May I have your name?' The learner may also leave out 1 of the 4 elements, which is 'close' to the intent but not a true match.

    To reflect this we offer both Full and Partial Intent Match in the dropdown (noted below). The best practice in most simulations is to enter BOTH match types with the SAME intent samples. In this way, you can allow learners to proceed in the conversation simulation if they don't match intent perfectly. This also allows you to provide lower star ratings + different coaching to the learner for partial intent matches. 

    The best way to see how flexible the Partial Intent match really is? Simply test!

    Intent Dropdown

  2. Stay Humble - the LLM is Pretty Darn Good At Defining Intent
    There may be times when you DISAGREE with the LLM's analysis of your intent samples. Before you add more samples or decide 'the AI is broken', take a deep breath and re-read your entries. We've been very pleasantly surprised by how good Bright Intent AI is at picking up on subtle intents in samples. 

    For example, during design we noticed that if our intent samples wrote out "thank you" and used more formal terms like "ma'am" across all 3 statements, the LLM might deem a learner submission 'Partial Intent' due to an informal tone (e.g. using 'Thanks' or words like 'cool'). 

  3. Consider Adding Phrase Matches Too
    Intent is a great feature, but you don't need to abandon Phrase matching AI. In fact, you can use BOTH at the same time. It may be a good idea to add simple Phrase match conditions to your AI rules to ensure that a learner matches the intent AND still hits certain key compliance requirements. 

    For example, you might write an Intent condition and then add 'recorded line' as an additional Phrase match. This would mean that the learner has to meet your intent AND use the phrase 'recorded line' somewhere in the statement. 

  4. Test Before Releasing to Learners
    One of the core AI principles at Bright is to stress test and improve AI conversations and coaching with sample learners before releasing experiences in a live program. NEVER just build your rules, publish, and start training learners. While it may be fast to click the buttons and build the experience, you'll get much better quality results if you use our other features to verify the rules are working as expected before using them in formal company training programs.