Create smart trackers
Smart trackers use AI to identify and surface specific concepts by taking into account diverse words, sentence structures, and contexts.
To train a smart tracker, you need at least 500 recorded English calls; best results are achieved with at least 1500 calls. If you have fewer than 1500 calls, choose concepts that are fairly common, since this enables us to build a model with many examples of sentences that positively match your concept. This is important for building a strong smart tracker.
Tip
Your org or workspace doesn’t have 500 English calls yet? Or maybe it does, but the concept you want to track isn’t that common. In that case, you may want to build a keyword tracker, which surfaces words and phrases in your calls. For more, see this.
Click Company settings > Customize analysis > Trackers. Then:
- In the top right corner, click + CREATE TRACKER, and then choose Smart Trackers.
- Give the tracker a name and description that reflect the concept you want to track. These should be meaningful enough that people at your company who see the tracker understand what it’s tracking.
- Filter the types of calls you want to train the tracker on, for example, outbound calls, external calls, or calls in a certain stage of the deal. These filters help focus the tracker so that it produces more accurate results. They also determine which calls the tracker can work on after it’s activated.
- Give at least 5 examples of real sentences from real calls that fit this concept. We’ll use these sentences to pull sentences from your calls that match (or don’t match) the concept.
- Tag the sentences to train the model to identify which types of sentences match your concept, and which ones don’t. Basic training for the model requires 4 rounds of tagging, with 25 sentences per round.
- After 4 rounds of tagging, review the tracker to see if it’s producing accurate results.
- Results look good? Activate the tracker. Set up a stream when you activate the tracker to automatically collect relevant calls in a folder. Want more accurate results? Keep training the tracker by completing more rounds of tagging.
- After activation, you can continue training the tracker to improve accuracy: go to Company settings > Trackers, locate the tracker you want to keep training, click the action menu beside its name, and click Train more.
Tracker name: This is the name you’ll see everywhere the tracker appears, so choose a name that is short and meaningful. For example, “Pricing objections” is better than “Pricing” or “Customers who say that the price is too high.”
Description: Describe the concept behind the tracker so that anyone who sees the tracker understands what it’s tracking. This description appears when someone hovers over the tracker name.
Tracker filters: Filters narrow down the types of calls that the tracker will be applied to, so you get accurate, meaningful results. Mandatory filters include the following:
- Track when said by: Decide whether you want the tracker to be applied to what your team says, what the customer says, or both.
- Web conference or telephony calls: Set whether the tracker is applied to outbound calls, inbound calls, conference calls, or all three.
- Internal or external: Set whether the tracker is applied to calls that took place between team members only, or those that took place with customers, too.
If you want to further specify which calls the tracker is applied to, click More filters and you’ll see the following optional filters:
- When it was said
- During which opportunity stage
- By which team
For example, let’s say you want to track how your customer support team is introducing themselves on calls from customers. You can choose to apply the tracker to:
- Inbound calls: From customers to the support center
- External calls: Involving team members and customers
- Said during: The first 5 minutes of the call, in the discovery stage, by the Customer Support team
Once you’ve set these filters, click NEXT and you’ll move to the page where you provide example sentences.
Important
Once you set filters for which types of calls you’ll train the tracker on, these filters will be applied going forward and cannot be changed. So, if you train the tracker on inbound calls only, you’ll only be able to use the tracker on inbound calls. If you train the tracker on Jane Smith’s team only, you’ll only be able to use the tracker on Jane Smith’s calls.
Once you’ve prepared the tracker settings, you’re ready to start building the AI model. The first step is giving example sentences.
These sentences are what the AI model uses to ‘understand’ what you are looking for. Provide at least 5 examples of real sentences from real calls that you want the tracker to surface.
Quick tips for writing great example sentences
- Use real sentences that people really said. Check your calls or talk tracks to find them.
- Keep the sentences short and precise.
  - “Tell me a bit more about the main challenges you’re facing right now.”
- Choose sentences that have different words.
  - “This is John on a recorded line.”
  - “My name is Claire and I’m calling from a monitored line.”
- Use a variety of sentence types.
  - “That’s just not in our price range.”
  - “Could you go any lower on that cost?”
- Look for sentences that are specific, not general.
  - “Our main priority is driving higher conversion rates through the sales funnel.”
- Make sure each example is a single sentence only.
Using the example sentences you provide, we create a set of sentences from your existing calls. Some of these sentences are similar to your sentences; some aren’t. Now, you’ll tag the sentences so the model learns which ones fit your concept and which ones don’t.
- Tag sentences YES if they fit your concept.
- Tag sentences NO if they don’t fit your concept.
- Tag sentences NOT SURE if you’re not sure.
The sentences you’ll be tagging are in bold. The sentences that are not in bold, before and after the bolded sentences, give you context.
Want to hear a snippet of the call to be sure of the context? Go ahead and click Go to call in the bottom right.
Each round of tagging includes 25 sentences and takes about 10 minutes, including both the time it takes you to tag the sentences and the processing time for us to train the model.
Important
During the first 4 rounds of training, you’re building the model, so expect to tag many of the sentences NO. This is fine and expected: it’s part of the training process. The model can’t learn from YES tags alone, because then it doesn’t learn which types of sentences to avoid.
After you've completed 4 rounds of tagging, you’ve trained the model enough to review and evaluate its results.
To do this, go to the Review model page.
The results you see on this page reflect the types of sentences the tracker will surface if you activate it right now.
You’ll see 20 sentences that the model has surfaced in your calls. Unlike previous screens, these aren’t sentences that need to be tagged, and the model isn’t deliberately giving you samples of sentences that don’t match your concept. These are samples of the types of results you’ll get when you activate the tracker.
If a majority of the results fit your concept, activate the tracker. If you’re not satisfied with the results, keep tagging sentences to improve the tracker’s accuracy.
When you’re satisfied with the model’s results, activate the tracker. You’ll be asked to choose whether to apply the tracker to upcoming calls only, or also to calls that happened in the past (up to 12 months back).
When you activate the tracker, you can automatically create a private stream based on tracker mentions. To learn more about streams, and about adding people to stream notifications, see this.
If you apply the tracker to calls that happened in the past, it will take up to 24 hours for the results to be processed. You’ll get an email notifying you when the results are ready. Until then, you can view partial results on the Search page.
You can keep training a smart tracker after activation to improve its accuracy. To do so, go to the Trackers page, locate the active tracker, click the action menu, and select Train more.
Once your smart tracker is set up, you can view results in the following places:
- Search page
- Team stats
- Streams
- Saved alert emails
- Initiative boards
- Calls API
- Calls CSV (see the sketch below)
- And more
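For example, if you export results to a calls CSV, you can slice tracker mentions with a few lines of scripting. This is a minimal sketch only: the file name and the "tracker" column are assumptions for illustration, not the product’s documented export schema, so check the header row of your own export first.

```python
import csv

# Hypothetical export file and column name -- adjust both to match the
# header row of your actual calls CSV export.
with open("calls_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Keep only rows where the (assumed) "tracker" column names this tracker.
pricing_rows = [r for r in rows if r.get("tracker") == "Pricing objections"]
print(f"{len(pricing_rows)} tracker mentions across the export")
```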
Note
Dive deeper with the Smart tracker course at the Academy, and learn how to create accurate, insightful smart trackers for your team.
For a winning recipe for tracking team initiative adoption with smart trackers, see: Tracking the performance and adoption of strategic initiatives
Wondering how accurate your smart tracker is? Here's how you can check.
(Note: You must be a business admin)
- Go to Company settings > Trackers and locate the tracker you want to assess.
- Click Edit in the top right corner and go to the Review model page.
- If the tracker has had at least 4 rounds of training, you’ll see the performance estimation.
This estimation is based on how the smart tracker performed on sample snippets that were already tagged Yes, No, or Not sure when someone in your company set up the smart tracker. It’s not an exact measure of how the tracker will perform in real life, but it gives you a fair and straightforward approximation.
Precision refers to the percentage of smart tracker detections that are correct. For example, when precision is 80% and the smart tracker detects 10 snippets, that means 8 of these snippets are correct detections and 2 of them are false.
Another way of describing precision is by assessing the number of true positives and false positives. True positives are detections that are correct. False positives are detections that are incorrect. When precision is 80%, it means that there were 8 true positives and 2 false positives.
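To make the arithmetic concrete, here’s a minimal sketch in Python (the function name and arguments are ours, for illustration; they aren’t part of the product):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Share of the tracker's detections that are correct."""
    return true_positives / (true_positives + false_positives)

print(precision(8, 2))  # 0.8 -- the 80% example above
```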
High precision matters when:
- You want to reduce false leads
- You want to make optimal use of your resources
Hit rate, also known as recall, refers to the percentage of correct snippets that the smart tracker detects out of all the correct snippets that exist. For example, when the hit rate is 70%, it means that the smart tracker detected 7 correct snippets for every 10 correct snippets that actually exist. It missed 3 correct snippets.
Another way of describing hit rate (recall) is by assessing the number of true positives and false negatives. True positives are detections that are correct. False negatives are detections that were missed. When the hit rate (recall) is 70%, it means that there were 7 true positives and 3 false negatives.
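The same kind of one-line ratio works for hit rate; again, a minimal sketch with illustrative names:

```python
def hit_rate(true_positives: int, false_negatives: int) -> float:
    """Share of all truly matching snippets that the tracker detected (recall)."""
    return true_positives / (true_positives + false_negatives)

print(hit_rate(7, 3))  # 0.7 -- the 70% example above
```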
A high hit rate (recall) matters more when:
- You want to detect as many relevant snippets as possible
- Missing a relevant snippet costs you more than seeing an irrelevant one
This estimation is based on test sets of snippets that were tagged when the smart tracker was set up. Because we already know which snippets match the concept and which don’t, we can run the smart tracker on them and cross-reference its results with the existing tags.
When the model was trained, at least 100 snippets (4 rounds of 25) were tagged Yes, No, or Not sure. We set aside 10% of these snippets for validation (known as a held-out set) and use the other 90% to train the model. After training, we test the model on the held-out set to see how many of the Yes snippets it detects correctly, how many Yes snippets it misses, and how many No snippets it detects incorrectly. We then remix the snippets, set aside a different held-out set, and test again. We repeat this process 10 times to get the performance estimation.
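The procedure described above resembles repeated random held-out validation. The sketch below shows the shape of that loop under stated assumptions: train_model is a toy stand-in (the real training code isn’t public), tags are lowercase strings, and we assume Not sure tags are set aside before evaluation.

```python
import random

def train_model(train_set):
    # Toy stand-in trainer: it "learns" the vocabulary of Yes-tagged snippets
    # and predicts yes on any word overlap. The real model works differently.
    yes_words = {w for text, tag in train_set if tag == "yes"
                 for w in text.lower().split()}
    return lambda text: "yes" if set(text.lower().split()) & yes_words else "no"

def estimate_performance(tagged, repeats=10, holdout=0.1, seed=0):
    """tagged: list of (snippet_text, tag) pairs; 'not sure' tags are dropped."""
    rng = random.Random(seed)
    data = [(t, tag) for t, tag in tagged if tag in ("yes", "no")]
    precisions, hit_rates = [], []
    for _ in range(repeats):
        rng.shuffle(data)
        cut = int(len(data) * (1 - holdout))      # e.g. 90 train / 10 held out
        train_set, held_out = data[:cut], data[cut:]
        model = train_model(train_set)
        tp = fp = fn = 0
        for text, tag in held_out:
            predicted = model(text)
            if predicted == "yes" and tag == "yes":
                tp += 1                            # correct detection
            elif predicted == "yes" and tag == "no":
                fp += 1                            # incorrect detection
            elif predicted == "no" and tag == "yes":
                fn += 1                            # missed detection
        if tp + fp:
            precisions.append(tp / (tp + fp))
        if tp + fn:
            hit_rates.append(tp / (tp + fn))
    avg = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return avg(precisions), avg(hit_rates)
```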
Let’s say you’ve built a tracker to detect when customers ask for a discount. If the accuracy estimation shows precision at 80% and the hit rate at 90%, it means:
80% precision: Out of 10 instances that we identified, 8 of those included customers asking for a discount. That means we flagged 2 instances where customers did not actually ask for a discount.
90% hit rate: If there were a total of 10 instances where customers actually asked for a discount, then we flagged 9 of those instances but missed 1.
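Running those numbers through the same formulas, as a quick check (figures taken from the two illustrations above, which describe separate scenarios):

```python
precision = 8 / (8 + 2)  # 8 correct detections out of 10 flagged -> 0.8 (80%)
hit_rate = 9 / (9 + 1)   # 9 detected out of 10 actual asks       -> 0.9 (90%)
print(precision, hit_rate)
```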