Check out this video of Steve Jobs talking about the origins of the iPad.
The text that appears as Jobs talks is how a computer program developed by a firm called Beyond Verbal is interpreting Jobs’ emotion. That is, the program is judging whether Jobs is feeling fatigue or nostalgia based not on what he is saying but on how he is saying it.
Kinda nifty, but does it have commercial applications? The claim is yes, and the application is call centers.
Here is how the New York Times tells it (In a Mood? Call Center Agents Can Tell, Oct 13).
“It helps agents decide how to respond. If there’s a customer-is-always-right type, you want to give them proper appreciation and respect,” Mr. Emodi says. “If the caller is seeking friendship, the agent should speak in a friendly, direct way.” …
Executives say a few companies are working on call-center applications for the software and they expect the first of those apps to be ready for use around the end of this year. The idea is to use it not just to identify and mollify dissatisfied callers, but also to help agents distinguish between frustrated callers who wish to solve a problem and are worth spending more time on from angry callers who want merely to vent.
As the article points out, some kind of automated review of call center interactions is not new. Multiple firms sell technology to review recorded calls that allows management to learn about customer interactions beyond what shows up in transaction records. That is, a transaction record might report that a customer called to dispute a bill, but reviewing the call can determine whether the customer was inquiring gently or flying off the handle from the moment he was connected to the agent.
What is different here is that this new technology is real-time. The agent can potentially be guided in dealing with the customer as the call unfolds. Think of it as automated empathy — the agent can respond as if he were really good at reading others’ emotions even if he is totally tone-deaf on such matters.
The question to my mind is how good/accurate this technology has to be for it to be really useful. For calls that are straightforward and uncomplicated (e.g., changing the mailing address on a credit card), I am not convinced that reading subtle emotional cues matters all that much. Arguably, that is not much of an objection, as more and more of these basic transactions are done over the web instead of through a call center.
But even for more complicated calls, I am not sure whether there is a lot of value. If a caller is really, REALLY angry, the agent knows. What is the level of a critical emotion that the technology can identify with certainty that the human cannot grasp on his or her own? What is the risk of responding to a misinterpreted emotion? That is, if a customer is flagged as being angry but genuinely is not, how does that affect the overall transaction?
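To put the misinterpretation worry in concrete terms, here is a quick Bayes-rule sketch. All of the numbers are made up for illustration — nothing here comes from Beyond Verbal — but the point survives: when genuinely angry callers are rare, even a fairly accurate detector will flag a lot of calm people.

```python
# Hypothetical numbers, purely for illustration (not from any vendor's specs):
# suppose 5% of callers are genuinely angry, the detector catches 90% of
# angry callers, but it also misfires on 10% of calm callers.
p_angry = 0.05               # assumed base rate of genuinely angry callers
p_flag_given_angry = 0.90    # assumed detector sensitivity
p_flag_given_calm = 0.10     # assumed false-positive rate on calm callers

# Total probability that any given caller gets flagged as angry
p_flag = p_flag_given_angry * p_angry + p_flag_given_calm * (1 - p_angry)

# Bayes' rule: chance a flagged caller is actually angry
p_angry_given_flag = p_flag_given_angry * p_angry / p_flag

print(f"P(actually angry | flagged angry) = {p_angry_given_flag:.2f}")
# With these assumed numbers, only about a third of "angry" flags are right.
```

Under these (again, invented) parameters, roughly two out of three callers the system tells the agent to treat as angry are not — which is exactly the scenario where responding to a misread emotion could sour an otherwise routine call.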