My notes from the second day at Support Driven Expo 2022. There should be more, but I missed taking notes for a couple of the sessions I attended before and after my own presentation.
The Value of a Support-Focused Engineer
Mackenzie Wells, Remind
- our solution: support-focused engineer
- identify the two types of issues that a dedicated resource could alleviate
- Product Areas of Focus
- Support’s access to CRUD operations for new features (main backend administrative tools)
- extract, transform, and load (ETL) customer toolset
- data clean-ups
- copy changes
- redundant/repetitive tasks
- how do we take these functional focus areas forward? how do we gain the attention we need, build support for the idea, get headcount, and get direct attention on these issue types?
- challenges: budget, low trust between support/engineering (kept asking vs. revenue generating), no baseline for effectiveness
- out of the wallet thinking
- identify existing team members who already know the product well
- build excitement with cross-functional leaders in engineering/product
- advocate for self-learning and guided learning as part of their goals: make it part of their job
- find the high impact wins with visibility internal/external
- how to iteratively and quickly identify success? Visibility: with whom to share the goals and outcomes?
- 88% engineering time on roadmap: the percent of time engineers are working on new product features (not on-call incidents or escalations)
- target 45% deflection rate: the percentage of escalations that could be handled by the support-focused engineer (a rough calculation sketch follows at the end of this session's notes)
- started by identifying the data to be tracked, focused on learning and small iterative code changes
- exceeding the deflection rate target allowed more hires; beyond succeeding on the deflection rate, they began roadmap work for tools
- full roadmap and strategy for internal tools built
- summary
- overview of the concept: removing the barriers in your toughest support interactions
- identification of need
- who are the individuals or skill sets
- measurement of success
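Not from the talk, just my own rough sketch of how the two targets above (88% roadmap time, 45% deflection rate) could be computed; the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Escalation:
    handled_by_support_engineer: bool  # resolved without pulling in the product team

@dataclass
class EngineerWeek:
    roadmap_hours: float    # time spent on new product features
    interrupt_hours: float  # on-call incidents, escalations, tooling asks

def deflection_rate(escalations: list[Escalation]) -> float:
    """Share of escalations absorbed by the support-focused engineer (target: 45%)."""
    if not escalations:
        return 0.0
    return sum(e.handled_by_support_engineer for e in escalations) / len(escalations)

def roadmap_time_share(weeks: list[EngineerWeek]) -> float:
    """Share of engineering time spent on roadmap work (target: 88%)."""
    total = sum(w.roadmap_hours + w.interrupt_hours for w in weeks)
    return sum(w.roadmap_hours for w in weeks) / total if total else 0.0

# Example: 9 of 20 escalations deflected -> 0.45, right at the target.
print(deflection_rate([Escalation(i < 9) for i in range(20)]))
```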
Build better customer relationships with conversational support models
Brandon Tidd, 729 Solutions
- seeing CRMs evolve, things are speeding up
- we live in an on-demand society; customers expect you to be there 24/7/365, and your competition is already doing it
- leverage AI: this is where Zendesk Messaging comes in
- chat reimagined with intelligent deflection and intent-driven dialogue
- suggests various articles based on the question; at its limit, it asks for contact info to create a ticket, with an email notification when there's a new message
- can build workflows based on specified keywords/phrases
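Purely as an illustration of the idea (this is not the Zendesk Messaging API, and the intents, keywords, and URLs are made up), a keyword-driven workflow boils down to something like:

```python
# Toy keyword/phrase routing: suggest an article for a recognised intent,
# otherwise fall back to collecting contact info and opening a ticket.
INTENT_KEYWORDS = {
    "billing": ["invoice", "refund", "charged twice"],
    "login": ["password", "locked out", "can't log in"],
}
ARTICLES = {
    "billing": "https://example.com/help/billing",
    "login": "https://example.com/help/login",
}

def route(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return f"This article might help: {ARTICLES[intent]}"
    return "I couldn't find an answer. Can I take your email and open a ticket?"

print(route("I was charged twice on my last invoice"))
```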
Make QA work for you
Ines van Dijk, Quality in Support
- Why do we QA?
- What parts do we need to pay attention to?
- What is the current status quo?
- how do you quantify/measure the quality? CSAT/performance reviews
- CSAT is often a vanity metric
- Ben Franklin effect: if you have done me a favour, I am much more likely to return the favour
- not an indicator of quality
- negative CSAT is usually about policies, not the quality of the conversation
- Peak-End rule: we remember experiences based on how we felt at the peak and at the end of the event; memories are inherently biased
- only 15% of tickets tend to get a CSAT response
- CSAT and agent performance reviews don't tell us the full story
- quality is not defined by the company alone but by a feedback loop
- agent buy-in is much easier with QA
- goal buckets: performance reviews (use data to develop training material, assist new hires, inform perf reviews), CSAT alternative (discover deeper insights into customer perception), product investigation (extrapolate important product feedback), strategy transfer (getting info into the team: a top-down process of taking strategic business goals to the team)
Is good enough quality good enough?
Susana de Sousa (Director of Support, Loom), Ethan Walfish (Head of Technical Support, Gong), Ines van Dijk (Founder, Quality in Support)
- what are the essentials of good enough quality?
- right set of goals and tools, people to guide the process
- what you're providing is in line with goals
- all about expectations, finding the right balance
- how do you define quality?
- differ for every org, usually CSAT/agent performance
- hard to measure; CSAT is not a good reflection. Quality is the customer getting more value out of your product because they reached out.
- being spoken to as a human
- a lot of companies want evidence of performance (are agents doing well enough?), which is why they lean on CSAT scores; they don't know how to define quality, and there's no one way to measure it
- how do you track quality?
- have to look at metrics: FRT, TTR, CSAT. There also need to be qualitative measures: sampling, asking whether we could have done better
- ownership is really important; make it clear that quality is an expectation
- how do you build those expectations?
- empathy: what does it look like? what do conversations look like? need to educate them on what it means in different situations
- find a quality expert
- consistent, or the best every time?
- depends on your org
- current org provides the highest level of support, whereas other places provide a reasonable experience and aim to be cost effective
- you need to be thoughtful about which approach to take
- quality is expensive: you invest in people and training and spend resources; every percentage point up costs more, so study the effect of each quality point you score and find the optimal target: 80/85/90%
- it's easier to keep a customer than to get a new one; quality costs less than marketing/sales
- but that's hard to see: how do you measure the impact?
- which industries could get away with it?
- consumers demand
- bank/airline support can be merely good enough, because how many other options do you have?
- B2B is different: smaller companies might have fewer interactions
- lack of industry standards
- need to have a quality specialist in-house instead of having team leads do it as extra work
- can companies set unrealistically high standards?
- what’s the volume? can they maintain quality with that volume?
- when you give people more time to have a conversation, instead of rushing and sending macros, quality goes up
- it’s about time, and ownership, letting people know what’s important
- find ways for the team to talk about what they're proud of, what they want coaching on, and what's a problem (rose-bud-thorn)
- best thing to do: set and meet expectations. Making big efforts doesn't necessarily increase customer satisfaction
- metrics
- don't need to reinvent the wheel, but do rethink why you're tracking CSAT
- metrics are about context. Can’t look at it by itself.
- customer effort score is not looked at much right now
- metrics are like growing a tree: water regularly, see where it goes, measure consistently
- customers care about value: CSAT is only at that exact moment, need to measure the value. Optimize around that value.
- surveys: isn’t it about the questions you ask? when it’s being asked?
- CSAT is an easy metric to obtain, but it's an empty one: memory-biased, and negative CSAT is often not about the conversation/support itself. It doesn't give you the context you need to know whether the conversation was good.
- you can have a negative-CSAT conversation that is 100% quality
- should we be optimizing for CSAT or quality?
- what matters is the change over time. If it goes up or down suddenly.
- support can’t change customer effort score by themselves, need product/dev
- want more volatile metrics in order to figure out what’s going on
- what other metrics can we use to get the data?
- value enhancement score: impact of support’s actions.
- ask two questions; the first is around the product: did the support experience improve the value you get from the product?
- the second: was your purchase validated by support? was it worth the money spent?
- engagement, loyalty, churn, product adoption
- did Tableau analysis: correlation between product use and support interaction
- as machine learning is being introduced, you can include all tickets in the "sample"
- can help remove customer bias
- near-sentiment analysis: how the customer is responding to each agent response; you can clearly see the sentiment and its progression through the ticket (toy sketch below)
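To make that last point concrete, here's a toy sketch of per-message sentiment tracking across a ticket thread; a real QA tool would use an actual model, and the word lists and messages below are made up:

```python
# Crude sentiment score per customer message: +1 per positive word, -1 per negative word.
POSITIVE = {"thanks", "great", "perfect", "resolved", "appreciate"}
NEGATIVE = {"frustrated", "broken", "angry", "waiting", "useless"}

def score(message: str) -> int:
    words = [w.strip(".,!?") for w in message.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical customer messages, in chronological order.
thread = [
    "This is broken and I've been waiting for days, I'm frustrated.",
    "Okay, trying that now.",
    "That worked, thanks, I appreciate the quick fix!",
]
for i, msg in enumerate(thread, 1):
    print(f"customer message {i}: sentiment {score(msg):+d}")
# A rising trend suggests the conversation recovered, whatever the final CSAT says.
```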
No more Nonpologies: apologize sincerely: The simplest way to improve customer experience
Leslie O’Flahavan, E-WRITE
- Example: “Customer satisfaction is our highest priority. We regret any inconvenience to the customer.”
- not addressed to customer, no accountability, doesn’t address situation
- written to avoid legal action; they don't want to validate the complaint
- customers want things that are not in your control
- empathize when you can’t apologize
- What situation does your team regularly apologize for? (Something within your control.)
- What words do you use? “I’m so sorry for the trouble you’ve experienced…” Avoid “it’s out of our control”
- Does the apology work? If they thank you. Did they receive the apology at the right time? They're willing to continue the conversation, but without repeated contact. Did they calm down?
- should sound sincere, be well-written. Help the customer resolve their issue or calm down.
- Four traits of apologies customers can trust
- sincere
- specific
- personal: to writer, reader, or both
- proportionate to the offence
- word choice: avoid defensive wording
- use the words that the people you’re apologizing to would use
- taking ownership
- provide a timeline
- three strategies for writing sincere apologies
- Replace the wording “We regret any inconvenience this may have caused.” > “We regret the inconvenience this caused.”
- Replace "inconvenience" with a more specific word: "disappointment" "frustration" "troubling patterns" "mistake" "dysfunction" "hassle" + add a noun after "this" > Example: "We regret the frustration this delivery mistake has caused."
- Alt: "regret" > "apologize"; "We" > "I".
- Alt: remove emotion, could imply it’s their fault for feeling that way > “I apologize for this delivery mistake.”
- want to be
- Pair “I’m sorry we…” with “we should have…”
- doesn't work for a tragic mistake, but can work for small mistakes
- quote what you’ve learned
- context: the apology comes first, then give what happened, then the action to prevent it from happening again; make it scannable.
- Follow your apology with empathy.
- Example: “I, too, would have expected this task to be taken care of after the first request.” “As a parent myself, I would not have been happy that my child was stuck at the airport.”
- Template: “Thank you for writing to us about… I was sorry to hear the wait was [duration] when you were told it would be [duration]. We should have given you accurate information right from the start. / I’m glad you reached out to us, so the management team at [location] can work with the staff to ensure our guests receive accurate information.”
- only 10-15% of the message should be the apology
Creating a knowledge-based culture: Leave it better than you found it
Phil Verghis, klever insight
- seek to understand before you seek to solve
- knowledge: how to, actionable, context, how & why of things, anything that is useful information for everyone
- At klever: knowledge about customers, employees, and the business, and applying what you learn
- 60-90% of problems have been solved before, if only we knew what we already know.
- support knows more about the customers and company than anyone else
- knowledge-first: typically teams are case-first/firefighting, especially on escalations. Shift to knowledge-first: as much time needs to be spent making use of the knowledge to prevent a repeat case as is taken to solve the case.
- habits > behaviour > culture
- think of the key drivers: make it joyful and beautiful to do; those habits build behaviour and culture
- all of us are in charge of improving the KB
- one methodology: Knowledge Centred Service (serviceinnovation.org): The UFFA principle: Use it, Flag it, Fix it, Add it
- Simple, joyful, in their workflow
- Simple: if I can’t explain it in one sentence, drop it.
- Joyful: love to help. All of us help each other. Not an us vs. them.
- In their workflow: tie it to ticket workflow.
- Workflow through two lenses: customer effort, employee effort
- Make it easy to search knowledge base, what to enter, attach files, add colleagues to case (CC)
- Often, high quality customer experience comes at the expense of employee effort.
- Knowledge-first approach reduces customer, employee, and manager effort
- example: applying knowledge sharing to support quick growth, increase customer satisfaction, decrease delivery costs: 300% improvement in productivity
- Measure the right things
- “We measure too much. Most of what we measure doesn’t matter.”
- knowledge is contextual
- don't want goals on activities
- Level Zero Solvable: Make what you know available to the customer, in their own words. (Not deflection.)
- Time to publish: how quickly you make what you know available to your customers (minutes); reduces customer effort and employee effort. Trust your people to publish. (A rough measurement sketch follows at the end of this session's notes.)
- Customer Effort Score: Support made it easy for me to handle my issues. (Knowledge about Customers.)
- Employee Effort Score: … made it easy for me to do my job. (Knowledge about Employees.)
- Time to competency: how quickly a new (or existing) team member gets up to speed. (If measured at all, usually by tenure or passing an exam.) Advanced: as measured by your team trusting you.
- include people in the decisions
- Managers move from enforcers to player-coaches
- managers good at keeping things on track, on time
- but not under conditions of true complexity
- grading > guiding
- Measures are for the team, not for the managers.
- Context: We will do X because… // Intent: Here’s what we hope to achieve by focusing on X…
- Guidelines: Here’s how I see us getting X done… // Guardrails: Here are some common traps you should watch out for…
- start with what you can, keep doing what you can to improve things
- design for the best case scenario, and then take care of exceptions
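Again not from the talk, just a rough sketch of how a couple of these measures could be pulled from ticket and article data; the field names and example values are my own assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Ticket:
    answer_already_published: bool  # the answer existed in the KB, in the customer's words

@dataclass
class Article:
    learned_at: datetime    # when the team first knew the answer (e.g. case resolved)
    published_at: datetime  # when it became visible to customers

def level_zero_solvable(tickets: list[Ticket]) -> float:
    """Share of tickets whose answer was already available to customers (not deflection)."""
    return sum(t.answer_already_published for t in tickets) / len(tickets) if tickets else 0.0

def time_to_publish_minutes(articles: list[Article]) -> float:
    """Median minutes between knowing something and making it available to customers."""
    gaps = [(a.published_at - a.learned_at).total_seconds() / 60 for a in articles]
    return median(gaps) if gaps else float("nan")

now = datetime.now()
print(level_zero_solvable([Ticket(True), Ticket(False), Ticket(True)]))      # ~0.67
print(time_to_publish_minutes([Article(now - timedelta(minutes=42), now)]))  # 42.0
```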
See you next year
Thanks to all the organizers and presenters!
Hopefully, see you next year!