It’s Time To Kill The Net Promoter Score

Management ideas come and go. 

Unless we’re talking about the net promoter score, which has come, but hasn’t left. It’s the cockroach of management metrics. 

For the life of me, I can’t understand the continued interest in this metric, or as some delusional people call it, a system. 

Intention to do anything — let alone to recommend a company one does business with — is useless. 

Go ahead, tell your senior management team that, on a scale of 1 to 10, your intention to increase revenues and profitability this year was a 10. Unless you actually grew revenue and profitability, don’t hold your breath waiting for a good bonus. 

But it’s easy to understand and inexpensive to implement, according to net promoter groupies. 

Go to hell. That’s pretty easy to understand, too. Not very helpful in actually getting you there, though. Utility — how useful a metric is in making management decisions — is the criterion for implementing a management metric, not how cheap it is to implement, or how simple the definition is.

The utility of NPS just isn’t there anymore. There are better metrics out there. This post is about one of them: Referral Performance Score.

—————

Zendesk recently asked consumers “How do you show loyalty to the firms you do business with?” The top answers: Providing referrals and buying more. 

[Chart: Zendesk survey results on how consumers show loyalty to the firms they do business with]

If providing referrals and buying more are the top ways in which consumers show loyalty, why would you measure anything else, let alone “intention” to refer? And why wouldn’t you measure and track actual referrals? 

—————

Bottom line: Financial institutions should stop wasting their time with the net promoter score, and start tracking and measuring actual referrals. 

And they should go one step further, and start measuring the Referral Performance Score. 

It’s very simple to calculate (if that’s your criterion for a metric): Multiply the percentage of customers that refer by the percentage of customers that grow their relationship. 

[Figure: Referral Performance Score = % of customers who refer x % of customers who grow their relationship]
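
If you want the arithmetic spelled out, here’s a minimal sketch in Python. The function name and the sample percentages are made up purely for illustration; they are not the Aite Group figures cited below.

```python
def referral_performance_score(pct_referring, pct_growing):
    """RPS = % of customers who referred x % of customers who grew their relationship.

    Both arguments are whole-number percentages (e.g., 39 for 39%).
    """
    return pct_referring * pct_growing

# Hypothetical example: 35% of customers referred, 12% added accounts.
print(referral_performance_score(35, 12))  # 420
```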

Based on Aite Group research, FIs (banks and credit unions) increased the percentage of consumers that provided a referral in 2012 to 39% from 36% in 2011.

Even more impressive is that the percentage of customers that increased the number of accounts held with their primary FI grew from 10% to 16%.

Overall, the industry’s RPS increased from 353 to 547.

—————

As a group, credit unions really stood out in this year’s RPS calculations. Although they didn’t expand the percentage of their members that referred the CU by much, they did significantly increase the percentage of members that added accounts.

If you were a credit union executive, which would you rather know: a) how your CU compares to other CUs in terms of the % of members that referred the CU and the % of members that grew their relationship, or b) how your CU compares to other CUs in terms of the % of members that intended to refer the CU on some subjective scale?

If you answered B, please leave this site.  

And if you were a bank or CU exec, how would you know if your marketing efforts were paying off? The number of new customers is certainly a good measure, but in a down economy the focus may be on growing the relationship with existing customers. 

But not all customers will be in the market for new accounts. If they’re not, getting referrals from them is a great way to grow the business. 

—————

I don’t dispute that measuring and tracking referrals is going to take some effort and investment. But if it’s the most important way that consumers show loyalty, isn’t it worth the effort?

By doing so, your FI creates the ability to calculate RPS for all customers, continuously, and not just periodically for a sample of customers, which is what NPS or even customer satisfaction scores are going to give you. 

The ability to slice and dice RPS for different customer segments (e.g., by product ownership or demographics) makes RPS a far more valuable and flexible metric than any survey-based metric. 
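
As a rough sketch of what that slicing could look like (assuming your FI can flag, per customer, whether a referral was made and whether the relationship grew; the column names and segments here are hypothetical):

```python
import pandas as pd

# Hypothetical per-customer flags pulled from the FI's own systems:
# did the customer refer anyone this period, and did they add an account?
customers = pd.DataFrame({
    "segment":  ["gen_y", "gen_y", "gen_x", "gen_x", "boomer", "boomer"],
    "referred": [1, 0, 1, 0, 0, 1],
    "grew":     [1, 1, 1, 0, 0, 0],
})

segment_stats = customers.groupby("segment").agg(
    pct_referred=("referred", "mean"),
    pct_grew=("grew", "mean"),
)
# Whole-number percentages multiplied together, as in the formula above.
segment_stats["rps"] = (100 * segment_stats["pct_referred"]) * (100 * segment_stats["pct_grew"])
print(segment_stats)
```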

—————

It’s time to kill that little cockroach of a management metric, the net promoter score.

—————

Note: For those of you who are mathematically inclined, the “2013” stats cited refer to the period from Q2 2012 through the end of Q1 2013, which is why I referred to the 2013 numbers as 2012 behavior. I should’ve been a bit clearer on this. 


22 thoughts on “It’s Time To Kill The Net Promoter Score”

  1. Great post as always. I did have a question about this sentence in the last paragraph “By doing so, your FI creates an ability to calculate RPS for all customers, continuously”. I get that you can always know how many accounts your customers/members are opening, but don’t you still have to survey to know if they are referring (such as the Zendesk example you used in the blog post)? Or is there a “continuous” option for knowing that your customers/members are referring? Even if you ask new customers/members how they heard about the FI, they may not identify a specific referral from an existing customer/member as the source …

    • Companies like Geezeo, Marquis, and CU Grow have referral management products that automate the referral process. Seems to me to be a whole lot simpler to implement a capability like that than trying to ask every new customer if they were referred and by whom.

  2. Ron, normally I’m an advocate of your posts but you’ve made a few errors here and so your conclusion is flawed:

    1. “But it’s … inexpensive to implement…”: Sure, adding a question to a survey is easy, but that is not NPS; it’s just the benchmark. Implementing NPS is about understanding what is driving customer loyalty and making changes in the organization to do that. It is difficult and a pretty big cultural change for most organizations. The metric is just the most visible part of the process, but it is not THE process. Easy and inexpensive it is not.

    2. Your Zendesk study isn’t worth much: what people say they do and what they actually do are vastly different. These data are not reliable in explaining what people will actually do. I’ve seen you tear apart poorly implemented surveys like this in the past.

    3. NPS is not about measuring referral propensity. Yes, I know what the question asks, and that fools a lot of people. It is about measuring customer loyalty, and I’ve got plenty of evidence to prove that it does that task pretty well.

    4. Your new metric is great but it’s just a metric. All it does is tell you where you are. How are you going to decide what and how to change? Are you going to randomly make changes in the business until the RPS goes up and then backtrack to see what drives it? No, you need to understand what is driving it and then make educated guesses and informed decisions on changes to the business. So you need more than RPS.

    Plenty of companies have implemented NPS badly, as just a metric, and failed to achieve an ROI on the process. However, plenty more have implemented it well and used it to drive positive business value. I know, I’ve seen it.

    • Thanks for your comment, Adam. At times I’m not sure if you’re agreeing with me or arguing with me. Some responses to your comments:

      “1. “But it’s … inexpensive to implement…”: Sure, adding a question to a survey is easy, but that is not NPS; it’s just the benchmark. Implementing NPS is about understanding what is driving customer loyalty and making changes in the organization to do that.”

      Asking “how likely are you to recommend us?” is NOT about “understanding what’s driving loyalty and making changes.” You have to ask a lot of other questions — and, I would argue, even more importantly — observe a lot of behaviors. When a company does that, it renders the “how likely are you to recommend us” question useless.

      “2. Your Zendesk study isn’t worth much: what people say they do and what they actually do are vastly different. These data are not reliable in explaining what people will actually do. I’ve seen you tear apart poorly implemented surveys like this in the past.”

      Totally agree that what people say they do (or will do) and what they actually do are vastly different. Which is EXACTLY why NPS is total BS. People say they’ll refer the company — and even more amazingly assign a score of 9 or 10 to that likelihood — and then never do. As you imply, behavior is way more important than intention.

      But that doesn’t mean that referral behavior and purchase behavior AREN’T the most important measures of loyalty. I was using Zendesk’s survey as quantitative proof to support my belief that those are the two most important indicators of loyalty. Why? Because they’re directly tied to the bottom line — bringing in new customers and ringing up more sales. Other behaviors (and attitudes and intentions) are nice — like wearing shirts with a company’s logo, chit-chatting thru Facebook, and “intending” to refer — but bottom line behavior is most important.

      “3. NPS is not about measuring referral propensity.”

      You smoking crack with Rob Ford? NPS is ALL ABOUT measuring referral propensity. You can make all the claims you want that it isn’t, but as long as the score is based on the answer to “how likely are you to refer?” it’s about referral propensity.

      “4. Your new metric is great but it’s just a metric. ”

      True dat. But I assert it’s a superior metric to NPS and customer satisfaction. My rationale and justification are included in the report I published. Sorry, won’t spill all my candy in the lobby.

      “Plenty of companies have implemented NPS badly, as just a metric, and failed to achieve an ROI on the process. However, plenty more have implemented it well and used it to drive positive business value. I know, I’ve seen it.”

      I don’t doubt that you have. But instead of asserting this, what would be helpful is a contrast between those that have implemented it “badly” and those that have done it right. There’s still one problem that’s likely to crop up: PROVING that the improvement in results is tied directly and solely to having implemented NPS. Good luck with that.

      • Ron, you’ve got a live one on this topic and I’m enjoying it.

        These are a bit out of order but they build a story so stay with me.

        To respond again:

        1. “Asking “how likely are you to recommend us?” is NOT about “understanding what’s driving loyalty and making changes.” You have to ask a lot of other questions.”

        Correct — you are making a common mistake in the rest of the article. NPS is one metric. As others have pointed out, Bain and Satmetrix have done this approach a disservice by promoting the one-question idea. The one question provides a proxy for customer loyalty, not how to fix it.

        “3. NPS is not about measuring referral propensity.”

        I haven’t spent much time with Rob Ford, but again you are incorrect. NPS is NOT measuring referral propensity. NPS IS a simple-to-access proxy for customer loyalty. The form of the question is not relevant; it is what the answer tells you that is important.

        NPS has been shown to be a good proxy for loyalty and that’s what it is used for, at least by us.
        [Happy to reference some papers on this.]

        2. “As you imply, behavior is way more important than intention.”

        Going back to the point above: whether or not a person actually recommends the product/brand in question is irrelevant. NPS is a loyalty proxy. In fact, due to different personality profiles, some people will naturally recommend more than others.

        How does RPS handle that? Do you give bad service to non-recommenders on the basis that they don’t bring you new business?

        4. Implementation success

        “Sorry, won’t spill all my candy in the lobby.” Yeah, but if you don’t make it public, as NPS is, then the post seems self-serving.

        Bad implementation: adding a question to the annual customer survey and publishing it in a press release. This does nothing but feed/starve CEO egos.

        Good implementation: Engaging staff around a continuous improvement process. Implementing on-going customer feedback with the NPS question and support questions. Analyzing this data in context. Implementing root cause analysis and action techniques to identify issues and drive changes to lift the score. Using the score internally to track progress. Publishing PR with revenue improvement year over year.

        As you can see, the second is more difficult, so it’s not often implemented.

        • Adam:

          You write: “NPS has been shown to be a good proxy for loyalty”

          HAS BEEN….yes, NPS is a “has been” — couldn’t have said it better myself. :)

          This is one of the major points here. There are better metrics (i.e., proxies) for capturing loyalty. NPS was developed back when tracking behavior was too difficult or expensive to do. It’s 2013. The world’s changed. Big Data is here. We can better (i.e., more easily and more cheaply) track behaviors.

          Every time you (or the other NPS supporters) say “NPS is a good proxy for loyalty” you skirt the issue, or definition, of what exactly loyalty is.

          Here’s one definition (formula): Customer loyalty = repeat buying behavior + referral behavior.

          I’m ok with you defining it differently. Tell me how you DEFINE loyalty so I can determine if RPS is a “better” “proxy” or not.

          There’s no need to ask about intentions anymore. Among just a sample of customers. Which may be biased. And whose answer to the NPS question may be unduly influenced — for the positive or negative — by the latest interaction with a company.

          As for what I’m not sharing here… yes, that could make this self-serving. But guess what? Even if I shared the whole report here, it would still be self-serving. I don’t charge for reading this blog (I can’t imagine that anybody would actually pay to). So why am I doing this? You better damn believe it’s for self-serving reasons.

          But more specifically, what I’m withholding is simply a chart that shows a set of criteria for determining the effectiveness of a management metric (which I did not develop, lending at least a little objectivity and credibility to it), and then contrasts NPS, RPS, and customer satisfaction along those criteria.

          • Ron,

            Customer loyalty = repeat buying behavior.

            I hate to resort to this … but Wikipedia: “Loyalty is faithfulness or a devotion to a person, country, group, or cause.”

            I’m not sure why you would add “referral behaviour” to this definition. You might as well add “the blueness of eyes”.

            Yes, there are better proxies for loyalty, but I would argue they are not accessible to most businesses. In fact, I have argued it here: http://www.genroe.com/blog/why-should-i-choose-nps-over-customer-satisfaction-or-customer-effort-score/7626. (I am sharing the result of my analysis. :-) )

            “And whose answer to the NPS question may be unduly influenced — for the positive or negative — by the latest interaction with a company” — actually, that’s the whole point of something like transactional feedback. Someone’s loyalty is most likely to be influenced by their last interaction, as opposed to, say, just walking down the street. You want to capture that so you can fix the transaction.

            If you want to open a new front, why not write a post on how good Big Data is and why it’s changing the world? I’ll be happy to stand on the other side of the boxing ring for that as well. Might as well call it Big Baloney if you ask me.

          • I would add referral behavior because it’s a demonstration of “faithfulness.”

            But I’m ok using the Wikipedia definition (mostly because it will serve my interests in this case).

            As defined by Wikipedia, loyalty is simply not a quantifiable construct. You can’t quantify the extent to which someone is devoted or faithful.

            THEREFORE, your (and Rebecca’s) claims that NPS is highly correlated to loyalty are patently untrue. How can you correlate something to something else that isn’t quantifiable?

          • “NPS was developed back when tracking behavior was too difficult or expensive to do. It’s 2013. The world’s changed. Big Data is here. We can better (i.e., more easily and more cheaply) track behaviors.”

            I don’t want to go on a “big data” tangent b/c that term has been tossed and kicked around enough over the past year. But yes… tracking behavior (digital and transactional) is possible, yet from my perspective many credit unions have yet to embrace it.

            Why? To begin, many don’t know where to start or are simply scared to. In the case of referrals, “big data” could be used to help credit unions go beyond the standard, undifferentiated “refer a friend and get $25” offer.

            Let’s use the aggregated transactional data of PFM as an example and start by identifying trends in the top 3 most common retail purchases members have made on a monthly or quarterly basis. This report may find that the most common retail purchases among members include:

            – Starbucks
            – Amazon
            – Shell Gasoline

            It is now possible to segment members into these three categories and run a personalized digital referral campaign through email and other digital communication channels. Instead of offering the generic “refer a friend and get $25” offer, the credit union could have three different offers depending on the member segment:

            1. Refer a friend and get a $25 Starbucks gift card
            2. Refer a friend and get a $25 Amazon gift card
            3. Refer a friend and get a $25 Shell Gasoline gift card

            The dollar value of the referral offer remains the same at $25. However, the perceived value to the member is higher, based upon their own purchase and transactional history, which could spur them to refer friends and family, thus generating leads for loans and new accounts.
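
            Here is a rough sketch of that segmentation logic in Python (the data, column names, and merchant list are all made up for illustration; a real implementation would pull from the PFM provider’s transaction feed):

            ```python
            import pandas as pd

            # Hypothetical PFM transaction extract: one row per member purchase.
            transactions = pd.DataFrame({
                "member_id": [101, 101, 101, 102, 102, 103],
                "merchant":  ["Starbucks", "Starbucks", "Shell", "Amazon", "Amazon", "Shell"],
            })

            # Each member's most frequent merchant over the reporting period.
            top_merchant = transactions.groupby("member_id")["merchant"].agg(
                lambda s: s.value_counts().idxmax()
            )

            # Map each segment to a personalized $25 referral offer.
            offers = {
                "Starbucks": "Refer a friend and get a $25 Starbucks gift card",
                "Amazon": "Refer a friend and get a $25 Amazon gift card",
                "Shell": "Refer a friend and get a $25 Shell gift card",
            }
            campaign = top_merchant.map(offers).fillna("Refer a friend and get $25")
            print(campaign)
            ```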

  3. Ron –
    I agree with you that it’s great to ask your customers to put their money where their mouth is – asking them to buy follow-on services or to advocate that other people buy your services, and measuring the results. Yet in a model where consumers buy less frequently (e.g. a water heater repair) or have different buying cycles, isn’t Net Promoter Score still a pretty good lagging (if not leading) indicator of your brand value?

    I believe NPS is a great way for any business to measure whether consumers feel that it’s a valued service. Measuring whether those consumers buy another service is also an important metric, and I feel NPS is still really useful to answer the question: “does my business measure up to the service results for Disney, JetBlue, Amex, and Amazon?”

    Thanks for the great article,

    Greg

    • Greg: Measures like NPS — and my own RPS — are probably even more important to companies/industries where there isn’t frequent purchase behavior, because, if there was, repeat buying would be a better measure of loyalty. But in businesses where repeat buying isn’t frequent, looking to other types of profit-generating behavior — like referrals — is important. But intention to refer is useless. Only behavior matters.

  4. Wow – sounds like someone has a case of the “Mondays”! As a self-proclaimed “Net Promoter Groupie,” I was a little offended by your wish to damn me to the fires of hell. However, I’ll try not to take it personally.
    While I’m a huge advocate for using Net Promoter in credit unions to improve member experience and loyalty, I actually agree with a few of the points you make here. Utility should be the criterion for implementing a management metric. If you are taking the time, energy and money to measure something – it had better be put to good use.
    The Zendesk study that you quoted is interesting and actually further proves a key NPS principle – your most loyal members (aka “Promoters”) behave differently than members who are less loyal. Promoters buy more of your products and services and refer others. Our research at Member Loyalty Group has shown that Promoters also carry higher loan and savings balances, indicating greater share of wallet. These are the key behaviors that drive the organic growth that top-performing NPS credit unions experience.
    Also, your point that it is important to measure actual referrals is something I advocate, as well. In my experience working with NPS, if a credit union has a group of Promoters that are not actively referring, then the credit union is at risk of losing these Promoters. Promoters who become disengaged can quickly turn into Passives or even Detractors and start to move their banking relationships elsewhere. It’s the enthusiasm and love for their credit union that drives Promoters to refer.
    I’m not sure that I agree, however, with your notion that NPS should be put to rest. I know a number of credit unions that have successfully utilized Net Promoter not only as a metric to track member loyalty, but as a tool to transform their business, AND have realized significant financial results in doing so. I expect that this is a trend that will continue for years to come, as credit unions continue to develop Net Promoter programs and share best practices.
    As any other “Kool-Aid drinking” NPS advocate would agree, it really is easy for everyone in the organization to understand “Promoters minus Detractors = Net Promoter Score.” And using NPS to monitor member experience and loyalty can bring transformative results. However, it is not always easy to implement. Credit unions that do this well must invest resources into data collection, analysis, distribution and taking action on findings. Like any other business model, credit unions that make the decision to use Net Promoter should recognize that it requires commitment and dedication.
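    For anyone who hasn’t seen the arithmetic behind “Promoters minus Detractors,” here is a minimal sketch (the survey responses are made up; it assumes the conventional 0–10 scale, with promoters at 9–10 and detractors at 0–6):
    ```python
    def net_promoter_score(scores):
        """% promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100.0 * (promoters - detractors) / len(scores)

    # Made-up responses from ten surveyed members.
    print(net_promoter_score([10, 9, 9, 8, 8, 7, 7, 6, 4, 2]))  # 0.0
    ```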
    NPS, RPS, or anyotherPS – I believe it is critical for credit unions to understand what makes members loyal to your credit union and what you need to do to earn their enthusiastic recommendation.

    • Rebecca: Thanks for commenting.

      First off, nothing I wrote should be taken personally by anybody. I’m advocating killing off the NPS metric, not the people who support it. I wasn’t telling you or anybody to go to hell. I was demonstrating that the directive “go to hell” was simple and direct, but not specific enough to act on.

      Second, NPS supporters need a serious wake-up call. Customers who give a 9 or 10 score to the question “how likely are you to refer?” are actually NOT “Promoters,” as they’re commonly called. At best, they’re “Intended Promoters.” Until these customers ACTUALLY PROVIDE A REFERRAL, they can’t be called Promoters. And, at the other end of the scale, just because someone answers less than 7 or 6 or whatever it is doesn’t make them a Detractor. Some of us who give companies bad scores on the NPS surveys they give us just go about our business and neither promote NOR bad-mouth the company.

      • I have to chime in here. Great reply, Rebecca.
        Ron – to your point above regarding the labels of “promoters” and “detractors” – it is just that – a label. But as Rebecca pointed out, promoters are more profitable, and many NPS users have implemented a discipline to catch and record actual referrals. And on the flip side, these credit unions call every detractor within 48 hours of receiving the survey to find out why they can’t/won’t promote. This is not easy, takes a ton of time, and in my opinion is the most valuable tool NPS provides. At a member-owned financial cooperative, members’ voices should matter.

        I’m with Rebecca – call it NPS, RPS, DPS, whatever – as long as a credit union understands what drives business and what threatens it and takes action, it’s all good.

  5. Ron-

    Disclaimer: Let me begin by saying I’m a fan of NPS because it’s worked for me in the six years I’ve been using it, but I can agree that there are some flaws.

    I think NPS is poorly promoted by Bain and Co.

    The name of the book “The Ultimate Question” misguides people because to find its true value you need to ask two questions:
    1. What is the likelihood you would recommend company XYZ to a friend or colleague?
    2a. What is it that company XYZ does well to earn your recommendation? (if a promoter)
    2b. What is it that company XYZ would need to do to earn a better recommendation? (if a passive or detractor)

    I could care less about the score. The score is for the scoreboard and for executives to say “look how big my c*ck is!” to their colleagues. It’s an executive ego metric; most CEOs have no idea why they have promoters, passives or detractors, they simply know their score. From an operations viewpoint, the value is in the feedback. Take your customer comments, categorize them, find trends and make operational improvements.

    I understand you have a stats background so the most value is probably in the #’s. I agree with you, who cares if someone says they will recommend. The bottom line only cares if they actually do. There is a way around this: any CRM or POS will tell you your customers’ behaviours and buying frequency. Also, ask the new customers you acquire how they heard of you and track that too. Giving your promoter customers promo codes to share with family and friends is the easiest customer acquisition program available. Tracking the promo codes (who sent them, who used them) will tell you how the bottom line is being affected. Another misconception about NPS is that it’s easy because it’s one question; it’s not, it’s hard work. I’m finding that people that don’t want to put in the work are saying that it isn’t effective.
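
    A rough sketch of that promo-code bookkeeping (the records and field names are made up; in practice the data would come out of the CRM/POS rather than hard-coded dictionaries):

    ```python
    # Made-up promo-code issue and redemption records.
    issued = {"FRIEND-ABC": "customer_101", "FRIEND-DEF": "customer_102"}  # code -> referring customer
    redeemed = [
        {"code": "FRIEND-ABC", "new_customer": "customer_201", "first_purchase": 250.00},
        {"code": "FRIEND-ABC", "new_customer": "customer_202", "first_purchase": 80.00},
    ]

    # Tie each redemption back to the promoter who shared the code.
    referrals = {}
    for r in redeemed:
        sender = issued.get(r["code"])
        if sender:
            referrals.setdefault(sender, []).append(r)

    for sender, refs in referrals.items():
        revenue = sum(r["first_purchase"] for r in refs)
        print(f"{sender}: {len(refs)} referred customers, ${revenue:.2f} in first purchases")
    ```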

    There is no holy grail metric or survey tool. Not NPS or your RPS. To go out and tell your readers to “go to hell” (whether you meant it or not) just tells me you’re a prick. Pick what works for you, ignore outside influences and just go out and build businesses. That’s it!

    Please answer two questions for me:
    1. Do you have a financial interest in RPS?
    2. Have you ever operated an NPS program? I mean, really operated one within a business, not consulted or advised?

    Thanks for putting the post together.

    Michel.

    • Michel: Thanks for commenting.

      What I don’t understand — and I hear it from other NPS supporters as well (see Denise’s comment) — is the statement “I could care less about the score.” NPS is all about the score. The whole methodology is based on asking the intention to refer question, and calculating a score based on the response. If you don’t care about the score, then what else is there?

      To answer your questions:

      1) Yes and no. I guess you could say that I have a financial interest in that the research I publish is sold by my company. But my business doesn’t live or die on whether or not RPS is adopted (unlike, say, a Member Loyalty Group, which is all about the NPS metric).

      2. No.

      • What else is there? For me, the score is like having a doctor say, “you’re healthy” (high NPS) or “you’re dying” (poor NPS) without telling you how to get better or what you’re doing well. This alone would not make me a fan of NPS, which is why I don’t believe the most value lies within the score.

        As I mentioned, the value for me is the opportunity to give customers/clients the ability to easily tell the company where they can improve or where they are doing well. From an operations mindset, I salivate at finding these trends to make operational improvements to earn higher customer loyalty, which is where the ROI and $$$ are.

        To answer my own question and be honest, I have a financial tie to NPS (my clients are on it) but by no means do I get defensive like others. I’ll be the first to admit that NPS has its challenges but many who say “it doesn’t work” haven’t gotten dirt under their nails and led a program before.

        I’m interested in learning more about your RPS.

        Thanks.

        M.

  6. I enjoyed chatting about this briefly at CUWCS, Ron. Our conversation inspired a post about how credit unions can overcome being a best-kept secret with the help of referrals, not NPS. If NPS is a scoreboard analytic, the same could be said for pointless metrics like the number of Facebook fans. Who cares about the number of fans? On the flip side, the number of loans or new accounts those Facebook fans generate… that info, and action, is priceless.

    To your point of moving intent of NPS to the action of referrals, I have detailed four steps a credit union could employ here: http://www.cugrow.com/turn-intent-into-action-with-these-4-digital-referral-steps

    If a credit union wants to move in this direction, it will take some work. It’s more than just throwing up a “refer a friend” link on a website and linking to a PDF to be downloaded; in that case it would be hard to track who is referring whom. It is also important to map out the digital referral process while including a strong referral incentive. Lastly, explore ways you can tap into personalized referral offers with transactional data to turn intent into action.

    At the end of the day, referrals must be built into the process. Digital retailers do a great job with this; however, neo-banks/FinTech companies are now getting into the game. Moven and Simple have built referrals into their lead generation process, and I just published a blog post about how Coin has built referrals into their go-to-market strategy:

    http://www.cugrow.com/in-depth-digital-ux-review-of-coin-for-credit-union-executives

  7. Great article… couldn’t agree more. Also, even if I do respond with a 9 or 10 and am a ‘promoter’ of your institution, my ‘friends’ and ‘colleagues’ to whom I promote you may not be desirable to you or in any way a segment you would target. My promotion, therefore, is not only useless, but possibly detrimental. Also, although I obviously competed against NPS while I managed the banking practice at JD Power, there WAS one ‘intention’ in which I did see strong value. We asked customers ‘how likely are you to switch banks in the next 12 months?’ Of those who indicated they were either ‘likely’ or ‘very likely,’ 35% actually DID end up switching. Translation: customers may or may not stay with you even if they like you, but once they have it in their minds that they want to leave, 1/3 actually do. Identifying customers who self-identify as ‘at risk’ or have one foot out the door becomes far more critical than identifying those who profess a desire to sing your praises some day.

    • Great point about the “quality” of the referral, Mike. Your JDP question really calls into question the whole Net Promoter claim regarding the predictive quality of the NPS question. If another question — i.e., “how likely are you to switch?” — is a better predictor of actual behavior, then THAT’S the question to ask, not referral likelihood.
