Driving Adoption of Your Lead Scoring Solution

There is no shortage of data on why companies should use lead scoring. More and more B2B organizations are moving to a data-driven, often predictive, approach and finding value in it. Some studies suggest that 73% of companies are either using lead scoring or planning to do so [1], with more than half of B2B firms (58%) currently using it.

Interestingly, there is far less dialogue about ongoing usage and true adoption. Anecdotally, I hear from marketers, service providers, and software providers alike that while there is initial excitement about a lead scoring solution, it often wanes. In other cases, I hear about teams who are skeptical and resistant from the start. As a result, processes and tools often go unused, leaving revenue and efficiency opportunities on the table and falling short of a positive ROI on the initial investment in the solution.

There are a number of things that you can do to maximize the chances of success of your lead scoring solution and its long term value and adoption.

How to Achieve Lead Scoring Success

1. Get aligned. Regardless of who is sponsoring the initiative to build or revise your lead scoring solution, ensure that those who need to use it and those who benefit from it have the opportunity to contribute to the roadmap, design, implementation, and roll-out. Ideally this goes beyond one or two meetings or simply informing people of what is happening. A good solution first gathers input on what sales believes differentiates good leads from bad. Where the solution lines up with that intuition, highlight it. Where it does not, dig deeper: the intuition could be wrong, but the mismatch could also point to a flawed scoring approach or process and a need to revise the solution. Going through this discovery and alignment process will improve the quality of your solution and generate the buy-in that is critical to your longer-term success.

2. Define your objectives. Be clear on what you’re trying to accomplish and consider defining clear success metrics coupled with a roadmap for how you will both test and roll out your lead scoring solution. It needn’t be an overly involved plan; something simple that is clearly defined will do.

3. Pilot. Start small and test your lead scoring solution with a subset of users, measuring what happens relative to a comparable control group. The goal of the pilot is two-fold: to see how the solution performs (and then refine it as needed) while also building the business case for roll-out.

Ideally your steering committee and subsequent pilot group include both leaders and end users of the solution who genuinely believe it can work; this maximizes your chances of success. The pilot also allows you to capture quantitative results on how the solution performed, and how it would likely perform upon broader roll-out. Be sure the test design is well thought out in terms of the volume of leads in both the test and business-as-usual control groups, and that you stick to a clear set of objectives and success metrics. Pay attention to your sampling approach as well: if your pilot test group is drawn from higher performers or a single region, you will likely need to adjust for this when selecting your control group and/or measuring your results.
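The pilot measurement described above can be sketched in a few lines. This is a minimal illustration, not a full experimental design: the group sizes and conversion counts below are hypothetical placeholders, and a real analysis would also check statistical significance and adjust for sampling differences between groups.

```python
# Minimal sketch of measuring pilot lift against a control group.
# All numbers below are hypothetical placeholders.

def conversion_rate(conversions: int, leads: int) -> float:
    """Fraction of leads that converted."""
    return conversions / leads

def lift(test_rate: float, control_rate: float) -> float:
    """Relative improvement of the test group over the control group."""
    return (test_rate - control_rate) / control_rate

# Hypothetical pilot: 500 leads in each group.
test_rate = conversion_rate(conversions=70, leads=500)     # 14% converted
control_rate = conversion_rate(conversions=50, leads=500)  # 10% converted

print(f"Observed lift: {lift(test_rate, control_rate):.0%}")
```

Keeping the comparison this explicit makes it easy to report one clear headline number from the pilot to the stakeholders deciding on roll-out.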

4. Roll out with a red carpet. It is critical that the new process isn't simply pushed on people. Make it a big deal and something to build momentum around. If all goes well, you have pilot users who have become champions for change and believe this is a better way. At this point, you also have results from your test that provide quantifiable proof of the return-on-investment potential. Leverage your champions and build a deliberate roll-out and communication plan so that end users are clear about what is happening and excited about the potential, rather than bristling at change. Ensure that you clearly address the "how does this help, and what does it mean for me?" sentiment. Translate your broad pilot findings into use cases that are relevant and meaningful for an individual rep: The pilot yielded a 40% lift. So what? That means on a base projection of $10,000, you have the potential to realize $14,000 using the new process, which would provide an incremental commission of $X. The metric need not be a financial one so long as it is motivating.
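The per-rep translation above is simple arithmetic, sketched here for concreteness. The 40% lift and $10,000 base come from the article's illustration; the 10% commission rate is a hypothetical assumption standing in for the "$X" a real plan would supply.

```python
# Sketch of translating an aggregate pilot lift into a per-rep example.
# The commission rate is a hypothetical assumption.

def rep_projection(base_revenue: float, lift: float, commission_rate: float):
    """Project a rep's revenue under the observed lift and the
    incremental commission that lift implies."""
    projected = base_revenue * (1 + lift)
    incremental_revenue = projected - base_revenue
    incremental_commission = incremental_revenue * commission_rate
    return projected, incremental_revenue, incremental_commission

projected, inc_rev, inc_comm = rep_projection(10_000, 0.40, 0.10)
print(f"Projected revenue: ${projected:,.0f}")          # $14,000
print(f"Incremental revenue: ${inc_rev:,.0f}")          # $4,000
print(f"Incremental commission: ${inc_comm:,.0f}")      # $400
```

Framing the pilot result in these individual terms is what makes the "what does it mean for me?" answer tangible.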

5. Train. Ensure that end users are trained on how to use the solution, the process, and any necessary software tools. Do this as close as possible to the time they will need to start using the solution. Remove outside distractions during the session so your users are engaged, and hold regularly scheduled training sessions for new employees and re-training for existing ones. We have all had software systems that we are supposed to be adept at using but simply aren't, for a number of reasons. Make it easy for your users to rectify this.

6. Don't set it and forget it. This goes for the solution itself and for your training and roll-out. Have a maintenance plan. Stay current, keep talking to your users to gather feedback, and react and communicate accordingly. Similarly, if you see a decline in results, dig into why: adoption might be waning, or the solution may be declining in effectiveness. Either way (and these two things are highly related), you need to address it. For the scoring itself, expect to re-evaluate and rework the attributes or their weighting at least a few times a year. The market, your business, and your buyers are constantly evolving; your lead scoring needs to keep pace.
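One practical way to make that periodic rework easy is to keep the attributes and their weights in a single configuration rather than buried in code or process documents. The sketch below is illustrative only; the attributes and weights are hypothetical, and a predictive approach would learn them from data rather than hand-assign them.

```python
# Minimal rule-based lead scoring sketch. Attributes and weights are
# hypothetical; the point is that they live in one place you can
# revisit and rework a few times a year as the market evolves.

WEIGHTS = {
    "requested_demo": 40,
    "opened_pricing_page": 30,
    "company_size_fit": 20,
    "target_industry": 10,
}

def score_lead(attributes: dict) -> int:
    """Sum the weights of the attributes this lead exhibits."""
    return sum(w for attr, w in WEIGHTS.items() if attributes.get(attr))

lead = {"opened_pricing_page": True, "requested_demo": True}
print(score_lead(lead))  # 70
```

Because the weights are isolated in one structure, a quarterly review can adjust them (or add and retire attributes) without touching the scoring logic itself.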

If you're starting this journey, this roadmap should help you begin the right way. But what if you are already down the path and recognize that adoption and support of the solution are waning? It's not too late. The same tips apply: dig into what's going on. Do your discovery and get aligned on the facts. If the scoring approach needs an update, make it, then pilot the new methodology and see it through to your communication, roll-out, training, and maintenance plan.

If your discovery shows that the solution is in fact working and does not need an update or revision, then drill into why certain users aren't supportive, while also identifying your power users and solution champions. Work with these groups to define revised communication and training plans, and consider whether there are ways to tie solution usage to performance assessment and compensation where needed.

[1] Eloqua Topliners Community poll, February 2014, via LeadLizard.