Customer satisfaction

You can imagine the scene: four people are sitting in a restaurant, bitching about the food, the decor, and above all the godawful service. Suddenly the conversation halts as the maître d’ approaches, arching a supercilious eyebrow:

“Is everything all right, m’sieu?” Immediately the table breaks out in a flurry of delighted comments until the blessed man moves out of earshot, whereupon they can resume bitching about the food, the decor, and above all the service.

Sound familiar? Measuring customer satisfaction is anything but a straightforward process, but in consultancy, where you could argue that there is very little objective product outside the client’s perception of the problem, getting a handle on what your customers think of you is vital.

For the big consultancies, the task is made a little easier by the fact that the volume of work allows some use of statistical sampling. This is important, because being constantly interviewed for customer satisfaction surveys can itself become something of an annoyance to a busy client.

Nick Bear, director of quality assurance for KPMG Management Consultancy, is aware of the need to maintain a balance between depth and breadth of information.

“An independent research organisation does telephone surveys for us on an individual engagement basis,” he says. “But we’re moving away from that into deeper interviewing, again using people who are more independent. There are a couple of specialist firms who employ a number of people, some of whom are ex-KPMG and so understand the issues.”

If a problem arises, it may need to be dealt with on an individual basis: a measure of objectivity can be introduced by bringing in a KPMG partner from outside the engagement to talk to both clients and partners. Or there may be actions that affect the whole practice: Bear cites a situation a few years back where complaints about the difficulty of contacting staff led to a wider adoption of voicemail.

Just as important as identifying strengths and weaknesses is disseminating the information. KPMG produces a monthly electronic newsletter featuring quotations – both positive and negative – from client interviews.

Bear cautions that, by their nature, customer surveys may paint too dark a picture: “What tends to get the focus is where things don’t go right,” he says. “Telephone interviews only ask for follow-up reasons if the score is below par.”

Believing that clients would hardly enjoy having telephone interviews extended into lengthy fishing-for-compliments sessions, Bear would rather rely on the deeper insights of the face-to-face interview.

“There’s no doubt they cost us a lot of money, but the feedback is very detailed – and can lead to further opportunities with the client,” he says.

However, this approach can only be used with a selection of clients, so the process has to be continually under review to ensure a broad range of assignments is covered.

Smaller consultancies may not be able to mount such a complex exercise, but they have the advantage of much more direct feedback from clients. French Thornton is a smaller consultancy specialising in programme management of large projects, which has drawn many of its personnel from much larger firms. Partner Ed Haysler comments on the difference: “When I was at a big consultancy firm they had a form, but it was infrequently filled in unless someone put in a lot of effort,” he says. “Here we want the partners to be actively engaged in the project.”

This doesn’t mean the firm doesn’t also ask for feedback:

“We believe very much in straight talking to our clients and our clients being straight talking back to us,” he says. “We have complete openness: clients are invited to come along to all our meetings.”

Openness is also important within the firm, says Haysler: “If there were any hint of client dissatisfaction, we’d call in another partner to investigate,” he says.

To help the more junior consultants, French Thornton has devised the “red-face” test:

“If you have any concerns, think about the ‘red-face test’,” says Haysler.
“Imagine standing in front of the client and saying exactly what you’ve done.”

Haysler says this is particularly useful in cases where there appears to be a conflict of loyalty between the client and the firm:

“In all cases the loyalty is to the client: we’ve got to have references, we’ve got to have positive feedback,” he says. “If you do what’s right for the client it’s inevitably what’s right for the consultancy.”

For the individual consultant, the problems of sorting out the objective and subjective get even thornier.

David Firth (a one-time contributor to Management Consultancy’s Corporate Fool column), who combines human change and communications consultancy, describes the extremes:

“The feedback I get covers a spectrum,” he says. “The most formal type I get is when I’m speaking, and you’re given numbers: four out of five for content and so on. At the other end of the spectrum there are people who continue to use me again and again and can’t explain why they like me.”

The one-to-one nature of much of Firth’s work can make getting feedback difficult: it’s not easy or necessarily productive to stand in front of a client and say: “Look me in the eye and tell me how much you like me.”

“I’ll follow up on a different channel, perhaps send an e-mail, then I can read a lot into what they are saying,” he says.

Even very positive feedback can have its drawbacks, says Firth, who describes himself as his own worst critic. He gives as an example a seminar where he felt he had performed very badly, but the feedback was all very positive: “you’ve helped us see the light” and so forth.

“I thought, you’re either lying or you have very low standards!” he says. But the experience also became a valuable source of insight.

“It’s always fabulous to have lots of feedback, but you have to read into it.”

He recalls a session with a clearing bank client where he was acting as “corporate fool”: “I said to them, all this customer service stuff is a load of rubbish – customers are mad!”

He cites a furious letter he banged off to a hotel chain after discovering a “dirty teapot” in his room (doesn’t he know you should never wash a teapot?).

“OK it was my feedback – but my reaction wasn’t rational,” he says. “It was mainly because I’d just had a row with my wife.”

It’s important therefore to interpret customer feedback carefully, not to hide from it but also not to accept it uncritically. (Firth floats the entirely foolish and therefore intriguing notion that consultants too should get to give feedback on how satisfied they are with the clients …)

Ultimately, customer satisfaction has to be a two-way street: it shouldn’t be a matter of handing the direction of the firm over to a pile of forms and interview quotes. Consultancy is increasingly about relationships, and relationships are surely about a meeting of minds, not simply one partner endlessly adapting to the other’s needs. If things aren’t working out then, to paraphrase British Rail, maybe they’re the wrong kind of clients.

Deloitte & Touche Consulting

Going face to face with clients

Peter Allred, head of process consultancy at Deloitte Consulting, says the firm’s satisfaction monitoring process is less to do with individual projects and more to do with longer-term client relationships. “Very few projects are one-offs,” he says, “but if they are we aim to ring the client within a month of completion. On the whole though, we are tending to monitor on-going satisfaction on large assignments.” The consultancy is organised along industry and service lines. It has three major lines: strategy, process and technology.

The firm monitors client satisfaction in two ways, favouring face-to-face interviews in both. A quality partner is allocated to major clients and undertakes regular reviews with those clients, every six months or annually. In addition there is a full client service review every year, led by a partner, independent of any project being undertaken for the client. Says Allred: “The partner and his team interview the client about the quality of the work done and what the client thinks of the consultant team doing that work.”

He says the review is very comprehensive, asking clients some 30 or 40 questions about progress, staff members and their relationship and communications skills.

“The quality partner, on the other hand, takes a more technical angle on the quality of the work,” he says. The firm has a quality management system, and ISO 9000 checks and balances are used within the annual quality report on each engagement.

But, says Allred, “consultancy is very much a people skills thing: it is about relationships and chemistry.” If a client is dissatisfied the staff on a team may have to be changed, he adds.

The firm uses a central allocation system to put together optimum project teams, utilising its global resources. Says Allred: “For example, for a banking job we did recently we brought someone with specialist skills in from the US.”

The business has changed, he adds, and some of that change can be attributed to feedback from clients. “Deloitte Consulting is very much organised in client teams,” he says. “It is client-facing all the time – and we have very much clearer lines of communication.” The firm’s collegiate working style has developed out of client feedback, he says. “A lot of consultancies still work on a command and control basis but we find clients like to work with consultants in teams. They want a lot more ownership of projects, giving staff buy-in to the change process.”

Deloitte Consulting’s focus on client relationships is symptomatic of the way the consultancy business has developed, says Allred. The firm takes on very few new clients, he says, with roughly 70 to 80 percent of work in any one year coming from existing customers. “Most of the one-off jobs go to the smaller players,” he says.

Ernst & Young

Surveying the way the land lies

Three and a half years ago Ernst & Young’s consulting practice introduced a client care programme under which all clients receive client satisfaction surveys. Says Mike Cullen, managing partner, major accounts, industry sectors and customer service: “From each survey the questionnaire is developed and adjusted and is sent out with a personal letter from the managing partner. It is off-line from the engagement partner because we don’t want to get a rose-tinted view.” On long projects the survey goes out quarterly and is followed up by a visit from a member of the MC management team every six to nine months.

The survey, which features a scoring mechanism, has 28 questions in five categories and pulls no punches. Under “Quality of service” it asks “Did consultants give straight, unequivocal opinions when asked?”, while under “Adding value” it takes the bull by the horns: “Were our fees appropriate – did you get value for money?”

Says Cullen: “It is very important to ask about value for money – consultancy is not cheap and you can’t hide it. We have never regretted putting it in although we were nervous about it at the time.”

Other categories include project delivery, communications and client relations. “We also ask questions about how we can improve our service and whether clients would recommend us to others,” says Cullen.

“Most clients respond within a week or so and if we get very good reports they go onto the client satisfaction database as a benchmark,” he adds.

Any specific queries are actioned directly, followed up internally and the client informed of the outcome. “We look at team membership and constitution but a bad team mix is often too easy an excuse. We have to look at our own managing process and the root cause and address it as a business design need.”

The firm has re-evaluated the way in which it engages with clients and how it can demonstrate value in a consistent way, says Cullen.

“It is all about communications – we don’t want the client to feel in the dark. A lot of clients we work for will be answerable to their own chief executive and we have to provide our sponsor with the capability to answer the boss’s questions. We have to make sure consultancy teams are aware of that.”

The firm takes one-off and on-going projects equally seriously. “Our whole philosophy is client-oriented and client care is irrespective of size,” says Cullen. “One-off projects often turn out to be feasibility studies or market-testing projects anyway.”

He estimates that 75 to 80 percent of the firm’s work in any given year comes from people who have been clients in the previous year.

As a result of its customer focus and the feedback it gets, says Cullen, the firm has looked closely at the design and performance measures of its own organisation. Eighteen months ago it centralised its resourcing and skill-based function, taking resourcing out of the hands of the engagement partners.

This was part of the redesign of the business both nationally and across Europe, changing the parochial view for the global one.

“Now we are operating on best team principles,” says Cullen. “Previously, for example, on a Birmingham assignment the client might question why we hadn’t brought in a specialist from Glasgow. Now all our people are measured as performers and the central resource function can identify the people with the best skills across Europe and their availability and we can put together the best team for the job.”

Scientific Generics

Two heads are better than one

For Simon Davey, director, business innovation division of business and technology consultancy Scientific Generics, the very best measure of customer satisfaction is whether the client comes back for more.

“It is the ultimate compliment – and we monitor it as one of our key metrics: currently it is running at 80 percent.”

On any project, large or small, says Davey, a senior person is appointed as a project reviewer who monitors internal quality standards and is the customer contact both during the project and after it is completed.

On major accounts there is also a key account manager who must fully understand the client’s business situation and requirements, he says.

“We try to separate the monitoring of quality from that of the overall client relationship,” says Davey. “Because of the spectrum of skills applied to a project the key account manager may not be technically qualified to judge the quality of the work. For example, for one client which makes drug delivery devices we have done BPR work, financial due diligence work, programme management and in-depth technical development work. As the key account manager for that client, I cannot judge the quality of some of the intensely technical work because business process work is my speciality. In order to gauge how well we performed we need a project reviewer with expertise in the field.”

After a project ends there is a major team debrief, the results of which are fed back into the annual review process for the project manager and team members.

Client feedback is gathered in different ways depending on the nature of the organisation. “Sometimes we interview the key client contact but for larger organisations where there are a range of contacts the process can be more structured, perhaps using a questionnaire.”

The company has also started to explore post-project workshops as a way to thrash out what the learning points have been.

“The sessions are attended by the client project manager and the person who owns the outcome of the assignment – that is very important – our project leader, the project reviewer, key technical contributors to the assignment and client staff involved in the main technical points of interaction. The key account manager would not usually attend: we would want the team to do the learning.”

The problem here is time, says Davey, because everyone is busy. “But I would advise anyone interested in customer satisfaction to make the time for a post-project workshop. In my experience they are extremely useful to both sides.”

One of the things the company has learnt over the last three years, he says, is the need to be much more sensitive to the nature of the interaction that the customer actually wants.

“It can be tricky at the outset because it is difficult for the client to judge. Our input can range across a spectrum, from expert, a pre-prepared solution, to facilitative, helping the client explore the problem and develop their own solution. If we present too expert a stance it is seen as prescriptive and it is difficult to get acceptance within the client organisation; a too facilitative approach, on the other hand, is seen as content-free. It is a question of deciding where to position ourselves on the spectrum for each client.”

This is more of an issue for business and technology consultancies like Scientific Generics than for the Big Five, he says.

“We provide a really quite extensive range of input. Some of our staff are exceedingly expert in technical areas – sometimes their services may be required and sometimes not. It’s a question of getting the interactive style of working right.”

Mick James is MC’s consultant editor and Mary Huntington is a freelance journalist.