As a prosperous year for management consultants comes to a close, January’s harsh commercial reality stares us in the face. We work in an ever more competitive market with customers becoming much smarter in their use of consultants and expecting a better service. A key selling point for today’s consultants is to have not just a strong track record, but also to have available an effective suite of methods, techniques and tools to apply on the client’s behalf. It is no longer sufficient simply to offer advice, no matter how authoritative; it is essential to be able to demonstrate the strength of the advice.
Consultancy requires strong analytical capabilities supported by the right tools, many of which are computer based. Daily we are swamped with advertising for new IT and business methods, all of which claim to revolutionise the management of change. But which are worthwhile? To investigate every one is unrealistic. We need to see through the hype.
Software provides the key tools today in management consultancy. The right products can make the difference between failure and success, but one key principle is that the tools must support the method and not the converse.
Consider a typical consultancy project: a three-month study to advise an insurance company client on the best structure for a new telephone call centre which will be contacted by customers responding to a media advertising campaign. The client currently sells 10 types of policy and, while it is expected that there will be some 5,000 calls per day, the level of interest in each product is speculative. The client mandates a service level in terms of call waiting times; it naturally wants the most economic solution but also needs to know if a slight reduction in service levels might lead to substantial reductions in operating costs.
So how could we structure the project? First we might design the detailed call response procedures to be followed for each of the 10 different products.
These could be a series of questions put to the customer, with a set of rules determining what follows depending on the answers given. For example, a level of cover above a certain threshold might require more details on the types of risk; or, if a caller has been refused similar cover by another insurance company, a supervisor might have to be consulted.
We expect that 20 per cent of the calls will lead to a sale, 25 per cent will result in a further customer call later and the rest will produce no definite outcome.
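As a sketch only, the branching rules and outcome proportions above might be expressed in a few lines of Python. The cover threshold, step names and function names are invented for illustration; only the 20/25/55 per cent outcome split comes from the text:

```python
import random

# Hypothetical sketch of one product's call-response rules. The threshold
# figure and step names are assumptions, not taken from any real procedure.
COVER_THRESHOLD = 250_000  # assumed level above which extra questions apply

def route_call(cover_requested, refused_elsewhere):
    """Return the list of steps a call follows under the branching rules."""
    steps = ["standard questions"]
    if cover_requested > COVER_THRESHOLD:
        steps.append("detailed risk questions")
    if refused_elsewhere:
        steps.append("refer to supervisor")
    return steps

def call_outcome(rng):
    """Sample an outcome: 20% sale, 25% later call-back, 55% no outcome."""
    return rng.choices(["sale", "call-back", "no outcome"],
                       weights=[20, 25, 55])[0]

rng = random.Random(1)
print(route_call(300_000, refused_elsewhere=True))
print(call_outcome(rng))
```

A full model would hold one such rule set per product, with the outcome proportions eventually calibrated against real call data.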
We need to document these procedures in such a way that they are comprehensively described. Flow charts could be created for all the possible paths through each procedure for each product. This is an excellent way of both designing and visualising the call processes and there are many good software packages that can be used, some with linking mechanisms which aid navigation around the whole process.
With the procedures defined and charted, we now need to determine how the call centre would function as a complete process. This means we need to know how each call is likely to progress through its procedures in terms of both stage and time, and how it will relate to other calls when competing for resources. We know the service level targets and the likely times for each stage in our flow charts; we have to make assumptions about when during the day calls will be received. How can we predict how the whole system needs to be structured and resourced, and how it will operate? We need a model that closely represents the live system.
Modelling is widely used but under-exploited. We mentally create models as we envisage how a process will work, and each flow chart is an elementary model. We need to know how the real-life dynamic system will behave; if we can create a realistic dynamic model, we can use it to predict how our live system would behave in circumstances not yet experienced for real. Flow charts are fine for the purposes described earlier but are inadequate as a dynamic modelling method because they lack a temporal aspect. We need to apply call arrivals to the system and dynamically see their flow through the entire process. So have we wasted our time creating paper or even computer-based flow charts? The answer is yes, if we could instead have used a dynamic modelling tool that builds the flow charts itself and then incorporates them in a model of the whole system.
So how do we build a dynamic model? Spreadsheets can be useful but they offer only a “snapshot” view of the process. An alternative option might be a system dynamics model. This is one type of simulation modelling and certainly has a place in business process modelling. However, it is more suited to reproducing systems in which flow is continuous and is less appropriate for a system, such as our example, where there are “discrete” items such as calls, callers, staff, telephones and so on. System dynamics models tend to be used more for strategic level issue analysis.
We might also consider linear programming but this requires advanced computer programming skills and has limited applicability.
This leads us to the only technology that would completely meet our requirements – discrete event simulation modelling. This combines the relevant capabilities of flow charts, spreadsheets, some elements of system dynamics and linear programming. The models are dynamic and produce the closest representation of a live system: observation of their behaviour should be the most accurate prediction of the real thing.
These simulation models have another key advantage: they can reproduce the variation in performance that is normal in processes, especially those in which humans play a role. In our project no two calls will be exactly the same; some customers will have a clear idea of what they want while others will require extensive consultation and explanation. Nor will any member of staff be perfectly consistent when performing repeated tasks. These variations are key factors in our modelling because we have to maintain, as a minimum, a constant level of service. We will have to design a system which meets this not only when things are going well but also when dealing with more difficult callers at the end of a long, hard Friday when the office is hot and staff tempers are getting ragged! We need to quantify the ability of the process as a whole to tolerate these variations while maintaining service levels. We need to know the probability that things will go well and, when they do not, how likely that is and by how much they will go awry. This is the prediction of risk in our process. In simulation models these performance variations are modelled using statistical distributions.
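To make the idea concrete, here is a minimal, hedged sketch of a discrete event simulation in Python using only the standard library. The staffing level and arrival rate are illustrative assumptions, and the 2-4 minute service time is the distribution discussed in this article; a real model would carry the full per-product procedures:

```python
import heapq
import random

def simulate_day(n_agents, n_calls, mean_gap, rng):
    """Minimal discrete-event sketch of the call centre: calls arrive at
    random intervals, queue for the first free agent, and take 2-4 minutes
    each. Returns the waiting time (minutes) of every call. All parameter
    values are illustrative assumptions, not figures from a real study."""
    t = 0.0
    arrivals = []
    for _ in range(n_calls):
        t += rng.expovariate(1.0 / mean_gap)  # random inter-arrival gap
        arrivals.append(t)
    # Represent each agent by the time at which they next become free.
    free_at = [0.0] * n_agents
    heapq.heapify(free_at)
    waits = []
    for arrive in arrivals:
        agent_free = heapq.heappop(free_at)   # earliest-free agent
        start = max(arrive, agent_free)       # queue if all agents busy
        waits.append(start - arrive)
        service = rng.uniform(2.0, 4.0)       # 2-4 minute call, as in text
        heapq.heappush(free_at, start + service)
    return waits

rng = random.Random(42)
waits = simulate_day(n_agents=35, n_calls=5000, mean_gap=0.1, rng=rng)
print(f"average wait: {sum(waits) / len(waits):.2f} min")
```

Assigning each call, in arrival order, to the earliest-available agent is equivalent to a single first-come-first-served queue served by a pool of identical agents, which keeps the sketch short without changing the waiting times.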
This may sound intimidating but the concept is really very simple. Let us assume that the first four questions in the procedure could take from two to four minutes to complete. Each time a call enters the procedure the model needs to know how long these questions will take on this occasion. It refers to our “distribution” of not less than two and not more than four minutes. It chooses a value for this occasion by referring to an endless stream of random numbers whose values lie between zero and one (to as many decimal places as the computer has precision). It reads the number in the stream that follows the last one it read; if this number is zero then the time taken to complete the questions is two minutes, and if it is exactly one then the time is four minutes. If the value lies between zero and one then the time applied in the model will fall proportionately between the two extremes of two and four minutes. While this example is quite simple, the variation in possible times is quite wide: 100 per cent from minimum to maximum.
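The sampling rule just described amounts to simple linear scaling. A short Python sketch (the function name is invented for illustration) makes it explicit:

```python
import random

def question_time(r):
    """Map a random number r (0 <= r <= 1) proportionately onto the
    two-to-four minute range: time = 2 + r * (4 - 2)."""
    return 2.0 + r * (4.0 - 2.0)

print(question_time(0.0))   # -> 2.0 (the minimum time)
print(question_time(1.0))   # -> 4.0 (the maximum time)
print(question_time(0.5))   # -> 3.0 (the midpoint)

# In practice the model draws r from its random-number stream:
rng = random.Random(7)
print(round(question_time(rng.random()), 2))
```

Other shapes of distribution (for example, ones that make mid-range times more likely than the extremes) follow the same pattern with a different mapping from r to time.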
Our call centre process would comprise many procedural paths in which both times and routing would be determined by this use of random number reference against a set statistical description of possible outcomes.
We need to be cautious: a single simulation run of our model could produce misleading answers, because it shows just one of the possible outcomes, and not necessarily the most likely one. We need to repeat the simulation many times, ensuring that each repetition, or “replication”, uses different random numbers. We then calculate the average result over all the replications to find the most likely prediction. We also gain extra, important data: we know the extent and likelihood of variation in the result caused by these variations in timing and routing, and this quantifies our risk.
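The replication procedure can be sketched as follows; for brevity the “model run” here is just a stand-in calculation rather than a full call-centre simulation, and the replication count is an arbitrary choice:

```python
import random
import statistics

def one_replication(rng):
    """Stand-in for a full model run: here, simply the average of 5,000
    sampled question times of 2-4 minutes each. A real replication would
    run the whole call-centre model with a fresh random-number stream."""
    return sum(rng.uniform(2.0, 4.0) for _ in range(5000)) / 5000

# Run many replications, each seeded differently so it sees different
# random numbers, then report the mean result and its spread. The spread
# is the measure of risk discussed in the text.
results = [one_replication(random.Random(seed)) for seed in range(50)]
print(f"mean result: {statistics.mean(results):.3f} min")
print(f"spread (standard deviation): {statistics.stdev(results):.4f} min")
```

The mean gives the most likely prediction; the standard deviation (or, better, the full spread of replication results) shows how far individual days could stray from it.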
To illustrate this, our model might predict a 95 per cent probability that all calls will be accommodated within the service level targets. If this is only 95 per cent likely, then what might happen for the other 5 per cent of the time? The model will provide this information, again as a probability. In our case it might predict a 3 per cent probability that 10 per cent of the calls will miss the limits by 10 seconds, and a 2 per cent probability that the shortfall will be 20 seconds. We can provide the customer with information on which to base commercial decisions. The next step would be to examine alternatives to our structure, for example how a reduction in staff numbers would alter performance while saving costs. The trade-off between performance and cost can be investigated with many possible changes to the model.
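As a hedged illustration of how such probabilities would be read off the replication results, here is a toy calculation whose data is invented to match the figures quoted above:

```python
# Invented per-replication shortfalls: the number of seconds by which the
# worst calls missed the service-level target in each of 100 illustrative
# replications (0 means the target was met on that run).
shortfalls = [0] * 95 + [10] * 3 + [20] * 2

def probability(condition, data):
    """Fraction of replications satisfying the given condition."""
    return sum(1 for x in data if condition(x)) / len(data)

print(probability(lambda s: s == 0, shortfalls))    # -> 0.95 target met
print(probability(lambda s: s == 10, shortfalls))   # -> 0.03
print(probability(lambda s: s == 20, shortfalls))   # -> 0.02
```

Re-running the same calculation on replications of a modified model (fewer staff, say) gives the performance side of the cost trade-off directly.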
We would continue our lines of enquiry and also identify which elements of our model have the greatest influence on the overall performance and therefore demand the most accurate data. But surely this is: “garbage in = garbage out + wasted time + wasted cost + total frustration”? Well, possibly but not if care is taken to ensure that the impact of low quality data is measured. We can still produce valuable results even with poor data if proper analytical disciplines are observed. If the data is poor for a simulation model it would also be poor for any other modelling technique.
So if simulation modelling is so good, why aren’t all management consultants using it? The primary reason is that until recently simulation modelling tools have required comprehensive mathematical and programming skills. Models were the domain of the scientist, not the manager: they were complex and expensive. Recent software advances have led to products that are powerful yet easy to use, require no programming skills and guide the user towards usable results. These improvements have been the vital step in bringing these tools to the manager’s desktop. Colour graphical displays mean that models can be quickly and easily built from pre-prepared building blocks, even for complex processes.
Delays and bottlenecks are immediately obvious; animated displays are an excellent way of communicating behaviour and are increasingly important as a medium for conveying proposals to customers.
So if the technology is now within reach of managers, what are the likely costs? In 1997 product prices varied widely, from a few hundred pounds to more than £20,000. As a rule, the more expensive products are more powerful but require considerably more skill and computer literacy to use. There is also a substantial training bill to be borne in mind.
Be modest in your initial aspirations and don’t spend more than £500 on your first tool. Tools such as Simmit! can comfortably model a call centre with 400 staff and more than 50 products. Apply the “20/80” rule wisely: you can always upgrade later, and a modest tool will pay for itself handsomely in your first project. You will also achieve success earlier and develop confidence in the technology. Beware the “lite” versions of more complex tools: they tend to be less capable than their full-blown brethren but just as difficult to use. Also bear in mind that most of the tools on the market came from the automotive industry, so seek a programme purpose-built for your type of activity. Be cautious about any supplier who claims to have “unique” technology: this is unlikely. What best differentiates products is their features, ease of use and price.
What are the major risks? The most common is not thinking through carefully how you will model your system and modelling at an inappropriate level.
It is tempting to include detailed data because it is the most commonly available. The recommended technique is to model from the top level down, adding detail only when it proves necessary for better analysis.
So is simulation better than the real thing? It can substitute for expensive and disruptive pilot schemes, it lets you experiment quickly and safely and establish the bounds of commercial viability of new business structures and processes. It is an excellent medium for lateral thinking. But in the end it is, after all, only a mirror of reality, a support not a substitute for the competent management consultant.
John West is managing director of Analysis International, specialists in the development and use of business simulation modelling.