posted 25 Feb 2003 in Volume 6 Issue 6
Benchmarking KM at British Energy
Shortly after the publication of the British Standard Guide to Good Practice in Knowledge Management PAS 2001 last year, it was used to benchmark the KM activities of British Energy Power and Energy Trading as part of the research for an MSc dissertation. Simon Carpenter and Sarah Rudge describe the process and the lessons learnt.
At the time the British Standards Institution (BSI) published its good-practice guide to knowledge management, PAS 2001, we were a few months into the research phase of a work-based distance-learning MSc degree in information management with the School of Information Studies at the University of Central England. The plan was to benchmark the KM practices of British Energy’s Power and Energy Trading subsidiary, BEPET, against a readily available, published standard. Being reasonably priced, easily accessible and, most importantly, impartial, PAS 2001 fitted the bill.
A brief guide to benchmarking
Several different types of benchmarking exist. The two most common are internal (ie, where two or more business units or processes within the same organisation are compared and assessed), and external, where comparisons are made with outside organisations. Within the latter, a further three distinct approaches have emerged:
- Competitive benchmarking;
- Functional benchmarking;
- Strategic benchmarking.
Competitive benchmarking is where a comparison is made with a firm’s direct competitor. Functional benchmarking is where a search is made for external best practices, not necessarily in the same industry, which can be readily copied and adopted. Strategic, or core competence, benchmarking is where the critical components of an organisation’s success are assessed. We felt that functional benchmarking would be the most appropriate approach for the BEPET exercise.
In any benchmarking exercise, though, there are five basic components:
- Analysis;
- Data collection;
- Comparison;
- Results;
- Verification and maturity.
Stages one to three fell within the scope of our BEPET KM benchmarking exercise.
The BSI had begun looking at the possibility of publishing a KM good-practice guide in 1999, and in 2001 the first meeting of its knowledge-management panel took place. Subsequently, a committee was assembled and a draft report for discussion was published in the summer of 2001 as a first step towards developing a knowledge-management standard. The report was prepared by Dominic Kelleher and Simon Levene, two KM directors from PricewaterhouseCoopers, on behalf of the BSI, and is arranged into four main chapters, with a further five annexes containing suggested self-assessment tools. The chapters cover the rationale behind KM, some suggested approaches to it and the possible benefits to be gained by investing in it.
There were several reasons behind the choice of PAS 2001 for the BEPET KM benchmarking exercise, including:
- It is relatively current, and therefore takes account of the latest thinking and issues in KM;
- In time, it may well become a British Standard, a universally recognised performance measure;
- It was not prepared for KM specialists in particular, but for all companies;
- Unlike other potential benchmarking tools, it is generally available to all, not just to subscribers to a particular product or consultancy;
- It is broad in scope and inclusive of all the different approaches to KM;
- It is objective and not linked to a particular product or consultancy.
Our research showed that benchmarking KM, particularly using PAS 2001, was quite a novel concept. As PAS 2001 provides a comprehensive overview of KM and all its practices, and as we were working towards a fairly tight deadline, we felt we needed to focus on a few key ‘benchmarkable’ areas for the research. After looking around for something suitable, we decided to base these on the performance categories from the Teleos/Know Network’s annual Most Admired Knowledge Enterprise (Make) awards. Not only would these provide a basis for drawing information from PAS 2001, they also, through the recent winners, supplied a ready-made list of best-practice companies to further compare BEPET against. As such, the key areas used in the study were:
- Creating an enterprise knowledge culture;
- Top management support for managing knowledge;
- Developing and delivering knowledge-based products and solutions;
- Maximising enterprise intellectual capital;
- Creating an environment of knowledge-sharing;
- Establishing a culture of continuous learning;
- Managing customer knowledge to increase loyalty/value.
We felt that these categories were clearly relevant to the study overall, and so they were applied (successfully and quite easily) to the design of the benchmarking exercise. To further complement the study, a knowledge audit of BEPET was also carried out, and best-practice company profiles based on five of the award-winning companies were drawn up with the aim of matching them against the findings of the benchmark survey.
Benefits to be gained
There were many benefits to be gained by the organisation through the conduct of such an investigation. As PAS 2001 states, “Knowledge is now widely considered to be a company’s key resource, and its effective use is vital for business success.” One of the main benefits of the research to BEPET is the potential it brings to introduce best-practice KM activities into the division. Through reading PAS 2001, we believed the study could bring many improvements, such as:
- Better-informed and more effective individual employees;
- Improved team-working;
- Increased innovation;
- Better facilitated and supported continuous learning;
- Improved stakeholder relationships.
Using case studies of some of the Make awards winners would enable BEPET to benefit from the experience of others, and from the current advice of leading KM practitioners. The study also allows the close examination of an area of business practice not generally possible due to time and cost constraints. Importantly, it gives scope for further benefits to the rest of British Energy and the wider information and knowledge communities beyond the company as the findings are disseminated and published.
Collecting the data
The benchmarking exercise contained the following elements:
- Semi-structured group interview;
- Pilot survey;
- Survey of all staff in BEPET;
- Semi-structured follow-up interviews;
- Benchmarking process.
Semi-structured group interview and pilot survey
We decided to begin with a semi-structured group interview with a selected group of key knowledge users from within BEPET. The interview conversation was completely informal but based around a few pre-selected topics. We decided on this method with a view to gaining the advice and support of key members of staff, who were in turn drawn from across the company’s different departments. The main item discussed at the meeting was the pilot survey that had been distributed to the group a few days earlier. The resulting feedback from the group helped to modify the language and make it more intelligible to the non-specialist, and also suggested some of the issues that would be raised by the research.
The staff survey
A survey was chosen as the benchmarking method in order to gain as many perspectives and viewpoints on BEPET practice as possible. This was particularly important as the benchmarking process included many intangible elements such as culture and knowledge sharing.
Following the semi-structured group interview, 85 copies of the revised survey were distributed to staff in BEPET by hand, and a deadline of two weeks was given for recipients to complete and return their copies. The survey was printed in A5 booklet format in order to save money, with an introductory letter printed on the top page. We felt that an A4 format might create the wrong impression, as well as appearing a bit daunting. We also chose to print the survey on light blue paper so that it would stand out against the predominantly white background of the average in-tray. Finally, in order to encourage a higher level of response and more honesty in the answers we received, anonymity was guaranteed for those who wanted it, though the option was given for people to identify themselves and volunteer for a follow-up interview.
After a few days, a chaser e-mail was sent out to all staff in BEPET to encourage the return of completed surveys. The e-mail also mentioned that spare copies of the survey were available for those who missed the initial distribution. A few days after this, another similarly worded chaser was sent out, thus giving everyone who wanted one an opportunity to complete a survey form. A further five copies of the survey were subsequently handed out, bringing the total number of copies distributed to 90.
The survey consisted of 49 items divided into nine sections, seven of which were based on the Make awards categories. These sections included:
- The BEPET working environment;
- The BEPET culture;
- Our management;
- Information content management;
- Intellectual capital;
- Customer/stakeholder relations.
We took this opportunity to ask further questions that did not form part of the benchmarking exercise, but that we hoped would generate responses that would prove both interesting and illuminating. Two additional sections were therefore included in the survey. One was a small-scale knowledge audit entitled ‘The BEPET knowledge flows’. This focused specifically on staff perceptions of knowledge and related issues within the organisation. A further section was added to the end of the survey that allowed respondents to add any of their own comments if they wished to do so.
Within each of the seven main sections several statements were drawn from best practice, as identified by PAS 2001. The survey respondents were asked to record their level of agreement/disagreement with each statement according to a sliding scale. The scales were standardised to an anchored Likert five-point scale with ‘strongly agree’ and ‘strongly disagree’ at either end, and a ‘no opinion’ option to avoid misuse of the midpoint ‘3’. For benchmarking purposes, best practice would be a score of ‘5’. As an example, Section A, ‘The working environment’, from the survey is shown in figure 1, the statements in which were drawn from section 2.2 in PAS 2001, ‘Establishing the right culture for success in KM’.
Figure 1 – an example of a section of the survey handed out to staff
The survey response rate was 40 per cent. After all the responses were in, five semi-structured interviews with a sample of BEPET staff were arranged.
Not all best practices were included, particularly if they were obviously not relevant to BEPET, such as the existence of a chief knowledge officer. In cases like these it was assumed that a question had been asked and had received a ‘strongly disagree’ response, with the resulting effect on the benchmark score. The results were recorded in an Excel spreadsheet. We chose the Likert scale because we felt it was the most appropriate method for a group benchmarking exercise, and also the least time-consuming to complete and analyse.
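The scoring rules described above – averaging Likert responses per section, setting ‘no opinion’ answers aside rather than treating them as a midpoint, and counting omitted best practices as if they had received a ‘strongly disagree’ – can be sketched in a few lines of Python. The study recorded its results in an Excel spreadsheet; this is purely an illustrative reconstruction, and the sample figures are invented.

```python
# Illustrative sketch of the benchmark scoring described in the text.
# Responses use the anchored five-point Likert scale (1 = strongly
# disagree ... 5 = strongly agree); None marks a 'no opinion' answer,
# which is excluded from the average rather than counted as a '3'.

def section_score(responses, omitted_practices=0):
    """Mean Likert score for one survey section.

    omitted_practices: PAS 2001 best practices left out of the survey
    (e.g. the existence of a chief knowledge officer); each is counted
    as if it had received a 'strongly disagree' (score 1).
    """
    scores = [r for r in responses if r is not None]  # drop 'no opinion'
    scores += [1] * omitted_practices                 # assumed worst case
    return sum(scores) / len(scores)

# Invented example: one respondent's answers for a six-statement section,
# with one 'no opinion' answer and one omitted best practice.
answers = [4, 3, None, 5, 2, 4]
print(round(section_score(answers, omitted_practices=1), 2))  # 3.17
```

Best practice, for benchmarking purposes, would be a section score of 5.0.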
A couple of weeks after the survey response deadline, we set up semi-structured interviews with five key members of staff. The five constituted a purposive sample, four of them being drawn from across the division from the members of staff who had indicated a willingness on the form to be contacted and interviewed. The fifth interviewee was a member of the human-resources team. Although she had completed her questionnaire anonymously, she had agreed to be interviewed on general issues. The five interviewees also represented a cross section of the various staff levels within the organisation – two employees, two team leaders and a manager.
The interviews, apart from that with the HR team member, were used to expand on the individual’s responses in the survey, and also to gain their perspectives on more general cultural and knowledge-sharing issues. Notes were taken during the sessions and written up, and then agreed with the interviewee afterwards. They were not recorded or transcribed due to a lack of time.
The BEPET knowledge audit
A basic knowledge audit was also included in the survey. Six questions were asked, which are listed below. To keep things as straightforward as possible, the respondents were only required to tick their response from a selection of suggested answers. The questions asked were:
- Where does the information you need to do your job come from?
- In what form is it?
- What types of information do you have stored?
- In what form do you store your information?
- Who are the main customers for your information?
- In what form do you pass information on to them?
The audit was included in order to gain an overview of the knowledge contained in BEPET, who has it and how it flows (or doesn’t) through the business. In addition, we wanted to discover how different teams stored their knowledge, who were the key knowledge holders and who the main knowledge customers were, both internal and external to BEPET.
We also felt that a knowledge audit was an interesting way to gain insight into the organisation’s culture and approaches to knowledge sharing. It was pleasing to see that this idea was reinforced by PAS 2001, which states that a knowledge audit should also seek to “uncover many of the cultural barriers to knowledge use and transfer”.
General thoughts on the study
In the context of a general survey, it proved difficult to take into account the different perceptions of individual staff and their awareness of particular issues. For example, the benchmarking survey included the highly subjective, yet essential, statement, “There is an ‘open book’ approach to all information.” We recognised that each person would have their own view on the information that is shared with staff by the management, and on how important or otherwise it is for employees to know about everything that is going on in the business.
It was also not possible to include all the areas covered by PAS 2001. Where no KM initiative existed, that area was left out of the survey. This meant that in the results phase, a judgement was made on their relative importance and a suitable score was factored in. An example of this is how the lack of a content-management system in BEPET took 50 per cent off the benchmarking score for that section. However, more time could have produced a more accurate methodology in this respect.
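That kind of adjustment can be expressed very simply. The 50 per cent penalty is taken from the study; the section score and function name below are invented for illustration.

```python
# Hypothetical sketch of factoring a missing KM initiative into a
# section's benchmark score, as with the absent content-management
# system: the section's score is cut by 50 per cent.

MISSING_INITIATIVE_PENALTY = 0.5  # 50 per cent off, per the study

def adjusted_score(raw_score, initiative_present):
    """Halve a section's score when the underlying initiative is absent."""
    if initiative_present:
        return raw_score
    return raw_score * (1 - MISSING_INITIATIVE_PENALTY)

# Invented example: a content-management section averaging 3.4 out of 5,
# with no content-management system in place.
print(adjusted_score(3.4, initiative_present=False))  # 1.7
```

As the authors note, more time could have produced a more rigorous weighting than this flat penalty.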
Overall, however, the aims and objectives of the research were achieved. The benchmarking process was completed and recommendations to management made. It did not prove quite as difficult as we thought it might to produce a workable survey and benchmarking process from PAS 2001, and the results from the questionnaires and follow-up interviews were in line with what we, and others, were anticipating. Crucially, the response rate was slightly higher than we expected, and represented a balanced cross section of staff in the business as a whole.
Simon Carpenter is knowledge and information officer at British Energy Power and Energy Trading. He can be contacted at firstname.lastname@example.org
Sarah Rudge is a lecturer at School of Information Studies, the University of Central England. She can be contacted at email@example.com