posted 1 Oct 1998 in Volume 2 Issue 2
Measuring the Knower: Towards a Theory of Knowledge Equity
If practitioners and researchers are serious about developing a comprehensive theory and field of 'knowledge management', then it is crucial to develop valid and reliable measures of 'knowledge'. Rashi Glazer believes that introducing the 'knower' into the measurement process is essential to achieving this.
After several seemingly false starts, the field of knowledge management has finally arrived - having achieved a critical mass of both academic respectability and practitioner attention. Scholars are beginning to craft a rich research agenda organised around a set of explicit hypotheses concerning both the causes and effects of information and knowledge-intensive structures and environments. Within the firm, meanwhile, notions such as 'chief knowledge officer' or 'knowledge as an asset' have become a normal and accepted part of the working business vocabulary.
However, if academics are serious about developing a comprehensive theory of knowledge, and if managers are serious about developing their knowledge assets, then it is time for all of us to get serious about developing reliable and valid measures of knowledge. For, despite the flurry of activity, it is becoming clear that no real progress can be made in our efforts to treat knowledge either as a variable to be researched or as an asset to be managed, unless we come to terms with the issue of measurement/valuation. The history of scientific research in any field bears this out; while, within the firm, it is safe to say that knowledge managers will have nothing to manage unless their 'assets' can be valued (and thus compared with other assets according to common metrics). It may be overstating the case to say that unless something can be measured, it does not exist. However, it is probably true in both research and business that unless something can be measured, nobody really pays much attention to it for very long.
Measurement is the process of assigning numbers to things. More specifically, it represents an empirical relationship system (e.g., a set of objects that differ in size) with a formal relationship system (e.g., the real numbers) in such a way that if a particular relationship holds in the empirical system (e.g., the fact that one object is bigger than another), then this relationship is preserved in the formal system (e.g., the bigger object is assigned a higher number than the smaller object).
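The idea of a structure-preserving mapping can be made concrete in a few lines. The following Python sketch uses an invented toy empirical system (objects compared by size); the objects, the relation, and the assigned numbers are all illustrative assumptions, not anything from the article.

```python
# Sketch of measurement as a structure-preserving mapping: an empirical
# relationship system (objects compared by size) is represented by a
# formal one (the real numbers) so that the relation is preserved.

# An assumed empirical relation: 'bigger than', defined by enumeration.
bigger_than = {("brick", "pebble"), ("boulder", "brick"), ("boulder", "pebble")}

# A measurement assigns numbers to the objects.
measure = {"pebble": 1.0, "brick": 2.5, "boulder": 7.0}

# Valid measurement: if a is bigger than b, then measure[a] > measure[b].
valid = all(measure[a] > measure[b] for (a, b) in bigger_than)
print(valid)  # True: the formal system preserves the empirical ordering
```

Any other assignment that preserves the ordering (say 3, 5, 100) would be an equally valid ordinal measurement, which is exactly why the choice of scale matters in measurement theory.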
This article is concerned with issues surrounding the measurement or valuation1 of those 'objects' that might be classified as 'knowledge' or 'information'. While formal definitions of 'information' and 'knowledge' remain messy, many observers make the distinction that information is data that has been given structure and knowledge is information that has been given meaning.2
Knowledge as a Good
An information or knowledge 'theory of value'3 must begin with the perception of knowledge as a 'commodity' that is difficult to measure by traditional standards. The typical economic 'good' displays such properties as divisibility, appropriability, scarcity, and decreasing returns to use or depreciation. It is these properties that give rise to the distinction between 'value in use' and 'value in exchange.' This creates the familiar 'value paradoxes' in the field of economics - e.g., the fact that water is 'cheap' (low value in exchange) while diamonds are expensive (high value in exchange); even though the former has high value in use, while the latter has low value in use.
By contrast, information as a commodity differs from the typical good in that it is not easily divisible or appropriable (i.e., 'either I have it or you have it'); it is not inherently scarce (although it is often perishable); and it may not exhibit decreasing returns to use, but often in fact increases in value the more it is used. Furthermore, unlike other commodities, which are (with few exceptions) non-renewable and depletable, information is essentially 'self-regenerative' and 'feeds on itself,' such that the identification of a new 'piece' of knowledge immediately creates both the demand and conditions for the production of subsequent pieces.4
It is these features that make information or knowledge difficult to value by conventional criteria (leading historically to underdeveloped markets for knowledge 'goods'). However, these features must be taken into account in any serious attempt to develop realistic and robust measures. In particular, the traditional separation between 'value in use' and 'value in exchange' must be abandoned in favour of an appreciation that knowledge has an economic value (value in exchange) only when used. This is in sharp contrast to the situation with most matter and energy-based commodities, where economic or exchange value is high to the extent that the good is actually not used (e.g. a new car is most valuable before it has been driven).
Measuring the Knower
Focusing on usage as the basis for measurement or valuation of any system calls immediate and direct attention to the interaction between subject and object - or, in the case of a 'knowledge system,' between the 'knower' (the subject of knowledge) and the 'known' (the object of knowledge). In particular, if knowledge has no value (or measure) unless it is used, and if it is the knower who is the user of knowledge, then measuring knowledge is ultimately a matter of 'measuring the knower' - by which is meant, of course, measuring the meaning of a piece of information to the information processor.
While this may seem like a fairly benign assumption, it is at odds with traditional information theory in particular and most formal measurement systems in general: introducing the 'knower' - or meaning of an observation - into the observation process puts us in direct conflict with the fundamentals of science. The whole point of measurement theory is to remove the knower from the process. Yet, it is precisely context that gives meaning to information - thus creating knowledge - and results in different knowers valuing the superficially 'same' piece of knowledge differently.
Attributes of information attended to by Knowers
Following are some of the factors that knowers, the users of knowledge, pay attention to in processing information. Though the examples given here typically involve individual human information processors, the term 'knower' can also apply to the organisation as a whole (when operating as a single integrated organism). While technically the factors are attributes of signals (the objects of knowledge), the act of processing these factors is precisely what endows an otherwise neutral signal with meaning to the knower; and, in this sense, it is appropriate to characterise them as attributes of the knower. Indeed, these are the very factors that conventional formal measurement or valuation systems have trouble incorporating into their methodologies.
Perhaps more than anything else, knowers do not evaluate items of knowledge independently, but as part of an overall context. Paying attention to the contextual properties of data in order to make sense of the world takes place not only at the basic sensory or perceptual levels (e.g., news that the temperature is 30 degrees in January has different meaning and would be valued differently depending on whether we are talking about San Diego or Minneapolis), but also with respect to higher level cognitive activities. For example, an important piece of knowledge with significant marketing implications for the firm is the 'utility' a consumer has for one product when compared with another. In expressing preference between two items, A and B, the overall set in which A and B are embedded (e.g., the presence of a third object C) may influence the relative rankings that an individual gives to A vs. B.
An important and frequently encountered type of context that knowers rely on is the way a particular situation is framed or a problem represented. As is true with context in general, framing considerations operate at the basic sensory or perceptual levels, but the more interesting cases involve higher-order cognitive activities. Staying within the realm of preference judgements, for example, it has been shown that the negative properties of stimuli tend to be weighted more heavily than the positive ones. Consequently, whether a problem is framed in terms of gains or losses often has a dramatic effect on the interpretation or meaning given (the half-empty or half-full dilemma).5
Among the most important contextual properties of data is the fact that individual items of information (such as features or attributes of objects) are typically correlated or interact with each other. The resulting 'redundancy' (or degree to which the presence or absence of one feature can be reliably predicted based on the presence or absence of another feature) is used by knowers to identify the configural effects (or 'gestalts') that are the basis for efficient information processing. Whereas most traditional measurement systems, particularly those developed in the social sciences, assume various types of independence among the features that distinguish one object from another, the use of configural effects is the foundation for 'pattern recognition' and so-called analogical processing.
An analogue system is the most useful for directly comparing the degree to which objects or sets of objects resemble each other without the need for intermediate translation (e.g., through a numerical or other digital system).
More generally, analogue processing is intimately related to pattern recognition, where decision making is based on the ability to recognise and then make instantaneous responses and adaptations to rapidly changing environmental conditions.
Fuzzy aspects of data
Knowers and users of information have a tolerance for ambiguity and the so-called fuzzy boundaries separating one set of objects from another - a tolerance that most formal measurement models do not allow for. Whether or not an item of information is a member of a fuzzy set is not an all-or-nothing, yes/no, binary proposition, but a matter of degree. Fuzzy sets can be used to describe an intermediate state between the extremes that are required by the formal logic of 'crisp' sets (the basis for probability theory and other formal measurement systems). As an aid to information processing, the use of fuzzy sets allows knowers to facilitate the transition between sets of objects. As such, 'fuzzy logic' corresponds to the natural language systems actually used by knowers to make sense of the world (through categorisation) and to communicate what has been categorised (as opposed to the mathematical or other languages imposed by formal systems). In this respect, fuzzy sets are often at the basis of analogue representation and information processing, where there may be a technical loss of precision (when compared to 'digital' representation) that is more than compensated for by the gains in efficiency from focusing on the pattern and not the details.6
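The crisp-versus-fuzzy distinction can be sketched in a few lines of Python. The 'warm' category and its breakpoints below are invented for illustration; only the contrast between all-or-nothing and graded membership reflects the text.

```python
# Crisp vs. fuzzy set membership: a minimal sketch with an assumed
# 'warm temperature' category (thresholds are illustrative).

def crisp_warm(temp_c: float) -> int:
    """Crisp membership: all-or-nothing, 1 if temp >= 20C, else 0."""
    return 1 if temp_c >= 20 else 0

def fuzzy_warm(temp_c: float) -> float:
    """Fuzzy membership: a degree in [0, 1], rising linearly from 10C to 25C."""
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15

# A 19C day is simply 'not warm' under crisp logic, but 0.6 warm under
# fuzzy logic - the intermediate state crisp sets cannot express.
print(crisp_warm(19), round(fuzzy_warm(19), 2))  # 0 0.6
```

The graded membership function is what lets neighbouring categories overlap at their boundaries rather than meet at a hard cut-off.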
Dynamics and Temporal Context
Knowers pay attention to the temporal properties of data, so that the same item of information may have different meaning at different points in time. Particularly salient in this regard are stimuli that come first or last in a sequence (primacy and recency effects). While many measurement methods index stimuli for time, others do not or cannot, given their formal structure. One critical component of dynamic or temporal stimuli is the use of feedback, which knowers use to change their interpretation of events and thus adapt and respond to changing environmental conditions.
Knowers value an item of information differently depending on who else knows it. In keeping with traditional economic theory, an individual's utility for a good is independent of who else possesses the good. Indeed, private information (e.g., a hot stock tip) is obviously more valuable to the extent that it is appropriable and truly scarce. However, as noted above, one of the features distinguishing information from typical goods is the extent to which information is difficult to appropriate and thus not scarce in the conventional sense. In fact, it is this property that leads to one of the most interesting aspects of the way knowers value information - namely, that knowledge-intensive goods are often deemed more valuable if more users possess the same knowledge. This phenomenon, which is part of the more general phenomenon of 'network externalities' or the gain in individual utility that comes from others possessing the same good, is at the basis of many knowledge-intensive industries such as software and telecommunications where the development of market standards is crucial to the industry's growth, if not basic survival. This factor is perhaps the most dramatic example of how knowledge as a good derives its value from access and use as opposed to ownership and control.7
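The contrast between a conventional scarce good and a good with network externalities can be made concrete with two stylised value functions. The functions below (value split among holders versus a Metcalfe-style per-connection gain) are illustrative assumptions, not a model proposed in the article.

```python
# Stylised contrast: a scarce, divisible good loses per-holder value as it
# spreads, while a networked knowledge good gains value for each holder as
# more people hold it. Both value functions are illustrative assumptions.

def scarce_good_value(total_value: float, holders: int) -> float:
    """A divisible, appropriable good: total value is split among holders."""
    return total_value / holders

def networked_knowledge_value(per_link_value: float, holders: int) -> float:
    """Each holder gains value from every other holder of the same knowledge."""
    return per_link_value * (holders - 1)

for n in (1, 2, 10):
    print(n, scarce_good_value(100.0, n), networked_knowledge_value(1.0, n))
```

With one holder the networked good is worth nothing to anyone, which mirrors the article's point that such goods derive their value from shared access and use rather than exclusive ownership.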
Knowers as information processors
The list above identifies properties of knowledge objects that knowers attend to when evaluating them. Understanding how these properties confer 'subjective' meaning on otherwise neutral signals is one aspect of the knowledge measurement problem. Ultimately, however, since measuring knowledge involves measuring the knower, there is a need to consider the capabilities of an organism (whether an individual or a firm) as an information processor and learner. In the final analysis, if two knowers ascribe disparate meanings and different values to the same item of knowledge, this is because, in some sense, one knower is 'smarter' than the other - either because of inherent genetic endowment and/or acquired sets of skills.
While there are many models of what constitutes skilled learning and information processing, the information acquisition process may best be described by the concept of 'Open-Minded Inquiry'.8 It is a particularly useful concept because it describes qualities that are associated not only with individual learning but with organisational learning as well. It is also key to the development of 'organisational knowers' and thus, eventually, to our ability to measure and value the organisation's knowledge assets.
Among the key attributes of an 'Open-Minded Inquiry' information acquisition system are:
- Active Scanning - systematically seeking out environmental cues, as opposed to waiting passively for information to be received. With regard to organisational learning, active scanning should come particularly from front-line contacts in the field, who are motivated (and compensated) to inform management on a systematic basis.
- Self-critical Benchmarking - systematically instituting continual comparisons of new incoming data against a set of internal standards or referents. Within the organisation, this involves going beyond the typical tear-down analyses of competitors' products and the occasional study for insights into how better to perform discrete functions and activities. It requires systematic and continual evaluation of other firms' attitudes, values, and management processes in the belief that the firm can always learn how to improve its measures and the way individual functions work together.
- Continuous Experimentation and Improvement - systematically planning and observing the outcomes of on-going changes in procedures and practices so that those that improve performance are adopted and those that don't are dropped.
- Informed Imitation - systematically studying the best practices of peers, role models or competitors - based on attempts to understand why the competitors succeeded, so the firm can emulate successful moves before the competition can get too far ahead.
- Guided Inquiries - systematically instituting a capability that enables the learner to anticipate environmental requirements and resolve problems - for example, formally introducing a market learning or 'Inquiry Centre', i.e. an entity that provides comprehensive information used by all functions, so that they can be creative in an integrated fashion.9
Information distribution and memory
Once information is acquired by the knower (whether an individual or an organisation), it must be stored and distributed throughout the 'memory' system and then ultimately interpreted. There is now a fair amount of evidence that human biological memory systems make use of distributed (as opposed to local) storage architectures. In other words, a given piece of information does not reside uniquely at one and only one address, but is distributed throughout the memory system. Building on this natural biological principle, successful learning organisations are recognising that the appropriate structure for their organisational memory is also a distributed architecture. Distributed architectures have been found to be critical for the parallel (as opposed to sequential) processing of information - a key characteristic of superior learning-oriented systems. Parallel information processing allows for different items of information to be operated upon simultaneously by a single processor and for the same item of information to be operated upon simultaneously by different processors. The advantage is that when otherwise discrete items of information are processed together, their connections and interactions become the focus of attention. This is the foundation for pattern recognition, which (as noted above) is one of the main characteristics of higher-order learning.
The advantage of using multiple processors on a single item of information is that it permits a general problem to be broken down into a number of component sub-problems that are then handled simultaneously. Here, the overriding consideration is speed, not just for its own sake, but because many problems simply cannot be solved if not in real time - e.g. in an organisational setting, being ready with the next generation of a product in markets with increasingly ephemeral life cycles. Within the firm, where the 'general problem' is typically the design and implementation of a particular strategy, the gains in speed (e.g., in time to market or reaction to competitive activity) from having different members in the organisation work in parallel on a problem's components can be enormous. Furthermore, parallel processing structures are inherently redundant, which makes them remarkably resistant to damage - e.g. the loss of key managerial talent.
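The decomposition of a general problem into sub-problems handled simultaneously can be sketched with Python's standard library. The 'problem' below (summing chunks of a range) is a deliberately trivial stand-in; only the parallel divide-and-combine shape reflects the text.

```python
# Sketch of parallel problem decomposition: a general problem is broken
# into component sub-problems that are handled simultaneously, and the
# partial results are then combined. The task itself is a trivial stand-in.
from concurrent.futures import ThreadPoolExecutor

def solve_subproblem(chunk: range) -> int:
    """Handle one component sub-problem independently of the others."""
    return sum(chunk)

# The general problem (sum 0..999), split into four components.
chunks = [range(0, 250), range(250, 500), range(500, 750), range(750, 1000)]

# Multiple 'processors' work on the components at the same time.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(solve_subproblem, chunks))

print(total == sum(range(1000)))  # True
```

Note also the redundancy point from the text: if one worker fails, its chunk can simply be reassigned, since no single processor holds the whole problem.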
The other key advantage of distributed architectures is that they facilitate both accessible and synergistic memory structures - critical factors for building successful learning competencies. Within the organisation, acquired knowledge will not have a lasting effect unless it is accessible - i.e., what is learned is lodged in the collective memory. Organisations without practical mechanisms to remember what has worked and why will have to repeat their failures and rediscover their success formulas over and over again. Successful learning organisations insure against these risks by systematically instituting procedures to guard against a too-quick decay of the firm's collective recall capabilities (for example, by insisting that information files are made accessible to the entire organisation and by minimising turnover through transfer or the premature disbanding of decision groups and teams).
In this respect, a superior organisational information distribution system is also synergistic, meaning that information is made available to anyone in the organisation who might potentially have use for it. Again, the emphasis is on access and use and not on ownership and control. In the traditional (sequential information processing) organisation, information distribution follows the typical 'need to know' approach, a style of functioning which suffers from two serious problems: it assumes that the uses to which information will be put are already known in advance (exactly the opposite of what is required for learning); and it exacerbates the already serious tendency wherein knowers do not know what they know.10
After information is acquired, stored, and distributed, it is interpreted. Interpretation organises data, giving it structure or context and thus meaning. Successful interpretation may be described as 'mutually informed,' even within a single individual - i.e., multiple points of view are not only tolerated, but actively encouraged. Knowers make decisions based on 'mental models,' which invariably involve a degree of abstraction or simplification of 'reality.' These simplifications facilitate learning when they are based on undistorted information about important environmental relationships, but they can impede learning when they are incomplete, unfounded, or seriously distorted - functioning below the level of awareness, so that they are never examined. In an institutional setting, learning organisations often use scenarios and other devices to force managers to articulate, examine, and eventually modify their mental models of how their markets work. Furthermore, problems can also arise when managers in different functional areas have very different mental models and don't appreciate or accept that there are other valid interpretations. This problem can be especially dysfunctional in decision groups or teams. Learning organisations, as well as individual learners, increase the likelihood of realising uniformity of meaning (or consistency of mental models) by ensuring that:
- Information is uniformly framed or labelled;
- The media of communication provide rich and reinforcing cues;
- The amount and complexity of information doesn't overload the capacity to extract interpretations; and
- The knower does not have to discard or unlearn too much obsolete, misleading, or discrepant information.11
Meta-knowledge is 'knowing what you know' - the 'knower knowing the knower.' Self-knowledge is at the heart of cultivating that level of intimate awareness of processes ('the way I do things around here') that underlies self-referential functioning. There is accumulating evidence that meta-knowledge is at the core of 'learning' and that it is ultimately responsible for the level of meaning that an individual knower ascribes to a particular object of knowledge. The inability to 'know what it knows' is a characteristic of an information-processing structure (whether a human individual, an organisation, or a machine computer) that is sequential in nature and based on localised, separated memory stores. Thus, when confronted with an item of information, the traditional computer has no conceptual way of determining whether the information is known (i.e., already stored in memory) or unknown (i.e., not stored in memory), in which case it must be learned. For example, if presented with two customer transaction records - one of which is already in a database, the other of which is not - the traditional computer architecture will perform the same exhaustive search in both instances before making either a positive or negative identification. An organisation confronted with changing environmental parameters faces a similar problem in deciding whether or not the incoming information represents something genuinely new that calls for a strategic redirection. Sometimes, the appropriate identification is not made until it is too late to act. Meta-knowledge involves the ability to appreciate the degree to which the meaning of information is context-dependent, and it requires reasoning by analogy (i.e., pattern recognition). This ability is how a knower adapts and responds; it is the essence of learning.
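The transaction-record example can be sketched directly: a sequential scan over a localised store does the same exhaustive work whether or not a record is already 'known', while a hash-based (content-addressable) store answers 'do I already know this?' in effectively one step. The record identifiers below are invented for illustration.

```python
# Sequential memory vs. 'knowing what you know': an exhaustive scan does
# the same work for known and unknown records; a hash-based set does not.
# Record identifiers are illustrative.
known_records = ["txn-001", "txn-002", "txn-003", "txn-004"]

def exhaustive_lookup(records: list, item: str) -> tuple:
    """Sequential search: counts every comparison made before answering."""
    comparisons = 0
    for record in records:
        comparisons += 1
        if record == item:
            return True, comparisons
    return False, comparisons

# Known or unknown, the scan must walk to the match or the end of the store.
print(exhaustive_lookup(known_records, "txn-004"))  # (True, 4)
print(exhaustive_lookup(known_records, "txn-999"))  # (False, 4)

# A set indexes records by their content, so membership - 'is this already
# known?' - is answered without an exhaustive search.
known_set = set(known_records)
print("txn-004" in known_set, "txn-999" in known_set)  # True False
```

The hash-based store is a loose computational analogue of the distributed, content-addressable memory the article associates with meta-knowledge; a real associative memory would also handle near-matches, which is where pattern recognition comes in.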
Conclusion: Towards a theory of knowledge equity
Knowers attend to the properties of data that conventional measurement methods do not easily incorporate into their formal structures. As a result, knowers' evaluations often seem inconsistent and even 'irrational' by traditional standards. It is precisely the ability to pay attention to these factors that often results in superior learning; however, it is for this reason that many observers have concluded that knowledge - or the interpretation that knowers place on the objects of knowledge - is essentially 'tacit', that it should remain so, and that attempts at formal measurement are fundamentally flawed. Despite such pessimism, the fact remains that the 'knowledge measurement enterprise' is making significant progress across a wide variety of domains. What is required is that investigators and practitioners be made aware of each other's endeavours so that they can learn from each other's efforts and understand the extent to which many are pursuing very similar objectives with similar methods.
In this regard, it is now possible to glimpse the beginnings of a coherent theory of 'knowledge equity' - one that will simultaneously draw upon and further the knowledge measurement process.12 As the examples described here suggest, it is entirely feasible to introduce so-called tacit or subjective factors - attributes of knowers - into formal measurement methods and models. It is also clear that these methods, when part of a comprehensive set of knowledge measures or assets, will have quite a different flavour than conventional metrics. For example, with regard to accounting for items of knowledge on the firm's balance sheet, it is possible to have information assets indexed according to who is the user/knower of the asset - which results in multiple accounts for the same item.
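Such knower-indexed bookkeeping can be sketched as a ledger keyed first by item and then by knower. All asset names and valuations below are invented for illustration; the only point taken from the text is that the same item carries different values on different accounts.

```python
# Hypothetical 'knowledge equity' ledger: the same information asset is
# carried at different values depending on who the user/knower is.
# All item names and valuations are invented for illustration.
knowledge_ledger = {
    "customer-churn-model": {"marketing": 500_000, "finance": 120_000},
    "supplier-price-list":  {"procurement": 300_000, "marketing": 40_000},
}

def asset_value(item: str, knower: str) -> int:
    """Value is indexed by the knower, not fixed to the object of knowledge."""
    return knowledge_ledger[item].get(knower, 0)

# The 'same' item appears on multiple accounts with different values,
# and is worth nothing to a function that makes no use of it.
print(asset_value("customer-churn-model", "marketing"))  # 500000
print(asset_value("customer-churn-model", "finance"))    # 120000
print(asset_value("supplier-price-list", "finance"))     # 0
```

The design choice here mirrors the article's value-in-use argument: valuation is a property of the (item, knower) pair, not of the item alone.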
There is one thing about all of this that should be reassuring to traditional economists. Knowledge is difficult to measure because it is not scarce in the traditional sense. The defining quality of information-intensive environments is an abundance, and not a scarcity, of information. However, as the noted psychologist and economist Herbert Simon has suggested, 'What information consumes is rather obvious; it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.'13 If - in the information age - information and knowledge are not scarce but abundant, and if it is the attention of the information-processor that is the real scarce resource, then in measuring the knower, we are valuing a scarce resource after all.
Rashi Glazer is an Associate Professor at the Haas School of Business at the University of California, Berkeley. His teaching and research interests are in the areas of competitive marketing strategy, IT strategy and behavioral decision-making.