The use of scientific evidence and knowledge by managers

N99-01
THE USE OF SCIENTIFIC EVIDENCE AND KNOWLEDGE BY MANAGERS
Paper presented at
«Closing the Loop : 3rd Conference on the Scientific Basis of Health Care»
Toronto, October 1-3, 1999
François Champagne, Ph.D.
Département d'administration de la santé
et
Groupe de recherche interdisciplinaire en santé
Université de Montréal
and
Health Evidence Application Network (HEALNet)
Preamble
What use do managers make of evidence in their decision-making? To
address this question, I would like first to review how and why evidence is supposed to
be used. I will propose that clearly managers cannot use evidence this way; that there
are good reasons for this; and that it is unrealistic to expect that they ever would. I will
then suggest, however, that managers do use evidence in all kinds of other ways, and
sometimes very beneficially. I will conclude with some ideas about how to go about realistically optimizing the use of evidence by managers.
Models of Knowledge Use in Decision-making
The basic premise of what we may now legitimately call the movement for evidence-based decision-making at all levels of health care (clinical, organizational, systemic) is at once a simple and a heroic assumption: that using evidence is a good thing. More specifically, this basic premise can be broken down as illustrated in Figure 1. The use of scientific evidence and knowledge, that is, knowledge derived from systematic investigation conducted in such a way as to be recognized as rigorous by a community of researchers, should lead to higher quality decisions, which should lead to the implementation of higher quality actions and consequently, if all goes well, to better outcomes. The causal reasoning that links quality of decisions, quality of action and outcomes seems pretty robust. Unfortunately, the relationship between use of evidence and quality of decision is clearly less well understood.
But let's not challenge it for a moment. How is evidence supposed to
come into use? Through which process and why would decision-makers and managers
use evidence? Several processes or models of knowledge use in decision-making have
been suggested in the literature. Indeed, use of scientific evidence and knowledge in
decision-making has been a concern of the knowledge utilization field, which has been studying the use of social research in policy since the 1930s, of policy analysis, of organizational decision-making, of the individual/cognitive decision sciences, of the innovation diffusion field, of program evaluation, of operations research and, much more recently, of evidence-based medicine. Much of the discussion that follows is based on the seminal work of Carol Weiss (1977, 1979) of Harvard University on the use of social science knowledge in social policy.

Figure 1
The basic premise of evidence-based decision-making
Use of scientific evidence and knowledge → Quality of decisions → Quality of implemented actions → Outcomes
The most traditional and venerable model of knowledge use may be called
the knowledge-driven model (Figure 2).
This model is derived from the physical sciences. It assumes that the mere fact that knowledge exists naturally and inevitably presses it toward development and use (Weiss, 1977, 1979). According to this model, basic research discloses some opportunity whose practicality is defined and tested by applied research. If all goes well, appropriate technologies are developed and implemented.
Although one can find several examples of this model in physics, chemistry and physiology (the development of therapeutic drugs being a prime exemplar), very few examples can be found in the social sciences in general, and in management in particular, for at least two central reasons (Figure 3) (Weiss, 1977, 1979).

First, management knowledge is not apt to be so compelling or authoritative as to drive inevitably toward implementation. Management is inherently contextual; it is management of context and cannot be dissociated from it. Consequently, generalizations are hazardous, and although knowledge may provide thorough understanding of management situations, its prescriptive value will always be limited. As Whitley (1988) wrote very persuasively:

"The production of scientific knowledge which could form the foundation for managerial skills standardized across different hierarchical arrangements and circumstances seems, then, fraught with difficulties. The variety of managerial practices, their organizationally embedded nature and their susceptibility to change and control by enterprise top managers, render attempts at standardizing managerial problems for solution by standardized skills across firms and industries of limited value. (...) In this view, attempts to establish a general «science of managing» which would generate knowledge of highly general relations between limited and standard properties of separate, standard objects are doomed to failure since «managing» is not a standardized activity."

Secondly, management knowledge does not readily lend itself to conversion into replicable technology. Although managers may and do use management technology, the technology of management itself is fluid, imprecise, one could say prehistorical.

Figure 2
The knowledge-driven model of scientific evidence use
Basic research → Applied research → Technological development → Use (adoption of technology) → Quality of implemented actions → Outcomes

Figure 3
Limits to the applicability of the knowledge-driven model in management
The same chain as Figure 2, with two limits noted: management knowledge is not apt to be so compelling or authoritative as to drive inevitably toward implementation (between applied research and technological development), and management knowledge does not readily lend itself to conversion into technology (between technological development and use).
A second model, maybe the most common and indeed the one that lies behind both evidence-based medicine and much of the current effort by funders of research to increase the relevance of research to practice, is the problem-driven (or problem-solving or decision-driven) model of scientific evidence use (Figure 4) (Weiss, 1977, 1979). This model, like the preceding one, is an instrumental model, in the sense that evidence is seen as directly useful and applicable to a decision. According to this model, the path to scientific evidence use starts with the definition of a problem situation. Information or understanding is lacking either to generate a solution to the problem or to select among alternative solutions. Research will provide the answer (Weiss, 1977). In variant A of this model, the one underlying evidence-based medicine, research predates the recognition of the problem. Decision-makers faced with a decision go out and search for information from preexisting research, or the information is brought to their attention by staff, analysts, colleagues or consultants, or they see it somewhere in a journal, a newsletter or at conferences (Weiss, 1977, 1979). In variant B, the one underlying efforts to establish partnerships between researchers and practitioners, research is commissioned to fill the knowledge gap. In both variants, newly acquired knowledge is used to better understand the problem situation and to choose among alternative courses of action.
Figure 4
The problem-driven model of scientific evidence use
Definition of problem → Identification of missing knowledge → acquisition of knowledge, either from the pre-existing knowledge reservoir (variant A) or through commissioned research (variant B) → Interpretation for the problem situation → Use (adoption of technology) → Quality of decision → Quality of implemented actions → Outcome
In variant A, where research predates the problem, the usual prescription for increasing the use of knowledge is to make findings more easily accessible through improved communication and transfer of research to decision-makers. In variant B, improved use will result from more control by funders or decision-makers over the specification and conduct of the requested research. As with the previous knowledge-driven model, there are several limits to the applicability of this problem-driven model to managerial decision-making (Figure 5). First, in many managerial situations, the parameters of the problem may not be clearly defined or, even more likely, there might not be consensus on these parameters. Secondly, even when there are clear-cut problems to be solved and clear-cut decisions to be made, decision-makers do not always know what kind of information they need. Thirdly, in many cases, indeed in all complex cases, research findings may be ambiguous, less than clear-cut, with weak support and low power. Research is rarely comprehensive or convincing enough to supply the correct answer.

In addition, time constraints may preclude a thorough knowledge search (in variant A) or patient waiting for results (in variant B). Even more importantly, acquisition of knowledge may be impeded by the widely acknowledged cultural gap between the research and user communities. It may indeed be the most persistent observation in the literature on knowledge utilization that researchers and decision-makers belong to separate communities with different values and ideologies and that these differences impede utilization (Beyer and Trice, 1982). Finally, the implication of knowledge for action may not be clear-cut either. Understanding may not directly lead to recommendations for action, and the implications of findings for action may be far from clear (Scriven, 1991, 1995).

So there appear to be serious limits to the applicability to management of the theoretical models that underlie the current calls for greater instrumental use of scientific evidence in decision-making. Clearly, management knowledge will not inexorably lead to development and action as in the knowledge-driven approach, and only in very rare circumstances would all the conditions posited in the problem-driven model be met in the managerial world for research to directly influence decisions.

Figure 5
Limits to the applicability of the problem-driven model in management
The chain of Figure 4, annotated with its limits: definition of problem (unclear problem; no consensus), identification of missing knowledge (difficulty in identifying information needs), acquisition of knowledge in variant A or B (ambiguous findings; time constraints; cultural gap), and the link from knowledge to action (troublesome link between knowledge and action).
Types of Managerial Knowledge
To add more gloom to this picture, let us come back to the basic premise
that use of scientific evidence leads to better decisions. It is safe to say that although
the use of evidence might contribute, might enhance or at least might be related to the
quality of decisions, there is little doubt that many other factors influence the quality of
any decision (Figure 6).
First, it has been accepted since the early work of Herbert Simon (1945)
that decision-making in any complex or unprogrammed situation cannot be perfectly
rational as would be a decision uniquely based on evidence. Rationality is limited by
the limits in human cognitive processes and by the presence and interaction of multiple
values, stakes and stakeholders. Indeed, most decision situations do involve multiple
participants with multiple views and interests.
Secondly, managers will be informed and influenced by all kinds of knowledge other than scientific. First, practical knowledge, that is, personal knowledge derived from experience, is almost universally highly valued by managers. In addition, managers lend much credence to anecdotal evidence, that is, knowledge they derive from their personal observation, whether fortuitous or provoked, of particular situations, usually in organizations very similar to theirs. Indeed, many managerial decisions may be based on imitation of esteemed peers. Second, intuitive knowledge may be the most highly regarded form of knowledge in some management circles. For some, intuition is a type of holistic and configurational thinking process distinct from the usual linear, analytical mode of thinking (Mintzberg, 1976, 1989). For others, intuition is analysis frozen into habit (Simon, 1987). Whatever your view on intuition, you will agree that it tends to be a major factor in managerial decision-making. A closely related issue is that in some familiar circumstances, managers may be considered expert decision-makers, that is, they may use expert reasoning involving pattern recognition, parallel processing and forward reasoning (Patel et al., 1996). In other words, they will not decide by carefully evaluating competing alternatives, but by recognizing a problem and its solution as a function of a highly structured knowledge base, with minimal use of explicit knowledge.

Figure 6
Use of scientific evidence in a context of limited rationality decision-making
Instrumental use of scientific evidence and knowledge → Quality of decisions → Quality of implemented actions → Outcomes. Decisions are also shaped by the characteristics of decisions (stakes, stakeholders, processes), by the knowledge production paradigm, and by the use of practical knowledge, anecdotal evidence, intuitive knowledge/wisdom and expert reasoning (pattern recognition, forward reasoning and parallel processing).
An additional factor which may influence the instrumental use of evidence
is the epistemological foundation according to which this evidence was produced.
Traditionally, the positivist paradigm was the dominant management research paradigm. This paradigm posits the existence of a single reality, of a single truth independent of any observer's interest. It also asserts that the observer can and should remain detached and distant from the object of study.
Competing paradigms include critical theory, post-modernism and
constructivism. All of these share a belief that knowledge should be constructed by
using actors' subjectivity rather than trying to exclude it and that the inquirer and the
inquired are interlocked in such a way that the findings are constructed through their
interaction (Guba and Lincoln, 1989, 1990).
When knowledge is produced as it usually is, that is, according to the positivist paradigm, with its emphasis on isolating the object of study from its context and eliminating subjectivity, managers will tend to dismiss it as «theoretical», a word they use for things they do not like and that they find fatally flawed and not grounded in reality.
Types of Use of Knowledge by Managers
So, should we be discouraged about the possibility and value of using scientific evidence in managerial decision-making? Although managers may not, or may only with difficulty, use evidence instrumentally, that is, to directly and specifically influence particular decisions, they do use evidence in all kinds of other ways (Figure 7).

First, they may come to be influenced and informed by evidence indirectly, through disorderly interaction with researchers. Researchers are among the actors within the policy and decision-making arena and as such may propagate evidence-based opinions, ideas, understanding and solutions. Evidence is thus used as part of a public deliberation involving multiple stakeholders. Weiss (1977, 1979) labeled this model the «interactive model».

Evidence may also be used by managers to support their a priori position on an issue involving multiple actors and multiple interests. Research is used politically «as ammunition for the side that finds its conclusions congenial and supportive» (Weiss, 1979).

There is also another way that managers use evidence strategically. This time, managers can use the research process itself as a tactic, either to delay decision and action, to avoid taking responsibility for a decision, or as a public relations exercise. In all of these situations, it is the sheer fact that research is being done and will be used that is invoked, not the content of the findings (Weiss, 1979).

Finally, the most common use of scientific evidence by decision-makers, including managers, may be conceptual, that is, as background, integrated knowledge for understanding. In this process, which has been called an enlightenment process (Janowitz, 1972; Crawford and Biderman, 1969; Rich and Caplan, 1976; Pelz, 1978; Weiss, 1979), the concepts and theoretical perspectives of the social and management sciences permeate the decision-making process. Knowledge used in decision-making is constituted by accumulated and integrated evidence and generalizations rather than by findings from a specific study.
Figure 7
Models of non-instrumental use of scientific evidence
• The interactive model : researchers interact with all other stakeholders. They all engage in a disorderly fashion in mutual consultations that progressively move closer to potential decision options ⇒ deliberative use of evidence
• The political model : scientific evidence is used as ammunition in a political debate ⇒ political use of evidence
• The tactical model : the research process is used to delay decision and action, to avoid taking responsibility for a decision or as a public relations exercise ⇒ tactical use of evidence
• The enlightenment model : managers use background, integrated knowledge for understanding ⇒ conceptual use of evidence
So managers do not often use evidence instrumentally, but they do use it deliberatively, politically, tactically and conceptually. What can we make of this to inform and advance the debate on evidence-based decision-making? (Figure 8)

Is Evidence-based Managerial Decision-making Possible?

First, we should recognize and accept the fact and value of non-instrumental use of scientific evidence. The conceptual use of research in particular may well represent use in the truest sense of the word. Knowledge internalized by managers gets used constantly and with much impact. In addition, the influence of the conceptual use of evidence does not rest on untenable assumptions of rational and consensual decision processes.

Similarly, the deliberative use of evidence in the public arena acknowledges a proper use of scientists' expertise and capacity to synthesize knowledge so as to meaningfully contribute to the solution of salient problems.

The political use of evidence should not be distrusted or undervalued either. Given the inherently political nature of managerial decision-making, the political use of evidence is inevitable. Contrary to popular opinion, there is nothing wrong with using research to support a predetermined position. This is neither unimportant nor improper. Only distortion and misinterpretation of findings are illegitimate (Weiss, 1977, 1979).

Secondly, we should, and funders in particular should, accept the fact that some research will never be used. As with any human enterprise, some research will be flawed and should not be trusted or used. More importantly, research is a risky business. Some of it will not lead to usable results, or indeed to anything. We usually value risk taking; we should do so in research too.
Figure 8
Some points for discussion
• Recognize and accept the prevalence and value of non-instrumental use of scientific evidence
• Accept the fact that some research will never be used
• Beware of the “error of jumping to recommendation”
• Critically review current attempts at partnerships between decision-makers and researchers: unrealistic expectations; conflicts of interest/managerialism; epistemologically unsound?
• Adopt a comprehensive, theoretically informed framework for the analysis and optimization of the use of scientific evidence and knowledge in decision-making
Thirdly, it should be acknowledged that the link between research findings and evidence on the one hand and recommended action on the other may be tortuous and ambiguous. Pressure for the use of evidence should not lead to the error of jumping to unwarranted recommendations (Scriven, 1995).

Fourth, current attempts to promote greater use of evidence in decision-making usually consist of establishing close partnerships between decision-makers and researchers, involving various linkages and exchanges throughout the research process. These should be critically reviewed on several grounds. First, they create unrealistic expectations of usable results. It is foolish for a scientist to guarantee to discover explanations or provide usable recommendations as a result of the investigation of a particular situation (Scriven, 1991, p. 304). Science is not plumbing. Secondly, overemphasis on utilization may create strong conflict-of-interest situations for researchers, because it places pressure on them to adjust findings to what decision-makers are willing to do rather than to what they should do (Scriven, 1991, p. 370). According to Guba and Lincoln (1989), this tendency toward managerialism is detrimental since it precludes critical analysis of managerial action and is almost inevitably disempowering and unfair, with the ultimate power resting with decision-makers.

An even more serious criticism of current partnership attempts is that most of them involve problem-driven commissioned research projects, that is, they are partnerships around specific research projects aimed at providing understanding of, or solutions to, specific problems. This approach is potentially troublesome on epistemological grounds. Knowledge is not simply data or information. Knowledge is not a simple statement of facts emerging from a local situation (Kerr, 1984). Knowledge is constituted by the accumulation of evidence. As a scientist, I am concerned about the possibility of any decision being overly influenced by any single research project. There are few critical experiments in research and none in social science and management research. Evidence-based medicine, with its emphasis on research synthesis, does avoid this flaw of variant B of problem-driven research. Research partnerships in health policy and management research could likewise be built around research synthesis rather than around research projects.
Finally, the determinants and impact of the use of scientific evidence in organizational decision-making should be studied using a comprehensive and theoretically informed framework. The various literature streams mentioned earlier could indeed inform such a framework, which could lead to optimizing (by which I purposefully mean finding the proper role and place for, not maximizing) the use of scientific evidence in managerial decision-making.
Figure 9 presents an interpretation of the types of factors that could be considered; this model is an elaboration of a model proposed by Oh and Rich (1996). Although the framework would obviously require much discussion, my purpose here is simply to highlight major factors. You are already familiar with the right-hand side of the figure, which depicts the relationship between the instrumental use of scientific evidence and knowledge, the quality of decisions and actions, and outcomes, in a limited rationality context involving multiple stakeholders and where managers also rely on other types of knowledge. The framework also recognizes that scientific evidence and knowledge will be used in other, non-instrumental ways and will accordingly have an impact on the quality of decisions.

All forms of use depend first on the characteristics of the evidence and knowledge, that is, on their availability, accessibility and validity. Data availability can be influenced by research funding and targeting practices. Data accessibility can be influenced by efforts to improve the exposure of decision-makers to evidence and knowledge, most notably research transfer and brokering practices. The validity of scientific evidence and knowledge rests on the strength of the knowledge base and on the methodological and epistemological approaches that drive the production of knowledge. Indeed, my earlier criticism of partnerships between researchers and decision-makers should not in any way be taken as a criticism of the partnership between researchers and inquired participants called for in non-positivist paradigms of research. Proponents of these paradigms, most notably constructivists, strongly feel that such interactive, intersubjective, naturalistic and participative research processes will enhance the usefulness and use of research findings.

The characteristics of individual decision-makers should also be taken into account. Indeed, their values and beliefs, attitudes and motivation, skills and scientific literacy will influence their propensity to use scientific evidence and knowledge.

Organizational characteristics will also facilitate or impede the use of scientific evidence and knowledge. Less formalization (that is, a more flexible and organic structure), more specialization (that is, a more complex division of labor), more managerial attention to use and more participation in interorganizational collaboration and networking should all be associated with greater use of evidence and knowledge by managers. This last factor may indeed be one of the most important, given what we know about the prevalence of imitation behaviors among managers. Indeed, this observation of the central influence on knowledge use of esteemed peers and fellow network members is supported by many different lines of inquiry, including innovation diffusion theory, social learning theory, the communities of practice and communities of knowing approaches to organizational learning, as well as the academic detailing and opinion leader approaches to the implementation of practice guidelines.

Figure 9
A framework for the analysis and optimization of the use of scientific evidence and knowledge in decision-making
Funding and commissioning practices and communication and brokering practices, together with the characteristics of the data, shape the characteristics of scientific evidence (availability, accessibility, validity). These characteristics, along with the characteristics of decision-makers (values and beliefs, attitudes/motivation, skills, scientific literacy) and the characteristics of the organizational and systemic context (specialization, formalization, managerial attention, culture, interorganizational collaboration), influence both the instrumental use and the non-instrumental uses (deliberative, political, tactical, conceptual) of scientific evidence and knowledge. These uses affect the quality of decisions, the quality of implemented actions and outcomes, which are also shaped by the characteristics of decisions (stakes, stakeholders, processes), the knowledge production paradigm, and the use of practical knowledge, anecdotal evidence, intuitive knowledge/wisdom and expert reasoning (pattern recognition, forward reasoning and parallel processing). The whole is embedded in organizational action and performance, health system action and performance, and the societal context.
Conclusion
Managers can and do use scientific evidence, and the quest for increased evidence-based decision-making in health care management is clearly legitimate. However, this quest should be pursued with realistic expectations given the nature of management knowledge and of managerial decision-making, with proper acknowledgement of the value and legitimacy of indirect, non-instrumental use of evidence, and with a broad and thorough understanding of the factors involved in use and consequently in the optimization of the use of scientific evidence in decision-making.
References

Beyer JM, Trice HM (1982). The Utilization Process: A Conceptual Framework and Synthesis of Empirical Findings. Administrative Science Quarterly 27:591-622.

Crawford ET, Biderman AD (1969). The Functions of Policy-Oriented Social Science. In Crawford and Biderman (eds) Social Scientists and International Affairs. New York: Wiley.

Guba EG, Lincoln YS (1989). Fourth Generation Evaluation. Newbury Park: Sage.

Guba EG, Lincoln YS (1990). Competing Paradigms in Qualitative Research. In Denzin N, Lincoln YS (eds) Handbook of Qualitative Research. Newbury Park: Sage.

Janowitz M (1972). Professionalization of Sociology. American Journal of Sociology 78:105-135.

Kerr DH (1984). Barriers to Integrity: Modern Modes of Knowledge Utilization. Boulder: Westview Press.

Mintzberg H (1976). Planning on the Left Side, Managing on the Right. Harvard Business Review, July-August.

Mintzberg H (1989). Mintzberg on Management. New York: The Free Press.

Oh CH, Rich RF (1996). Explaining Use of Information in Public Policy-Making. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization 9(1):3-35.

Patel V, Kaufman DR, Magder SA (1996). The Acquisition of Medical Expertise in Complex Dynamic Environments. In Ericsson KA (ed) The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports and Games. Hillsdale, NJ: Lawrence Erlbaum.

Pelz DC (1978). Some Expanded Perspectives on Use of Social Science in Public Policy. In Yinger JM, Cutler SJ (eds) Major Social Issues: A Multidisciplinary View. New York: The Free Press.

Rich RF, Caplan N (1976). As quoted in Pelz (1978).

Scriven M (1991). Evaluation Thesaurus. Newbury Park: Sage.

Scriven M (1995). The Logic of Evaluation and Evaluation Practice. New Directions for Program Evaluation, Winter, no. 68.

Simon H (1945). Administrative Behavior. New York: The Free Press.

Simon H (1987). Making Management Decisions: The Role of Intuition and Emotion. The Academy of Management Executive 1(1):57-64.

Weiss CH (1977). Introduction. In Weiss CH (ed) Using Social Research in Public Policy Making. Lexington, MA: Lexington Books.

Weiss CH (1979). The Many Meanings of Research Utilization. Public Administration Review, Sept.-Oct., 425-431.

Whitley R (1988). The Management Sciences and Managerial Skills. Organization Studies 9(1):47-68.
Correspondence address
Please address all correspondence concerning the content of this publication or other published reports to:
Groupe de recherche interdisciplinaire en santé
Secteur santé publique
Faculté de médecine
Université de Montréal
C.P. 6128, Succ. Centre-Ville
Montréal (Québec) H3C 3J7, Canada
Telephone: (514) 343-6185
Fax: (514) 343-2207
Web site: http://www.gris.umontreal.ca/