Facing the Credibility Crisis of Science: On the Ambivalent Role of Pluralism in Establishing Relevance and Reliability
Martin Carrier
Bielefeld University
Science at the interface with society is regarded with mistrust by parts of
the public. Scientific judgments on matters of practical concern are not in-
frequently suspected of being incompetent and biased. I discuss two proposals
for remedying this deficiency. The first aims at strengthening the independence
of science and suggests increasing the distance to political and economic powers.
The drawback is that this runs the risk of locking science in an academic
ivory tower. The second proposal favors “counter-politicization” in that re-
search is strongly focused on projects “in the public interest,” that is, on projects
whose expected results will benefit all those concerned by these results. The dis-
advantage is that the future use of research findings cannot be delineated
reliably in advance. I argue that the underlying problem is the perceived lack
of relevance and reliability and that pluralism is an important step toward its
solution. Pluralism serves to stimulate a more inclusive research agenda and
strengthens the well-testedness of scientific approaches. However, pluralism also
prevents the emergence of clear-cut practical suggestions. Accordingly, plural-
ism is part of the solution to the credibility crisis of science, but also part of the
problem. In order for science to be suitable as a guide for practice, the leeway of
scientific options needs to be narrowed – in spite of uncertainty in epistemic
respect. This reduction can be achieved by appeal to criteria that do not focus
on the epistemic credentials of the suggestions but on their appropriateness in
practical respect.
This paper was composed while I was a senior fellow of the Alfried-Krupp Foundation at
Greifswald and completed while I was a John-G.-Diefenbaker fellow at the University of
Toronto. I am grateful for their support. I also thank Gürol Irzik for his valuable suggestions.
Perspectives on Science 2017, vol. 25, no. 4
© 2017 by The Massachusetts Institute of Technology
doi:10.1162/POSC_a_00249
1. Science in Politics and the Economy
One of the pervasive distinctions in the history of political thought is the
distinction between rule by consent and rule by competence or expertise. A
classic locus of this debate is Plato’s Politeia in which Plato argues against
the rule by consent and advocates philosophers as political leaders. Philos-
ophers are geared toward eternal ideas and for this reason place emphasis
on the long-term consequences of political actions. The same idea is ex-
pressed today by the notion that devising policies adequately requires under-
standing the relevant subject matter. This idea is realized in many areas by
including scientific experts in the process of law-making. For instance, in the
US, it is the job of the National Commission on Forensic Science to improve
forensic practice by drawing on scientific knowledge. Further, independent
central banks in the US and the EU are supposed to direct financial policy
on the basis of expert knowledge. This matter is regarded as being so in-
tricate and so weighty that it needs to be handed over to scientific experts.
At the international level, the “International Commission on Radiological
Protection” (ICRP) develops recommendations that enter into the national
radiation protection legislation in a vast number of countries, and the WHO
Expert Committee gives advice on the selection and use of essential medicines.
However, in public opinion, the quality of such science-based polit-
ical recommendations is often regarded as suspect. Philip Kitcher diagnoses
a “live skepticism about the authority of science” and sees the public trust
in science seriously undermined (Kitcher 2011, pp. 16–20). In 2010, the
journals Scientific American and Nature conducted a poll among their reader-
ship, which can be assumed to be supportive of science and science-literate
(Scientific American 2010, p. 56). Still, when it came to rating the trust-
worthiness of scientific statements about agricultural matters such as the
use of pesticides or food safety, the relevant scores hardly rose above the
“neutral” level and remained well below “trust” and even more below “high
trust” (Scientific American 2010, p. 56). In the same year, a “Eurobarometer”
on science and technology produced by the European Commission con-
firmed the trend. 58% of the respondents agreed with the statement that
one “can no longer trust scientists to tell the truth about controversial issues
because they depend more and more on money from industry.” 47% of
Europeans attributed tunnel vision to scientists: they look at issues in
a very restricted science-and-technology sense and fail to integrate a broader
human or social perspective (European Commission 2010, pp. 19–23). This
was a recurrent motif in the debate about nuclear energy in Europe during
the 1980s. Critics objected that scientific experts had only a very narrow
notion of safety in view and neglected wider political ramifications such
as the security regime indispensable for a nuclear society (Wynne 2003,
pp. 406–7). Moreover, dependence of science on private research money
is believed to produce ill-founded and myopic results. Privately sponsored
research is said to achieve only limited understanding (European Commis-
sion 2010, pp. 23–7). Studies on lay participation reveal that participants
tend to suspect industry as being primarily driven by greed and for-profit
thinking. Such motives are associated with neglecting long-term conse-
quences and negative impacts on society and the environment (Williams
et al. 2017, p. 93). Further, a survey among the supporters of the two
major American political parties showed that the credibility of chunks of
scientific knowledge depends on political inclination. Republicans tend to
accept “production science” (relevant for promoting industry and economic
growth) but to reject “impact science” (identifying human influences on the
environment and human health). The latter is supposed to be politicized and,
as a result, not credible (McCright et al. 2013). Such findings suggest that
public trust in the appropriateness of scientific judgment is seriously sapped.
To be precise, it is not scientific knowledge in general that is met with
public distrust, but rather some areas of science at the interface with society.
Practice-oriented research and scientific expert knowledge are the focus of
criticism. The problem is not the Higgs boson but, e.g., nutritional research
(e.g., genetically modified organisms, but also dietary recommendations),
medical research (e.g., vaccination or alternative medicine), environmental
research, climate change and, at times, human evolution. We observe that
the trustworthiness of parts of science among the public is compromised
by what is perceived as politicization and commercialization.
As far as I can see, two major deficiencies are attributed to the assessment
of scientific experts, namely, incompetence and one-sidedness. Incom-
petence means that scientific results or recommendations are insufficiently
confirmed; one-sidedness says that the research endeavor or its outcome is
biased and thus merely draws a partial picture of the situation at hand.
Incompetent answers are not adequately supported; one-sided or biased
answers emphasize particular features at the expense of others. There is
some overlap between the two, but in general incompetence is regarded
as lack of reliability, whereas one-sidedness is perceived as lack of relevance.
In particular, the notion of one-sidedness or bias, as used in this article, does
not, in general, imply that the pertinent research is epistemically unsound.
The point rather is that an issue is addressed only partially whereas the
public would be better served by more comprehensive research endeavors.
This is why bias in the sense employed here has primarily to do with rel-
evance deficits and not necessarily with poor reliability. This is different in
particular cases. Special kinds of bias can be tied up with methodological
flaws and be epistemically faulty. This applies, in particular, to cases in
which the partial picture developed is passed off as the whole story (Carrier
forthcoming).
As a rule, incompetence is ascribed to experts in the socio-
political realm, whereas one-sidedness is feared to be widespread in com-
mercialized research. The impression of incompetence can be traced back
to various misjudgments of scientists in the political arena during the past
years. It is worth stressing at the outset that science is better than ever in
solving intricate problems. Think of the prediction of climate change from
the 1960s on or the quick identification of the complex cause of ozone
layer depletion in the 1980s. Think also of the dramatic increase in com-
putational power and solar cell efficiency in recent years. Yet in other cases,
scientists were not that successful. Earlier promises regarding the defeat of
infectious diseases needed to be taken back (as the problems associated
with the use of DDT and with antibiotic resistance testify). Conversely,
scientists warned against risks that proved to be non-existent later. Examples
are the millennium bug from 1999 and the swine flu from 2009/10. Not
infrequently, the public has seen scientists puzzled and overburdened, which
has undermined trust in the adequacy and reliability of their judgment.
Another reason for attributing incompetence to experts is the so-called
expert’s dilemma, i.e., the confrontation of expertise and counter-expertise
(Grunwald 2003). In parts of the public, the view prevails that any piece of
science-based advice can be countered and overturned by an equally science-backed
contrary recommendation. What happens to agricultural microorganisms if
genetically modified Bt-corn is implemented? Does the microwave radiation
emitted from cell phones pose long-term health risks to frequent users? Do
usual concentrations of bisphenol A involve health hazards? You get
inconsistent answers to such questions depending on which expert you ask.
A second source of the perceived reduction in quality of science-based
advice is bias or the loss of neutrality. This supposition is also backed by
the expert’s dilemma. On this interpretation, contrasting expert judgments
are produced by the politicization of science. Parts of the public share the
view that scientific experts have become part of political fighting and that
the parties to whatever dispute can rent, as it were, suitably inclined scientists.
This rent-an-expert suspicion is bolstered by studies by Robert Proctor who
showed that the American tobacco industry hired scientists who launched a
mock debate about the alleged overestimation of the risk of smoking (Proctor
2011, Part III). In a similar vein, Naomi Oreskes and Erik Conway revealed
that right-wing political circles had paid scientists for deliberately hiding
anthropogenic climate change (Oreskes and Conway 2010).
In addition to expert dissent, unanimously shared expert bias nourishes
public suspicion. In this case, it is not the opposition among the pro-
fessional opinions of scientists but rather their deceptive consensus that
is thought to betray expert bias. This effect is particularly striking in
application-oriented industrial research. Meta-reviews of clinical studies
of new medical drugs have exposed a tight correlation between the finan-
cial interests of the sponsor of a clinical trial and its results (Davidson
1986; Bekelman et al. 2003; Lexchin et al. 2003; Sismondo 2008a,
2008b; Cochrane 2012; Elliott 2014; 2016). More specific cases in point
are the largely unanimous but erroneous assessment of the beneficial influ-
ence of the hormone treatment of menopausal disorders between 1970 and
2000 (Büter 2015), the disclosure that clinical data about side-effects of
Vioxx, a novel anti-inflammatory drug, had been suppressed (Biddle
2007), and the recent exposure that the efficacy of Tamiflu had been grossly
overestimated in the 2000s (Cochrane 2014; 2015). In all these cases, eco-
nomic interests could be shown to be at work, and in many of them, bias
and unreliability emerge in unsavory unity.
Such indications of incompetence and one-sidedness have hurt the epi-
stemic authority of the sciences involved. This decline affects their credi-
bility among a wider audience in the first place, which is not the same thing
as scientific justification (which is a matter of the scientific community).
Justification and credibility are not unrelated, but they are sufficiently dis-
tinct to merit separate treatment. In section 2, I explore two strategies for
improving the credibility of science. My argument is that they are unsatis-
factory as they stand but that they can be elaborated into another proposal
that I present in section 3. This proposal gives rise to the demand of a broad
or evenhanded research agenda that includes a contrasting set of issues. In
section 4, I extend this argument to the suitability of pluralism for increas-
ing the epistemic credentials of scientific accounts. While these consider-
ations boost the importance of pluralism for solving the credibility crisis,
being stuck with a broad range of contrasting options is not a practically
useful condition. In this respect, pluralism rather serves to exacerbate the
credibility crisis. In section 5, I discuss options for diminishing the mani-
fold of scientific responses in order to vindicate scientific knowledge as a
source of practically relevant information. Bolstering the public credibility
of science requires crafting an appropriate balance between opening up
the spectrum of approaches (in order to improve the inclusiveness and reli-
ability of the knowledge produced) and narrowing the range of options
taken into consideration (so that coherent science-based advice is possible
and science proves to be relevant in practical respect). This is intended to
show in which sense and to which degree pluralism may be taken to respond
to the problems of incompetence and one-sidedness and thus to contribute
to restoring the epistemic reputation of science.
2. Possible Ways Out of the Crisis
If the politicization and commercialization is assumed to be the root of the
predicament, abandoning such influences may be a way out. However,
given the social conditions under which research operates today, this recipe
involves an operation at a very grand scale. It seems worthwhile to inquire
whether more modest moves are suited to restore trustworthiness in the
public arena.
The first suggestion for boosting the credibility of science is to build up
and reinforce the independence of science that is typically said to require
strengthening fundamental or epistemic research as a counterweight to
application-oriented research that is dominated by market forces and
political factions. Michael Polanyi is the classic source of the view that
any attempt to intervene in science will impair research, the reason being
that scientific progress is based on the non-coordinated reciprocal adjust-
ment of scientists (Polanyi [1962] 2000, pp. 1–4). John Ziman gave the
independence argument a different twist by claiming that the epistemic
culture of open and unconstrained discussion, as it prevails in fundamental
research, is indispensable for creating trust in science. By contrast, if
research operates in the grip of material interests and under the pressure
of short-term commercial or political goals, the scientific community will
lose its non-partisan and disinterested attitude and be satisfied with super-
ficial, biased, and insufficiently tested accounts (Ziman 2002; 2003). In
the same vein, an initiative of medical doctors and pharmacists in Germany,
called “MEZIS” as an acronym for “I pay for lunch myself,” seeks to increase
the distance between medical practice and industry. Again in Germany,
Günter Stock, the former president of the Berlin-Brandenburg Academy of
Sciences, launched a public campaign in 2014 for amplifying the indepen-
dence of science from political forces (but not from economic companies).
The underlying idea is that operating detached from political and eco-
nomic ambitions makes science an impartial arbiter that merits trust. This
approach typically translates into the tenet that fundamental or epistemic
research is able to counterbalance commercialized and politicized research.
The former is characterized by two features: First, its institutional goal is to
understand nature, not to produce some device intended to achieve some
practical goal. By contrast, the institutional goal of practice-oriented re-
search is related to utility. Corporate leaders judge the success of research
efforts in terms of profits reaped. Institutional goals do not necessarily
coincide with the motives of individual researchers, but they are important
because they determine what counts as success or failure of a research
endeavor (Stokes 1997, pp. 7–8; Carrier 2011, p. 14). Second, scientists
themselves determine the agenda in epistemic research. This choice is
made according to epistemic interest and expected solubility. Such research
addresses areas where something can be expected to be found with a
given theoretical or experimental approach. Epistemic research looks
where the light is (Anderson 2001, p. 493; see Kuhn 1962, p. 164). In
contrast to this autonomous problem choice, the agenda in practice-oriented
research is set by demands that are considered important in the extra-
scientific world. The agenda is demand-driven in being, first, governed by
considerations of utility and, second, imposed on science according to its
urgency as assessed by social or political standards. Whether or not such
problems are solvable in the first place is hardly taken into consideration.
Examples are the identification of suitable measures for fighting climate
change, the development of powerful electrical storage systems, or the dis-
covery of therapies of Alzheimer’s and Parkinson’s disease.1
As a result, the independence approach considers social demands and
the pressure of practice on science as the chief source of the epistemic de-
cline of science that becomes manifest in incompetence and one-sidedness.
The proposed solution is picking research problems according to epistemic
interest and feasibility, which keeps social influences automatically at a
distance. But the price to be paid is often the loss of relevance of science
for concrete social problems. Epistemic research remains sometimes locked
in the ivory tower.
The second approach to restoring the epistemic reputation of science is
what I call counter-politicization, i.e., combating illegitimate external im-
pacts on science by bringing in justified sociopolitical influences. The idea
is that the origin of the credibility crisis of science lies with the failure of
research to address real-life problems. Science indulges in self-created
issues that do not reach the world outside of university labs. Instead, science
needs to focus on practical problems. Fundamental research seldom produces
significant practical progress but attracts the lion’s share of the resources.
Redirecting these resources to projects that immediately mitigate human
suffering would be beneficial to both the epistemic authority of science
and to humankind (Kourany 2003; 2010, chaps. 1, 5; Cartwright 2006).
According to this approach of counter-politicization, political influences on
the research agenda may serve to enhance the utility of science, and it is this
increased utility that helps overcome the credibility crisis of science.
A large-scale political endeavor to foster counter-politicization is the
demand for “responsible research and innovation” which is, among other
things, part of the EU research framework “Horizon 2020.”2 An important
idea linked to this endeavor is that responsible research and innovation
should be responsive to social challenges which can be achieved by public
participation or by acting on behalf of the public in applying the so-called
precautionary principle. Such research demands the willingness of scien-
tists to adjust their projects in response to stakeholder intervention, social
values, and changing circumstances (Stilgoe et al. 2013, pp. 1572–74; von
Schomberg 2013, pp. 63–5).
1. A lot of criticism has been advanced against the distinction between epistemic and
practice-oriented research. However, such objections focus on the impossibility of catego-
rizing a given research project as either epistemic or application-driven. Yet, the conceptual
separation does not rule out that a given research project serves both ends simultaneously
(Stokes 1997, pp. 12–7). It is compatible with this distinction that there are no purely
epistemic projects (Carrier 2013, p. 2548). If the distinction is granted as a conceptual
instrument, it allows us to pursue questions as to which kinds of research are subject to
a decline in public reputation and which kinds could help re-establish credibility.
In the philosophical context, this approach of counter-politicization has
been elaborated and prominently pursued by Kitcher. His goal is to democ-
ratize science through systematic participation of the public in setting up the
research agenda. Science regains its epistemic authority if it visibly commits
itself to the betterment of the human condition. And this commitment
becomes visible if the public draws up the research agenda itself. This public
influence makes it perspicuous that science has something to contribute to
improving human life. In Kitcher’s well-ordered science, representatives of
the public do not merely give advice but make decisions about research
topics. There is no place for autonomous problem choice by scientists (Kitcher
2004; 2011, pp. 117, 129–30, 217).
The message of this counter-politicization approach can be reconstructed
such that incompetence is avoided by keeping all merely partial socio-
political interests at bay that distort a trustworthy picture, while one-
sidedness is not eliminated but rather funneled into publicly approved
channels. Focusing research on demands ratified by the people is a legitimate
sort of one-sidedness (Intemann 2015, p. 218). Whatever the general creden-
tials of this approach are, the critical question from a philosophy-of-science
perspective is whether such demand-driven research, in which the research
agenda is determined by social choice, is the best way to maximize public
benefit.
The traditional objection to this approach is the claim that discoveries
cannot be anticipated and creativity cannot be fenced. It is impossible to
guide research toward specific ends; any attempt to do so will only block
scientific progress (Polanyi [1962] 2000, pp. 3, 10, 14). This is true to
some extent, but the objection is often overdrawn (Wilholt 2012,
pp. 107–10). In fact, there are many examples of successfully planned re-
search. State authorities or foundations managed to fruitfully get research
programs underway that produced innovative environmental protection
technologies (such as catalytic converters, or flue gas desulfurization).
2. (http://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-
innovation)
Likewise, industrial research accomplished demanding research goals.
Giant magnetoresistance, a physical effect that underlies most hard disks
presently in use, was systematically sought (in analogy to a different,
already known effect), successfully identified in 1988 and subsequently
translated into a technology (Wilholt 2006). Likewise, the development
of the CD was based on advanced research in optical storage technology
and was a response to an assumed demand (de Vries 2011, pp. 61–2). In some
instances, research can be planned successfully, and this is why the option of
demand-driven research exists.
A difficulty arises, though, if research is committed to exclusively fol-
lowing this demand-driven mode. If the epistemic authority of science is
supposed to be re-established by tying the research agenda to public
choice, then the demand-driven way is the only way to go. The trouble
is that beneficial research outcome cannot be produced at will. Not in-
frequently, demand-driven research fails or yields unexpected results. Examples
of failed demand-driven projects are the development of a vaccine against HIV
and the search for ways to combat antibiotic resistance in bacteria. Both
endeavors were surprising failures; the odds of success had been expected
to be good in advance. Similarly, more than a decade ago, leading neuro-
scientists envisaged the advent of effective medication against Alzheimer’s
and Parkinson’s disease within ten years (Monyer et al. 2004, p. 36).
Nothing of this sort is in the offing yet. Conversely, many achievements
go back to chance discoveries. Medical research provides lots of examples
of such unexpected gifts. Administering beta-blockers against cardiac
insufficiency, employing lithium against bipolar disorders, and the present
use of Viagra are all due to pleasant surprises. Nobody had anticipated
these options. It is often impossible to discern in advance whether a research
endeavor will prove useful eventually.
As a result, the chief difficulty of a comprehensively demand-driven
research agenda is that research goals stated in advance often cannot be
reached. Such goals can be attained under favorable conditions, but we can-
not rely on such success. This uncertainty makes it risky to seek to ascertain
the credibility of science by counting on the success of democratically
planned research projects. As a result, pursuing the counter-politicization
approach can easily backfire and inadvertently further undermine the epi-
stemic reputation of science.
3. Benefits of a Pluralist Research Agenda
The preceding considerations suggest that, in the judgment of the key
figures of the debate, there are two ways to strengthen the credibility of
science among the public: increasing its relevance and reliability. Trust-
worthy research addresses problems that are considered important in the
eyes of the public (i.e., relevant) and produces results that are well con-
firmed (i.e., reliable). The latter virtue chiefly addresses the charge of in-
competence and is emphasized by the independence approach. The former
merit mainly responds to the objection of bias and is stressed by the counter-
politicization account. People are assumed to judge science by the viability of
its contributions to practical problems and by the perceived dependability of
its findings. The claim I seek to elaborate in the second half of this paper is
that pluralism plays an ambivalent role in this endeavor. I argue in sections 3
and 4 that promoting pluralism is a way to establish both relevance and
reliability, but go on to point out in section 5 the flipside of pluralism when
it comes to guiding action. Pluralism is beneficial in epistemic respect but
detrimental in pragmatic respect.
I begin by exploring the influence of pluralism on relevance. A pluralist
approach to crafting the research agenda means broadening the agenda.
This, in turn, augments the prospects of including elements that are taken
to be significant by a wider audience. Various indications support the view
that the credibility crisis of science at the interface with society is at least
partially due to the impression of parts of the public that research does
not sufficiently take up their interests and concerns. In fact, the complaint
of one-sidedness is widely shared with regard to medical drug research.
Pharmaceutical companies exclusively seek treatment options that are
susceptible to patenting. This leaves lifestyle effects, such as diet and exer-
cise, out of consideration (Brown 2008, pp. 197–9). Other non-patentable
approaches, such as using bacteriophages for fighting bacterial infections, are also
left unexplored by privately financed research. Such a skewed research
agenda prompts the public attitude that medical research is mostly profit-
driven and, as a result, fails to respond to urgent practical questions. Given
the widespread public unease about for-profit thinking in medicine, broad-
ening the research agenda in this way can be expected to enhance the trust-
worthiness of science among the public.
There is more specific evidence supporting this expectation. A glaring
example of a lack of trust is vaccine skepticism and the anti-vaccination
movement. A significant hesitancy is observed among parents as regards
the vaccination of their children. This attitude is usually attributed to egre-
gious knowledge deficits among the parents and a general attitude of hostility
toward science. The often-quoted evidence is that the allegation of a causal
link between measles-mumps-rubella vaccine and autism still lingers on in
the relevant circles, although this relationship was disproven and uni-
versally abandoned decades ago. However, as Maya Goldenberg pointed out,
parents’ reluctance is not always the result of general skepticism about
science, but rather due to their impression that the questions most pressing
to them were missing on the standard agenda. Research as usual is satisfied
with demonstrating that vaccination is beneficial to public health: it reduces
the incidence of the relevant disease drastically. It is granted that side effects
occur in rare cases and that they may even be severe, but on the whole, benefit
largely outweighs harm. However, parents are not so much interested in the
population level; they rather focus on their individual child. More specifically,
they want to see it ruled out that their child is among those rare cases in
which damage occurs. Consequently, they ask for research on those factors
that predispose children to show such side effects. To date, such research is
missing (Goldenberg 2016).
A study on lay participation in implementing hydraulic fracturing tech-
nologies, commonly known as fracking, in Northern England points in the
same direction. The participation scheme was guided by the idea that the
local community would receive information about the benefits of fracking
and the effective risk management procedure in place and would welcome
the technology in this light. However, the response turned out to be dif-
ferent. The participants criticized that the whole design was structured in
a restricted and one-sided fashion and demanded a much broader deliber-
ative procedure around various innovation choices and the social desirabil-
ity of fracking (Williams et al. 2017, pp. 98–9). It is true that this example
concerns technology implementation rather than research options in a nar-
row sense. But the common question is which scientific or technological
innovations are suitable and advantageous to the public. And again, the
upshot is that the public went away disappointed because the agenda
was not set up in a sufficiently pluralistic way.
Accordingly, it is the omission of questions deemed relevant by parts of
the wider audience that is likely to prompt a marked lack of trust in sci-
ence. It is plausible to conclude that enlarging the range of research topics
and addressing such missing questions would diminish the impression of
one-sidedness of research and thus would contribute to fostering public
trust. For gaining public trust in this respect, making a serious effort would
certainly be as important as an accomplished solution. Hugh Lacey has
introduced the notions of “neutrality,” “inclusiveness,” and “evenhanded-
ness” to designate the proposition that science should equally serve a
variety of different kinds of research objectives. In particular, scientific
research should not privilege some value-outlooks or special interests at
the expense of others. That is, research projects are legitimately guided
by non-epistemic goals and heuristics, but on the condition that the
ensuing research commitments are distributed even-handedly across the
range of “viable” social demands. For instance, commercial interests should
be balanced by values such as empowerment of poor people and indigenous
cultures, and environmental sustainability (Lacey 2013, pp. 79, 81–2).
Accordingly, bias can be avoided by broadening the research agenda. Such
evenhandedness is produced by pluralism and is supposed to retain or re-
establish the relevance of science for marginalized segments of society.
The expectation is that evenhandedness is suited to bolster public trust in
science.
The sort of pluralism hitherto addressed refers to a wide range of issues
taken up in a research field. This pluralism can be tied up with the posi-
tions discussed in section 2. An evenhanded research agenda may then also
mean a balance among three major branches of science. First, Kitcher’s
well-ordered science addresses research items that the public assumes to
be in its own well-considered interest. However, as argued before, such an
exclusively demand-driven strategy seems to be too risky and its success
too unpredictable to serve as the chief plan for regaining public trust. Rather,
Kitcher’s procedure might be better suited to govern a particular branch of
research, namely, “research in the public interest” (Krimsky 2003, chaps. 11,
13). In this type of endeavor, research topics are chosen according to their
intended effects upon those affected by the research results (Carrier 2011,
pp. 19, 28). Philanthropic initiatives in medical research represent such
research in the public interest. At present, this area is dominated by private
initiatives of individuals or foundations. Likewise, research on climate change
is an effort of high practical relevance that neither grew out of epistemic
research nor was sponsored by economic companies. Research in the public
interest stakes out the area in which Kitcher’s scheme of well-ordered science
might be legitimately put into practice. In other words, the project of
counter-politicization is a promising and convincing approach if it is inter-
preted as a proposal for supplying a more coherent structure to this area of
science in the public interest.
Second, fundamental or epistemic research, as championed by the inde-
pendence approach, also represents a legitimate mode of selecting research
topics. Judged by the standard of evenhandedness, the reason for its legit-
imacy is that such research underpins more concrete, demand-driven pro-
jects. On some occasions, practice-oriented research needs to return to the
drawing board and resort to epistemic endeavors in order to be successful.
The early attempts in gene therapy came to grief profoundly and practical
success only emerged after a period of fundamental reorientation (Lewis
2014). Thus, epistemic research is sometimes vital for making practice-
oriented research sustainable. Third, likewise judged by the standard of
inclusiveness, market-driven practice-oriented research merits consideration
as well. Such research is also demand-driven in that it proceeds from expected
human needs and interests. Only what is liked and bought by many is suc-
cessful on the market (Carrier 2011, pp. 17–8). Industrial research serves
many practical interests well and should not be stifled by one-sidedly
privileging research exhibiting a universal appeal.
Yet the downside is that each such branch tends to neglect a variety of
questions that are important from a more inclusive perspective. This is
why a balance or pluralism needs to be retained among all three branches.
The independence approach champions epistemic research and this is all
right as far as it goes. It serves to detach research from social forces, but it
thereby also removes science from the concerns of the people. The merit of
counter-politicization is to link up research with these concerns and to
feature science in the public interest. However, exclusively stressing this
approach overlooks the value of autonomous problem choice by scientists
and the advantages of market-driven research. The latter also takes up
needs, interests, and concerns of the people in producing useful novelties.
Many welcome its products, such as high-density hard disks or flat
screens.3 Such research may also contribute to making science more accept-
able and more credible among a lay audience. As a result, epistemic research,
market-driven research, and science in the public interest are all legitimate.
The really troubling feature is that one of these branches gains a monopoly for
acceptable research, as market-driven research virtually has in present-day
pharmaceutical research, or as Kitcher demands for well-ordered science.
Rather, these three branches should be considered complementary. All
three contribute to the evenhandedness of the research agenda.
A salient question at this juncture is what the right balance or distribution
of resources among the three branches should be. Rather than offering a
comprehensive scheme for estimating the most beneficial ratio, I propose
to proceed from perceived imbalances. Maladjustments are often relatively
easily diagnosed in a consensual way. For instance, it is almost part of
conventional wisdom that privately sponsored pharmaceutical research is
in need of a public counterweight. The bias reported from this area has
reached epic dimensions.4 Similarly, demand-driven research on nuclear
fusion had run into an impasse in the 1980s since plasma turbulence was
much more difficult to control than anticipated. As a result, scientists de-
cided to step back and to approach the matter with a long-term perspec-
tive in mind. The upshot was the creation of plasma science as an
epistemic discipline. This shift from demand-driven to epistemic research
was supposed to amend and rectify the shortcomings produced by an over-
hasty commitment to practical benefit (Plasma Science Committee 1995;
Weingart, Carrier and Krohn 2007, chaps. 1, 3; see also the gene therapy
example mentioned before). Finally, students of quantum entanglement
were happy when prospects of application emerged with respect to their
supposedly purely epistemic field. Quantum encryption and quantum
computation relieved the field from its former otherworldly reputation.
The upshot is that by responding to a perceived imbalance, a proper equi-
librium among the three branches at hand can be restored. The present
article can be viewed as an attempt to make the need for striking such a
balance more conspicuous and as a plea for making such shifts and cor-
rections in a more explicit, reasoned, and elaborate way. Such a balance
seldom arises by itself but rather needs to be produced by deliberate effort.
All three branches are part of present-day research, to be sure, but many
specific research areas suffer from a disequilibrium that calls for being
redressed. One of the goals of this paper is to highlight the need for active
intervention in order to secure a sufficient amount of plurality and to restore
evenhandedness (see Carrier 2011, pp. 28–9).
3. The usefulness of market-driven research is contentious. An objection raised by a
reviewer for this journal is that industrial research of this kind only benefits minorities,
i.e., customers in well-to-do countries, and does not respond to social needs in a more sub-
stantial sense. However, the impact of market-driven research is difficult to anticipate.
Unexpected spillovers may occur. Cell phones were developed for consumers in rich coun-
tries, but they had a tremendous favorable impact on Africa. The technology allowed poor
African countries to skip expensive ground-networks and to provide affordable and widely
accessible telecommunication options (Pew Research Center 2015, p. 7). Further, most
innovations merely serve minorities. Each new medical drug targets only a small set of
people. Requiring of acceptable novelties that they address the needs of most of the people
at a global scale leaves us with a tiny fraction of advances and could slow down the pace of
innovation considerably.
4. Krimsky 2003, pp. 147–52; Brown 2008; 2011. See also the references regarding
biased clinical trials given in section 1.
The upshot is that, regarding the research agenda, a two-fold pluralism
seems apt to promote the relevance of research for the general public. This
pluralism concerns setting the research agenda in an evenhanded way by
taking up public interests and concerns broadly, and it extends to the
three branches of science. This latter sort of pluralism connects the general
call for an inclusive agenda with the proposals of the independence and
the counter-politicization approach for overcoming the credibility crisis
of science.
4. Pluralism as a Pathway to Reliable Knowledge
In addition to being biased scientific experts are suspected of being in-
competent, that is, giving ill-founded recommendations. In this section I
address the role of pluralism in articulating reliable answers to problems of
a practical nature. This role is ambivalent. The first thing is that social epis-
temology emphasizes the importance of a variety of competing approaches
for arriving at well-confirmed conclusions. The social notion of objectivity
is centered on critical debate and reciprocal control. A multiplicity of ap-
proaches serves this critical spirit best and is suited to neutralize the impact
of the blind spots from which each individual suffers. The increased level of
criticism, as it emerges in a pluralist setting, can be taken to intensify the
process of examination and strengthen the well-testedness of those approaches
that survive the process. Thus, strife and antagonism within the scientific
community are among the core features of scientific method. Pluralist science
is better-tested and more reliable science (Popper 1966, pp. 412–13; Longino
1990, pp. 66–80; Carrier 2013, pp. 2548–49, 2558).
In fact, this is how research tends to approach demanding challenges.
When difficult new issues need to be tackled, the research community is
likely to split up into a variety of competing approaches. Consider the
competition between string theory and quantum loop gravity or between
a cell-based and a holistic understanding of cancer (Carrier and Finzer 2011)
in present-day epistemic research. The epistemic rationale is to attack a
problem from different angles and thereby to increase the odds of success
(Kitcher 1993, chap. 8). When a deeper understanding is reached, this
pluralism usually gives way to consensus. The customary account of this
transition from pluralism to consensus says that severe pluralist testing
often reveals that a certain approach is superior in all relevant respects so
that a unanimously accepted standard account surfaces under such circum-
stances. After some time of evolution of a spectrum of contrasting theories, a
variety of different indicators of methodological quality will tend to clearly
distinguish one of the rival accounts (Kuhn 1969, pp. 204–6; McMullin
1987, p. 67; Kitcher 2000, pp. 26–7, 35). A multiplicity of approaches
at the forefront of research is epistemically beneficial and such a manifold
emerges naturally since the competing approaches have different profiles of
virtues and vices and cannot be judged unambiguously for this reason
(Kuhn 1977). After some time, plurality gives way to a greater unanimity
of judgment since typically one of the alternative approaches achieves a
superior ranking on most or all quality standards.
It is worth emphasizing that the accounts under consideration regard-
ing confirmation and reliability are rivals and conflict with one another,
whereas the approaches discussed with regard to the even-handedness of
the research agenda are complementary and do not converge toward con-
sensus. Actually, in this latter respect, sustained pluralism is the desirable
way to go. The import of a study on the effect of a medical drug is delim-
ited more clearly if it is contrasted with a study on lifestyle effects. The
relevance of the former study is increased by delineating its domain
through another study conducted from a different angle. However, as to
confirmation and reliability, pluralism extends to contrasting, incompati-
ble approaches. In this regard, the account elaborated here favors transient
pluralism: pluralism is welcomed as a means to the epistemic end of reach-
ing a justified consensus eventually. Pluralism leads to stricter tests and
undermines itself by prompting superior accounts. Such an unforced
consensus, produced by the power of experience and argument, is the
traditional hallmark of scientific knowledge.
The social notion of objectivity and the pluralist approach going along
with it provide the prospect of upholding demanding methodological
standards even if no neutral scientists are available. The Baconian notion
of objectivity requires dropping all non-epistemic motives and proceeding
in a detached, open-minded and unprejudiced fashion (Bacon [1620]
1863, Bk. I, § 37–65, 68). While it is certainly true that Baconian stan-
dards can be approached to some degree in some fields, it seems also clear
that such neutrality is hard to achieve when stakes are high and interests
are strong. Worldly ambitions and social values are likely to interfere with
the neutrality and quality of inquiry. Financial conflicts of interest can be
expected to lead to shoddy studies that contribute to damaging the credi-
bility of science (Elliott 2014). Around 2000, editors of medical journals
adopted an anti-corruption policy to the effect that meta-reviews about the
safety and effectiveness of medical drugs were required to be done exclu-
sively by authors without vested interests in any one of the drugs under
review. This policy failed because in many cases no such authors could be
identified (Brown 2008, p. 194). Scientists who were sufficiently familiar
with a pharmaceutical area also had stakes in this area. Expertise and vested
interest formed a faithful if unholy couple. This is where Baconian objec-
tivity has shifted out of reach.
However, the social and pluralist notion of objectivity can still work
under such circumstances, the reason being that distributing the epistemic
risk among various factions by dividing up a research community and
multiplying forces is a strategy of general benefit under uncertainty. As
a result, pluralism and controversy are also found in demand-driven research
where future financial gains loom large. In medicine, Alzheimer’s disease is
subject to divergent judgments. One camp suggests beta-amyloid plaques as
the chief culprit; a rival camp casts tau protein tangles in the villain role.
The causal story is reversed in both camps. What is considered the cause in
one research community is assumed to be the effect in the rival community.
A third party claims that the crucial damage is done by energy shortage in the
brain and that this penury produces the physiological features that the com-
peting approaches take to be causes. In fact, as this third approach suggests,
Alzheimer’s is a sort of diabetes of the brain. A fourth faction features prions,
i.e., infectious agents composed of misfolded proteins. In this view, the disease
is driven by an infection-like spread of such protein aggregates in the brain.
Pluralism is epistemically beneficial even if the studies involved are
biased in themselves. This means that an approach that addresses the
matter from a specific narrow angle can be offset by a study that is simi-
larly narrow, albeit in a different respect. For instance, in the early 2000s,
scientists articulated worries to the effect that the anti-clotting efficacy of
aspirin would decrease over the years and thereby stirred up a controversy.
Some years later, a journalist revealed that a company producing alter-
native anti-clotting agents had launched the whole debate about this
assumed habituation effect. Conversely, some of the principal scientists
who opposed the alleged drop in effectiveness were funded by the leading
aspirin manufacturer (Wise 2011, p. 288). I take this episode as an indi-
cation that competing economic interests and opposing value attitudes are
able to exert a corrective influence. They can prompt reciprocal criticism
that serves to redress the balance and to produce a better confirmed
account eventually.5
Along these lines, a lack of pluralism is sometimes held responsible for
low epistemic quality. In 2012, Gilles-Éric Séralini published a study in
which he reported the emergence of cancer in rats nourished by glyphosate-
tolerant genetically modified corn. The study was heavily criticized in meth-
odological respects, and its results are highly contested (Carrier forthcoming).
The crucial item in the present context is that in an article published in the
periodical Le Monde in 2012, a large number of French scientists blamed
the lack of dependable results in this area on the absence of pluralism. They
argued that it would have been imperative to assign the preceding studies of
the agent substance, as conducted by the relevant industry, to a variety of
researchers in the first place (Andalo et al. 2012). Accordingly, in the judgment
of the pertinent scientific community, studies that approach the matter from
different directions would have produced a more reliable outcome.
Such pluralism is typically followed by an emerging consensus, but
this transition is not yet complete in ongoing research. While the process
typically remains hidden from the public in epistemic research, the oppo-
sition of diverse approaches becomes conspicuous if the research proceeds
in the social arena. In cases of research on urgent problems, the struggle
between the antagonists proceeds in the limelight (Collins and Evans 2002,
pp. 246–8; Carrier 2013, p. 2562). The pattern agrees in epistemic and
practically relevant research at the interface to society: a pluralistic phase
of antagonism is followed by concurrence. The difference is that in the latter
case, this antagonism and the reciprocal criticism going along with it
unfolds right on open stage before the public eye. This is where the down-
side of pluralism becomes visible. A wide range of competing hypotheses is
less than convincing as a guide to matters of practical importance. I will
address this question in section 5, but this multiplicity also casts new light
on the predicament of science in the social arena.
5. However, social notions such as reciprocal criticism cannot guarantee in themselves
that demanding epistemic standards are satisfied. What is necessary in addition is an indi-
vidual commitment to rational argument and empirical evidence. An “epistemic attitude”
of a sufficient number of individual scientists is needed as well (Carrier 2013, pp. 2563–64;
López Cerezo 2015, pp. 314–16).
Pressing practical problems often cannot be solved by linking them up
with the system of knowledge. In such cases, science needs to face com-
plexity and uncertainty, and actually runs the risk of being overburdened.
Take the transition of national electricity systems from fossil sources to re-
newable energy. Experts are at a loss in some respects about which measures
are to be taken for this purpose. Is extending the electric grid the way to
go or is it better to support the development of efficient and powerful
small-scale storage systems? There are various technical processes in the
pipeline for stepping up the local storage capacity, and it might well
become feasible to store green electricity locally for periods without wind
and sunlight. No expensive large-scale electric grids would be necessary.
But nobody knows at present whether this guess will be correct. Science
operating at the interface with politics is faced with complex challenges
and a large amount of uncertainty. As a result, the ensuing diversity of
judgment is a usual response to complexity and uncertainty; it is not a
deficiency of politicized research. As a result, the expert’s dilemma appears
in a more favorable light. The failure of science in some areas of practical
importance is real; regarding some intricacies of the life world, the limits
of scientific knowledge are reached. Yet, rather than indicating the cor-
ruption of science through its politicization, this failure shows how hard
it is to move from scientific generalizations to complex challenges of the
life world. To sum up, conclusions reached through confronting antago-
nistic approaches with each other are likely to have run through a test
procedure of pronounced severity and tend to be more reliable. This is
how pluralism in science may contribute to overcoming the perception of
incompetence.
5. The Harm Done by Pluralism to Taking Action
The preceding considerations primarily concerned the positive impact of
reciprocal censure and mutual control on the epistemic credentials of sci-
entific assumptions. As a rule, pluralist science is more severely tested and
more reliable—provided that sufficient time can be granted to enable
consensus formation. However, it is one thing to say that pluralism serves
to step up the justification of research results; it is a different thing to
assume that this improvement is recognized by the public and thereby
enhances credibility. To be sure, some complaints about incompetence
might dissolve if the public comes to understand the importance of con-
troversy for examining knowledge claims and to appreciate the partial
nature of many such claims. Still, appreciating pluralism goes only halfway
in making science appear more trustworthy to the public. After all, in order
for science to become relevant to practical issues, it needs to deliver clear
messages. A range of contradictory suggestions is unlikely to appear particularly
useful to a wider audience.
What is characteristic of science in the social arena is the uncertainty
and tentative nature of many scientific responses, as well as the time
pressure under which they are developed. It is of no avail to wait until the
dust of controversy has settled. As a result, the plurality of antagonistic
approaches is exposed to the general public (see section 4) and is taken
as indicating incompetence and unreliability. The question is, accordingly,
how to deal with this downside of pluralism for science in the social arena.
One suggestion is that politicians or agents in general need to learn how to
cope with conflicting advice from science and to pick for themselves which
advice they consider appropriate (de Melo-Martín and Intemann 2014,
pp. 606–7). This may be necessary in some cases, but issued as a general
recommendation it means giving up on the challenge of science-based
advice. Another suggestion is to pick coherent chunks of knowledge as
the basis of recommending action (Chang 2012, pp. 265–6). However,
in cases of practical relevance, the controversial issues usually extend well
into the relevant parts of knowledge.
Faced with concrete challenges, scientists cannot afford to wait until one
of the relevant approaches has the edge over its competitors. Rather, the
spectrum of approaches needs to be narrowed at a quicker pace. I suggest
two sorts of consideration: epistemic robustness drops factors and accounts
that have no immediate relevance for the judgment at hand; social robust-
ness leaves out all choices that hardly stand a chance of being implemented
because of opposing interests and value-attitudes in the population.
A promising strategy for arriving at widely shared conclusions is “epi-
stemic robustness” or “coarse-graining”: the gist of an analysis or recom-
mendation remains unchanged even though the underlying causal influences
and factual conditions vary to some degree. In practical matters it is often a
widely accepted strategy to “be on the safe side” and to recommend thresh-
old values that remain well below the expected onset of harmful effects.
Thus, uncertainty about the precise nature and extent of a distortion does
not necessarily hurt. The Intergovernmental Panel on Climate Change
(IPCC) also practices this strategy by admitting large error bars. If epistemic
robustness is heeded, ignorance of the precise conditions matters less in prac-
tical respect (Carrier 2010; Carrier and Krohn 2016). If large wiggle room
is granted to the relevant projections and predictions, consensus is more likely
to ensue. This means emulating the process of convergence in fundamental
research by blurring the relevant results. The challenge is to frame these results
in such a way that they unambiguously point in certain practically relevant
directions. A variant of this strategy involves coarse-graining the accounts on
offer (rather than the details of the influences at hand). The idea is to distill
from the plurality of approaches a conclusion that is suggested or entailed by
most of them. One such option is to select measures that need to
be taken anyway. For instance, moving to green electricity requires improved
power storage systems in any event, and for coping with climate change,
some adjustment to sea-level rise will be indispensable at any rate.
Second, invoking social values may contribute to reducing the range of
accounts scientists need to take into consideration. The odds of getting a
conclusion accepted by politics and the public increase if the conclusion is
compatible with the value-attitudes that prevail in society. “Social robustness,”
in the sense of compatibility with widespread value-attitudes in society,
may thus shape the content of the recommendations (Carrier 2010;
Carrier and Krohn 2016). Faced with controversial evaluations, scientists
may attempt to squeeze a lesson out of the multiplicity of scientific accounts
that appears endorsable to many social factions. An example of such socially
robust advice is setting strict threshold values for suspicious chemicals even
without unambiguous evidence of their harmfulness. Being cautious con-
tributes to pacifying strife and demonstrates a willingness to meet the
critics halfway. Further, if the population is in favor of green energy, un-
certainties regarding the climate effects of coal-fired power plants need not be
taken into consideration (Carrier 2010; Droste-Franke et al. 2015, pp. 13,
38–51; Carrier and Krohn 2016). In addition, scientists may, in elaborating
their recommendations, take into account the social consequences of their
suggestions, or, more precisely, the socio-economic costs of being wrong,
and thereby drop many such suggestions without painstakingly assessing
their epistemic credentials (Douglas 2000; 2009).
These two strategies appear to be suited to reducing pluralism in practice-
driven research.6 It goes without saying that the approaches that become
subject to this evaluation of practical relevance need to pass some mini-
mum threshold of reliability. Yet the auxiliary criteria brought to bear in
this second round of evaluation do not latch onto their epistemic credentials
but rather focus on their practical virtues. Consequently, such pragmatic
auxiliary considerations can be trumped by epistemic superiority. It is true
that social robustness and the attempt to remain within the framework of
publicly accepted values discourage bold expert recommendations that call
current commitments into question. As one of the referees
for this journal objected, social robustness tends to favor the status quo and
short-term goals over long-term ones. The point is granted. However, audacious
expert advice that envisions new horizons should be supported epistemically
and not draw on auxiliary standards alone.7

6. An additional strategy is to identify and abandon “agnotological studies” such as
those revealed by Proctor and by Oreskes and Conway (see section 1). Such studies are
methodologically flawed and fail to represent a serious epistemic endeavor (Carrier
forthcoming).
6. Conclusion
I have explored how the public credibility of science can be preserved even
though science is part of the public arena and subject to partisan political and
economic forces. This question is addressed by stressing the benefits of a
broader range of research problems and a variety of competing approaches.
The inclusiveness induced by this topical expansion serves to constrain the
one-sidedness of the research agenda and thus to promote relevance; and
the reciprocal control produced by diversity within the scientific community
is apt to improve epistemic support and thus to combat incompetence. This
applies to conceptual pluralism within each research field, but also to plural-
ism among the branches of science. Strengthening pluralism within and
among these branches is a way of making research more relevant and reliable.
While it is important to realize that opening up the conceptual space is
epistemically beneficial, it is also true that science, as a guide to practical
matters, needs to cut back on pluralism. The procedure employed in epistemic
research is to pursue controversial avenues until one of the relevant approaches
manages to become superior in all relevant respects. Yet such a smooth
resolution of controversies cannot be expected within the narrow time
constraints of the practical realm. Since pluralism is perceived by the wider
public as a lack of knowledge and proficiency, it is likely to sap public
credibility. In practical matters, the urgency of the issues at hand requires
reducing the range of options. This reduction is the complementary step to
broadening the scope; both moves are called for in order to combat the
appearance of incompetence.
Pluralism can be cut back for practical matters by appeal to epistemic
and social robustness. Epistemic robustness focuses on invariant features or
consequences widely shared among competing scientific accounts. Social
robustness emphasizes scientific accounts that fit into the pertinent non-
epistemic value system. It leaves out all choices that conflict with powerful
interests and values and can be expected to fail in the political process
anyway. Thus, I suggest addressing the credibility crisis of science by means
of a carefully crafted balance between opening up the conceptual space and
diminishing the range of options. Pluralism is part of the solution to the
credibility crisis in promoting evenhandedness and well-testedness, i.e., rel-
evance and reliability, but it is also part of the problem in preventing the
emergence of unambiguous practical suggestions. In order for science to be
a suitable guide for practice, the leeway of options needs to be narrowed and
pluralism curbed. This can be achieved by appeal to auxiliary criteria that do
not focus on how precisely such suggestions fare empirically (assuming that
they pass a minimum threshold of epistemic quality), but on their appropriateness
in practical respect.

7. In contrast to Wilholt (2013, p. 250), my proposal does not bind the trustworthiness
of science-based advice to shared relevant evaluations between researchers and the public.
Scientists can contradict public expectations without compromising their trustworthiness
if they are backed by epistemic reasons.
References
Andalo, Christophe et al. 2012. “Science et Conscience.” Le Monde, Nov. 16,
2012. http://www.lemonde.fr/idees/article/2012/11/14/science-et-
conscience_1790174_3232.html#
Anderson, Philip W. 2001. “Essay Review. Science: A ‘Dappled World’ or
a ‘Seamless Web’.” Studies in History and Philosophy of Modern Physics 32:
487–494.
Bacon, Francis. [1620] 1863. The New Organon, trans. J. Spedding, R. L.
Ellis, and D. D. Heath. The Works VIII. Boston: Taggard and Thompson.
Bekelman, J. E., Y. Li, and C. P. Gross. 2003. “Scope and Impact of
Financial Conflicts of Interest in Biomedical Research: A Systematic
Review.” Journal of the American Medical Association 289: 454–465.
Biddle, Justin. 2007. “Lessons from the Vioxx Debacle: What the Privati-
zation of Science Can Teach Us About Social Epistemology.” Social Epis-
temology 21: 21–39.
Brown, James R. 2008. “The Community of Science.” Pp. 189–216 in The
Challenge of the Social and the Pressure of Practice: Science and Values Revisited.
Edited by M. Carrier, D. Howard, and J. Kourany. Pittsburgh: University
of Pittsburgh Press.
Brown, James R. 2011. “Medical Market Failures and Their Remedy.”
Pp. 271–281 in Carrier and Nordmann (2011).
Büter, Anke. 2015. “The Irreducibility of Value-Freedom to Theory
Assessment.” Studies in History and Philosophy of Science 49: 18–26.
Carrier, Martin, and Patrick Finzer. 2011. “Theory and Therapy: On the
Conceptual Structure of Models in Medical Research.” Pp. 85–99 in
Carrier and Nordmann (2011).
Carrier, Martin, and Wolfgang Krohn. 2016. “Scientific Expertise: Epistemic
and Social Standards. The Example of the German Radiation Protection
Commission.” Topoi. An International Review of Philosophy, published online
DOI 10.1007/s11245-016-9407-y.
Carrier, Martin, and Alfred Nordmann (eds.). 2011. Science in the Context of
Application. Methodological Change, Conceptual Transformation, Cultural
Reorientation. Dordrecht: Springer.
Carrier, Martin. 2010. “Scientific Knowledge and Scientific Expertise:
Epistemic and Social Conditions of their Trustworthiness.” Analyse und
Kritik 32: 195–212.
Carrier, Martin. 2011. “Knowledge, Politics, and Commerce: Science under
the Pressure of Practice.” Pp. 11–30 in Carrier & Nordmann (2011).
Carrier, Martin. 2013. “Values and Objectivity in Science: Value-Ladenness,
Pluralism and the Epistemic Attitude.” Science & Education 22: 2547–568.
Carrier, Martin. forthcoming. “Agnotological Challenges: How to Capture
the Production of Ignorance in Science,” (under review).
Cartwright, Nancy. 2006. “Well-Ordered Science: Evidence for Use.”
Philosophy of Science 73: S981–91.
Chang, Hasok. 2012. Is Water H2O? Evidence, Realism and Pluralism.
Dordrecht: Springer.
Cochrane. 2012. “Industry Sponsorship and Research Outcome.” Cochrane
Library 2012, Issue 12, http://onlinelibrary.wiley.com/doi/10.1002/
14651858.MR000033.pub2/epdf/standard, last accessed 4 January,
2016.
Cochrane. 2014. “Tamiflu and Relenza: How Effective are They?”
Cochrane Community News Release, 10 April 2014, http://community.
cochrane.org/features/tamiflu-relenza-how-effective-are-they?, accessed
1 April, 2015.
Cochrane. 2015. “Neuraminidase Inhibitors for Preventing and Treating
Influenza in Adults and Children.” http://onlinelibrary.wiley.com/doi/
10.1002/14651858.CD008965.pub4/pdf/abstract, accessed 19 April, 2015.
Collins, Harry M., and Robert Evans. 2002. “The Third Wave of Science
Studies: Studies in Expertise and Experience.” Social Studies of Science 32:
235–296.
Davidson, Richard. 1986. “Sources of Funding and Outcome of Clinical
Trials.” Journal of General Internal Medicine 12/3: 155–158.
De Melo-Martín, Inmaculada, and Kristen Intemann. 2014. “Who’s Afraid
of Dissent? Addressing Concerns about Undermining Scientific Consensus
in Public Policy Developments.” Perspectives on Science 22: 593–615.
de Vries, Marc J. 2011. “Science in the Context of Industrial Application:
The Case of the Philips Natuurkundig Laboratorium.” Pp. 47–66 in
Carrier and Nordmann (2011).
Douglas, Heather. 2000. “Inductive Risk and Values.” Philosophy of Science
67: 559–579.
Douglas, Heather. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh:
University of Pittsburgh Press.
Droste-Franke, Bernd, Martin Carrier, Matthias Kaiser, Miranda Schreurs,
Christoph Weber, and Thomas Ziesemer. 2015. Improving Energy Decisions.
Towards Better Scientific Policy Advice for a Safe and Secure Future Energy System.
Heidelberg: Springer.
Elliott, Kevin C. 2014. “Financial Conflicts of Interest and Criteria for
Research Credibility.” Erkenntnis 79: 917–937.
Elliott, Kevin C. 2016. “Standardized Study Designs, Value Judgments,
and Financial Conflicts of Interest in Research.” Perspectives on Science
24: 529–51.
European Commission. 2010. “Science and Technology Report. Special
Eurobarometer 340 / Wave 73.1-TNS Opinion and Social.” ec.europa.
eu/public_opinion/archives/ebs/ebs_340_en.pdf (accessed 26 April
2017)
Goldenberg, Maya J. 2016. “Public Misunderstanding of Science? Reframing
the Problem of Vaccine Hesitancy.” Perspectives on Science 24: 552–581.
Grunwald, Armin. 2003. “Technology Assessment at the German Bundestag:
‘Expertising’ Democracy for ‘Democratising’ Expertise.” Science and Public
Policy 30: 193–98.
Intemann, Kristen. 2015. “Distinguishing between Legitimate and Illegit-
imate Values in Climate Modeling.” European Journal for Philosophy of
Science 5: 217–32.
Kitcher, Philip. 1993. The Advancement of Science. Science without Legend,
Objectivity without Illusions. New York: Oxford University Press.
Kitcher, Philip. 2000. “Patterns of Scientific Controversies.” Pp. 21–39 in
Scientific Controversies. Philosophical and Historical Perspectives. Edited by
P. Machamer, M. Pera, and A. Baltas. New York: Oxford University Press.
Kitcher, Philip. 2004. “On the Autonomy of the Sciences.” Philosophy
Today 48 (Supplement 2004): 51–57.
Kitcher, Philip. 2011. Science in a Democratic Society. Amherst: Prometheus.
Kourany, Janet A. 2003. “A Philosophy of Science for the Twenty-First
Century.” Philosophy of Science 70: 1–14.
Kourany, Janet A. 2010. Philosophy of Science after Feminism. Oxford: Oxford
University Press.
Krimsky, Sheldon. 2003. Science in the Public Interest. Has the Lure of Profits
Corrupted Biomedical Research? Lanham: Rowman & Littlefield.
Kuhn, Thomas S. 1962. The Structure of Scientific Revolutions. Chicago:
University of Chicago Press, 3rd edition 1996.
Kuhn, Thomas S. 1969. “Postscript to the Second Edition of The Structure
of Scientific Revolutions.” Pp. 174–210. Chicago: The University of
Chicago Press.
Kuhn, Thomas S. 1977. “Objectivity, Value-Judgment, and Theory-Choice.”
Pp. 320–39 in The Essential Tension. Chicago: University of Chicago Press.
Lacey, Hugh. 2013. “Rehabilitating Neutrality.” Philosophical Studies 163:
77–83.
Lexchin, Joel, L. A. Bero, B. Djulbegovic, and O. Clark. 2003. “Pharmaceutical
Industry Sponsorship and Research Outcome and Quality: Systematic
Review.” British Medical Journal 326: 1167–1170.
Lewis, Ricki. 2014. “Gene Therapy’s Second Act.” Scientific American 310:
52–7.
Longino, Helen E. 1990. Science as Social Knowledge: Values and Objectivity in
Scientific Inquiry. Princeton: Princeton University Press.
López Cerezo, José Antonio. 2015. “Social Objectivity under Scrutiny in
the Pasteur–Pouchet Debate.” Journal for General Philosophy of Science
46: 301–318.
McCright, Aaron M. et al. 2013. “The Influence of Political Ideology on
Trust in Science.” Environmental Research Letters 8: 1–9.
McMullin, Ernan. 1987. “Scientific Controversy and its Termination.”
Pp. 49–91 in Scientific Controversies. Case Studies in the Resolution and Closure
of Disputes in Science and Technology. Edited by H. T. Engelhardt, Jr. and
A. C. Caplan. Cambridge: Cambridge University Press.
Monyer, Hannah et al. 2004. “Das Manifest. Elf führende Neurowissenschaftler
über Gegenwart und Zukunft der Hirnforschung.” Gehirn und Geist 06/2004:
30–7.
Oreskes, Naomi, and Erik M. Conway. 2010. Merchants of Doubt. New York:
Bloomsbury.
Pew Research Center. 2015. Cell Phones in Africa: Communication Lifeline.
http://www.pewglobal.org/files/2015/04/Pew-Research-Center-Africa-Cell-
Phone-Report-FINAL-April-15-2015.pdf, accessed 14 February, 2017.
Plasma Science Committee. 1995. Plasma Science. From Fundamental Research
to Technological Applications. Washington D.C.: National Academy Press.
Polanyi, Michael. [1962] 2000. “The Republic of Science: Its Political and
Economic Theory.” Reprinted in Minerva 38: 1–32.
Popper, Karl R. 1966. The Open Society and Its Enemies. Vol 2. The High Tide
of Prophecy, 5th revised edn. London: Routledge.
Proctor, Robert N. 2011. Golden Holocaust. Origins of the Cigarette Catastrophe
and the Case for Abolition. Berkeley: University of California Press.
Scientific American. 2010. “In Science we Trust.” Scientific American 303:
56–9.
Sismondo, Sergio. 2008a. “Pharmaceutical Company Funding and its
Consequences: A Qualitative Systematic Review.” Contemporary Clinical
Trials 29: 109–13.
Sismondo, Sergio. 2008b. “How Pharmaceutical Industry Funding Affects
Trial Outcomes: Causal Structures and Responses.” Social Science & Medicine
66: 1909–914.
Stilgoe, Jack, Richard Owen, and Phil Macnaghten. 2013. “Developing a
Framework for Responsible Innovation.” Research Policy 42: 1568–1580.
Stokes, Donald E. 1997. Pasteur’s Quadrant. Basic Science and Technological
Innovation. Washington D.C.: Brookings Institution Press.
Van der Sluijs, Jeroen P. 2012. “Uncertainty and Dissent in Climate Risk
Assessment: A Post-Normal Perspective.” Nature and Culture 7 (2):
174–195.
Von Schomberg, René. 2013. “A Vision of Responsible Innovation.” Pp. 51–74
in Responsible Innovation: Managing the Responsible Emergence of Science
and Innovation in Society. Edited by R. Owen, M. Heintz, and J. Bessant.
London: John Wiley.
Weingart, Peter, Martin Carrier, and Wolfgang Krohn. 2007. Nachrichten
aus der Wissenschaftsgesellschaft. Analysen zur Veränderung von Wissenschaft.
Weilerswist: Velbrück Wissenschaft.
Wilholt, Torsten. 2006. “Design-Rules: Industrial Research and Epistemic
Merit.” Philosophy of Science 73: 66–89.
Wilholt, Torsten. 2012. Die Freiheit der Forschung. Begründungen und
Begrenzungen. Berlin: Suhrkamp.
Wilholt, Torsten. 2013. “Epistemic Trust in Science.” The British Journal
for the Philosophy of Science 64: 233–253.
Williams, Laurence, Phil Macnaghten, Richard Davies, and Sarah Curtis.
2017. “Framing ‘Fracking’: Exploring Public Perceptions of Hydraulic
Fracturing in the United Kingdom.” Public Understanding of Science 26:
89–104.
Wise, Norton M. 2011. “Thoughts on Politicization of Science through
Commercialization.” Pp. 283–299 in Carrier and Nordmann (2011).
Wynne, Brian. 2003. “Seasick on the Third Wave? Subverting the Hegemony
of Propositionalism: Response to Collins & Evans (2002).” Social Studies of
Science 33: 401–417.
Ziman, John. 2002. “The Continuing Need for Disinterested Research.”
Science and Engineering Ethics 8: 397–99.
Ziman, John. 2003. “Non-Instrumental Roles of Science.” Science and
Engineering Ethics 9: 17–27.