On the Very
Idea of a Science Forming Faculty
Summary
It has been
speculated, by Chomsky and others, that our capacity for scientific
understanding is not only enabled but also limited by a biologically endowed science forming faculty (SFF). I look at
two sorts of consideration for the SFF thesis and find both wanting. Firstly,
it has been claimed that a problem-mystery distinction militates for the SFF
thesis. I suggest that the distinction can be coherently drawn for cases, but
that the purported ‘evidence’ for even a fairly loose general demarcation of
problems and mysteries is not best
explained by a SFF. Secondly, I consider in detail a range of cognitive
considerations for the SFF thesis and contend that it is at best moot whether science can be so construed as to make
it feasible that it is a faculty competence.
I feel most
deeply that the whole subject
is too profound for the human intellect.
A dog might as well speculate on the mind of
Newton.
From a letter of Charles Darwin to Asa Gray
1:
Introduction
Noam
Chomsky (e.g., 1975b, 1980, 1988, 2000a) conjectures that our capacity for
science is due to a biologically endowed science
forming faculty (SFF): what lies within the principles of the faculty are
problems, what lies beyond them are mysteries. The brief of the sequel is to
question the very idea of a SFF. It must be said that Chomsky’s conjecture is
speculative. Even so, he takes the idea very seriously, and I shall pay the due
respect by doing likewise. Moreover, the notion is employed, by McGinn (1991,
1993) in particular, to argue for the substantive claim that consciousness is
mysterious (McGinn, in fact, appears to think that more or less everything
philosophers think about is mysterious.) If my contentions are anywhere near
correct, while consciousness (or free-will, or personal identity, or meaning,
etc.) might well be mysterious, it will not be because there is a human SFF
that fails to accommodate it.
2: Problems
and Mysteries: A Preliminary Characterisation
Chomsky’s
notion of a SFF is tied to that of a problem-mystery distinction. I shall
describe a strong distinction; there
is a weak one, relativised to the here and now, but that is one no-one should
want to deny.
In a strong sense ‘problems’ covers questions we could answer, events we
could explain or otherwise understand, properties whose constitution we could
discern, and so on. It will be noted that problem
has a modal aspect: problems are not necessarily things we shall solve, they are things we could
solve. For example, Fermat’s last theorem
remained a problem for over 300 years until Andrew Wiles’s positive proof. Now
consider the closest possible world W just
like the actual world save that Wiles (or a counterpart
thereof) gives up on his proof with no-one continuing his research, and that W-humanity meets its end without ever knowing whether or
not ‘x^n + y^n = z^n’ has integral solutions for
n > 2. Is Fermat’s last theorem only a problem in W, i.e., could W-humanity find a proof? Yes; for, on
our assumption that all else is equal, the mathematics is available in W for Wiles’s proof, even though no-one
gets around to employing it. The point is this: problems are demarcated
relative to our cognitive capacity or reach, where such a capacity is
abstracted from the contingency of what we happen to do or are interested in;
it is, though, constrained by the myriad of contingent factors that have
contributed to the development of our brains and will, presumably, continue to
do so. This last point bears emphasis: it is not that some domains are so
simple, while others are so damned complex; the issue is to do with what our
minds are constitutively able to represent and explain, independently of
whether a given domain is simple or complex in an objective sense - whatever such
a sense might be.
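Since the equation drives the example, a minimal brute-force sketch may help fix its content (the function name and search bound are my own and purely illustrative; exhaustive search over a finite range can illustrate, but of course never prove, a claim about all integers):

```python
def solutions(n: int, bound: int):
    """Search exhaustively for positive integers x <= y < z <= bound
    satisfying x**n + y**n == z**n."""
    return [(x, y, z)
            for x in range(1, bound + 1)
            for y in range(x, bound + 1)
            for z in range(y + 1, bound + 1)
            if x**n + y**n == z**n]

print(solutions(2, 20))  # Pythagorean triples exist, e.g. (3, 4, 5)
print(solutions(3, 20))  # empty: no solutions for n = 3, as Wiles's proof guarantees
```

The contrast between the two calls displays exactly what Fermat's last theorem asserts: solutions abound for n = 2 and vanish for every n > 2.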
Mysteries
also have a modal aspect: they are insoluble, inexplicable in principle. Unlike
problems, which may contingently evade resolution, mysteries lie beyond our
understanding. Before the discovery of DNA there was no known mechanism to
instantiate the heritable traits upon which selection works. Even so, heritable
traits were not mysterious before
1953, as the discovery and subsequent theory demonstrated; they were merely
problematic. Dark matter might be
mysterious; then again, it might smoothly be accommodated within current
particle physics. It might be that no
reformation of set theory we could formulate will tell us whether or not 2^ℵ0 = ℵ1, in which
case the continuum hypothesis would
constitute a mystery (here I forgo any intuitionistic scruples).
Alternatively, the negation, say, of the hypothesis might be unsatisfiable in
any model for some theory which
supersedes ZF(+C). At the moment, as with dark matter, there is no way of
knowing. Such is the way with mysteries: at least at the present state of play,
we cannot tell if we are dealing with a deep problem or a mystery. Further,
mysteries outstrip problems in the sense that, while any problem is
formulatable qua soluble, mysteries
are not so minimally accessible: the notion carries no implication that we can
so much as frame the appropriate questions. After all, if the answers are constitutively beyond our ken, we should not
expect, in every case, to be able to pose the right questions in the first
place.
My aim will not be to refute this strong
distinction; I have no argument that there are only problems, no mysteries.
Indeed, I think it far from implausible that there are domains, perhaps as yet
not thought of, which are such that our brains are ill-equipped to deal with
them. Yet this thought does not provide
us with the kind of distinction to which Chomsky and others appeal. In
particular, while the thought allows us to hypothesise coherently cases of
mystery, it does not allow us a demarcation of them. Chomsky (2000a, p. 83),
for sure, does not think that the distinction can be drawn sharply, but nor is
the distinction meant to be so loose that it does not reflect a real feature of
our cognition. That is, Chomsky’s point is not merely that we are epistemically bounded; it is, rather,
that such a boundary is endogenously determined to some specifiable degree.
This thought is crucial. Pinker (1997, pp. 558-65), who otherwise commends
epistemic boundedness, thinks that the thesis is “almost perversely unprovable”
(op. cit., p. 562). I would go further:
if the mysteriousness of mysteries is itself mysterious, then we shall never be
in a position rationally to conclude that such
and such is a mystery. By drawing an endogenous boundary, Chomsky may be
understood as attempting to remove mystery from mysteries and so explain our epistemic boundedness. The
determining endogenous factor is a human science
forming faculty (SFF).
3: The
Limits of Thought
Chomsky
conjectures that the broad shape of human scientific accomplishment is a
function of an innate SFF. As an initial characterisation, we can think of our
putative SFF as analogous to the language faculty. Here is Chomsky (1975b, pp.
155-6; cf. Chomsky, 1968/72, pp. 90-3; 1971, p.49; 1988, pp. 156-9), making use
of Peirce[1]:
The
fact that “admissible hypotheses” are available to [the SFF] accounts for its
ability to construct rich and complex explanatory theories. But the same
properties of mind that provide admissible hypotheses may well exclude other
successful theories as unintelligible to humans… though these theories might be
accessible to a differently organised intelligence.
Thus, where
the language faculty realises a universal grammar (UG) which allows for the
generation of the grammars humans may acquire, so the SFF realises a set of
concepts and principles that allow for the formation of all the possible
theories humans may understand. UG empirically defines the notion of a possible
human language (a grammar or I-language),
but it does not follow that UG determines every possible ‘language’. Aliens, if
such there are, will, we may presume, possess a quite distinct UG (or something
else entirely) that determines languages inaccessible to us. Our UG is not a
general purpose device to construct languages, it is severely constrained by
principles which allow only a finite amount of variation: a human language is
one which can in principle be deduced from UG principles given the setting of a
finite number of parametric values. Thus, some ‘languages’ are mysteries for
us, i.e., those not determined by our
UG.[2]
The same thought applies to SFF. SFF is not a general purpose device which can
construct a true theory for any domain. The set of theories it determines is
drawn from a fixed conceptual resource with a finite number of principles
defined over it. Now consider the set of true theories of the universe and its
furniture and the set of theories determined by SFF. The intersection of the
two sets is the set of (true) theories accessible to humanity; what falls outside
the intersection is inherently mysterious. Such intersection as there is, is a
“chance product of human nature” (Chomsky, 2000a, p. 83). There are a number of
quite slippery issues concerning how closely this analogy should be understood,
to which I shall return at length in §4; for the moment the sketch above suffices.
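The two-set picture just sketched can be rendered schematically; the theory labels below are placeholders of my own, not claims about any actual theories:

```python
# Schematic rendering of the intersection model (placeholder labels only).
true_theories = {"T1", "T2", "T3", "T4"}   # true theories of the universe
sff_theories = {"T2", "T3", "F1", "F2"}    # theories our SFF can generate
                                           # (F1, F2: false but thinkable)

accessible = true_theories & sff_theories  # true theories within our reach: problems
mysteries = true_theories - sff_theories   # truths our SFF cannot frame: mysteries

print(sorted(accessible))  # the "chance" intersection
print(sorted(mysteries))
```

On this model, how large the intersection is depends on the fit between two independently fixed sets, which is just Chomsky's point about its being a chance product.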
In this section I shall look at some
considerations that are understood to militate for a problem-mystery
distinction independently of the notion of a SFF, but which may be taken to
buttress the SFF thesis in that a SFF would provide a natural explanation of
them. That is, the SFF thesis is supported to the extent that it provides the
best explanation of an independently coherent problem-mystery distinction. In
the following sections I shall look at considerations specific to the SFF
thesis.
Chomsky (e.g., 1993, 2000a) is fond
of reminding us that we are organisms, put together by evolution (not
necessarily natural selection); we are not angels. We are not designed, by God
or anything else, to know all there is to know. Independent of the SFF
hypothesis, then, to claim jointly that the truth about reality is
unconstrained by our cogitations and that every truth falls within our
understanding is to attribute to ourselves strange powers unprecedented in the
biota. It would seem, therefore, that once even a modest realism is accepted,
mystery follows, lest we think ourselves angelic (cf. Fodor, 1983, §V).[3]
If we look at the rest of the animal
kingdom, we find cognitive closure. A
favoured example is that of the maze-solving abilities of rats (e.g., Chomsky,
1991b, p. 41; 1993, p. 45). Over a large range of mazes (e.g., radial ones)
rats perform at levels equal to or greater than humans, but some mazes prove
intractable. For instance, a prime maze
is one whose solution depends on the subject making a certain decision (left or
right) at each prime choice. Rats’ poor performance with such mazes is
naturally explained by their lack of number theory. Of course, it does not
follow that the average person would perform much better than a rat, yet the
average person has the concepts which would
enable her easily to solve the maze. Even if one lacked the explicit notion of
a prime, one could still work out the maze by ‘discovering’ the concept. This a
rat cannot do.
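The structure of a prime maze can be made concrete with a small sketch, assuming, arbitrarily, that the correct turn is left at prime-numbered junctions and right elsewhere (the function names and the left/right encoding are my own illustrative choices):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def prime_maze_path(junctions: int) -> list:
    """Solve a 'prime maze': the correct move at junction n is
    'left' iff n is prime, 'right' otherwise."""
    return ['left' if is_prime(n) else 'right'
            for n in range(1, junctions + 1)]

print(prime_maze_path(10))
```

The point of the analogy survives the sketch: a solver without the concept of primality has no way to project the rule beyond the junctions it has already memorised.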
The point of the analogy is that just
as a rat will scurry around the prime maze, fated by its cognitive
short-comings never to find the solution, so humans scurry around with their
problems, fated in some instances to remain in ignorance, constitutively
lacking the concepts which would provide the correct solution. We do not, for
sure, appear to ourselves to be rat-like, but we well might from the perspective of “a differently
organised intelligence”. To think otherwise would effectively be to hold that
humans have no cognitive closure. This appears to be a supernatural property, but to accept
cognitive closure, it seems, is to accept a form of the SFF thesis: only a
certain range of concepts are “admissible” to us; we lack the capacity to frame
other concepts necessary for the understanding of certain domains.
The rat analogy is certainly striking; it
has the desired humbling effect. Neither the analogy nor the surrounding
argument, however, oblige us to seek a SFF explanation of our apparent cognitive closure.
Notwithstanding the potentially
significant differences between rats and ourselves (e.g., language, culture,
technology, etc.), the analogy certainly lends force to the thought that, for
any species, there will be insoluble ‘problems’. So much, however, does not
lead us to the SFF thesis as the natural explanation, and nor, therefore, to a
definite sense of mystery for humans. Of course, we are just another species,
but such modesty obliges us to concede no more than our lack of omniscience.
The analogy may well convince us that there will be some problems or other that
we are just not fit to solve, but this gives no support to an endogenous
demarcation between problems and mysteries. Simply put: the rat analogy
militates for our epistemic boundedness,
but it does not tell in favour of the SFF thesis. Chomsky and McGinn appear to
conflate the two ideas, but they are quite distinct. The SFF thesis would
certainly count as an explanation of our cognitive limits, but we can be
cognitively limited without a SFF; more to the point, a SFF appears to be the
‘best explanation’ simply because it is read into the supposedly independent explanandum.
The source of this illicit conflation, I
think, is the contrast between cognitive closure and supernaturalism. Chomsky
assumes that if there is no cognitive closure that allows for a demarcation (to
some degree of precision) of problems and mysteries, an identification of
mysteries as mysteries, then humanity
is potentially omniscient (especially see Chomsky, 1988, pp. 158-9). By modus tollens, he arrives at the
desired result. This inference is never questioned, perhaps because
there are those, after Peirce, who have thought that evolution has equipped us with
a sure way to the truth (cf. Dennett, 1995, ch. 13). Also, Chomsky does at
times appear to identify the two notions: the SFF thesis is a mere tag which
dignifies our ignorance of the biological basis of our epistemic boundedness.
But if this is all that is intended, then it is unduly presumptive to speak of
a faculty, still more so, a faculty
for science. However Chomsky intends
to gloss ‘SFF’, which will be investigated below, to reject epistemic immodesty
is not to commend an endogenous demarcation. Chomsky’s inference harbours an
exhaustive disjunction - endogenously determined closure or omniscience - we
should not accept. Lack of such closure does not entail omniscience or anything
remotely supernatural.
For the purpose of questioning the
entailment, let us assume that we do not have fixed conceptual resources. There
is no SFF; instead, it is genuinely indeterminate what we may understand. This
may be so if a completed neuroscience and cognitive psychology would not
provide us with a list, as it were, of domains we may understand; rather, we
would find that, cognitively speaking, the brain contains some relatively autonomous
components that follow an ontogenetic pattern by default, while others are much
more inter-modal and differentiated. Our completed theories do not tell us if
we can know what dark matter really is or whether the Continuum Hypothesis is
true or false. Indeed, we cannot even tell what range of concepts, bound or
unbound, the brain can support: each brain, it turns out, is different in
significant respects. For my present purposes I need not suggest that this
scenario is true. My argument only requires that it is consistent with what we
presently know about the brain.[4]
The scenario certainly deserves such modest credit, especially given that our
concern is with fine grained notions of individual concepts and hypotheses.
Now the above view is not one of
cognitive closure in the relevant sense; equally, it patently does not imply
omniscience or any other supernatural property. One’s possessing a SFF is not a
necessary condition for one not being a god. There may simply be no answer to what we may or may not
understand, at least none from a complete science of the mind/brain. Human
nature may leave undetermined the limit of our cognitive reach. A corollary is
that if we still want to ask, ‘In principle, what can humans understand?’, then
we should acknowledge that the question is no longer to be construed as
straightforwardly empirical. Rather, we are asking something like, ‘If humans
with their current cognitive make-up were to carry on indefinitely, what
domains would resist explanation?’ With the secure foundation of fixed
conceptual resources removed, it is very difficult to begin to assess this
question, for any answer will be sensitive to a myriad of factors: not only our
cognitive structure, but also many exogenous factors: the kind of traditions
that develop, the kind of stuff there in fact is in the universe, the
technology we develop, the kind of assistance, if any, we receive from alien
life-forms, and maybe just sheer luck. This is not to say that there are no
mysteries, only problems; the appropriate conclusion is that the distinction
between the two cannot be empirically grounded; rather than being an issue in
cognitive science, it is a piece of futurology, interesting enough to speculate
on, but not something to be greatly exercised about. The dialectical moral is:
a rejection of strong closure is not conceptually or empirically concomitant
with our deification; far from the problem-mystery distinction being an
independent notion that the SFF thesis naturally explains, it is the thesis that
motivates the supposed cognitive division. This conclusion should not be
surprising: one can hardly expect to arrive at specific theses about human
cognition from broad inchoate observations.
I shall shortly look at some arguments
which seek to support the SFF thesis directly; before doing so, I shall look at
another consideration - failure - of
a more general nature.
Unlike McGinn (1991) on consciousness,
Chomsky does not affect to know whether this or that domain is mysterious, even
so he appeals to potential mysteries such as linguistic creativity.[5]
No-one is yet in a position to say that consciousness or linguistic creativity
are definitely not mysteries; that would require coherent theories of the
phenomena, something none of us possess. I do not think, though, that our
historical failure to explain these phenomena or any others intimates that
there is a SFF that lacks the appropriate conceptual resources. I shall argue
for this negative thesis by suggesting that a history of failure may be
properly explained in more modest terms.
Chomsky, of course, does not take the
failure of previous accounts of creativity to demonstrate mystery; ditto for McGinn (1991) and Nagel (1986, 1995)
vis-à-vis consciousness.
Nevertheless, an inductive comfort is felt in past failings; they are taken to
be “suggestive” or indicative that the time has come to give up.[6]
We should, however, not be moved to pass from failure to mystery.
Patently, no amount of failure allows
us to infer mystery. But in what way, then, is failure suggestive? Charitably,
the history of science is one of equal proportion of failure and success; and
where there is success, failure always threatens as research programs wax and
wane and data accumulate. If we are to be moved by simple failure, we might as
well declare the universe and all that’s in it a complete mystery. After all,
science is not in the proof business. For failure to intimate mystery, the lack
of success must be peculiar.
A mark of potential mystery to which
some have appealed is that we, as it were, ‘stare blankly’ at a problem:
nothing is forthcoming. This characterisation, however, is hardly descriptive;
it is a judgement on the efforts made
or, worse, an assessment of the authors’ own efforts. No problem induces blank stares, whether literally or
metaphorically, in everyone. Consciousness certainly does not, as the groaning
bookshelves and increasing number of ‘centres’ and conferences testify. Of
course, one is free to think that such output does not amount to much more than
a blank stare, but one would thereby be offering a slanted evaluation, not a neutral criterion of mystery. Alternatively,
failure might take the form of an absence of science: a domain is identified,
but neither a methodology nor predictive/explanatory theories are produced.
Failure on such a scale would certainly intimate that something is grievously
amiss, but, again, we are far from a suggestion of mystery.
Prior to Darwin, it is fair to say
that while there were theories of evolution (witness Lamarck and Geoffroy),
they did not provide sound mechanical explanations of the origins and
inheritance of traits which lead to species diversity and similarity. Indeed,
the very idea of species evolution was tendentious; perhaps the then dominant
view in biology was the neoclassical one that dismissed the very idea that one
species may ‘change’ into another.[7]
A reasonable person might well have declared, and many did, that life was a mystery, the province of
divine ordinance. With the re-discovery of genes and the discovery of DNA,
Darwin’s theory is now the background for modern biology. Such has been this
success that the very idea of an élan
vital is now as egregious as that of a res
cogitans. This transformation from not even a recognition of evolution to
advances favourably comparable to those of post-Galileo physics took just over
a hundred years. Thus, there is precedent for ‘blank stares’ to metamorphose
quickly into paradigmatic science. It is always too soon, it seems, to gainsay
intellectual advance.
More tendentiously, the human sciences,
in contrast to the physical sciences, exhibit a failure to progress and in many
cases predictive or explanatory hypotheses are not even sought. An assessment
of the human sciences by the present criterion might lead one to think mystery
endemic in the human domain. Would this be a reasonable conclusion?
Well, is Homo more complex, mysterious even, than DNA, quantum mechanics,
analysis, relativity theory, etc.? We are encouraged to think so merely on the
basis of the lack of scientific success. We have, however, no clear, neutral
sense of what conceptual complexity
amounts to, still less a domain-independent metric of it. The relevant
variables for any interesting social problem might be too astronomical to
control for, but this would not constitute a mystery in the present sense.
Friendly aliens might lend us their supercomputers. Consider: a four-colour-like theorem might be
unprovable in the absence of computer assistance, but it would not therefore be
mysterious. Perhaps the problem with Homo
is more mundane.
Chomsky (1979, p. 57) himself has
likened the methodology of sociology to butterfly collection: lots of
interesting data, if one likes that kind of thing, but nothing approaching
explanation. Chomsky (1968/72, pp. 24-6) also suggests that a deep problem with
the scientific investigation of what we are most familiar with is that we think
we already know the facts, and so waste our efforts trying to systematise and
explain what are in reality chimeras. Chomsky’s assault on behaviourism is a
paradigm of the required process of defamiliarisation. Perhaps we are awaiting
a similar development vis-à-vis
consciousness. In short, a more modest judgement is that there is nothing
inherently mysterious about Homo, the
problem is that we continuously adopt the wrong approach. It is an interesting
historical question why this should be so, but there need be no portent of
mystery. Indeed, what is probably most inimical to the progress of the human
sciences is the unfortunate, though perhaps unavoidable, usurpation of method
by political agendas, both left and right. Again, this is something Chomsky has
taught us.
I should not suggest that the retrieval
of a criterion of mystery from a history
of failure is impossible, but I fail to see how it could reasonably be achieved in
the face of historical precedent and more modest explanations. Let us, however,
move to Chomsky’s particular considerations for the existence of a human SFF
rather than mysteries in general.
4:
Faculties and Science
Let us
assume that cognition is not served by a general purpose device; instead, the
mind is divided into a number of dedicated devices that support a range of
competencies and capacities. Think of the nomenclature ‘faculty’ as
(intentionally) picking out such devices in terms of the domain-specific
principles and concepts particular to them. This somewhat fuzzy
characterisation is for a reason.
Fodor (1983, 2000) reads Chomsky’s
notion of a faculty epistemologically rather than functionally, i.e., a
Chomskyan faculty is a body of information a subject knows, not an
architectural component. I think Fodor is right in as much as Chomsky’s
theories are not processing stories, as many assume. One could, therefore,
accept that θ-theory,
binding theory, et al. are innate
while holding that the mind is a general purpose device. Fodor, however, is not
quite right. Chomsky uses ‘faculty’ (and ‘organ’ and ‘module’) ambiguously
(somewhat like his use of ‘grammar’): it sometimes denotes a body of
information, that which we cognize,
at other times it denotes the cognitive mechanism that supports that
information and interfaces with performance systems (information cannot
interface with anything.) This is not
sloppiness: for Chomsky, there is no question about the ‘psychological reality’
of a grammar apart from its explanatory worth; if it proves so worthy, then the
grammar is an accurate, albeit abstract, intentional description of a yet
unknown physical mechanism. Chomsky is assuming (contra the generalist) that there are dedicated devices (brain
areas) without speculating upon their operational specification; and so the
devices do not individuate the faculties (see Chomsky, 1988, pp. 7-8).
Consequently, a ‘Chomskyan’ faculty is a looser notion (qua intentional) than Fodor’s (1983) modules (effectively, input systems). In particular, faculties
need not be automatic, inaccessible, or encapsulated. They do, though, follow a
biologically fixed maturation process: given a uniform initial state and experiential input, they determine a final state that supports a mature
competence. This final state may also be uniform, as it appears to be with,
say, the theory of mind faculty; or
it might be non-uniform, as it clearly is with the language faculty, i.e.,
different input determines different I-languages. As indicated, the bare idea of modules
or faculties admits great variation (more of which below); let us, though, stay
with the sketch at hand.
Now if we conjecture that the human
mind has a faculty architecture, some
diagnostics have to be in play so that we may identify the faculty based
competencies, for not all competencies are so supported. For example, line
dancing, car repair, origami, chicken sexing, etc. are all competencies, but we
are not moved to posit, say, a line dancing faculty. But we do posit faculties
for language, theory of mind et al.
This difference provides an angle on the diagnostics appropriate for a faculty
competence.
Faculties are fixed as part of our
biological endowment; the principles therein specified are thus innate,
unlearnt. This gives us some ready diagnostics. First, a candidate faculty
based competence must be uniform across the species within intelligible bounds
of difference; it cannot be a culturally specific capacity. In short, the
competence must be a trait of the species. Second, the competence must follow a
fairly strict ontogenetic course; for, since the blueprint of the development of
the competence is genetically coded for, the competence should be invariant
across a wide variety of experiences. Explicit teaching, for example, should
not make a significant difference to the speed of the development or the final
competence arrived at. Third, the competence and its development should, to
some degree, be invariant over various pathologies, injuries and differences in
intelligence. A faculty F is a device
dedicated to a specific domain; disturbance to another faculty, therefore,
should not necessarily lead to disturbance to F. Pace Fodor (1983,
2000) and his modules, it is perfectly coherent to view some faculties as
enjoying proprietary interfaces with one another, while others may work in
isolation (Collins, 2000).[8]
Fourth, the competence should reach normal maturity in the face of a
essentially just another way of saying that the competence acquired is underdetermined
by the data available to the child. After all, if a competence were determined
by some learning regime or a certain set of stimuli (no matter how complex), it
would be redundant to claim that it is supported by a faculty, for the
competence could apparently be acquired independent of any prior principles or
concepts specific to the competence’s domain (here I exclude general principles of, say, association,
if such there be).
It should be transparent that the
diagnostics delineated do not fit line dancing,
etc. The diagnostics do fit linguistic competence and face recognition,
and a good although still highly controversial
case can be made for them fitting theory
of mind.[9]
Let us hypothesise, then, that these diagnostics are indeed criterial of a
faculty competence. Do the diagnostics identify our scientific endeavours as
faculty based? This is a difficult question because Chomsky is, I think, somewhat unclear on how the notion of
a SFF is to be understood.
The problem is this: we can give
‘science’ a strict construal under
which it primarily covers our paradigms of successful scientific theories.
Under this reading, a SFF is a kind of theory selector, determining those
domains in which we can achieve some success. Alternatively, we may construe
science in a liberal way as covering
any thinking (practice) that is guided by certain meta-principles
(supra-empirical virtues). So read, a SFF is simply the seat, as it were, of
the set of principles which enter into our construction and evaluation of
theories. Now while a case can be made, I think, for the view that some such
principles are innate and uniform throughout the species, it also seems obvious
that such principles are domain general, not specific, and that they support
neither the problem-mystery distinction nor the associated model favoured by
Chomsky whereby our SFF is supposed to determine a subset of the set of true
theories. If, faced with such difficulties, we revert to the first, strict
construal of science, so that the putative SFF
meets these demands, then we lose the positive readings on the
diagnostics. Either way, therefore, the SFF thesis appears to be in some
disrepair.
Let us first look at the strict
construal, which is, I think, highly implausible; it does, though, have its
interest. Science, we might say, is paradigmatically represented by the
theories found in textbooks of, say, Newtonian mechanics, statistical
thermodynamics, general relativity, Bohr’s atom theory, etc., where
generalisations are sought that are explanatory and predictive of future cases
on the basis of postulated unobservables that unify otherwise disparate
phenomena. Under this construal, a SFF may be understood as a device that gives
us access to those concepts and principles required for fecund generalisations
over the domains in question (e.g., atoms or the structure of space) while
other domains remain closed to us, for our SFF simply lacks the appropriate
conceptual resources. When Chomsky and McGinn claim that our SFF determines but
a subset of the set of possible scientific theories, and that we may determine
what domains are mysterious to us, it
is difficult not to have such a construal in mind. My initial analogy between
the SFF and the language faculty (and the attendant quotation from Chomsky)
followed such a line, for it is one that makes perfect sense of the problem-mystery
distinction. This conception, however, appears to tell us that the kind of
science typical of the West for the past 400 or so years is as cognitively determined
as language is. If this is the
conception, then it is surely mistaken.
Science, as exemplified by, say, general relativity theory, is a fairly recent product of Western culture; there is no evidence whatsoever for its being a species trait. Nor, of course, does it have
an ontogeny: normal human maturation does not produce scientists; it requires a
great amount of explicit instruction for one to grasp the theories
characteristic of the last few centuries. Equally, being a scientist, so to speak,
is not invariant under differences or changes of intelligence or cognitive
capacity: no-one expects scientific competence to be selectively spared or impaired, and there is, of course, no
evidence for any such pathological profile. The reason for this is that
grasping and working with a theory appears to require a battery of competencies
and capacities: distinct kinds of reasoning (e.g., deductive and analogical),
good long and short term memory, mathematical and linguistic knowledge,
experimental design, etc. So much I take to be indisputable. Finally, the poverty of stimulus diagnostic does not apply either (I shall
separately look at this diagnostic in §5 with reference to the more plausible
liberal construal.) We learn
scientific theories, we do not acquire
them from partial and degraded data. Indeed, to acquire a theory we typically
need to be inundated with stimulus (lectures, text books, conversations,
experiments, etc.), and even then we consistently make all kinds of errors.
Perhaps, then, Chomsky has something different in mind.
Consider: “The basic elements of
rational inquiry may have some of the properties of such cognitive systems as
the language faculty, though the ways they are employed are surely quite
different: scientific knowledge does not grow in the mind of someone placed in
an environment” (Chomsky, 1980, p. 140). Quite! Notwithstanding the apparent
support the strict construal receives from Chomsky, here he seems to be
advancing what I earlier called the liberal
construal under which science is simply a kind of thinking marked by our
predisposition to judge according to certain principles. By this reading, our
SFF, in some sense, leads us one way rather than another through the space of
theories, but it does not code for any such route; we cannot, as it were, read off the theory of natural selection, say, from the neonatal brain. Where
we are at a given period will be a function of a background of past theories,
especially the successful ones, but this history and future progress is shaped
or canalised by the kind of answers our
SFF permits according to its principles. This is the model Chomsky
appears to present in his 1988, chp. 5.
Chomsky (2000a, pp. 82-3; also 1980, p. 140) offers empirical test, elegance, and criteria of intelligibility as potential candidates for such
principles; we may add simplicity, exhibition of causal structure (‘Mill’s
methods’, perhaps) and other such meta-empirical notions.
As I indicated above, while this latter
construal does not suffer from the same impairments as its restrictive
counterpart, it does have its own problems. The first thing to note about the
proposed principles is that they are not
domain specific: simplicity, elegance, testability, etc. are applicable to any
field. Thus, if a SFF has no domain
specificity, then it really makes no difference whether one says that the
history of science has been shaped by our SFF or, vacuously, by our thought.
Put only slightly otherwise, a SFF would simply be for thinking as such. Indeed, Fodor (1983, 2000)
takes cognition which is governed by such global
principles to be precisely that kind of thinking which is not domain specific, in contrast to the cognition that is served by
modules with their proprietary databases. Of course, the operative notion of a
domain is somewhat vague; still, one reason why language and face recognition,
say, appear to be faculty competencies is that their domains are so
idiosyncratic: our proficiency in the domains calls for specific information about verb structure, vertical symmetry of
eyes and mouth, and so on. In contrast, the
principles under consideration appear to have no idiosyncratic domain. Everything from finding one’s way home or finding a lost sock to arranging a wedding or building a kennel for the dog requires principles of reason and testing, even if, perhaps, only in the imagination. Any rational belief fixation requires some constraint; otherwise, we would be afflicted with the frame problem,
which, in point of fact, we never are. Consider, specifically, the notion of
causal structure, which appears to have a strong innate basis (Sperber et al., 1995): we like theories to give
us causal mechanisms; such is why, inter
alia, Einstein gives us a better theory of gravitation than Newton and why
no-one but cranks takes morphic resonance to be a serious hypothesis. Equally, however,
we impose causal structure on everything we come across: if we cannot discern a
causal pattern, we tend to retire in
bemusement, though not always; sometimes we forgo causal structure (see below).
It might
be that we can delineate science-specific
notions of simplicity, elegance, etc., but we cannot assume that there are such
principles without begging the question at issue, for such an assumption
amounts to a presumptive specification of a SFF. Moreover, we have no
independent good reason to think that there are any science-specific abductive
principles. As it stands, therefore, ‘SFF’ is a misnomer; for why speak of a faculty for science when precious little is excluded? The whole point of
faculty theorising is to divide and conquer, to isolate specific competencies
and attempt to see what kinds of peculiar information and principles best
explain the observed proficiency. There is no theoretical gain in hypothesising
a faculty which appears to serve (more or less) the whole of thought.
How, then, are we to understand the
faculty-ness of the SFF? Perhaps we are employing ‘faculty’ too precisely;
Chomsky (1975b, pp.155-6; 1988, pp. 156-9) does explicitly draw the analogy
between language and science, but it is clearly not meant to be a tight one.
The issue here is the extent to which a SFF can be domain neutral (unlike
language) without becoming indistinguishable from a vacuous notion of general
intelligence or rationality. Let us look at some potentially pertinent
proposals.
On Fodor’s (1983, 2000) view, the
mind/brain broadly divides into some components that are domain specific (for
Fodor, these are modules that serve
input cognition: vision, olfaction, parsing, etc.) and others that are domain
general (or perhaps just one), these serving central cognition, i.e., rational
belief fixation. Although it is rarely, if ever, noted, the theory theory view (e.g., Gopnik and
Wellman, 1994, and Gopnik and Meltzoff, 1997) is close to Fodor’s position to
the extent that both resist the view that central, rational thought is modular,
i.e., rational belief fixation is not served by an ensemble of dedicated,
domain specific, encapsulated
components; still less do Fodor and the theory theorists commend a
module dedicated to science.[10]
Instead, both posit innate domain general principles of theory formation and
confirmation which, we may presume, have shaped the history of science. So, is
this not a tale of a SFF?[11]
Fodor himself is non-committal as to the organisation of central cognition:
organisation there surely is, but without new concepts, we haven’t a clue how
to account for it (for Fodor, 1983, p. 107, “the more global [i.e., less
modular] a cognitive process is, the less anybody understands it”.) Even so,
the principles of belief fixation (our putative meta-empirical principles) are, for
Fodor, innate and can be selectively impaired (e.g., it follows from Fodor’s
account that rational belief fixation could be impaired while linguistic
competence remains intact.) Where Fodor demurs on principle, the theory theory
view can be understood as offering a story about belief fixation (see Gopnik
and Meltzoff, 1997, pp. 63-7). Peripheral modules output the data which forms
the evidence for theory construction
on the basis of an initial innate structure, guided by meta-empirical
principles.[12] A
proper assessment of Fodor’s account and the theory theory view is beyond my
present scope; fortunately though, at least for my dialectic, neither approach
is flush with the SFF notion Chomsky appears to favour.
Chomsky clearly does not view faculties
as theories that are developed according to domain general principles; on the
contrary, he eschews the very idea of anything approaching a general intelligence
in favour of common-sense faculties, a mathematics faculty, a musical one, etc.
Chomsky (1988, pp. 47-8) avers: “in any domain…, specific… capacities enter into
the acquisition and use of belief and knowledge”; general mechanisms, “if they
exist” enjoy, at best, a “doubtful” role. Chomsky (e.g., 1980, p. 135)
certainly rejects the application of domain general theorising with respect to
linguistic development, which, for Chomsky, has nothing whatsoever to do with
finding simple and elegant hypotheses or analogical reasoning or any hypothesis
testing at all.
Might a SFF be understood as an
abstraction from, or construction out of, a collection of domain-specific
faculties or theories? A SFF, by such a suggestion, would be constituted from
our endowed folk understanding of biology, physics, etc. This suggestion might
be finessed by appeal to Carey and Spelke (1994). They rightly acknowledge that
explicit science is quite different from our developmental theorising (if such
is what we do) precisely because it is not restricted to, nor directly
constrained by, the domain-specific ‘core’ knowledge which may reasonably be
thought of as innate. But they also picture science proper as enabled (and so
limited?) by analogical mappings across the innate domains, which constitute
‘constructed’ knowledge.[13] Well, what makes Carey and Spelke’s
hypothesis, and perhaps other such bootstrapping models, provisionally sound is
that the notion of analogical mappings allows scientific thinking to come under
the purview of developmental cognitive science without egregiously restricting
science to what the child naturally arrives at. The notion, however, is also so
loose as to be of scant help in constructing a workable notion of a SFF. After
all, anything is analogous to anything else. I cannot imagine what the evidence
would look like for the claim that the admissible theories are restricted by innate analogical
possibilities. For example, we tend to analogise on the basis of our most
complex machines (water pumps, clocks, computers, etc.), but none of this is
innately specified in the relevant respect. Indeed, if we consider the extent
to which explicit theorising in any given domain has departed from our
intuitive outlook, then it seems that the restriction our ‘natural’ view places
on science is approximately zero. The data there are to support the existence
of ‘science’ faculties indicate that they are of a distinctively Aristotelian
stripe, an outlook long since rejected in every area of our understanding of
the natural world (see, e.g., Keil, 1989).[14]
It bears noting that if Chomsky were
commending the SFF thesis as a story about the principles of central cognition,
then he would be quite inconsistent. Chomsky’s language faculty is not a Fodorian peripheral module;
rather, it is a system that is for thinking (if anything); it is not a mere
parser.[15]
Chomsky, then, cannot consistently hold to the claim that central cognition
(the seat of thinking) is governed by
domain general principles. There is, however, a more fundamental problem with
the idea of a SFF answering to either the Fodor or theory theory view.
Chomsky’s discussions of the feasibility
of a SFF are always presented in the context of the problem-mystery
distinction. The two ideas appear to be mutually supporting: the SFF hypothesis
offers a cognitive explanation of the distinction; without it we would have no
independent ground to say of any domain that it is mysterious. Concomitantly,
the supposed intuitive coherency of the distinction (as discussed in §3) gives
credence to the view that science is not the play of a general, unbounded
intelligence. Neither the Fodor nor the theory theory view supports such a position
on the problem-mystery distinction.
Fodor (1983, §V) certainly thinks that
there are mysteries; he reasons that since cognition is innately structured,
there are endogenously determined limits on the kind of hypotheses we can
entertain. Crucially, however, this has little to do with Fodor’s particular
architecture of modules and central systems: any view that gives cognition a
fixed architecture of information control and access is bound to admit the
possibility that the world might throw up a problem which cannot be answered by
our minds.[16]
This reasoning, though, is in line with my happy concession in §3. There I
suggested that human thinking, like that of any other organism, is most
certainly epistemically bounded. But this is just to admit that we are not
potentially omniscient; it is not to concede that the structure of cognition demarcates
between theories, i.e., determines a subset of the set of true theories. Fodor’s account, then,
does not give us the kind of cognitive explanation Chomsky expects. Indeed,
Fodor agrees with Chomsky that if
cognition is thoroughly modular, then we shall have a clear demarcation of
problems and mysteries, but Fodor’s (1983, 2000) key claim is precisely to deny
the antecedent here: that the mind is not massively modular is what makes it mysterious!
The theory theory view also fails to
give Chomsky the support he wants for the problem-mystery distinction. The
point here is straightforward and is independent of the details of the theory
theory approach. The general principles at issue are comparative ones: they
help us decide between theories or
hypotheses, they do not produce the theories for us. It makes very little
sense, for example, to say that we only select simple theories. We favour the
simpler theory, ceteris paribus. The rider accommodates the fact that we happily
neglect the dictates of a given principle, if so doing gives us a greater all
round fit with the other principles: as regards causal structure, the move from
Cartesian to Newtonian mechanics is an example; so is, perhaps, the development
of quantum mechanics last century. Being comparative, these principles cannot
preclude certain hypotheses or theories from consideration; they can only advise us against them once they are on the table, as it were. Consider, for
example, one of Chomsky’s (2000a, p. 85) speculative examples: dark matter. There
are a number of proposals on the market, involving, variously, the size and
shape of the universe, the presence of super-massive (hence, super dense) black
holes, hitherto undetected elementary particles, etc. Now let us hypothesise
that the nature of dark matter is mysterious for us. In what way could this be
a function of our meta-empirical principles? It is certainly not to the point
to say that the true theory is too complex. It might be too complex, but this would have nothing to do with our
favouring simplicity; after all, if the
correct theory is too complex to entertain, then we cannot get around to
judging its simplicity relative to another theory. What if the truth of the
matter did not admit of a causal explanation? Again, this would not necessarily
portend mystery. A variety of indeterminacy hypotheses have been happily
entertained and accepted in the absence of a settled interpretation (read
‘mechanism’); notoriously, the collapse of the wave packet in quantum theory. We like causal mechanisms, but we can and do forgo them; mutatis mutandis, I
submit, for the other principles. The principles help us decide between the
theories on the table, but they will not reject all the theories, still less
provide a licence for us to say that no theory will do. Again, therefore, the
theory theory approach, even if it were otherwise acceptable to Chomsky, does
not provide for a cognitive explanation of the problem-mystery distinction.
The position we have arrived at is that
while the strict construal of science is
patently inadequate to satisfy any of the faculty diagnostics, it clearly does
make sense of the problem-mystery distinction in the way Chomsky sets it up. On
the other hand, while the views we have just been considering do offer the kind
of principles relevant to theory construction
and assessment that may well be innate (if not quite faculty-like), they
do not offer a cognitive ground for the theory demarcation Chomsky wants.
Perhaps, as it seems, Chomsky wants it both ways. It must be kept in mind that
Chomsky’s SFF is an unabashed speculation and, for all we presently know, there
might be innate principles that are rich and specific enough to determine the
set of theories we may access. As it stands, I think the proposal falls between
the two stools of the strict and liberal construals.
In the next and final section I shall
look at Chomsky’s specific argument for a SFF based upon the poverty of
stimulus diagnostic. This separate treatment is apposite, for Chomsky, at least
in one place, appears to understand our fourth diagnostic as the crucial one.
5:
Induction and Convergence
For
Chomsky, the most telling indicator that a competence is faculty based is its
satisfaction of our fourth diagnostic. We have already seen, though, that
science is not a competence in the way that language is, say: our acceptance of
the theory of geodesic planetary orbits is patently not underdetermined by data
in the same way that our acceptance of the principles of binding theory is. In
what respect, then, does the fourth diagnostic militate for a SFF? Chomsky
(1975b, pp. 24-5; cf. 1971, pp. 49-50) has contended that without a SFF “it would
be impossible for scientists to
converge in their judgement on particular explanatory theories that go far
beyond the evidence at hand,… while at the same time rejecting much evidence as
irrelevant or beside the point, for the moment at least” [my emphasis]. The
nature of the inference here appears to be that if our convergence is to be
possible (not a miracle), we need extra-empirical principles to weed out all
but a few of the contrary theories that we could otherwise find to comport with
the data; a SFF is simply the seat of such principles. I must say that I find
this argument to be very weak indeed; before demonstrating why, however, a word
of caution is in order.
Chomsky’s early views were certainly
motivated by Quine and, especially, Goodman’s work on induction: Chomsky agreed
with many others that unconstrained induction is untenable, whether as a model
of learning or the norms of science, but he also rejected any empiricist
band-aids to cover the problem (see Chomsky, 1975a, Introduction). Chomsky, to
my knowledge, however, offers the above direct argument to a SFF only in the
two places cited above (see below for a qualification); his more recent
discussions simply associate the SFF thesis with the coherency of the
problem-mystery distinction (e.g., Chomsky, 1991b, p. 41; 1993, pp.44-5; 2000a,
pp.82-3). Perhaps, then, Chomsky’s considered view is that there is no inference
from the inductive practice of science to the existence of a SFF (cf., Chomsky,
1980, pp.139-40). Whatever the case may be, the argument is worth considering,
for it purports to offer precisely the backing the SFF thesis requires.
Chomsky’s concern, of course, is with
the cognitive basis of our rationality, not with the clarification of the
concepts of validity or justification. That humans, especially in their
scientific mode, are concerned to cleave to rational norms tells us a simple
truth about human thought: there is a slack between our receipt of data and the
convergent beliefs we arrive at. Such a slack, however, amounts to no more than
the fact that we are not blank slates. Crucially, the possibility remains open
that the innate equipment that enters into our ability to take up the slack is not the set of the methodological
principles that govern self-conscious scientific investigation; and even if it
is, it does not follow, as made clear above, that the equipment makes up a
faculty. All that does follow is that the scientist needs some innate equipment
to be so much as a thinker as opposed
to a S-R device. Let us see this in some detail.
An underdetermination thesis UT says, for some set of methodological canons C, that data-set D does not confirm or corroborate a theory T on the basis of C at the expense of all other contrary theories. Now if we do in fact converge on T, then, relative to UT, a suitable addition to C could overcome the underdetermination problem, i.e., UT would not show that there is no rationally justified choice to be had. It might
be that every theory is underdetermined by every possible D given any C, but I know
of no argument that attempts to show
that this is so. To assess any given UT
then, we need to ask whether the associated C
is reasonable or realistic. If the answer is ‘No’, then we do not have a sound
underdetermination claim; if the answer is ‘Yes’, then we do. So, Chomsky is
perfectly correct in thinking that some C
is required, but the requirement is not based on a need to overcome
underdetermination; C is needed so
that our theories may rationally confront the data in the first place, whether
or not the confrontation leads to underdetermination. Where, then, does the
claim come from that the scientists’ C must
be innate? For all that has been said, it is still open to think of C as the product of our thought, rather
than being our thought as such, as it were. It looks as if the claim that
scientists’ C is the innate content
of a faculty floats free of any underdetermination thesis.
Consider Hume on induction. Hume
demonstrated the deductive underdetermination of theories (hypotheses); that is, for
any D, if theory T is confirmed by entailing D,
then there are contrary theories that are equally confirmed. This notion of
underdetermination, however, amounts to the now trivial claim that scientific
inference is non-demonstrative or, so as not to exclude the Popperian,
deduction cannot amount to justification. As regards Chomsky’s inference to a
SFF, if this is the only underdetermination a scientist must face, then all
that follows about convergence is that it cannot be explained on the assumption
that the scientist’s canons are wholly deductive. For example, theories T1, T2,…Tn
might all entail D, but if T1 were the only theory
consistent with an associated favoured theory Tn+1, then it would be rational to choose T1. Otherwise put, Chomsky’s argument can be spiked by so little as an
appeal to a single canon of reason
(e.g., choose the theory that least disrupts your other commitments) that goes
beyond the entailment of D; there is
not a whiff here of an inference to an innate SFF. I do not mean to suggest, of
course, that any such canon(s) would in fact uniquely single out a theory in
any epistemic scenario; my point is only that deductive underdetermination
leaves such an option open.
As indicated above, a notion of
underdetermination closer to what Chomsky has in mind is the one due to
Goodman’s (1954/83) ‘new riddle of induction’. Familiar details aside, Goodman
shows that if hypothesis H (e.g., ‘All emeralds are green’) is confirmed by its
observed instances (i.e., green emeralds), then a contrary hypothesis H* (‘All
emeralds are grue’) is also confirmed by the same instances, where ‘grue’ means
‘either green if first examined before 2050 AD or blue if not first examined before 2050 AD’. The riddle is that we unerringly take the ‘green’ hypothesis to
be confirmed (to some degree) by the observation of emeralds, even though the
grue hypothesis is confirmed to the same degree by the emeralds. If we did not
so converge, we would have no shared sense of laws, natural kinds, explanation,
prediction, etc., but the basis for the convergence appears to be prior to the
actual framing and corroboration of hypotheses. Do we need the SFF thesis to
explain this?
To keep things finite, let us assume that
Goodman’s claim is well-founded. What Goodman shows is that enumerative induction is not sufficient
to give a hypothesis a unique degree of confirmation. So, if we wish to explain
convergence on, say, ‘green’ rather than ‘grue’, we have no greater licence from
the riddle than to add to C (=
enumerative induction). This, of course, is precisely what Goodman did, for
good or ill, with his historical
notion of entrenchment: roughly, we
converge on ‘green’ because, unlike ‘grue’, it has been successfully projected
in the past. Alternatively, Harman
(1994) proposes a practical principle
of simplicity to rule out ‘grue’. Now, these and many other ‘solutions’ do not
necessarily speak to the ‘genetic problem’ of how humans do in fact converge
(Chomsky, 1971, p.6). But the same point holds for the riddle itself: it does
not so much as indicate the shape of a cognitive solution, still less
necessitate one along faculty lines. At best, the riddle shows that enumerative
induction is inadequate as either a model of justification or, indeed,
cognition. This is an instance of the general moral: underdetermination
arguments are negative; they work against a given set of methodological
canons; they do not establish the identity of the canons which in general are required for rational convergence.
I have not insisted that we should read
Chomsky’s direct argument from convergence to a SFF as being a gloss on
Goodman’s riddle. For sure, Chomsky (1975a, pp. 33-4; 1971, pp.6-8) does appeal
with perfect legitimacy to the riddle as a central plank in the argument
against a general empiricist model of learning, but no direct association with a
SFF is made. Perhaps, though, Chomsky, at least in (1975b) where the direct
argument is made, is conflating the SFF thesis with Fodor’s position on concept
learning. The proceedings of the 1975 Royaumont conference
(Piattelli-Palmarini, 1980) suggest this. Fodor and Chomsky there argued that
the moral of Goodman’s riddle is that any induction is “logically impossible”
without an “a priori [innate]
ordering of hypotheses [or predicates]”; Fodor took this to be “so self-evident
that it is superfluous to discuss it” (Piattelli-Palmarini, 1980, pp. 259-61).
Now I do not think that this line is false so much as woefully underspecified.
What is “self-evident”, let us grant, is that blank-slate induction is
impossible. Goodman (1954/83, p. 82) himself claimed this in arguing that a mere
habituated fixation on a regularity à la Hume cannot establish which predicates are the projectible ones, because green
and grue instances are equally regular by definition. The point is well taken
but it hardly leads us to credit the scientist with a SFF, even one which
consists of a predicate metric alone. The
inference is blocked because the scientist is patently not otherwise a
blank slate. Such, indeed, is Goodman’s point: scientists’ choices are a
function of history. As regards the developing child, if one is running a
hypothesis confirmation model of learning,
as Chomsky (1965) was with language and Fodor (1975) was considering with
concepts, then one faces the problem of explaining how the child fixates on a
given hypothesis when the data does not determine such a choice. As with
underdetermination generally, the answer is to be found in a methodological
organon C. In the learning case, both
Chomsky (1965) and Fodor (1975) take C
to consist, inter alia, of a simplicity metric defined over an innate
set of grammars and predicates respectively. For present purposes I have no
problem with such proposals, although both Chomsky (1981) and Fodor (1981)
rejected them in favour of triggering
models (such is perhaps why Chomsky ceases to appeal to Goodman after the
development of the P&P approach.) My point is simply that the scientist
faces a quite different underdetermination problem from the child: the child,
on the assumption that she is a hypothesis confirmer, needs some C prior to the data; this is enough to undermine blank-slate
empiricism in favour of an indeed self-evident nativism. The issue for the
scientist, on the other hand, is what, if anything, can go into C to enable one to arrive rationally at
some hypothesis, but there is no argument here to say that whatever C comprises must be what the child has; a fortiori there is no argument which
says that the scientist’s C must be
innate. I should say that if Chomsky and Fodor were guilty of this conflation
of the child with the scientist at Royaumont, the confusion did not last. There
is some dispute about this.
Putnam has persistently attributed to the
‘Chomskyan’ (a nomenclature apparently co-extensive with ‘Fodorian’) an
ambition to find an inductive algorithm
which would explain our scientific reasoning, an ambition that is spiked by
Goodman’s riddle.[17]
In point of fact, though, even in his (1975) Fodor was not arguing for a theory of inductive concept
learning; his claim was that if that
is one’s model, then one is de facto
committed to an innate inductive logic (his major point, of course, was that
such a model does not amount to
concept learning - there is no such thing - but to belief fixation.) Fodor’s
(1981) triggering proposal is by definition non-rational:
a trigger is caused, it isn’t warranted. Since then, Fodor (1983,
1987, 2000) has claimed that we have not a prayer of discovering any
general inductive algorithm, because our computational
theory of mind does not account for non-modular processes, viz. abductive ones. Chomsky (1980, p. 140; cf.
Piattelli-Palmarini, 1980, p. 320) agrees with Fodor (see quotation in §4.)
Well, if Chomsky and Fodor do not run child and scientist together, we
appear to be left with no argument from underdetermination to a SFF as the only possible explanation of
convergence. Notwithstanding Chomsky’s choice of modality for his direct
argument, let us relax his inference to a best
explanation one. The thesis now becomes quite tempting, for it seems that
its only competitor is a sociological account, which threatens to make
scientific convergence no more rational than our allegiances to football teams.
The choice between cognition and society, however, does not exhaust the options.
Let us agree that individual scientists
are rational: they hold to their theories because they judge them to be
true or at least well corroborated. So
much, of course, does not explain convergence, but can we not say that
agreement arises due to individuals being trained within a research program and
their theoretical energies being spent therein? Otherwise put, scientists
apply their cognition to a highly restricted space whose shape is due to
exogenous factors; convergence is explained by ‘society’ determining the options
from which cognition chooses. Well, this appears quite inadequate. Chomsky’s
SFF thesis is offered as an explanation
of convergence; an appeal to research programs seems to be simply a
re-description of the explanandum; we
want to know why there are such programs in the first place. Such a riposte is
apt for those who take the sociological to be primitive, but that is not what I
am suggesting.
Since scientists are rational
creatures, as we all are, endogenous factors clearly have a role to play,
but the fact of convergence does not lead us to any particular thesis about
those factors. It is perfectly intelligible to appeal to a central, global
rationality along the lines of Fodor (1983, 2000) or perhaps some ensemble of
faculties. We must be careful, though, not to run child and scientist together
again. There may be deep similarities, as the theory theory teaches, but while
it is true, I think, that children’s commonsensical convergence is principally
due to cognitive factors, the same explanation does not work for scientists. It
only takes one person to produce a hypothesis that answers certain questions or makes novel predictions that the extant competitors fail to make. This relative
success means that the research gets taught, it has possible technological
side-effects, it attracts funding, it is popularised to a general audience,… it
snowballs. In short, the norm is for scientists to converge on given extant hypotheses; the norm is not for
them, as it is for children, individually and creatively to converge, although
sometimes it happens. Again, there is nothing irrational in following a good
idea. Individual scientists are not drones to the program, they simply tend to
be unconcerned in their day-to-day activities with formulating novel theories,
for those they have work very well and there is much interesting testing and
tweaking to be done. After all, if there is nothing obviously wrong with one’s
theory - it withstands attempted falsifications and continues to explain novel
data - then one would be irrational to forsake it.
If
this model is anywhere near correct, then individual scientists within
communities, both through their education and in their maturity, do not face
the array of possible theories with data in hand and plump for just a few out of the indefinitely many options. Were this the case, a SFF would indeed be required,
unless we thought science a miraculous affair. But scientists are always somewhere, occupying a theoretical
position. As such, the choices they make are of the form ‘I would rather be
here than there’, or vice versa. They
are not nowhere, child-like, deciding
where they wish to be, but faced with underdetermination wherever they turn.
A scientist will stay where he is, and he is rational to do so, because his
theory does enough good work. If the work dries up, or some other theory does
more work better, or (indeed!) is simpler, or more elegant, or integrates
better with some other accepted theory, or if its mathematics is more user-friendly,… then, ceteris paribus, he
will be rational to vacate his theory. I fail to see, then, that the SFF thesis is the best explanation of convergence: the rationality of science is not exhibited by convergence in the face of underdetermination; it is manifest
in the subtle interplay of factors which determines when a scientist should
move or when he should just stay put.
6: Concluding Remarks
I have not
sought to refute the SFF thesis either empirically or conceptually. At our
current state of knowledge, the former route is unavailable and to follow the
latter one would simply be to misunderstand the issue. Moreover, I have run with the speculation, offering a number of substantiating proposals and batting off some ill-founded ripostes. Charitably construed, then, my claim is just that, on reflection, it is unclear what the SFF thesis amounts to; more work needs to be done before we can seriously treat it as a hypothesis. I wish,
though, for my conclusion to be slightly less modest: the deep problem with the
thesis is its presupposition of a universal individualism, the idea that the
endogenous factors which shape any of our practices are specifiable
independently of, and have primacy over,
the exogenous ones. Such thinking is what drives the problem-mystery
distinction and it is this that gives so much sustenance to the SFF thesis, so
much so that Chomsky all but conflates the two. I have argued, with respect to both problems and mysteries and scientific convergence, that a two-way street approach is at least as viable for something as broad and amorphous as science.
Individualism is a sound assumption, I think, when we are dealing with a
competence that has the look of a
faculty, but science is not such a competence. Still, who knows… Would that we
were all capable of the Chomskyan speculations that have turned out to be true.[18]
Notes
[1] Chomsky (e.g., 1988, p. 158) has since dropped the appeal to Peircean abduction (see below).
[2] Chomsky (1965, p. 56) does contend that grammars which are not generated from UG may still be acquirable through our more general problem solving capacity. Even so, there is no guarantee that any, still less all, ‘alien’ languages will so succumb.
[3] Ironically, while Chomsky understands our biological nature virtually to guarantee our epistemic boundedness, he also speculates that UG is a perfect solution to the engineering problem of fitting language to the legibility conditions imposed by the other systems of the mind; but perfection is not a property found elsewhere in nature. The moral should perhaps be: Don’t infer properties of particular organisms from general claims about the biota; one always has to look at the particular. I shall not, though, press the moral, for the difference between the two cases is what is important. See Chomsky (1995) for the perfection speculation at work; for more informal discussions, see Chomsky (2000a, chp.1; 2000b).
[4] Such a view is perhaps close to that of Dennett (1995), Churchland (1989), and Clark (1996). More generally, an empiricist theory of cognition would tend towards indeterminacy about what can be known, whereas a rationalist or Kantian one would tend towards determinacy. My point, though, is orthogonal to this traditional divide. Cognitive design space is vast and we know very little about the area humans occupy, as such, any inference from non-omniscience to architecturally specifiable epistemic limits is quite unsafe.
[5] Chomsky (1986, 1991a) dubs the question of creativity Descartes’s problem: How are we able to use language for the free expression of our thoughts? Ironically, precisely because we have some very good ideas about the structure of language, a better case can be made, I think, for the mystery of language use than can be for consciousness. Thanks to the great advances made in linguistics, we have a quite precise working notion of linguistic competence, and much corroborating data. Against such a background we might reasonably hope for at least a working explanation of creativity. Yet this we do not have.
[6] McGinn (1991, p. 7): “longstanding historical failure is suggestive, but scarcely conclusive”. Nagel (1995, p. 97): “The various attempts to carry out this apparently impossible task [i.e., explaining consciousness] and the arguments to show that they have failed, make up the history of the philosophy of mind during the past fifty years”. Pinker (1997, p. 562): “the species’ best minds have flung themselves at the puzzles for millennia but have made no progress”.
[7] The now standard view is that, before Darwin, a dogmatic and degenerating Aristotelian essentialism prevailed (see, e.g., Mayr, 1982; for a dissenting voice, see Depew and Weber, 1995). What is certainly true is that after Darwin (ultimately, the New Synthesis) evolution is not a phenomenon to be seriously disputed and, furthermore, natural selection is recognised as the principal mechanism of change, if not the whole story.
[8] Fodor (2000, pp. 62-3) does entertain what might be called distributional encapsulation, where modules enjoy (architecturally constrained) access to other modules’ databases, but such organisation is clearly exceptional for Fodor.
[9] See Segal (1996) for, to my mind, a sound defence of the modularity of theory of mind.
[10] The closeness is not even noted by Gopnik and Meltzoff (1997), who run Fodor together with the central module crowd in evolutionary psychology (ibid., p. 58), while also suggesting that the theory theory view can tell a story about Fodor’s central system! They also fallaciously conflate Fodor’s parser module with Chomsky’s language faculty (ibid., chp. 2, passim) (see below, especially n.15).
[11] One striking correspondence is that Gopnik and Meltzoff (1997, pp. 26-7/53) claim that the innate principles (prediction, explanation, etc.) which govern theory formation also explain scientific convergence. This is one of Chomsky’s key claims (see §5).
[12] Gopnik and Wellman (1994) and Gopnik and Meltzoff (1997) follow Karmiloff-Smith (1992) in assuming that modularity (the faculty approach) has trouble explaining development. The thought is wholly confused. Of course modules develop, the point is that the crucial determinant of development is the normal maturation of the brain under normal stimulus conditions. Chomsky makes this point every time he puts pen to paper. It is the theory theory approach, I should say, that is in trouble (see n.14).
[13] Pinker (1997, Chp.5) appears to share this view insofar as he explicitly rejects a faculty for science, but (Chp.8) argues that we are epistemically bounded due to our core endowed knowledge. Pinker’s reasoning here, however, unlike that of Chomsky, is that natural selection has constrained what we might understand to being less than the whole truth.
[14] There is a serious question as to whether the theory theory approach is distinctive enough to give us the required domain general principles. The problem is this. The child is understood to fixate on certain evidentially constrained theories; the child’s brain is built to arrive at them. This is evidenced by the fact that the theories are more or less uniform across the species. But thesis and evidence are here unstable: to account for the uniformity, the theory theorist must enrich the principles the child employs, but the richer they become, the less evidence the child requires and the less theory-like the ‘theories’ become. It is therefore moot, I think, whether the theory theory view offers a coherent alternative to the faculty approach. In short, the child just ain’t like the scientist, or vice versa. (See Leslie, 2000, for other deep worries.)
[15] For Chomsky’s explicit rejection of the Fodor view, at least regarding language, see Chomsky (1986, p.14, n.10; 1991a, pp.19-21; 2000a, pp.117-8).
[16] E.g., Fodor (1983, p.123) thinks that even a wild-eyed generalist position such as Hume’s is still endogenously restricted because such ‘minds’ can only access information derived from perceptual input.
[17] For the association of Chomsky with inductive logic, with references to Goodman’s ‘refutation’ thereof, see Putnam (1981, pp. 125-6; 1983, p. viii; 1988, pp. 82-3; 1992, pp. 14-6).
[18] I am in
the debt of two anonymous referees for insightful constructive criticism of an
earlier draft.
References
Carey, S. and Spelke, E., 1994, “Domain-specific Knowledge and Conceptual Change.” In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture (pp. 169-200). Cambridge: Cambridge University Press.
Chomsky, N., 1965, Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, N., 1968/72, Language and Mind (extended edition). New York: Harcourt Brace Jovanovich.
Chomsky, N., 1971, Problems of Knowledge and Freedom: The Russell Lectures. New York: Pantheon.
Chomsky, N., 1975a, The Logical Structure of Linguistic Theory. Chicago: Chicago University Press.
Chomsky, N., 1975b, Reflections on Language. New York: Pantheon.
Chomsky, N., 1979, Language and Responsibility: Based on Conversations with Mitsou Ronat. New York: Pantheon.
Chomsky, N., 1980, Rules and Representations. New York: Columbia University Press.
Chomsky, N., 1981, Lectures on Government and Binding. Dordrecht: Foris.
Chomsky, N., 1986, Knowledge of Language: Its Nature, Origin, and Use. New York: Praeger.
Chomsky, N., 1988, Language and Problems of Knowledge: The Managua Lectures. Cambridge, MA: MIT Press.
Chomsky, N., 1991a, “Linguistics and Adjacent Fields: A Personal View.” In A. Kasher (ed.), The Chomskyan Turn (pp. 3-25). Oxford: Blackwell.
Chomsky, N., 1991b, “Linguistics and Cognitive Science: Problems and Mysteries.” In A. Kasher (ed.), The Chomskyan Turn (pp. 26-53). Oxford: Blackwell.
Chomsky, N., 1993, Language and Thought. London: Moyer Bell.
Chomsky, N., 1995, The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, N., 2000a, New Horizons in the Study of Language and Mind. Cambridge: Cambridge University Press.
Chomsky, N., 2000b, The Architecture of Language. New Delhi: Oxford University Press.
Churchland, P., 1989, A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge, MA: MIT Press.
Clark, A., 1996, Being There: Putting Brain, Body and World Together Again. Cambridge, MA: MIT Press.
Collins, J., 2000, “Logical Form, Theory of Mind, and Eliminativism.” Philosophical Psychology, 13, 465-490.
Dennett, D., 1995, Darwin’s Dangerous Idea: Evolution and the Meanings of Life. London: Penguin.
Depew, D. and Weber, B., 1995, Darwinism Evolving: Systems Dynamics and Natural Selection. Cambridge, MA: MIT Press.
Fodor, J., 1975, The Language of Thought. Cambridge, MA: Harvard University Press.
Fodor, J., 1981, “The Present Status of the Innateness Controversy.” In Representations: Philosophical Essays on the Foundations of Cognitive Science (pp. 257-316). Cambridge, MA: MIT Press.
Fodor, J., 1983, The Modularity of Mind. Cambridge, MA: MIT Press.
Fodor, J., 1987, “Modules, Frames, Fridgeons, Sleeping Dogs, and the Music of the Spheres.” In J. Garfield (ed.), Modularity in Knowledge Representation and Natural-Language Understanding (pp. 25-36). Cambridge, MA: MIT Press.
Fodor, J., 2000, The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: MIT Press.
Goodman, N., 1954/83, Fact, Fiction, and Forecast (fourth edition). Cambridge, MA: Harvard University Press.
Gopnik, A. and Meltzoff, A., 1997, Words, Thoughts, and Theories. Cambridge, MA: MIT Press.
Gopnik, A. and Wellman, H., 1994, “The Theory Theory.” In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture (pp. 257-93). Cambridge: Cambridge University Press.
Harman, G., 1994, “Simplicity as a Practical Criterion for Deciding what Hypotheses to Take Seriously.” In D. Stalker (ed.), Grue! The New Riddle of Induction (pp. 153-171). La Salle: Open Court.
Karmiloff-Smith, A., 1992, Beyond Modularity: A Developmental Perspective on Cognitive Science. Cambridge, MA: MIT Press.
Keil, F., 1989, Concepts, Kinds, and Cognitive Development. Cambridge, MA: MIT Press.
Leslie, A., 2000, “How to Acquire a Representational Theory of Mind.” In D. Sperber (ed.), Metarepresentations: A Multidisciplinary Perspective. Oxford: Oxford University Press.
Mayr, E., 1982, The Growth of Biological Thought: Diversity, Evolution, and Inheritance. Cambridge, MA: Harvard University Press.
McGinn, C., 1991, The Problem of Consciousness. Oxford: Basil Blackwell.
McGinn, C., 1993, Problems in Philosophy: The Limits of Inquiry. Oxford: Blackwell.
Nagel, T., 1986, The View from Nowhere. Oxford: Oxford University Press.
Nagel, T., 1995, Other Minds. Oxford: Oxford University Press.
Piattelli-Palmarini, M. (ed.), 1980, Language and Learning: The Debate between Jean Piaget and Noam Chomsky. Cambridge, MA: Harvard University Press.
Pinker, S., 1997, How the Mind Works. London: Penguin.
Putnam, H., 1981, Reason, Truth and History. Cambridge, MA: Harvard University Press.
Putnam, H., 1983, “Foreword.” In Goodman (1954/83): pp. vii-xvi.
Putnam, H., 1988, Representation and Reality. Cambridge, MA: MIT Press.
Putnam, H., 1992, Renewing Philosophy. Cambridge, MA: Harvard University Press.
Segal, G., 1996, “The Modularity of Theory of Mind.” In P. Carruthers and P. Smith (eds.), Theories of Theories of Mind (pp. 141-57). Cambridge: Cambridge University Press.
Sperber, D., Premack, D., and Premack, A. (eds.), 1995, Causal Cognition: A Multidisciplinary Debate. Oxford: Clarendon Press.