- Thank you, Dr. Fahad, for accepting this invitation. I would like to begin our conversation by asking about the significant intellectual influences in your life. Looking back, what are the most important milestones that have shaped your thinking? Additionally, I am curious to know why you decided to specialize in linguistics as your academic field.
Perhaps the first stop shaping my early thinking was Jahra, Kuwait. I was lucky to have intellectually diverse friends in my neighborhood; we’d gather in one another’s guestrooms, which I nicknamed “Dar Al-Khilafa” (The House of the Caliphate) for their resemblance to the seating areas depicted in historical dramas. I don’t remember a day without heated discussions. We would engage in arguments and counter-arguments about politics, society, and religion. Although we had no formal understanding of critical thinking at the time, our conversations were guided by instinct and common sense. Those years instilled in us a foundation of conversational agility, and I cherish the memories of that period, filled with details and characters I hold with nostalgia and gratitude.
The second stop shaping me was the University of Salamanca, Spain, a crucible for my academic formation. Arriving in the early nineties, ‘Hola’ (Hello) was the only Spanish word I knew. However, I embarked on an intensive six-month study of the language, which paved the way for my enrollment in the Department of Spanish Language and Literature at the university. The first year was undoubtedly pure struggle. The demands far outstripped my meager skills. Imagine, someone who just learned to say hello expected to compete with native speakers in a matter of months! But the youth embrace challenges, and I, though young, wasn’t naive. From the outset, I understood that relying solely on classroom instruction and the summaries exchanged among students would not be sufficient to successfully navigate this stage. Moreover, the internet was not readily accessible during that time, which meant I had to take a more arduous path. I dedicated myself to extensive and meticulous reading of the reference materials for each subject. This approach instilled in me a reasonable level of self-confidence, but it also revealed the vast extent of my ignorance. It was then I learned: before venturing into a topic, read deeply.
Spain became my gateway to language as a subject of study itself. At Salamanca University, in a senior-level linguistics course, I first encountered the name ‘Chomsky.’ Although the professor did not express great enthusiasm for Chomsky’s ideas, he provided enough information to pique my curiosity about his “minimalist” program, which had only been in existence for a year at that time. What truly captivated me was the connection Chomsky proposed: how laws of physics governing the brain intertwine with the psychological rules shaping our minds, including language. From that moment, I knew – graduate studies in linguistics, specifically Chomsky’s linguistics, awaited me.
- Your interest in studying language and linguistics culminated in a distinguished doctoral thesis on Noam Chomsky’s minimalist program, which was published by Cambridge in 2014. In this book, you explored and examined the significant theoretical and methodological aspects of the minimalist program while also offering critical analysis. In general terms, what do you find valuable about this scientific program, and what are its main limitations?
The usefulness of the minimalist program lies in its encouragement of researchers to explore deeper connections between language and other areas of study, thereby transcending traditional disciplinary boundaries. This interdisciplinary approach, in turn, benefits students’ scientific training. Engaging with the minimalist program provides an opportunity to develop a broader and more comprehensive perspective on the research topic. Rooted in philosophical foundations, it demands exposure to other disciplines like physics, biology, and mathematics. Chomsky, for instance, envisions his minimalist program bridging the mind-brain gap, requiring a shift beyond pure linguistics to natural sciences, mathematics, and philosophy. Consequently, Chomsky’s writings often reference the works of experts in computational neuroscience, such as Christopher Cherniak, alongside conventional references to natural sciences, mathematics, philosophy of science, and philosophy of mind.
The points mentioned above highlight the aspects that I find useful in minimalism. However, when it comes to drawbacks, they are unfortunately too numerous to count. In this context, I will briefly mention three points, but feel free to stop us at any point of interest if you desire more details.
First, Chomsky and his followers tend to favor specific opinions or positions within various fields to bolster the minimalist program. While this bias poses no inherent problem when opposing positions are also considered, such balanced engagement unfortunately often falls short. As one of several examples, we can observe Chomsky adopting a philosophical reading of Galileo’s works as presented by the French and Austrian philosophers of science, Alexandre Koyré and Paul Feyerabend, respectively, while entirely disregarding the opinions of other prominent critics of that interpretation. Similarly, he promotes a specific scientific viewpoint regarding the timing of language’s emergence, despite the ongoing lack of consensus on this matter. The point here is that some of the positions embraced by Chomsky and his supporters, including Massimo Piattelli-Palmarini in the United States and Naoki Fukui in Japan, are no longer considered tenable by specialists in those respective fields. A case in point is the insistence on viewing the laws of physics solely through a teleological lens, a perspective abandoned long ago by physicists. Additionally, we encounter inconsistencies within Chomsky’s own approach, such as his initial adoption of Popper’s falsification principle followed by a complete reversal without any acknowledgement of this shift.
Second, it has been observed that some of Chomsky’s followers, sometimes with his endorsement, exhibit a promotional tendency towards the minimalist program, a tendency that is not typically appropriate in academic fields. This manifests itself in two troubling ways: on the one hand, an inflated (even historically distorted) portrayal of minimalism’s origins, goals, and achievements; on the other, a dismissive attitude towards prominent linguistic critics, occasionally amounting to ignoring them altogether. Genuine advancement in any field necessitates serious engagement with criticism, not its dismissal.
Finally, the minimalist program aims to offer a deeper understanding of language by going beyond mere sufficiency in explaining language acquisition; it seeks to explain the overall grammar underlying this acquisition process. Despite this aspiration, however, it falls short of the simplest standards of scientific explanation. In minimalist literature, we commonly encounter expressions of the form “x explains y,” where x refers to a constraint among the contingencies that separate the computational system from the two external systems, one related to meaning and the other to sound. Generally, y refers to theoretical concepts that can easily be dispensed with or replaced by others, without the slightest regard for what “explanation” is intended to mean here or for its logical structure, let alone the quality of the evidence accompanying it.
- Regarding the second point, what are some of the most prominent criticisms that supporters of the minimalist program ignore?
Numerous criticisms have often been ignored. While not all of them are directed specifically at the minimalist program, they are nevertheless applicable to it. For example, the program falls within the purview of biolinguistics, which inherits Chomsky’s view of language as a biological entity within the brain, coupled with a computational perspective of infinite sentence generation. This ontological position, positing language as both biological and infinite, has faced strong critiques from linguists like Katz, Postal, and Langendoen. These linguists have presented logical and mathematical arguments highlighting the inconsistencies within this position, regardless of whether one agrees or disagrees with them. Unfortunately, Chomsky has either ignored these criticisms entirely or dismissively undervalued them. Also, a noticeable phenomenon exists within theoretical linguistics, one so familiar that I doubt anyone working in the field would be unaware of it. Generative approaches competing with minimalism frequently engage with Chomsky and his supporters, critiquing or comparing their work. Conversely, it is rare to find any mention of these competing approaches in the literature of the minimalist program. An example that comes to mind is Robert Borsley’s work, which consistently presents Chomsky’s views for the purpose of critiquing them before introducing his preferred theoretical alternative (HPSG). Yet references to Borsley’s writings are scarce in minimalist circles, let alone any serious engagement with his criticisms.
- You focused a lot on the “strong minimalist thesis” in that book. What is the significance of this thesis? How effective is it?
This thesis proposes that language is an optimal solution to the challenge of readability. It occupies a space closer to a scientific hypothesis than it might initially seem. To understand its core claim, let’s first revisit the crucial assumptions. Firstly, language is considered a function within the mental capacities provided by the human brain. Secondly, language is essentially a computational system that uses a mental lexicon to generate specific expressions, each of which reflects a pairing between two adjacent systems: one associated with sound and the other with meaning. Thirdly, for successful language use, both the sound and meaning systems must be able to “read” the expressions the computational system produces. This requirement is known as the “readability conditions” (Chomsky’s “legibility conditions”).
The previous points highlight the foundational assumptions underlying the strong minimalist thesis. The thesis suggests that the computational system, in fulfilling its role of connecting sound and meaning, achieves an optimal response to the conditions of readability. In other words, the primary challenge faced by the computational system is to establish the most effective relationship between sound and meaning, and Chomsky claims it achieves this “optimal solution.” This raises a crucial question: how is the computational system capable of accomplishing this task optimally? Chomsky’s answer relies on an assumption that may appear more like fantasy than reality: the assumption that the computational system operates under laws of physics characterized by their simplicity. When the minimalist program describes language as simple, perfect, and economical, these descriptions align with the notion of optimality. In the strong minimalist thesis, the term “minimalist” refers to the belief that the computational system, or the overall grammar, should possess the fewest possible properties necessary to solve the problem of connecting sound and meaning. The thesis is considered “strong” because of what follows from this minimalist character: it posits that many of the properties attributed to the computational system fall into two categories: either they are unreal theoretical constructs we can discard, or they are real properties derived from sources beyond the scope of language or the computational system. According to a particular conception of the strong minimalist thesis, supported by Chomsky and others, the properties of the computational system are limited to the recursive property. The minimalist program’s task is to try to derive all other properties from two sources: the position of language among the other cognitive components, and its place in the natural world governed by the laws of physics.
The conditions of readability are linked to the first source, while optimal computation is associated with the second. Both sources form the basis of what Chomsky refers to as “virtual conceptual necessity.”
We now arrive at the second part of your question: How effective is this thesis? If regarded as a hypothesis, it encounters problems of both a conceptual nature on one hand, and an empirical (experimental) nature on the other hand. As it appeared to me during the writing of my doctoral dissertation between 2007 and 2011, and as it appears to me now, the recursive quality of language represents the most prominent example of both types of problems faced by the strong minimalist thesis.
Previously, I mentioned the perspective upheld by Chomsky and others, which asserts that recursion is the sole property that can be considered unique to human language. Pinker and Jackendoff famously dubbed this the “recursion-only hypothesis” in their critique of Hauser, Chomsky, and Fitch. (Though, I should add, Chomsky’s embrace of this idea predates minimalism.) This begs a fundamental question: what justifies the computational process behind recursion? I’m referring to the “merge” operation, or “branching,” as some researchers prefer to call it. This operation establishes a relationship between a syntactic element A and another syntactic element B to generate a new syntactic element represented as the set {A, B}. With no limit on its repeated application, it is responsible for generating infinite linguistic expressions.
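The binary set-formation just described can be illustrated with a small sketch. This is a toy model in Python, not part of any minimalist formalism; the lexical items and variable names are purely illustrative:

```python
def merge(a, b):
    # Binary Merge: take two syntactic objects and form the set {a, b}.
    # frozenset is used so that merged objects are themselves hashable
    # and can be merged again, allowing unbounded recursive embedding.
    return frozenset([a, b])

# Repeated application builds nested structures of arbitrary depth:
dp = merge("the", "book")   # the set {the, book}
vp = merge("read", dp)      # the set {read, {the, book}}

assert dp in vp             # the earlier object is embedded intact
```

Because nothing bounds how often `merge` can reapply to its own outputs, a finite lexicon yields an unbounded set of expressions, which is the sense of “infinite generation” at issue here.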
The mentioned process gives rise to numerous challenges, including the question of why it should be regarded as a specifically binary computational process. However, what truly concerns us are its implications for the “recursion-only hypothesis,” specifically within the strong minimalist framework. For Chomsky, any language-like system must have a “merging” process, the embodiment of recursion, which he deems a “virtual conceptual necessity.” As mentioned earlier, there are two potential sources that can justify the necessity of any linguistic property: one approach derives the property from the conditions of readability, while the other treats it as an outcome of the laws of physics governing the natural world, including language. Interestingly, Chomsky employs neither to justify the “merging” process, and for good reason: using either source would contradict the validity of the “recursion-only hypothesis” that Chomsky defends. Remember, these external sources were minimalist tools for stripping away pre-minimalist baggage, minimizing the properties and processes needed to explain language acquisition. The purpose of this reduction is to address what Chomsky perceives as the challenge of language’s rapid emergence within a relatively short timeframe. This leaves open the question of what justifies the existence of the merging process when the “virtual conceptual necessity” of minimalist theory falls short. This engrossing question captivated me throughout my doctoral dissertation, and a particular statement by Chomsky remains vividly etched in my memory, rendering the problem even more confusing. The statement comes from Chomsky’s famous article, “Approaching UG from Below,” where he asserts that merging is an integral part of universal grammar even if it is not specific to the faculty of language.
I remember sending Chomsky a specific question regarding this matter: If universal grammar is limited to language-specific operations, how can merging belong to it even if it’s not language-specific? As I mentioned in my letter and later in the book (p. 105), I must admit that Chomsky’s response to this question lacked coherence. He argued that the peculiarity of merging wasn’t its exclusivity to language, but its ability to generate complex expressions! This response felt like a cop-out, devaluing the concept of “linguistic specificity” to the point of meaninglessness. Following his logic, merging could be “visual” because it creates complex images, or “thought-specific” because it forms intricate ideas. Thus, the specificity of this computational process is no longer tied to its confinement within a particular domain but rather to its inherent nature across any domain. However, Chomsky refrains from completely relinquishing the linguistic specificity of this iterative process because doing so would render the process of language acquisition a mere “miracle.” As Chomsky repeatedly emphasizes, language must possess something distinctive; otherwise, the phenomenon of language acquisition would become inexplicable.
On the conceptual front, Chomsky and his followers employ a strategy, which I have labeled the “imperfection strategy,” to grapple with the strong minimalist thesis. In the third chapter of my book, I put significant effort into explaining this strategy before offering my critique. Unfortunately, I have not yet come across anyone who has been kind enough to address my argument, despite the gracious reception and critique of my book by individuals such as Colin from Britain or Hiroki from Japan.
Beyond conceptual issues, the empirical side also presents numerous challenges for the strong minimalist thesis. Some involve the unexpected presence of recursion outside language, while others stem from the difficulty of empirically evaluating the properties of thought associated with the “readability conditions.” Reluctantly, I proposed a preliminary approach to the latter category of problems by drawing insights from studies in animal cognition, in order to establish a more objective standard for readability within minimalism. I say “reluctantly” because this wasn’t originally in my doctoral thesis; it came later, after one reviewer remarked that while I had done an excellent job of deconstruction, I had paid insufficient attention to reconstruction. It was this observation that compelled me to introduce the concept mentioned towards the end of the book.
- What is “the imperfection strategy”?
To explain the “imperfection strategy,” it is pertinent to begin with the concept of “perfection.” In relation to the strong minimalist thesis, any linguistic property that can be derived from the thesis, whether from language’s position within the cognitive system or from its position within the natural world, is deemed “perfect”: a flawless reflection of the computational system, a “perfect design” optimally fulfilling its function. This notion of “perfection” stands in opposition to the concept of “specificity.” Now, Chomsky offers two seemingly equivalent interpretations of the strong minimalist thesis. Firstly, it posits that every linguistic property embodies the perfection of language’s design, responding to readability conditions or obeying natural laws. Secondly, it asserts that the design lacks any property indicating “linguistic specificity.” Note that this perspective raises a particular issue, which I touched upon in a previous response regarding the recursive property: Chomsky regards the process of “merging” as an expression of the perfection of linguistic design while maintaining that it is restricted to language. In my view, this inconsistency arises because a property expressing the perfection of linguistic design does not embody linguistic specificity, and conversely, a property representing linguistic specificity does not embody the perfection of linguistic design.
Let’s explore Chomsky’s strategy as outlined in Minimalist Inquiries, which comprises two steps. The first step involves determining the essential function of language, while the second step entails examining the extent to which language fulfills this function. The minimalist thesis took the first step, declaring language’s function to be meeting constraints imposed on the computational process by its place in both cognition and the natural world. Consequently, the second step involves identifying where the minimalist thesis falls short by seeking a property that does not indicate the perfection of language design. This suggests that the “imperfection strategy” aims to challenge the strong minimalist thesis. On the surface, this may appear to be the case. However, upon closer examination of how Chomsky presents this strategy, we observe that it is logically inconsistent. This is precisely what I attempted to clarify during my discussion of this thesis.
- Chomsky’s “Galilean method,” while intriguing, faces significant criticisms. This theoretical approach prioritizes abstract models over meticulous analysis of empirical data, as Chomsky often illustrates with physical theories. As a result, the method has drawn criticism from various linguists. From Bloom and Jackendoff’s initial inquiries to Everett’s Pirahã case study, the critiques culminate in Postal’s scathing criticisms in his writings. A common thread emerges: Chomsky’s theory, in general, seems resistant to falsification, as Karl Popper defined it. So, what are your thoughts on the “Galilean style”? How do you view these methodological criticisms leveled at Chomsky?
It is regrettable that Chomsky approached the principle of falsification in a purely pragmatic manner. When he felt his language acquisition theory superior to Jean Piaget’s in their famed debate of the 1970s, he insisted that a “falsifiable” theory was worthy of respect, because falsifiability gives us the ability to examine and assess it. However, when Chomsky veers towards abstraction, particularly in his minimalist program, he appears to openly mock the principle of falsification in his responses to questions about the testability of the strong minimalist thesis. Resorting to the “Galilean method” becomes a transparent attempt to shield his theoretical conception, granting it immunity against any contradicting evidence.
In a paper I published in 2008, I focused specifically on Chomsky’s Galilean method and attempted to trace its philosophical origins. One key observation was that Chomsky’s image of Galileo’s approach was heavily biased towards a specific interpretation within the philosophy of modern science. He seemed oblivious to the critiques of this interpretation, particularly those of the prominent Galileo scholar Maurice Finocchiaro, who highlighted the shortcomings in Koyré’s and Feyerabend’s understanding of Galileo’s scientific method, the very understanding Chomsky embraced as the foundation for his own linguistic methodology.
All the points raised by the critics in their rejection of Chomsky’s Galilean style are indeed valid. Other scholars, such as Rudolf Botha and Pieter Seuren, concur that Chomsky’s interpretation of this style contradicts the principle of falsification. This inconsistency is undeniable. However, a crucial point merits emphasis. I acknowledge that, until a sabbatical year at the Munich Center for Mathematical Philosophy a few years ago, my understanding was incomplete. There, I presented a paper on Chomsky’s method in a seminar, where I benefited greatly from colleagues, mostly philosophers of science. One of the key takeaways was the need to avoid overemphasizing Popper’s principle of falsification as the sole criterion for determining the scientific nature of a theory or hypothesis. The principle itself faces challenges, but falsifiability remains a necessity. While we may struggle to define clear criteria for this ability, such a situation is not unusual in the modern philosophy of science. For instance, Carl Hempel, when constructing his model of scientific explanation, emphasized the requirement that scientific explanations incorporate general laws, despite being unable to provide a definitive standard for identifying what should count as a general law. I highlight this to pre-empt any argument defending Chomsky’s method on the grounds of the falsification principle’s own philosophical problems. What matters is falsifiability in essence, not a coherent philosophical framework for the principle itself.
- I agree that the “principle of falsification” is often used as a widely accepted axiom in arguments and discussions. However, let’s allocate more time for this crucial issue.
Using the “principle of falsification” as the sole criterion for accepting or rejecting theories is met with two common responses:
- Firstly, Thomas Kuhn’s research in the history and philosophy of science is often cited. In its most radical interpretation, Kuhn proposes that theory acceptance, rejection, and “paradigm shifts” (moving from one dominant theory to another) are not solely driven by refutation of existing theoretical and experimental foundations. Instead, they are influenced by intertwined social processes within the scientific community. Kuhn even quotes the renowned physicist Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
- Secondly, the issue of “underdetermination” is raised. This concept suggests that a particular set of evidence at a given moment may not be sufficient to conclusively determine our beliefs and convictions regarding a specific theory. Therefore, relying solely on experimental evidence as the criterion for proving or rejecting scientific theories is deemed inadequate.
These objections appear to support the “Galilean method.” In your viewpoint, how should we respond to these objections?
On the first issue, we need to recognize two distinct approaches in the philosophy of science: logic-bound and logic-transcending. Logic-bound approaches, like those of Karl Popper, Charles Peirce, and Norwood Hanson, focus on analyzing the nature of logical inference in scientific discovery. On the other hand, the approaches that transcend the boundaries of logic are famously exemplified by Thomas Kuhn’s perspective and Paul Feyerabend’s approach to the scientific method within a historical context.
This latter philosophical approach arose in response to the decline of logical empiricism’s dominance over the philosophy of science in the second half of the 20th century. With that decline, a void opened that was filled by historical and anarchist tendencies, which shared a common rejection of the notion that science, in its methodology and historical development, is bound by the constraints of logic. Thomas Kuhn, for one, argued that the scientific method is subject to rules that change with the shift in “paradigms,” the guiding models adopted by scientists within a particular historical period of “normal science.” He also argued against the possibility of directly comparing the concepts of different paradigms across eras. Paul Feyerabend, for his part, not only denied the possibility of such comparisons but also rejected the existence of objective truth and even of a universal scientific method. Both thinkers gained rapid fame, and their contributions transcend the boundaries of traditional logic in understanding scientific methodology, which is precisely why, as the English proverb suggests, their ideas should be taken “with a pinch of salt.”
Returning to your question, which I would like to rephrase slightly without altering its essence: Do these tendencies aim to bypass the principle of falsifiability? While the answer remains inconclusive in Kuhn’s case, Feyerabend’s position seems more clear-cut. Feyerabend not only denies the existence of a scientific method but also advocates for the use of “artifice” as a legitimate means to defend the validity of a scientific hypothesis. He promotes the principle of “anything goes” in the scientific method. In my opinion, such a nihilistic stance leaves no room for distinguishing between true and false hypotheses or for differentiating genuine science from pseudoscience. This, I believe, is a disastrous outcome, and has far-reaching consequences not limited to the logic of scientific discovery. This is particularly evident in the current health crisis.
When considering the principle of falsifiability, it is important to view it not only as a criterion for the accumulation of scientific knowledge but also as a criterion for evaluating specific hypotheses. Invoking the historical and anarchist tendencies as arguments against this Popperian principle is misguided. Popper’s principle does face inherent structural problems in two cases: (1) when a hypothesis fails to meet the falsification criterion, and (2) when it successfully passes it. Both cases relate to the second issue you raised, “underdetermination,” an apt term, since definitively proving a hypothesis, regardless of supporting evidence, remains elusive.
The issue of “underdetermination” is closely linked to what is known as “Duhem’s thesis.” This thesis posits that no crucial experiment can pinpoint the flaw in a hypothesis that fails an experimental test, because a hypothesis cannot be tested in isolation from the implicit assumptions that accompany it. Although Popper defends falsification through the argument from the negation of the conclusion (modus tollens), this logically sound deduction merely licenses the denial of the conjunction of the hypothesis and its associated assumptions, not of the hypothesis itself. Herein lies a weakness in Popper’s approach, as the falsification criterion may mistakenly exclude a valid hypothesis. While Popper does not oppose reevaluating the implicit assumptions to give the hypothesis another chance to meet the falsification criterion, it is challenging to identify and address all of them; some may not even come to mind during the testing of the hypothesis and its consequences.
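The logic of the argument from the negation of the conclusion, and Duhem’s point about it, can be put in standard notation (a textbook reconstruction, not drawn from the source):

```latex
% A hypothesis H is tested only in conjunction with auxiliary assumptions A.
% If the conjunction predicts observation O and O fails, modus tollens gives:
\[
\big[(H \land A) \rightarrow O\big],\;\; \neg O
\;\;\vdash\;\;
\neg (H \land A)
\quad\text{i.e.}\quad \neg H \lor \neg A .
\]
% The failed prediction refutes the conjunction as a whole, not H alone:
% the flaw may lie in H or in any of the auxiliary assumptions A.
```

This makes visible why a failed test underdetermines the verdict: logic alone does not say whether to abandon the hypothesis or one of its accompanying assumptions.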
Now we turn to the second case, where a hypothesis passes the falsification test. Here, too, underdetermination blocks the conclusion that this hypothesis is the only explanation of the phenomenon in question. Remember, though, that passing such a test, according to Popper, does not confirm the hypothesis or increase the degree of its probability. What, then, does it mean? Here Popper introduces the concept of “corroboration” (rendered in some translations as “reinforcement”). When a hypothesis survives attempts at falsification, it is corroborated, and its degree of corroboration increases as it continues to survive further tests. This invites a comparison between “confirmation” and “corroboration.” Firstly, both pertain to the standing of a hypothesis following experimental testing, though they differ in approach and focus. Secondly, both admit of degrees. Finally, the crucial difference lies in how those degrees are measured. When seeking to confirm a hypothesis, the focus is on evaluating the probability that the hypothesis is true in light of how the experimental results support its content. Conversely, when aiming to corroborate a hypothesis, the focus is on the probability of its being false before assessing the extent to which experimental results have failed to falsify its content. This discussion isn’t meant to cast doubt on the genuine distinction between “confirmation” and “corroboration”; our concern lies in understanding what Popper’s concept of “corroboration” entails.
It involves recognizing the necessity of accepting, in principle, the validity of a hypothesis once it survives the falsifiability test. However, as Wesley Salmon pointed out, this reinforcement (Popper’s “corroboration”) is no more than ampliative reasoning (induction). Therefore, Popper’s approach, despite his claims, isn’t purely deductive. Adding corroboration to his deductive framework demonstrates that Popper ultimately failed to overcome the problem of induction.
While these issues concern the coherence of the falsification principle within specific theoretical frameworks, they do not invalidate it as a necessary criterion for evaluating scientific hypotheses. Appealing to these problems cannot justify accepting a hypothesis in the face of contradicting evidence.
- Many in the field of linguistics equate the study of language exclusively with Chomskyan linguistics, particularly in the Anglo-American world, Japan, and linguistics departments in our own Arab world. However, you hold reservations about Chomsky’s linguistics. Given this situation, how can we overcome this overreliance and address the shortcomings of Chomskyan linguistics from the perspective of the philosophy of science?
In response to the first part of the question, it is difficult for me to gauge how widespread the misconception is that equates the “correct scientific study of language” with Chomskyan linguistics. The “correct scientific study of language” adheres to the fundamental principles of the scientific method, regardless of its origin. Linguistics boasts scholars from diverse schools of thought, each contributing valuable perspectives. Chomskyan linguistics, like any other linguistic trend, has its own strengths and weaknesses. If compelled to choose a single school as a prime example of the scientific method’s quality, I would not readily choose Chomskyan linguistics, for various reasons, some of which relate directly to the second part of the question. I will elaborate on these reasons in what follows.
When examining Chomskyan linguistics from the perspective of the philosophy of modern science, it becomes evident that there are indeed shortcomings. Before addressing these, let’s first identify the most prominent ones, some of which have already been mentioned. These include the lack of regard for falsifiability and the failure to provide a clear structure for linguistic interpretation. Given that falsifiability has been previously discussed, I will focus on the structure of interpretation in Chomskyan linguistics.
As we all know, linguistics traditionally divides interpretations into two categories: internal and external. Internal interpretations derive their tools from within grammar, such as formal or grammatical interpretations, while external interpretations draw on sources outside grammar, including functional and historical interpretations. Now, let us pose the following question: with which of these interpretive approaches does Chomskyan linguistics align? If you were to put this question to different scholars working in Chomskyan linguistics, you would likely receive varying answers. This is intriguing not for the variety of answers itself, but because it exposes the inherent ambiguity of the question. In its current form, the question requires further specification: we must first determine which historical era of Chomskyan linguistics we are referring to.
Allow me to further clarify this point. Before the minimalist program emerged, interpretation in Chomsky’s framework was rooted in grammar, representing a quintessential grammatical interpretation. However, with Minimalism, Universal Grammar transitioned from a tool for interpreting language properties to a subject of interpretation itself. Consequently, the tools of interpretation may be derived from outside grammar, drawing from cognitive sources (functional explanations) or the natural world (potentially causal explanations). I emphasize my use of “may,” since we are still uncertain about the specific models of interpretation within the philosophy of modern science to which the minimalist interpretation belongs. Also, we currently lack a comprehensive understanding of the logical structure of this interpretation, despite claims that there exists a connection between computational efficiency in language and the laws of physics.
Addressing the second part of your question, rectifying this epistemological deficiency in Chomskyan linguistics requires a more serious engagement with the philosophy of modern science. This means not selectively adopting only those aspects of the philosophy of science that reinforce our confidence in our own method of studying language. The philosophy of modern science has much to offer linguistics, particularly concerning the structure of scientific explanation and its associated challenges. While scholars like Paul Ingres from France and Frederick Newmeyer from the United States have explored the nature of interpretation in linguistics, their work remains limited, particularly in the context of Chomskyan linguistics: it still revolves around the traditional division of interpretation in linguistics mentioned earlier. Nor has there been any substantial effort to clarify the logical structure of minimalist interpretation and its relationship to the various models of interpretation within the philosophy of modern science.
- When considering linguistics in general, beyond Chomsky and generative grammar, one of the challenges facing knowledgeable students of linguistics is the multitude of approaches and theories within the field. Even within the Anglo-American schools, despite the broad distinction between formal and functional linguistics, a plethora of theories exists within each school. These theories often exhibit substantial differences, with some even disagreeing on fundamental terms used in the field.
Now, the question arises: is this multiplicity a problem requiring resolution? As a university professor, how do you navigate this wide array of theories and methodological foundations when teaching and selecting appropriate curricula?
Like many other fields within the human sciences, linguistics exhibits a disparity between the abundance of research methods and the scarcity of conclusive results. This characteristic can be likened to a tree with numerous roots yet few fruits – a fitting analogy considering that human consciousness itself serves as both a tool and a subject of study in these disciplines.
Let’s look at linguistics as an example: we use language itself to study language, essentially employing one mental function to analyze another. However, the methodological dilemma extends beyond this. As Argentinian philosopher of science Mario Bunge aptly describes, language is a “three-headed dragon” possessing physical, psychological, and social dimensions. To expand on this metaphor, viewing the dragon from a narrow perspective may lead to the recognition of only one head, but a broader perspective reveals its full complexity. While proponents of individual approaches acknowledge the existence of others, they often limit their acknowledgment to mere recognition and prioritize their preferred method. This limited focus can hinder progress in understanding the phenomenon of language. Complicating matters further, even those who acknowledge the three-headed nature of language may disagree on the optimal angle from which to approach it. This, in turn, leads to diverse, even contradictory, linguistic theories. Consequently, each approach develops its own terminology, methodology, results, and even champions.
Does this imply a rejection of collaborative research? Not necessarily. Most linguistic schools remain open to external research areas and draw upon them when relevant. For instance, the Platonist approach to language draws on the philosophy of mathematics to shape its understanding of language, and the minimalist approach seeks to establish connections with theoretical physics in an effort to elevate linguistics to the level of the natural sciences. However, a crucial question arises: why does research collaboration in linguistics tend outward rather than inward? One possible explanation is that competition among the different linguistic approaches to achieve significant results overshadows any willingness to cooperate with one another in understanding the phenomenon of language.
As for your question regarding my approach to teaching, I personally do not find it challenging to deal with various theories and approaches. The reason for this is quite simple: I work in an institution that, unfortunately, does not prioritize or permit extensive linguistic research within the classroom. This situation is influenced by several factors, including the overall weaknesses in the education system. However, despite these constraints, I strive to ignite my students’ curiosity about language as a captivating phenomenon deserving of contemplation and study.
- Linguists in the Arab world often face criticism, in the name of “authenticity,” when they attempt to theorize. The objection holds that Western linguistics is not suited to our specific field of study, claiming that we already possess adequate theoretical tools within our own linguistic heritage. How do you respond to this objection? And what significance does the question of “authenticity” hold in this context?
The issue at hand can be viewed as a subset of the broader question of authenticity and modernity within our Arab-Islamic culture. Any relevant point raised concerning the broader cultural question applies equally here. In response to your question, I don’t believe we face a genuine objection. Firstly, when a scientific field engages with natural phenomena in the broadest sense, the geographic dimension holds no significance. There is no distinction between Eastern and Western chemistry, nor is there a physical theory that is correct when tested east of the Greenwich meridian and incorrect when tested west of it. Similarly, there is no inherent division between Western and Eastern linguistics. Secondly, the notion of an “adequate theoretical framework” is fundamentally flawed, regardless of its origin. A complete theory is an illusion, contradicting one of the most crucial lessons of modern science: the cognitive humility imposed by the very rules of logic. These rules dictate that we can never definitively claim the final word on any subject within the domain of sensory experience. Lastly, the coexistence of authenticity and modernity does not necessarily imply a conflict between the two. Some may fear the new while others disdain the old, but such attitudes are subjective and do not serve as evidence of an inherent clash between authenticity and modernity. This is not to deny that conflict is possible; it is simply to note that agreement between them is equally possible. I believe the core of the issue lies in pre-existing ideological positions regarding the general duality of new and old. Despite the criticism modern science receives in certain postmodern philosophies, ideological stances hold little value in determining the course of science. Science is, unequivocally, an exceptional cognitive tool: imperfect, yet the best available to humanity.
- Our conversation has been a truly enriching and engaging exploration of these intricate and challenging matters. I reiterate my gratitude to you for offering your valuable time. To conclude our dialogue, I would be interested in hearing about your current intellectual and scientific pursuits: What topics are currently engaging your attention, and what results may we anticipate in the foreseeable future? And what guidance would you offer to those interested in the study of language and perception, and in broader questions of science?
I have completed a book I began during my sabbatical in Germany, focusing on the nature of science, its history and philosophy, and its relationship to logic and mathematics. I hope it will be published soon. Currently, I am reviewing a translation for publication in Hikma magazine and writing a paper analyzing the structure of interpretation in Chomskyan linguistics. This topic has captivated my attention for an extended period, and despite having written about it on multiple occasions, I find that it continues to provoke numerous inquiries within me. There are other research avenues that I aspire to explore and study in the future. These include topics related to linguistics within our Arab environment and the concept of “scientific philosophy” as explored by certain Arab thinkers.
Concerning your second question, I offer not advice, but a suggestion I strive to implement myself: consider every scientific problem from a broader perspective. The traditional departmental structure of academic institutions represents a practical necessity rather than a fundamental division of knowledge. Seeking to expand your knowledge tools as much as possible is vital. While individual disciplines contribute valuable insights, true progress in tackling scientific challenges often requires drawing upon a broader understanding, even if the ultimate solution remains elusive. Along the journey of scientific inquiry, there are numerous valuable fruits to be harvested, even before reaching the destination. In fact, it is quite likely that we may never fully reach that destination, yet the pursuit itself yields abundant rewards.