
The epistemological approach to critical thinking

Rodrigo Canal

In this paper, we briefly show how the epistemological orientation of philosophy can be used to understand the nature and the teaching of critical thinking. With regard to methodology, we employ, as in all philosophical research of this kind, a critical review of the literature. First, we clarify the meaning of an "epistemological approach to critical thinking": in section 1 we present the philosophical character of an epistemological approach; in section 2 we treat the nature and components of critical thinking as oriented by this approach; and in section 3 we present some aspects of the epistemology underlying critical thinking, especially the relationship between justification and truth. In the final part, we offer a reason why one might regard this approach as an important alternative within the field of critical thinking.



Should we always engage in critical thinking about issues of public policy, such as health care, gun control, and LGBT rights? Michael Huemer (2005) has argued that in some cases it is not epistemically responsible to engage in critical thinking on these issues. His argument is based on a reliabilist conception of the value of critical thinking. This article analyzes Huemer's argument against the epistemic responsibility of critical thinking by engaging it critically. It presents an alternative account of the value of critical thinking that is tied to the notion of forming and deploying a critical identity, and it develops an account of our epistemic responsibility to engage in critical thinking that is not dependent on reliability considerations alone. The primary purpose of the article is to provide critical thinking students, or those who wish to reflect on the value of critical thinking, with an opportunity to think metacritically about critical thinking by examining an argument that engages the question of whether it is epistemically responsible to engage in critical thinking.


Metaphilosophy © 2013 Wiley.


Critical Thinking

Critical thinking is a widely accepted educational goal. Its definition is contested, but the competing definitions can be understood as differing conceptions of the same basic concept: careful thinking directed to a goal. Conceptions differ with respect to the scope of such thinking, the type of goal, the criteria and norms for thinking carefully, and the thinking components on which they focus. Its adoption as an educational goal has been recommended on the basis of respect for students’ autonomy and preparing students for success in life and for democratic citizenship. “Critical thinkers” have the dispositions and abilities that lead them to think critically when appropriate. The abilities can be identified directly; the dispositions indirectly, by considering what factors contribute to or impede exercise of the abilities. Standardized tests have been developed to assess the degree to which a person possesses such dispositions and abilities. Educational intervention has been shown experimentally to improve them, particularly when it includes dialogue, anchored instruction, and mentoring. Controversies have arisen over the generalizability of critical thinking across domains, over alleged bias in critical thinking theories and instruction, and over the relationship of critical thinking to other types of thinking.

2.1 Dewey’s Three Main Examples
2.2 Dewey’s Other Examples
2.3 Further Examples
2.4 Non-Examples
3. The Definition of Critical Thinking
4. Its Value
5. The Process of Thinking Critically
6. Components of the Process
7. Contributory Dispositions and Abilities
8.1 Initiating Dispositions
8.2 Internal Dispositions
9. Critical Thinking Abilities
10. Required Knowledge
11. Educational Methods
12.1 The Generalizability of Critical Thinking
12.2 Bias in Critical Thinking Theory and Pedagogy
12.3 Relationship of Critical Thinking to Other Types of Thinking
Other Internet Resources
Related Entries

1. History

Use of the term ‘critical thinking’ to describe an educational goal goes back to the American philosopher John Dewey (1910), who more commonly called it ‘reflective thinking’. He defined it as

active, persistent and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it, and the further conclusions to which it tends. (Dewey 1910: 6; 1933: 9)

and identified a habit of such consideration with a scientific attitude of mind. His lengthy quotations of Francis Bacon, John Locke, and John Stuart Mill indicate that he was not the first person to propose development of a scientific attitude of mind as an educational goal.

In the 1930s, many of the schools that participated in the Eight-Year Study of the Progressive Education Association (Aikin 1942) adopted critical thinking as an educational goal, for whose achievement the study’s Evaluation Staff developed tests (Smith, Tyler, & Evaluation Staff 1942). Glaser (1941) showed experimentally that it was possible to improve the critical thinking of high school students. Bloom’s influential taxonomy of cognitive educational objectives (Bloom et al. 1956) incorporated critical thinking abilities. Ennis (1962) proposed 12 aspects of critical thinking as a basis for research on the teaching and evaluation of critical thinking ability.

Since 1980, an annual international conference in California on critical thinking and educational reform has attracted tens of thousands of educators from all levels of education and from many parts of the world. Also since 1980, the state university system in California has required all undergraduate students to take a critical thinking course. Since 1983, the Association for Informal Logic and Critical Thinking has sponsored sessions in conjunction with the divisional meetings of the American Philosophical Association (APA). In 1987, the APA’s Committee on Pre-College Philosophy commissioned a consensus statement on critical thinking for purposes of educational assessment and instruction (Facione 1990a). Researchers have developed standardized tests of critical thinking abilities and dispositions; for details, see the Supplement on Assessment . Educational jurisdictions around the world now include critical thinking in guidelines for curriculum and assessment.

For details on this history, see the Supplement on History .

2. Examples and Non-Examples

Before considering the definition of critical thinking, it will be helpful to have in mind some examples of critical thinking, as well as some examples of kinds of thinking that would apparently not count as critical thinking.

2.1 Dewey’s Three Main Examples

Dewey (1910: 68–71; 1933: 91–94) takes as paradigms of reflective thinking three class papers of students in which they describe their thinking. The examples range from the everyday to the scientific.

Transit : “The other day, when I was down town on 16th Street, a clock caught my eye. I saw that the hands pointed to 12:20. This suggested that I had an engagement at 124th Street, at one o’clock. I reasoned that as it had taken me an hour to come down on a surface car, I should probably be twenty minutes late if I returned the same way. I might save twenty minutes by a subway express. But was there a station near? If not, I might lose more than twenty minutes in looking for one. Then I thought of the elevated, and I saw there was such a line within two blocks. But where was the station? If it were several blocks above or below the street I was on, I should lose time instead of gaining it. My mind went back to the subway express as quicker than the elevated; furthermore, I remembered that it went nearer than the elevated to the part of 124th Street I wished to reach, so that time would be saved at the end of the journey. I concluded in favor of the subway, and reached my destination by one o’clock.” (Dewey 1910: 68–69; 1933: 91–92)

Ferryboat : “Projecting nearly horizontally from the upper deck of the ferryboat on which I daily cross the river is a long white pole, having a gilded ball at its tip. It suggested a flagpole when I first saw it; its color, shape, and gilded ball agreed with this idea, and these reasons seemed to justify me in this belief. But soon difficulties presented themselves. The pole was nearly horizontal, an unusual position for a flagpole; in the next place, there was no pulley, ring, or cord by which to attach a flag; finally, there were elsewhere on the boat two vertical staffs from which flags were occasionally flown. It seemed probable that the pole was not there for flag-flying.

“I then tried to imagine all possible purposes of the pole, and to consider for which of these it was best suited: (a) Possibly it was an ornament. But as all the ferryboats and even the tugboats carried poles, this hypothesis was rejected. (b) Possibly it was the terminal of a wireless telegraph. But the same considerations made this improbable. Besides, the more natural place for such a terminal would be the highest part of the boat, on top of the pilot house. (c) Its purpose might be to point out the direction in which the boat is moving.

“In support of this conclusion, I discovered that the pole was lower than the pilot house, so that the steersman could easily see it. Moreover, the tip was enough higher than the base, so that, from the pilot’s position, it must appear to project far out in front of the boat. Moreover, the pilot being near the front of the boat, he would need some such guide as to its direction. Tugboats would also need poles for such a purpose. This hypothesis was so much more probable than the others that I accepted it. I formed the conclusion that the pole was set up for the purpose of showing the pilot the direction in which the boat pointed, to enable him to steer correctly.” (Dewey 1910: 69–70; 1933: 92–93)

Bubbles : “In washing tumblers in hot soapsuds and placing them mouth downward on a plate, bubbles appeared on the outside of the mouth of the tumblers and then went inside. Why? The presence of bubbles suggests air, which I note must come from inside the tumbler. I see that the soapy water on the plate prevents escape of the air save as it may be caught in bubbles. But why should air leave the tumbler? There was no substance entering to force it out. It must have expanded. It expands by increase of heat, or by decrease of pressure, or both. Could the air have become heated after the tumbler was taken from the hot suds? Clearly not the air that was already entangled in the water. If heated air was the cause, cold air must have entered in transferring the tumblers from the suds to the plate. I test to see if this supposition is true by taking several more tumblers out. Some I shake so as to make sure of entrapping cold air in them. Some I take out holding mouth downward in order to prevent cold air from entering. Bubbles appear on the outside of every one of the former and on none of the latter. I must be right in my inference. Air from the outside must have been expanded by the heat of the tumbler, which explains the appearance of the bubbles on the outside. But why do they then go inside? Cold contracts. The tumbler cooled and also the air inside it. Tension was removed, and hence bubbles appeared inside. To be sure of this, I test by placing a cup of ice on the tumbler while the bubbles are still forming outside. They soon reverse” (Dewey 1910: 70–71; 1933: 93–94).

2.2 Dewey’s Other Examples

Dewey (1910, 1933) sprinkles his book with other examples of critical thinking. We will refer to the following.

Weather : A man on a walk notices that it has suddenly become cool, thinks that it is probably going to rain, looks up and sees a dark cloud obscuring the sun, and quickens his steps (1910: 6–10; 1933: 9–13).

Disorder : A man finds his rooms on his return to them in disorder with his belongings thrown about, thinks at first of burglary as an explanation, then thinks of mischievous children as being an alternative explanation, then looks to see whether valuables are missing, and discovers that they are (1910: 82–83; 1933: 166–168).

Typhoid : A physician diagnosing a patient whose conspicuous symptoms suggest typhoid avoids drawing a conclusion until more data are gathered by questioning the patient and by making tests (1910: 85–86; 1933: 170).

Blur : A moving blur catches our eye in the distance, we ask ourselves whether it is a cloud of whirling dust or a tree moving its branches or a man signaling to us, we think of other traits that should be found on each of those possibilities, and we look and see if those traits are found (1910: 102, 108; 1933: 121, 133).

Suction pump : In thinking about the suction pump, the scientist first notes that it will draw water only to a maximum height of 33 feet at sea level and to a lesser maximum height at higher elevations, selects for attention the differing atmospheric pressure at these elevations, sets up experiments in which the air is removed from a vessel containing water (when suction no longer works) and in which the weight of air at various levels is calculated, compares the results of reasoning about the height to which a given weight of air will allow a suction pump to raise water with the observed maximum height at different elevations, and finally assimilates the suction pump to such apparently different phenomena as the siphon and the rising of a balloon (1910: 150–153; 1933: 195–198).
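As a quick arithmetic check (not part of Dewey's text), the 33-foot figure in the suction-pump example follows from equating the pressure of a column of water with standard sea-level atmospheric pressure; the constants below are standard physical values:

```python
# A suction pump can lift water only as high as atmospheric pressure
# can push it up the evacuated pipe: h = P_atm / (rho * g).

P_ATM = 101_325      # standard sea-level atmospheric pressure, Pa
RHO_WATER = 1_000    # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

h_metres = P_ATM / (RHO_WATER * G)
h_feet = h_metres * 3.28084

print(f"maximum lift: {h_metres:.1f} m = {h_feet:.1f} ft")  # about 10.3 m = 33.9 ft
```

At higher elevations atmospheric pressure is lower, so the same formula yields the lesser maximum heights the scientist observed.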

2.3 Further Examples

Diamond : A passenger in a car driving in a diamond lane reserved for vehicles with at least one passenger notices that the diamond marks on the pavement are far apart in some places and close together in others. Why? The driver suggests that the reason may be that the diamond marks are not needed where there is a solid double line separating the diamond lane from the adjoining lane, but are needed when there is a dotted single line permitting crossing into the diamond lane. Further observation confirms that the diamonds are close together when a dotted line separates the diamond lane from its neighbour, but otherwise far apart.

Rash : A woman suddenly develops a very itchy red rash on her throat and upper chest. She recently noticed a mark on the back of her right hand, but was not sure whether the mark was a rash or a scrape. She lies down in bed and thinks about what might be causing the rash and what to do about it. About two weeks before, she began taking blood pressure medication that contained a sulfa drug, and the pharmacist had warned her, in view of a previous allergic reaction to a medication containing a sulfa drug, to be on the alert for an allergic reaction; however, she had been taking the medication for two weeks with no such effect. The day before, she began using a new cream on her neck and upper chest; against the new cream as the cause was the mark on the back of her hand, which had not been exposed to the cream. She began taking probiotics about a month before. She also recently started new eye drops, but she supposed that manufacturers of eye drops would be careful not to include allergy-causing components in the medication. The rash might be a heat rash, since she recently was sweating profusely from her upper body. Since she is about to go away on a short vacation, where she would not have access to her usual physician, she decides to keep taking the probiotics and using the new eye drops but to discontinue the blood pressure medication and to switch back to the old cream for her neck and upper chest. She forms a plan to consult her regular physician on her return about the blood pressure medication.

Candidate : Although Dewey included no examples of thinking directed at appraising the arguments of others, such thinking has come to be considered a kind of critical thinking. We find an example of such thinking in the performance task on the Collegiate Learning Assessment (CLA+), which its sponsoring organization describes as

a performance-based assessment that provides a measure of an institution’s contribution to the development of critical-thinking and written communication skills of its students. (Council for Aid to Education 2017)

A sample task posted on its website requires the test-taker to write a report for public distribution evaluating a fictional candidate’s policy proposals and their supporting arguments, using supplied background documents, with a recommendation on whether to endorse the candidate.

2.4 Non-Examples

Immediate acceptance of an idea that suggests itself as a solution to a problem (e.g., a possible explanation of an event or phenomenon, an action that seems likely to produce a desired result) is “uncritical thinking, the minimum of reflection” (Dewey 1910: 13). On-going suspension of judgment in the light of doubt about a possible solution is not critical thinking (Dewey 1910: 108). Critique driven by a dogmatically held political or religious ideology is not critical thinking; thus Paulo Freire (1968 [1970]) is using the term (e.g., at 1970: 71, 81, 100, 146) in a more politically freighted sense that includes not only reflection but also revolutionary action against oppression. Derivation of a conclusion from given data using an algorithm is not critical thinking.

3. The Definition of Critical Thinking

What is critical thinking? There are many definitions. Ennis (2016) lists 14 philosophically oriented scholarly definitions and three dictionary definitions. Following Rawls (1971), who distinguished his conception of justice from a utilitarian conception but regarded them as rival conceptions of the same concept, Ennis maintains that the 17 definitions are different conceptions of the same concept. Rawls articulated the shared concept of justice as

a characteristic set of principles for assigning basic rights and duties and for determining… the proper distribution of the benefits and burdens of social cooperation. (Rawls 1971: 5)

Bailin et al. (1999b) claim that, if one considers what sorts of thinking an educator would take not to be critical thinking and what sorts to be critical thinking, one can conclude that educators typically understand critical thinking to have at least three features: it is done for the purpose of making up one’s mind about what to believe or do; the person engaging in the thinking is trying to fulfill standards of adequacy and accuracy appropriate to the thinking; and the thinking fulfills the relevant standards to some threshold level.

One could sum up the core concept that involves these three features by saying that critical thinking is careful goal-directed thinking. This core concept seems to apply to all the examples of critical thinking described in the previous section. As for the non-examples, their exclusion depends on construing careful thinking as excluding jumping immediately to conclusions, suspending judgment no matter how strong the evidence, reasoning from an unquestioned ideological or religious perspective, and routinely using an algorithm to answer a question.

If the core of critical thinking is careful goal-directed thinking, conceptions of it can vary according to its presumed scope, its presumed goal, one’s criteria and threshold for being careful, and the thinking component on which one focuses. As to its scope, some conceptions (e.g., Dewey 1910, 1933) restrict it to constructive thinking on the basis of one’s own observations and experiments, others (e.g., Ennis 1962; Fisher & Scriven 1997; Johnson 1992) to appraisal of the products of such thinking. Ennis (1991) and Bailin et al. (1999b) take it to cover both construction and appraisal. As to its goal, some conceptions restrict it to forming a judgment (Dewey 1910, 1933; Lipman 1987; Facione 1990a). Others allow for actions as well as beliefs as the end point of a process of critical thinking (Ennis 1991; Bailin et al. 1999b). As to the criteria and threshold for being careful, definitions vary in the term used to indicate that critical thinking satisfies certain norms: “intellectually disciplined” (Scriven & Paul 1987), “reasonable” (Ennis 1991), “skillful” (Lipman 1987), “skilled” (Fisher & Scriven 1997), “careful” (Bailin & Battersby 2009). Some definitions specify these norms, referring variously to “consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends” (Dewey 1910, 1933); “the methods of logical inquiry and reasoning” (Glaser 1941); “conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication” (Scriven & Paul 1987); the requirement that “it is sensitive to context, relies on criteria, and is self-correcting” (Lipman 1987); “evidential, conceptual, methodological, criteriological, or contextual considerations” (Facione 1990a); and “plus-minus considerations of the product in terms of appropriate standards (or criteria)” (Johnson 1992). 
Stanovich and Stanovich (2010) propose to ground the concept of critical thinking in the concept of rationality, which they understand as combining epistemic rationality (fitting one’s beliefs to the world) and instrumental rationality (optimizing goal fulfillment); a critical thinker, in their view, is someone with “a propensity to override suboptimal responses from the autonomous mind” (2010: 227). These variant specifications of norms for critical thinking are not necessarily incompatible with one another, and in any case presuppose the core notion of thinking carefully. As to the thinking component singled out, some definitions focus on suspension of judgment during the thinking (Dewey 1910; McPeck 1981), others on inquiry while judgment is suspended (Bailin & Battersby 2009, 2021), others on the resulting judgment (Facione 1990a), and still others on responsiveness to reasons (Siegel 1988). Kuhn (2019) takes critical thinking to be more a dialogic practice of advancing and responding to arguments than an individual ability.

In educational contexts, a definition of critical thinking is a “programmatic definition” (Scheffler 1960: 19). It expresses a practical program for achieving an educational goal. For this purpose, a one-sentence formulaic definition is much less useful than articulation of a critical thinking process, with criteria and standards for the kinds of thinking that the process may involve. The real educational goal is recognition, adoption and implementation by students of those criteria and standards. That adoption and implementation in turn consists in acquiring the knowledge, abilities and dispositions of a critical thinker.

Conceptions of critical thinking generally do not include moral integrity as part of the concept. Dewey, for example, took critical thinking to be the ultimate intellectual goal of education, but distinguished it from the development of social cooperation among school children, which he took to be the central moral goal. Ennis (1996, 2011) added to his previous list of critical thinking dispositions a group of dispositions to care about the dignity and worth of every person, which he described as a “correlative” (1996) disposition without which critical thinking would be less valuable and perhaps harmful. An educational program that aimed at developing critical thinking but not the correlative disposition to care about the dignity and worth of every person, he asserted, “would be deficient and perhaps dangerous” (Ennis 1996: 172).

4. Its Value

Dewey thought that education for reflective thinking would be of value to both the individual and society; recognition in educational practice of the kinship to the scientific attitude of children’s native curiosity, fertile imagination and love of experimental inquiry “would make for individual happiness and the reduction of social waste” (Dewey 1910: iii). Schools participating in the Eight-Year Study took development of the habit of reflective thinking and skill in solving problems as a means to leading young people to understand, appreciate and live the democratic way of life characteristic of the United States (Aikin 1942: 17–18, 81). Harvey Siegel (1988: 55–61) has offered four considerations in support of adopting critical thinking as an educational ideal. (1) Respect for persons requires that schools and teachers honour students’ demands for reasons and explanations, deal with students honestly, and recognize the need to confront students’ independent judgment; these requirements concern the manner in which teachers treat students. (2) Education has the task of preparing children to be successful adults, a task that requires development of their self-sufficiency. (3) Education should initiate children into the rational traditions in such fields as history, science and mathematics. (4) Education should prepare children to become democratic citizens, which requires reasoned procedures and critical talents and attitudes. To supplement these considerations, Siegel (1988: 62–90) responds to two objections: the ideology objection that adoption of any educational ideal requires a prior ideological commitment and the indoctrination objection that cultivation of critical thinking cannot escape being a form of indoctrination.

5. The Process of Thinking Critically

Despite the diversity of our 11 examples, one can recognize a common pattern. Dewey analyzed it as consisting of five phases: (1) suggestions, in which the mind leaps forward to a possible solution; (2) an intellectualization of the felt difficulty or perplexity into a problem to be solved; (3) the use of one suggestion after another as a leading idea, or hypothesis, to guide observation and the collection of facts; (4) the mental elaboration of the idea or supposition by reasoning; and (5) testing the hypothesis by overt or imaginative action.

The process of reflective thinking consisting of these phases would be preceded by a perplexed, troubled or confused situation and followed by a cleared-up, unified, resolved situation (Dewey 1933: 106). The term ‘phases’ replaced the term ‘steps’ (Dewey 1910: 72), thus removing the earlier suggestion of an invariant sequence. Variants of the above analysis appeared in (Dewey 1916: 177) and (Dewey 1938: 101–119).

The variant formulations indicate the difficulty of giving a single logical analysis of such a varied process. The process of critical thinking may have a spiral pattern, with the problem being redefined in the light of obstacles to solving it as originally formulated. For example, the person in Transit might have concluded that getting to the appointment at the scheduled time was impossible and have reformulated the problem as that of rescheduling the appointment for a mutually convenient time. Further, defining a problem does not always follow after or lead immediately to an idea of a suggested solution. Nor should it do so, as Dewey himself recognized in describing the physician in Typhoid as avoiding any strong preference for this or that conclusion before getting further information (Dewey 1910: 85; 1933: 170). People with a hypothesis in mind, even one to which they have a very weak commitment, have a so-called “confirmation bias” (Nickerson 1998): they are likely to pay attention to evidence that confirms the hypothesis and to ignore evidence that counts against it or for some competing hypothesis. Detectives, intelligence agencies, and investigators of airplane accidents are well advised to gather relevant evidence systematically and to postpone even tentative adoption of an explanatory hypothesis until the collected evidence rules out with the appropriate degree of certainty all but one explanation. Dewey’s analysis of the critical thinking process can be faulted as well for requiring acceptance or rejection of a possible solution to a defined problem, with no allowance for deciding in the light of the available evidence to suspend judgment. Further, given the great variety of kinds of problems for which reflection is appropriate, there is likely to be variation in its component events. 
Perhaps the best way to conceptualize the critical thinking process is as a checklist whose component events can occur in a variety of orders, selectively, and more than once. These component events might include (1) noticing a difficulty, (2) defining the problem, (3) dividing the problem into manageable sub-problems, (4) formulating a variety of possible solutions to the problem or sub-problem, (5) determining what evidence is relevant to deciding among possible solutions to the problem or sub-problem, (6) devising a plan of systematic observation or experiment that will uncover the relevant evidence, (7) carrying out the plan of systematic observation or experimentation, (8) noting the results of the systematic observation or experiment, (9) gathering relevant testimony and information from others, (10) judging the credibility of testimony and information gathered from others, (11) drawing conclusions from gathered evidence and accepted testimony, and (12) accepting a solution that the evidence adequately supports (cf. Hitchcock 2017: 485).

Checklist conceptions of the process of critical thinking are open to the objection that they are too mechanical and procedural to fit the multi-dimensional and emotionally charged issues for which critical thinking is urgently needed (Paul 1984). For such issues, a more dialectical process is advocated, in which competing relevant world views are identified, their implications explored, and some sort of creative synthesis attempted.

If one considers the critical thinking process illustrated by the 11 examples, one can identify distinct kinds of mental acts and mental states that form part of it. To distinguish, label and briefly characterize these components is a useful preliminary to identifying abilities, skills, dispositions, attitudes, habits and the like that contribute causally to thinking critically. Identifying such abilities and habits is in turn a useful preliminary to setting educational goals. Setting the goals is in its turn a useful preliminary to designing strategies for helping learners to achieve the goals and to designing ways of measuring the extent to which learners have done so. Such measures provide both feedback to learners on their achievement and a basis for experimental research on the effectiveness of various strategies for educating people to think critically. Let us begin, then, by distinguishing the kinds of mental acts and mental events that can occur in a critical thinking process.

By definition, a person who does something voluntarily is both willing and able to do that thing at that time. Both the willingness and the ability contribute causally to the person’s action, in the sense that the voluntary action would not occur if either (or both) of these were lacking. For example, suppose that one is standing with one’s arms at one’s sides and one voluntarily lifts one’s right arm to an extended horizontal position. One would not do so if one were unable to lift one’s arm, if for example one’s right side was paralyzed as the result of a stroke. Nor would one do so if one were unwilling to lift one’s arm, if for example one were participating in a street demonstration at which a white supremacist was urging the crowd to lift their right arm in a Nazi salute and one were unwilling to express support in this way for the racist Nazi ideology. The same analysis applies to a voluntary mental process of thinking critically. It requires both willingness and ability to think critically, including willingness and ability to perform each of the mental acts that compose the process and to coordinate those acts in a sequence that is directed at resolving the initiating perplexity.

Consider willingness first. We can identify causal contributors to willingness to think critically by considering factors that would cause a person who was able to think critically about an issue nevertheless not to do so (Hamby 2014). For each factor, the opposite condition thus contributes causally to willingness to think critically on a particular occasion. For example, people who habitually jump to conclusions without considering alternatives will not think critically about issues that arise, even if they have the required abilities. The contrary condition of willingness to suspend judgment is thus a causal contributor to thinking critically.

Now consider ability. In contrast to the ability to move one’s arm, which can be completely absent because a stroke has left the arm paralyzed, the ability to think critically is a developed ability, whose absence is not a complete absence of ability to think but absence of ability to think well. We can identify the ability to think well directly, in terms of the norms and standards for good thinking. In general, to be able to do well the thinking activities that can be components of a critical thinking process, one needs to know the concepts and principles that characterize their good performance, to recognize in particular cases that the concepts and principles apply, and to apply them. The knowledge, recognition and application may be procedural rather than declarative. It may be domain-specific rather than widely applicable, and in either case may need subject-matter knowledge, sometimes of a deep kind.

Reflections of the sort illustrated by the previous two paragraphs have led scholars to identify the knowledge, abilities and dispositions of a “critical thinker”, i.e., someone who thinks critically whenever it is appropriate to do so. We turn now to these three types of causal contributors to thinking critically. We start with dispositions, since arguably these are the most powerful contributors to being a critical thinker, can be fostered at an early stage of a child’s development, and are susceptible to general improvement (Glaser 1941: 175).

8. Critical Thinking Dispositions

Educational researchers use the term ‘dispositions’ broadly for the habits of mind and attitudes that contribute causally to being a critical thinker. Some writers (e.g., Paul & Elder 2006; Hamby 2014; Bailin & Battersby 2016a) propose to use the term ‘virtues’ for this dimension of a critical thinker. The virtues in question, although they are virtues of character, concern the person’s ways of thinking rather than the person’s ways of behaving towards others. They are not moral virtues but intellectual virtues, of the sort articulated by Zagzebski (1996) and discussed by Turri, Alfano, and Greco (2017).

On a realistic conception, thinking dispositions or intellectual virtues are real properties of thinkers. They are general tendencies, propensities, or inclinations to think in particular ways in particular circumstances, and can be genuinely explanatory (Siegel 1999). Sceptics argue that there is no evidence for a specific mental basis for the habits of mind that contribute to thinking critically, and that it is pedagogically misleading to posit such a basis (Bailin et al. 1999a). Whatever their status, critical thinking dispositions need motivation for their initial formation in a child—motivation that may be external or internal. As children develop, the force of habit will gradually become important in sustaining the disposition (Nieto & Valenzuela 2012). Mere force of habit, however, is unlikely to sustain critical thinking dispositions. Critical thinkers must value and enjoy using their knowledge and abilities to think things through for themselves. They must be committed to, and lovers of, inquiry.

A person may have a critical thinking disposition with respect to only some kinds of issues. For example, one could be open-minded about scientific issues but not about religious issues. Similarly, one could be confident in one’s ability to reason about the theological implications of the existence of evil in the world but not in one’s ability to reason about the best design for a guided ballistic missile.

Facione (1990a: 25) divides “affective dispositions” of critical thinking into approaches to life and living in general and approaches to specific issues, questions or problems. Adapting this distinction, one can usefully divide critical thinking dispositions into initiating dispositions (those that contribute causally to starting to think critically about an issue) and internal dispositions (those that contribute causally to doing a good job of thinking critically once one has started). The two categories are not mutually exclusive. For example, open-mindedness, in the sense of willingness to consider alternative points of view to one’s own, is both an initiating and an internal disposition.

Using the strategy of considering factors that would block people with the ability to think critically from doing so, we can identify as initiating dispositions for thinking critically attentiveness, a habit of inquiry, self-confidence, courage, open-mindedness, willingness to suspend judgment, trust in reason, wanting evidence for one’s beliefs, and seeking the truth. We consider briefly what each of these dispositions amounts to, in each case citing sources that acknowledge them.

Some of the initiating dispositions, such as open-mindedness and willingness to suspend judgment, are also internal critical thinking dispositions, in the sense of mental habits or attitudes that contribute causally to doing a good job of critical thinking once one starts the process. But there are many other internal critical thinking dispositions. Some of them are parasitic on one’s conception of good thinking. For example, it is constitutive of good thinking about an issue to formulate the issue clearly and to maintain focus on it. For this purpose, one needs not only the corresponding ability but also the corresponding disposition. Ennis (1991: 8) describes it as the disposition “to determine and maintain focus on the conclusion or question”, Facione (1990a: 25) as “clarity in stating the question or concern”. Other internal dispositions are motivators to continue or adjust the critical thinking process, such as willingness to persist in a complex task and willingness to abandon nonproductive strategies in an attempt to self-correct (Halpern 1998: 452). For a list of identified internal critical thinking dispositions, see the Supplement on Internal Critical Thinking Dispositions.

Some theorists postulate skills, i.e., acquired abilities, as operative in critical thinking. It is not obvious, however, that a good mental act is the exercise of a generic acquired skill. Inferring an expected time of arrival, as in Transit, has some generic components but also uses non-generic subject-matter knowledge. Bailin et al. (1999a) argue against viewing critical thinking skills as generic and discrete, on the ground that skilled performance at a critical thinking task cannot be separated from knowledge of concepts and from domain-specific principles of good thinking. Talk of skills, they concede, is unproblematic if it means merely that a person with critical thinking skills is capable of intelligent performance.

Despite such scepticism, theorists of critical thinking have listed as general contributors to critical thinking what they variously call abilities (Glaser 1941; Ennis 1962, 1991), skills (Facione 1990a; Halpern 1998) or competencies (Fisher & Scriven 1997). Amalgamating these lists would produce a confusing and chaotic cornucopia of more than 50 possible educational objectives, with only partial overlap among them. It makes sense instead to try to understand the reasons for the multiplicity and diversity, and to make a selection according to one’s own reasons for singling out abilities to be developed in a critical thinking curriculum. Two reasons for diversity among lists of critical thinking abilities are the underlying conception of critical thinking and the envisaged educational level. Appraisal-only conceptions, for example, involve a different suite of abilities than constructive-only conceptions. Some lists, such as those in (Glaser 1941), are put forward as educational objectives for secondary school students, whereas others are proposed as objectives for college students (e.g., Facione 1990a).

The abilities described in the remaining paragraphs of this section emerge from reflection on the general abilities needed to do well the thinking activities identified in section 6 as components of the critical thinking process described in section 5. The derivation of each collection of abilities is accompanied by citation of sources that list such abilities and of standardized tests that claim to test them.

Observational abilities: Careful and accurate observation sometimes requires specialist expertise and practice, as in the case of observing birds and observing accident scenes. However, there are general abilities of noticing what one’s senses are picking up from one’s environment and of being able to articulate clearly and accurately to oneself and others what one has observed. It helps in exercising them to be able to recognize and take into account factors that make one’s observation less trustworthy, such as prior framing of the situation, inadequate time, deficient senses, poor observation conditions, and the like. It helps as well to be skilled at taking steps to make one’s observation more trustworthy, such as moving closer to get a better look, measuring something three times and taking the average, and checking what one thinks one is observing with someone else who is in a good position to observe it. It also helps to be skilled at recognizing respects in which one’s report of one’s observation involves inference rather than direct observation, so that one can then consider whether the inference is justified. These abilities come into play as well when one thinks about whether and with what degree of confidence to accept an observation report, for example in the study of history or in a criminal investigation or in assessing news reports. Observational abilities show up in some lists of critical thinking abilities (Ennis 1962: 90; Facione 1990a: 16; Ennis 1991: 9). There are items testing a person’s ability to judge the credibility of observation reports in the Cornell Critical Thinking Tests, Levels X and Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005). The test developed by Norris and King (1983, 1985, 1990a, 1990b) assesses ability to appraise observation reports.

Emotional abilities: The emotions that drive a critical thinking process are perplexity or puzzlement, a wish to resolve it, and satisfaction at achieving the desired resolution. Children experience these emotions at an early age, without being trained to do so. Education that takes critical thinking as a goal needs only to channel these emotions and to make sure not to stifle them. Collaborative critical thinking benefits from ability to recognize one’s own and others’ emotional commitments and reactions.

Questioning abilities: A critical thinking process needs transformation of an inchoate sense of perplexity into a clear question. Formulating a question well requires not building in questionable assumptions, not prejudging the issue, and using language that in context is unambiguous and precise enough (Ennis 1962: 97; 1991: 9).

Imaginative abilities: Thinking directed at finding the correct causal explanation of a general phenomenon or particular event requires an ability to imagine possible explanations. Thinking about what policy or plan of action to adopt requires generation of options and consideration of possible consequences of each option. Domain knowledge is required for such creative activity, but a general ability to imagine alternatives is helpful and can be nurtured so as to become easier, quicker, more extensive, and deeper (Dewey 1910: 34–39; 1933: 40–47). Facione (1990a) and Halpern (1998) include the ability to imagine alternatives as a critical thinking ability.

Inferential abilities: The ability to draw conclusions from given information, and to recognize with what degree of certainty one’s own or others’ conclusions follow, is universally recognized as a general critical thinking ability. All 11 examples in section 2 of this article include inferences, some from hypotheses or options (as in Transit, Ferryboat and Disorder), others from something observed (as in Weather and Rash). None of these inferences is formally valid. Rather, they are licensed by general, sometimes qualified substantive rules of inference (Toulmin 1958) that rest on domain knowledge—that a bus trip takes about the same time in each direction, that the terminal of a wireless telegraph would be located on the highest possible place, that sudden cooling is often followed by rain, that an allergic reaction to a sulfa drug generally shows up soon after one starts taking it. It is a matter of controversy to what extent the specialized ability to deduce conclusions from premisses using formal rules of inference is needed for critical thinking. Dewey (1933) locates logical forms in setting out the products of reflection rather than in the process of reflection. Ennis (1981a), on the other hand, maintains that a liberally-educated person should have the following abilities: to translate natural-language statements into statements using the standard logical operators, to use appropriately the language of necessary and sufficient conditions, to deal with argument forms and arguments containing symbols, to determine whether in virtue of an argument’s form its conclusion follows necessarily from its premisses, to reason with logically complex propositions, and to apply the rules and procedures of deductive logic. Inferential abilities are recognized as critical thinking abilities by Glaser (1941: 6), Facione (1990a: 9), Ennis (1991: 9), Fisher & Scriven (1997: 99, 111), and Halpern (1998: 452).
Items testing inferential abilities constitute two of the five subtests of the Watson Glaser Critical Thinking Appraisal (Watson & Glaser 1980a, 1980b, 1994), two of the four sections in the Cornell Critical Thinking Test Level X (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005), three of the seven sections in the Cornell Critical Thinking Test Level Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005), 11 of the 34 items on Forms A and B of the California Critical Thinking Skills Test (Facione 1990b, 1992), and a high but variable proportion of the 25 selected-response questions in the Collegiate Learning Assessment (Council for Aid to Education 2017).

Experimenting abilities: Knowing how to design and execute an experiment is important not just in scientific research but also in everyday life, as in Rash. Dewey devoted a whole chapter of his How We Think (1910: 145–156; 1933: 190–202) to the superiority of experimentation over observation in advancing knowledge. Experimenting abilities come into play at one remove in appraising reports of scientific studies. Skill in designing and executing experiments includes the acknowledged abilities to appraise evidence (Glaser 1941: 6), to carry out experiments and to apply appropriate statistical inference techniques (Facione 1990a: 9), to judge inductions to an explanatory hypothesis (Ennis 1991: 9), and to recognize the need for an adequately large sample size (Halpern 1998). The Cornell Critical Thinking Test Level Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005) includes four items (out of 52) on experimental design. The Collegiate Learning Assessment (Council for Aid to Education 2017) makes room for appraisal of study design in both its performance task and its selected-response questions.

Consulting abilities: Skill at consulting sources of information comes into play when one seeks information to help resolve a problem, as in Candidate. Ability to find and appraise information includes ability to gather and marshal pertinent information (Glaser 1941: 6), to judge whether a statement made by an alleged authority is acceptable (Ennis 1962: 84), to plan a search for desired information (Facione 1990a: 9), and to judge the credibility of a source (Ennis 1991: 9). Ability to judge the credibility of statements is tested by 24 items (out of 76) in the Cornell Critical Thinking Test Level X (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005) and by four items (out of 52) in the Cornell Critical Thinking Test Level Z (Ennis & Millman 1971; Ennis, Millman, & Tomko 1985, 2005). The Collegiate Learning Assessment’s performance task requires evaluation of whether information in documents is credible or unreliable (Council for Aid to Education 2017).

Argument analysis abilities: The ability to identify and analyze arguments contributes to the process of surveying arguments on an issue in order to form one’s own reasoned judgment, as in Candidate. The ability to detect and analyze arguments is recognized as a critical thinking skill by Facione (1990a: 7–8), Ennis (1991: 9) and Halpern (1998). Five items (out of 34) on the California Critical Thinking Skills Test (Facione 1990b, 1992) test skill at argument analysis. The Collegiate Learning Assessment (Council for Aid to Education 2017) incorporates argument analysis in its selected-response tests of critical reading and evaluation and of critiquing an argument.

Judging skills and deciding skills: Skill at judging and deciding is skill at recognizing what judgment or decision the available evidence and argument supports, and with what degree of confidence. It is thus a component of the inferential skills already discussed.

Lists and tests of critical thinking abilities often include two more abilities: identifying assumptions and constructing and evaluating definitions.

In addition to dispositions and abilities, critical thinking needs knowledge: of critical thinking concepts, of critical thinking principles, and of the subject-matter of the thinking.

We can derive a short list of concepts whose understanding contributes to critical thinking from the critical thinking abilities described in the preceding section. Observational abilities require an understanding of the difference between observation and inference. Questioning abilities require an understanding of the concepts of ambiguity and vagueness. Inferential abilities require an understanding of the difference between conclusive and defeasible inference (traditionally, between deduction and induction), as well as of the difference between necessary and sufficient conditions. Experimenting abilities require an understanding of the concepts of hypothesis, null hypothesis, assumption and prediction, as well as of the concept of statistical significance and of its difference from importance. They also require an understanding of the difference between an experiment and an observational study, and in particular of the difference between a randomized controlled trial, a prospective correlational study and a retrospective (case-control) study. Argument analysis abilities require an understanding of the concepts of argument, premiss, assumption, conclusion and counter-consideration. Additional critical thinking concepts are proposed by Bailin et al. (1999b: 293), Fisher & Scriven (1997: 105–106), Black (2012), and Blair (2021).

According to Glaser (1941: 25), ability to think critically requires knowledge of the methods of logical inquiry and reasoning. If we review the list of abilities in the preceding section, however, we can see that some of them can be acquired and exercised merely through practice, possibly guided in an educational setting, followed by feedback. Searching intelligently for a causal explanation of some phenomenon or event requires that one consider a full range of possible causal contributors, but it seems more important that one implements this principle in one’s practice than that one is able to articulate it. What is important is “operational knowledge” of the standards and principles of good thinking (Bailin et al. 1999b: 291–293). But the development of such critical thinking abilities as designing an experiment or constructing an operational definition can benefit from learning their underlying theory. Further, explicit knowledge of quirks of human thinking seems useful as a cautionary guide. Human memory is not just fallible about details, as people learn from their own experiences of misremembering, but is so malleable that a detailed, clear and vivid recollection of an event can be a total fabrication (Loftus 2017). People seek or interpret evidence in ways that are partial to their existing beliefs and expectations, often unconscious of their “confirmation bias” (Nickerson 1998). Not only are people subject to this and other cognitive biases (Kahneman 2011), of which they are typically unaware, but it may be counter-productive for one to make oneself aware of them and try consciously to counteract them or to counteract social biases such as racial or sexual stereotypes (Kenyon & Beaulac 2014). 
It is helpful to be aware of these facts and of the superior effectiveness of blocking the operation of biases—for example, by making an immediate record of one’s observations, refraining from forming a preliminary explanatory hypothesis, blind refereeing, double-blind randomized trials, and blind grading of students’ work. It is also helpful to be aware of the prevalence of “noise” (unwanted unsystematic variability of judgments), of how to detect noise (through a noise audit), and of how to reduce noise: make accuracy the goal, think statistically, break a process of arriving at a judgment into independent tasks, resist premature intuitions, in a group get independent judgments first, favour comparative judgments and scales (Kahneman, Sibony, & Sunstein 2021). It is helpful as well to be aware of the concept of “bounded rationality” in decision-making and of the related distinction between “satisficing” and optimizing (Simon 1956; Gigerenzer 2001).

Critical thinking about an issue requires substantive knowledge of the domain to which the issue belongs. Critical thinking abilities are not a magic elixir that can be applied to any issue whatever by somebody who has no knowledge of the facts relevant to exploring that issue. For example, the student in Bubbles needed to know that gases do not penetrate solid objects like a glass, that air expands when heated, that the volume of an enclosed gas varies directly with its temperature and inversely with its pressure, and that hot objects will spontaneously cool down to the ambient temperature of their surroundings unless kept hot by insulation or a source of heat. Critical thinkers thus need a rich fund of subject-matter knowledge relevant to the variety of situations they encounter. This fact is recognized in the inclusion among critical thinking dispositions of a concern to become and remain generally well informed.

Experimental educational interventions, with control groups, have shown that education can improve critical thinking skills and dispositions, as measured by standardized tests. For information about these tests, see the Supplement on Assessment.

What educational methods are most effective at developing the dispositions, abilities and knowledge of a critical thinker? In a comprehensive meta-analysis of experimental and quasi-experimental studies of strategies for teaching students to think critically, Abrami et al. (2015) found that dialogue, anchored instruction, and mentoring each increased the effectiveness of the educational intervention, and that they were most effective when combined. They also found that in these studies a combination of separate instruction in critical thinking with subject-matter instruction in which students are encouraged to think critically was more effective than either by itself. However, the difference was not statistically significant; that is, it might have arisen by chance.

Most of these studies lack the longitudinal follow-up required to determine whether the observed differential improvements in critical thinking abilities or dispositions continue over time, for example until high school or college graduation. For details on studies of methods of developing critical thinking skills and dispositions, see the Supplement on Educational Methods.

12. Controversies

Scholars have denied the generalizability of critical thinking abilities across subject domains, have alleged bias in critical thinking theory and pedagogy, and have investigated the relationship of critical thinking to other kinds of thinking.

McPeck (1981) attacked the thinking skills movement of the 1970s, including the critical thinking movement. He argued that there are no general thinking skills, since thinking is always thinking about some subject-matter. It is futile, he claimed, for schools and colleges to teach thinking as if it were a separate subject. Rather, teachers should lead their pupils to become autonomous thinkers by teaching school subjects in a way that brings out their cognitive structure and that encourages and rewards discussion and argument. As some of his critics (e.g., Paul 1985; Siegel 1985) pointed out, McPeck’s central argument needs elaboration, since it has obvious counter-examples in writing and speaking, for which (up to a certain level of complexity) there are teachable general abilities even though they are always about some subject-matter. To make his argument convincing, McPeck needs to explain how thinking differs from writing and speaking in a way that does not permit useful abstraction of its components from the subject-matters with which it deals. He has not done so. Nevertheless, his position that the dispositions and abilities of a critical thinker are best developed in the context of subject-matter instruction is shared by many theorists of critical thinking, including Dewey (1910, 1933), Glaser (1941), Passmore (1980), Weinstein (1990), Bailin et al. (1999b), and Willingham (2019).

McPeck’s challenge prompted reflection on the extent to which critical thinking is subject-specific. McPeck argued for a strong subject-specificity thesis, according to which it is a conceptual truth that all critical thinking abilities are specific to a subject. (He did not however extend his subject-specificity thesis to critical thinking dispositions. In particular, he took the disposition to suspend judgment in situations of cognitive dissonance to be a general disposition.) Conceptual subject-specificity is subject to obvious counter-examples, such as the general ability to recognize confusion of necessary and sufficient conditions. A more modest thesis, also endorsed by McPeck, is epistemological subject-specificity, according to which the norms of good thinking vary from one field to another. Epistemological subject-specificity clearly holds to a certain extent; for example, the principles in accordance with which one solves a differential equation are quite different from the principles in accordance with which one determines whether a painting is a genuine Picasso. But the thesis suffers, as Ennis (1989) points out, from vagueness of the concept of a field or subject and from the obvious existence of inter-field principles, however broadly the concept of a field is construed. For example, the principles of hypothetico-deductive reasoning hold for all the varied fields in which such reasoning occurs. A third kind of subject-specificity is empirical subject-specificity, according to which as a matter of empirically observable fact a person with the abilities and dispositions of a critical thinker in one area of investigation will not necessarily have them in another area of investigation.

The thesis of empirical subject-specificity raises the general problem of transfer. If critical thinking abilities and dispositions have to be developed independently in each school subject, how are they of any use in dealing with the problems of everyday life and the political and social issues of contemporary society, most of which do not fit into the framework of a traditional school subject? Proponents of empirical subject-specificity tend to argue that transfer is more likely to occur if there is critical thinking instruction in a variety of domains, with explicit attention to dispositions and abilities that cut across domains. But evidence for this claim is scanty. There is a need for well-designed empirical studies that investigate the conditions that make transfer more likely.

It is common ground in debates about the generality or subject-specificity of critical thinking dispositions and abilities that critical thinking about any topic requires background knowledge about the topic. For example, the most sophisticated understanding of the principles of hypothetico-deductive reasoning is of no help unless accompanied by some knowledge of what might be plausible explanations of some phenomenon under investigation.

Critics have objected to bias in the theory, pedagogy and practice of critical thinking. Commentators (e.g., Alston 1995; Ennis 1998) have noted that anyone who takes a position has a bias in the neutral sense of being inclined in one direction rather than others. The critics, however, are objecting to bias in the pejorative sense of an unjustified favoring of certain ways of knowing over others, frequently alleging that the unjustly favoured ways are those of a dominant sex or culture (Bailin 1995).

A common thread in this smorgasbord of accusations is dissatisfaction with focusing on the logical analysis and evaluation of reasoning and arguments. While these authors acknowledge that such analysis and evaluation is part of critical thinking and should be part of its conceptualization and pedagogy, they insist that it is only a part. Paul (1981), for example, bemoans the tendency of atomistic teaching of methods of analyzing and evaluating arguments to turn students into more able sophists, adept at finding fault with positions and arguments with which they disagree but even more entrenched in the egocentric and sociocentric biases with which they began. Martin (1992) and Thayer-Bacon (1992) cite with approval the self-reported intimacy with their subject-matter of leading researchers in biology and medicine, an intimacy that conflicts with the distancing allegedly recommended in standard conceptions and pedagogy of critical thinking. Thayer-Bacon (2000) contrasts the embodied and socially embedded learning of her elementary school students in a Montessori school, who used their imagination, intuition and emotions as well as their reason, with conceptions of critical thinking as

thinking that is used to critique arguments, offer justifications, and make judgments about what are the good reasons, or the right answers. (Thayer-Bacon 2000: 127–128)

Alston (2001) reports that her students in a women’s studies class were able to see the flaws in the Cinderella myth that pervades much romantic fiction but in their own romantic relationships still acted as if all failures were the woman’s fault and still accepted the notions of love at first sight and living happily ever after. Students, she writes, should

be able to connect their intellectual critique to a more affective, somatic, and ethical account of making risky choices that have sexist, racist, classist, familial, sexual, or other consequences for themselves and those both near and far… critical thinking that reads arguments, texts, or practices merely on the surface without connections to feeling/desiring/doing or action lacks an ethical depth that should infuse the difference between mere cognitive activity and something we want to call critical thinking. (Alston 2001: 34)

Some critics portray such biases as unfair to women. Thayer-Bacon (1992), for example, has charged modern critical thinking theory with being sexist, on the ground that it separates the self from the object and causes one to lose touch with one’s inner voice, and thus stigmatizes women, who (she asserts) link self to object and listen to their inner voice. Her charge does not imply that women as a group are on average less able than men to analyze and evaluate arguments. Facione (1990c) found no difference by sex in performance on his California Critical Thinking Skills Test. Kuhn (1991: 280–281) found no difference by sex in either the disposition or the competence to engage in argumentative thinking.

The critics propose a variety of remedies for the biases that they allege. In general, they do not propose to eliminate or downplay critical thinking as an educational goal. Rather, they propose to conceptualize critical thinking differently and to change its pedagogy accordingly. Their pedagogical proposals arise logically from their objections.

A common thread in these proposals is treatment of critical thinking as a social, interactive, personally engaged activity like that of a quilting bee or a barn-raising (Thayer-Bacon 2000) rather than as an individual, solitary, distanced activity symbolized by Rodin's The Thinker. One can get a vivid description of education with the former type of goal from the writings of bell hooks (1994, 2010). Critical thinking for her is open-minded dialectical exchange across opposing standpoints and from multiple perspectives, a conception similar to Paul's "strong sense" critical thinking (Paul 1981). She abandons the structure of domination in the traditional classroom. In an introductory course on black women writers, for example, she assigns students to write an autobiographical paragraph about an early racial memory, then to read it aloud as the others listen, thus affirming the uniqueness and value of each voice and creating a communal awareness of the diversity of the group's experiences (hooks 1994: 84). Her "engaged pedagogy" is thus similar to the "freedom under guidance" implemented in John Dewey's Laboratory School of Chicago in the late 1890s and early 1900s. It incorporates the dialogue, anchored instruction, and mentoring that Abrami et al. (2015) found to be most effective in improving critical thinking skills and dispositions.

What is the relationship of critical thinking to problem solving, decision-making, higher-order thinking, creative thinking, and other recognized types of thinking? One’s answer to this question obviously depends on how one defines the terms used in the question. If critical thinking is conceived broadly to cover any careful thinking about any topic for any purpose, then problem solving and decision making will be kinds of critical thinking, if they are done carefully. Historically, ‘critical thinking’ and ‘problem solving’ were two names for the same thing. If critical thinking is conceived more narrowly as consisting solely of appraisal of intellectual products, then it will be disjoint with problem solving and decision making, which are constructive.

Bloom’s taxonomy of educational objectives used the phrase “intellectual abilities and skills” for what had been labeled “critical thinking” by some, “reflective thinking” by Dewey and others, and “problem solving” by still others (Bloom et al. 1956: 38). Thus, the so-called “higher-order thinking skills” at the taxonomy’s top levels of analysis, synthesis and evaluation are just critical thinking skills, although they do not come with general criteria for their assessment (Ennis 1981b). The revised version of Bloom’s taxonomy (Anderson et al. 2001) likewise treats critical thinking as cutting across those types of cognitive process that involve more than remembering (Anderson et al. 2001: 269–270). For details, see the Supplement on History.

As to creative thinking, it overlaps with critical thinking (Bailin 1987, 1988). Thinking about the explanation of some phenomenon or event, as in Ferryboat , requires creative imagination in constructing plausible explanatory hypotheses. Likewise, thinking about a policy question, as in Candidate , requires creativity in coming up with options. Conversely, creativity in any field needs to be balanced by critical appraisal of the draft painting or novel or mathematical theory.

Copyright © 2022 by David Hitchcock &lt;hitchckd@mcmaster.ca&gt;

The Stanford Encyclopedia of Philosophy is copyright © 2022 by The Metaphysics Research Lab, Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Original Research Article: Epistemic Emotions and Epistemic Cognition Predict Critical Thinking About Socio-Scientific Issues

When thinking critically about socio-scientific issues, individuals’ expectations about the nature of knowledge and knowing, as well as their emotions when these expectations are met or not, may play an important role in critical thinking. In this study, we examined the role of epistemic emotions in mediating the effects of epistemic cognition on critical thinking when contending with conflicting information about genetically modified foods. Two hundred four university students completed a prior knowledge test on genetically modified foods, and then reported their epistemic beliefs about genetically modified foods. Participants then read a text that presented advantages and disadvantages of genetically modified foods, and reported the epistemic emotions they experienced while reading that text. Participants then composed an argumentative essay about genetically modified foods, which was coded for critical thinking. Results from path analysis revealed that a belief in complex knowledge predicted less surprise and confusion, but more enjoyment. For the source of knowledge, a belief in the active construction of knowledge predicted less surprise and enjoyment. For justification for knowing, a belief that knowledge should be critically evaluated positively predicted curiosity, and negatively predicted confusion and boredom. Moreover, beliefs that knowledge about genetically modified foods is complex and uncertain positively predicted critical thinking. Confusion and anxiety also positively predicted critical thinking, whereas frustration negatively predicted it. Lastly, confusion mediated relations between epistemic beliefs and critical thinking. Results suggest complex relations between epistemic cognition, epistemic emotions, and critical thinking that have implications for educational practice as well as for future research on epistemic cognition and epistemic emotions.

Introduction

The information landscape in the 21st century is one of contrast. On the one hand, the Internet and social media provide an unprecedented wealth of diverse and accessible information from around the world. On the other hand, the structure of social networks and algorithmic filtering (e.g., news feeds and recommendations) have considerably narrowed the breadth of content that individuals consume, making it increasingly difficult to escape echo chambers and challenge one’s views with new information. In this context, any topic is liable to become an object of controversy. Topics of personal and global relevance, such as ways to combat climate change or the safety of infant vaccines, have become politically controversial, dividing the public’s opinion on what is considered accurate information and stifling political action. To make informed decisions individually and collectively, the challenge lies in overcoming personal biases and weighing the pros and cons of conflicting perspectives to reconcile views (Noroozi et al., 2018). This is one aspect of the process known as critical thinking (Kuhn, 2018).

There is little debate over the idea that society benefits when individuals are able to think deeply and critically about important issues (e.g., Dewey, 1933 ; Halpern, 2014 ). Educating people to become critical thinkers is of vital importance for the well-being of future generations. Accordingly, the Organization for Economic Cooperation and Development (OECD; Tremblay et al., 2012 ) has made teaching critical thinking a priority for higher education. However, empirical research shows that teaching critical thinking skills is arduous and often unyielding ( Abrami et al., 2008 ; Niu et al., 2013 ; Huber and Kuncel, 2015 ), with up to 45% of students completing post-secondary degrees lacking these essential skills ( Arum and Roksa, 2011 ). In light of these observations, many have suggested that to improve critical thinking outcomes, empirical work is needed to achieve a greater understanding of the underlying cognitive, motivational, and affective mechanisms that enable critical thinking ( Alexander, 2014 ; Greene and Yu, 2014 ; Bråten, 2016 ).

Socio-scientific topics are often characterized by the presence of opposing views that offer conflicting explanations of complex and multifaceted phenomena (Levinson, 2006). Deciding what to believe or what to do about these topics requires that individuals engage with the underlying issues of knowledge that characterize them: What counts as knowledge? How certain are the facts? Who can be trusted to provide a clear perspective on the topic? In other words, thinking critically about socio-scientific topics requires thinking about the knowledge- and knowing-related aspects of these issues (Greene and Yu, 2014), a process termed epistemic cognition (Greene et al., 2016). However, when engaged with complex and conflicting issues, individuals’ expectations about the nature of knowledge and knowing may be challenged, and in turn elicit emotions such as surprise, curiosity, confusion, frustration, or anxiety (Muis et al., 2018).

Common understandings of critical thinking assume that emotions have no role to play in critical thinking, except perhaps to introduce unwarranted bias (Kahneman, 2011). However, knowing and feeling are closely related, and emotions may play a significant role in helping individuals disentangle the two (Brun and Kuenzle, 2008). For example, Tiedens and Linton (2001) suggest that emotions can serve as information about the state of certainty. To illustrate, when presented with a knowledge claim, feelings of uncertainty may lead an individual to doubt the veracity of that claim. This uncertainty may then lead to a more thorough treatment of information and greater attention to the quality of arguments rather than the source’s characteristics. Nonetheless, little is known about how cognitive and affective processes interact to predict critical thinking. As such, the aim of the current study is to shed light on the role that epistemic cognition and epistemic emotions play when thinking critically about socio-scientific issues. In the following sections, we define the concepts of critical thinking, epistemic cognition, and epistemic emotions, and review the theoretical and empirical work that informed the hypotheses of the current study.

Thinking Critically About Controversial Topics

Critical thinking is regarded as one of the most important skills that individuals can develop and is a fundamental aim of education (Bailin and Siegel, 2003; Halpern, 2014). Though several definitions of critical thinking are offered in the literature (e.g., Kurfiss, 1988; Siegel, 1988; Facione, 1990; Scriven and Paul, 1996; Litman, 2008; Ennis, 2018), Ennis (2018) argued that they do not significantly differ from each other. Drawing on these definitions, we define critical thinking as purposeful, reasonable and reflective thinking that enables individuals to decide what to believe or what to do when faced with complex and conflicting issues (Facione, 1990; Ennis, 2018). Following Kuhn (2018), we further define critical thinking as incorporating two key dimensions: inquiry (input) and argument (output).

According to Kuhn (2018) , these two key dimensions of critical thinking can be delineated as an input phase and an output phase. Inquiry, the input phase, captures what an individual does as they are faced with complex and conflicting issues. Critical thinking during this phase includes skills like identifying pertinent information, evaluating claims, identifying counter-arguments, and critically analyzing and synthesizing information. These processes are carried out for the ultimate purpose of bringing this newly synthesized information to bear on a claim, which leads to the second dimension of critical thinking: argument.

Argument refers to a product, constructed in written or oral form, that consists of a claim and one or more supporting reasons or pieces of evidence connected to the claim by warrants (Toulmin, 2003). Argumentation refers to the dynamic process of creating the argument (Kuhn et al., 2015). As such, the output phase refers to the actions or processes of reasoning systematically in support of an idea, action or theory. Argumentation can be captured via dialogic methods (Kuhn, 2018) or via argumentative essay writing (Noroozi et al., 2018; Latifi et al., 2019; Valero Haro et al., 2019). For example, high-quality argumentative essays encompass a clear claim supported by evidence and reason, followed by acknowledgment of counter-arguments against the original claim, and an integration of the arguments and counter-arguments that eventually leads to the final conclusion (Noroozi et al., 2016). The goal is to provide strong evidence to support one argument over another by weakening the other position (Kuhn, 2018).
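As a rough illustration (not part of the study), the argument layout just described, a claim plus supporting reasons connected to it by warrants, with counter-arguments acknowledged, can be sketched as a small data structure. All names here are illustrative, and the sketch omits parts of Toulmin's full scheme (backing, qualifiers, rebuttals):

```python
from dataclasses import dataclass, field

@dataclass
class Support:
    evidence: str  # a reason or datum offered in support of the claim
    warrant: str   # why this evidence bears on the claim

@dataclass
class Argument:
    claim: str
    supports: list[Support] = field(default_factory=list)
    counter_arguments: list[str] = field(default_factory=list)  # acknowledged objections

    def is_well_formed(self) -> bool:
        """A high-quality essay, in the sense above, pairs a clear claim with
        at least one supported reason and acknowledges a counter-argument."""
        return bool(self.claim and self.supports and self.counter_arguments)

# Example in the spirit of the study's topic:
gm_argument = Argument(
    claim="Genetically modified foods are safe for consumption",
    supports=[Support(
        evidence="Feeding studies report no harm",
        warrant="Absence of harm in controlled studies bears on safety")],
    counter_arguments=["Long-term ecological effects remain uncertain"],
)
```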

Recent research has shown that critical thinking skills differ across academic disciplines ( Gordon, 2000 ) given that various disciplines have different argumentation structures, epistemologies, and rules and goals ( Noroozi et al., 2016 ). For instance, in nursing, critical thinking is concerned with rigorous investigation and reflection on all aspects of a clinical situation to decide on an appropriate course of action ( Simpson and Courtney, 2002 ). In engineering, critical thinking consists of considering assumptions in problem-solving, selecting appropriate methods for experiments, structuring open-ended design problems, and assessing social impacts ( Claris and Riley, 2012 ). When it comes to taking a position on a socio-scientific issue such as genetically modified foods, the task of critical thinking rests on identifying opposing arguments, assumptions, and evidence, evaluating the credibility, reliability, and relevance of claims, producing valid explanations and arguments, and making decisions or drawing valid conclusions ( Facione, 1990 ; Kuhn and Crowell, 2011 ; Noroozi et al., 2016 ; Latifi et al., 2019 ).

Bailin and Siegel (2003) , as well as other philosophical theorists of critical thinking (e.g., Paul, 1990 ), emphasized the importance of generalizable abilities such as assessing reasons, evaluating claims, identifying underlying assumptions, and recognizing and applying valid forms of justification. They argue that what is “critical” about critical thinking is the use of a criterion—an epistemic criterion—for evaluating reasons and making sound judgments. The generalizable reasoning abilities described by Bailin and Siegel (2003) have long been studied by educational and developmental psychologists in the field of epistemic cognition (e.g., King and Kitchener, 2002 ; Chinn et al., 2011 ; Hofer and Bendixen, 2012 ; Greene et al., 2016 ). Epistemic cognition concerns individuals’ thoughts and beliefs about the nature of knowledge and the process of knowing ( Hofer and Pintrich, 1997 ). From the perspective of educational development, Kuhn (1991 , 1999) identified the development of epistemic cognition as perhaps the most central underpinning of critical thinking.

The Role of Epistemic Cognition in Critical Thinking

Epistemic Cognition

Epistemic cognition refers to how individuals vet, acquire, understand, justify, and use knowledge ( Greene et al., 2016 ). Specifically, individuals engage in epistemic cognition when they activate personal beliefs about the nature of knowledge and knowing (i.e., epistemic beliefs), define epistemic aims and criteria for knowing, and use evaluation and justification strategies to address issues of knowledge and knowing ( Chinn et al., 2011 ; Barzilai and Zohar, 2014 ; Muis et al., 2018 ). The vast majority of research on epistemic cognition has focused on epistemic beliefs, which refer to individuals’ personal beliefs about the nature of knowledge and the process of knowing ( Hofer and Pintrich, 1997 ). Hofer and Pintrich (1997) proposed that epistemic beliefs comprise four dimensions: (1) the complexity of knowledge, ranging from the belief that knowledge consists of a simple accumulation of facts, to the belief that knowledge consists of a complex structure of interrelated propositions; (2) the uncertainty of knowledge, ranging from the belief that knowledge is certain and unchanging, to the belief that knowledge is tentative and evolving; (3) the sources of knowing, ranging from the view that knowledge resides in external authorities, to the view that individuals are knowers who actively construct knowledge; and (4) the justification for knowing, which addresses how individuals evaluate knowledge claims, from an unquestioning reliance on authorities, to the evaluation and integration of evidence and arguments from various sources.

Numerous empirical studies have shown that individuals who adopt more constructivist epistemic cognition (e.g., who believe that knowledge is complex, tentative, actively constructed, and justified via evaluation) use better learning strategies ( Chiu et al., 2013 ; Muis et al., 2015 ), show better self-regulation during problem solving ( Muis et al., 2015 ), and attain greater academic performance ( Bråten et al., 2014 ) than those who adopt less constructivist epistemic cognition (i.e., who believe that knowledge is simple, certain, handed down from, and justified by authorities).

Relations Between Epistemic Cognition and Critical Thinking

Across multiple studies, more constructivist epistemic cognition has been positively associated with critical thinking. Specifically, compared to individuals with less constructivist epistemic cognition, constructivists are better at identifying the elements of discourse (i.e., assumptions, evidence, arguments; Mason and Boscolo, 2004) and understanding authors’ viewpoints (Barzilai and Eshet-Alkalai, 2015) when reading texts that comprise conflicting perspectives. Similarly, when contending with multiple sources of information, individuals who engage in more constructivist epistemic cognition perform better than those with less constructivist views at evaluating the trustworthiness and credibility of information using the features of the sources, distinguishing between types of sources, making associations between a source and its content, using criteria to evaluate the trustworthiness of sources, and using source integration strategies (Barzilai and Zohar, 2012; Bråten et al., 2014; Strømsø and Bråten, 2014; McGinnis, 2016).

More constructivist beliefs about the justification for knowing have been associated with the use of more competent criteria to evaluate the trustworthiness of sources ( Strømsø et al., 2011 ). Moreover, learners with more constructivist epistemic cognition have been found to possess greater argumentative skills ( Mason and Boscolo, 2004 ; Yang and Tsai, 2010 ; Noroozi, 2018 ). Constructivists are also better able to support their statements with acceptable, relevant, and multiple justifications ( Mason and Scirica, 2006 ). In sum, individuals who engage in more constructivist epistemic cognition are more likely to possess the cognitive skills necessary to think critically. In support of this, Muis and Duffy (2013) found that graduate students who received an intervention designed to develop more constructivist epistemic beliefs over the course of a semester also showed more critical thinking when learning statistics.

Research has also shown that, compared to less constructivist epistemic cognition, more constructivist epistemic cognition has been related to the will to take on multiple perspectives, reconsider one’s own thinking when drawing conclusions about controversial issues ( Schommer-Aikins and Hutter, 2002 ), engage in effortful thinking ( Hyytinen et al., 2014 ), and display skepticism toward unreliable sources ( McGinnis, 2016 ). Though motivational and affective dispositions have theoretically been proposed to support critical thinking within the epistemic cognition literature ( Chinn et al., 2011 ; Muis et al., 2015 , 2018 ), little research has been conducted to understand how epistemic cognition relates to the affective states that dispose learners to think critically.

Epistemic Emotions and Critical Thinking

There is increasing evidence for the important role of emotions for learning processes and outcomes. Empirical research has related emotions to academic motivation, knowledge building and revision, as well as academic performance ( Pekrun and Linnenbrink-Garcia, 2014 ). Broadly, emotions are defined by interrelated psychological processes that include affective (e.g., feeling nervous), cognitive (e.g., ruminating thoughts), motivational (e.g., a desire to escape), expressive (e.g., displaying a frown), and physiological (e.g., increased heart rate) components ( Ellsworth, 2013 ; Shuman and Scherer, 2014 ). Emotions can generally be classified in terms of valence, where pleasant emotions are positive and unpleasant emotions are negative (e.g., enjoyment is positive, surprise is neutral, frustration is negative), and level of activation (e.g., anxiety is activating, boredom is deactivating; see Pekrun and Stephens, 2012 ).

In educational psychology, one important line of research has concerned achievement emotions, that is, emotions tied to achievement activities (e.g., studying) or achievement outcomes (success or failure), such as anxiety, pride, or shame. However, not all emotions triggered in educational settings are related to achievement. Notably, Pekrun and Stephens (2012) distinguished topic emotions and social emotions, as well as epistemic emotions. Topic emotions relate to the content of learning (e.g., pride when learning about the American space program), whereas social emotions concern relations to others in the learning context (e.g., compassion, gratitude; Weiner, 2007). Of particular relevance to critical thinking, epistemic emotions relate to the perceived quality of knowledge and the processing of information (Pekrun and Stephens, 2012).

Muis et al. (2018) proposed that epistemic emotions arise as the result of appraisals of alignment or misalignment between the characteristics of incoming messages and individuals’ cognitive characteristics, including prior knowledge, epistemic beliefs, and epistemic aims. In the context of contending with socio-scientific issues such as climate change, vaccination, or genetically modified foods, incoming messages are likely to be characterized by knowledge claims that are complex that also include a degree of uncertainty ( Levinson, 2006 ). For individuals seeking simple and certain answers, engaging with such content may trigger a variety of epistemic emotions such as confusion, frustration, or anxiety. However, faced with the same content, individuals who expect knowledge to be uncertain and tentative, and who see value in consulting multiple sources before coming to a conclusion, may experience curiosity and enjoyment ( Muis et al., 2015 ). When presented with tasks that engage individuals’ beliefs about the nature of knowledge and knowing, frequently occurring epistemic emotions include surprise, curiosity, enjoyment, confusion, frustration, anxiety, and boredom ( Muis et al., 2015 ; Pekrun et al., 2017 ).

Surprise is likely to occur when individuals appraise new information as unexpected (Meyer et al., 1997) or when they are unable to generate an explanation for it (Foster and Keane, 2015). Mildly surprising information can lead to deep processing and integration of information, whereas greatly surprising information can be regarded as implausible and may fail to be integrated (Munnich and Ranney, 2018). When information is not overly complex, or is perceived as relatively comprehensible, curiosity may arise. Litman (2008) proposes that epistemic curiosity arises in one of two forms: as a pleasant desire for information (i.e., interest-type curiosity), or as an unpleasant urge to obtain information to close the gap between what one knows and what one wants to know (i.e., deprivation-type curiosity; see also Loewenstein, 1994; Markey and Loewenstein, 2014). If curiosity is pursued, enjoyment may ensue, for instance, when validation or verification of a hypothesis is achieved (Brun and Kuenzle, 2008), or when an epistemic aim is attained (Chinn et al., 2011; Muis et al., 2018). Confusion, on the other hand, follows from a lack of understanding when novel and complex information is perceived as incomprehensible (Muis et al., 2018). Confusion can also arise in the face of severe discrepancies or contradictions, or from disruptions of goals or sequences of action (D’Mello and Graesser, 2012). If an individual repeatedly fails to resolve the discrepancy causing confusion, frustration may arise (D’Mello and Graesser, 2012; Di Leo et al., 2019; Munzar et al., 2021). Frustration can be described as a blend of anger and disappointment and, as such, can be an activating emotion when closer to anger, or deactivating when closer to disappointment (Pekrun et al., 2002).

Another negative emotion is anxiety, which arises when a message implicates knowledge that is core to one’s identity. Individuals may begin to doubt or feel uncertain about their beliefs in a proposition, and feel that their identity is threatened ( Hookway, 2008 ). Pekrun (2006) described anxiety as a “complex” emotion that can either benefit or hinder motivation to engage in effortful thinking. On the one hand, anxiety can reduce cognitive resources such as memory, leading to poor performance on complex or difficult tasks, as well as poor academic achievement (see Pekrun et al., 2002 ; Zeidner, 2014 ). However, for some individuals, anxiety can increase extrinsic motivation to invest effort in complex processes such as analytical and critical thinking to avoid goal-related failure. Lastly, boredom may arise when information is unchallenging or when an intense negative emotion like frustration or anxiety precipitates disengagement ( D’Mello et al., 2014 ).

Consequences of Epistemic Emotions

Pekrun (Pekrun et al., 2002; Pekrun, 2006; Pekrun and Perry, 2014) proposed that individuals process information in emotion-congruent ways. Specifically, Pekrun and colleagues proposed that positive emotions (e.g., interest-type curiosity and enjoyment) signal that the object of judgment is valuable, leading to more positive evaluations, greater effort to engage, more elaboration of content, and more purposeful thinking than negative emotions. On the other hand, negative emotions (e.g., frustration, anxiety, and boredom) have been related to more negative evaluations, less effort to engage (anxiety may be an exception), less elaboration of content, and more irrelevant thinking (see Pekrun et al., 2002 for a review). Further, positive emotions have been found to facilitate holistic, intuitive, and creative ways of thinking, whereas negative emotions have been associated with more focused, detail-oriented, analytical, and rigid modes of processing information (e.g., Bless et al., 1996).

Thus, critical thinking is theorized to be facilitated by optimal levels of surprise and positive emotions such as curiosity and enjoyment and hindered by certain negative emotions such as frustration and boredom. On the other hand, other negative emotions such as anxiety and confusion may be beneficial for critical thinking: D’Mello and Graesser (2014) argued that confusion is central to complex learning activities such as problem-solving and generating cohesive arguments. As such, confusion is expected to be beneficial to critical thinking because it signals that there is something wrong with the current state of affairs, which can precipitate critical thinking. However, this expectation holds only if individuals resolve confusion when it arises ( D’Mello and Graesser, 2014 ; Munzar et al., 2021 ). Indeed, as previous research has shown, when confusion is not resolved, this leads to frustration and disengagement from the task and can lower achievement outcomes ( Munzar et al., 2021 ). Similarly, anxiety in the face of complex and conflicting information may motivate critical thinking via effortful thinking to reduce the discomfort of anxiety but may also result in a decrease in critical thinking if anxiety consumes cognitive resources ( Meinhardt and Pekrun, 2003 ).

Empirical Evidence

To date, little theoretical and empirical work has explored how epistemic cognition relates to epistemic emotions experienced when contending with complex or conflicting information. To address this gap, Muis et al. (2015) examined relations between epistemic cognition, epistemic emotions, learning strategies—including critical thinking—and learning achievement in the context of learning about climate change. They hypothesized that individuals with more constructivist beliefs would experience more positive emotions given the consistency between the to-be-learned content and their epistemic beliefs, whereas individuals with less constructivist beliefs would experience more negative emotions given the conflicting perspectives presented to them on the causes and consequences of climate change. Results from path analyses revealed that individuals who espoused more constructivist epistemic beliefs about the justification for knowing used more critical thinking strategies, and that this relationship was mediated by curiosity: The more learners believed that knowledge is justified by systematic inquiry and integration of sources of information, the more they experienced curiosity and, in turn, the more they used critical thinking and attained greater learning achievement. They also found that surprise negatively predicted critical thinking, but surprise was not predicted by any epistemic belief dimension.

In sum, significant relations between epistemic cognition, epistemic emotions, and critical thinking are suggested in the literature. However, the studies reviewed were predominantly designed to assess relations between epistemic beliefs, epistemic emotions, and critical thinking during learning; they did not instruct participants to think critically. As Greene et al. (2014) argued, the study of epistemic cognition and critical thinking should involve the need to argue for, and justify, conclusions drawn across sources and perspectives. As such, to fully understand the role of epistemic cognition and epistemic emotions in critical thinking, more research is needed wherein individuals are asked to engage in critical thinking during a complex learning task. We address this gap in the literature.

The Current Study

On the basis of theoretical and empirical considerations from Muis et al. (2015 , 2018) , Pekrun ( Pekrun et al., 2002 , 2017 ; Pekrun, 2006 ), as well as from the work of D’Mello and colleagues ( D’Mello and Graesser, 2012 ; D’Mello et al., 2014 ), we propose the following hypotheses: (1) Epistemic beliefs will predict critical thinking. Specifically, more constructivist beliefs will positively predict critical thinking. (2) Epistemic beliefs will predict epistemic emotions. Specifically, more constructivist epistemic beliefs will positively predict positive epistemic emotions, including interest-type curiosity and enjoyment, and negatively predict surprise and negative emotions, including confusion, frustration, anxiety and boredom. (3) Epistemic emotions will predict critical thinking. Specifically, surprise, curiosity, enjoyment, confusion, and anxiety will positively predict critical thinking, whereas frustration and boredom will negatively predict critical thinking. (4) Epistemic emotions will mediate relations between epistemic beliefs and critical thinking.

Materials and Methods

To test these hypotheses, we designed a study that specifically embedded a task that challenged individuals to critically evaluate knowledge claims from opposing perspectives, and to take a position on the topic in the form of an argumentative essay. The topic selected was genetically modified foods. Participants first took a knowledge assessment test to assess baseline knowledge about genetically modified foods, reported their epistemic beliefs about genetically modified foods, and then read a text on genetically modified foods that was comprised of two parts. The first part of the text was informative in nature and written in the style of a refutation text to ensure that all participants would engage in essay writing with good baseline knowledge about the nature of genetically modified foods. Refutation texts address commonly held misconceptions and directly refute them by presenting correct scientific explanations ( Sinatra and Broughton, 2011 ). The effectiveness of refutation texts for facilitating the revision of misconceptions has been well documented (see Tippett, 2010 ). The second part of the text was argumentative in nature and presented a series of points for and against genetically modified foods. These points were supported by evidence that varied in strength and degree of certainty, but all information provided was valid. After having read the experimental text, participants wrote an argumentative essay arguing either for or against genetically modified foods.

Participants were recruited from three research-intensive universities from Eastern Canada (40.7%), Western Canada (26.5%), and the Southern United States (32.8%). Ethics approval was first obtained from the ethics review board of each participating university. To recruit participants, flyers were posted around university campuses, advertisements were posted on university websites, and subject pools from psychology courses were used. Participants provided informed consent to participate in the study and then completed a prior knowledge test and the Topic-Specific Epistemic Beliefs Questionnaire ( Bråten and Strømsø, 2009 ) to assess epistemic beliefs about genetically modified foods. Participants were then randomly assigned to read a version of the text that presented the advantages of genetically modified foods first ( n = 102), or the disadvantages of genetically modified foods first ( n = 102). After reading, participants completed the Epistemic Emotions Survey ( Pekrun et al., 2017 ) to capture the epistemic emotions they experienced while reading. Lastly, participants composed an argumentative essay and then completed a demographics questionnaire to conclude the study. Participants were compensated for their time with $15 cash, a $10 gift card, or course credit, depending on the university from which the participant was recruited. Figure 1 provides an overview of the procedure.


Figure 1. Overview of procedure.

Participants

Two hundred four university students from three universities across Canada and the United States participated. See Table 1 for a breakdown of all demographic characteristics of the sample by gender, year in university, race, and first language spoken. No differences between groups were found on any of the variables of interest as a function of university location, gender, year in university, or first language spoken. Participants studied a variety of domains (e.g., business administration, social sciences, natural sciences, computer sciences, psychology, linguistics, and arts) and reported an average GPA of 3.24 out of 4.0 (SD = 0.55). Participants from the Western Canadian institution reported significantly lower GPA ( M = 2.97, SD = 0.67) than participants from the Eastern Canadian ( M = 3.43, SD = 0.44) and Southern American institutions [ M = 3.30, SD = 0.39; F (2, 124) = 10.02, p < 0.001]. Overall, no significant differences were observed between Canadian ( M = 3.23, SD = 0.60) versus American ( M = 3.30, SD = 0.39) participants in terms of reported GPA. Participants were 21.46 years of age on average (SD = 4.28).


Table 1. Demographic characteristics of the sample.

Experimental Text

Participants were given a text that first presented factual information about genetically modified foods, followed by a portion that presented advantages and disadvantages of genetically modified foods. The first half of the text was adapted from Heddy et al. (2017) and focused on debunking four common misconceptions about genetically modified foods by presenting accurate scientific explanations. Erroneous conceptions included the notion that genetically modifying food is the same process as cloning, that it involves injecting hormones into a plant or animal, that it only occurs in laboratories by scientists, and that it is the product of contemporary scientific research.

The second part of the text presented four advantages of, and four criticisms of, genetically modified foods. It was written by the first author and adapted from content published by the Canadian Standards Association ( Whitman, 2000 ). To counterbalance a possible effect of text order with regard to the presentation of the advantages and disadvantages of genetically modified foods, two versions of the text were created: one version presented the advantages first, followed by the disadvantages, and the other version presented the disadvantages first, then the advantages. The text contained 1,295 words in total, including the informative and argumentative sections, with a Flesch-Kincaid index of grade 12.7 and a Flesch Reading Ease index of 37.7 (see Kincaid et al., 1975 ).
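Both readability indices reported above are simple functions of word, sentence, and syllable counts. As an illustration, the standard Flesch formulas ( Kincaid et al., 1975 ) can be computed as follows; the counts passed in below are hypothetical, not the actual counts for the experimental text:

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher scores indicate easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: approximate U.S. school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical sentence and syllable counts, for illustration only
fre = flesch_reading_ease(1295, 55, 2200)
fkgl = flesch_kincaid_grade(1295, 55, 2200)
```

Longer sentences and more syllables per word lower the Reading Ease score and raise the grade level, which is why the argumentative text scores at a university reading level.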

Prior Knowledge Test

Participants’ prior knowledge about genetically modified foods was measured with a 10-item multiple-choice test adapted from Heddy et al. (2017) . Each question presented four possible choices and participants were instructed to select the best answer. Examples of items include: “Cross-pollination is considered to be a process through which plants can be… (a) genetically modified. (b) cloned. (c) hormone injected. (d) exactly replicated.” “Methods that are NOT used in producing genetically modified foods include which of the following? (a) Gene cloning methods. (b) Hormone injection. (c) Cross pollination. (d) Selective pollination.” Correct answers were given a score of 1 and incorrect answers were given a score of 0. Scores were summed and converted to a percentage, which was used as an indicator of prior knowledge.
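The scoring described above is a straightforward sum-then-percentage computation. A minimal sketch (the answer key below is hypothetical, not the test's actual key):

```python
ANSWER_KEY = ["a", "b", "a", "d", "c", "a", "b", "c", "d", "a"]  # hypothetical key

def prior_knowledge_score(responses, key=ANSWER_KEY):
    """Score 1 per correct answer, 0 otherwise; return percentage correct."""
    total = sum(int(r == k) for r, k in zip(responses, key))
    return 100 * total / len(key)
```

A participant matching 7 of the 10 keyed answers would receive a prior knowledge score of 70%.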

A confirmatory factor analysis (CFA) was conducted to examine the factor structure of the prior knowledge test using Mplus Version 7.11 ( Muthén and Muthén, 2015 ). The initial model revealed a poor fit, χ2 = 103.94, df = 35, p < 0.001, RMSEA = 0.05, and CFI = 0.88. An analysis of item loadings revealed low loadings for two items; therefore, these items were deleted. The final model (with the remaining eight items) resulted in a good fit, χ2 = 64.14, df = 20, p < 0.01, CFI = 0.94 and RMSEA = 0.04. Cronbach’s reliability coefficient was acceptable, α = 0.79.

Epistemic Beliefs

Epistemic beliefs about genetically modified foods were measured with a version of the Topic-Specific Epistemic Beliefs Questionnaire (TSEBQ; Bråten and Strømsø, 2009 ) adapted to this topic. The TSEBQ comprises 24 items that participants rate on a 7-point Likert scale ranging from “strongly disagree” to “strongly agree.” Four dimensions of epistemic beliefs were measured: six items assessed beliefs about the complexity of knowledge (e.g., “Knowledge about genetic modification is primarily characterized by a large amount of detailed information”), six items assessed beliefs about the uncertainty of knowledge (e.g., “Certain knowledge about genetic modification is rare”), five items assessed beliefs about the source of knowing (e.g., “I often feel that I just have to accept that what I read about genetic modification problems can be trusted”), and seven items assessed beliefs about justification for knowing (e.g., “When I read about issues concerning genetic modification, I evaluate whether the content seems logical”).

A CFA was conducted to examine the factorial validity of scores for the instrument using Mplus7. The initial model (with 24 items) showed poor fit, χ2 = 419.25, df = 246, p < 0.001, RMSEA = 0.06, and CFI = 0.78. Due to low loadings, 10 items were deleted: three items were removed from the uncertainty subscale, three from the complexity subscale, two from the source subscale, and three from the justification subscale. The final model (with the remaining 14 items) resulted in good fit, χ2 = 102.31, df = 71, p < 0.001, RMSEA = 0.05, and CFI = 0.93. Cronbach’s reliability coefficients were acceptable, α = 0.79 for the uncertainty subscale, α = 0.78 for the complexity subscale, α = 0.78 for the source subscale, and α = 0.76 for the justification subscale.
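Cronbach's α, used throughout to gauge internal consistency, compares the sum of the item variances to the variance of the scale total. A self-contained sketch with made-up ratings (not study data):

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(items):
    """items: list of respondent rows, each a list of item scores."""
    k = len(items[0])
    item_vars = [variance([row[i] for row in items]) for i in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five hypothetical respondents rating three items of one subscale
ratings = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
```

When items covary strongly (respondents who score high on one item score high on the others), the total variance dwarfs the summed item variances and α approaches 1.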

Epistemic Emotions

Epistemic emotions experienced while reading the experimental text were measured with the Epistemic Emotions Survey (EES; Pekrun et al., 2017 ). This questionnaire comprises 21 items that measure seven epistemic emotions, including: surprise, curiosity, enjoyment, confusion, frustration, anxiety, and boredom. Each item consisted of a single word describing one emotion, with three descriptors per emotion (e.g., “anxious,” “nervous,” and “worried” measured anxiety). Participants rated the intensity of their emotional responses to the text using a five-point Likert scale ranging from “Not at all” to “Very strong.” The scores for the descriptors of each emotion were averaged to represent each emotion. Cronbach’s reliability coefficients were acceptable, α = 0.78 for surprise; α = 0.76 for curiosity; α = 0.84 for enjoyment; α = 0.77 for confusion; α = 0.83 for frustration; α = 0.85 for anxiety; and α = 0.80 for boredom.

Critical Thinking

To assess critical thinking, we chose to measure argumentation, the second key dimension of critical thinking ( Kuhn, 2018 ). Accordingly, participants were instructed to compose an argumentative essay arguing either for or against genetically modified foods and to justify their position. Instructions were as follows: “Based on the content you just read, write a brief (2–3 paragraphs) argument for or against genetically modified foods. Explain how you came to form and justify your point of view. You can refer back to the text you read, and include your judgment of the arguments, evidence, and conclusions it presented.” Critical thinking was assessed using a coding scheme developed for this purpose.

Coding Critical Thinking in Essays

A coding scheme was developed by the second author to assess critical thinking in argumentative essays. The coding scheme was informed by the work of Facione and Facione (2014) , which outlines the development and use of a scoring rubric for evaluating critical thinking (see Table 2 for full descriptions and examples). Five elements were targeted via the coding scheme: taking a position, presenting supportive arguments in favor of a position, acknowledging an alternative perspective, evaluating the validity of claims on both sides of the issue, and integrating arguments from opposing viewpoints into a coherent perspective or conclusion. One point was attributed if participants took a position; no points were attributed if participants did not take a position. One point was attributed if participants supported their position with valid arguments, evidence, facts, or reasons; no points were attributed if no arguments were presented in support of their position or if arguments were invalid. One point was attributed if participants acknowledged and presented an alternative perspective on genetically modified foods; no points were attributed if participants only presented arguments in favor of one perspective. One point was attributed if participants evaluated claims or arguments before accepting them as valid; no points were attributed if participants expediently accepted or dismissed claims or arguments without evaluation. Lastly, one point was attributed if participants reconciled or integrated perspectives; no points were attributed if the conclusion was one-sided, categorical, or failed to acknowledge the validity of any counter-argument. Points were summed to create a total score out of five.
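The rubric thus sums five binary codes into a score out of five. A minimal scoring sketch (the short element names are ours, abbreviating the rubric descriptions above):

```python
RUBRIC_ELEMENTS = ("position", "support", "alternative", "evaluation", "integration")

def critical_thinking_score(codes):
    """codes: dict mapping each rubric element to 1 (present) or 0 (absent)."""
    return sum(codes.get(element, 0) for element in RUBRIC_ELEMENTS)
```

An essay that takes a position and supports it, but never engages the opposing view, receives 2 of 5 points under this scheme.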


Table 2. Coding scheme for critical thinking in argumentative essays.

The coding scheme was tested by the second and third authors using 31 transcripts (15% of the sample), and inter-rater reliability for the first round was established at 75%. All disagreements were resolved through discussion and were used to update the coding scheme. A second round of coding was performed with an additional 31 transcripts (new 15% of the sample), and final inter-rater reliability was established at 88%. The second author then coded the remainder of the essays.
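The inter-rater reliability reported above is simple percent agreement between the two coders' scores across transcripts. A sketch:

```python
def percent_agreement(codes_a, codes_b):
    """Percentage of transcripts on which two raters assigned identical codes."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)
```

For example, raters agreeing on three of four transcripts yields 75% agreement, the level obtained in the first coding round.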

Preliminary Analyses

Prior to conducting full analyses, all variables were inspected for skewness and kurtosis. Based on Tabachnick and Fidell’s (2013) recommendations, acceptable ranges of ±3 for skewness and ±8 for kurtosis were used to investigate the relative normality of the distributions for each variable. Analyses revealed that the distributions for confusion (4.45), frustration (7.28), anxiety (3.73), and boredom (6.10) were positively skewed; however, given the nature of emotions, normal distributions for these variables are unlikely, so the variables were retained for subsequent analyses. Examination of text order (i.e., advantages of genetically modified foods first or disadvantages of genetically modified foods first) showed no order effects on any variable. Descriptive statistics for all variables are presented in Table 3 and correlations between variables are presented in Table 4 .
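The skewness and kurtosis screens can be reproduced with standard moment formulas. The sketch below uses the simple (biased) moment estimators, which may differ slightly from the sample-size-adjusted versions statistical packages report:

```python
def skewness(xs):
    """Third standardized moment (biased estimator); 0 for symmetric data."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3; 0 for a normal distribution."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3
```

A variable like frustration, where most participants cluster at the low end of the scale with a long right tail, produces the large positive skewness values reported above.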


Table 3. Descriptive statistics for variables.


Table 4. Correlations between variables.

To check for univariate outliers, each variable was converted to standardized z -scores. Any z -scores exceeding critical cut-offs of ±3.3 were considered outliers ( Tabachnick and Fidell, 2013 ). Results revealed univariate outliers for justification ( n = 2, z = −3.36 to −5.53) and frustration ( n = 1, z = 3.51). Instead of deletion, all cases were retained given the values were not extreme and did not exceed more than 2% of cases for each variable (see Cohen et al., 2003 ). To check for multivariate outliers, Mahalanobis distances were calculated based on a χ2 distribution with 12 degrees of freedom and a critical cut-off point of 32.91 (α = 0.001; see Meyers et al., 2017 ; Tabachnick and Fidell, 2013 ). No multivariate outliers were found.
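The univariate screen standardizes each variable and flags cases beyond ±3.3. A sketch of that step (the multivariate Mahalanobis screen would additionally require the inverse of the covariance matrix across all 12 variables):

```python
def flag_univariate_outliers(values, cutoff=3.3):
    """Return indices of cases whose |z| exceeds the cutoff."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [i for i, v in enumerate(values) if abs((v - mean) / sd) > cutoff]
```

Note that with small samples no case can exceed the cutoff, since the maximum possible |z| in a sample of size n is bounded by (n − 1)/√n; the screen is meaningful only for samples of the present size.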

Mediation Path Analysis

To test the hypothesized mediation model, we conducted a mediation analysis using Hayes’ (2018) PROCESS macro for SPSS, which is recommended for testing complex mediational models and maintaining high power while controlling for Type I error rates (see Hayes, 2018 ). Bootstrap sampling was used (with 10,000 bootstraps), which does not require assumptions of normality and which was appropriate given a few slightly skewed variables. A power analysis using G ∗ Power ( Faul and Erdfelder, 1992 ; for a full description, see Erdfelder et al., 1996 ) with power (1–β) set at 0.80 and α set at 0.05 revealed a required sample size of 218 for the present analysis. Given a sample of 204, the analysis would be underpowered. As such, we adjusted the level of the confidence intervals to 90% for the bootstrap sampling, which required a sample size of 180. The final model is depicted in Figure 2 with standardized effects.


Figure 2. Final model with standardized coefficients. Only significant paths are represented. * p < 0.05; ** p < 0.01.

We first examined the total effects model, which expresses the sum of the direct and indirect effects of epistemic beliefs on critical thinking scores to determine the predictive relations between epistemic beliefs and critical thinking, independent of the effects of mediational variables. We next calculated the direct effects of epistemic beliefs on epistemic emotions, the direct effects of epistemic beliefs on critical thinking, and the indirect effects of epistemic beliefs on critical thinking via epistemic emotions. At each step, we controlled for the effects of prior knowledge.
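An indirect effect of the kind estimated here is the product of the X→M path (a) and the M→Y path controlling for X (b), with significance judged by a bootstrapped percentile confidence interval, as PROCESS does. A simplified single-mediator sketch on synthetic data (the variable names and data-generating model are ours, for illustration only, and no covariate is controlled):

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b: a from regressing M on X; b is M's slope in Y ~ X + M."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=10000, level=0.90, seed=1):
    """Percentile bootstrap confidence interval for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    alpha = (1 - level) / 2
    return np.percentile(estimates, [100 * alpha, 100 * (1 - alpha)])

# Synthetic example: X -> M -> Y with true indirect effect 0.5 * 0.5 = 0.25
rng = np.random.default_rng(0)
x = rng.normal(size=200)
m = 0.5 * x + 0.1 * rng.normal(size=200)
y = 0.5 * m + 0.1 * rng.normal(size=200)
```

With this strong synthetic effect the 90% interval excludes zero, indicating mediation; in the study, the observed beliefs, emotions, and critical thinking scores take the place of x, m, and y.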

Complexity beliefs (β = 0.16, SE = 0.06, t = 2.06, p = 0.04) and uncertainty beliefs (β = 0.14, SE = 0.07, t = 2.07, p = 0.04) were direct predictors of critical thinking. For direct effects of epistemic beliefs on epistemic emotions, complexity beliefs predicted surprise (β = −0.24, SE = 0.07, t = −3.52, p = 0.0005), enjoyment (β = 0.15, SE = 0.07, t = 2.04, p = 0.04), and confusion (β = −0.28, SE = 0.07, t = −4.02, p = 0.0001); source beliefs predicted surprise (β = −0.15, SE = 0.07, t = −2.21, p = 0.02) and enjoyment (β = −0.18, SE = 0.07, t = −2.59, p = 0.03); and justification beliefs predicted curiosity (β = 0.14, SE = 0.07, t = 2.02, p = 0.04), confusion (β = −0.14, SE = 0.07, t = −2.01, p = 0.04), and boredom (β = −0.15, SE = 0.06, t = −2.02, p = 0.04). For the direct effects of epistemic emotions on critical thinking, confusion (β = 0.24, SE = 0.10, t = 2.30, p = 0.02) and anxiety (β = 0.18, SE = 0.10, t = 2.17, p = 0.03) were significant positive predictors, and frustration (β = −0.24, SE = 0.10, t = −2.40, p = 0.01) was a significant negative predictor of critical thinking. Finally, for indirect effects of epistemic beliefs on critical thinking via epistemic emotions, results showed that the effect of complexity beliefs on critical thinking was mediated by confusion, with a point estimate of −0.07 and a bias-corrected bootstrapped confidence interval (90%) of −0.12 to −0.02.

Two Illustrative Cases

The following cases reflect examples of how epistemic beliefs and epistemic emotions related to critical thinking for different individuals. These cases were chosen as they represent individuals with similar demographic profiles and levels of prior knowledge about genetically modified foods, but whose epistemic beliefs and emotions as well as critical thinking skills present an interesting contrast.

Case 1 was a 24-year-old female in the 3rd year of an environmental sciences degree with a self-reported GPA representing an academic average between 80–84% (or A-). Her prior knowledge about genetically modified foods was below average (test score = 20%). She reported epistemic beliefs that were slightly less constructivist than average on the complexity subscale (score = 3.33/7.00), less constructivist than average by more than two standard deviations on the uncertainty subscale (score = 2.83/7.00), and less constructivist than average on the source subscale by one standard deviation (score = 2.60/7.00). For epistemic emotions, she reported slightly less confusion than average (score = 1.33/5.00), slightly more frustration than average (score = 2.00/5.00), more anxiety than average by more than one standard deviation (score = 3.33/5.00), and more boredom than average by more than a standard deviation (score = 3.00/5.00).

Our analysis of Case 1’s essay indicated little critical thinking (score = 2/5) and reflected a one-sided view of genetically modified foods. Her essay included a well-positioned positive stance on genetically modified foods (“Genetically modified food is the way of the future”) as well as a few arguments in its support (“For instance, rice can be GM to have more nutrients, thus preventing millions of people from starvation” and “Already there are many third world nations that have hungry and malnourished populations. Genetically modified foods can help them by modifying their staple of food grown there.”) However, Case 1 neither identified nor engaged with arguments from the opposing position. No arguments against genetically modified foods were specifically identified. Only the fact that genetically modified foods could have detrimental health effects was alluded to in a sentence that quickly dismissed the counter-argument with a statement that was justified by means of not having directly observed any opposing evidence: “Every day, there are hundreds of foods being bought in grocery stores that are GM and so far there have been no real significant downside to eating it (detrimental). In fact, I’m sure you’ve even eaten something that’s been GM this week!” Further, no conclusions were reached that hinted at an integration or reconciliation of perspectives. A conclusive statement was offered that solidified a position in favor of genetically modified foods (“Our knowledge is meant to be passed on to others so they can benefit from the fortunes that we are so lucky to have.”). Overall, Case 1 is representative of individuals with less constructivist epistemic beliefs who did not present elaborate critical thinking. Further, though prior knowledge was low, Case 1 reported little confusion. She also reported high levels of frustration, anxiety, and boredom.
For Case 1, it may be the case that the presentation of opposing arguments led to more frustration and anxiety given her less constructivist beliefs about genetically modified foods. That is, consistent with Muis et al. (2018) , the nature of the information presented to her was in stark contrast to her epistemic beliefs, thus triggering negative epistemic emotions. She expected knowledge about genetically modified foods to be certain and simple, but it was presented as uncertain and complex. This increase in frustration and anxiety may have then led her to focus solely on one side of the argument, resulting in lower performance on the task.

Case 2 was a 24-year-old female in the 2nd year of a degree in psychology. She reported a GPA representing an academic average between 85–89% (or A). Akin to Case 1, Case 2’s prior knowledge about genetically modified foods was below average (test score = 20%). She reported epistemic beliefs that were more constructivist than average by more than one standard deviation on the complexity subscale (score = 5.00/7.00), more constructivist than average by more than one standard deviation on the uncertainty subscale (score = 5.83/7.00), and slightly less constructivist than average on the source subscale (score = 4.00/7.00). For epistemic emotions, she reported more confusion than average by more than one standard deviation (score = 3.00/5.00), more frustration than average by more than one standard deviation (score = 2.67/5.00), slightly more anxiety than average (score = 3.00/5.00), and slightly less boredom than average (score = 1.67/5.00).

Case 2’s essay reflected an integrated perspective on genetically modified foods. Case 2 first assumed a cautiously positive stance on genetically modified foods: “Though the use of genetically modified foods may present possible solutions to certain of the world’s problems, there is insufficient research on the matter and, more specifically, evidence supporting its proposed benefits.” She then presented some of the benefits of genetically modified foods: “Genetically modified foods have been proposed to aid in addressing the many problems tied to the ever-growing population of the world, including malnutrition and land usage” and then exposed some criticism, pointing to a lack of supportive evidence, “However, these are mere propositions based on hypothetical scenarios, i.e., there is no evidence to show that certain foods can be genetically modified to provide additional vitamins and minerals - what has been proposed is a hypothetical solution.” The same pattern was repeated with the opposing perspective: Case 2 first presented arguments against genetically modified foods: “Meanwhile, a growing body of research is pointing to evidence supporting its harmful side-effects. For instance, a causal link was found between the presence of the modified B.t. corn and death of monarch butterfly caterpillars. Research has also shown that GM fed rats had digestive tracts that differed to rats fed unmodified foods”, then identified limitations, “While research on the effects of GM foods in humans is still rather limited, such animal studies are an important start.” A full reconciliation of perspectives was not reached, but a conclusion was drawn that followed the aforementioned evaluations and identified a lack of evidence as a halt to fully embracing the benefits of genetically modified foods: “Overall, the research on genetically modified foods remains inconsistent and limited.
There is insufficient evidence to show that the benefits of genetically modified foods could outweigh its costs.” It may be the case that an optimal level of anxiety and confusion, combined with low boredom, motivated Case 2 to exert efforts to analyze each perspective on genetically modified foods to better understand their characteristics and nuances, resulting in observable critical thoughts.

Discussion

Socio-scientific issues such as genetically modified foods are often depicted as controversial by influencers who are either in favor of or against the propositions of scientific expertise. In the face of such issues, successful critical thinking occurs when individuals purposefully decide what to believe or what to do by evaluating knowledge claims and reconciling opposing views, taking relevant evidence and context into account ( Ennis, 1987 ; Facione, 1990 ). Prior theoretical and empirical work suggests that individuals’ thoughts and beliefs about the nature of knowledge and knowing play an important role in supporting critical thinking. However, little is known about the role that knowing-related emotions may play in critical thinking and the effects of epistemic cognition on such thinking. We hypothesized that epistemic cognition supports critical thinking via epistemic emotions.

This research contributes to the literature on epistemic cognition and epistemic emotions by empirically testing Muis et al.’s (2015 , 2018) model of epistemic cognition and epistemic emotions, and by providing new findings concerning relations between epistemic cognition, epistemic emotions, and critical thinking. Further, this study is the first to explore these relations in the context of an elaborate critical thinking task where participants were asked to decide what to believe about a socio-scientific issue on the basis of conflicting evidence. Specifically, results showed that a belief in complex and uncertain knowledge directly predicted critical thinking (Hypothesis 1). Complexity, source, and justification beliefs also predicted epistemic emotions, including surprise, curiosity, enjoyment, confusion, and boredom (Hypothesis 2), and the epistemic emotions of confusion, frustration, and anxiety in turn predicted critical thinking (Hypothesis 3). Lastly, confusion mediated relations between epistemic beliefs and critical thinking (Hypothesis 4). Next, we interpret each of the results described above and conclude with a discussion of limitations and directions for future research.

The Role of Epistemic Beliefs When Facing Socio-Scientific Issues

In support of our hypothesis, more constructivist epistemic beliefs about the nature of knowledge (complexity and uncertainty dimensions) significantly predicted critical thinking, indicating that the more individuals believed in complex and tentative knowledge, the more they presented support for arguments, acknowledged alternatives, evaluated claims, and drew balanced conclusions. However, epistemic beliefs about the nature of knowing (beliefs about the sources of, and justification for knowing) were not significantly related to critical thinking. It is common in epistemic belief research for not all belief dimensions to be salient in a given situation, depending on the nature of the task ( Hammer and Elby, 2002 ; Greene et al., 2010 ). Similar to this study, Strømsø et al. (2011) examined relations between epistemic beliefs and undergraduate students’ evaluations of documents’ trustworthiness and found that source beliefs significantly predicted evaluation of conflicting claims, but justification beliefs did not contribute significantly to trustworthiness scores.

For our study, three dimensions of epistemic beliefs were found to have direct effects on five epistemic emotions. In particular, in line with hypotheses, the more individuals believed that knowledge about genetically modified foods is complex, the less likely they were to experience surprise and confusion, and the more likely they were to experience enjoyment. This supports the notion that epistemic beliefs shape individuals’ assumptions about the nature of knowledge ( Bromme et al., 2010 ), such that those who expected knowledge about genetically modified foods to be simple may have experienced dissonance related to the complex nature of information presented in the text. Individuals who expected knowledge to be complex, when presented with conflicting information, were not surprised by this conflict nor were they confused about the conflicting information. Moreover, consistent with hypotheses, more constructivist complexity beliefs predicted more enjoyment when reading about advantages and disadvantages of genetically modified foods.

Following Muis et al.’s (2015, 2018) model of epistemic cognition and epistemic emotions, we hypothesized that enjoyment would stem from an alignment between epistemic beliefs that are congruent with the nature of science (i.e., more constructivist epistemic beliefs) and the epistemic nature of the material presented. Similarly, Franco et al. (2012) found that when individuals’ epistemic beliefs are consistent with the knowledge representations in complex learning material, they perform better on various measures, including deep processing of information, text recall, and changes in misconceptions. However, Muis et al. (2018) suggested that epistemic emotions have more antecedents than were measured here, including perceptions of control and task value, as well as information novelty and complexity. They argued that if an individual with more constructivist epistemic beliefs has low perceived control or assigns little value to the task at hand, then he or she may experience lower levels of enjoyment. This suggests that epistemic beliefs alone cannot fully predict the type of epistemic emotions that are likely to arise in a given situation. As such, to fully understand the relationship between epistemic cognition and epistemic emotions more broadly, future work should include other epistemic emotion antecedents and take further contextual elements into account.

Additionally, those who viewed personal interpretations and judgments as the main sources of knowledge about genetically modified foods experienced less surprise but also less enjoyment during learning when reading contradictory perspectives about the value and usefulness of genetically modified foods. This result is consistent with findings from Strømsø et al. (2011) , who found that the more students viewed the self as a meaning maker, the less they trusted texts written by climate change experts. Similarly, Kardash and Scholes (1996) found that the less students believed in external authority as a source of knowledge, the stronger their opinions about the HIV-AIDS relationship.

It could be the case that individuals who believe that knowledge resides within the self (and who have low prior knowledge) also prefer to fall back on their own opinions and find it less enjoyable to have to consider the point of view of others. Traditionally, the belief that knowledge originates from external authorities has been viewed as “naïve,” whereas the conception of self as a knower has been viewed as “sophisticated” ( Hofer and Pintrich, 1997 ). However, researchers have called into question the assumption that more constructivist beliefs are better to espouse in all situations (see Bromme et al., 2008 ; Greene et al., 2010 ; Greene and Yu, 2014 ). Indeed, when novices face a complex topic such as genetically modified foods, it may be adaptive to assume that experts are trustworthy and to balance one’s own judgments with reliance on external expert sources.

Moreover, when individuals are presented with conflicting information about a topic, it is beneficial to evaluate and integrate evidence and arguments. That is, individuals who believed that knowledge is justified through a process of critical evaluation and integration of information experienced more curiosity and less confusion and boredom compared to individuals who believed in an unquestioning reliance on authorities. In the case of the texts presented to participants in this study, authorities reported both pros and cons about genetically modified foods. Under this condition, individuals are likely more confused because they are uncertain which authority to trust; they may then find the task too challenging and experience greater boredom. However, as previous research has shown ( D’Mello et al., 2014 ), confusion can be beneficial for learning by increasing critical thinking. We describe relations between emotions and critical thinking next.

The Role of Epistemic Emotions in Critical Thinking

Consistent with the contention that confusion can be beneficial for complex cognitive tasks, confusion was found to be a positive predictor of critical thinking. Also consistent with hypotheses, confusion was negatively predicted by complexity and justification beliefs and, as such, fully mediated relations between these beliefs and critical thinking. Although the full mediation effect seems to suggest that more constructivist beliefs are detrimental to critical thinking via decreased levels of confusion, we suggest that effects revealed here are more complex than they appear. It might be the case that compared to individuals with less constructivist epistemic beliefs, those who espouse more constructivist beliefs experience less confusion related to the complex nature of genetically modified foods knowledge, but nevertheless perceive discrepancies between perspectives that trigger milder levels of confusion associated with beneficial effects. Indeed, philosophers such as Morton (2010) and Elgin (2008) have argued that epistemic emotions such as surprise and confusion can draw attention to the object of the emotion, which can lead to deep processing of information as well as metacognitive self-regulation ( Muis et al., 2015 ). Moreover, two of the epistemic belief dimensions directly positively predicted critical thinking. As such, it may be the case that individuals with more constructivist epistemic beliefs experience less confusion, but still directly engage in critical thinking given that they believe that knowledge must be critically evaluated and that perspectives must be weighed before coming to a specific conclusion on an issue.
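To make the mediation claim concrete, the indirect effect of beliefs on critical thinking via confusion can be estimated with a regression-based bootstrap of the kind described by Hayes (2018). The sketch below is purely illustrative: it uses synthetic data with hypothetical effect sizes, not the study's dataset or analysis code.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic data mimicking the hypothesized structure (all effect sizes hypothetical):
# more constructivist beliefs -> less confusion -> more critical thinking.
beliefs = rng.normal(size=n)
confusion = -0.6 * beliefs + rng.normal(size=n)                 # "a" path (negative)
critical = 0.5 * confusion + rng.normal(size=n)                 # "b" path (positive)

def ols_slope(x, y, covar=None):
    """Slope of y on x, optionally controlling for a covariate."""
    cols = [np.ones_like(x), x] if covar is None else [np.ones_like(x), x, covar]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

# Indirect effect a*b with a percentile bootstrap confidence interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a = ols_slope(beliefs[idx], confusion[idx])
    b = ols_slope(confusion[idx], critical[idx], covar=beliefs[idx])
    boot.append(a * b)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{ci_lo:.2f}, {ci_hi:.2f}]")
```

A confidence interval that excludes zero, as here (the true indirect effect is negative by construction), is the usual criterion for a significant mediation effect in this framework.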

In contrast to confusion, frustration was found to be a negative predictor of critical thinking. Frustration is an intense negative emotion that can overtake the cognitive system ( Rosenberg, 1998 ), and is linked to a reduction of effortful thinking and an increase of rigid and shallow processing of information (see Pekrun et al., 2011 ; Pekrun and Stephens, 2012 ). D’Mello and Graesser (2012) proposed that frustration can lead to boredom and ultimately, disengagement from the task. Moreover, we observed a significant positive relationship between anxiety and critical thinking, suggesting that anxiety may be beneficial for critical thinking. This result was expected and is consistent with the results of Muis et al. (2015) , who also noted a significant positive path from anxiety to critical thinking. In the present study, anxiety was unrelated to epistemic beliefs but may have been related to epistemic aims such as to understand the content or find the truth about genetically modified foods. Measuring epistemic aims as antecedents of epistemic emotions will be an important avenue to understand the conditions under which anxiety can benefit critical thinking, and those under which it does not.

In terms of positive emotions, in this study, we did not find significant predictive relationships between enjoyment or curiosity and critical thinking. Therefore, the current results do not replicate prior work by Muis et al. (2015) , who found curiosity to predict critical thinking. Muis et al. (2018) proposed that curiosity and confusion are similar in that they both result from surprise triggered by dissonance, incongruity, or uncertainty. They proposed that the complexity of information or of a task predicts whether curiosity or confusion follows surprise. Specifically, they argued that when complexity is high, surprise may turn into confusion, whereas curiosity is more likely to ensue in cases where discrepancies can be easily resolved. In the current study, it appears that curiosity and confusion highly co-occurred. More research is needed to better understand how individuals experience curiosity and confusion when trying to determine what is true or what to believe about a complex and controversial topic.

Overall, the current study provided support for many of the predictions posited in the epistemic cognition and emotion literature, yet also provided new insights into the epistemic and affective nature of critical thinking. Specifically, the notion that more constructivist epistemic cognition promotes critical thinking was generally supported, as was the contention that epistemic emotions mediate relations between epistemic cognition and cognitive processes. Further, results supported the idea that milder forms of negative emotions such as anxiety and confusion can be beneficial for critical thinking, whereas intense activating negative emotions (i.e., activating forms of frustration) are detrimental for critical thinking. However, results also challenged the assumption that positive emotions are required for critical thinking to occur. Lastly, our results challenge dominant conceptions about beliefs in the self as the primary source of knowledge as being beneficial for critical thinking. Our counter-hypothetical results provide additional support for the idea that there is a need to reconsider and reinvestigate how individuals productively conceive of and justify knowledge (see Greene et al., 2008 ; Chinn et al., 2011 ; Greene and Yu, 2014 ). Overall, findings from the current study support the notion that critical thinking is not necessarily something that feels good ( Danvers, 2016 ), yet suggest that espousing more constructivist beliefs about the nature of knowledge may benefit critical thinking by tempering certain difficult emotions and supporting the use of critical thinking.

Educational Implications

The results obtained in the present study have several implications for educational interventions aimed at increasing critical thinking about socio-scientific issues. First, findings support the notion that knowledge- and knowing-related issues should be highlighted and discussed in educational settings, with the aim of developing more constructivist forms of epistemic cognition. Notably, discussions surrounding the complex and tentative nature of scientific knowledge may be beneficial to shaping individuals’ expectations about the issues they will be called upon to reflect and act on during their lifetime. Barnett (2004) , a prominent philosopher of higher education, has described the mission of university education as preparing students for a complex and uncertain future: For individuals to prosper, make decisions, and come to a position of security amid multiple interpretations, they must come not only to learn for uncertainty, but to learn to live with uncertainty. Barnett contends that no risk-free curricular approach can achieve this; instead, he calls for a curriculum that aims at educational transformation through exposure to dilemmas and uncertainties. This may include, for instance, confronting students with the limits of knowing in a field and with the limitations of the field as such. In addition to uncertainty- and complexity-focused curricula, Muis et al. (2016) proposed that to achieve epistemic change, epistemic climates are needed that involve constructivist pedagogical approaches (e.g., inquiry-based learning, apprenticeship, collaborative learning, knowledge building, and communities of practice), decentralized authority structures, open-ended assessment practices, and appropriate levels of teacher support, as students experience the sometimes difficult process of belief change.

Second, findings from the present study suggest that to develop critical thinking about socio-scientific issues, learning environments should be supportive of students’ emotional responses. In particular, for students with less constructivist epistemic cognition, being exposed to complex and conflicting information may trigger surprise, confusion, and frustration. We argue that such emotions should be welcomed without judgment by teachers and peers, and that these emotional experiences should be normalized ( Di Leo and Muis, 2020 ). Further, teachers should discuss their own epistemic emotions and model appropriate emotion regulation strategies ( Gross, 2014 ). Related to confusion, students may have a tendency to want to avoid confusion by seeking out tasks with minimal intellectual challenges (situation selection), seeking help when challenged (situation modification), or intentionally ignoring or misattributing the cause of discrepant events to avoid confusion (reappraisal; D’Mello and Graesser, 2014 ; Gross, 2014 ; Harley et al., 2019 ). However, teachers can discuss the drawbacks of these strategies, and further suggest and model a different set of emotion regulation strategies, including choosing to engage in tasks that are intellectually challenging (situation selection), opening up to perspectives that do not at first flatter one’s preferred position (situation modification), and building competencies for critical reflection (competence enhancement). By reinforcing the latter strategies, students may become what Clifford (1988) describes as “academic risk takers,” who are more tolerant of uncertainty and failure.

Third and relatedly, given observed relations between beliefs, confusion and critical thinking, we suggest that students with less constructivist epistemic beliefs may benefit from learning materials that trigger mild confusion, but without giving way to frustration. To this end, D’Mello and Graesser (2012) suggest pedagogical practices where misconceptions are exposed, where complexity is embraced, and where less cohesive texts and lectures replace the polished deliveries of textbooks and formal lectures. However, to avoid confusion turning into frustration or disengagement ( D’Mello and Graesser, 2012 ; Munzar et al., 2021 ), teachers should support the development of students’ critical thinking skills and resolution strategies by scaffolding and modeling these abilities ( Muis and Duffy, 2013 ; Di Leo and Muis, 2020 ), so that students become able to productively engage with confusion-inducing materials, to the benefit of deep and critical thinking.

Finally, it is also important to note that all students could benefit from being taught how to write argumentative essays, the task we used to capture the output dimension of critical thinking ( Kuhn, 2018 ). As previous research has shown, undergraduate and graduate students typically perform below the expected level for argumentative essay writing tasks ( Kellogg and Whiteford, 2009 ). Our sample was no different, with an average of just over 53%. Clearly, beyond focusing on epistemic cognition and epistemic emotions in the classroom, students could benefit from direct instruction on argumentative writing. One method that has been effective in improving students’ argumentative essay writing is via scaffolding through adaptive fading ( Noroozi et al., 2018 ), and worked examples and peer feedback ( Latifi et al., 2019 ; Noroozi and Hatami, 2019 ; Valero Haro et al., 2019 ). Indeed, as Muis (2007) argued, it may be the case that teaching students these skills also helps to improve their epistemic cognition. Of course, one could also argue that we measured only one key dimension of critical thinking. Future research should also measure the inquiry dimension of critical thinking to assess whether results from our study replicate. Importantly, relations between constructs may differ depending on how critical thinking is measured. We address limitations next.

Limitations and Future Directions

Several concerns may limit the results presented herein. First, the analysis used correlational associations of the study variables over time but did not experimentally manipulate the predictor variables. As such, future research should complement the approach used here with experimental studies. However, this may be easier to do with emotions, which can to some extent be manipulated experimentally, than with more stable epistemic beliefs. A second limitation concerns the rubric employed to capture critical thinking in essays. Specifically, we opted for a quantitative approach to coding critical thinking by attributing one point for the presence of each component of critical thinking. However, a weighted coding scheme or a holistic rubric are two other modes of critical thinking assessment that include qualitative elements of analysis that could have yielded different results. Third, epistemic cognition and epistemic emotions were measured via self-report, which also have inherent limitations (see, for example, DeBacker et al., 2008 ). Therefore, future research is needed to replicate the findings presented here using alternative methods, which we delineate below.
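The quantitative coding approach discussed above (one point per critical-thinking component present, as opposed to a weighted or holistic rubric) can be illustrated with a toy scoring function. The component names below are hypothetical stand-ins, not the study's actual coding categories.

```python
# Hypothetical components loosely modeled on the critical-thinking
# dimensions described earlier (supporting arguments, acknowledging
# alternatives, evaluating claims, drawing balanced conclusions).
COMPONENTS = [
    "supports_arguments",
    "acknowledges_alternatives",
    "evaluates_claims",
    "balanced_conclusion",
]

def score_essay(codes: dict) -> float:
    """One point per component coded as present, returned as a
    proportion of the maximum possible score."""
    points = sum(1 for c in COMPONENTS if codes.get(c, False))
    return points / len(COMPONENTS)

print(score_essay({"supports_arguments": True, "evaluates_claims": True}))  # 0.5
```

A weighted scheme would replace the uniform one-point increments with per-component weights; a holistic rubric would instead assign a single qualitative level to the essay as a whole, which is why these alternatives could yield different results.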

The current findings have important implications for future research on epistemic cognition and epistemic emotions. Specifically, to fully understand how epistemic cognition supports critical thinking, future research should explore the role that other facets of epistemic cognition play in mediating this relationship. For instance, how does individuals’ knowledge of epistemic strategies shape critical thinking, and do these abilities influence the arousal of epistemic emotions in the face of complex and conflicting information? And how might epistemic aims moderate these relations? Prior work has shown that these other epistemic facets play a significant role in epistemic emotion arousal, and researchers have called for more research on epistemic cognition that conceptualizes and operationalizes the construct beyond the sole notion of epistemic beliefs ( Greene et al., 2016 ).

Lastly, in light of the findings revealed herein, we contend that one important avenue for future work will be to investigate how different intensities of positive, neutral, and negative epistemic emotions influence information processing and critical thinking. To this end, we believe that the self-report measurement of epistemic cognition and emotions can be complemented by and triangulated with trace data collected by think-aloud or emote-aloud protocols (e.g., Craig et al., 2008 ; Di Leo and Muis, 2020 ), physiological measures of emotions such as analysis of facial expression, electrocardiograms, and galvanic skin responses ( Azevedo et al., 2013 ; D’Mello et al., 2014 ), and qualitative work. In sum, by broadening conceptual horizons and employing advanced methodologies, we believe that future research will provide a rich portrait of the ways in which epistemic cognition and epistemic emotions support critical thinking.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by McGill University Research Ethics Board. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

KM developed the larger program of study, designed the materials and research questions, cleaned and analyzed the data, and wrote the manuscript. MC helped develop the research questions, collected the data, cleaned the data, developed the coding scheme for the essays, coded the essays, analyzed the data, and wrote the manuscript. CD helped collect the data, coded the essays, and helped in writing the manuscript. KL helped collect the data and wrote the manuscript. All authors contributed to the article and approved the submitted version.

Funding
Funding for this work was provided by a grant to KM from the Social Sciences and Humanities Research Council of Canada (435-2014-0155), and from the Canada Research Chair program. Correspondence concerning this article can be addressed to KM, Department of Educational and Counselling Psychology, Faculty of Education, McGill University, 3,700 McTavish Street, Montreal, QC, H3A 1Y2, or via email at [email protected] .

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abrami, P. C., Bernard, R. M., Borokhovski, E., Wade, A., Surkes, M. A., Tamim, R., et al. (2008). Instructional interventions affecting critical thinking skills and dispositions: A stage 1 meta-analysis. Rev. Educat. Res. 78, 1102–1134. doi: 10.3102/0034654308326084

Alexander, P. A. (2014). Thinking critically and analytically about critical-analytic thinking: An introduction. Educat. Psychol. Rev. 26, 469–476. doi: 10.1007/s10648-014-9283-1

Arum, R., and Roksa, J. (2011). Academically adrift: Limited learning on college campuses. Chicago, IL: University of Chicago Press.

Azevedo, R., Harley, J., Trevors, G., Duffy, M., Feyzi-Behnagh, R., Bouchet, F., et al. (2013). “Using trace data to examine the complex roles of cognitive, metacognitive, and emotional self-regulatory processes during learning with multi-agent systems,” In International handbook of metacognition and learning technologies. eds R. Azevedo and V. Aleven (New York, NY: Springer), 427–449. doi: 10.1007/978-1-4419-5546-3_28

Bailin, S., and Siegel, H. (2003). “Critical thinking,” in The Blackwell guide to the philosophy of education , eds N. Blake, P. Smeyers, R. Smith, and P. Standish (Oxford, UK: Blackwell Publishing), 181–193.

Barnett, R. (2004). Learning for an unknown future. Higher Educat. Res. Dev. 23, 247–260. doi: 10.1080/0729436042000235382

Barzilai, S., and Eshet-Alkalai, Y. (2015). The role of epistemic perspectives in comprehension of multiple author viewpoints. Learning Instruct. 36, 86–103. doi: 10.1016/j.learninstruc.2014.12.003

Barzilai, S., and Zohar, A. (2012). Epistemic thinking in action: Evaluating and integrating online sources. Cognit. Instruct. 30, 39–85. doi: 10.1080/07370008.2011.636495

Barzilai, S., and Zohar, A. (2014). Reconsidering personal epistemology as metacognition: A multifaceted approach to the analysis of epistemic thinking. Educat. Psychol. 49, 13–35. doi: 10.1080/00461520.2013.863265

Bless, H., Clore, G. L., Schwarz, N., Golisano, V., Rabe, C., and Wölk, M. (1996). Mood and the use of scripts: Does a happy mood really lead to mindlessness? J. Personal. Soc. Psychol. 71, 665–679. doi: 10.1037/0022-3514.71.4.665

Bråten, I. (2016). “Epistemic cognition interventions,” in Handbook of epistemic cognition , eds J. A. Greene, W. A. Sandoval, and I. Bråten (New York, NY: Routledge), 360–371.

Bråten, I., and Strømsø, H. I. (2009). Effects of task instruction and personal epistemology on the understanding of multiple texts about climate change. Discour. Proc. 47, 1–31. doi: 10.1080/01638530902959646

Bråten, I., Anmarkrud, Ø, Brandmo, C., and Strømsø, H. I. (2014). Developing and testing a model of direct and indirect relationships between individual differences, processing, and multiple-text comprehension. Learning Instruct. 30, 9–24. doi: 10.1016/j.learninstruc.2013.11.002

Bromme, R., Kienhues, D., and Stahl, E. (2008). “Knowledge and epistemological beliefs: An intimate but complicate relationship,” in Knowing, knowledge and beliefs: Epistemological studies across diverse cultures , ed. M. S. Khine (New York, NY: Springer), 423–441. doi: 10.1007/978-1-4020-6596-5_20

Bromme, R., Pieschl, S., and Stahl, E. (2010). Epistemological beliefs are standards for adaptive learning: a functional theory about epistemological beliefs and metacognition. Metacognit. Learning 5, 7–26. doi: 10.1007/s11409-009-9053-5

Brun, G., and Kuenzle, D. (2008). “A new role for emotions in epistemology?,” in Epistemology and emotions , eds G. Brun, U. Dog̃uog̃lu, and D. Kuenzle (Aldershot: Ashgate), 1–32.

Chinn, C. A., Buckland, L. A., and Samarapungavan, A. L. A. (2011). Expanding the dimensions of epistemic cognition: Arguments from philosophy and psychology. Educat. Psychol. 46, 141–167. doi: 10.1080/00461520.2011.587722

Chiu, Y. L., Liang, J. C., and Tsai, C. C. (2013). Internet-specific epistemic beliefs and self-regulated learning in online academic information searching. Metacognit. Learning 8, 235–260. doi: 10.1007/s11409-013-9103-x

Claris, L., and Riley, D. (2012). Situation critical: critical theory and critical thinking in engineering education. Engine. Stud. 4, 101–120. doi: 10.1080/19378629.2011.649920

Clifford, M. M. (1988). Failure tolerance and academic risk−taking in ten−to twelve−year−old students. Br. J. Educat. Psychol. 58, 15–27. doi: 10.1111/j.2044-8279.1988.tb00875.x

Cohen, J., Cohen, P., West, S. G., and Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences , 3rd Edn. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Craig, S. D., D’Mello, S., Witherspoon, A., and Graesser, A. (2008). Emote aloud during learning with AutoTutor: Applying the Facial Action Coding System to cognitive–affective states during learning. Cognit. Emot. 22, 777–788. doi: 10.1080/02699930701516759

D’Mello, S. K., and Graesser, A. C. (2012). Dynamics of affective states during complex learning. Learning Instruct. 22, 145–157. doi: 10.1016/j.learninstruc.2011.10.001

D’Mello, S. K., and Graesser, A. C. (2014). “Confusion,” in Handbook of emotions in education , eds R. Pekrun and L. Linnenbrink-Garcia (New York: Routledge), 289–310.

D’Mello, S., Lehman, B., Pekrun, R., and Graesser, A. (2014). Confusion can be beneficial for learning. Learning Instruct. 29, 153–170. doi: 10.1016/j.learninstruc.2012.05.003

Danvers, E. C. (2016). Criticality’s affective entanglements: rethinking emotion and critical thinking in higher education. Gender Educat. 28, 282–297. doi: 10.1080/09540253.2015.1115469

DeBacker, T. K., Crowson, H. M., Beesley, A. D., Thoma, S. J., and Hestevold, N. L. (2008). The challenge of measuring epistemic beliefs: An analysis of three self-report instruments. J. Exp. Educat. 76, 281–312. doi: 10.3200/jexe.76.3.281-314

Dewey, J. (1933). Philosophy and civilization. Boston: D.C. Heath & Co.

Di Leo, I., and Muis, K. R. (2020). Confused, now what? A cognitive-emotional strategy training intervention for elementary students during mathematics problem solving. Contemp. Educat. Psychol. 62:101876.

Di Leo, I., Muis, K. R., Singh, C., and Psaradellis, C. (2019). Curiosity. Confusion? Frustration! The role and sequencing of emotions during mathematics problem solving. Contemp. Educat. Psychol. 58, 121–137. doi: 10.1016/j.cedpsych.2019.03.001

Elgin, C. Z. (2008). “Emotion and understanding,” in Epistemology and emotions , eds G. Brun, U. Doguoglu, and D. Kuenzle (Aldershot: Ashgate), 33–50.

Ellsworth, P. C. (2013). Appraisal theory: Old and new questions. Emot. Rev. 5, 125–131. doi: 10.1177/1754073912463617

Ennis, R. (1987). “A taxonomy of critical thinking dispositions and abilities,” in Teaching thinking skills: Theory and practice , eds J. Baron and R. Sternberg (New York, NY: Freeman), 9–26.

Ennis, R. H. (2018). Critical thinking across the curriculum: A vision. Topoi 37, 165–184. doi: 10.1007/s11245-016-9401-4

Erdfelder, E., Faul, F., and Buchner, A. (1996). GPOWER: A general power analysis program. Behav. Res. Methods Instruments Comput. 28, 1–11. doi: 10.3758/bf03203630

Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction (Research findings and recommendations). Newark, DE: American Philosophical Association.

Facione, P. A., and Facione, N. C. (2014). The holistic critical thinking scoring rubric: A tool for developing and evaluating critical thinking. California: California Academic Press.

Faul, F., and Erdfelder, E. (1992). GPOWER: A priori, post-hoc, and compromise power analyses for MS-DOS. Bonn: Bonn University.

Foster, M. I., and Keane, M. T. (2015). Why some surprises are more surprising than others: Surprises as a metacognitive sense of explanatory difficulty. Cognit. Psychol. 81, 74–116. doi: 10.1016/j.cogpsych.2015.08.004

Franco, G. M., Muis, K. R., Kendeou, P., Wang, X., Ranellucci, J., and Sampasivam, L. (2012). Examining the influences of epistemic beliefs and knowledge representations on cognitive processing and conceptual change when learning physics. Learning Instruct. 22, 62–77. doi: 10.1016/j.learninstruc.2011.06.003

Gordon, J. (2000). Congruency in defining critical thinking by nurse educators and non-nurse scholars. J. Nurs. Educat. 39, 340–351.

Greene, J. A., and Yu, S. B. (2014). Modeling and measuring epistemic cognition: A qualitative re-investigation. Contemp. Educat. Psychol. 39, 12–28. doi: 10.1016/j.cedpsych.2013.10.002

Greene, J. A., Azevedo, R., and Torney-Purta, J. (2008). Modeling epistemic and ontological cognition: Philosophical perspectives and methodological directions. Educat. Psychol. 43, 142–160.

Greene, J. A., Muis, K. R., and Pieschl, S. (2010). The role of epistemic beliefs in students’ self-regulated learning with computer-based learning environments: Conceptual and methodological issues. Educat. Psychol. 45, 245–257. doi: 10.1080/00461520.2010.515932

Greene, J. A., Sandoval, W. A., and Bråten, I. (2016). Handbook of epistemic cognition. New York, NY: Routledge.

Greene, J. A., Yu, S. B., and Copeland, D. Z. (2014). Measuring critical components of digital literacy and their relationships with learning. Comput. Educat. 76, 55–69. doi: 10.1016/j.compedu.2014.03.008

Gross, J. J. (2014). “Emotion regulation: Conceptual and empirical foundations,” in Handbook of emotion regulation , 2nd Edn, ed. J. J. Gross (New York, NY: Guilford), 3–20.

Halpern, D. F. (2014). Thought and Knowledge , 5th Edn. London: Psychology Press.

Hammer, D., and Elby, A. (2002). “On the form of personal epistemology,” in Personal epistemology: The psychology of beliefs about knowledge and knowing , eds B. K. Hofer and P. R. Pintrich (Mahwah, NJ: Lawrence Erlbaum Associates), 169–190.

Harley, J. M., Pekrun, R., Taxer, J. L., and Gross, J. J. (2019). Emotion regulation in achievement situations: An integrated model. Educat. Psychol. 54, 106–126. doi: 10.1080/00461520.2019.1587297

Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach , 2nd Edn. New York, NY: Guilford Press.

Keywords : epistemic cognition, epistemic emotions, critical thinking, argumentation, socio-scientific issues

Citation: Muis KR, Chevrier M, Denton CA and Losenno KM (2021) Epistemic Emotions and Epistemic Cognition Predict Critical Thinking About Socio-Scientific Issues. Front. Educ. 6:669908. doi: 10.3389/feduc.2021.669908

Received: 19 February 2021; Accepted: 25 March 2021; Published: 14 April 2021.

Copyright © 2021 Muis, Chevrier, Denton and Losenno. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Krista R. Muis, [email protected]


Internet Encyclopedia of Philosophy

Critical Thinking

Critical Thinking is the process of using and assessing reasons to evaluate statements, assumptions, and arguments in ordinary situations. The goal of this process is to help us have good beliefs, where “good” means that our beliefs meet certain goals of thought, such as truth, usefulness, or rationality. Critical thinking is widely regarded as a species of informal logic, although critical thinking makes use of some formal methods. In contrast with formal reasoning processes that are largely restricted to deductive methods—decision theory, logic, statistics—the process of critical thinking allows a wide range of reasoning methods, including formal and informal logic, linguistic analysis, experimental methods of the sciences, historical and textual methods, and philosophical methods, such as Socratic questioning and reasoning by counterexample.

The goals of critical thinking are also more diverse than those of formal reasoning systems. While formal methods focus on deductive validity and truth, critical thinkers may evaluate a statement’s truth, its usefulness, its religious value, its aesthetic value, or its rhetorical value. Because critical thinking arose primarily from the Anglo-American philosophical tradition (also known as “analytic philosophy”), contemporary critical thinking is largely concerned with a statement’s truth. But some thinkers, such as Aristotle (in Rhetoric ), give substantial attention to rhetorical value.

The primary subject matter of critical thinking is the proper use and goals of a range of reasoning methods, how they are applied in a variety of social contexts, and errors in reasoning. This article also discusses the scope and virtues of critical thinking.

Critical thinking should not be confused with Critical Theory. Critical Theory refers to a way of doing philosophy that involves a moral critique of culture. A “critical” theory, in this sense, is a theory that attempts to disprove or discredit a widely held or influential idea or way of thinking in society. Thus, critical race theorists and critical gender theorists offer critiques of traditional views and latent assumptions about race and gender. Critical theorists may use critical thinking methodology, but their subject matter is distinct, and they also may offer critical analyses of critical thinking itself.

Table of Contents

The process of evaluating a statement traditionally begins with making sure we understand it; that is, a statement must express a clear meaning. A statement is generally regarded as clear if it expresses a proposition , which is the meaning the author of that statement intends to express, including definitions, referents of terms, and indexicals, such as subject, context, and time. There is significant controversy over what sort of “entity” propositions are, whether abstract objects or linguistic constructions or something else entirely. Whatever its metaphysical status, it is used here simply to refer to whatever meaning a speaker intends to convey in a statement.

The difficulty with identifying intended propositions is that we typically speak and think in natural languages (English, Swedish, French), and natural languages can be misleading. For instance, two different sentences in the same natural language may express the same proposition, as in these two English sentences:

Jamie is taller than his father.

Jamie’s father is shorter than he.

Further, the same sentence in a natural language can express more than one proposition depending on who utters it at a time:

I am shorter than my father right now.

The pronoun “I” is an indexical; it picks out, or “indexes,” whoever utters the sentence and, therefore, expresses a different proposition for each new speaker who utters it. Similarly, “right now” is a temporal indexical; it indexes the time the sentence is uttered. The proposition it is used to express changes each new time the sentence is uttered and, therefore, may have a different truth value at different times (as, say, the speaker grows taller: “I am now five feet tall” may be true today, but false a year from now). Other indexical terms that can affect the meaning of the sentence include other pronouns (he, she, it), demonstratives (that, this), and the definite article (the).
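To make the role of context concrete, an indexical sentence can be modeled as a function from a context of utterance to a proposition. The following Python sketch is purely illustrative; the speakers and times are invented for the example:

```python
# Toy model of an indexical sentence: the same sentence type expresses a
# different proposition depending on who utters it and when.

def proposition(context):
    """Resolve the indexicals in 'I am shorter than my father right now'."""
    speaker = context["speaker"]  # resolves the indexical "I"
    time = context["time"]        # resolves the indexical "right now"
    return f"{speaker} is shorter than {speaker}'s father at {time}"

ctx1 = {"speaker": "Jamie", "time": "2020"}
ctx2 = {"speaker": "Alex", "time": "2021"}

print(proposition(ctx1))  # one proposition...
print(proposition(ctx2))  # ...and a different one, from the same sentence
```

The sentence itself never changes; only the context does, and with it the proposition expressed.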

Further still, different sentences in different natural languages may express the same proposition . For example, all of the following express the proposition “Snow is white”:

Snow is white. (English)

Der Schnee ist weiss. (German)

La neige est blanche. (French)

La neve é bianca. (Italian)

Finally, statements in natural languages are often vague or ambiguous , either of which can obscure the propositions actually intended by their authors. And even in cases where they are not vague or ambiguous, statements’ truth values sometimes vary from context to context. Consider the following example.

The English statement, “It is heavy,” includes the pronoun “it,” which (when used without contextual clues) is ambiguous because it can index any impersonal subject. If, in this case, “it” refers to the computer on which you are reading this right now, its author intends to express the proposition, “The computer on which you are reading this right now is heavy.” Further, the term “heavy” reflects an unspecified standard of heaviness (again, if contextual clues are absent). Assuming we are talking about the computer, it may be heavy relative to other computer models but not to automobiles. Further still, even if we identify or invoke a standard of heaviness by which to evaluate the appropriateness of its use in this context, there may be no weight at which an object is rightly regarded as heavy according to that standard. (For instance, is an object heavy because it weighs 5.3 pounds but not if it weighs 5.2 pounds? Or is it heavy when it is heavier than a mouse but lighter than an anvil?) This means “heavy” is a vague term. In order to construct a precise statement, vague terms (heavy, cold, tall) must often be replaced with terms expressing an objective standard (pounds, temperature, feet).
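This process of precisification can be sketched in a few lines of Python. The threshold below is stipulated purely for illustration; as the text notes, nothing privileges one cutoff over a nearby one, which is exactly what makes the original predicate vague:

```python
# Replacing the vague predicate "heavy" with an explicit, objective standard:
# a stipulated threshold in pounds.

HEAVY_THRESHOLD_LB = 5.3  # arbitrary cutoff; nothing privileges 5.3 over 5.2

def is_heavy(weight_lb: float) -> bool:
    """Precise (but stipulative) replacement for the vague term 'heavy'."""
    return weight_lb >= HEAVY_THRESHOLD_LB

print(is_heavy(5.3))  # True
print(is_heavy(5.2))  # False -- a 0.1 lb difference flips the verdict
```

The function is perfectly precise, but its precision comes from stipulation, not from any fact about heaviness itself.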

Part of the challenge of critical thinking is to clearly identify the propositions (meanings) intended by those making statements so we can effectively reason about them. The rules of language help us identify when a term or statement is ambiguous or vague, but they cannot, by themselves, help us resolve ambiguity or vagueness. In many cases, this requires assessing the context in which the statement is made or asking the author what she intends by the terms. If we cannot discern the meaning from the context and we cannot ask the author, we may stipulate a meaning, but this requires charity, to stipulate a plausible meaning, and humility, to admit when we discover that our stipulation is likely mistaken.

2. Argument and Evaluation

Once we are satisfied that a statement is clear, we can begin evaluating it. A statement can be evaluated according to a variety of standards. Commonly, statements are evaluated for truth, usefulness, or rationality. The most common of these goals is truth, so that is the focus of this article.

The truth of a statement is most commonly evaluated in terms of its relation to other statements and direct experiences. If a statement follows from or can be inferred from other statements that we already have good reasons to believe, then we have a reason to believe that statement. For instance, the statement “The ball is blue” can be derived from “The ball is blue and round.” Similarly, if a statement seems true in light of, or is implied by, an experience, then we have a reason to believe that statement. For instance, the experience of seeing a red car is a reason to believe, “The car is red.” (Whether these reasons are good enough for us to believe is a further question about justification , which is beyond the scope of this article, but see “ Epistemic Justification .”) Any statement we derive in these ways is called a conclusion . Though we regularly form conclusions from other statements and experiences—often without thinking about it—there is still a question of whether these conclusions are true: Did we draw those conclusions well? A common way to evaluate the truth of a statement is to identify those statements and experiences that support our conclusions and organize them into structures called arguments . (See also, “ Argument .”)

An argument is one or more statements (called premises ) intended to support the truth of another statement (the conclusion ). Premises comprise the evidence offered in favor of the truth of a conclusion. It is important to entertain any premises that are intended to support a conclusion, even if the attempt is unsuccessful. Unsuccessful attempts at supporting a proposition constitute bad arguments, but they are still arguments. The support intended for the conclusion may be formal or informal. In a formal, or deductive, argument, an arguer intends to construct an argument such that, if the premises are true, the conclusion must be true. This strong relationship between premises and conclusion is called validity . This relationship between the premises and conclusion is called “formal” because it is determined by the form (that is, the structure) of the argument (see §3). In an informal, or inductive , argument, the conclusion may be false even if the premises are true. In other words, whether an inductive argument is good depends on something more than the form of the argument. Therefore, all inductive arguments are invalid, but this does not mean they are bad arguments. Even if an argument is invalid, its premises can increase the probability that its conclusion is true. So, the form of inductive arguments is evaluated in terms of the strength the premises confer on the conclusion, and stronger inductive arguments are preferred to weaker ones (see §4). (See also, “ Deductive and Inductive Arguments .”)

Psychological states, such as sensations, memories, introspections, and intuitions often constitute evidence for statements. Although these states are not themselves statements, they can be expressed as statements. And when they are, they can be used in and evaluated by arguments. For instance, my seeing a red wall is evidence for me that, “There is a red wall,” but the physiological process of seeing is not a statement. Nevertheless, the experience of seeing a red wall can be expressed as the proposition, “I see a red wall” and can be included in an argument such as the following:

1. I see a red wall.
2. Therefore, there is a red wall.

This is an inductive argument, though not a strong one. We do not yet know whether seeing something (under these circumstances) is reliable evidence for the existence of what I am seeing. Perhaps I am “seeing” in a dream, in which case my seeing is not good evidence that there is a wall. For similar reasons, there is also reason to doubt whether I am actually seeing. To be cautious, we might say we seem to see a red wall.

To be good , an argument must meet two conditions: the conclusion must follow from the premises—either validly or with a high degree of likelihood—and the premises must be true. If the premises are true and the conclusion follows validly, the argument is sound . If the premises are true and the premises make the conclusion probable (either objectively or relative to alternative conclusions), the argument is cogent .

Here are two examples:

Example 1:

1. The sun is larger than the Earth.
2. The Earth is larger than the moon.
3. Therefore, the sun is larger than the moon.

In example 1, the premises are true. And since “larger than” is a transitive relation, the structure of the argument guarantees that, if the premises are true, the conclusion must be true. This means the argument is also valid. Since it is both valid and has true premises, this deductive argument is sound.

Example 2:

1. It is almost always sunny in Montana.
2. I will be visiting Montana in February.
3. Therefore, it will probably be sunny when I visit.

In example 2, premise 1 is true, and let us assume premise 2 is true. The phrase “almost always” indicates that a majority of days in Montana are sunny, so that, for any day you choose, it will probably be a sunny day. Premise 2 says I am choosing days in February to visit. Together, these premises strongly support (though they do not guarantee) the conclusion that it will be sunny when I am there, and so this inductive argument is cogent.
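The definitions of soundness and cogency can be summarized as a small decision procedure. The Python sketch below is one possible encoding of my own; the three input flags stand in for the separate evaluations of validity, premise truth, and inductive strength discussed above:

```python
# Classify an argument by the two conditions for being "good":
#   sound  = deductively valid AND all premises true
#   cogent = all premises true AND premises make the conclusion probable

def classify(valid: bool, premises_true: bool, strong: bool) -> str:
    if premises_true and valid:
        return "sound"
    if premises_true and strong:
        return "cogent"
    return "neither"

print(classify(valid=True,  premises_true=True,  strong=False))  # sound
print(classify(valid=False, premises_true=True,  strong=True))   # cogent
print(classify(valid=True,  premises_true=False, strong=False))  # neither
```

Note that a valid argument with a false premise classifies as “neither”: validity alone does not make an argument good.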

In some cases, arguments will be missing some important piece, whether a premise or a conclusion. For instance, imagine someone says, “Well, she asked you to go, so you have to go.” The idea that you have to go does not follow logically from the fact that she asked you to go without more information. What is it about her asking you to go that implies you have to go? Arguments missing important information are called enthymemes . A crucial part of critical thinking is identifying missing or assumed information in order to effectively evaluate an argument. In this example, the missing premise might be that, “She is your boss, and you have to do what she asks you to do.” Or it might be that, “She is the woman you are interested in dating, and if you want a real chance at dating her, you must do what she asks.” Before we can evaluate whether her asking implies that you have to go, we need to know this missing bit of information. And without that missing bit of information, we can simply reply, “That conclusion doesn’t follow from that premise.”

The two categories of reasoning associated with soundness and cogency—formal and informal, respectively—are considered, by some, to be the only two types of argument. Others add a third category, called abductive reasoning, according to which one reasons according to the rules of explanation rather than the rules of inference . Those who do not regard abductive reasoning as a third, distinct category typically regard it as a species of informal reasoning. Although abductive reasoning has unique features, here it is treated, for reasons explained in §4d, as a species of informal reasoning, but little hangs on this characterization for the purposes of this article.

3. Formal Reasoning

Although critical thinking is widely regarded as a type of informal reasoning, it nevertheless makes substantial use of formal reasoning strategies. Formal reasoning is deductive , which means an arguer intends to infer or derive a proposition from one or more propositions on the basis of the form or structure exhibited by the premises. Valid argument forms guarantee that particular propositions can be derived from them. Some forms look like they make such guarantees but fail to do so (we identify these as formal fallacies in §5a). If an arguer intends or supposes that a premise or set of premises guarantee a particular conclusion, we may evaluate that argument form as deductive even if the form fails to guarantee the conclusion, and is thus discovered to be invalid.

Before continuing in this section, it is important to note that, while formal reasoning provides a set of strict rules for drawing valid inferences, it cannot help us determine the truth of many of our original premises or our starting assumptions. And in fact, very little critical thinking that occurs in our daily lives (unless you are a philosopher, engineer, computer programmer, or statistician) involves formal reasoning. When we make decisions about whether to board an airplane, whether to move in with our significant others, whether to vote for a particular candidate, whether it is worth it to drive ten miles per hour over the speed limit even if I am fairly sure I will not get a ticket, whether it is worth it to cheat on a diet, or whether we should take a job overseas, we are reasoning informally. We are reasoning with imperfect information (I do not know much about my flight crew or the airplane’s history), with incomplete information (no one knows what the future is like), and with a number of built-in biases, some conscious (I really like my significant other right now), others unconscious (I have never gotten a ticket before, so I probably will not get one this time). Readers who are more interested in these informal contexts may want to skip to §4.

An argument form is a template that includes variables that can be replaced with sentences. Consider the following form (found within the formal system known as sentential logic ):

1. If p , then q .
2. p .
3. Therefore, q .

This form was named modus ponens (Latin, “method of affirming”) by medieval philosophers. p and q are variables that can be replaced with any proposition, however simple or complex. And as long as the variables are replaced consistently (that is, each instance of p is replaced with the same sentence and the same for q ), the conclusion (line 3), q , follows from these premises. To be more precise, the inference from the premises to the conclusion is valid . “Validity” describes a particular relationship between the premises and the conclusion, namely: in all cases , the conclusion follows necessarily from the premises, or, to use more technical language, the premises logically guarantee an instance of the conclusion.
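Validity in this sense can be checked mechanically: enumerate every assignment of truth values to the variables and look for a counterexample, that is, an assignment on which all premises are true and the conclusion false. The following Python sketch is illustrative only; encoding “if p then q” as “not p or q” anticipates the material conditional of sentential logic:

```python
from itertools import product

# Brute-force validity check: an argument form is valid iff no assignment of
# truth values makes all premises true and the conclusion false.
def is_valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample
    return True

# Modus ponens: "if p then q; p; therefore q".
mp_premises = [
    lambda e: (not e["p"]) or e["q"],  # "if p then q" as a material conditional
    lambda e: e["p"],
]
print(is_valid(mp_premises, lambda e: e["q"], ["p", "q"]))  # True

# By contrast, "if p then q; q; therefore p" (affirming the consequent) fails:
ac_premises = [
    lambda e: (not e["p"]) or e["q"],
    lambda e: e["q"],
]
print(is_valid(ac_premises, lambda e: e["p"], ["p", "q"]))  # False
```

The exhaustive search is exactly the “in all cases” of the definition: no assignment of truth values provides a counterexample to modus ponens.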

Notice we have said nothing yet about truth . As critical thinkers, we are interested, primarily, in evaluating the truth of sentences that express propositions, but all we have discussed so far is a type of relationship between premises and conclusion (validity). This formal relationship is analogous to grammar in natural languages and is known in both fields as syntax . A sentence is grammatically correct if its syntax is appropriate for that language (in English, for example, a grammatically correct simple sentence has a subject and a predicate—“He runs.” “Laura is Chairperson.”—and it is grammatically correct regardless of what subject or predicate is used—“Jupiter sings.”—and regardless of whether the terms are meaningful—“Geflorble rowdies.”). Whether a sentence is meaningful, and therefore, whether it can be true or false, depends on its semantics , which refers to the meaning of individual terms (subjects and predicates) and the meaning that emerges from particular orderings of terms. Some terms are meaningless—geflorble; rowdies—and some orderings are meaningless even though their terms are meaningful—“Quadruplicity drinks procrastination” and “Colorless green ideas sleep furiously.”

Despite the ways that syntax and semantics come apart, if sentences are meaningful, then syntactic relationships between premises and conclusions allow reasoners to infer truth values for conclusions. Because of this, a more common definition of validity is this: it is not possible for all the premises to be true and the conclusion false . Formal logical systems in which syntax allows us to infer semantic values are called truth-functional or truth-preserving —proper syntax preserves truth throughout inferences.

The point of this is to note that formal reasoning only tells us what is true if we already know our premises are true. It cannot tell us whether our experiences are reliable or whether scientific experiments tell us what they seem to tell us. Logic can be used to help us determine whether a statement is true, but only if we already know some true things. This is why a broad conception of critical thinking is so important: we need many different tools to evaluate whether our beliefs are any good.

Consider, again, the form modus ponens , and replace p with “It is a cat” and q with “It is a mammal”:

1. If it is a cat, then it is a mammal.
2. It is a cat.
3. Therefore, it is a mammal.

In this case, we seem to “see” (in a metaphorical sense of see ) that the premises guarantee the truth of the conclusion. On reflection, it is also clear that the premises might not be true; for instance, if “it” picks out a rock instead of a cat, premise 1 is still true, but premise 2 is false. It is also possible for the conclusion to be true when the premises are false. For instance, if the “it” picks out a dog instead of a cat, the conclusion “It is a mammal” is true. But in that case, the premises do not guarantee that conclusion; they do not constitute a reason to believe the conclusion is true.

Summing up, an argument is valid if its premises guarantee its conclusion syntactically (the conclusion can be derived from the premises by the rules of the system), or, semantically, if it is not possible for its premises to be true and its conclusion false. Logic is truth-preserving but not truth-detecting; we still need evidence that our premises are true to use logic effectively.
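The semantic test for validity can be checked mechanically for small argument forms. The sketch below (in Python; the helper name `valid` is invented for illustration) simply searches every truth-value assignment for a counterexample row, one in which all premises are true and the conclusion false:

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# Modus ponens: from (p -> q) and p, infer q.
# The material conditional "p -> q" is written (not p) or q.
modus_ponens = valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: p],
    conclusion=lambda p, q: q,
    n_vars=2,
)
print(modus_ponens)  # True: the form is valid

# Affirming the consequent: from (p -> q) and q, infer p.
affirming = valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: q],
    conclusion=lambda p, q: p,
    n_vars=2,
)
print(affirming)  # False: q can be true while p is false
```

Because the check enumerates every assignment, it works for any argument expressible in propositional terms, though the number of rows doubles with each added variable.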

A Brief Technical Point

Some readers might find it worth noting that the semantic definition of validity has two counterintuitive consequences. First, it implies that any argument with a necessarily true conclusion is valid. Notice that the condition is phrased hypothetically: if the premises are true, then the conclusion cannot be false. This condition is met whenever the conclusion cannot be false, as in this illustrative example:

1. Grass is green.
2. Therefore, either it is raining or it is not raining.

This is because the hypothetical (or “conditional”) statement would still be true even if the premises were false:

1. Grass is purple.
2. Therefore, either it is raining or it is not raining.

It is true of both arguments that if the premises were true, the conclusion would be true, since the conclusion is true no matter what.

Second, the semantic formulation also implies that any argument with necessarily false premises is valid. The semantic condition for validity is met if the premises cannot be true, as in this illustrative example:

1. It is raining and it is not raining.
2. Therefore, grass is purple.

In this case, if the premise were true, the conclusion could not be false (this is because anything follows syntactically from a contradiction), and therefore, the argument is valid. There is nothing particularly problematic about these two consequences. But they highlight unexpected implications of our standard formulations of validity, and they show why there is more to good arguments than validity.
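Both counterintuitive consequences can be verified with the same brute-force search for counterexample rows: a contradiction never yields a row with all premises true, and a tautology never yields a row with a false conclusion. A Python sketch (helper name and sentences invented for illustration):

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False
    return True

# "It is raining and it is not raining; therefore, q" -- any q at all.
contradictory = valid(
    premises=[lambda p, q: p and not p],
    conclusion=lambda p, q: q,
    n_vars=2,
)
print(contradictory)  # True: no row makes the contradictory premise true

# "q; therefore, it is raining or it is not raining" -- tautological conclusion.
tautological = valid(
    premises=[lambda p, q: q],
    conclusion=lambda p, q: p or not p,
    n_vars=2,
)
print(tautological)  # True: no row makes the conclusion false
```

This is one concrete way to see why validity alone does not make an argument good: the premises must also be true and relevant.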

Despite these counterintuitive implications, valid reasoning is essential to thinking critically because it is a truth-preserving strategy: if deductive reasoning is applied to true premises, true conclusions will result.

There are a number of types of formal reasoning, but here we review only some of the most common: categorical logic, propositional logic, modal logic, and predicate logic.

a. Categorical Logic

Categorical logic is formal reasoning about categories or collections of subjects, where “subjects” refers to anything that can be regarded as a member of a class, whether objects, properties, or events, or even a single object, property, or event. Categorical logic employs the quantifiers “all,” “some,” and “none” to refer to the members of categories, and categorical propositions are formulated in four ways:

A claims: All As are Bs (where the capitals “A” and “B” represent categories of subjects).

E claims: No As are Bs.

I claims: Some As are Bs.

O claims: Some As are not Bs.
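For readers who find set notation helpful, the four categorical forms correspond to simple operations on finite sets. The sketch below (Python; the categories and their members are invented examples) reads A, E, I, and O claims as subset, disjointness, overlap, and difference tests:

```python
# The four categorical forms as claims about finite sets.
def A_claim(A, B): return A <= B            # All As are Bs
def E_claim(A, B): return not (A & B)       # No As are Bs
def I_claim(A, B): return bool(A & B)       # Some As are Bs
def O_claim(A, B): return bool(A - B)       # Some As are not Bs

cats    = {"felix", "tom"}
mammals = {"felix", "tom", "rex", "willy"}
fish    = {"nemo"}

print(A_claim(cats, mammals))  # True: all cats are mammals
print(E_claim(cats, fish))     # True: no cats are fish
print(O_claim(mammals, cats))  # True: some mammals are not cats

# A categorical syllogism: All cats are mammals; all mammals are animals;
# therefore, all cats are animals.
animals = mammals | fish
print(A_claim(cats, mammals) and A_claim(mammals, animals)
      and A_claim(cats, animals))  # True
```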

Categorical syllogisms are syllogisms (two-premised formal arguments) that employ categorical propositions. Here are two examples:

1. All mammals are animals.
2. All cats are mammals.
3. Therefore, all cats are animals.

1. No fish are mammals.
2. Some pets are fish.
3. Therefore, some pets are not mammals.

There are interesting limitations on what categorical logic can do. For instance, if one premise says that, “Some As are not Bs,” may we infer that some As are Bs, in what is known as an “existential assumption”? Aristotle seemed to think so ( De Interpretatione ), but this cannot be decided within the rules of the system. Further, and counterintuitively, it would mean that a proposition such as, “Some bachelors are not married,” is false since it implies that some bachelors are married.

Another limitation on categorical logic is that arguments with more than three categories cannot be easily evaluated for validity. The standard method for evaluating the validity of categorical syllogisms is the Venn diagram (named after John Venn, who introduced it in 1881), which expresses categorical propositions in terms of two overlapping circles and categorical arguments in terms of three overlapping circles, each circle representing a category of subjects.

[Figure: a Venn diagram for a claim (two overlapping circles) and a Venn diagram for an argument (three overlapping circles)]

A, B, and C represent categories of objects, properties, or events. The symbol “∩” comes from mathematical set theory and indicates intersection: “A∩B” names those As that are also Bs, and vice versa.

Though there are ways of constructing Venn diagrams with more than three categories, determining the validity of these arguments using Venn diagrams is very difficult (and often requires computers). These limitations led to the development of more powerful systems of formal reasoning.

b. Propositional Logic

Propositional, or sentential, logic has advantages and disadvantages relative to categorical logic. It is more powerful than categorical logic in that it is not restricted in the number of terms it can evaluate, and therefore, it is not restricted to the syllogistic form. But it is weaker than categorical logic in that it has no operators for quantifying over subjects, such as “all” or “some.” For those, we must appeal to predicate logic (see §3d below).

Basic propositional logic involves formal reasoning about propositions (as opposed to categories), and its most basic unit of evaluation is the atomic proposition. “Atom” means the smallest indivisible unit of something, and simple English statements (subject + predicate) are atomic wholes because if either part is missing, what remains is no longer a statement and therefore can no longer express a proposition. Atomic propositions are simple subject-predicate combinations, for instance, “It is a cat” and “I am a mammal.” Variable letters such as p and q in argument forms are replaced with semantically rich constants, indicated by capital letters, such as A and B. Consider modus ponens again:

Form:
1. p ⊃ q
2. p
3. Therefore, q

Semantic Replacement:
1. If it is a cat, then it is a mammal. (C ⊃ M)
2. It is a cat. (C)
3. Therefore, it is a mammal. (M)

As you can see from premise 1 of the Semantic Replacement, atomic propositions can be combined into more complex propositions using symbols that represent their logical relationships (such as “If…, then…”). These symbols are called “operators” or “connectives.” The five standard operators in basic propositional logic are:

~   negation (“not”)
&   conjunction (“and”)
∨   disjunction (“or”)
⊃   the material conditional (“if…, then…”)
≡   the biconditional (“…if and only if…”)

These operations allow us to identify valid relations among propositions: that is, they allow us to formulate a set of rules by which we can validly infer propositions from and validly replace them with others. These rules of inference (such as modus ponens ; modus tollens ; disjunctive syllogism) and rules of replacement (such as double negation; contraposition; DeMorgan’s Law) comprise the syntax of propositional logic, guaranteeing the validity of the arguments employing them.

Two Rules of Inference:

Modus ponens: from p ⊃ q and p, infer q.
Modus tollens: from p ⊃ q and ~q, infer ~p.

Two Rules of Replacement:

Double negation: p may be replaced with ~~p, and vice versa.
DeMorgan’s Law: ~(p & q) may be replaced with (~p ∨ ~q), and vice versa.
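Rules of replacement can also be checked mechanically: two formulas are logically equivalent just in case they agree on every truth-value assignment. A Python sketch (the helper name is invented) verifying DeMorgan’s Law and double negation:

```python
from itertools import product

# Two formulas are equivalent iff they agree on every truth-value assignment.
def equivalent(f, g):
    return all(f(p, q) == g(p, q)
               for p, q in product([True, False], repeat=2))

# De Morgan's Law: ~(p & q) is equivalent to (~p v ~q).
demorgan = equivalent(lambda p, q: not (p and q),
                      lambda p, q: (not p) or (not q))

# Double negation: ~~p is equivalent to p.
double_negation = equivalent(lambda p, q: not (not p),
                             lambda p, q: p)

print(demorgan, double_negation)  # True True
```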

For more, see “ Propositional Logic .”

c. Modal Logic

Standard propositional logic does not capture every type of proposition we wish to express (recall that it does not allow us to evaluate categorical quantifiers such as “all” or “some”). It also does not allow us to evaluate propositions expressed as possibly true or necessarily true, modifications that are called modal operators or modal quantifiers .

Modal logic refers to a family of formal propositional systems, the most prominent of which includes operators for necessity (□) and possibility (◊) (see §3e below for examples of other modal systems). If a proposition, p, is possibly true, ◊p, it may or may not be true. If p is necessarily true, □p, it must be true; it cannot be false. If p is necessarily false, either ~◊p or □~p, it must be false; it cannot be true.

There is a variety of modal systems, the weakest of which is called K (after Saul Kripke, who exerted important influence on the development of modal logic), and it involves only two additional rules:

Necessitation Rule:   If  A  is a theorem of  K , then so is □ A .

Distribution Axiom:  □(A ⊃ B) ⊃ (□A ⊃ □B).  [If it is necessarily the case that if A, then B, then if it is necessarily the case that A, it is necessarily the case that B.]

Other systems maintain these rules and add others for increasing strength. For instance, the (S4) modal system includes axiom (4):

(4)  □ A ⊃ □□ A   [If it is necessarily the case that A, then it is necessarily necessary that A.]

An influential and intuitive way of thinking about modal concepts is the idea of “possible worlds” (see Plantinga, 1974; Lewis 1986). A world is just the set of all true propositions. The actual world is the set of all actually true propositions—everything that was true, is true, and (depending on what you believe about the future) will be true. A possible world is a way the actual world might have been. Imagine you wore green underwear today. The actual world might have been different in that way: you might have worn blue underwear. In this interpretation of modal quantifiers, there is a possible world in which you wore blue underwear instead of green underwear. And for every possibility like this, and every combination of those possibilities, there is a distinct possible world.

If a proposition is not possible, then there is no possible world in which that proposition is true. The statement, “That object is red all over and blue all over at the same time,” is not true in any possible world. Therefore, it is not possible (~◊P), or, in other words, necessarily false (□~P). If a proposition is true in all possible worlds, it is necessarily true. For instance, the proposition, “Two plus two equals four,” is true in all possible worlds, so it is necessarily true (□P) or not possibly false (~◊~P).
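The possible-worlds reading of the modal operators can be sketched with a toy model in which each world is just the set of atomic propositions true at it. The sketch below ignores accessibility relations (so it corresponds to a strong system such as S5 rather than K), and the worlds and propositions are invented:

```python
# A toy possible-worlds model: each world is the set of atoms true at it.
worlds = [
    {"green_underwear"},
    {"blue_underwear"},
    {"blue_underwear", "raining"},
]

# Necessity = truth at every world; possibility = truth at some world.
def possibly(p):    return any(p(w) for w in worlds)
def necessarily(p): return all(p(w) for w in worlds)

print(possibly(lambda w: "blue_underwear" in w))   # True: some world has it
print(necessarily(lambda w: "raining" in w))       # False: not true everywhere
# A tautology is true at every world, hence necessary:
print(necessarily(lambda w: "raining" in w or "raining" not in w))  # True
```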

All modal systems have a number of controversial implications, and there is not space to review them here. Here we need only note that modal logic is a type of formal reasoning that increases the power of propositional logic to capture more of what we attempt to express in natural languages. (For more, see “ Modal Logic: A Contemporary View .”)

d. Predicate Logic

Predicate logic, in particular, first-order predicate logic, is even more powerful than propositional logic. Whereas propositional logic treats propositions as atomic wholes, predicate logic allows reasoners to identify and refer to subjects of propositions, independently of their predicates. For instance, whereas the proposition, “Susan is witty,” would be replaced with a single upper-case letter, say “S,” in propositional logic, predicate logic would assign the subject “Susan” a lower-case letter, s, and the predicate “is witty” an upper-case letter, W, and the translation (or formula ) would be: Ws.

In addition to distinguishing subjects and predicates, first-order predicate logic allows reasoners to quantify over subjects. The quantifiers in predicate logic are “All…,” which is comparable to the “All” quantifier in categorical logic and is sometimes symbolized with an upside-down A: ∀ (though it may not be symbolized at all), and “There is at least one…,” which is comparable to the “Some” quantifier in categorical logic and is symbolized with a backward E: ∃. E and O claims are formed by employing the negation operator from propositional logic. In this formal system, the proposition, “Someone is witty,” for example, has the form: There is an x, such that x has the property of being witty, which is symbolized: (∃x)(Wx). Similarly, the proposition, “Everyone is witty,” has the form: For all x, x has the property of being witty, which is symbolized (∀x)(Wx) or, without the ∀: (x)(Wx).
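Over a finite domain, the two quantifiers behave exactly like the familiar any/all operations of programming languages. A Python sketch (the domain and the predicate are invented examples):

```python
# (Ex)(Wx) corresponds to any(); (x)(Wx) corresponds to all().
domain = ["susan", "tom", "maria"]
witty = {"susan", "maria"}

def W(x):
    return x in witty  # the predicate "x is witty"

print(any(W(x) for x in domain))      # True:  (Ex)(Wx) -- someone is witty
print(all(W(x) for x in domain))      # False: (x)(Wx)  -- not everyone is
# An E claim, "no one is witty," combines negation with a quantifier:
print(not any(W(x) for x in domain))  # False: someone is witty after all
```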

Predicate derivations are conducted according to the same rules of inference and replacement as propositional logic, along with four additional rules for adding and eliminating quantifiers.

Second-order predicate logic extends first-order predicate logic to allow critical thinkers to quantify over and draw inferences about subjects and predicates, including relations among subjects and predicates. In both first- and second-order logic, predicates typically take the form of properties (one-place predicates) or relations (two-place predicates), though there is no upper limit on place numbers. Second-order logic allows us to treat both as falling under quantifiers, for instance, everything that is (specifically, that has the property of being) a tea cup, or the claim that everything that is a bachelor is unmarried.

e. Other Formal Systems

It is worth noting here that the formal reasoning systems we have seen thus far (categorical, propositional, and predicate) all presuppose that truth is bivalent, that is, two-valued. The two values critical thinkers are most often concerned with are true and false, but any bivalent system is subject to the rules of inference and replacement of propositional logic; the most common alternative pair of values is the binary code of 1s and 0s used in computer programming. All logics that presuppose bivalence are called classical logics. In the next section, we see that not all formal systems are bivalent; there are non-classical logics. The existence of non-classical systems raises interesting philosophical questions about the nature of truth and the legitimacy of our basic rules of reasoning, but these questions are too far afield for this context. Many philosophers regard bivalent systems as legitimate for all but the most abstract and purely formal contexts. Included below is a brief description of three of the most common non-classical logics.

Tense logic , or temporal logic, is a formal modal system developed by Arthur Prior (1957, 1967, 1968) to accommodate propositional language about time. For example, in addition to standard propositional operators, tense logic includes four operators for indexing times: P “It has at some time been the case that…”; F “It will at some time be the case that…”; H “It has always been the case that…”; and G “It will always be the case that….”

Many-valued logic , or n -valued logic, is a family of formal logical systems that attempts to accommodate intuitions that suggest some propositions have values in addition to true and false. These are often motivated by intuitions that some propositions have neither of the classic truth values; their truth value is indeterminate (not just undeterminable, but neither true nor false), for example, propositions about the future such as, “There will be a sea battle tomorrow.” If the future does not yet exist, there is no fact about the future, and therefore, nothing for a proposition to express.

Fuzzy logic is a type of many-valued logic developed out of Lotfi Zadeh’s (1965) work on mathematical sets. Fuzzy logic attempts to accommodate intuitions that suggest some propositions have truth value in degrees, that is, some degree of truth between true and false. It is motivated by concerns about vagueness in reality, for example whether a certain color is red or some degree of red, or whether some temperature is hot or some degree of hotness.
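Zadeh’s original fuzzy connectives are simple to state: truth values are degrees in the interval [0, 1], with conjunction as minimum, disjunction as maximum, and negation as one minus the value. A Python sketch (the degree values below are invented for illustration):

```python
# Fuzzy connectives in Zadeh's original formulation.
def f_and(a, b): return min(a, b)   # conjunction: the weaker conjunct wins
def f_or(a, b):  return max(a, b)   # disjunction: the stronger disjunct wins
def f_not(a):    return 1 - a       # negation: complement of the degree

hot = 0.7   # "it is hot" to degree 0.7
red = 0.4   # "the patch is red" to degree 0.4

print(f_and(hot, red))          # 0.4
print(f_or(hot, red))           # 0.7
print(round(f_not(hot), 2))     # 0.3
```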

Formal reasoning plays an important role in critical thinking, but everyday circumstances rarely allow us to apply it. There are significant limits to how we might use formal tools in our daily lives. If that is true, how do critical thinkers reason well when formal reasoning cannot help? That brings us to informal reasoning.

4. Informal Reasoning

Informal reasoning is inductive , which means that a proposition is inferred (but not derived) from one or more propositions on the basis of the strength provided by the premises (where “strength” means some degree of likelihood less than certainty or some degree of probability less than 1 but greater than 0; a proposition with 0% probability is necessarily false).

Premises grant strength to a conclusion to the degree that they reflect certain relationships or structures in the world. For instance, if a particular type of event, p, is known to cause or indicate another type of event, q, then upon encountering an event of type p, we may infer that an event of type q is likely to occur. We may express this relationship among events propositionally as follows:

1. Events of type p cause (or reliably indicate) events of type q.
2. An event of type p has occurred.
3. Therefore, an event of type q will probably occur.

If the structure of the world (for instance, natural laws) makes premise 1 true, then, if premise 2 is true, we can reasonably (though not certainly) infer the conclusion.

Unlike formal reasoning, the adequacy of informal reasoning depends on how well the premises reflect relationships or structures in the world. And since we have not experienced every relationship among objects or events or every structure, we cannot infer with certainty that a particular conclusion follows from a true set of premises about these relationships or structures. We can only infer them to some degree of likelihood by determining to the best of our ability either their objective probability or their probability relative to alternative conclusions.

The objective probability of a conclusion refers to how likely, given the way the world is regardless of whether we know it , that conclusion is to be true. The epistemic probability of a conclusion refers to how likely that conclusion is to be true given what we know about the world , or more precisely, given our evidence for its objective likelihood.

Objective probabilities are determined by facts about the world and they are not truths of logic, so we often need evidence for objective probabilities. For instance, imagine you are about to draw a card from a standard playing deck of 52 cards. Given particular assumptions about the world (that this deck contains 52 cards and that one of them is the Ace of Spades), the objective likelihood that you will draw an Ace of Spades is 1/52. These assumptions allow us to calculate the objective probability of drawing an Ace of Spades regardless of whether we have ever drawn a card before. But these are assumptions about the world that are not guaranteed by logic: we have to actually count the cards, to be sure we count accurately and are not dreaming or hallucinating, and that our memory (once we have finished counting) reliably maintains our conclusions. None of these processes logically guarantees true beliefs. So, if our assumptions are correct, we know the objective probability of actually drawing an Ace of Spades in the real world. But since there is no logical guarantee that our assumptions are right, we are left only with the epistemic probability (the probability based on our evidence) of drawing that card. If our assumptions are right, then the objective probability is the same as our epistemic probability: 1/52. But even if we are right, objective and epistemic probabilities can come apart under some circumstances.

Imagine you draw a card without looking at it and lay it face down. What is the objective probability that that card is an Ace of Spades? The structure of the world has now settled the question, though you do not know the outcome. If it is an Ace of Spades, the objective probability is 1 (100%); it is the Ace of Spades. If it is not the Ace of Spades, the objective probability is 0 (0%); it is not the Ace of Spades. But what is the epistemic probability? Since you do not know any more about the world than you did before you drew the card, the epistemic probability is the same as before you drew it: 1/52.
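The 1/52 figure can also be illustrated as a long-run frequency: if we simulate many independent draws from a full deck, the proportion of Ace-of-Spades draws settles near 1/52. A Python sketch (the simulation setup is invented for illustration, not taken from the text):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# A 52-card deck: ranks 1-13 in each of the four suits (S = Spades).
deck = [(rank, suit) for rank in range(1, 14) for suit in "SHDC"]
assert len(deck) == 52

trials = 100_000
hits = sum(random.choice(deck) == (1, "S") for _ in range(trials))
print(hits / trials)  # roughly 1/52, i.e. about 0.019
```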

Since much of the way the world is remains hidden from us (like the card laid face down), and since it is not obvious that we perceive reality as it actually is (we do not know whether the actual coins we flip are evenly weighted or whether the actual dice we roll are unbiased), our conclusions about probabilities in the actual world are inevitably epistemic probabilities. We can certainly calculate objective probabilities about abstract objects (for instance, hypothetically fair coins and dice—and these calculations can be evaluated formally using probability theory and statistics), but as soon as we apply these calculations to the real world, we must accommodate the fact that our evidence is incomplete.

There are four well-established categories of informal reasoning: generalization, analogy, causal reasoning, and abduction.

a. Generalization

Generalization is a way of reasoning informally from instances of a type to a conclusion about the type. This commonly takes two forms: reasoning from a sample of a population to the whole population , and reasoning from past instances of an object or event to future instances of that object or event . The latter is sometimes called “enumerative induction” because it involves enumerating past instances of a type in order to draw an inference about a future instance. But this distinction is weak; both forms of generalization use past or current data to infer statements about future instances and whole current populations.

A popular instance of inductive generalization is the opinion poll: a sample of a population of people is polled with respect to some statement or belief. For instance, if we poll 57 sophomores enrolled at a particular college about their experiences of living in dorms, these 57 comprise our sample of the population of sophomores at that particular college. We want to be careful how we define our population given who is part of our sample. Not all college students are like sophomores, so it is not prudent to draw inferences about all college students from these sophomores. Similarly, sophomores at other colleges are not necessarily like sophomores at this college (it could be the difference between a liberal arts college and a research university), so it is prudent not to draw inferences about all sophomores from this sample at a particular college.

Let us say that 90% of the 57 sophomores we polled hate the showers in their dorms. From this information, we might generalize in the following way:

1. 90% of the 57 sophomores polled hate the showers in their dorms.
2. Therefore, (probably) 90% of all sophomores at this college hate the showers in their dorms.

Is this good evidence that 90% of all sophomores at that college hate the showers in their dorms?

A generalization is typically regarded as a good argument if its sample is representative of its population. A sample is representative if it is similar in the relevant respects to its population. A perfectly representative sample would include the whole population: the sample would be identical with the population, and thus, perfectly representative. In that case, no generalization is necessary. But we rarely have the time or resources to evaluate whole populations. And so, a sample is generally regarded as representative if it is large relative to its population and unbiased .

In our example, whether our inference is good depends, in part, on how many sophomores there are. Are there 100? 2,000? If there are only 100, then our sample size seems adequate—we have polled over half the population. Is our sample unbiased? That depends on the composition of the sample. Is it composed only of women or only of men? If this college is not co-ed, that is not a problem. But if the college is co-ed and we have sampled only women, our sample is biased against men. We have information only about female sophomores’ dorm experiences, and therefore, we cannot generalize about male sophomores’ dorm experiences.

How large is large enough? This is a difficult question to answer. A poll of 1% of your high school does not seem large enough to be representative. You should probably gather more data. Yet a poll of 1% of your whole country is practically impossible (you are not likely to ever have enough grant money to conduct that poll). But could a poll of less than 1% be acceptable? This question is not easily answered, even by experts in the field. The simple answer is: the more, the better. The more complicated answer is: it depends on how many other factors you can control for, such as bias and hidden variables (see §4c for more on experimental controls).
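Statisticians make the “how large is large enough” question tractable with the margin of error for a sample proportion. The back-of-envelope formula below assumes simple random sampling from a population much larger than the sample (so it only roughly fits our 57-of-100 dorm example), and the 1.96 multiplier corresponds to 95% confidence:

```python
import math

# 95% margin of error for a sample proportion:
#   MOE = 1.96 * sqrt(p * (1 - p) / n)
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Our dorm poll: 90% of 57 sophomores.
print(round(margin_of_error(0.90, 57), 3))   # about 0.078, i.e. +/- 8 points

# Quadrupling the sample only halves the margin:
print(round(margin_of_error(0.90, 228), 3))  # about 0.039
```

The square root in the formula is why “the more, the better” has diminishing returns: each halving of the margin requires four times the sample.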

Similarly, we might ask what counts as an unbiased sample. An overly simple answer is: the sample is taken randomly, that is, by using a procedure that prevents consciously or unconsciously favoring one segment of the population over another (flipping a coin, drawing lottery balls). But reality is not simple. In political polls, it is important not to use a selection procedure that results in a sample with a larger number of members of one political party than another relative to their distribution in the population, even if the resulting sample is random. For example, the two most prominent parties in the U.S. are the Democratic Party and the Republican Party. If 47% of the U.S. is Republican and 53% is Democrat, an unbiased sample would have approximately 47% Republicans and 53% Democrats. But notice that simply choosing at random may not guarantee that result; it could easily occur, just by choosing randomly, that our sample has 70% Democrats and 30% Republicans (suppose our computer chose, albeit randomly, from a highly Democratic neighborhood). Therefore, we want to control for representativeness along certain criteria, such as gender, age, and education. And we explicitly want to avoid controlling for the results we are interested in; if we controlled for particular answers to the questions on our poll, we would not learn anything—we would get all and only the answers we controlled for.

Difficulties determining representativeness suggest that reliable generalizations are not easy to construct. If we generalize on the basis of samples that are too small or if we cannot control for bias, we commit the informal fallacy of hasty generalization (see §5b). In order to generalize well, it seems we need a bit of machinery to guarantee representativeness. In fact, it seems we need an experiment, one of the primary tools in causal reasoning (see §4c below).

b. Argument from Analogy

Argument from analogy, also called analogical reasoning, is a way of reasoning informally about events or objects based on their similarities. A classic instance of reasoning by analogy occurs in archaeology, when researchers attempt to determine whether a stone object is an artifact (a human-made item) or simply a rock. By comparing the features of an unknown stone with well-known artifacts, archaeologists can infer whether a particular stone is an artifact. Other examples include identifying animals’ tracks by their similarities with pictures in a guidebook and consumer reports on the reliability of products.

To see how arguments from analogy work in detail, imagine two people who, independently of one another, want to buy a new pickup truck. Each chooses a make and model he or she likes, and let us say they decide on the same truck. They then visit a number of consumer reporting websites to read reports on trucks matching the features of the make and model they chose, for instance, the year it was built, the size of the engine (6 cyl. or 8 cyl.), the type of transmission (2WD or 4WD), the fuel mileage, and the cab size (standard, extended, crew). Now, let us say one of our prospective buyers is interested in safety —he or she wants a tough, safe vehicle that will protect against injuries in case of a crash. The other potential buyer is interested in mechanical reliability —he or she does not want to spend a lot of time and money fixing mechanical problems.

With this in mind, here is how our two buyers might reason analogically about whether to purchase the truck (with some fake report data included):

Are the features of these analogous vehicles (the ones reported on) sufficiently numerous and relevant for helping our prospective truck buyers decide whether to purchase the truck in question (the one on the lot)? Since we have some idea that the type of engine and transmission in a vehicle contribute to its mechanical reliability, Buyer 2 may have some relevant features on which to draw a reliable analogy. Fuel mileage and cab size are not obviously relevant, but engine specifications seem to be. Are these specifications numerous enough? That depends on whether anything else that we are not aware of contributes to overall reliability. Of course, if the trucks having the features we know also have all other relevant features we do not know (if there are any), then Buyer 2 may still be able to draw a reliable inference from analogy. But we do not currently know this.

By contrast, Buyer 1 seems to have very few relevant features on which to draw a reliable analogy. The features listed are not obviously related to safety. Are there safety options a buyer may choose but that are not included in the list? For example, can a buyer choose side-curtain airbags, or do such airbags come standard in this model? Does cab size contribute to overall safety? Although there are a number of similarities between the trucks, it is not obvious that we have identified features relevant to safety or whether there are enough of them. Further, reports of “feeling safe” are not equivalent to a truck actually being safe. Better evidence would be crash test data or data from actual accidents involving this truck. This information is not likely to be on a consumer reports website.

A further difficulty is that, in many cases, it is hard to know how many similarities are necessary even when the similarities are clearly relevant. For instance, if having lots of room for passengers is your primary concern, then any other features are relevant only insofar as they affect cab size, and the features that affect cab size may be relatively few.

This example shows that arguments from analogy are difficult to formulate well. Arguments from analogy can be good arguments when critical thinkers identify a sufficient number of features of known objects that are also relevant to the feature inferred to be shared by the object in question. If a rock is shaped like a cutting tool, has marks consistent with shaping and sharpening, and has wear marks consistent with being held in a human hand, it is likely that rock is an artifact. But not all cases are as clear.

It is often difficult to determine whether the features we have identified are sufficiently numerous or relevant to our interests. To determine whether an argument from analogy is good, a person may need to identify a causal relationship between those features and the one in which she is interested (as in the case with a vehicle’s mechanical reliability). This usually takes the form of an experiment, which we explore below (§4c).

Difficulties with constructing reliable generalizations and analogies have led critical thinkers to develop sophisticated methods for controlling for the ways these arguments can go wrong. The most common way to avoid the pitfalls of these arguments is to identify the causal structures in the world that account for or underwrite successful generalizations and analogies. Causal arguments are the primary method of controlling for extraneous causal influences and identifying relevant causes. Their development and complexity warrant regarding them as a distinct form of informal reasoning.

c. Causal Reasoning

Causal arguments attempt to draw causal conclusions (that is, statements that express propositions about causes: x causes y ) from premises about relationships among events or objects. Though it is not always possible to construct a causal argument, when available, they have an advantage over other types of inductive arguments in that they can employ mechanisms (experiments) that reduce the risks involved in generalizations and analogies.

The interest in identifying causal relationships often begins with the desire to explain correlations among events (as pollen levels increase, so do allergy symptoms) or with the desire to replicate an event (building muscle, starting a fire) or to eliminate an event (polio, head trauma in football).

Correlations among events may be positive (where each event increases at roughly the same rate) or negative (where one event decreases in proportion to another’s increase). Correlations suggest a causal relationship among the events correlated.

But we must be careful; correlations are merely suggestive—other forces may be at work. Suppose, for example, that we plot the number of millionaires in the U.S. on the y-axis of a chart and the amount U.S. citizens pay for healthcare each year on the x-axis. Without further analysis, a positive correlation between these two may lead someone to conclude that increasing wealth causes people to be more health conscious and to seek medical treatment more often. A negative correlation may lead someone to conclude that wealth makes people healthier and, therefore, that they need to seek medical care less frequently.

Unfortunately, correlations can occur without any causal structures (mere coincidence) or because of a third, as-yet-unidentified event (a cause common to both events, or “common cause”), or the causal relationship may flow in an unexpected direction (what seems like the cause is really the effect). In order to determine precisely which event (if any) is responsible for the correlation, reasoners must eliminate possible influences on the correlation by “controlling” for possible influences on the relationship (variables).
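A common cause is easy to simulate. In the Python sketch below (all variable names and numbers are invented), a hidden factor drives two other quantities; the two effects end up strongly correlated even though neither causes the other:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# "sunny" is the hidden common cause; it drives both effects.
sunny     = [random.random() for _ in range(1000)]
ice_cream = [s + 0.1 * random.random() for s in sunny]  # effect 1 + noise
sunburn   = [s + 0.1 * random.random() for s in sunny]  # effect 2 + noise

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The two effects correlate strongly, yet neither causes the other.
print(round(pearson(ice_cream, sunburn), 2))  # close to 1
```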

Critical thinking about causes begins by constructing hypotheses about the origins of particular events. A hypothesis is a proposed explanation, a claim about what would account for the event in question. For example, if the question is how to account for increased acne during adolescence, and we are not aware of the existence of hormones, we might formulate a number of hypotheses about why this happens: during adolescence, people’s diets change (parents no longer dictate their meals), so perhaps some types of food cause acne; during adolescence, people become increasingly anxious about how they appear to others, so perhaps anxiety or stress causes acne; and so on.

After we have formulated a hypothesis, we identify a test implication that will help us determine whether our hypothesis is correct. For instance, if some types of food cause acne, we might choose a particular food, say, chocolate, and say: if chocolate causes acne (hypothesis), then decreasing chocolate will decrease acne (test implication). We then conduct an experiment to see whether our test implication occurs.

Reasoning about our experiment would then look like one of the following arguments:

1. If H, then TI.          1. If H, then TI.
2. TI.                     2. Not TI.
3. Therefore, H.           3. Therefore, not H.

There are a couple of important things to note about these arguments. First, despite appearances, both are inductive arguments. The one on the left commits the formal fallacy of affirming the consequent, so, at best, the premises confer only some degree of probability on the conclusion. The argument on the right looks to be deductive (on the face of it, it has the valid form modus tollens), but it would be inappropriate to regard it deductively. This is because we are not evaluating a logical connection between H and TI; we are evaluating a causal connection. TI might be true or false regardless of H (we might have chosen an inappropriate test implication or simply gotten lucky), and therefore, we cannot conclude with certainty that H does not causally influence TI. Therefore, “If…, then…” statements in experiments must be read as causal conditionals, not material conditionals (the way conditionals were used above).

Second, experiments can go wrong in many ways, so no single experiment will grant a high degree of probability to its causal conclusion. Experiments may be compromised by hidden variables (causes we did not consider or detect, such as age, diet, medical history, or lifestyle), faulty auxiliary assumptions (the theoretical assumptions used to evaluate the results may themselves be mistaken), or underdetermination (a number of hypotheses may be consistent with the same results; for example, if it is actually sugar that causes acne, then chocolate bars, ice cream, candy, and sodas would yield the same test results). Because of this, experiments merely confirm or disconfirm a hypothesis; that is, they give us some reason (but not a particularly strong reason) to believe our hypothesized causes are or are not the causes of our test implications, and therefore, of our observations (see Quine and Ullian, 1978). Experiments must therefore be conducted many times, and only after we have a number of confirming or disconfirming results can we draw a strong inductive conclusion. (For more, see “Confirmation and Induction.”)
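One way to see why repeated experiments are needed is to model confirmation probabilistically. The sketch below uses Bayes’ rule with invented likelihoods; the text does not itself commit to a Bayesian analysis, but the model makes the point precise: a single confirming result raises the probability of a hypothesis only modestly, while many confirmations together raise it substantially.

```python
def update(prior, p_ti_given_h, p_ti_given_not_h):
    """Bayes' rule: posterior probability of H after observing TI."""
    p_ti = p_ti_given_h * prior + p_ti_given_not_h * (1 - prior)
    return p_ti_given_h * prior / p_ti

# Invented likelihoods: the test implication is likely if H is true (0.9),
# but could also occur if H is false (0.5), e.g., via a hidden variable.
prob = 0.5  # initial credence in H
for trial in range(10):
    prob = update(prob, 0.9, 0.5)

print(round(prob, 3))  # → 0.997
```

A single update moves the credence from 0.5 to only about 0.64, “some reason but not a particularly strong reason”; ten confirmations together push it above 0.99.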

Experiments may be formal or informal. In formal experiments, critical thinkers exert explicit control over experimental conditions: experimenters choose participants, include or exclude certain variables, and identify or introduce hypothesized events. Test subjects are selected according to control criteria (criteria that may affect the results and, therefore, that we want to mitigate, such as age, diet, and lifestyle) and divided into control groups (groups where the hypothesized cause is absent) and experimental groups (groups where the hypothesized cause is present, either because it is introduced or selected for).

Subjects are then placed in experimental conditions. For instance, in a randomized study, the control group receives a placebo (an inert medium) whereas the experimental group receives the hypothesized cause—the putative cause is introduced, the groups are observed, and the results are recorded and compared. When a hypothesized cause is dangerous (such as smoking) or its effects potentially irreversible (for instance, post-traumatic stress disorder), the experimental design must be restricted to selecting for the hypothesized cause already present in subjects, for example, in retrospective (backward-looking) and prospective (forward-looking) studies. In all types of formal experiments, subjects are observed under exposure to the test or placebo conditions for a specified time, and results are recorded and compared.
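The logic of a randomized study can be sketched as follows. All subjects, group sizes, and effect sizes below are invented; the point is that random assignment balances hidden variables across the groups, so the difference in group outcomes estimates the effect of the hypothesized cause:

```python
import random

random.seed(1)

# Invented population: each subject has a baseline outcome with variation.
subjects = [random.gauss(50, 5) for _ in range(200)]

# Random assignment into control (placebo) and experimental (treatment) groups.
random.shuffle(subjects)
control = subjects[:100]
experimental = [s + 4 for s in subjects[100:]]  # pretend the treatment adds 4

def mean(xs):
    return sum(xs) / len(xs)

# Because assignment was random, hidden variables should be balanced across
# the groups, so the observed difference estimates the treatment effect (~4).
difference = mean(experimental) - mean(control)
print(round(difference, 1))
```

In a real study the "+ 4" would of course be unknown; the experiment exists to estimate it, and the comparison of group means recovers roughly that value.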

In informal experiments, critical thinkers do not have access to sophisticated equipment or facilities and, therefore, cannot exert explicit control over experimental conditions. They are left to make considered judgments about variables. The most common informal experiments are John Stuart Mill’s five methods of inductive reasoning, called Mill’s Methods, which he first formulated in A System of Logic (1843). Here is a very brief summary of Mill’s five methods:

(1) The Method of Agreement

If all conditions containing the event y also contain x, x is probably the cause of y.

For example:

“I’ve eaten from the same box of cereal every day this week, but all the times I got sick after eating cereal were times when I added strawberries. Therefore, the strawberries must be bad.”

(2) The Method of Difference

If all conditions lacking y also lack x, x is probably the cause of y.

“The organization turned all its tax forms in on time for years, that is, until our comptroller, George, left; after that, we were always late. Only after George left were we late. Therefore, George was probably responsible for getting our tax forms in on time.”

(3) The Joint Method of Agreement and Difference

If all conditions containing event y also contain event x, and all conditions lacking y also lack x, x is probably the cause of y.

“The conditions at the animal shelter have been pretty regular, except we had a string of about four months last year when the dogs barked all night, every night. But at the beginning of those four months we sheltered a redbone coonhound, and the barking stopped right after a family adopted her. All the times the redbone hound wasn’t present, there was no barking. Only the time she was present was there barking. Therefore, she probably incited all the other dogs to bark.”

(4) The Method of Concomitant Variation

If the frequency of event y increases and decreases as event x increases and decreases, respectively, x is probably the cause of y.

“We can predict the amount of alcohol sales by the rate of unemployment. As unemployment rises, so do alcohol sales. As unemployment drops, so do alcohol sales. Last quarter marked the highest unemployment in three years, and our sales last quarter are the highest they had been in those three years. Therefore, unemployment probably causes people to buy alcohol.”

(5) The Method of Residues

If a number of factors x, y, and z may be responsible for a set of events A, B, and C, and if we discover reasons for thinking that x is the cause of A and y is the cause of B, then we have reason to believe z is the cause of C.

“The people who come through this medical facility are usually starving and have malaria, and a few have polio. We are particularly interested in treating the polio. Take this patient here: she is emaciated, which is caused by starvation; and she has a fever, which is caused by malaria. But notice that her muscles are deteriorating, and her bones are sore. This suggests she also has polio.”
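Mill’s first two methods are mechanical enough to sketch in code. The observations below are invented to mirror the cereal example; each records which factors were present and whether the effect (getting sick) occurred:

```python
# Each observation: the set of factors present, and whether the effect occurred.
observations = [
    ({"cereal", "milk", "strawberries"}, True),
    ({"cereal", "strawberries"}, True),
    ({"cereal", "milk"}, False),
    ({"cereal"}, False),
]

def method_of_agreement(obs):
    """Factors present in every case where the effect occurs."""
    cases = [factors for factors, effect in obs if effect]
    return set.intersection(*cases)

def method_of_difference(obs):
    """Factors absent from every case where the effect is absent."""
    all_factors = set().union(*(factors for factors, _ in obs))
    absent_cases = [factors for factors, effect in obs if not effect]
    return all_factors - set().union(*absent_cases)

print(method_of_agreement(observations))   # factors common to all sick days
print(method_of_difference(observations))  # factors never present on well days

# The joint method: agree with the effect AND differ with its absence.
print(method_of_agreement(observations) & method_of_difference(observations))
```

Note that the method of agreement alone cannot rule out the cereal here (it was present every time the eater got sick); combining it with the method of difference, as the joint method does, isolates the strawberries.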

d. Abduction

Not all inductive reasoning is inferential. In some cases, an explanation is needed before we can even begin drawing inferences. Consider Darwin’s idea of natural selection. Natural selection is not an object, like a blood vessel or a cellular wall, and it is not, strictly speaking, a single event. It cannot be detected in individual organisms or observed in a generation of offspring. Natural selection is an explanation of biodiversity that combines the process of heritable variation and environmental pressures to account for biomorphic change over long periods of time. With this explanation in hand, we can begin to draw some inferences. For instance, we can separate members of a single species of fruit flies, allow them to reproduce for several generations, and then observe whether the offspring of the two groups can reproduce. If we discover they cannot reproduce, this is likely due to certain mutations in their body types that prevent them from procreating. And since this is something we would expect if natural selection were true, we have one piece of confirming evidence for natural selection. But how do we know the explanations we come up with are worth our time?

Coined by C. S. Peirce (1839-1914), abduction, also called retroduction or inference to the best explanation, refers to a way of reasoning informally that provides guidelines for evaluating explanations. Rather than depending on a type of argument (generalization, analogy, causation), the value of an explanation depends on the theoretical virtues it exemplifies. A theoretical virtue is a quality that renders an explanation more or less fitting as an account of some event. What constitutes fittingness (or “loveliness,” as Peter Lipton (2004) calls it) is controversial, but many of the virtues are intuitively compelling, and abduction is a widely accepted tool of critical thinking.

The most widely recognized theoretical virtue is probably simplicity , historically associated with William of Ockham (1288-1347) and known as Ockham’s Razor . A legend has it that Ockham was asked whether his arguments for God’s existence prove that only one God exists or whether they allow for the possibility that many gods exist. He supposedly responded, “Do not multiply entities beyond necessity.” Though this claim is not found in his writings, Ockham is now famous for advocating that we restrict our beliefs about what is true to only what is absolutely necessary for explaining what we observe.

In contemporary theoretical use, the virtue of simplicity is invoked to encourage caution in how many mechanisms we introduce to explain an event. For example, if natural selection can explain the origin of biological diversity by itself, there is no need to hypothesize both natural selection and a divine designer. But if natural selection cannot explain the origin of, say, the duck-billed platypus, then some other mechanism must be introduced. Of course, not just any mechanism will do. It would not suffice to say the duck-billed platypus is explained by natural selection plus gremlins. Just why this is the case depends on other theoretical virtues; ideally, the virtues work together to help critical thinkers decide among competing hypotheses to test. Here is a brief sketch of some other theoretical virtues or ideals:

Conservatism – a good explanation does not contradict well-established views in a field.

Independent Testability – a good explanation is successful on different occasions under similar circumstances.

Fecundity – a good explanation leads to results that make even more research possible.

Explanatory Depth – a good explanation provides details of how an event occurs.

Explanatory Breadth – a good explanation also explains other, similar events.

Though abduction is structurally distinct from other inductive arguments, it functions similarly in practice: a good explanation provides a probabilistic reason to believe a proposition. This is why it is included here as a species of inductive reasoning. It might be thought that explanations only function to help critical thinkers formulate hypotheses, and do not, strictly speaking, support propositions. But there are intuitive examples of explanations that support propositions independently of however else they may be used. For example, a critical thinker may argue that “material objects exist outside our minds” is a better explanation of why we perceive what we do (and therefore, a reason to believe it) than “an evil demon is deceiving me,” even if there is no inductive or deductive argument sufficient for believing that the latter is false. (For more, see “Charles Sanders Peirce: Logic.”)

5. Detecting Poor Reasoning

Our attempts at thinking critically often go wrong, whether we are formulating our own arguments or evaluating the arguments of others. Sometimes it is in our interests for our reasoning to go wrong, such as when we would prefer someone to agree with us than to discover the truth value of a proposition. Other times it is not in our interests; we are genuinely interested in the truth, but we have unwittingly made a mistake in inferring one proposition from others. Whether our errors in reasoning are intentional or unintentional, such errors are called fallacies (from the Latin, fallax, which means “deceptive”). Recognizing and avoiding fallacies helps prevent critical thinkers from forming or maintaining defective beliefs.

Fallacies occur in a number of ways. An argument’s form may seem to us valid when it is not, resulting in a formal fallacy . Alternatively, an argument’s premises may seem to support its conclusion strongly but, due to some subtlety of meaning, do not, resulting in an informal fallacy . Additionally, some of our errors may be due to unconscious reasoning processes that may have been helpful in our evolutionary history, but do not function reliably in higher order reasoning. These unconscious reasoning processes are now widely known as heuristics and biases . Each type is briefly explained below.

a. Formal Fallacies

Formal fallacies occur when the form of an argument is presumed or seems to be valid (whether intentionally or unintentionally) when it is not. Formal fallacies are usually invalid variations of valid argument forms. Consider, for example, the valid argument form modus ponens (this is one of the rules of inference mentioned in §3b):

modus ponens (valid argument form)

1. p → q
2. p
3. Therefore, q

In modus ponens, we assume or “affirm” both the conditional and the left half of the conditional (called the antecedent): (p → q) and p. From these, we can infer that q, the second half or consequent, is true. This is a valid argument form: if the premises are true, the conclusion cannot be false.

Sometimes, however, we invert the conclusion and the second premise, affirming that the conditional, (p → q), and the right half of the conditional, q (the consequent), are true, and then inferring that the left half, p (the antecedent), is true. Note in the example below how the conclusion and second premise are switched. Switching them in this way creates a problem.

To get an intuitive sense of why “affirming the consequent” is a problem, consider this simple example:

affirming the consequent (invalid argument form)

1. If it is a cat, then it is a mammal.
2. It is a mammal.
3. Therefore, it is a cat.

From the fact that something is a mammal, we cannot conclude that it is a cat. It may be a dog or a mouse or a whale. The premises can be true and yet the conclusion can still be false. Therefore, this is not a valid argument form. But since it is an easy mistake to make, it is included in the set of common formal fallacies.

Here is a second example with the rule of inference called modus tollens. Modus tollens involves affirming a conditional, (p → q), and denying that conditional’s consequent: ~q. From these two premises, we can validly infer the denial of the antecedent: ~p. But if we switch the conclusion and the second premise, we get another fallacy, called denying the antecedent:

1. p → q
2. ~p
3. Therefore, ~q
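The validity claims above can be checked mechanically with a truth table: a form is valid just in case no assignment of truth values makes all the premises true and the conclusion false. A minimal sketch of that check:

```python
from itertools import product

def implies(a, b):
    """Material conditional: a → b is false only when a is true and b is false."""
    return (not a) or b

def valid(premises, conclusion):
    """A form is valid iff no truth assignment makes every premise true
    while making the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False
    return True

modus_ponens      = valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q)
modus_tollens     = valid([lambda p, q: implies(p, q), lambda p, q: not q], lambda p, q: not p)
affirm_consequent = valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p)
deny_antecedent   = valid([lambda p, q: implies(p, q), lambda p, q: not p], lambda p, q: not q)

print(modus_ponens, modus_tollens)         # True True  (valid forms)
print(affirm_consequent, deny_antecedent)  # False False (formal fallacies)
```

The two fallacious forms fail on the same counterexample assignment (p false, q true): both premises come out true while the conclusion comes out false.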

Technically, all informal reasoning is formally fallacious—all informal arguments are invalid. Nevertheless, since those who offer inductive arguments rarely presume they are valid, we do not regard them as reasoning fallaciously.

b. Informal Fallacies

Informal fallacies occur when the meaning of the terms used in the premises of an argument suggests a conclusion that does not actually follow from them (the conclusion either follows weakly or with no strength at all). Consider an example of the informal fallacy of equivocation, in which a word with two distinct meanings is used in both of its meanings:

1. Laws can be changed by the people who make them.
2. The law of gravity is a law.
3. Therefore, the law of gravity can be changed by the people who make laws.

In this case, the argument’s premises are true when the word “law” is rightly interpreted, but the conclusion does not follow because the word law has a different referent in premise 1 (political laws) than in premise 2 (a law of nature). This argument equivocates on the meaning of law and is, therefore, fallacious.

Consider, also, the informal fallacy of ad hominem, abusive, in which an arguer appeals to a person’s character as a reason to reject her proposition:

“Elizabeth argues that humans do not have souls; they are simply material beings. But Elizabeth is a terrible person and often talks down to children and the elderly. Therefore, she could not be right that humans do not have souls.”

The argument might look like this:

1. Elizabeth says that humans do not have souls.
2. Elizabeth is a terrible person.
3. Therefore, it is false that humans do not have souls.

The conclusion does not follow because whether Elizabeth is a terrible person is irrelevant to the truth of the proposition that humans do not have souls. Elizabeth’s argument for this statement is relevant, but her character is not.

Another way to evaluate this fallacy is to note that, as the argument stands, it is an enthymeme (see §2); it is missing a crucial premise, namely: If anyone is a terrible person, that person makes false statements. But this premise is clearly false. There are many ways in which one can be a terrible person, and not all of them imply that someone makes false statements. (In fact, someone could be terrible precisely because they are viciously honest.) Once we fill in the missing premise, we see the argument is not cogent because at least one premise is false.

Importantly, we face a number of informal fallacies on a daily basis, and without the ability to recognize them, their regularity can make them seem legitimate. Here are three others that only scratch the surface:

Appeal to the People: We are often encouraged to believe or do something just because everyone else does. We are encouraged to believe what our political party believes, what the people in our churches or synagogues or mosques believe, what people in our family believe, and so on. We are encouraged to buy things because they are “bestsellers” (lots of people buy them). But the fact that lots of people believe or do something is not, on its own, a reason to believe or do what they do.

Tu Quoque (You, too!): We are often discouraged from pursuing a conclusion or action if our own beliefs or actions are inconsistent with them. For instance, if someone attempts to argue that everyone should stop smoking, but that person smokes, their argument is often given less weight: “Well, you smoke! Why should everyone else quit?” But the fact that someone believes or does something inconsistent with what they advocate does not, by itself, discredit the argument. Hypocrites may have very strong arguments despite their personal inconsistencies.

Base Rate Neglect: It is easy to look at what happens after we do something or enact a policy and conclude that the act or policy caused those effects. Consider a law reducing speed limits from 75 mph to 55 mph in order to reduce highway accidents. And, in fact, in the three years after the reduction, highway accidents dropped 30%! This seems like a direct effect of the reduction. However, this is not the whole story. Imagine you looked back at the three years prior to the law and discovered that accidents had dropped 30% over that time, too. If that happened, it might not actually be the law that caused the reduction in accidents. The law did not change the trend in accident reduction. If we only look at the evidence after the law, we are neglecting the rate at which the event occurred without the law. The base rate of an event is the rate that the event occurs without the potential cause under consideration. To take another example, imagine you start taking cold medicine, and your cold goes away in a week. Did the cold medicine cause your cold to go away? That depends on how long colds normally last and when you took the medicine. In order to determine whether a potential cause had the effect you suspect, do not neglect to compare its putative effects with the effects observed without that cause.
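The speed-limit example can be made concrete with invented accident counts: comparing the decline after the law against the decline before it shows whether the law changed the pre-existing trend.

```python
# Invented accident counts per year (per 100,000 drivers).
before_law = [1000, 850, 700]  # three years before the law
after_law  = [700, 595, 490]   # three years after the law

def total_drop(series):
    """Fractional decline from the first to the last year."""
    return (series[0] - series[-1]) / series[0]

# Both periods show the same decline, so the law did not change the
# pre-existing trend (the base rate of improvement).
print(round(total_drop(before_law), 2))  # → 0.3
print(round(total_drop(after_law), 2))   # → 0.3
```

Both periods show the same 30% decline, so the base rate of improvement already accounts for the post-law drop, and the law gets no credit for it.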

For more on formal and informal fallacies and over 200 different types with examples, see “ Fallacies .”

c. Heuristics and Biases

In the 1960s, psychologists began to suspect there is more to human reasoning than conscious inference. Daniel Kahneman and Amos Tversky confirmed these suspicions with their discoveries that many of the standard assumptions about how humans reason in practice are unjustified. In fact, humans regularly violate these standard assumptions, the most significant for philosophers and economists being the assumption that humans are fairly good at calculating the costs and benefits of their behavior, that is, that they naturally reason according to the dictates of Expected Utility Theory. Kahneman and Tversky showed that, in practice, reasoning is affected by many non-rational influences, such as the wording used to frame scenarios (framing bias) and the information most vividly available to reasoners (the availability heuristic).

Consider the difference in your belief about the likelihood of getting robbed before and after seeing a news report about a recent robbery, or the difference in your belief about whether you will be bitten by a shark the week before and the week after Discovery Channel’s “Shark Week.” Most of us regard these events as more likely after we have seen them on television than before. Objectively, they are no more or less likely to happen regardless of our seeing them on television, but we perceive them as more likely because their possibility is more vivid to us. These are examples of the availability heuristic.

Since the 1960s, experimental psychologists and economists have conducted extensive research revealing dozens of these unconscious reasoning processes, including ordering bias, the representativeness heuristic, confirmation bias, attentional bias, and the anchoring effect. The field of behavioral economics, made popular by Dan Ariely (2008; 2010; 2012) and Richard Thaler and Cass Sunstein (2009), emerged from and contributes to heuristics and biases research and applies its insights to social and economic behaviors.

Ideally, recognizing and understanding these unconscious, non-rational reasoning processes will help us mitigate their undermining influence on our reasoning abilities (Gigerenzer, 2003). However, it is unclear whether we can simply choose to overcome them or whether we have to construct mechanisms that mitigate their influence (for instance, using double-blind experiments to prevent confirmation bias).

6. The Scope and Virtues of Good Reasoning

Whether the process of critical thinking is productive for reasoners—that is, whether it actually answers the questions they are interested in answering—often depends on a number of linguistic, psychological, and social factors. We encountered some of the linguistic factors in §1. In closing, let us consider some of the psychological and social factors that affect the success of applying the tools of critical thinking.

Not all psychological and social contexts are conducive for effective critical thinking. When reasoners are depressed or sad or otherwise emotionally overwhelmed, critical thinking can often be unproductive or counterproductive. For instance, if someone’s child has just died, it would be unproductive (not to mention cruel) to press the philosophical question of why a good God would permit innocents to suffer or whether the child might possibly have a soul that could persist beyond death. Other instances need not be so extreme to make the same point: your company’s holiday party (where most people would rather remain cordial and superficial) is probably not the most productive context in which to debate the president’s domestic policy or the morality of abortion.

The process of critical thinking is primarily about detecting truth, and truth may not always be of paramount value. In some cases, comfort or usefulness may take precedence over truth. The case of the loss of a child is a case where comfort seems to take precedence over truth. Similarly, consider the case of determining what the speed limit should be on interstate highways. Imagine we are trying to decide whether it is better to allow drivers to travel at 75 mph or to restrict them to 65. To be sure, there may be no fact of the matter as to which is morally better, and there may not be any difference in the rate of interstate deaths between states that set the limit at 65 and those that set it at 75. But given the nature of the law, a decision about which speed limit to set must be made. If there is no relevant difference between setting the limit at 65 and setting it at 75, critical thinking can only tell us that, not which speed limit to set. This shows that, in some cases, concern with truth gives way to practical or preferential concerns (for example, Should I make this decision on the basis of what will make citizens happy? Should I base it on whether I will receive more campaign contributions from the business community?). All of this suggests that critical thinking is most productive in contexts where participants are already interested in truth.

b. The Principle of Charity/Humility

Critical thinking is also most productive when people in the conversation regard themselves as fallible, subject to error, misinformation, and deception. The desire to be “right” has a powerful influence on our reasoning behavior. It is so strong that our minds bias us in favor of the beliefs we already hold even in the face of disconfirming evidence (a phenomenon known as “confirmation bias”). In his famous article, “The Ethics of Belief” (1877), W. K. Clifford notes that, “We feel much happier and more secure when we think we know precisely what to do, no matter what happens, than when we have lost our way and do not know where to turn. … It is the sense of power attached to a sense of knowing that makes men desirous of believing, and afraid of doubting” (2010: 354).

Nevertheless, when we are open to the possibility that we are wrong, that is, if we are humble about our conclusions and we interpret others charitably, we have a better chance at having rational beliefs in two senses. First, if we are genuinely willing to consider evidence that we are wrong—and we demonstrate that humility—then we are more likely to listen to others when they raise arguments against our beliefs. If we are certain we are right, there would be little reason to consider contrary evidence. But if we are willing to hear it, we may discover that we really are wrong and give up faulty beliefs for more reasonable ones.

Second, if we are willing to be charitable to arguments against our beliefs, then if our beliefs are unreasonable, we have an opportunity to see the ways in which they are unreasonable. On the other hand, if our beliefs are reasonable, then we can explain more effectively just how well they stand against the criticism. This is weakly analogous to competition in certain types of sporting events, such as basketball. If you only play teams that are far inferior to your own, you do not know how good your team really is. But if you can beat a well-respected team on fair terms, any confidence you have is justified.

c. The Principle of Caution

In our excitement over good arguments, it is easy to overextend our conclusions, that is, to infer statements that are not really warranted by our evidence. From an argument for a first, uncaused cause of the universe, it is tempting to infer the existence of a sophisticated deity such as that of the Judeo-Christian tradition. From an argument for the compatibilism of the free will necessary for moral responsibility and determinism, it is tempting to infer that we are actually morally responsible for our behaviors. From an argument for negative natural rights, it is tempting to infer that no violation of a natural right is justifiable. Therefore, it is prudent to continually check our conclusions to be sure they do not include more content than our premises allow us to infer.

Of course, the principle of caution must itself be used with caution. If applied too strictly, it may lead reasoners to suspend all belief, and refrain from interacting with one another and their world. This is not, strictly speaking, problematic; ancient skeptics, such as the Pyrrhonians, advocated suspending all judgments except those about appearances in hopes of experiencing tranquility. However, at least some judgments about the long-term benefits and harms seem indispensable even for tranquility, for instance, whether we should retaliate in self-defense against an attacker or whether we should try to help a loved one who is addicted to drugs or alcohol.

d. The Expansiveness of Critical Thinking

The importance of critical thinking cannot be overstated because its relevance extends into every area of life, from politics, to science, to religion, to ethics. Not only does critical thinking help us draw inferences for ourselves, it helps us identify and evaluate the assumptions behind statements, the moral implications of statements, and the ideologies to which some statements commit us. This can be a disquieting and difficult process because it forces us to wrestle with preconceptions that might not be accurate. Nevertheless, if the process is conducted well, it can open new opportunities for dialogue, sometimes called “critical spaces,” that allow people who might otherwise disagree to find beliefs in common from which to engage in a more productive conversation.

It is this possibility of creating critical spaces that allows philosophical approaches like Critical Theory to effectively challenge the way social, political, and philosophical debates are framed. For example, if a discussion about race or gender or sexuality is framed in terms that, because of the origins of those terms or the way they have functioned socially, alienate or disproportionately exclude certain members of the population, then critical space is necessary for being able to evaluate that framing so that a more productive dialogue can occur (see Foresman, Fosl, and Watson, 2010, ch. 10 for more on how critical thinking and Critical Theory can be mutually supportive).

e. Productivity and the Limits of Rationality

Despite the fact that critical thinking extends into every area of life, not every important aspect of our lives is easily or productively subjected to the tools of language and logic. Thinkers who are tempted to subject everything to the cold light of reason may discover they miss some of what is deeply enjoyable about living. The psychologist Abraham Maslow writes, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (1966: 16). But it is helpful to remember that language and logic are tools, not the projects themselves. Even formal reasoning systems depend on axioms that are not provable within their own systems (consider Euclidean geometry or Peano arithmetic). We must make some decisions about what beliefs to accept and how to live our lives on the basis of considerations outside of critical thinking.

Borrowing an example from William James (1896), consider the statement, “Religion X is true.” James says that, while some people find this statement interesting, and therefore, worth thinking critically about, others may not be able to consider the truth of the statement. For any particular religious tradition, we might not know enough about it to form a belief one way or the other, and even suspending judgment may be difficult, since it is not obvious what we are suspending judgment about.

If I say to you: ‘Be a theosophist or be a Mohammedan,’ it is probably a dead option, because for you neither hypothesis is likely to be alive. But if I say: ‘Be an agnostic or be a Christian,’ it is otherwise: trained as you are, each hypothesis makes some appeal, however small, to your belief (2010: 357).

Ignoring the circularity in his definition of “dead option,” James’s point seems to be that if you know nothing about a view or what statements it entails, no amount of logic or evidence could help you form a reasonable belief about that position.

We might criticize James at this point because his conclusion seems to imply that we have no duty to investigate dead options, that is, to discover if there is anything worth considering in them. If we are concerned with truth, the simple fact that we are not familiar with a proposition does not mean it is not true or potentially significant for us. But James’s argument is subtler than this criticism suggests. Even if you came to learn about a particularly foreign religious tradition, its tenets may be so contrary to your understanding of the world that you could not entertain them as possible beliefs of yours. For instance, you know perfectly well that, if some events had been different, Hitler would not have existed: his parents might have had no children, or his parents’ parents might have had no children. You know roughly what it would mean for Hitler not to have existed and the sort of events that could have made it true that he did not exist. But how much evidence would it take to convince you that, in fact, Hitler did not exist, that is, that your belief that Hitler did exist is false? Could there be an argument strong enough? Not obviously. Since all the information we have about Hitler unequivocally points to his existence, any arguments against that belief would have to affect a very broad range of statements; they would have to be strong enough to make us skeptical of large parts of reality.

7. Approaches to Improving Reasoning through Critical Thinking

Recall that the goal of critical thinking is not just to study what makes reasons and statements good, but to help us improve our ability to reason, that is, to improve our ability to form, hold, and discard beliefs according to whether they meet the standards of good thinking. Some ways of approaching this latter goal are more effective than others. While the classical approach focuses on technical reasoning skills, the Paul/Elder model encourages us to think in terms of critical concepts, and rationality approaches use empirical research on instances of poor reasoning to help us improve reasoning where it is least obvious we need it and where we need it most. Which approach or combination of approaches is most effective depends, as noted above, on the context and limits of critical thinking, but also on scientific evidence of their effectiveness. Those who teach critical thinking, of all people, should be engaged with the evidence relevant to determining which approaches are most effective.

a. Classical Approaches

The classic approach to critical thinking follows roughly the structure of this article: critical thinkers attempt to interpret statements or arguments clearly and charitably, and then they apply the tools of formal and informal logic and science, while carefully attempting to avoid fallacious inferences (see Weston, 2008; Walton, 2008; Watson and Arp, 2015). This approach requires spending extensive time learning and practicing technical reasoning strategies. It presupposes that reasoning is primarily a conscious activity, and that enhancing our skills in these areas will improve our ability to reason well in ordinary situations.
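The technical flavor of the classical approach can be illustrated with a small exercise in propositional logic: testing an argument form for validity by exhaustively enumerating truth assignments. This is a minimal sketch of our own, not part of any cited curriculum; the helper name `is_valid` and the encoding of premises as Boolean functions are illustrative assumptions.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument form is valid iff every truth assignment that makes
    all the premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# Modus ponens: P, P -> Q, therefore Q  (a valid form)
mp_premises = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]
print(is_valid(mp_premises, lambda e: e["Q"], ["P", "Q"]))  # True

# Affirming the consequent: Q, P -> Q, therefore P  (a formal fallacy)
ac_premises = [lambda e: e["Q"], lambda e: (not e["P"]) or e["Q"]]
print(is_valid(ac_premises, lambda e: e["P"], ["P", "Q"]))  # False
```

The point of such drills in the classical approach is that, once the pattern is internalized, a reasoner can recognize valid and fallacious forms in ordinary prose without enumerating anything.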

There are at least two concerns about this approach. First, it is highly time intensive relative to its payoff. Learning the terminology of systems like propositional and categorical logic and the names of the fallacies, and practicing applying these tools to hypothetical cases requires significant time and energy. And it is not obvious, given the problems with heuristics and biases, whether this practice alone makes us better reasoners in ordinary contexts. Second, many of the ways we reason poorly are not consciously accessible (recall the heuristics and biases discussion in §5c). Our biases, combined with the heuristics we rely on in ordinary situations, can only be detected in experimental settings, and addressing them requires restructuring the ways in which we engage with evidence (see Thaler and Sunstein, 2009).

b. The Paul/Elder Model

Richard Paul and Linda Elder (Paul and Elder, 2006; Paul, 2012) developed an alternative to the classical approach on the assumption that critical thinking is not something that is limited to academic study or to the discipline of philosophy. On their account, critical thinking is a broad set of conceptual skills and habits aimed at a set of standards that are widely regarded as virtues of thinking: clarity, accuracy, depth, fairness, and others. They define it simply as “the art of analyzing and evaluating thinking with a view to improving it” (2006: 4). Their approach, then, is to focus on the elements of thought and intellectual virtues that help us form beliefs that meet these standards.

The Paul/Elder model is made up of three sets of concepts: elements of thought, intellectual standards, and intellectual traits. In this model, we begin by identifying the features present in every act of thought. They use “thought” to mean critical thought aimed at forming beliefs, not just any act of thinking, such as musing, wishing, hoping, or remembering. According to the model, every act of thought involves a purpose, a question at issue, information, inferences, concepts, assumptions, implications, and a point of view.

These elements comprise the subject matter of critical thinking; that is, they are what we are evaluating when we are thinking critically. We then engage with this subject matter by subjecting it to what Paul and Elder call universal intellectual standards: clarity, accuracy, precision, relevance, depth, breadth, logic, significance, and fairness. These are the evaluative goals we should be aiming at with our thinking.

While in classical approaches, logic is the predominant means of thinking critically, in the Paul/Elder model, it is put on equal footing with eight other standards. Finally, Paul and Elder argue that it is helpful to approach the critical thinking process with a set of intellectual traits or virtues that dispose us to use the elements and standards well: intellectual humility, intellectual courage, intellectual empathy, intellectual autonomy, intellectual integrity, intellectual perseverance, confidence in reason, and fair-mindedness.

To remind us that these are virtues of thought relevant to critical thinking, they use “intellectual” to distinguish these traits from their moral counterparts (moral integrity, moral courage, and so on).

The aim is that, as we become familiar with these three sets of concepts and apply them in everyday contexts, we become better at analyzing and evaluating statements and arguments in ordinary situations.

Like the classical approach, this approach presupposes that reasoning is primarily a conscious activity, and that enhancing our skills will improve our reasoning. This means that it still cannot address the empirical evidence that many of our reasoning errors cannot be consciously detected or corrected. It differs from the classical approach in that it gives the technical tools of logic a much less prominent role and places emphasis on a broader, and perhaps more intuitive, set of conceptual tools. Learning these concepts and learning to apply them still require a great deal of time and energy, though perhaps less than learning formal and informal logic. And these concepts are easier to translate into disciplines outside philosophy. Students of history, psychology, and economics can more easily recognize the relevance of asking questions about an author’s point of view and assumptions than of determining whether the author is making a deductive or inductive argument. The question, then, is whether this approach improves our ability to think better than the classical approach does.

c. Other Approaches

A third approach that is becoming popular is to focus on the ways we commonly reason poorly and then attempt to correct them. This can be called the Rationality Approach, and it takes seriously the empirical evidence (§5c) that many of our errors in reasoning are not due to a lack of conscious competence with technical skills or to misusing those skills, but to subconscious dispositions to ignore or dismiss relevant information or to rely on irrelevant information.

One way to pursue this approach is to focus on beliefs that are statistically rare or “weird.” These include beliefs of fringe groups, such as conspiracy theorists, religious extremists, parapsychologists, and proponents of New Age metaphysics (see Gilovich, 1992; Vaughn and Schick, 2010; Coady, 2012). If we recognize the sorts of tendencies that lead to these controversial beliefs, we might be able to recognize and avoid similar tendencies in our own reasoning about less extreme beliefs, such as beliefs about financial investing, about how statistics are used to justify business decisions, and about which public policies to vote for.

Another way to pursue this approach is to focus directly on the research on error, that is, on the ordinary beliefs about which psychologists and behavioral economists have discovered we reason poorly, and to explore ways of changing how we frame decisions about what to believe (see Nisbett and Ross, 1980; Gilovich, 1992; Ariely, 2008; Kahneman, 2011). For example, in one study, psychologists found that judges issue more convictions just before lunch and at the end of the day than in the morning or just after lunch (Danziger, et al., 2010). Given that dockets do not typically organize cases from less significant crimes to more significant crimes, this evidence suggests that something as irrelevant as hunger can bias judicial decisions. Even though hunger has nothing to do with the truth of a belief, knowing that it can affect how we evaluate a belief can help us avoid that effect. This study might suggest something as simple as that we should avoid being hungry when making important decisions. The more we learn about the ways our brains use irrelevant information, the better we can organize our reasoning to avoid these mistakes. For more on how decisions can be improved by restructuring them, see Thaler and Sunstein, 2009.

A fourth approach is to take more seriously the role that language plays in our reasoning. Arguments involve complex patterns of expression, and we have already seen how vagueness and ambiguity can undermine good reasoning (§1). The pragma-dialectics approach (or pragma-dialectical theory) is the view that the quality of an argument is not solely or even primarily a matter of its logical structure, but is more fundamentally a matter of whether it is a form of reasonable discourse (Van Eemeren and Grootendorst, 1992). The proponents of this view contend that, “The study of argumentation should … be construed as a special branch of linguistic pragmatics in which descriptive and normative perspectives on argumentative discourse are methodically integrated” (Van Eemeren and Grootendorst, 1995: 130).

The pragma-dialectics approach is a highly technical approach that uses insights from speech act theory, H. P. Grice’s philosophy of language, and the study of discourse analysis. Its use, therefore, requires a great deal of background in philosophy and linguistics. It has an advantage over other approaches in that it highlights social and practical dimensions of arguments that other approaches largely ignore. For example, argument is often public (external), in that it creates an opportunity for opposition, which influences people’s motives and psychological attitudes toward their arguments. Argument is also social in that it is part of a discourse in which two or more people try to arrive at an agreement. Argument is also functional; it aims at a resolution that can only be accommodated by addressing all the aspects of disagreement or anticipated disagreement, which can include public and social elements. Argument also has a rhetorical role (dialectical) in that it is aimed at actually convincing others, which may have different requirements than simply identifying the conditions under which they should be convinced.

These four approaches are not mutually exclusive. All of them presuppose, for example, the importance of inductive reasoning and scientific evidence. Their distinctions turn largely on which aspects of statements and arguments should take precedence in the critical thinking process and on what information will help us have better beliefs.

8. References and Further Reading

Author Information

Jamie Carlin Watson Email: [email protected] University of Arkansas for Medical Sciences U. S. A.



Critical Thinking and Epistemic Injustice

An Essay in Epistemology of Education

Centre for Knowledge and Society, University of Aberdeen (associate member), Aberdeen, UK


Casts new light on Critical Thinking in Education

Gives an original contribution to the wider debate on Epistemic Injustice

Contrasts Competence Based Education in relation to a Bildung approach

Part of the book series: Contemporary Philosophies and Theories in Education (COPT, volume 20)


About this book



Table of contents (7 chapters)

Front Matter

Introduction
Alessia Marabini

Ethics, Education, and Reasoning

Critical Thinking and Epistemic Value

Critique of Critical Thinking: Bildung and the Value of Critical Thinking

Critical Thinking and Epistemic Injustice in Education

Conclusions: Education Injustice and Critical Thinking Between Bildung, Cultural Heritage and Recognition

This book argues that the mainstream view and practice of critical thinking in education mirrors a reductive and reified conception of competences that ultimately leads to forms of epistemic injustice in assessment. It defends an alternative view of critical thinking as a competence that is normative in nature rather than reified and reductive. The book contends that critical thinking competence should be at the heart of learning how to learn, but that much depends on how we understand critical thinking. It draws on a conception of human reasoning and rationality that focuses on belief revision and is interwoven with a Bildung approach to teaching and learning: it emphasises the relevance of knowledge and experience in making inferences.

The book is an enhanced, English version of the Italian monograph Epistemologia dell’Educazione: Pensiero Critico, Etica ed Epistemic Injustice.

Alessia Marabini is a high school professor in Italy and a member of the Centre for Knowledge and Society (CEKAS) at the University of Aberdeen. She graduated with a BA (Laurea) in Philosophy of Language and a PhD in Mind, Language and Logic at the University of Bologna. Her areas of research are epistemology and philosophy of education. In epistemology of education, she has contributed to the debate with articles published in the Journal of Philosophy of Education. She has also given talks at the Universities of Seattle, Chicago, Calgary, Oxford, MGU Moscow, and Venezia, at the APA in Boston, and at the UCL Institute of Education. She has published various monographs in Italian, including La Concezione Epistemica dell’Analiticità: un Dibattito in Corso (The Conception of Epistemic Analyticity: An Ongoing Debate), 2013 (Aracne: Rome), and Epistemologia dell’Educazione: Pensiero Critico, Etica ed Epistemic Injustice (Epistemology of Education: Critical Thinking, Ethics and Epistemic Injustice), 2020 (Aracne: Rome).

Book Title: Critical Thinking and Epistemic Injustice

Book Subtitle: An Essay in Epistemology of Education

Authors: Alessia Marabini

Series Title: Contemporary Philosophies and Theories in Education

DOI: https://doi.org/10.1007/978-3-030-95714-8

Publisher: Springer Cham

eBook Packages: Education, Education (R0)

Copyright Information: The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2022

Hardcover ISBN: 978-3-030-95713-1 Published: 23 March 2022

Softcover ISBN: 978-3-030-95716-2 Due: 06 April 2023

eBook ISBN: 978-3-030-95714-8 Published: 22 March 2022

Series ISSN: 2214-9759

Series E-ISSN: 2214-9767

Edition Number: 1

Number of Pages: XIX, 225

Number of Illustrations: 1 b/w illustrations

Topics: Educational Philosophy, School and Schooling, Epistemology
