Sartre first gave the term existentialism general currency by using it for his own philosophy and by becoming the leading figure of a distinct movement in France that became internationally influential after World War II. Sartre's philosophy is explicitly atheistic and pessimistic; he declared that human beings require a rational basis for their lives but are unable to achieve one, and thus human life is a “futile passion.” Sartre nevertheless insisted that his existentialism is a form of humanism, and he strongly emphasized human freedom, choice, and responsibility. He eventually tried to reconcile these existentialist concepts with a Marxist analysis of society and history.
Although existentialist thought encompasses the uncompromising atheism of Nietzsche and Sartre and the agnosticism of Heidegger, its origin in the intensely religious philosophies of Pascal and Kierkegaard foreshadowed its profound influence on 20th-century theology. The 20th-century German philosopher Karl Jaspers, although he rejected explicit religious doctrines, influenced contemporary theology through his preoccupation with transcendence and the limits of human experience. The German Protestant theologians Paul Tillich and Rudolf Bultmann, the French Roman Catholic theologian Gabriel Marcel, the Russian Orthodox philosopher Nikolay Berdyayev, and the German Jewish philosopher Martin Buber inherited many of Kierkegaard's concerns, especially his conviction that a personal sense of authenticity and commitment is essential to religious faith.
Renowned as one of the most important writers in world history, 19th-century Russian author Fyodor Dostoyevsky wrote psychologically intense novels that probed the motivations and moral justifications for his characters’ actions. Dostoyevsky commonly addressed themes such as the struggle between good and evil within the human soul and the idea of salvation through suffering. The Brothers Karamazov (1879-1880), generally considered Dostoyevsky’s best work, interlaces religious exploration with the story of a family’s violent quarrels over a woman and a disputed inheritance.
Twentieth-century writer and philosopher Albert Camus examined what he considered the tragic inability of human beings to understand and transcend their intolerable conditions. In his work Camus presented an absurd and seemingly unreasonable world in which some people futilely struggle to find meaning and rationality while others simply refuse to care. For example, the main character of The Stranger (1942) kills a man on a beach for no reason and accepts his arrest and punishment with dispassion. In contrast, in The Plague (1947), Camus introduces characters who act with courage in the face of absurdity.
A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. In this and other novels, Dostoyevsky portrays human nature as unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”
The opening lines of Russian novelist Fyodor Dostoyevsky’s Notes from Underground (1864)—“I am a sick man.… I am a spiteful man”—are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground is a sign of Dostoyevsky’s rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader’s sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an “overly conscious” intellectual.
In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.
Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his philosophical writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.
Sartre was born in Paris, June 21, 1905, and educated at the École Normale Supérieure in Paris, the University of Fribourg in Switzerland, and the French Institute in Berlin. He taught philosophy at various lycées from 1929 until the outbreak of World War II, when he was called into military service. In 1940-41 he was imprisoned by the Germans; after his release, he taught in Neuilly, France, and later in Paris, and was active in the French Resistance. The German authorities, unaware of his underground activities, permitted the production of his antiauthoritarian play The Flies (1943; trans. 1946) and the publication of his major philosophic work Being and Nothingness (1943; trans. 1953). Sartre gave up teaching in 1945 and founded the political and literary magazine Les Temps Modernes, of which he became editor in chief. Sartre was active after 1947 as an independent Socialist, critical of both the USSR and the United States during the Cold War years. Later, he supported Soviet positions but still frequently criticized Soviet policies. Most of his writing of the 1950s deals with literary and political problems. Sartre rejected the 1964 Nobel Prize in Literature, explaining that to accept such an award would compromise his integrity as a writer.
Sartre's philosophic works combine the phenomenology of the German philosopher Edmund Husserl, the metaphysics of the German philosophers Georg Wilhelm Friedrich Hegel and Martin Heidegger, and the social theory of Karl Marx into a single view called existentialism. This view, which relates philosophical theory to life, literature, psychology, and political action, stimulated so much popular interest that existentialism became a worldwide movement.
In his early philosophic work, Being and Nothingness, Sartre conceived humans as beings who create their own world by rebelling against authority and by accepting personal responsibility for their actions, unaided by society, traditional morality, or religious faith. Distinguishing between human existence and the nonhuman world, he maintained that human existence is characterized by nothingness, that is, by the capacity to negate and rebel. His theory of existential psychoanalysis asserted the inescapable responsibility of all individuals for their own decisions and made the recognition of one's absolute freedom of choice the necessary condition for authentic human existence. His plays and novels express the belief that freedom and acceptance of personal responsibility are the main values in life and that individuals must rely on their creative powers rather than on social or religious authority.
In his later philosophic work Critique of Dialectical Reason (1960; trans. 1976), Sartre's emphasis shifted from existentialist freedom and subjectivity to Marxist social determinism. Sartre argued that the influence of modern society over the individual is so great as to produce serialization, by which he meant loss of self. Individual power and freedom can only be regained through group revolutionary action. Despite this exhortation to revolutionary political activity, Sartre himself did not join the Communist Party, thus retaining the freedom to criticize the Soviet invasions of Hungary in 1956 and Czechoslovakia in 1968. He died in Paris, April 15, 1980.
Søren Aabye Kierkegaard (1813-1855), Danish religious philosopher, whose concern with individual existence, choice, and commitment profoundly influenced modern theology and philosophy, especially existentialism.
Søren Kierkegaard wrote of the paradoxes of Christianity and the faith required to reconcile them. In his book Fear and Trembling, Kierkegaard discusses Genesis 22, in which God commands Abraham to kill his only son, Isaac. Although God made an unreasonable and immoral demand, Abraham obeyed without trying to understand or justify it. Kierkegaard regards this “leap of faith” as the essence of Christianity.
Kierkegaard was born in Copenhagen on May 15, 1813. His father was a wealthy merchant and strict Lutheran, whose gloomy, guilt-ridden piety and vivid imagination strongly influenced Kierkegaard. Kierkegaard studied theology and philosophy at the University of Copenhagen, where he encountered Hegelian philosophy (see below) and reacted strongly against it. While at the university, he ceased to practice Lutheranism and for a time led an extravagant social life, becoming a familiar figure in the theatrical and café society of Copenhagen. After his father's death in 1838, however, he decided to resume his theological studies. In 1840 he became engaged to the 17-year-old Regine Olsen, but almost immediately he began to suspect that marriage was incompatible with his own brooding, complicated nature and his growing sense of a philosophical vocation. He abruptly broke off the engagement in 1841, but the episode took on great significance for him, and he repeatedly alluded to it in his books. At the same time, he realized that he did not want to become a Lutheran pastor. An inheritance from his father allowed him to devote himself entirely to writing, and in the remaining 14 years of his life he produced more than 20 books.
Kierkegaard's work is deliberately unsystematic and consists of essays, aphorisms, parables, fictional letters and diaries, and other literary forms. Many of his works were originally published under pseudonyms. He applied the term existential to his philosophy because he regarded philosophy as the expression of an intensely examined individual life, not as the construction of a monolithic system in the manner of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, whose work he attacked in Concluding Unscientific Postscript (1846; trans. 1941). Hegel claimed to have achieved a complete rational understanding of human life and history; Kierkegaard, on the other hand, stressed the ambiguity and paradoxical nature of the human situation. The fundamental problems of life, he contended, defy rational, objective explanation; the highest truth is subjective.
Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.
Kierkegaard maintained that systematic philosophy not only imposed a false perspective on human existence but that it also, by explaining life in terms of logical necessity, becomes a means of avoiding choice and responsibility. Individuals, he believed, create their own natures through their choices, which must be made in the absence of universal, objective standards. The validity of a choice can only be determined subjectively.
In his first major work, Either/Or (2 volumes, 1843; trans. 1944), Kierkegaard described two spheres, or stages of existence, that the individual may choose: the aesthetic and the ethical. The aesthetic way of life is a refined hedonism, consisting of a search for pleasure and a cultivation of mood. The aesthetic individual constantly seeks variety and novelty in an effort to stave off boredom but eventually must confront boredom and despair. The ethical way of life involves an intense, passionate commitment to duty, to unconditional social and religious obligations. In his later works, such as Stages on Life's Way (1845; trans. 1940), Kierkegaard discerned in this submission to duty a loss of individual responsibility, and he proposed a third stage, the religious, in which one submits to the will of God but in doing so finds authentic freedom. In Fear and Trembling (1843; trans. 1941) Kierkegaard focussed on God's command that Abraham sacrifice his son Isaac (Genesis 22:1-19), an act that violates Abraham's ethical convictions. Abraham proves his faith by resolutely setting out to obey God's command, even though he cannot understand it. This “suspension of the ethical,” as Kierkegaard called it, allows Abraham to achieve an authentic commitment to God. To avoid ultimate despair, the individual must make a similar “leap of faith” into a religious life, which is inherently paradoxical, mysterious, and full of risk. One is called to it by the feeling of dread (The Concept of Dread, 1844; trans. 1944), which is ultimately a fear of nothingness.
Toward the end of his life Kierkegaard was involved in bitter controversies, especially with the established Danish Lutheran church, which he regarded as worldly and corrupt. His later works, such as The Sickness Unto Death (1849; trans. 1941), reflect an increasingly sombre view of Christianity, emphasizing suffering as the essence of authentic faith. He also intensified his attack on modern European society, which he denounced in The Present Age (1846; trans. 1940) for its lack of passion and for its quantitative values. The stress of his prolific writing and of the controversies in which he engaged gradually undermined his health; in October 1855 he fainted in the street, and he died in Copenhagen on November 11, 1855.
Kierkegaard's influence was at first confined to Scandinavia and to German-speaking Europe, where his work had a strong impact on Protestant theology and on such writers as the 20th-century Austrian novelist Franz Kafka. As existentialism emerged as a general European movement after World War II, Kierkegaard's work was widely translated, and he was recognized as one of the seminal figures of modern culture.
MAURICE MERLEAU-PONTY (1908-1961)
Maurice Merleau-Ponty was a French existentialist philosopher, whose phenomenological studies of the role of the body in perception and society opened a new field of philosophical investigation. He taught at the University of Lyon, at the Sorbonne, and, after 1952, at the Collège de France. His first important work was The Structure of Behaviour (1942; trans. 1963), a critique of behaviourism. His major work, Phenomenology of Perception (1945; trans. 1962), is a detailed study of perception, influenced by the German philosopher Edmund Husserl's phenomenology and by Gestalt psychology. In it, he argues that science presupposes an original and unique perceptual relation to the world that cannot be explained or even described in scientific terms. This book can be viewed as a critique of cognitivism - the view that the working of the human mind can be understood in terms of rules or programs. It is also a telling critique of the existentialism of his contemporary, Jean-Paul Sartre, showing how human freedom is never total, as Sartre claimed, but is limited by our embodiment.
With Sartre and Simone de Beauvoir, Merleau-Ponty founded an influential postwar French journal, Les Temps Modernes. His brilliant and timely essays on art, film, politics, psychology, and religion, first published in this journal, were later collected in Sense and Nonsense (1948; trans. 1964). At the time of his death, he was working on a book, The Visible and the Invisible (1964; trans. 1968), arguing that the whole perceptual world has the sort of organic unity he had earlier attributed to the body and to works of art.
ANALYTIC and LINGUISTIC PHILOSOPHY
Analytic and Linguistic Philosophy, 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues—the dialectical method, used most famously by his teacher Socrates—has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical views based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
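Russell's point about the divergence of grammatical and logical form can be made concrete with his own celebrated analysis of definite descriptions in “On Denoting” (1905). A sentence such as “The present king of France is bald” shares its grammatical subject-predicate form with “Socrates is bald,” yet on Russell's analysis its logical form is a quantified conjunction; the rendering below is a standard first-order sketch of that analysis, not a formula from the present text:

```latex
% Surface (grammatical) form suggests a simple predication, as in Bald(s).
% Russell's logical form of "The present king of France is bald":
\exists x \,\bigl( \mathit{King}(x) \land \forall y\,(\mathit{King}(y) \rightarrow y = x) \land \mathit{Bald}(x) \bigr)
% Read: there is exactly one present king of France, and that individual is bald.
% Since no such king exists, the sentence comes out false rather than meaningless,
% and no phantom referent for "the present king of France" need be posited.
```

In the logical paraphrase the misleading grammatical subject disappears entirely - precisely the kind of confusion-dissolving restatement that the analytic movement prized.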
Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science—are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism (see Positivism). Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.
The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.
LUDWIG WITTGENSTEIN
Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-Philosophicus (1921; trans. 1922), a work he then believed provided the “final solution” to philosophical problems. Subsequently, he turned from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (pub. posthumously 1953, trans. 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.
Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”
Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analysed into less complex propositions until one arrives at simple or elementary propositions. Correspondingly, the world is composed of complex facts that can be analysed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or “states of affairs.” He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts - the propositions of science - are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.
Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one actually looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, in terms of the rules of the game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.
Semantics (Greek semantikos, “significant”) is the study of the meaning of linguistic signs - that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance - that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs - what they stand for - with the process of assigning those meanings.
Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behaviour that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.
These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.
In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism.
The German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.
One of the leading figures of the Vienna Circle, Carnap made a major contribution to philosophical semantics by developing symbolic logic, a system for analysing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).
An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition - a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it is extended to - the moon - is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
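An interpreted language of this kind can be sketched as a lookup from signs to truth conditions, checked against a stock of observed facts. Everything below - the observation table, the condition functions, the example sentences - is an invented illustration, not a formal system from the logical-positivist literature.

```python
# A minimal sketch of an "interpreted language": each sign is paired
# with a rule of meaning (its truth condition), and the sign's truth
# value is found by checking that condition against observations.

# Observed facts about the world (what one sees on looking for oneself).
observations = {"moon": "sphere", "earth": "sphere", "brick": "cuboid"}

# Rules of meaning: sign -> truth condition (a predicate over observations).
truth_conditions = {
    "the moon is a sphere": lambda obs: obs.get("moon") == "sphere",
    "the brick is a sphere": lambda obs: obs.get("brick") == "sphere",
}

def truth_value(sign):
    """An interpreted sign is true when its truth condition is satisfied."""
    return truth_conditions[sign](observations)

print(truth_value("the moon is a sphere"))   # True
print(truth_value("the brick is a sphere"))  # False
```

Both sentences are understood (they have truth conditions), yet only one is true - which mirrors the distinction the paragraph above draws between understanding a sign and determining its truth value.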
The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs—by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favour of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.
From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs - that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.
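The speech-act view can be caricatured in code by pairing propositional content with an illocutionary force and a few felicity conditions. The condition names below (sincere, appropriate, recognized) paraphrase the requirements listed above; the class and field names are invented, and this is a simplification of Austin's and Searle's accounts, not a reproduction of them.

```python
# A hedged sketch of speech-act analysis: an utterance pairs content
# with an illocutionary force, and it is "successfully meant" only when
# its felicity conditions are satisfied. All names here are invented.
from dataclasses import dataclass

@dataclass
class Utterance:
    content: str       # locutionary content, e.g. "the moon is a sphere"
    force: str         # illocutionary force: "assert", "promise", "command"
    sincere: bool      # speaker means what is said
    appropriate: bool  # fits the social context
    recognized: bool   # hearer can recognize it as meaningful

def successfully_meant(u: Utterance) -> bool:
    """The act succeeds only if its felicity conditions are all met."""
    return u.sincere and u.appropriate and u.recognized

promise = Utterance("I will return the book", "promise",
                    sincere=True, appropriate=True, recognized=True)
print(successfully_meant(promise))  # True
```

On this sketch, meaning lives in what the utterance does (its force) rather than solely in a truth condition - the key contrast with the truth-based semantics described earlier.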
What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic - that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone—independent of a speaker and hearer.
Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analysing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs) - often nominal arguments (noun phrases) - or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression “Bill gives Mary the book,” “gives” is an operator that relates the arguments “Bill,” “Mary,” and “the book.”
Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role - to identify a person - in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.
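The operator-argument analysis, and the mismatch between syntactic and semantic roles, can be made concrete with a small illustration. The dictionaries below are invented data structures, not a notation from descriptive semantics itself.

```python
# An illustrative encoding of descriptive-semantic analysis: operators
# take a fixed number of arguments, and syntactic roles need not line
# up with semantic roles. All data here is invented for illustration.

# "gives" is an operator relating three nominal arguments.
proposition = {"operator": "gives", "arguments": ["Bill", "Mary", "the book"]}

# "kiss", "hit", and "see" form a subclass of two-argument operators
# (actor and receiver); "gives" requires three arguments.
arity = {"kiss": 2, "hit": 2, "see": 2, "gives": 3}
assert arity[proposition["operator"]] == len(proposition["arguments"])

# "John" has the same semantic role (identifying a person) in both
# sentences, but a different syntactic role in each.
syntactic_role = {
    "John is easy to please": "receiver",
    "John is eager to please": "actor",
}
print(syntactic_role["John is easy to please"])
print(syntactic_role["John is eager to please"])
```

The final assertion in the sketch is exactly the check that fails to generalize: syntactic classes (like the two-argument operator subclass) cannot be mapped one-to-one onto semantic roles, as the easy/eager pair shows.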
Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs - usually single words as vocabulary items called lexemes - in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain “seat” in English, the lexemes “chair,” “sofa,” “loveseat,” and “bench” can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning “something on which to sit.”
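The “seat” domain lends itself to a small feature table. The feature values below are rough guesses chosen for illustration (a sofa or bench could seat more or fewer people); only the structure of the analysis - shared component plus distinctive components - is taken from the paragraph above.

```python
# Componential analysis of the English domain "seat", sketched as a
# feature table: the lexemes share one component of meaning and are
# distinguished from one another by distinctive features.

domain = {  # lexeme -> distinctive semantic components (rough values)
    "chair":    {"seats": 1, "back_support": True},
    "loveseat": {"seats": 2, "back_support": True},
    "sofa":     {"seats": 3, "back_support": True},
    "bench":    {"seats": 3, "back_support": False},
}

# The component shared by every lexeme, which defines the domain itself.
shared = "something on which to sit"

def contrast(a, b):
    """Return the features that differentiate two lexemes in the domain."""
    return {f for f in domain[a] if domain[a][f] != domain[b][f]}

print(contrast("chair", "sofa"))   # differ in how many are accommodated
print(contrast("sofa", "bench"))   # differ in back support
```

Running `contrast` over all pairs in the domain recovers exactly the two distinguishing dimensions the text names: number of people accommodated and presence of a back support.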
Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.
Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colourless green ideas sleep furiously”), although grammatical expressions, are meaningless—semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.
Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colourless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.
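One toy way to picture “semantic blocking” is a selectional-restriction check: lexemes carry semantic features, and a verb specifies a feature its subject must have. The features, the rule, and the paraphrases of the ambiguous sentence below are all invented for illustration; they are not a mechanism proposed by generative semanticists.

```python
# A rough sketch of how a semantic component might "block" a sentence
# that is grammatical but meaningless, and how rules might assign more
# than one reading to an ambiguous sentence. All data is invented.

noun_features = {"ideas": {"abstract"}, "babies": {"animate", "concrete"}}
verb_requires = {"sleep": "animate"}   # only animate things can sleep

def semantically_blocked(subject, verb):
    """True when the subject lacks a feature the verb selects for."""
    return verb_requires[verb] not in noun_features[subject]

# The rules must also account for genuine ambiguity:
readings = {
    "They passed the port at midnight": [
        "they sailed past the harbour",
        "they handed round the fortified wine",
    ],
}

print(semantically_blocked("ideas", "sleep"))    # blocked, like "Colourless green ideas sleep"
print(semantically_blocked("babies", "sleep"))   # interpretable
print(len(readings["They passed the port at midnight"]))
```

The hearer's task, on this sketch, is to reject the blocked combination outright and to choose among the listed readings by context - the two abilities the paragraph above says a semantic theory must explain.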
In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech - that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.
Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.
Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible - in a syntactically based theory - for surface structure and deep structure jointly to determine the semantic interpretation of an expression.
The focus of general semantics is how people evaluate words and how that evaluation influences their behaviour. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigour, and the approach has declined in popularity.
Quine was an American philosopher known for his work in mathematical logic and his contributions to a pragmatic theory of knowledge. Born in Akron, Ohio, Quine was educated at Oberlin College and at Harvard University, where he became a member of the faculty in 1936.
Quine became noted for his claim that the way one uses language determines what kinds of things one is committed to saying exist. Moreover, the justification for speaking one way rather than another, just as the justification for adopting one conceptual system rather than another, was a thoroughly pragmatic one for Quine. He also became known for his criticism of the traditional distinction between synthetic statements (empirical, or factual, propositions) and analytic statements (necessarily true propositions). Quine made major contributions in set theory, a branch of mathematical logic concerned with the relationship between classes.
Pragmatism is a philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
The Association for International Conciliation first published William James’s pacifist statement, “The Moral Equivalent of War,” in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism - a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution.
Pragmatists regarded all theories and institutions as tentative hypotheses and solutions. For this reason they believed that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behaviour. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept “brittle,” for example, is given by the observed consequences or properties that objects called “brittle” exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called “the will to believe” and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.
The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has revived in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.
In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.
THE AXIOM
An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: “No sentence can be true and false at the same time” (the principle of contradiction); “If equals are added to equals, the sums are equal”; “The whole is greater than any of its parts.” Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regression in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.
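Modern proof assistants make the axiom/theorem distinction explicit. The Lean sketch below declares the principle of contradiction with an `axiom` keyword purely for illustration - in Lean this principle is actually provable from the built-in logic, so assuming it is redundant there; the point is only to show a principle taken without proof and a theorem derived from it.

```lean
-- Illustrative only: declaring a basic principle as an unproved
-- assumption of the system (in Lean this particular principle is in
-- fact provable, so the `axiom` declaration is purely pedagogical).
axiom non_contradiction : ∀ (p : Prop), ¬ (p ∧ ¬ p)

-- A theorem is then derived from the assumed principle, not proved
-- from nothing - mirroring how logic and pure mathematics begin with
-- unproved assumptions from which other propositions follow.
theorem no_self_contradiction (q : Prop) : ¬ (q ∧ ¬ q) :=
  non_contradiction q
```

In the terminology of the next paragraph, such a declaration would count as an axiom if taken as common to every deductive system, and as a postulate if peculiar to one particular theory.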
The terms axiom and postulate are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.
Semantics is the study of the meaning of linguistic signs - that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance - that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs - what they stand for - with the process of assigning those meanings.
Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behaviour that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.
These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.
In the late 19th century Michel Jules Alfred Breal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism (see Analytic and Linguistic Philosophy).
German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.
One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analysing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).
An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to indicate a definite pen - plume - of the colour red-rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.
An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition - a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it is extended to - the moon - is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs—by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favour of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.
From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs - that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.
What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic - that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone - independent of a speaker and hearer.
Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analysing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs) - often nominal arguments (noun phrases) - or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression “Bill gives Mary the book,” “gives” is an operator that relates the arguments “Bill,” “Mary,” and “the book.”
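The operator-argument structure of such a proposition can be sketched as a simple data structure. The `Proposition` class and its rendering are illustrative assumptions, not standard linguistic notation:

```python
# Sketch of operator-argument (propositional) analysis for
# "Bill gives Mary the book": the operator relates its arguments.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Proposition:
    operator: str            # the relating sign, e.g. "gives"
    arguments: Tuple[str, ...]  # the nominal arguments it combines with

    def render(self) -> str:
        # Display in operator(argument, ...) form
        return f"{self.operator}({', '.join(self.arguments)})"

p = Proposition(operator="gives", arguments=("Bill", "Mary", "the book"))
print(p.render())  # gives(Bill, Mary, the book)
```

The same structure applies whether or not the language marks its operators as "verbs": the analysis depends only on how signs combine, not on conventional parts of speech.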
Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, and making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role - to identify a person - in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.
Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs - usually single words as vocabulary items called lexemes - in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain “seat” in English, the lexemes “chair,” “sofa,” “loveseat,” and “bench” can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning “something on which to sit.”
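The “seat” domain can be sketched as feature bundles in Python. The particular feature names and values below are illustrative assumptions, chosen only to mirror the distinctions named above:

```python
# Componential analysis of the English "seat" domain: each lexeme
# is a bundle of semantic features (components).

lexemes = {
    "chair":    {"seats": 1, "back_support": True},
    "sofa":     {"seats": 3, "back_support": True},
    "loveseat": {"seats": 2, "back_support": True},
    "bench":    {"seats": 3, "back_support": False},
}

# The shared component that makes these lexemes one semantic domain:
DOMAIN = "something on which to sit"

def distinguishing_features(a: str, b: str) -> dict:
    """Return the features on which two lexemes in the domain differ."""
    fa, fb = lexemes[a], lexemes[b]
    return {f: (fa[f], fb[f]) for f in fa if fa[f] != fb[f]}

print(distinguishing_features("chair", "bench"))
# {'seats': (1, 3), 'back_support': (True, False)}
```

Distinctive features separate lexemes within the domain; the shared component (here recorded as `DOMAIN`) is what groups them together in the first place.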
Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.
Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colourless green ideas sleep furiously”), although grammatical expressions, are meaningless - semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.
Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colourless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.
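The idea that a grammatical sentence can be semantically blocked can be sketched with a toy selectional-restriction check. The feature lexicon below is invented purely for the example and is not a claim about any actual generative grammar:

```python
# Toy illustration of "semantic blocking": a string can follow the
# rules of grammar yet violate the selectional restrictions of its
# words, as in "Colourless green ideas sleep furiously".

lexicon = {
    "ideas":      {"category": "N", "animate": False, "concrete": False},
    "green":      {"category": "A", "requires": {"concrete": True}},
    "colourless": {"category": "A", "requires": {"concrete": True}},
    "sleep":      {"category": "V", "requires": {"animate": True}},
}

def blocked(head: str, dependent: str) -> bool:
    """True if the dependent's selectional requirements clash with the head."""
    reqs = lexicon[dependent].get("requires", {})
    return any(lexicon[head].get(f) != v for f, v in reqs.items())

# "ideas" fails the requirements of both "green" and "sleep",
# so the sentence is grammatical but semantically blocked:
print(blocked("ideas", "green"))  # True
print(blocked("ideas", "sleep"))  # True
```

A rule system of roughly this shape - however it is actually formulated - is what lets a hearer judge the sentence meaningless at first hearing, before any question of truth arises.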
In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech - that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.
Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.
Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure); or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible - in a syntactically based theory—for surface structure and deep structure jointly to determine the semantic interpretation of an expression.
The focus of general semantics is how people evaluate words and how that evaluation influences their behaviour. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigour, and the approach has declined in popularity.
Analytic and Linguistic Philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues—the dialectical method, used most famously by his teacher Socrates—has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. This logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depend altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.
The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered. Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.
Martin Heidegger (1889-1976)
German philosopher Martin Heidegger greatly influenced the modern philosophy movements of phenomenology and existentialism. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).
Heidegger was born in Messkirch, Baden. He studied Roman Catholic theology and then philosophy at the University of Freiburg, where he was an assistant to Edmund Husserl, the founder of phenomenology. Heidegger began teaching at Freiburg in 1915. From 1923 to 1928 he taught at Marburg University. He then returned to Freiburg in 1928, inheriting Husserl's position as professor of philosophy. Because of his public support of Adolf Hitler and the Nazi Party in 1933 and 1934, Heidegger's professional activities were restricted in 1945, and controversy surrounded his university standing until his retirement in 1959.
German philosopher Martin Heidegger was instrumental in the development of the 20th-century philosophical school of existential phenomenology, which examines the relationship between phenomena and individual consciousness. His inquiries into the meaning of “authentic” or “inauthentic” existence greatly influenced a broad range of thinkers, including French existentialist Jean-Paul Sartre. Author Michael Inwood explores Heidegger’s key concept of Dasein, or “being,” which was first expounded in his major work Being and Time (1927).
Besides Husserl, Heidegger was especially influenced by the pre-Socratics (see Greek Philosophy; Philosophy), by Danish philosopher Søren Kierkegaard, and by German philosopher Friedrich Nietzsche. In developing his theories, Heidegger rejected traditional philosophic terminology in favour of an individual interpretation of the works of past thinkers. He applied original meanings and etymologies to individual words and expressions, and coined hundreds of new, complex words. In his most important and influential work, Sein und Zeit (Being and Time, 1927), Heidegger was concerned with what he considered the essential philosophical question: What is it, to be? This led to the question of what kind of “being” human beings have. They are, he said, thrown into a world that they have not made but that consists of potentially useful things, including cultural as well as natural objects. Because these objects come to humanity from the past and are used in the present for the sake of future goals, Heidegger posited a fundamental relation between the mode of being of objects, of humanity, and of the structure of time.
The individual is, however, always in danger of being submerged in the world of objects, everyday routine, and the conventional, shallow behaviour of the crowd. The feeling of dread (Angst) brings the individual to a confrontation with death and the ultimate meaninglessness of life, but only in this confrontation can an authentic sense of Being and of freedom be attained.
After 1930, Heidegger turned, in such works as Einführung in die Metaphysik (An Introduction to Metaphysics, 1953), to the interpretation of particular Western conceptions of being. He felt that, in contrast to the reverent ancient Greek conception of being, modern technological society has fostered a purely manipulative attitude that has deprived Being and human life of meaning—a condition he called nihilism. Humanity has forgotten its true vocation and must recover the deeper understanding of Being (achieved by the early Greeks and lost by subsequent philosophers) to be receptive to new understandings of Being.
Heidegger's original treatment of such themes as human finitude, death, nothingness, and authenticity led many observers to associate him with existentialism, and his work had a crucial influence on French existentialist Jean-Paul Sartre. Heidegger, however, eventually repudiated existentialist interpretations of his work. His thought directly influenced the work of French philosophers Michel Foucault and Jacques Derrida and of German sociologist Jürgen Habermas. Since the 1960s his influence has spread beyond continental Europe and has had an increasing impact on philosophy in English-speaking countries worldwide.
Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.
One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.
Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.
The modern philosophy movements of phenomenology and existentialism have been greatly influenced by the thought of German philosopher Martin Heidegger. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).
Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis—in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology (see Metaphysics) as well as on language.
Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.
Twentieth-century writer and philosopher Albert Camus examined what he considered the tragic inability of human beings to understand and transcend their intolerable conditions. In his work Camus presented an absurd and seemingly unreasonable world in which some people futilely struggle to find meaning and rationality while others simply refuse to care. For example, the main character of The Stranger (1942) kills a man on a beach for no reason and accepts his arrest and punishment with dispassion. In contrast, in The Plague (1947), Camus introduces characters who act with courage in the face of absurdity.
A number of existentialist philosophers used literary forms to convey their thought, and existentialism has been as vital and as extensive a movement in literature as in philosophy. The 19th-century Russian novelist Fyodor Dostoyevsky is probably the greatest existentialist literary figure. In Notes from the Underground (1864), the alienated antihero rages against the optimistic assumptions of rationalist humanism. The view of human nature that emerges in this and other novels of Dostoyevsky is that it is unpredictable and perversely self-destructive; only Christian love can save humanity from itself, but such love cannot be understood philosophically. As the character Alyosha says in The Brothers Karamazov (1879-80), “We must love life more than the meaning of it.”
The opening lines of Russian novelist Fyodor Dostoyevsky’s Notes from Underground (1864), “I am a sick man. ... I am a spiteful man,” are among the most famous in 19th-century literature. Published five years after his release from prison and involuntary military service in Siberia, Notes from Underground marks Dostoyevsky’s rejection of the radical social thinking he had embraced in his youth. The unnamed narrator is antagonistic in tone, questioning the reader’s sense of morality as well as the foundations of rational thinking. In this excerpt from the beginning of the novel, the narrator describes himself, derisively referring to himself as an “overly conscious” intellectual.
In the 20th century, the novels of the Austrian Jewish writer Franz Kafka, such as The Trial (1925; trans. 1937) and The Castle (1926; trans. 1930), present isolated men confronting vast, elusive, menacing bureaucracies; Kafka's themes of anxiety, guilt, and solitude reflect the influence of Kierkegaard, Dostoyevsky, and Nietzsche. The influence of Nietzsche is also discernible in the novels of the French writer André Malraux and in the plays of Sartre. The work of the French writer Albert Camus is usually associated with existentialism because of the prominence in it of such themes as the apparent absurdity and futility of life, the indifference of the universe, and the necessity of engagement in a just cause. Existentialist themes are also reflected in the theatre of the absurd, notably in the plays of Samuel Beckett and Eugène Ionesco. In the United States, the influence of existentialism on literature has been more indirect and diffuse, but traces of Kierkegaard's thought can be found in the novels of Walker Percy and John Updike, and various existentialist themes are apparent in the work of such diverse writers as Norman Mailer, John Barth, and Arthur Miller.
Maurice Merleau-Ponty was a French existentialist philosopher whose phenomenological studies of the role of the body in perception and society opened a new field of philosophical investigation. He taught at the University of Lyon, at the Sorbonne, and, after 1952, at the Collège de France. His first important work was The Structure of Behavior (1942; trans. 1963), a critique of behaviourism. His major work, Phenomenology of Perception (1945; trans. 1962), is a detailed study of perception, influenced by the German philosopher Edmund Husserl's phenomenology and by Gestalt psychology. In it, he argues that science presupposes an original and unique perceptual relation to the world that cannot be explained or even described in scientific terms. This book can be viewed as a critique of cognitivism, the view that the working of the human mind can be understood in terms of rules or programs. It is also a telling critique of the existentialism of his contemporary, Jean-Paul Sartre, showing how human freedom is never total, as Sartre claimed, but is limited by our embodiment.
With Sartre and Simone de Beauvoir, Merleau-Ponty founded an influential postwar French journal, Les Temps Modernes. His brilliant and timely essays on art, film, politics, psychology, and religion, first published in this journal, were later collected in Sense and Nonsense (1948; trans. 1964). At the time of his death, he was working on a book, The Visible and the Invisible (1964; trans. 1968), arguing that the whole perceptual world has the sort of organic unity he had earlier attributed to the body and to works of art.
Semantics (Greek semantikos, “significant”), the study of the meaning of linguistic signs, that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance, that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs, what they stand for, with the process of assigning those meanings.
Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behaviour that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.
These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.
In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism (see Analytic and Linguistic Philosophy).
The German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.
One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analysing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).
An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, to specify a definite pen, plume, of the colour red, rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.
An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition—a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it refers to, the moon, is in fact spherical. To determine the sign's truth value, one must look at the moon for oneself.
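The pairing of signs with truth conditions can be sketched in a few lines of code. This is an illustrative toy, not part of any established semantic formalism; the world model and the sign inventory are invented for the example:

```python
# A toy "interpreted language": each sign is linked by a rule of
# meaning to a truth condition, a predicate evaluated against a
# model of the world.

# Invented world model: the facts the evaluation consults.
world = {"moon": {"shape": "sphere"}, "earth": {"shape": "sphere"}}

# Rules of meaning: sign -> truth condition.
truth_conditions = {
    "the moon is a sphere": lambda w: w["moon"]["shape"] == "sphere",
    "the earth is a cube": lambda w: w["earth"]["shape"] == "cube",
}

def truth_value(sign, w):
    """A sign's truth value: whether its truth condition is satisfied."""
    return truth_conditions[sign](w)

print(truth_value("the moon is a sphere", world))  # True
print(truth_value("the earth is a cube", world))   # False
```

Here "looking at the moon for oneself" corresponds to consulting the world model; the sign is true only when the model satisfies its truth condition.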
The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs—by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favour of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.
From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs - that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.
What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic—that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone—independent of a speaker and hearer.
Researchers in descriptive semantics examine what signs mean in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analysing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs), often nominal arguments (noun phrases), or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression “Bill gives Mary the book,” “gives” is an operator that relates the arguments “Bill,” “Mary,” and “the book.”
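The operator-argument structure described above can be represented directly in code. The sketch below is a hypothetical illustration of the idea, not an established notation:

```python
from dataclasses import dataclass

@dataclass
class Proposition:
    """A sign analysed as an operator combining with its arguments."""
    operator: str        # the relating sign, e.g. "gives"
    arguments: tuple     # the nominal arguments it combines with

# "Bill gives Mary the book": "gives" relates three nominal arguments.
p = Proposition(operator="gives", arguments=("Bill", "Mary", "the book"))
print(p.operator, len(p.arguments))  # gives 3
```

The same structure accommodates operators of different arities: "sleeps" would take one argument, "kisses" two, "gives" three.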
Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, and making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role, to identify a person, in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.
Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs—usually single words as vocabulary items called lexemes—in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain “seat” in English, the lexemes “chair,” “sofa,” “loveseat,” and “bench” can be distinguished from one another according to how many people are accommodated and whether a back support is included. At the same time all these lexemes share the common component, or feature, of meaning “something on which to sit.”
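The “seat” example lends itself to a small feature matrix. The feature names and values below are illustrative assumptions chosen for the sketch, not a standard inventory:

```python
# Componential analysis of the "seat" domain: each lexeme is a bundle
# of semantic features; all share the component "something on which to sit".
seat_domain = {
    "chair":    {"seats": 1, "back_support": True},
    "bench":    {"seats": 3, "back_support": False},
    "sofa":     {"seats": 3, "back_support": True},
    "loveseat": {"seats": 2, "back_support": True},
}

def distinctive_features(a, b, domain=seat_domain):
    """The features that differentiate two lexemes in the domain."""
    return {f for f in domain[a] if domain[a][f] != domain[b][f]}

print(distinctive_features("chair", "sofa"))  # {'seats'}
print(distinctive_features("bench", "sofa"))  # {'back_support'}
```

The shared component of the domain is whatever is common to every feature bundle; the distinctive components are the features on which lexemes differ.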
Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.
Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colourless green ideas sleep furiously”), although grammatical expressions, are meaningless—semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.
Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colourless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.
In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech—that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, object, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.
Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.
Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure) or as semantically based. Noam Chomsky, an American scholar who has been particularly influential in this debate, has suggested that in a syntactically based theory, surface structure and deep structure may jointly determine the semantic interpretation of an expression.
The focus of general semantics is how people evaluate words and how that evaluation influences their behaviour. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigour, and the approach has declined in popularity.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the English-speaking world in the 20th century.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science—are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism (see Positivism). Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.
The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered.
Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language and to how language is used in everyday discourse can often aid in resolving philosophical problems.
Linguistics, the scientific study of language. It encompasses the description of languages, the study of their origin, and the analysis of how children acquire language and how people learn languages other than their own. Linguistics is also concerned with relationships between languages and with the ways languages change over time. Linguists may study language as a thought process and seek a theory that accounts for the universal human capacity to produce and understand language. Some linguists examine language within a cultural context. By observing talk, they try to determine what a person needs to know in order to speak appropriately in different settings, such as the workplace, among friends, or among family. Other linguists focus on what happens when speakers from different language and cultural backgrounds interact. Linguists may also concentrate on how to help people learn another language, using what they know about the learner’s first language and about the language being acquired.
Although there are many ways of studying language, most approaches belong to one of the two main branches of linguistics: descriptive linguistics and comparative linguistics.
Descriptive linguistics is the study and analysis of spoken language. The techniques of descriptive linguistics were devised by German American anthropologist Franz Boas and American linguist and anthropologist Edward Sapir in the early 1900s to record and analyze Native American languages. Descriptive linguistics begins with what a linguist hears native speakers say. By listening to native speakers, the linguist gathers a body of data and analyzes it in order to identify distinctive sounds, called phonemes. Individual phonemes, such as /p/ and /b/, are established on the grounds that substitution of one for the other changes the meaning of a word. After identifying the entire inventory of sounds in a language, the linguist looks at how these sounds combine to create morphemes, or units of sound that carry meaning, such as the words push and bush. Morphemes may be individual words such as push; root words, such as berry in blueberry; or prefixes (pre- in preview) and suffixes (-ness in openness).
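The substitution test for phonemes can be sketched as code. The sketch below treats letters as stand-ins for phonemic segments, a simplification, and the helper names are invented for the example:

```python
from itertools import combinations

def minimal_pairs(words):
    """Pairs of equal-length words differing in exactly one segment.
    The differing segments (here, letters standing in for sounds)
    are candidate phonemes, since swapping them changes meaning."""
    pairs = []
    for a, b in combinations(words, 2):
        if len(a) == len(b):
            diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
            if len(diffs) == 1:
                pairs.append((a, b))
    return pairs

# "push"/"bush" differ only in the initial segment, so /p/ and /b/
# are established as distinct phonemes.
print(minimal_pairs(["push", "bush", "bash"]))
# [('push', 'bush'), ('bush', 'bash')]
```

A real analysis would work over phonemic transcriptions rather than spellings, but the substitution logic is the same.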
The linguist’s next step is to see how morphemes combine into sentences, obeying both the dictionary meaning of the morpheme and the grammatical rules of the sentence. In the sentence “She pushed the bush,” the morpheme she, a pronoun, is the subject; push, a transitive verb, is the verb; the, a definite article, is the determiner; and bush, a noun, is the object. Knowing the function of the morphemes in the sentence enables the linguist to describe the grammar of the language. The scientific procedures of phonemics (finding phonemes), morphology (discovering morphemes), and syntax (describing the order of morphemes and their function) provide descriptive linguists with a way to write down grammars of languages never before written down or analysed. In this way they can begin to study and understand these languages.
Comparative linguistics is the study and analysis, by means of written records, of the origins and relatedness of different languages. In 1786 Sir William Jones, a British scholar, asserted that Sanskrit, Greek, and Latin were related to one another and had descended from a common source. He based this assertion on observations of similarities in sounds and meanings among the three languages. For example, the Sanskrit word bhratar for “brother” resembles the Latin word frater, the Greek word phrater (and the English word brother).
Other scholars went on to compare Icelandic with Scandinavian languages, and Germanic languages with Sanskrit, Greek, and Latin. The correspondences among languages, known as genetic relationships, came to be represented on what comparative linguists refer to as family trees. Family trees established by comparative linguists include the Indo-European, relating Sanskrit, Greek, Latin, German, English, and other Asian and European languages; the Algonquian, relating Fox, Cree, Menomini, Ojibwa, and other Native North American languages; and the Bantu, relating Swahili, Xhosa, Zulu, Kikuyu, and other African languages.
Comparative linguists also look for similarities in the way words are formed in different languages. Latin and English, for example, change the form of a word to express different meanings, as when the English verb go changes to went and gone to express a past action. Chinese, on the other hand, has no such inflected forms; the verb remains the same while other words indicate the time (as in “go store tomorrow”). In Swahili, prefixes, suffixes, and infixes (additions in the body of the word) combine with a root word to change its meaning. For example, a single word might express when something was done, by whom, to whom, and in what manner.
Some comparative linguists reconstruct hypothetical ancestral languages known as proto-languages, which they use to demonstrate relatedness among contemporary languages. A proto-language is not intended to depict a real language, however, and does not represent the speech of ancestors of people speaking modern languages. Unfortunately, some groups have mistakenly used such reconstructions in efforts to demonstrate the ancestral homeland of a people.
Comparative linguists have suggested that certain basic words in a language do not change over time, because people are reluctant to introduce new words for such constants as arm, eye, or mother. These words are termed culture-free. By comparing lists of culture-free words in languages within a family, linguists can derive the percentage of related words and use a formula to figure out when the languages separated from one another.
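The formula alluded to here is the classic glottochronology equation, t = log c / (2 log r), where c is the proportion of shared cognates on a culture-free word list and r is an assumed retention rate per millennium (about 86 percent in Morris Swadesh's original work). A minimal sketch, under those assumptions:

```python
import math

def separation_time(c, r=0.86):
    """Estimate millennia since two related languages separated.

    c: proportion of shared cognates on a culture-free word list.
    r: assumed retention rate per millennium (Swadesh's classic value).
    """
    return math.log(c) / (2 * math.log(r))

# Two languages sharing 70% of a culture-free word list:
print(round(separation_time(0.70), 2))  # 1.18 (millennia)
```

The method is controversial (retention rates are not actually constant across languages or word lists), so results like this are rough estimates at best.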
By the 1960s comparativists were no longer satisfied with focusing on origins, migrations, and the family tree method. They challenged as unrealistic the notion that an earlier language could remain sufficiently isolated for other languages to be derived exclusively from it over a period of time. Today comparativists seek to understand the more complicated reality of language history, taking language contact into account. They are concerned with universal characteristics of language and with comparisons of grammars and structures.
The field of linguistics both borrows from and lends its own theories and methods to other disciplines. The many subfields of linguistics have expanded our understanding of languages. Linguistic theories and methods are also used in other fields of study. These overlapping interests have led to the creation of several cross-disciplinary fields.
Sociolinguistics is the study of patterns and variations in language within a society or community. It focuses on the way people use language to express social class, group status, gender, or ethnicity, and it looks at how they make choices about the form of language they use. It also examines the way people use language to negotiate their role in society and to achieve positions of power. For example, sociolinguistic studies have found that the way a New Yorker pronounces the phoneme /r/ in an expression such as “fourth floor” can indicate the person’s social class. According to one study, people aspiring to move from the lower middle class to the upper middle class attach prestige to pronouncing the /r/. Sometimes they even overcorrect their speech, pronouncing an /r/ where those whom they wish to copy may not.
Some sociolinguists believe that analysing such variables as the use of a particular phoneme can predict the direction of language change. Change, they say, moves toward the variable associated with power, prestige, or another quality having high social value. Other sociolinguists focus on what happens when speakers of different languages interact. This approach to language change emphasizes the way languages mix rather than the direction of change within a community. The goal of sociolinguistics is to understand communicative competence—what people need to know to use the appropriate language for a given social setting.
Psycholinguistics merges the fields of psychology and linguistics to study how people process language and how language use is related to underlying mental processes. Studies of children’s language acquisition and of second-language acquisition are psycholinguistic in nature. Psycholinguists work to develop models for how language is processed and understood, using evidence from studies of what happens when these processes go awry. They also study language disorders such as aphasia (impairment of the ability to use or comprehend words) and dyslexia (impairment of the ability to make out written language).
Computational linguistics involves the use of computers to compile linguistic data, analyze languages, translate from one language to another, and develop and test models of language processing. Linguists use computers and large samples of actual language to analyze the relatedness and the structure of languages and to look for patterns and similarities. Computers also aid in stylistic studies, information retrieval, various forms of textual analysis, and the construction of dictionaries and concordances. Applying computers to language studies has resulted in machine translation systems and machines that recognize and produce speech and text. Such machines facilitate communication with humans, including those who are perceptually or linguistically impaired.
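One of the tasks listed above, constructing a concordance, is simple enough to sketch. The fragment below is a toy illustration (the tokenizer is deliberately crude, splitting on whitespace and stripping basic punctuation); it lists every occurrence of a word together with a little surrounding context, in the style of a keyword-in-context index.

```python
def concordance(text, keyword, width=2):
    """List each occurrence of keyword with `width` words of context on each side."""
    tokens = text.lower().split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok.strip(".,;!?") == keyword:
            left = tokens[max(0, i - width):i]
            right = tokens[i + 1:i + 1 + width]
            hits.append(" ".join(left + [tok] + right))
    return hits

sample = "She pushed the bush. The bush by the wall was the biggest bush."
for line in concordance(sample, "bush"):
    print(line)
```

Real corpus tools do far more (lemmatization, part-of-speech filtering, frequency statistics), but the underlying idea of aligning every occurrence of a form with its context is the same.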
Applied linguistics employs linguistic theory and methods in teaching and in research on learning a second language. Linguists look at the errors people make as they learn another language and at their strategies for communicating in the new language at different degrees of competence. In seeking to understand what happens in the mind of the learner, applied linguists recognize that motivation, attitude, learning style, and personality affect how well a person learns another language.
The massive extinction of human languages that has taken place worldwide over the last few centuries has prompted discussion of the enormous consequences that such loss has had on the richness of the world's cultural heritage.
Anthropological linguistics, also known as linguistic anthropology, uses linguistic approaches to analyze culture. Anthropological linguists examine the relationship between a culture and its language, the way cultures and languages have changed over time, and how different cultures and languages are related to one another. For example, the present English use of family and given names arose in the late 13th and early 14th centuries when the laws concerning registration, tenure, and inheritance of property were changed.
Philosophical linguistics examines the philosophy of language. Philosophers of language search for the grammatical principles and tendencies that all human languages share. Among the concerns of linguistic philosophers is the range of possible word order combinations throughout the world. One finding is that about 95 percent of the world’s languages place the subject before the object, whether in subject-verb-object order, as English does (“She pushed the bush.”), or in subject-object-verb order; orders that place the object before the subject are rare.
Neurolinguistics is the study of how language is processed and represented in the brain. Neurolinguists seek to identify the parts of the brain involved with the production and understanding of language and to determine where the components of language (phonemes, morphemes, and structure or syntax) are stored. In doing so, they make use of techniques for analyzing the structure of the brain and the effects of brain damage on language.
Speculation about language goes back thousands of years. Ancient Greek philosophers speculated on the origins of language and the relationship between objects and their names. They also discussed the rules that govern language, or grammar, and by the 3rd century BC they had begun grouping words into parts of speech and devising names for different forms of verbs and nouns.
In India religion provided the motivation for the study of language nearly 2500 years ago. Hindu priests noted that the language they spoke had changed since the compilation of their ancient sacred texts, the Vedas, starting about 1000 BC. They believed that for certain religious ceremonies based upon the Vedas to succeed, they needed to reproduce the language of the Vedas precisely. Panini, an Indian grammarian who lived about 400 BC, produced the earliest work describing the rules of Sanskrit, the ancient language of India.
The Romans used Greek grammars as models for their own, adding commentary on Latin style and usage. Statesman and orator Marcus Tullius Cicero wrote on rhetoric and style in the 1st century BC. Later grammarians Aelius Donatus (4th century AD) and Priscian (6th century AD) produced detailed Latin grammars. Roman works served as textbooks and standards for the study of language for more than 1000 years.
It was not until the end of the 18th century that language was researched and studied in a scientific way. During the 17th and 18th centuries, modern languages, such as French and English, replaced Latin as the means of universal communication in the West. This occurrence, along with developments in printing, meant that many more texts became available. At about this time, the study of phonetics, or the sounds of a language, began. Such investigations led to comparisons of sounds in different languages; in the late 18th century the observation of correspondences among Sanskrit, Latin, and Greek gave birth to the field of Indo-European linguistics.
During the 19th century, European linguists focused on philology, or the historical analysis and comparison of languages. They studied written texts and looked for changes over time or for relationships between one language and another.
American linguist, writer, teacher, and political activist Noam Chomsky is considered the founder of transformational-generative linguistic analysis, which revolutionized the field of linguistics. This system of linguistics treats grammar as a theory of language—that is, Chomsky believes that in addition to the rules of grammar specific to individual languages, there are universal rules common to all languages that indicate that the ability to form and understand language is innate to all human beings. Chomsky is also well known for his political activism; he opposed United States involvement in Vietnam in the 1960s and 1970s and has written various books and articles and delivered many lectures in an attempt to educate and empower people on various political and social issues.
In the early 20th century, linguistics expanded to include the study of unwritten languages. In the United States linguists and anthropologists began to study the rapidly disappearing spoken languages of Native North Americans. Because many of these languages were unwritten, researchers could not use historical analysis in their studies. In their pioneering research on these languages, anthropologists Franz Boas and Edward Sapir developed the techniques of descriptive linguistics and theorized on the ways in which language shapes our perceptions of the world.
An important outgrowth of descriptive linguistics is a theory known as structuralism, which assumes that language is a system with a highly organized structure. Structuralism began with the publication of Cours de linguistique générale (1916; Course in General Linguistics, 1959) by Swiss linguist Ferdinand de Saussure. This work, compiled by Saussure’s students after his death, is considered the foundation of the modern field of linguistics. Saussure made a distinction between actual speech, or spoken language, and the knowledge underlying speech that speakers share about what is grammatical. Speech, he said, represents instances of grammar, and the linguist’s task is to find the underlying rules of a particular language from examples found in speech. To the structuralist, grammar is a set of relationships that account for speech, rather than a set of instances of speech, as it is to the descriptivists.
Once linguists began to study language as a set of abstract rules that somehow account for speech, other scholars began to take an interest in the field. They drew analogies between language and other forms of human behaviour, based on the belief that a shared structure underlies many aspects of a culture. Anthropologists, for example, became interested in a structuralist approach to the interpretation of kinship systems and analysis of myth and religion. American linguist Leonard Bloomfield promoted structuralism in the United States.
Saussure’s ideas also influenced European linguistics, most notably in France and Czechoslovakia (now the Czech Republic). In 1926 Czech linguist Vilem Mathesius founded the Linguistic Circle of Prague, a group that expanded the focus of the field to include the context of language use. The Prague circle developed the field of phonology, or the study of sounds, and demonstrated that universal features of sounds in the languages of the world interrelate in a systematic way. Linguistic analysis, they said, should focus on the distinctiveness of sounds rather than on the ways they combine. Where descriptivists tried to locate and describe individual phonemes, such as /b/ and /p/, the Prague linguists stressed the features of these phonemes and their interrelationships in different languages. In English, for example, voicing distinguishes the otherwise similar sounds /b/ and /p/, but these are not distinct phonemes in a number of other languages. An Arabic speaker might pronounce the cities Pompei and Bombay the same way.
As linguistics developed in the 20th century, the notion became prevalent that language is more than speech: specifically, that it is an abstract system of interrelationships shared by members of a speech community. Structural linguistics led linguists to look at the rules and the patterns of behaviour shared by such communities. Whereas structural linguists saw the basis of language in the social structure, other linguists looked at language as a mental process.
The 1957 publication of Syntactic Structures by American linguist Noam Chomsky initiated what many view as a scientific revolution in linguistics. Chomsky sought a theory that would account for both linguistic structure and the creativity of language—the fact that we can create entirely original sentences and understand sentences never before uttered. He proposed that all people have an innate ability to acquire language. The task of the linguist, he claimed, is to describe this universal human ability, known as language competence, with a grammar from which the grammars of all languages could be derived. The linguist would develop this grammar by looking at the rules children use in hearing and speaking their first language. He termed the resulting model, or grammar, a transformational-generative grammar, referring to the transformations (or rules) that generate (or account for) language. Certain rules, Chomsky asserted, are shared by all languages and form part of a universal grammar, while others are language specific and associated with particular speech communities. Since the 1960s much of the development in the field of linguistics has been a reaction to or against Chomsky’s theories.
At the end of the 20th century, linguists used the term grammar primarily to refer to a subconscious linguistic system that enables people to produce and comprehend an unlimited number of utterances. Grammar thus accounts for our linguistic competence. Observations about the actual language we use, or language performance, are used to theorize about this invisible mechanism known as grammar.
The orientation toward the scientific study of language led by Chomsky has had an impact on nongenerative linguists as well. Comparative and historically oriented linguists are looking for the various ways linguistic universals show up in individual languages. Psycholinguists, interested in language acquisition, are investigating the notion that an ideal speaker-hearer is the origin of the acquisition process. Sociolinguists are examining the rules that underlie the choice of language variants, or codes, and allow for switching from one code to another. Some linguists are studying language performance—the way people use language—to see how it reveals a cognitive ability shared by all human beings. Others seek to understand animal communication within such a framework. What mental processes enable chimpanzees to make signs and communicate with one another and how do these processes differ from those of humans?
THE PROBLEM OF CONSCIOUSNESS
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In this article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.
The most overwhelming question in neurobiology today is the relation between the mind and the brain. Everyone agrees that what we know as mind is closely related to certain aspects of the behaviour of the brain, not to the heart, as Aristotle thought. Its most mysterious aspect is consciousness or awareness, which can take many forms, from the experience of pain to self-consciousness. In the past the mind (or soul) was often regarded, as it was by Descartes, as something immaterial, separate from the brain but interacting with it in some way. A few neuroscientists, such as Sir John Eccles, still assert that the soul is distinct from the body. But most neuroscientists now believe that all aspects of mind, including its most puzzling attribute, consciousness or awareness, are likely to be explainable in a more materialistic way as the behaviour of large sets of interacting neurons. As William James, the father of American psychology, said a century ago, consciousness is not a thing but a process.
Exactly what the process is, however, has yet to be discovered. For many years after James penned The Principles of Psychology, consciousness was a taboo concept in American psychology because of the dominance of the behaviorist movement. With the advent of cognitive science in the mid-1950s, it became possible once more for psychologists to consider mental processes as opposed to merely observing behaviour. In spite of these changes, until recently most cognitive scientists ignored consciousness, as did almost all neuroscientists. The problem was felt to be either purely "philosophical" or too elusive to study experimentally. It would not have been easy for a neuroscientist to get a grant just to study consciousness.
In our opinion, such timidity is ridiculous, so a few years ago we began to think about how best to attack the problem scientifically. How to explain mental events as caused by the firing of large sets of neurons? Although there are those who believe such an approach is hopeless, we feel it is not productive to worry too much over aspects of the problem that cannot be solved scientifically or, more precisely, cannot be solved solely by using existing scientific ideas. Radically new concepts may indeed be needed—recall the modifications of scientific thinking forced on us by quantum mechanics. The only sensible approach is to press the experimental attack until we are confronted with dilemmas that call for new ways of thinking.
There are many possible approaches to the problem of consciousness. Some psychologists feel that any satisfactory theory should try to explain as many aspects of consciousness as possible, including emotion, imagination, dreams, mystical experiences and so on. Although such an all-embracing theory will be necessary in the long run, we thought it wiser to begin with the particular aspect of consciousness that is likely to yield most easily. What this aspect may be is a matter of personal judgment. We selected the mammalian visual system because humans are very visual animals and because so much experimental and theoretical work has already been done on it.
It is not easy to grasp exactly what we need to explain, and it will take many careful experiments before visual consciousness can be described scientifically. We did not attempt to define consciousness itself because of the dangers of premature definition. (If this seems like a copout, try defining the word "gene" - you will not find it easy.) Yet the experimental evidence that already exists provides enough of a glimpse of the nature of visual consciousness to guide research. In this article, we will attempt to show how this evidence opens the way to attack this profound and intriguing problem.
Visual theorists agree that the problem of visual consciousness is ill posed. The mathematical term "ill posed" means that additional constraints are needed to solve the problem. Although the main function of the visual system is to perceive objects and events in the world around us, the information available to our eyes is not sufficient by itself to provide the brain with its unique interpretation of the visual world. The brain must use past experience (either its own or that of our distant ancestors, which is embedded in our genes) to help interpret the information coming into our eyes. An example would be the derivation of the three-dimensional representation of the world from the two-dimensional signals falling onto the retinas of our two eyes or even onto one of them.
Visual theorists also would agree that seeing is a constructive process, one in which the brain has to carry out complex activities (sometimes called computations) in order to decide which interpretation to adopt of the ambiguous visual input. "Computation" implies that the brain acts to form a symbolic representation of the visual world, with a mapping (in the mathematical sense) of certain aspects of that world onto elements in the brain.
Ray Jackendoff of Brandeis University postulates, as do most cognitive scientists, that the computations carried out by the brain are largely unconscious and that what we become aware of is the result of these computations. But while the customary view is that this awareness occurs at the highest levels of the computational system, Jackendoff has proposed an intermediate-level theory of consciousness.
What we see, Jackendoff suggests, relates to a representation of surfaces that are directly visible to us, together with their outline, orientation, colour, texture and movement. (This idea has similarities to what the late David C. Marr of the Massachusetts Institute of Technology called a "2 1/2-dimensional sketch." It is more than a two-dimensional sketch because it conveys the orientation of the visible surfaces. It is less than three-dimensional because depth information is not explicitly represented.) In the next stage this sketch is processed by the brain to produce a three-dimensional representation. Jackendoff argues that we are not visually aware of this three-dimensional representation.
An example may make this process clearer. If you look at a person whose back is turned to you, you can see the back of the head but not the face. Nevertheless, your brain infers that the person has a face. You can deduce as much because if that person turned around and had no face, you would be very surprised.
The viewer-centered representation that corresponds to the visible back of the head is what you are vividly aware of. What your brain infers about the front would come from some kind of three-dimensional representation. This does not mean that information flows only from the surface representation to the three-dimensional one; it almost certainly flows in both directions. When you imagine the front of the face, what you are aware of is a surface representation generated by information from the three-dimensional model.
It is important to distinguish between an explicit and an implicit representation. An explicit representation is something that is symbolized without further processing. An implicit representation contains the same information but requires further processing to make it explicit. The pattern of coloured dots on a television screen, for example, contains an implicit representation of objects (say, a person's face), but only the dots and their locations are explicit. When you see a face on the screen, there must be neurons in your brain whose firing, in some sense, symbolizes that face.
We call this pattern of firing neurons an active representation. A latent representation of a face must also be stored in the brain, probably as a special pattern of synaptic connections between neurons. For example, you probably have a representation of the Statue of Liberty in your brain, a representation that is usually inactive. If you do think about the Statue, the representation becomes active, with the relevant neurons firing away.
An object, incidentally, may be represented in more than one way—as a visual image, as a set of words and their related sounds, or even as a touch or a smell. These different representations are likely to interact with one another. The representation is likely to be distributed over many neurons, both locally and more globally. Such a representation may not be as simple and straightforward as uncritical introspection might indicate. There is suggestive evidence, partly from studying how neurons fire in various parts of a monkey's brain and partly from examining the effects of certain types of brain damage in humans, that different aspects of a face—and of the implications of a face—may be represented in different parts of the brain.
First, there is the representation of a face as a face: two eyes, a nose, a mouth and so on. The neurons involved are usually not too fussy about the exact size or position of this face in the visual field, nor are they very sensitive to small changes in its orientation. In monkeys, there are neurons that respond best when the face is turning in a particular direction, while others seem to be more concerned with the direction in which the eyes are gazing.
Then there are representations of the parts of a face, as separate from those for the face as a whole. Further, the implications of seeing a face, such as that person's sex, the facial expression, the familiarity or unfamiliarity of the face, and in particular whose face it is, may each be correlated with neurons firing in other places.
What we are aware of at any moment, in one sense or another, is not a simple matter. We have suggested that there may be a very transient form of fleeting awareness that represents only rather simple features and does not require an attentional mechanism. From this brief awareness the brain constructs a viewer-centered representation—what we see vividly and clearly—that does require attention. This in turn probably leads to three-dimensional object representations and thence to more cognitive ones.
Representations corresponding to vivid consciousness are likely to have special properties. William James thought that consciousness involved both attention and short-term memory. Most psychologists today would agree with this view. Jackendoff writes that consciousness is "enriched" by attention, implying that whereas attention may not be essential for certain limited types of consciousness, it is necessary for full consciousness. Yet it is not clear exactly which forms of memory are involved. Is long-term memory needed? Some forms of acquired knowledge are so embedded in the machinery of neural processing that they are almost certainly used in becoming aware of something. On the other hand, there is evidence from studies of brain-damaged patients that the ability to lay down new long-term episodic memories is not essential for consciousness to be experienced.
It is difficult to imagine that anyone could be conscious if he or she had no memory whatsoever of what had just happened, even an extremely short one. Visual psychologists talk of iconic memory, which lasts for a fraction of a second, and working memory (such as that used to remember a new telephone number) that lasts for only a few seconds unless it is rehearsed. It is not clear whether both of these are essential for consciousness. In any case, the division of short-term memory into these two categories may be too crude.
If these complex processes of visual awareness are localized in parts of the brain, which processes are likely to be where? Many regions of the brain may be involved, but it is almost certain that the cerebral neocortex plays a dominant role. Visual information from the retina reaches the neocortex mainly by way of a part of the thalamus (the lateral geniculate nucleus); another significant visual pathway from the retina is to the superior colliculus, at the top of the brain stem.
The cortex in humans consists of two intricately folded sheets of nerve tissue, one on each side of the head. These sheets are connected by a large tract of about half a billion axons called the corpus callosum. It is well known that if the corpus callosum is cut, as is done for certain cases of intractable epilepsy, one side of the brain is not aware of what the other side is seeing. In particular, the left side of the brain (in a right-handed person) appears not to be aware of visual information received exclusively by the right side. This shows that none of the information required for visual awareness can reach the other side of the brain by travelling down to the brain stem and, from there, back up. In a normal person, such information can get to the other side only by using the axons in the corpus callosum.
A different part of the brain—the hippocampal system—is involved in one-shot, or episodic, memories that, over weeks and months, it passes on to the neocortex. This system is so placed that it receives inputs from, and projects to, many parts of the brain. Thus, one might suspect that the hippocampal system is the essential seat of consciousness. This is not the case: evidence from studies of patients with damaged brains shows that this system is not essential for visual awareness, although naturally a patient lacking one is severely handicapped in everyday life because he cannot remember anything that took place more than a minute or so in the past.
In broad terms, the neocortex of alert animals probably acts in two ways. By building on crude and somewhat redundant wiring, produced by our genes and by embryonic processes, the neocortex draws on visual and other experience to slowly "rewire" itself to create categories (or "features") it can respond to. A new category is not fully created in the neocortex after exposure to only one example of it, although some small modifications of the neural connections may be made.
The second function of the neocortex (at least of the visual part of it) is to respond extremely rapidly to incoming signals. To do so, it uses the categories it has learned and tries to find the combinations of active neurons that, on the basis of its past experience, are most likely to represent the relevant objects and events in the visual world at that moment. The formation of such coalitions of active neurons may also be influenced by biases coming from other parts of the brain: for example, signals telling it what best to attend to or high-level expectations about the nature of the stimulus.
Consciousness, as James noted, is always changing. These rapidly formed coalitions occur at different levels and interact to form even broader coalitions. They are transient, lasting usually for only a fraction of a second. Because coalitions in the visual system are the basis of what we see, evolution has seen to it that they form as fast as possible; otherwise, no animal could survive. The brain is handicapped in forming neuronal coalitions rapidly because, by computer standards, neurons act very slowly. The brain compensates for this relative slowness partly by using very many neurons, simultaneously and in parallel, and partly by arranging the system in a roughly hierarchical manner.
If visual awareness at any moment corresponds to sets of neurons firing, then the obvious question is: Where are these neurons located in the brain, and in what way are they firing? Visual awareness is highly unlikely to occupy all the neurons in the neocortex that are firing above their background rate at a particular moment. We would expect that, theoretically, at least some of these neurons would be involved in doing computations, trying to arrive at the best coalitions, whereas others would express the results of these computations; in other words, what we see.
Fortunately, some experimental evidence can be found to back up this theoretical conclusion. A phenomenon called binocular rivalry may help identify the neurons whose firing symbolizes awareness. This phenomenon can be seen in dramatic form in an exhibit prepared by Sally Duensing and Bob Miller at the Exploratorium in San Francisco.
Binocular rivalry occurs when each eye has a different visual input relating to the same part of the visual field. The early visual system on the left side of the brain receives an input from both eyes but sees only the part of the visual field to the right of the fixation point. The converse is true for the right side. If these two conflicting inputs are rivalrous, one sees not the two inputs superimposed but first one input, then the other, and so on in alternation.
In this exhibit, called "The Cheshire Cat," viewers put their heads in a fixed place and are told to keep their gaze fixed. By means of a suitably placed mirror, one of the eyes can look at another person's face, directly in front, while the other eye sees a blank white screen to the side. If the viewer waves a hand in front of this plain screen at the same location in his or her visual field occupied by the face, the face is wiped out. The movement of the hand, being visually very salient, has captured the brain's attention. Without attention the face cannot be seen. If the viewer moves the eyes, the face reappears.
In some cases, only part of the face disappears. Sometimes, for example, one eye, or both eyes, will remain. If the viewer looks at the smile on the person's face, the face may disappear, leaving only the smile. For this reason, the effect has been called the Cheshire Cat effect, after the cat in Lewis Carroll's Alice's Adventures in Wonderland.
Although it is very difficult to record activity in individual neurons in a human brain, such studies can be done in monkeys. A simple example of binocular rivalry has been studied in a monkey by Nikos K. Logothetis and Jeffrey D. Schall, both then at M.I.T. They trained a macaque to keep its eyes still and to signal whether it is seeing upward or downward movement of a horizontal grating. To produce rivalry, upward movement is projected into one of the monkey's eyes and downward movement into the other, so that the two images overlap in the visual field. The monkey signals that it sees up and down movements alternately, just as humans would. Even though the motion stimulus coming into the monkey's eyes is always the same, the monkey's percept changes every second or so.
Cortical area MT (which some researchers prefer to label V5) is an area mainly concerned with movement. What do the neurons in MT do when the monkey's percept is sometimes up and sometimes down? (The researchers studied only the monkey's first response.) The simplified answer—the actual data are rather more messy—is that whereas the firing of some of the neurons correlates with the changes in the percept, for others the average firing rate is relatively unchanged and independent of which direction of movement the monkey is seeing at that moment. Thus, it is unlikely that the firing of all the neurons in the visual neocortex at one particular moment corresponds to the monkey's visual awareness. Exactly which neurons do correspond to awareness remains to be discovered.
We have postulated that when we clearly see something, there must be neurons actively firing that stand for what we see. This might be called the activity principle. Here, too, there is some experimental evidence. One example is the firing of neurons in a specific cortical visual area in response to illusory contours. Another and perhaps more striking case is the filling in of the blind spot. The blind spot in each eye is caused by the lack of photoreceptors in the area of the retina where the optic nerve leaves the retina and projects to the brain. Its location is about 15 degrees from the fovea (the visual centre of the eye). Yet if you close one eye, you do not see a hole in your visual field.
The philosopher Daniel C. Dennett of Tufts University is unusual among philosophers in that he is interested both in psychology and in the brain. This interest is much to be welcomed. In a recent book, Consciousness Explained, he has argued that it is wrong to talk about filling in. He concludes, correctly, that "an absence of information is not the same as information about an absence." From this general principle he argues that the brain does not fill in the blind spot but rather ignores it.
Dennett's argument by itself, however, does not establish that filling in does not occur; it only suggests that it might not. Dennett also states that "your brain has no machinery for [filling in] at this location." This statement is incorrect. The primary visual cortex lacks a direct input from one eye, but normal "machinery" is there to deal with the input from the other eye. Ricardo Gattass and his colleagues at the Federal University of Rio de Janeiro have shown that in the macaque some of the neurons in the blind-spot area of the primary visual cortex do respond to input from both eyes, probably assisted by inputs from other parts of the cortex. Moreover, in the case of simple filling in, some of the neurons in that region respond as if they were actively filling in.
Thus, Dennett's claim about blind spots is incorrect. In addition, psychological experiments by Vilayanur S. Ramachandran [see "Blind Spots," Scientific American, May 1992] have shown that what is filled in can be quite complex depending on the overall context of the visual scene. How, he argues, can your brain be ignoring something that is in fact commanding attention?
Filling in, therefore, is not to be dismissed as nonexistent or unusual. It probably represents a basic interpolation process that can occur at many levels in the neocortex. It is, incidentally, a good example of what is meant by a constructive process.
How can we discover the neurons whose firing symbolizes a particular percept? William T. Newsome and his colleagues at Stanford University have done a series of brilliant experiments on neurons in cortical area MT of the macaque's brain. By studying a neuron in area MT, we may discover that it responds best to very specific visual features having to do with motion. A neuron, for instance, might fire strongly in response to the movement of a bar in a particular place in the visual field, but only when the bar is oriented at a certain angle, moving in one of the two directions perpendicular to its length within a certain range of speed.
It is technically difficult to excite just a single neuron, but it is known that neurons that respond to roughly the same position, orientation and direction of movement of a bar tend to be located near one another in the cortical sheet. The experimenters taught the monkey a simple task in movement discrimination using a mixture of dots, some moving randomly, the rest all in one direction. They showed that electrical stimulation of a small region in the right place in cortical area MT would bias the monkey's motion discrimination, almost always in the expected direction.
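The selectivity described above is often summarized as a tuning curve: firing rate as a function of some stimulus parameter, peaking at the neuron's preferred value. The sketch below is a toy model of an MT-like neuron tuned to motion direction; the circular Gaussian shape is a common textbook idealization, and every parameter value here is illustrative, not a measurement from the experiments discussed.

```python
import math

def direction_tuning(theta_deg, preferred_deg=90.0, baseline=2.0,
                     peak=50.0, width_deg=30.0):
    """Toy firing rate (spikes/s) of a direction-tuned neuron.

    A circular Gaussian tuning curve: the rate peaks when the stimulus
    moves in the preferred direction and decays toward the background
    rate as the direction rotates away. All parameters are invented
    for illustration.
    """
    # Smallest signed angular difference between stimulus and preference.
    d = (theta_deg - preferred_deg + 180.0) % 360.0 - 180.0
    return baseline + peak * math.exp(-(d * d) / (2.0 * width_deg ** 2))

# Strong response at the preferred direction; near-baseline response
# for motion in the opposite direction.
print(direction_tuning(90.0))
print(direction_tuning(270.0))
```

Under this picture, Newsome's microstimulation experiment amounts to injecting extra activity into a patch of neurons sharing one preferred direction, tilting the population vote toward that direction.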
Thus, the stimulation of these neurons can influence the monkey's behaviour and probably its visual percept. Such experiments do not, however, show decisively that the firing of such neurons is the exact neural correlate of the percept. The correlate could be only a subset of the neurons being activated. Or perhaps the real correlate is the firing of neurons in another part of the visual hierarchy that are strongly influenced by the neurons activated in area MT.
These same reservations apply also to cases of binocular rivalry. Clearly, the problem of finding the neurons whose firing symbolizes a particular percept is not going to be easy. It will take many careful experiments to track them down even for one kind of percept.
It seems obvious that the purpose of vivid visual awareness is to feed into the cortical areas concerned with the implications of what we see; from there the information shuttles on the one hand to the hippocampal system, to be encoded (temporarily) into long-term episodic memory, and on the other to the planning levels of the motor system. But is it possible to go from a visual input to a behavioural output without any relevant visual awareness?
That such a process can happen is demonstrated by the remarkable class of patients with "blindsight." These patients, all of whom have suffered damage to their visual cortex, can point with fair accuracy at visual targets or track them with their eyes while vigorously denying seeing anything. In fact, these patients are as surprised as their doctors by their abilities. The amount of information that "gets through," however, is limited: blindsight patients have some ability to respond to wavelength, orientation and motion, yet they cannot distinguish a triangle from a square.
It is naturally of great interest to know which neural pathways are being used in these patients. Investigators originally suspected that the pathway ran through the superior colliculus. Recent experiments suggest that a direct although weak connection may be involved between the lateral geniculate nucleus and other visual areas in the cortex. It is unclear whether an intact primary visual cortex region is essential for immediate visual awareness. Conceivably the visual signal in blindsight is so weak that the neural activity cannot produce awareness, although it remains strong enough to get through to the motor system.
Normal-seeing people regularly respond to visual signals without being fully aware of them. In automatic actions, such as swimming or driving a car, complex but stereotypical actions occur with little, if any, associated visual awareness. In other cases, the information conveyed is either very limited or very attenuated. Thus, while we can function without visual awareness, our behaviour without it is rather restricted.
Clearly, it takes a certain amount of time to experience a conscious percept. It is difficult to determine just how much time is needed for an episode of visual awareness, but one aspect of the problem that can be demonstrated experimentally is that signals received close together in time are treated by the brain as simultaneous.
A disk of red light is flashed for, say, 20 milliseconds, followed immediately by a 20-millisecond flash of green light in the same place. The subject reports that he did not see a red light followed by a green light. Instead he saw a yellow light, just as he would have if the red and the green light had been flashed simultaneously. Yet the subject could not have experienced yellow until after the information from the green flash had been processed and integrated with the preceding red one.
Experiments of this type led psychologist Robert Efron, now at the University of California at Davis, to conclude that the processing period for perception is about 60 to 70 milliseconds. Similar periods are found in experiments with tones in the auditory system. It is always possible, however, that the processing times may be different in higher parts of the visual hierarchy and in other parts of the brain. Processing is also more rapid in trained, compared with naive, observers.
Because attention appears to be involved in some forms of visual awareness, it would help if we could discover its neural basis. Eye movement is a form of attention, since the area of the visual field in which we see with high resolution is remarkably small, roughly the area of the thumbnail at arm's length. Thus, we move our eyes to gaze directly at an object in order to see it more clearly. Our eyes usually move three or four times a second. Psychologists have shown, however, that there appears to be a faster form of attention that moves around, in some sense, when our eyes are stationary.
The exact psychological nature of this faster attentional mechanism is at present controversial. Several neuroscientists, however, including Robert Desimone and his colleagues at the National Institute of Mental Health, have shown that the rate of firing of certain neurons in the macaque's visual system depends on what the monkey is attending to in the visual field. Thus, attention is not solely a psychological concept; it also has neural correlates that can be observed. A number of researchers have found that the pulvinar, a region of the thalamus, appears to be involved in visual attention. We would like to believe that the thalamus deserves to be called "the organ of attention," but this status has yet to be established.
The major problem is to find what activity in the brain corresponds directly to visual awareness. It has been speculated that each cortical area produces awareness of only those visual features that are "columnar," or arranged in the stack or column of neurons perpendicular to the cortical surface. Thus, the primary visual cortex could code for orientation and area MT for motion. So far experimentalists have not found one particular region in the brain where all the information needed for visual awareness appears to come together. Dennett has dubbed such a hypothetical place "The Cartesian Theatre." He argues on theoretical grounds that it does not exist.
Awareness seems to be distributed not just on a local scale, but more widely over the neocortex. Vivid visual awareness is unlikely to be distributed over every cortical area because some areas show no response to visual signals. Awareness might, for example, be associated with only those areas that connect back directly to the primary visual cortex or alternatively with those areas that project into one another's layer 4. (The latter areas are always at the same level in the visual hierarchy.)
The key issue, then, is how the brain forms its global representations from visual signals. If attention is indeed crucial for visual awareness, the brain could form representations by attending to just one object at a time, rapidly moving from one object to the next. For example, the neurons representing all the different aspects of the attended object could all fire together very rapidly for a short period, possibly in rapid bursts.
This fast, simultaneous firing might not only excite those neurons that symbolized the implications of that object but also temporarily strengthen the relevant synapses so that this particular pattern of firing could be quickly recalled - a form of short-term memory. If only one representation needs to be held in short-term memory, as in remembering a single task, the neurons involved may continue to fire for a period.
A problem arises if it is necessary to be aware of more than one object at exactly the same time. If all the attributes of two or more objects were represented by neurons firing rapidly, their attributes might be confused. The colour of one might become attached to the shape of another. This happens sometimes in very brief presentations.
Some time ago Christoph von der Malsburg, now at the Ruhr-Universität Bochum, suggested that this difficulty would be circumvented if the neurons associated with any one object all fired in synchrony (that is, if their times of firing were correlated) but out of synchrony with those representing other objects. Recently two groups in Germany reported that there does appear to be correlated firing between neurons in the visual cortex of the cat, often in a rhythmic manner, with a frequency in the 35- to 75-hertz range, sometimes called 40-hertz, or gamma (γ), oscillation.
Von der Malsburg's proposal prompted us to suggest that this rhythmic and synchronized firing might be the neural correlate of awareness and that it might serve to bind together activity concerning the same object in different cortical areas. The matter is still undecided, but at present the fragmentary experimental evidence does rather little to support such an idea. Another possibility is that the 40-hertz oscillations may help distinguish figure from ground or assist the mechanism of attention.
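The notion of "correlated firing" at the heart of this proposal can be made concrete with a small calculation: treat each neuron's activity as a binary spike train (1 = spike in a time bin) and measure the correlation between trains. The sketch below uses a plain Pearson correlation as a crude stand-in for the cross-correlogram analyses actually used in such studies; the spike patterns are invented for illustration.

```python
def spike_correlation(train_a, train_b):
    """Pearson correlation between two equal-length binary spike trains.

    A crude synchrony measure: +1 means the two neurons spike in the
    same bins, values near 0 mean unrelated firing. Real analyses use
    cross-correlograms with significance testing; this is a toy.
    """
    n = len(train_a)
    ma, mb = sum(train_a) / n, sum(train_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(train_a, train_b))
    va = sum((a - ma) ** 2 for a in train_a)
    vb = sum((b - mb) ** 2 for b in train_b)
    return cov / (va * vb) ** 0.5

# Two neurons locked to the same 40-hertz rhythm (one spike per 25 ms,
# in 5 ms bins) are perfectly synchronous; a third neuron firing at the
# same rhythm but out of phase is not.
rhythm = [1, 0, 0, 0, 0] * 8
offset = [0, 0, 1, 0, 0] * 8
print(spike_correlation(rhythm, rhythm))  # in phase: fully correlated
print(spike_correlation(rhythm, offset))  # out of phase: anticorrelated
```

On von der Malsburg's scheme, the first pair would be "bound" into one object's representation and the out-of-phase neuron would belong to a different object, even though all three fire at the same rate.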
Are there some particular types of neurons, distributed over the visual neocortex, whose firing directly symbolizes the content of visual awareness? One very simplistic hypothesis is that the activities in the upper layers of the cortex are largely unconscious ones, whereas the activities in the lower layers (layers 5 and 6) mostly correlate with consciousness. We have wondered whether the pyramidal neurons in layer 5 of the neocortex, especially the larger ones, might play this latter role.
These are the only cortical neurons that project right out of the cortical system (that is, not to the neocortex, the thalamus or the claustrum). If visual awareness represents the results of neural computations in the cortex, one might expect that what the cortex sends elsewhere would symbolize those results. Moreover, the neurons in layer 5 show a rather unusual propensity to fire in bursts. The idea that layer 5 neurons may directly symbolize visual awareness is attractive, but it still is too early to tell whether there is anything in it.
Visual awareness is clearly a difficult problem. More work is needed on the psychological and neural basis of both attention and very short-term memory. Studying the neurons when a percept changes, even though the visual input is constant, should be a powerful experimental paradigm. We need to construct neurobiological theories of visual awareness and test them using a combination of molecular, neurobiological and clinical imaging studies.
We believe that once we have mastered the secret of this simple form of awareness, we may be close to understanding a central mystery of human life: how the physical events occurring in our brains while we think and act in the world relate to our subjective sensations—that is, how the brain relates to the mind.
Postscript: There have been several relevant developments since this article was first published. It now seems likely that there are rapid "on-line" systems for stereotyped motor responses such as hand or eye movement. These systems are unconscious and lack memory. Conscious seeing, on the other hand, seems to be slower and more subject to visual illusions. The brain needs to form a conscious representation of the visual scene that it then can use for many different actions or thoughts. Exactly how all these pathways work and how they interact is far from clear.
There have been more experiments on the behaviour of neurons that respond to bistable visual percepts, such as binocular rivalry, but it is probably too early to draw firm conclusions from them about the exact neural correlates of visual consciousness. We have suggested on theoretical grounds based on the neuroanatomy of the macaque monkey that primates are not directly aware of what is happening in the primary visual cortex, even though most of the visual information flows through it. This hypothesis is supported by some experimental evidence, but it is still controversial.
No simple, agreed-upon definition of consciousness exists. Attempted definitions tend to be tautological (for example, consciousness defined as awareness) or merely descriptive (for example, consciousness described as sensations, thoughts, or feelings). Despite this problem of definition, the subject of consciousness has had a remarkable history. At one time the primary subject matter of psychology, consciousness as an area of study suffered an almost total demise, later reemerging to become a topic of current interest.
French thinker René Descartes applied rigorous scientific methods of deduction to his exploration of philosophical questions. Descartes is probably best known for his pioneering work in philosophical skepticism. Author Tom Sorell examines the concepts behind Descartes’s work Meditationes de Prima Philosophia (1641; Meditations on First Philosophy), focussing on its unconventional use of logic and the reactions it aroused.
Most of the philosophical discussions of consciousness arose from the mind-body issues posed by the French philosopher and mathematician René Descartes in the 17th century. Descartes asked: Is the mind, or consciousness, independent of matter? Is consciousness extended (physical) or unextended (nonphysical)? Is consciousness determinative, or is it determined? English philosophers such as John Locke equated consciousness with physical sensations and the information they provide, whereas European philosophers such as Gottfried Wilhelm Leibniz and Immanuel Kant gave a more central and active role to consciousness.
The philosopher who most directly influenced subsequent exploration of the subject of consciousness was the 19th-century German educator Johann Friedrich Herbart, who wrote that ideas had quality and intensity and that they may inhibit or facilitate one another. Thus, ideas may pass from “states of reality” (consciousness) to “states of tendency” (unconsciousness), with the dividing line between the two states being described as the threshold of consciousness. This formulation of Herbart clearly presages the development, by the German psychologist and physiologist Gustav Theodor Fechner, of the psychophysical measurement of sensation thresholds, and the later development by Sigmund Freud of the concept of the unconscious.
The experimental analysis of consciousness dates from 1879, when the German psychologist Wilhelm Max Wundt started his research laboratory. For Wundt, the task of psychology was the study of the structure of consciousness, which extended well beyond sensations and included feelings, images, memory, attention, duration, and movement. Because early interest focussed on the content and dynamics of consciousness, it is not surprising that the central methodology of such studies was introspection; that is, subjects reported on the mental contents of their own consciousness. This introspective approach was developed most fully by the American psychologist Edward Bradford Titchener at Cornell University. Setting his task as that of describing the structure of the mind, Titchener attempted to detail, from introspective self-reports, the dimensions of the elements of consciousness. For example, taste was “dimensionalized” into four basic categories: sweet, sour, salt, and bitter. This approach was known as structuralism.
By the 1920s, however, a remarkable revolution had occurred in psychology that was to essentially remove considerations of consciousness from psychological research for some 50 years: Behaviourism captured the field of psychology. The main initiator of this movement was the American psychologist John Broadus Watson. In a 1913 article, Watson stated, “I believe that we can write a psychology and never use the terms consciousness, mental states, mind . . . imagery and the like.” Psychologists then turned almost exclusively to behaviour, as described in terms of stimulus and response, and consciousness was totally bypassed as a subject. A survey of eight leading introductory psychology texts published between 1930 and the 1950s found no mention of the topic of consciousness in five texts, and in two it was treated as a historical curiosity.
Beginning in the late 1950s, however, interest in the subject of consciousness returned, specifically in those subjects and techniques relating to altered states of consciousness: sleep and dreams, meditation, biofeedback, hypnosis, and drug-induced states. Much of the surge in sleep and dream research was directly fuelled by a discovery relevant to the nature of consciousness. A physiological indicator of the dream state was found: At roughly 90-minute intervals, the eyes of sleepers were observed to move rapidly, and at the same time the sleepers' brain waves would show a pattern resembling the waking state. When people were awakened during these periods of rapid eye movement, they almost always reported dreams, whereas if awakened at other times they did not. This and other research clearly indicated that sleep, once considered a passive state, was instead an active state of consciousness (see Dreaming; Sleep).
During the 1960s, an increased search for “higher levels” of consciousness through meditation resulted in a growing interest in the practices of Zen Buddhism and Yoga from Eastern cultures. A full flowering of this movement in the United States was seen in the development of training programs, such as Transcendental Meditation, that were self-directed procedures of physical relaxation and focussed attention. Biofeedback techniques also were developed to bring body systems involving factors such as blood pressure or temperature under voluntary control by providing feedback from the body, so that subjects could learn to control their responses. For example, researchers found that persons could control their brain-wave patterns to some extent, particularly the so-called alpha rhythms generally associated with a relaxed, meditative state. This finding was especially relevant to those interested in consciousness and meditation, and a number of “alpha training” programs emerged.
Another subject that led to increased interest in altered states of consciousness was hypnosis, which involves a transfer of conscious control from the subject to another person. Hypnotism has had a long and intricate history in medicine and folklore and has been intensively studied by psychologists. Much has become known about the hypnotic state, relative to individual suggestibility and personality traits; the subject has now been largely demythologized, and the limitations of the hypnotic state are fairly well known. Despite the increasing use of hypnosis, however, much remains to be learned about this unusual state of focussed attention.
Finally, many people in the 1960s experimented with the psychoactive drugs known as hallucinogens, which produce disorders of consciousness. The most prominent of these drugs are lysergic acid diethylamide, or LSD; mescaline; and psilocybin; the latter two have long been associated with religious ceremonies in various cultures. LSD, because of its radical thought-modifying properties, was initially explored for its so-called mind-expanding potential and for its psychotomimetic effects (imitating psychoses). Little positive use, however, has been found for these drugs, and their use is highly restricted.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In this article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.
As the concept of a direct, simple linkage between environment and behaviour became unsatisfactory in recent decades, the interest in altered states of consciousness may be taken as a visible sign of renewed interest in the topic of consciousness. That persons are active and intervening participants in their behaviour has become increasingly clear. Environments, rewards, and punishments are not simply defined by their physical character. Memories are organized, not simply stored (see Memory). An entirely new area called cognitive psychology has emerged that centres on these concerns. In the study of children, increased attention is being paid to how they understand, or perceive, the world at different ages. In the field of animal behaviour, researchers increasingly emphasize the inherent characteristics resulting from the way a species has been shaped to respond adaptively to the environment. Humanistic psychologists, with a concern for self-actualization and growth, have emerged after a long period of silence. Throughout the development of clinical and industrial psychology, the conscious states of persons in terms of their current feelings and thoughts were of obvious importance. The role of consciousness, however, was often de-emphasised in favour of unconscious needs and motivations. Trends can be seen, however, toward a new emphasis on the nature of states of consciousness.
The first of Freud's innovations was his recognition of unconscious psychic processes that follow laws different from those that govern conscious experience. Under the influence of the unconscious, thoughts and feelings that belong together may be shifted or displaced out of context; two disparate ideas or images may be condensed into one; thoughts may be dramatized in the form of images rather than expressed as abstract concepts; and certain objects may be represented symbolically by images of other objects, although the resemblance between the symbol and the original object may be vague or farfetched. The laws of logic, indispensable for conscious thinking, do not apply to these unconscious mental productions.
Recognition of these modes of operation in unconscious mental processes made possible the understanding of such previously incomprehensible psychological phenomena as dreaming. Through analysis of unconscious processes, Freud saw dreams as serving to protect sleep against disturbing impulses arising from within and related to early life experiences. Thus, unacceptable impulses and thoughts, called the latent dream content, are transformed into a conscious, although no longer immediately comprehensible, experience called the manifest dream. Knowledge of these unconscious mechanisms permits the analyst to reverse the so-called dream work, that is, the process by which the latent dream is transformed into the manifest dream, and through dream interpretation, to recognize its underlying meaning.
A basic assumption of Freudian theory is that the unconscious conflicts involve instinctual impulses, or drives, that originate in childhood. As these unconscious conflicts are recognized by the patient through analysis, his or her adult mind can find solutions that were unattainable to the immature mind of the child. This depiction of the role of instinctual drives in human life is a unique feature of Freudian theory.
According to Freud's doctrine of infantile sexuality, adult sexuality is an end product of a complex process of development, beginning in childhood, involving a variety of body functions or areas (oral, anal, and genital zones), and corresponding to various stages in the relation of the child to adults, especially to parents. Of crucial importance is the so-called Oedipal period, occurring at about four to six years of age, because at this stage of development the child for the first time becomes capable of an emotional attachment to the parent of the opposite sex that is similar to the adult's relationship to a mate; the child simultaneously reacts as a rival to the parent of the same sex. Physical immaturity dooms the child's desires to frustration and his or her first step toward adulthood to failure. Intellectual immaturity further complicates the situation because it makes children afraid of their own fantasies. The extent to which the child overcomes these emotional upheavals and to which these attachments, fears, and fantasies continue to live on in the unconscious greatly influences later life, especially love relationships.
The conflicts occurring in the earlier developmental stages are no less significant as a formative influence, because these problems represent the earliest prototypes of such basic human situations as dependency on others and relationship to authority. Also basic in moulding the personality of the individual is the behaviour of the parents toward the child during these stages of development. The fact that the child reacts, not only to objective reality, but also to fantasy distortions of reality, however, greatly complicates even the best-intentioned educational efforts.
The effort to clarify the bewildering number of interrelated observations uncovered by psychoanalytic exploration led to the development of a model of the structure of the psychic system. Three functional systems are distinguished that are conveniently designated as the id, ego, and superego.
The first system refers to the sexual and aggressive tendencies that arise from the body, as distinguished from the mind. Freud called these tendencies Triebe, which literally means “drives,” but which is often inaccurately translated as “instincts” to indicate their innate character. These inherent drives claim immediate satisfaction, which is experienced as pleasurable; the id thus is dominated by the pleasure principle. In his later writings, Freud tended more toward psychological rather than biological conceptualization of the drives.
How the conditions for satisfaction are to be brought about is the task of the second system, the ego, which is the domain of such functions as perception, thinking, and motor control that can accurately assess environmental conditions. In order to fulfill its function of adaptation, or reality testing, the ego must be capable of enforcing the postponement of satisfaction of the instinctual impulses originating in the id. To defend itself against unacceptable impulses, the ego develops specific psychic means, known as defence mechanisms. These include repression, the exclusion of impulses from conscious awareness; projection, the process of ascribing to others one's own unacknowledged desires; and reaction formation, the establishment of a pattern of behaviour directly opposed to a strong unconscious need. Such defence mechanisms are put into operation whenever anxiety signals a danger that the original unacceptable impulses may reemerge.
An id impulse becomes unacceptable, not only as a result of a temporary need for postponing its satisfaction until suitable reality conditions can be found, but more often because of a prohibition imposed on the individual by others, originally the parents. The totality of these demands and prohibitions constitutes the major content of the third system, the superego, the function of which is to control the ego in accordance with the internalized standards of parental figures. If the demands of the superego are not fulfilled, the person may feel shame or guilt. Because the superego, in Freudian theory, originates in the struggle to overcome the Oedipal conflict, it has a power akin to an instinctual drive, is in part unconscious, and can give rise to feelings of guilt not justified by any conscious transgression. The ego, having to mediate among the demands of the id, the superego, and the outside world, may not be strong enough to reconcile these conflicting forces. The more the ego is impeded in its development because of being enmeshed in its earlier conflicts, called fixations or complexes, or the more it reverts to earlier satisfactions and archaic modes of functioning, known as regression, the greater is the likelihood of succumbing to these pressures. Unable to function normally, it can maintain its limited control and integrity only at the price of symptom formation, in which the tensions are expressed in neurotic symptoms.
A cornerstone of modern psychoanalytic theory and practice is the concept of anxiety, which institutes appropriate mechanisms of defence against certain danger situations. These danger situations, as described by Freud, are the fear of abandonment by or the loss of the loved one (the object), the risk of losing the object's love, the danger of retaliation and punishment, and, finally, the hazard of reproach by the superego. Thus, symptom formation, character and impulse disorders, and perversions, as well as sublimations, represent compromise formations—different forms of an adaptive integration that the ego tries to achieve through more or less successfully reconciling the different conflicting forces in the mind.
Various psychoanalytic schools have adopted other names for their doctrines to indicate deviations from Freudian theory.
The Swiss psychiatrist Carl Jung began his studies of human motivation in the early 1900s and created the school of psychoanalysis known as analytical psychology. A contemporary of Austrian psychoanalyst Sigmund Freud, Jung at first collaborated closely with Freud but eventually moved on to pursue his own theories, including the exploration of personality types. According to Jung, there are two basic personality types, extroverted and introverted, which alternate equally in the completely normal individual. Jung also believed that the unconscious mind is formed by the personal unconscious (the repressed feelings and thoughts developed during an individual’s life) and the collective unconscious (those feelings, thoughts, and memories shared by all humanity).
Carl Gustav Jung, one of the earliest pupils of Freud, eventually created a school that he preferred to call analytical psychology. Like Freud, Jung used the concept of the libido; however, to him it meant not only sexual drives, but a composite of all creative instincts and impulses and the entire motivating force of human conduct. According to his theories, the unconscious is composed of two parts: the personal unconscious, which contains the results of the individual's entire experience, and the collective unconscious, the reservoir of the experience of the human race. In the collective unconscious exist a number of primordial images, or archetypes, common to all individuals of a given country or historical era. Archetypes take the form of bits of intuitive knowledge or apprehension and normally exist only in the collective unconscious of the individual. When the conscious mind contains no images, however, as in sleep, or when the consciousness is caught off guard, the archetypes commence to function. Archetypes are primitive modes of thought and tend to personify natural processes in terms of such mythological concepts as good and evil spirits, fairies, and dragons. The mother and the father also serve as prominent archetypes.
An important concept in Jung's theory is the existence of two basically different types of personality, mental attitude, and function. When the libido and the individual's general interest are turned outward toward people and objects of the external world, he or she is said to be extroverted. When the reverse is true, and libido and interest are centred on the individual, he or she is said to be introverted. In a completely normal individual these two tendencies alternate, neither dominating, but usually the libido is directed mainly in one direction or the other; as a result, two personality types are recognizable.
Jung rejected Freud's distinction between the ego and superego and recognized a portion of the personality, somewhat similar to the superego, that he called the persona. The persona consists of what a person appears to be to others, in contrast to what he or she actually is. The persona is the role the individual chooses to play in life, the total impression he or she wishes to make on the outside world.
The Austrian psychologist and psychiatrist Alfred Adler studied under Sigmund Freud, the founder of psychoanalysis, before developing his own theories about human behaviour. Adler’s best-known theories stress that individuals are mainly motivated by feelings of inferiority, which he called an inferiority complex.
Alfred Adler, another of Freud's pupils, differed from both Freud and Jung in stressing that the motivating force in human life is the sense of inferiority, which begins as soon as an infant is able to comprehend the existence of other people who are better able to care for themselves and cope with their environment. From the moment the feeling of inferiority is established, the child strives to overcome it. Because inferiority is intolerable, the compensatory mechanisms set up by the mind may get out of hand, resulting in self-centred neurotic attitudes, overcompensations, and a retreat from the real world and its problems.
Adler laid particular stress on inferiority feelings arising from what he regarded as the three most important relationships: those between the individual and work, friends, and loved ones. The avoidance of inferiority feelings in these relationships leads the individual to adopt a life goal that is often not realistic and is frequently expressed as an unreasoning will to power and dominance, leading to every type of antisocial behaviour from bullying and boasting to political tyranny. Adler believed that analysis can foster a sane and rational “community feeling” that is constructive rather than destructive.
The Austrian psychologist and psychotherapist Otto Rank also worked with Sigmund Freud, the founder of psychoanalysis, before developing his own theories about mental and emotional disorders. Rank believed that an individual’s neurotic tendencies could be linked to the traumatic experience of birth.
Another student of Freud, Otto Rank, introduced a new theory of neurosis, attributing all neurotic disturbances to the primary trauma of birth. In his later writings he described individual development as a progression from complete dependence on the mother and family, to a physical independence coupled with intellectual dependence on society, and finally to complete intellectual and psychological emancipation. Rank also laid great importance on the will, defined as “a positive guiding organization and integration of self, which utilizes creatively, as well as inhibits and controls, the instinctual drives.”
The American psychoanalyst and social philosopher Erich Fromm stressed the importance of social and economic factors on human behaviour. His focus was a departure from traditional psychoanalysis, which emphasized the role of the subconscious. In this 1969 essay for Collier’s Year Book, Fromm presents various explanations for human violence. He argues that violence cannot be controlled by imposing stronger legal penalties, but rather by creating a more just society in which people connect with each other as humans and are able to control their own lives.
Later noteworthy modifications of psychoanalytic theory include those of the American psychoanalysts Erich Fromm, Karen Horney, and Harry Stack Sullivan. The theories of Fromm lay particular emphasis on the concept that society and the individual are not separate and opposing forces, that the nature of society is determined by its historic background, and that the needs and desires of individuals are largely formed by their society. As a result, Fromm believed, the fundamental problem of psychoanalysis and psychology is not to resolve conflicts between fixed and unchanging instinctive drives in the individual and the fixed demands and laws of society, but to bring about harmony and an understanding of the relationship between the individual and society. Fromm also stressed the importance to the individual of developing the ability to fully use his or her mental, emotional, and sensory powers.
Horney worked primarily in the field of therapy and the nature of neuroses, which she defined as of two types: situation neuroses and character neuroses. Situation neuroses arise from the anxiety attendant on a single conflict, such as being faced with a difficult decision. Although they may paralyse the individual temporarily, making it impossible to think or act efficiently, such neuroses are not deeply rooted. Character neuroses are characterized by a basic anxiety and a basic hostility resulting from a lack of love and affection in childhood.
Sullivan believed that all development can be described exclusively in terms of interpersonal relations. Character types as well as neurotic symptoms are explained as results of the struggle against anxiety arising from the individual's relations with others and are a security system, maintained for the purpose of allaying anxiety.
An important school of thought is based on the teachings of the British psychoanalyst Melanie Klein. Because most of Klein's followers worked with her in England, this has come to be known as the English school. Its influence, nevertheless, is very strong throughout the European continent and in South America. Its principal theories were derived from observations made in the psychoanalysis of children. Klein posited the existence of complex unconscious fantasies in children under the age of six months. The principal source of anxiety arises from the threat to existence posed by the death instinct. Depending on how concrete representations of the destructive forces are dealt with in the unconscious fantasy life of the child, two basic early mental attitudes result that Klein characterized as a “depressive position” and a “paranoid position.” In the paranoid position, the ego's defence consists of projecting the dangerous internal object onto some external representative, which is treated as a genuine threat emanating from the external world. In the depressive position, the threatening object is introjected and treated in fantasy as concretely retained within the person. Depressive and hypochondriacal symptoms result. Although considerable doubt exists that such complex unconscious fantasies operate in the minds of infants, these observations have been of the utmost importance to the psychology of unconscious fantasies, paranoid delusions, and theory concerning early object relations.
FROM A MIND THAT FOUND ITSELF
After suffering a mental breakdown in 1900, Clifford Beers, an aspiring American businessman, spent the next three years in treatment at various mental hospitals. Upon his recovery, Beers wrote A Mind That Found Itself (1908), which chronicled the hardships he endured and revealed the callousness of many hospital attendants to the suffering of patients. The book aroused public concern about the care of people with mental illnesses and launched a worldwide movement for mental health. In the following excerpt, Beers describes his experiences in the violent ward of a state hospital. The passage also reveals the delusions brought about by his state of “elation,” or mania.
Even for a violent ward my entrance was spectacular—if not dramatic. The three attendants regularly in charge naturally jumped to the conclusion that, in me, a troublesome patient had been foisted upon them. They noted my arrival with an unpleasant curiosity, which in turn aroused my curiosity, for it took but a glance to convince me that my burly keepers were typical attendants of the brute-force type. Acting on the order of the doctor in charge, one of them stripped me of my outer garments; and, clad in nothing but underclothes, I was thrust into a cell.
Few, if any, prisons in this country contain worse holes than this cell proved to be. It was one of five, situated in a short corridor adjoining the main ward. It was about six feet wide by ten long and of a good height. A heavily screened and barred window admitted light and a negligible quantity of air, for the ventilation scarcely deserved the name. The walls and floor were bare, and there was no furniture. A patient confined here must lie on the floor with no substitute for a bed but one or two felt druggets [floor coverings]. Sleeping under such conditions becomes tolerable after a time, but not until one has become accustomed to lying on a surface nearly as hard as a stone. Here (as well, indeed, as in other parts of the ward) for a period of three weeks I was again forced to breathe and rebreathe air so vitiated that even when I occupied a larger room in the same ward, doctors and attendants seldom entered without remarking its quality.
My first meal increased my distaste for my semi-sociological experiment. For over a month I was kept in a half-starved condition. At each meal, to be sure, I was given as much food as was served to other patients, but an average portion was not adequate to the needs of a patient as active as I was at this time.
Worst of all, winter was approaching and these, my first quarters, were without heat. As my olfactory nerves soon became uncommunicative, the breathing of foul air was not a hardship. On the other hand, to be famished the greater part of the time was a very conscious hardship. But to be half-frozen, day in and day out for a long period, was exquisite torture. Of all the suffering I endured, that occasioned by confinement in cold cells seems to have made the most lasting impression. Hunger is a local disturbance, but when one is cold, every nerve in the body registers its call for help. Long before reading a certain passage of De Quincey's I had decided that cold could cause greater suffering than hunger; consequently, it was with great satisfaction that I read the following sentences from his "Confessions": "O ancient women, daughters of toil and suffering, among all the hardships and bitter inheritances of flesh that ye are called upon to face, not one—not even hunger—seems in my eyes comparable to that of nightly cold. . . . A more killing curse there does not exist for man or woman than the bitter combat between the weariness that prompts sleep and the keen, searching cold that forces you from that first access of sleep to start up horror-stricken, and to seek warmth vainly in renewed exercise, though long since fainting under fatigue."
The hardness of the bed and the coldness of the room were not all that interfered with sleep. The short corridor in which I was placed was known as the "Bull Pen"—a phrase eschewed by the doctors. It was usually in an uproar, especially during the dark hours of the early morning. Patients in a state of excitement may sleep during the first hours of the night, but seldom all night; and even should one have the capacity to do so, his companions in durance would wake him with a shout or a song or a curse or the kicking of a door. A noisy and chaotic medley frequently continued without interruption for hours at a time. Noise, unearthly noise, was the poetic license allowed the occupants of these cells. I spent several days and nights in one or another of them, and I question whether I averaged more than two or three hours' sleep a night during that time. Seldom did the regular attendants pay any attention to the noise, though even they must at times have been disturbed by it. In fact the only person likely to attempt to stop it was the night watch, who, when he did enter a cell for that purpose, almost invariably kicked or choked the noisy patient into a state of temporary quiet. I noted this and scented trouble.
Drawing and writing materials having been again taken from me, I cast about for some new occupation. I found one in the problem of warmth. Though I gave repeated expression to the benumbed messages of my tortured nerves, the doctor refused to return my clothes. For a semblance of warmth I was forced to depend upon ordinary undergarments and an extraordinary imagination. The heavy felt druggets were about as plastic as blotting paper and I derived little comfort from them until I hit upon the idea of rending them into strips. These strips I would weave into a crude Rip Van Winkle kind of suit; and so intricate was the warp and woof that on several occasions an attendant had to cut me out of these sartorial improvisations. At first, until I acquired the destructive knack, the tearing of one drugget into strips was a task of four or five hours. But in time I became so proficient that I could completely destroy more than one of these six-by-eight-foot druggets in a single night. During the following weeks of my close confinement I destroyed at least twenty of them, each worth, as I found out later, about four dollars; and I confess I found a peculiar satisfaction in the destruction of property belonging to a State that had deprived me of all my effects except underclothes. But my destructiveness was due to a variety of causes. It was occasioned primarily by a "pressure of activity," for which the tearing of druggets served as a vent. I was in a state of mind aptly described in a letter written during my first month of elation, in which I said, "I'm as busy as a nest of ants."
Though the habit of tearing druggets was the outgrowth of an abnormal impulse, the habit itself lasted longer than it could have done had I not, for so long a time, been deprived of suitable clothes and been held a prisoner in cold cells. But another motive soon asserted itself. Being deprived of all the luxuries of life and most of the necessities, my mother wit, always conspiring with a wild imagination for something to occupy my time, led me at last to invade the field of invention. With appropriate contrariety, an unfamiliar and hitherto almost detested line of investigation now attracted me. Abstruse mathematical problems that had defied solution for centuries began to appear easy. To defy the State and its puny representatives had become mere child's play. So I forthwith decided to overcome no less a force than gravity itself.
My conquering imagination soon tricked me into believing that I could lift myself by my boot-straps—or rather that I could do so when my laboratory should contain footgear that lent itself to the experiment. But what of the strips of felt torn from the druggets? Why, these I used as the straps of my missing boots; and having no boots to stand in, I used my bed as boots. I reasoned that for my scientific purpose a man in bed was as favourably situated as a man in boots. Therefore, attaching a sufficient number of my felt strips to the head and foot of the bed (which happened not to be screwed to the floor), and, in turn, attaching the free ends to the transom and the window guard, I found the problem very simple. For I next joined these cloth cables in such manner that by pulling downward I effected a readjustment of stress and strain, and my bed, with me in it, was soon dangling in space. My sensations at this momentous instant must have been much like those that thrilled Newton when he solved one of the riddles of the universe. Indeed, they must have been more intense, for Newton, knowing, had his doubts; I, not knowing, had no doubts at all. So epoch-making did this discovery appear to me that I noted the exact position of the bed so that a wondering posterity might ever afterward view and revere the exact spot on the earth's surface from where one of man's greatest thoughts had winged its way to immortality.
For weeks I believed I had uncovered a mechanical principle that would enable man to defy gravity. And I talked freely and confidently about it. That is, I proclaimed the impending results. The intermediate steps in the solution of my problem I ignored, for good reasons. A blind man may harness a horse. So long as the horse is harnessed, one need not know the office of each strap and buckle. Gravity was harnessed—that was all. Meanwhile I felt sure that another sublime moment of inspiration would intervene and clear the atmosphere, thus rendering flight of the body as easy as a flight of imagination.
While my inventive operations were in progress, I was chafing under the unjust and certainly unscientific treatment to which I was being subjected. In spite of my close confinement in vile cells, for a period of more than three weeks I was denied a bath. I do not regret this deprivation, for the attendants, who at the beginning were unfriendly, might have forced me to bathe in water that had first served for several other patients. Though such an unsanitary and disgusting practice was contrary to rules, it was often indulged in by the lazy brutes who controlled the ward.
I continued to object to the inadequate portions of food served me. On Thanksgiving Day (for I had not succeeded in escaping and joining in the celebration at home) an attendant, in the unaccustomed role of a ministering angel, brought me the usual turkey and cranberry dinner that, on two days a year, is provided by an intermittently generous State. Turkey being the rara avis of the imprisoned, it was but natural that I should desire to gratify a palate long insulted. I wished not only to satisfy my appetite, but to impress indelibly a memory that for months had not responded to so agreeable a stimulus. While lingering over the delights of this experience I forgot all about the ministering angel. But not for long. He soon returned. Observing that I had scarcely touched my feast, he said, "If you don't eat that dinner in a hurry, I'll take it from you."
"I don't see what difference it makes to you whether I eat it in a hurry or take my time about it," I said. "It's the best I've had in many a day, and I have a right to get as much pleasure out of it as I can."
"We'll see about that," he replied, and, snatching it away, he stalked out of the room, leaving me to satisfy my hunger on the memory of vanished luxuries. Thus did a feast become a fast.
Under this treatment I soon learned to be more noisy than my neighbours. I was never without a certain humour in contemplating not only my surroundings, but myself; and the demonstrations in which I began to indulge were partly in fun and partly by way of protest. In these outbursts I was assisted, and at times inspired, by a young man in the room next mine. He was about my own age and was enjoying the same phase of exuberance as myself. We talked and sang at all hours of the night. At the time we believed that the other patients enjoyed the spice that we added to the restricted variety of their lives, but later I learned that a majority of them looked upon us as the worst of nuisances.
We gave the doctors and attendants no rest—at least not intentionally. Whenever the assistant physician appeared, we upbraided him for the neglect that was then our portion. At one time or another we were banished to the Bull Pen for these indiscretions. And had there been a viler place of confinement still, our performances in the Bull Pen undoubtedly would have brought us to it. At last the doctor hit upon the expedient of transferring me to a room more remote from my inspiring, and, I may say, conspiring, companion. Talking to each other ceased to be the easy pastime it had been; so we gradually lapsed into a comparative silence that must have proved a boon to our ward-mates. The megaphonic Bull Pen, however, continued with irregularity, but annoying certainty, to furnish its quota of noise.
On several occasions I concocted plans to escape, and not only that, but also to liberate others. That I did not make the attempt was the fault—or merit, perhaps—of a certain night watch, whose timidity, rather than sagacity, impelled him to refuse to unlock my door early one morning, although I gave him a plausible reason for the request. This night watch, I learned later, admitted that he feared to encounter me single-handed. And on this particular occasion well might he, for, during the night, I had woven a spider-web net in which I intended to enmesh him. Had I succeeded, there would have been a lively hour for him in the violent ward—had I failed, there would have been a lively hour for me. There were several comparatively sane patients (especially my elated neighbour) whose willing assistance I could have secured. Then the regular attendants could have been held prisoners in their own room, if, indeed, we had not in turn overpowered them and transferred them to the Bull Pen, where the several victims of their abuse might have given them a deserved dose of their own medicine. This scheme of mine was a prank rather than a plot. I had an inordinate desire to prove that one could escape if he had a mind to do so. Later I boasted to the assistant physician of my unsuccessful attempt. This boast he evidently tucked away in his memory.
My punishment for harmless antics of this sort was prompt in coming. The attendants seemed to think their whole duty to their closely confined charges consisted in delivering three meals a day. Between meals he was a rash patient who interfered with their leisure. Now one of my greatest crosses was their continued refusal to give me a drink when I asked for it. Except at meal time, or on those rare occasions when I was permitted to go to the wash room, I had to get along as best I might with no water to drink, and that too at a time when I was in a fever of excitement. My polite requests were ignored; impolite demands were answered with threats and curses. And this war of requests, demands, threats, and curses continued until the night of the fourth day of my banishment. Then the attendants made good their threats of assault. That they had been trying to goad me into a fighting mood I well knew, and often accused them of their mean purpose. They brazenly admitted that they were simply waiting for a chance to "slug" me, and promised to punish me well as soon as I should give them a slight excuse for doing so.
On the night of November 25th, 1902, the head attendant and one of his assistants passed my door. They were returning from one of the dances that, at intervals during the winter, the management provides for the nurses and attendants. While they were within hearing, I asked for a drink of water. It was a carefully worded request. But they were in a hurry to get to bed, and refused me with curses. Then I replied in kind.
"If I come there I'll kill you," one of them said.
"Well, you won't get in if I can help it," I replied, as I braced my iron bedstead against the door.
My defiance and defences gave the attendants the excuse for which they had said they were waiting; and my success in keeping them out for two or three minutes only served to enrage them. By the time they had gained entrance they had become furies. One was a young man of twenty-seven. Physically he was a fine specimen of manhood; morally he was deficient, thanks to the dehumanizing effect of several years in the employ of different institutions whose officials countenanced improper methods of care and treatment. It was he who now attacked me in the dark of my prison room. The head attendant stood by, holding a lantern that shed a dim light.
The door once open, I offered no further resistance. First I was knocked down. Then for several minutes I was kicked about the room: struck, kneed and choked. My assailant even attempted to grind his heel into my cheek. In this he failed, for I was there protected by a heavy beard that I wore at that time. But my shins, elbows, and back were cut by his heavy shoes; and had I not instinctively drawn up my knees to my elbows for the protection of my body, I might have been seriously, perhaps fatally, injured. As it was, I was severely cut and bruised. When my strength was nearly gone, I feigned unconsciousness. This ruse alone saved me from further punishment, for usually a premeditated assault is not ended until the patient is mute and helpless. When they had accomplished their purpose, they left me huddled in a corner to wear out the night as best I might—to live or die for all they cared.
Strange as it may seem, I slept well. But not at once. Within five minutes I was busily engaged writing an account of the assault. A trained war correspondent could not have pulled himself together in less time. As usual I had recourse to my bit of contraband lead pencil, this time a pencil that had been smuggled to me the very first day of my confinement in the Bull Pen by a sympathetic fellow-patient. When he had pushed under my cell door that little implement of war, it had loomed as large in my mind as a battering-ram. Paper I had none; but I had previously found walls to be a fair substitute. I therefore now selected and wrote upon a rectangular spot, about three feet by two, which marked the reflection of a light in the corridor just outside my transom.
The next morning, when the assistant physician appeared, he was accompanied as usual by the guilty head attendant who, on the previous night, had held the lantern.
"Doctor," I said, "I have something to tell you,"—and I glanced significantly at the attendant. "Last night I had a most unusual experience. I have had many imaginary experiences during the past two years and a half, and it may be that last night's was not real. Perhaps the whole thing was phantasmagoric—like what I used to see during the first months of my illness. Whether it was so or not I shall leave you to judge. It just happens to be my impression that I was brutally assaulted last night. If it was a dream, it is the first thing of the kind that ever left visible evidence on my body."
With that I uncovered to the doctor a score of bruises and lacerations. I knew these would be more impressive than any other words of mine. The doctor put on a knowing look, but said nothing and soon left the room. His guilty subordinate tried to appear unconcerned, and I really believe he thought me not absolutely sure of the events of the previous night, or at least unaware of his share in them.
In most types of psychotherapy, a person discusses his or her problems one-on-one with a therapist. The therapist tries to understand the person’s problems and to help the individual change distressing thoughts, feelings, or behaviours.
A psychologist listens to her client during a psychotherapy session. Psychotherapy can be an effective treatment for many mental disorders. Some forms of psychotherapy try to help people resolve their internal, unconscious conflicts, and other forms teach people skills to correct their abnormal behaviour.
People often seek psychotherapy when they have tried other approaches to solving a personal problem. For example, people who are depressed, anxious, or have drug or alcohol problems may find that talking to friends or family members is not enough to resolve their problems. Sometimes people may want to talk to a therapist about problems they would feel uncomfortable discussing with friends or family, such as being sexually abused as a child. Finding a therapist to talk to who is knowledgeable about emotional problems, has patients’ best interests at heart, and is relatively objective can be extremely helpful.
Psychotherapy differs in two ways from the informal help or advice that one person may give another. First, psychotherapy is conducted by a trained, certified, or licensed therapist. In addition, treatment methods in psychotherapy are guided by well-developed theories about the sources of personal problems.
At one time the term psychotherapy referred to a form of psychiatric treatment used with severely disturbed individuals, whereas counselling referred to the treatment of people with milder psychological problems or to advice given on vocational and educational matters. Today the distinction between psychotherapy and counselling is quite blurred, and many mental health professionals use the terms interchangeably. Psychotherapists and counsellors often treat the same kinds of problems and use the same set of techniques.
Psychotherapy is an important form of treatment for many kinds of psychological problems. Two of the most common problems for which people seek help from a therapist are depression and persistent anxiety. People with depression may have low self-esteem, a sense of hopelessness about the future, and a lack of interest in people and activities once found pleasurable. People with anxiety disorders may feel anxious all the time or suffer from phobias, a fear of specific objects or situations. Psychotherapy, by itself or in combination with drug treatment, can often help people overcome or manage these problems.
People experiencing an emotional crisis due to marital problems, family disputes, problems at work, loneliness, or troubled social relationships may benefit from psychotherapy. Other problems often treated with psychotherapy include obsessive-compulsive disorder, personality disorders, alcoholism and other forms of drug dependence, problems stemming from child abuse, and behavioural problems, such as eating disorders and juvenile delinquency.
Mental health professionals do not rely on psychotherapy to treat schizophrenia, a severe mental illness. Drugs are used to treat this disorder. However, some psychotherapeutic techniques may help people with schizophrenia learn appropriate social skills and skills for managing anxiety. Another severe mental illness, bipolar disorder (popularly called manic depression), is treated with drugs or a combination of drugs and psychotherapy.
Before 1950 psychoanalysis was virtually the only form of psychotherapy available. In traditional psychoanalysis, patients met with a therapist several times a week. Patients would lie on a couch and talk about their childhood, their dreams, or whatever came to mind. The psychoanalyst interpreted these thoughts and helped patients resolve unconscious conflicts. This type of therapy often took years and was very expensive.
Over the next several decades the field of psychotherapy and counselling expanded enormously, both in the number of approaches available and in the number of people choosing to enter the profession. Variants of psychoanalysis emerged that focussed more on the patient’s current level of functioning and required less time in therapy. In the 1950s and 1960s therapists began using behavioural and cognitive therapies that focussed less on the inner world of the client and more on the client’s problem behaviours or thoughts.
As the number of approaches to therapy grew throughout the 1960s and 1970s, the practice of psychotherapy and counselling spread from hospitals and private psychiatric offices to new settings—elementary schools, high schools, colleges, prisons, mental health clinics, military bases, businesses, and churches and synagogues. With more opportunities for individuals to receive help for their problems, and with more affordable treatments, psychotherapy has become increasingly popular. Although a reliable count of the number of people who receive psychotherapy is difficult to obtain, researchers estimate that 3.5 percent of women and 2.5 percent of men in the United States receive psychotherapy in any given year.
The increased availability and use of psychotherapy has led to more positive attitudes toward mental health care among the general public. Before the 1960s, people often viewed the need for psychotherapy as a sign of personal weakness or a sign that the person was abnormal. Those who received therapy seldom told others about their treatment. Since then the stigma attached to psychotherapy has decreased significantly. It is now common for people to consider seeing a therapist for an emotional problem, and recipients of therapy are more willing to disclose their therapy to friends. Today psychotherapy is a topic of immense public interest. In the scientific community and in the media, people assess methods of therapy and debate which approaches are best for particular problems and disorders.
One of the strongest trends in psychotherapy in recent years has been the shift toward short-term treatment, or brief therapy. Rather than spending years in therapy, clients receive treatment over the course of several weeks or months. Brief therapies usually focus on the client’s specific problems and may make use of techniques from a variety of theoretical orientations. Brief approaches to therapy evolved in part from consumer dissatisfaction with the length, scope, and cost of psychoanalysis and similar approaches. Extensive publicity about short-term therapies has led many consumers to expect faster treatment for mental health problems than in the past, further driving the movement toward shorter therapies. To provide mental health care at lower costs, managed-care firms, such as health maintenance organizations (HMOs), limit the number of therapy sessions that they will pay for during a year for each insured person. Typical managed-care firms allow up to 20 sessions per year, but some allow as few as 8 sessions per year. Case reviewers for the managed-care company decide how many sessions of therapy each person should receive. Usually a case reviewer will authorize only a small number of sessions at first. If the therapist and client wish to continue beyond this number, the therapist must get approval from the case reviewer for additional sessions. If the client wishes to continue after reaching the maximum, he or she must pay the full cost of therapy.
Other managed-care companies pay therapists a set fee to meet with a client for up to a specified maximum number of sessions depending on the nature of the problem, free of interference from case reviewers. For example, a managed-care firm may pay a therapist $200 to hold up to eight sessions with a person. If the client uses all eight sessions, the therapist normally loses money. But if treatment stops after two or three sessions, the therapist makes a profit. This relatively new system is controversial because it creates a financial incentive for the therapist to shorten the length of treatment.
Managed care has affected the practice of psychotherapy in other important ways. Rather than selecting a therapist based on personal referrals, people enrolled in managed-care plans must select from a list of therapists provided by their managed-care organization. Clients cannot be assured of complete confidentiality because therapists must provide case reviewers with treatment plans and details of progress. Increasingly, managed-care companies are reluctant to authorize more than several sessions of psychotherapy, favouring drug treatment instead.
Critics argue that managed-care companies have embraced a “quick fix” mentality that pushes short-term therapy even when long-term therapy may be more appropriate. Others note that managed care has brought greater accountability to the profession of psychotherapy, forcing therapists to justify the effectiveness of their treatment approach. In the late 1990s most Americans with health insurance were enrolled in plans with managed mental health care.
Psychotherapists and counsellors come principally from the fields of psychiatry, psychology, social work, and psychiatric nursing. Their training is quite different, even though their actual therapeutic techniques may be quite similar.
Psychiatrists are physicians who specialize in the treatment of psychological disorders. They attend medical school for four years to earn an M.D. (doctor of medicine) degree. Then they receive training in psychiatry during a residency of three or four years. They differ from other therapists in that they can prescribe medications, such as antidepressants and antianxiety drugs.
Clinical psychologists and counselling psychologists have a PhD (doctor of philosophy) or Psy.D. (doctor of psychology) degree that requires four to six years of graduate study. They work in settings such as businesses, schools, mental health centres, and hospitals. Licensing requirements vary in the United States, but most states require psychologists to have postdoctoral training.
Psychiatric social workers have a master’s degree in social work (M.S.W.), usually requiring two years of graduate study. They may work in mental health agencies or medical settings practising individual therapy or family and marital therapy. Psychiatric social workers make up the single largest group of mental health professionals. Licensing requirements vary in the United States.
Psychiatric nurses are registered nurses who usually have a master’s degree in psychiatric nursing. They often work in a hospital setting conducting individual or group therapy with patients under the supervision of a psychiatrist.
Psychoanalysts specialize in psychoanalysis. Although anyone may use the title of psychoanalyst, those accredited by the International Psychoanalytic Association are usually psychiatrists, psychologists, or social workers who have completed six to ten years of psychoanalytic training. They are also required to undergo a personal analysis themselves.
All but a few states license professional counsellors, usually under the title of licensed professional counsellor or licensed mental health counsellor. The National Board for Certified Counsellors offers certification for counsellors who have a minimum of a master’s degree and who meet the organization’s professional standards.
Members of the clergy—priests, ministers, and rabbis—usually take courses in counselling and psychology as part of their seminary training. Some ministers specialize in pastoral counselling, working with members of a congregation who are in distress.
Any person, even one with no training, can legally use the title of therapist, psychotherapist, or other titles not covered under licensing and certification laws. Therefore, clients should ask therapists who practice under such titles about their academic and professional training.
Psychotherapy encompasses a large number of treatment methods, each developed from different theories about the causes of psychological problems and mental illnesses. There are more than 250 kinds of psychotherapy, but only a fraction of these have found mainstream acceptance. Many kinds of psychotherapy are offshoots of well-known approaches or build upon the work of earlier theorists.
In individual therapy, a patient or client meets regularly with a therapist, typically over a period of weeks or months. The methods of therapists vary depending on their theory of personality, or way of understanding another individual. Most therapies can be classified as (1) psychodynamic, (2) humanistic, (3) behavioural, (4) cognitive, or (5) eclectic. In the United States, about 40 percent of therapists consider their approach eclectic, which means they combine techniques from a number of theoretical approaches and often tailor their treatment to the particular psychological problem of a client.
Forms of therapy that treat more than one person at a time include group therapy, family therapy, and couples therapy. These therapies may use techniques from any theoretical approach. Other forms of therapy specialize in treating children or adolescents with psychological problems.
People seeking help for their problems most often select individual therapy over group therapy and other forms of therapy. People may prefer individual therapy because it allows the therapist to focus exclusively on their problems, without distractions from others. Also, individuals may desire more privacy and confidentiality than is possible in a group setting. Sometimes people combine individual therapy and group therapy.
In the late 19th century Viennese neurologist Sigmund Freud developed a theory of personality and a system of psychotherapy known as psychoanalysis. According to this theory, people are strongly influenced by unconscious forces, including innate sexual and aggressive drives. In this 1938 British Broadcasting Corporation interview, Freud recounts the early resistance to his ideas and later acceptance of his work. Freud’s speech is slurred because he was suffering from cancer of the jaw. He died the following year.
Psychodynamic therapies are those therapies in some way derived from the work of Austrian physician Sigmund Freud, the founder of psychoanalysis. In general, psychodynamic therapists emphasize the importance of discovering and resolving internal, unconscious conflicts, often through an exploration of one’s childhood and past experiences. Although psychoanalysis is the best-known form of psychodynamic therapy, theorists have developed many other psychodynamic therapies, some very different from Freud’s original techniques.
Sigmund Freud, the founder of psychoanalysis, compared the human mind with an iceberg. The tip above the water represents consciousness, and the vast region below the surface symbolizes the unconscious mind. Of Freud’s three basic personality structures—id, ego, and superego—only the id is totally unconscious.
Freud developed the theory and techniques of psychoanalysis in the 1890s. He believed that much of an individual's personality develops before the age of six. He also proposed that children pass through a series of psychosexual stages, during which they express sexual energy in different ways. For example, during the phallic stage, from about age three to age five, children focus on feelings of pleasure in their genital organs. At this time, according to Freud, boys become sexually attracted to their mothers and feel hostility and jealousy toward their fathers. Similarly, girls develop sexual feelings toward their fathers and feel rage toward their mothers. In Freud’s view, such innate sexual and aggressive drives cause feelings and thoughts that the person regards as unacceptable. In response, the individual represses these feelings, driving them into the unconscious mind. In the process, three basic personality structures are formed: the id, the ego, and the superego. The id represents unchecked, instinctual drives; the superego is the voice of social conscience; and the ego is the rational thinking that mediates between the id and superego and deals with reality. These three systems function as a whole, not separately. Id forces are unconscious and often emerge without an individual’s awareness, causing fear, anxiety, depression, or other distressing symptoms. Freud used the term neurosis to refer to such symptoms.
In psychoanalysis, Freud sought to eliminate neurotic symptoms by bringing the individual’s repressed fantasies, memories, and emotions into consciousness. He placed particular emphasis on helping patients uncover memories about early childhood trauma and conflict, which he regarded as the source of emotional problems in adults. At first, he used hypnosis as a way to gain access to a person’s unconscious. Later he developed free association, a method in which patients say whatever thoughts come to their minds about dreams, fantasies, and memories. The analyst’s interpretations of this material, Freud believed, could provide patients with insight into their unconscious - insight that would help them become less anxious, less depressed, or better in other ways.
Freud also placed great value on what could be learned from transference, the patient’s emotional response to the therapist. Freud believed that during therapy, patients transfer repressed feelings toward their family members to their relationship with the therapist. Transference exposes these repressed feelings and allows the patient to work through them. Free association and transference are still central features of Freudian psychoanalysis.
In traditional or classical psychoanalysis, the patient lies on a couch and the therapist sits out of sight of the patient. This practice is intended to minimize the presence of the therapist and allow the patient to engage in free association more easily. Classical psychoanalysis requires three to four sessions of therapy each week for several years. At a rate of $100 or more per session, three sessions per week costs more than $15,000 per year. Classical psychoanalysis is not typically covered by insurance plans with managed mental health care. Therefore, relatively few individuals choose this intensive and long-term therapy.
In contemporary forms of psychoanalysis, the duration of therapy is often shorter - between one and four years - and meetings may take place one or two times a week. Other psychoanalytically oriented therapists work in a brief format of 30 sessions or less. The patient sits on a chair across from the therapist rather than lying on a couch. Modern psychoanalysts tend to focus more on current functioning and make less use of free association techniques.
American psychoanalyst and social philosopher Erich Fromm stressed the importance of social and economic factors on human behaviour. His focus was a departure from traditional psychoanalysis, which emphasized the role of the subconscious. In this 1969 essay for Collier’s Year Book, Fromm presents various explanations for human violence. He argues that violence cannot be controlled by imposing stronger legal penalties, but rather by creating a more just society in which people connect with each other as humans and are able to control their own lives.
Several of Freud's followers developed new theories about the causes of psychological disorders. Three important neo-Freudians were Erich Fromm, Karen Horney, and Erik Erikson, who emphasized the role of social and cultural influences in the formation of personality. All three emigrated from Germany to the United States in the 1930s. Their theories have influenced modern psychodynamic therapists.
Fromm believed that the fundamental problem people confront is a sense of isolation deriving from their own separateness. According to Fromm, the goal of therapy is to orient oneself, establish roots, and find security by uniting with other people while remaining a separate individual.
Horney departed from Freud in her belief in the importance of social forces in personality formation. She asserted that people develop anxiety and other psychological problems because of feelings of isolation during childhood and unmet needs for love and respect from their parents. The goal of therapy, in her view, is to help patients overcome anxiety-driven neurotic needs and move toward a more realistic image of themselves.
Erikson extended Freud's emphasis on childhood development to cover the entire lifespan. Referred to as an ego psychologist, he emphasized the importance of the ego in helping individuals develop healthy ways to deal with their environment. Often working with children, Erikson helped individuals develop the basic trust and confidence needed for the development of a healthy ego.
Other psychoanalytic therapists focussed on how relationships develop between the child and others, especially the mother. British pediatrician Donald Winnicott and Austrian-American pediatrician Margaret Mahler were known as object-relations analysts because of their emphasis on the child’s love object (such as the mother or father). They and other object-relations therapists, such as Austrian-born British psychoanalyst Melanie Klein, helped patients deal with problems that arose from being separated from their mothers inappropriately or at too early an age.
Swiss psychiatrist Carl Jung began his studies of human motivation in the early 1900s and created the school of psychoanalysis known as analytical psychology. A contemporary of Austrian psychoanalyst Sigmund Freud, Jung at first collaborated closely with Freud but eventually moved on to pursue his own theories, including the exploration of personality types. According to Jung, there are two basic personality types, extroverted and introverted, which alternate equally in the completely normal individual. Jung also believed that the unconscious mind is formed by the personal unconscious (the repressed feelings and thoughts developed during an individual’s life) and the collective unconscious (those inherited feelings, thoughts, and memories shared by all humanity).
Swiss psychiatrist Carl Jung developed a system of therapy very different from that of the psychoanalytic therapists. He had worked closely with Freud, but eventually broke away completely to pursue his own work.
Jung created a school of psychology that he called analytical psychology. He felt that Freud focussed too much on sexual drives and not enough on all of the creative instincts and impulses that motivate individuals. Whereas Freud had described the personal unconscious, which reflected the sum of one person’s experience, Jung added the concept of the collective unconscious, which he defined as the reservoir of the experience of the entire human race. The collective unconscious contains images called archetypes that are common to all individuals. They are often expressed in mythological concepts such as good and evil spirits, fairies, dragons, and gods.
In general, Jungian therapists see psychological problems as arising from unconscious conflicts that create disturbances in psychic energy. They treat psychological problems by helping their patients bring material from their personal and collective unconscious into conscious awareness. The therapists do this through a knowledge of symbolism - not only symbols from mythology and folk culture, but also current cultural symbols. By interpreting dreams and other materials, Jungian therapists help their patients become more aware of unconscious processes and become stronger individuals.
Austrian psychologist and psychiatrist Alfred Adler studied under Sigmund Freud, the founder of psychoanalysis, before developing his own theories about human behaviour. Adler’s best-known theories stress that individuals are mainly motivated by feelings of inferiority, which he called an inferiority complex.
Like Jung, Austrian physician Alfred Adler believed that Freud overemphasized the importance of sexual and aggressive drives. Adler was particularly interested in sibling relationships, birth order, and relationships with parents. He would ask patients about their early memories and use this information to analyze their attitudes, beliefs, and behaviours. He helped his patients by encouraging them to meet important life goals: love, work, and friendship.
For Adler and modern therapists who draw from his work, interest in others and participation in society are important goals of therapy. Adlerian therapists see therapy in part as educational, and they use a number of innovative action techniques to help patients change mistaken beliefs and interact more fully with family members and others.
Humanistic therapies focus on the client's present rather than past experiences, and on conscious feelings rather than unconscious thoughts. Therapists try to create a caring, supportive atmosphere and to guide clients toward personal realizations and insights. Clients are encouraged to take responsibility for their lives, to accept themselves, and to recognize their own potential for growth and change.
The length of therapy depends on the severity of the problem and on a client's ability to change and try new behaviours. Because humanistic therapies emphasize the relationship between client and therapist and a gradual development of increased responsibility by the client, these therapies typically take a year or two of weekly sessions.
Three of the most influential forms of humanistic therapy are existential therapy, person-centred therapy, and Gestalt therapy. 1. Existential Therapy. Based on a philosophical approach to people and their existence, existential therapy deals with important life themes. These themes include living and dying, freedom, responsibility to self and others, finding meaning in life, and dealing with a sense of meaninglessness. More than other kinds of therapists, existential therapists examine individuals' awareness of themselves and their ability to look beyond their immediate problems and daily events to problems of human existence.
The first existential therapists were European psychiatrists trained in psychoanalysis who were dissatisfied with Freud's emphasis on biological drives and unconscious processes. Existential therapists help their clients confront and explore anxiety, loneliness, despair, fear of death, and the feeling that life is meaningless. There are few techniques specific to existential therapy. Therapists normally draw on techniques from a variety of therapies. One well-known existential therapy is logotherapy, developed by Austrian psychiatrist Viktor E. Frankl in the 1940s (logos is Greek for meaning).
2. Person-Centred Therapy. In the 1940s and 1950s American psychologist Carl Rogers developed a form of psychotherapy known as person-centred therapy. This approach emphasizes that each person has the capacity for self-understanding and self-healing. The therapist tries to demonstrate empathy and true caring for clients, allowing them to reveal their true feelings without fear of being judged.
Person-centred therapy, originally called client-centred therapy, is perhaps the best-known form of humanistic therapy. American psychologist Carl Rogers developed this type of therapy in the 1940s and 1950s. Rogers believed that people, like other living organisms, are driven by an innate tendency to maintain and enhance themselves, which in turn moves them toward growth, maturity, and life enrichment. Within each person, Rogers believed, is the capacity for self-understanding and constructive change.
Person-centred therapy emphasizes understanding and caring rather than diagnosis, advice, and persuasion. Rogers strongly believed that the quality of the therapist-client relationship influences the success of therapy. He felt that effective therapists must be genuine, accepting, and empathic. A genuine therapist expresses true interest in the client and is open and honest. An accepting therapist cares for the client unconditionally, even if the therapist does not always agree with him or her. An empathic therapist demonstrates a deep understanding of the client's thoughts, ideas, experiences, and feelings and communicates this empathic understanding to the client. Rogers believed that when clients feel unconditional positive regard from a genuine therapist and feel empathically understood, they will be less anxious and more willing to reveal themselves and their weaknesses. By doing so, clients gain a better understanding of their own lives, move toward self-acceptance, and can make progress in resolving a wide variety of personal problems.
Person-centred therapists use an approach called active listening to demonstrate empathy - letting clients know that they are being fully listened to and understood. First, therapists must show through their body position and facial expression that they are paying attention—for example, by directly facing the client and making good eye contact. During the therapy session, the therapist tries to restate what the client has said and seeks clarification of the client’s feelings. The therapist may use such phrases as “What I hear you saying is . . . ” and “You’re feeling like . . . ” The therapist seeks mainly to reflect the client’s statements back to the client accurately, and does not try to analyze, judge, or lead the direction of discussion. For example:
Client: I always felt my husband loved me. I just don’t understand why this happened.
Therapist: You feel surprised by the fact that he left you, because you thought he loved you. It comes as a real surprise.
Client: M-hm. I guess I haven’t really accepted that he could do this to me. A big part of me still loves him.
Therapist: You seem to still be hurting from what he did. The love you have for him is so strong.
Many therapists, not just those of humanistic orientation, have adopted elements of Rogers’s approach. 3. Gestalt Therapy. Gestalt is a German word referring to wholeness and the concept that a whole unit is more than the sum of its parts. Gestalt therapy was developed in the 1940s and 1950s by Frederick (Fritz) Perls, a German-born psychiatrist who immigrated to the United States. Like person-centred therapy, Gestalt therapy encourages individuals to take responsibility for their own lives and personal growth and to recognize their capacity for healing themselves. However, Gestalt therapists are willing to use confrontational questions and techniques to help clients express their true feelings. In the following example, the therapist helps the client become more aware of her own behaviour and her responsibility for it:
Client: You know, you just can't do anything right in today's world.
Therapist: Please repeat that phrase using the word I instead of you.
Client: I can't do anything right, it seems.
Therapist: Would you change the word can't to won't?
Client: I won't do anything right.
Therapist: What won't you do that you want to do?
The general goal of Gestalt therapy is awareness of self, others, and the environment that brings about growth, wholeness, and integration of one’s thoughts, feelings, and actions. Gestalt therapists use a wide variety of techniques to make clients more aware of themselves, and they often invent or experiment with techniques that might help to accomplish this goal. One of the best-known Gestalt techniques is the empty-chair technique, in which an empty chair represents another person or another part of the client’s self. For example, if a client is angry with herself for not being kinder to her mother, the client may pretend her mother is sitting in an empty chair. The client may then express her feelings by speaking in the direction of the chair. Alternatively, the client might play the role of the understanding daughter while sitting in one chair and the angry daughter while sitting in another. As she talks to different parts of herself, differences may be resolved. The empty-chair technique reflects Gestalt therapy’s strong emphasis on dealing with problems in the present.
Behavioural therapies differ dramatically from psychodynamic and humanistic therapies. Behavioural therapists do not explore an individual’s thoughts, feelings, dreams, or past experiences. Rather, they focus on the behaviour that is causing distress for their clients. They believe that behaviour of all kinds, both normal and abnormal, is the product of learning. By applying the principles of learning, they help individuals replace distressing behaviours with more appropriate ones.
Typical problems treated with behavioural therapy include alcohol or drug addiction, phobias (such as a fear of heights), and anxiety. Modern behavioural therapists work with other problems, such as depression, by having clients develop specific behavioural goals—such as returning to work, talking with others, or cooking a meal. Because behavioural therapy can work through nonverbal means, it can also help people who would not respond to other forms of therapy. For example, behavioural therapists can teach social and self-care skills to children with severe learning disabilities and to individuals with schizophrenia who are out of touch with reality.
Behavioural therapists begin treatment by finding out as much as they can about the client's problem and the circumstances surrounding it. They do not infer causes or look for hidden meanings, but rather focus on observable and measurable behaviours. Therapists may use a number of specific techniques to alter behaviour. These techniques include relaxation training, systematic desensitization, exposure and response prevention, aversive conditioning, and social skills training.
Relaxation training is a method of helping people with high levels of anxiety and stress. It also serves as an important component of some other behavioural treatments.
In one type of relaxation exercise, people learn to tighten and then relax one muscle group at a time. This method, called progressive relaxation, was developed in the 1930s by American physiologist and psychologist Edmund Jacobson. At first, the therapist gives spoken instructions to the client. Later the client can practice relaxation exercises at home using a tape recording of the therapist’s voice. The following example, adapted from Jacobson’s work, illustrates a brief relaxation procedure:
Just settle back as comfortably as you can, close your eyes, and let yourself relax to the best of your ability. Now clench up both fists tighter and tighter and study the tension as you do so. Keep them clenched and feel the tension in your fists, hands, forearms … Now relax. Let the fingers of your hands become loose and observe the contrast in your feelings … Now let yourself go and try to become more relaxed all over. Take a deep breath. Just let your whole body become more and more relaxed.
Another relaxation technique is meditation. In meditation, people try to relax both the mind and the body. In many forms of meditation, people begin by sitting comfortably on a cushion or chair. Then they gradually relax their body, begin to breathe slowly, and concentrate on a sensation—such as the inhaling and exhaling of breath—or on an image or object. In Transcendental Meditation, a person does not try to concentrate on anything, but merely sits in a quiet atmosphere and repeats a mantra (a specially chosen word) to try to achieve a state of restful alertness.
Participants in a program to overcome a phobia (fear) of flying on aeroplanes get ready to “graduate” by taking a short flight. The program uses a type of behavioural therapy called systematic desensitization, which teaches people to relax in a situation that would normally produce anxiety.
Systematic desensitization, a procedure developed by South African psychiatrist Joseph Wolpe in the 1950s, gradually teaches people to be relaxed in a situation that would otherwise frighten them. It is often used to treat phobias and other anxiety disorders. The word desensitization refers to making people less sensitive to or frightened of certain situations.
In the first step of desensitization, the therapist and client establish an anxiety hierarchy—a list of fear-provoking situations arranged in order of how much fear they provoke in the client. For a man afraid of spiders, for example, holding a spider may rank at the top of his anxiety hierarchy, whereas seeing a small picture of a spider may rank at the bottom. In the second step, the therapist has the client relax using one of the relaxation techniques described above. Then the therapist asks the client to imagine each situation on the anxiety hierarchy, beginning with the least-feared situation and moving upward. For example, the man may first imagine seeing a picture of a spider, then imagine seeing a real spider from far away, then from a short distance, and so forth. If the client feels anxiety at any stage, he or she is instructed to stop thinking about the situation and to return to a state of deep relaxation. The relaxation and the imagined scene are paired until the client feels no further anxiety. Eventually the client can remain free of anxiety while imagining the most-feared situation.
Asking a client to encounter the feared situation is a technique called in vivo exposure. For the man who is afraid of spiders, a therapist might arrange to go to a park or zoo where visitors can touch large spiders. The therapist would model for the client how to approach a spider and how to handle it. The therapist may also encourage the man to walk gradually closer to the spider, reinforcing his progress with praise and reassurance as he does so. The goal for the therapist and patient would be for the man to pick up the spider.
Problems are rarely as clear and simple as fear of spiders. Therapists may spend considerable time deciding on appropriate goals, which ones to pursue first, and then reevaluating or changing goals as therapy progresses. Systematic desensitization typically takes from 10 to 30 sessions, depending on the severity of the problem. In vivo therapies are more direct and may take less time.
Exposure and response prevention is a behavioural technique often used to treat people with obsessive-compulsive disorder. In this technique, the therapist exposes the client to the situation that causes obsessive thoughts, but then prevents the client from acting on these thoughts. For example, to treat people who compulsively wash their hands because they fear contamination from germs, a therapist might have them handle something dirty and then prevent them from washing their hands. Therapists have also experimented with exposure and response prevention to treat people with bulimia nervosa, an eating disorder in which people engage in binge eating and afterward force themselves to vomit or, less often, take laxatives. The therapist feeds the bulimic patients small amounts of food but prevents them from bingeing, taking laxatives, or vomiting.
Behavioural therapists occasionally use a technique called aversive conditioning or aversion therapy. In this method, clients receive an unpleasant stimulus, such as an electric shock, whenever they perform an undesirable behaviour. For example, therapists treating patients with alcoholism may have them ingest the drug disulfiram (Antabuse). The drug makes the patients violently sick if they drink alcohol. Many therapists have found that aversive conditioning is not as effective as other behavioural techniques, and as a result, they use this technique very infrequently. For some problems, however, aversive conditioning can work when all other techniques have failed. For example, therapists have found that immediate application of an unpleasant stimulus can eliminate self-mutilation and other self-destructive behaviours in children with autism.
Social skills training is a method of helping people who have problems interacting with others. Clients learn basic social skills such as initiating conversations, making eye contact, standing at the appropriate distance, controlling voice volume and pitch, and responding to questions. The therapist first describes and models the behaviour. Then the client practices the behaviour in skits or role-playing exercises. The therapist watches the exercises and provides constructive criticism and further modelling. Therapists often conduct this kind of training with groups of people with similar problems. Social skills training can often help people with schizophrenia function more easily in public situations and reduce their risk of relapse or rehospitalization.
One popular form of social skills training is assertiveness training, another technique pioneered by Joseph Wolpe. This technique teaches people, often those who are shy, to make appropriate responses when someone does something to them that seems inappropriate or offensive or violates their rights. For example, if a woman has trouble saying no to a coworker who inappropriately asks her to handle some of his job responsibilities, she may benefit from learning how to become more assertive. In this example, the therapist would model assertive behaviour for the client, who would then role-play and rehearse appropriate responses to her coworker.
Cognitive therapies are similar to behavioural therapies in that they focus on specific problems. However, they emphasize changing beliefs and thoughts, rather than observable behaviours. Cognitive therapists believe that irrational beliefs or distorted thinking patterns can cause a variety of serious problems, including depression and chronic anxiety. They try to teach people to think in more rational, constructive ways.
In the mid-1950s American psychologist Albert Ellis developed one of the first cognitive approaches to therapy, rational-emotive therapy, now commonly called rational-emotive behaviour therapy. Trained in psychoanalysis in the 1940s, Ellis quickly became disillusioned with psychoanalytic methods, viewing them as slow and inefficient. Influenced by Alfred Adler’s work, Ellis came to regard irrational beliefs and illogical thinking as the major cause of most emotional disturbances. In his view, negative events such as losing a job or breaking up with a lover do not by themselves cause depression or anxiety. Rather, emotional disorders result when a person perceives the events in an irrational way, such as by thinking, “I’m a worthless human being.”
Although rational-emotive behaviour therapists use many techniques, the most common technique is that of disputing irrational thoughts. First the therapist identifies irrational beliefs by talking with the client about his or her problems. Examples of irrational beliefs, according to Ellis, include the idea that unhappiness is caused by external events, the idea that one must be accepted and loved by everyone, and the idea that one must always be competent and successful to be a worthwhile person.
To dispute the client’s irrational beliefs and longstanding assumptions, rational-emotive behaviour therapists often use confrontational techniques. For example, if a student tells the therapist, “I must get an A on this test or I will be a failure in life,” the therapist might say, “Why must you? Do you think your entire career as a student will be through if you get a B?” The therapist helps the client replace irrational thoughts with more reasonable ones, such as “I would like to get an A on the test, but if I don't, I have strategies I can use to do better next time.”
Like Ellis before him, American psychiatrist Aaron T. Beck became disenchanted with psychoanalysis, finding that it often did not help relieve depression for his patients. In the 1960s Beck developed his own form of cognitive therapy for treating depression, and later applied it to other disorders. In Beck’s view, depressed people tend to have negative views of themselves, interpret their experiences negatively, and feel hopeless about their future. He sees these tendencies as a problem of faulty thinking. Like rational-emotive behaviour therapists, practitioners of Beck’s technique challenge the client's absolute, extreme statements. They try to help the client identify distorted thinking, such as thinking about negative events in catastrophic terms, and then suggest ways to change this thinking. The following example illustrates how a cognitive therapist might challenge a client’s absolute statement.
Client: Everyone at work is smarter than me.
Therapist: Everyone? Every single person at work is smarter than you?
Client: Well, maybe not. There are a lot of people at work I don't know well at all. But my boss seems smarter; she seems to really know what's going on.
Therapist: Notice how we went from everyone at work being smarter than you to just your boss.
Cognitive therapists often give their clients homework assignments designed to help them identify their own irrational patterns of thinking and to reinforce what they learn in therapy. For example, clients often keep a daily log in which they write down distressing emotions, the situation that caused the emotions, their thoughts at the time, whether the thoughts were distorted or not, and alternative ways of thinking about the situation.
Helping individuals change problematic behaviours, thoughts, or feelings is not an easy task. Therapists have tried many creative approaches to help patients, some of which do not fall neatly into the major categories of psychodynamic, humanistic, behavioural, or cognitive. Two such therapies still in use today are transactional analysis and reality therapy.
In the 1950s and 1960s Canadian-American psychiatrist Eric Berne developed a form of therapy he called transactional analysis. Although trained in psychoanalysis, Berne felt that the complexity of psychoanalytic terminology excluded patients from full participation in their own treatment. He developed a theory of personality based on the view that when people interact with each other, they function as either a parent, adult, or child. For example, he would characterize social interactions between two people as parent-adult, parent-child, adult-child, adult-adult, and so forth depending on the situation. He referred to social interactions as transactions and to analysis of these interactions as transactional analysis.
In therapy, which is often conducted in groups, patients learn to recognize when they are assuming one of these roles and to understand when being an authoritarian parent or an impulsive child is appropriate or inappropriate. In addition to identifying these roles, clients learn how to change roles in order to behave in more desirable ways.
American psychiatrist William Glasser developed reality therapy in the 1960s, after working with teenage girls in a correctional institution and observing work with severely disturbed schizophrenic patients in a mental hospital. He observed that psychoanalysis did not help many of his patients change their behaviour, even when they understood the sources of it. Glasser felt it was important to help individuals take responsibility for their own lives and to blame others less. Largely because of this emphasis on personal responsibility, his approach has found widespread acceptance among drug- and alcohol-abuse counsellors, corrections workers, school counsellors, and those working with clients who may be disruptive to others.
Reality therapy is based on the premise that all human behaviour is motivated by fundamental needs and specific wants. The reality therapist first seeks to establish a friendly, trusting relationship with clients in which they can express their needs and wants. Then the therapist helps clients explore the behaviours that created problems for them. Clients are encouraged to examine the consequences of their behaviour and to evaluate how well their behaviour helped them fulfill their wants. The therapist does not accept excuses from clients. Finally, the therapist helps the client formulate a concrete plan of action to change certain behaviours, based on the client’s own goals and ability to make choices.
Currently, many therapists describe their approach as eclectic or integrative, meaning that they use ideas and techniques from a variety of therapies. Many therapists like the opportunity to draw from many theories and not limit themselves to one or two. Most therapists who adopt an eclectic approach have a rationale for which techniques they use with specific clients, rather than just choosing an approach randomly or because it suits them at the time.
One of the most influential eclectic approaches is cognitive-behavioural therapy. Other eclectic approaches use other combinations of therapies.
There are almost no pure cognitive or behavioural therapists. Usually therapists combine cognitive and behavioural techniques in an approach known as cognitive-behavioural therapy. For example, to treat a woman with depression, a therapist may help her identify the irrational thinking patterns that cause her distressing feelings and replace these irrational thoughts with new ways of thinking. The therapist may also train her in relaxation techniques and have her try new behaviours that help her become more active and less depressed. The client then reports the results back to the therapist.
Cognitive-behavioural therapy has rapidly become one of the most popular and influential forms of psychotherapy, in part because it takes a relatively short period of time compared with humanistic and psychoanalytic therapies, and also because of its ability to treat a wide range of problems. Sometimes cognitive-behavioural therapy takes only a few sessions, but more often it extends for 20 or 30 sessions over four to six months. The length of therapy usually depends on the severity and number of the client’s problems.
Some therapists have one particular way of understanding clients—that is, they adhere to one theory of personality—but use many techniques from a variety of theories. Other therapists may understand clients through two or three theories of personality and use only those change techniques that are consistent with those theories. Some therapists have combined psychodynamic and behavioural therapies to help their clients deal with fears and anxieties while also understanding their causes.
Therapists may use different approaches to treat different problems. For example, a therapist might find that clients who are grieving over the loss of a spouse may respond best to a humanistic approach, in which they can share their grieving and their hurts with the therapist. However, the same therapist may use a cognitive-behavioural approach with a person who reports being anxious most of the time.
Teenage girls talk with a therapist during a group therapy session. Group therapy allows people to see how others deal with problems and to receive support and encouragement from group members.
All of the individual therapies can also be used with groups. People may choose group therapy for several reasons. First, group therapy is usually less expensive than individual therapy, because group members share the cost. Group therapy also allows a therapist to provide treatment to more people than would be possible otherwise. Aside from cost and efficiency advantages, group therapy allows people to hear and see how others deal with their problems. In addition, group members receive vital support and encouragement from others in the group. They can try out new ways of behaving in a safe, supportive environment and learn how others perceive them.
Groups also have disadvantages. Individuals spend less time talking about their own problems than they would in one-on-one therapy. Also, certain group members may interact with other group members in hurtful ways, such as by yelling at them or criticizing them harshly. Generally, therapists try to intercede when group members act in destructive ways. Another disadvantage of group therapy involves confidentiality. Although group members usually promise to treat all therapy discussions as confidential, some group members may worry that other members will share their secrets outside of the group. Group members who believe this may be less willing to disclose all of their problems, lessening the effectiveness of therapy for them.
Groups vary widely in how they work. The typical group size is from six to ten people with one or two therapists. Two therapists often prefer to work together in a group so that they can respond not only to one person’s issues but also to discussions between group members that may unfold quickly. Some groups are open, or drop-in, groups—new clients may join at any time, and members may attend or skip whatever sessions they desire. Other groups are closed and admit new members only when all members agree. Regular attendance is usually required in these groups. In closed groups, both the therapist and group members will ask a member to provide an explanation for missing a meeting.
When forming a group, therapists try to make clear to potential participants the goals of the group and for whom it is appropriate. Therapists will often screen potential participants to learn about their problems and decide whether the group is right for them. Sometimes therapists prefer diversity among group members in terms of age, gender, and problem. In other cases, therapists may limit membership in a group to individuals with similar problems and backgrounds. For example, some groups may form specifically for individuals who are grieving the loss of a loved one, individuals who abuse drugs or alcohol, people with eating disorders, people suffering from depression, or troubled elderly individuals.
The techniques used in group therapy depend largely on the theoretical orientation of the therapist. Humanistic therapists tend to respond to the feelings and experiences of other members. They may also interpret or comment on social interactions between group members. In cognitive-behavioural groups, group members try to change their own thoughts and behaviours and support and encourage other members to do the same. Psychoanalytic groups focus on childhood experiences and their impact on participants’ current behaviours, thoughts, and feelings.
Psychodrama, the first form of group therapy, was developed in the 1920s by Jacob L. Moreno, an Austrian psychiatrist. Moreno brought his method to the United States in 1925, and its use spread to other parts of the world. Participants in psychodrama act out their problems—often on a real stage and with props—as a means of heightening their awareness of them. The therapist serves as the director, suggesting how participants might act out problems and assigning roles to other group members. For example, a woman might reenact a scene from her childhood with other group members playing her father, mother, brother, or sister. Groups who use psychodrama may do so weekly or simply as a one-time demonstration.
A self-help group or support group involves people with a common problem who meet regularly to share their experiences, support each other emotionally, and encourage change or recovery. They are usually free of charge to interested participants. Self-help groups are not strictly considered psychotherapy because they are not led by a licensed mental health professional. However, they can serve as an important source of help for people in emotional distress.
There are thousands of self-help and support groups in the United States and Canada. The oldest and best known is Alcoholics Anonymous, which uses a 12-step program to treat alcoholism. Other groups have formed for cancer patients, parents whose children have been murdered, compulsive gamblers, battered women, obese people, and many other types of people.
Family therapy involves the participation of one or more members of the same family who seek help for troubled family relationships or the problems of individual family members. Typical problems that bring families into family therapy are delinquent behaviour by a child or adolescent, a child’s poor performance in school, hostilities between a parent and child or between siblings, and severe psychological disturbance or mental illness in a parent or child.
One of the most influential forms of family therapy, family systems therapy, views the family as a single, complex system or unit. Individual members are interdependent parts of the system. Rather than treating one person’s symptoms in isolation, therapists try to understand the symptoms in the larger context of the family. For example, a boy who begins picking fights with classmates might do so to get more attention from his busy parents. Therapists work from the rationale that current family relationships profoundly affect, and are affected by, an individual family member’s psychological problems. For this reason, most family therapists prefer to work with the entire family during a session, rather than meeting with family members individually.
In most family therapy sessions, the therapist encourages family members to air their feelings, frustrations, and hostilities. By observing how they interact, the therapist can help them recognize their roles and relationships with each other. The therapist tries to avoid assigning blame to any particular family member. Instead, the therapist makes suggestions about how family members might adjust their roles and prevent future conflict.
Couples therapy, also called marital therapy or marriage counselling, is designed to help intimate partners improve their relationship. Therapists treat married couples as well as unmarried couples of the opposite or same sex. Therapists normally hold sessions with both partners present. At certain times during therapy, however, the therapist may choose to see the partners individually.
Couples may seek therapy for a variety of problems, many of which concern a breakdown of communication or trust between the partners. For example, an extramarital affair by one partner may cause the other partner to feel emotional pain, anger, and distrust. Some partners may feel distant from one another or experience sexual problems. In other cases, one or both partners may have psychological problems or alcohol or drug problems that negatively affect their relationship.
The techniques used in therapy vary depending on the theoretical orientation of the therapist and the nature of the couple’s problem. Most often, therapists focus on improving communication between partners and on helping them learn to manage conflict. By observing the partners as they talk to each other, the therapist can learn about their communication patterns and the roles they assume in their relationship. The therapist may then teach the partners new ways of expressing their feelings verbally, how to listen to each other, and how to work together to solve problems. The therapist may also suggest that they try out new roles. For example, if one partner makes all of the decisions in the relationship, the therapist may encourage the couple to try sharing decision-making power.
Because most couples therapists also have training in family therapy, they often examine the influence of the couple’s relationships with parents, children, and siblings. Psychoanalytically oriented therapists may focus on how the partners’ childhood experiences affect their current relationship with each other. For couples who cannot work through their differences or reestablish trust and intimacy, separation or divorce may be the best choice. Therapists can help such partners separate in constructive ways.
Some psychotherapists specialize in working with children. Therapists deal with children who are anxious, depressed, or have difficulty getting along with others at home or school. Some children have psychological problems resulting from family issues such as divorce, new stepparents, single-parent homes, death of a parent or sibling, being homeless, or being raised in an alcoholic family. Other children have emotional problems related to physical disabilities, learning disabilities, or attention-deficit hyperactivity disorder.
Play therapy is a special technique that therapists often use with children aged 2 to 12. For children, play is a natural way of learning and relating to others. Play therapy can help therapists both to understand children's problems and to help children deal with their feelings, behaviours, and thoughts. Therapists may use playhouses, puppets, a toy telephone, dolls, sandboxes, food, finger paints, and other toys or objects to help children express their thoughts and feelings. In addition to projecting a caring and gentle manner, therapists who work with children are trained to understand and interpret children’s nonverbal and verbal expressions.
For most people, psychotherapy involves a common sequence of events: finding a therapist, assessing the problem, exploring the problem, resolving the problem, and terminating therapy. Sometimes therapy will end prematurely, before the problem is resolved. For example, the therapist or client may move to a new city.
When someone has a personal problem and seeks help from a therapist, the individual may turn to a variety of people to get a referral—a friend, a pastor or rabbi, or a family physician. Phone books list associations of psychologists, psychiatrists, and social workers that can also provide referrals to therapists. As noted earlier, however, some health insurance plans may restrict a person’s choice of therapist.
When prospective clients call a therapist for an appointment, they may discuss several aspects of therapy. One concern is availability—is the therapist taking on new patients? Are there hours when both patient and therapist can meet? Another issue is fees. Both therapists in private practice and those in community mental health agencies have to negotiate fees depending in part on the client’s health insurance plan. Some agencies do not require health insurance and have very low fees or a sliding scale that sets fees depending on the ability of the client to pay.
During the first meeting, clients try to explain their problems to the therapist. The therapist usually asks about the nature of the problems, what may make the problems better or worse, and how long the problems have existed. For many therapists, hearing details, even small ones, helps them to assess the problems and to decide the best form of treatment. Some therapists collaborate with clients in deciding the goals of therapy and what treatment methods will be used. Assessment does not stop with the first session, but continues through therapy. Occasionally, goals of therapy change upon assessment of new issues or problems.