Generally, nature is an indefinitely mutable term, changing as our scientific conception of the world changes, and often best seen as signifying a contrast with something considered not part of nature. The term applies both to individual species (it is the nature of gold to be dense or of dogs to be friendly), and also to the natural world as a whole. The sense in which it applies to species quickly links up with ethical and aesthetic ideals: A thing ought to realize its nature; what is natural is what it is good for a thing to become; it is natural for humans to be healthy or two-legged, and departure from this is a misfortune or deformity. The association of what is natural with what it is good to become is visible in Plato, and is the central idea of Aristotle's philosophy of nature. Unfortunately, the pinnacle of nature in this sense is the mature adult male citizen, with the rest of what we would call the natural world, including women, slaves, children, and other species, not quite making it.
Nature in general can, however, function as a foil to ideals as much as a source of them: In this sense fallen nature is contrasted with a supposed celestial realization of the 'forms'. The theory of 'forms' is probably the most characteristic, and most contested, of the doctrines of Plato. In the background lies the Pythagorean conception of form as the key to physical nature, but also the sceptical doctrine associated with the Greek philosopher Cratylus, who is sometimes thought to have been a teacher of Plato before Socrates. Cratylus is famous for capping the doctrine of Heraclitus of Ephesus, whose guiding idea was that of the logos: something capable of being heard or hearkened to by people, unifying opposites, and somehow associated with fire, which is pre-eminent among the four elements that Heraclitus distinguishes: fire, air (breath, the stuff of which souls are composed), earth, and water. Heraclitus is principally remembered for the doctrine of the 'flux' of all things, and the famous statement that you cannot step into the same river twice, for new waters are ever flowing in upon you. The more extreme implications of the doctrine of flux, e.g., the impossibility of categorizing things truly, do not seem consistent with his general epistemology and views of meaning, and were left to be drawn by his follower Cratylus, whose conclusion was that the flux cannot be captured in words. According to Aristotle, Cratylus eventually held that, since nothing can truly be said regarding that which is everywhere and in every respect changing, the right course is just to stay silent and wag one's finger. Plato's theory of forms can be seen in part as a reaction against the impasse to which Cratylus was driven.
The Galilean world view might have been expected to drain nature of its ethical content, but the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include universal topics treated with simplicity, economy, regularity, and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness, and fertile diversity, but it is also associated with the progress of human history and its transformations, including those of ordinary human self-consciousness. That which stands in contrast with nature may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar; (2) the supernatural, or the world of gods and invisible agencies; (3) the world of rationality and intelligence, conceived of as distinct from the biological and physical order; (4) that which is manufactured and artefactual, or the product of human intervention; and (5), related to that, the world of convention and artifice.
Different conceptions of nature continue to have ethical overtones; for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, or the idea that it is women's nature to be one thing or another is taken to be a justification for differential social expectations. The term functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the 'masculine' self-image, itself a socially variable and potentially distorting picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In this latter area particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, or the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. However, to more radical feminists such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such asymmetrical powers and rights.
Biological determinism is the view that our biology not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social, and political determinants of the way we are.
The philosophy of social science is more heavily intertwined with actual social science than is the case with other subjects such as physics or mathematics, since its central question is whether there can be such a thing as sociology. The idea of a 'science of man', devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people's own ideas of what should happen, and like fashions those ideas change in unpredictable ways, as self-consciousness is susceptible to change by any number of external events: Unlike the solar system of celestial mechanics, a society is not at all a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.
The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and criteria for assessing the various genetic stories that might provide such explanations.
Among the features that are proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved unnecessarily controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people's characteristics, e.g., at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to environment: For instance, it may be a propensity to develop some feature in some environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanation from speculative 'just so' stories, which may or may not identify real selective mechanisms.
Subsequently, in the 19th century, attempts were made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which advocated an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating the natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology, and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T. H. Huxley said that Spencer's definition of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the 'hurdy-gurdy' monotony of him, his whole system wooden, as if knocked together out of cracked hemlock.
The premise is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggle, usually by enhancing competitive and aggressive relations between people in society or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
Evolutionary psychology is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.
For all that, an essential part of the thought of the British absolute idealist F. H. Bradley (1846-1924) was the view, defended largely on the grounds that the self is not self-sufficient but is individualized only through community, that one's self is realized by contributing to social and other ideals. However, truth as formulated in language is always partial, and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley's general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late-19th-century writers influenced by the German philosopher G. W. F. Hegel (1770-1831).
Behind Bradley's case stands a preference, voiced much earlier by the German philosopher, mathematician, and polymath Gottfried Leibniz (1646-1716), for categorical, monadic properties over relations. He was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. Friedrich Schelling (1775-1854), for instance, regards nature as a creative spirit whose aspiration is ever fuller and more complete self-realization. Although a more general intellectual and artistic movement, Romanticism drew on the same intellectual and emotional resources as German idealism, which culminated in the philosophy of Hegel and of absolute idealism.
This raises the question of environmental ethics. Most ethics deals with problems of human desires and needs: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to be placed on such things as the preservation of species, or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: it is in their independence of human purposes that their value consists. They put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but a failure of due humility and reverence, a moral disability. The problem is one of expressing this value, and of mobilizing it against utilitarian agendas for developing natural areas and exterminating species, more or less at will.
Many concerns and disputes cluster around the ideas associated with the term 'substance'. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is; this ensures that the substance of a thing is that which remains through change in its properties. In Aristotle, this essence becomes more than just the matter, but a unity of matter and form. (2) That which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties, as a substance is then the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relations, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tends to disappear in empiricist thought, in favour of the sensible qualities of things, with the notion of that in which they inhere giving way to a notion of their regular concurrence. However, this in turn is problematic, since it only makes sense to talk of the occurrence of instances of qualities, not of qualities themselves; so there remains the problem of what it is for a quality to be instanced.
Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.
The sublime is a concept deeply embedded in 18th-century aesthetics, but deriving from the 1st-century rhetorical treatise On the Sublime, by Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759, 'When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness and strikes it with deep silent wonder and admiration: it finds such a difficulty in spreading itself to the dimensions of its object, as enlivens and invigorates it; which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates; and from the sense of this immensity, feels a noble pride, and entertains a lofty conception of its own capacity.'
In Kant's aesthetic theory the sublime 'raises the soul above the height of vulgar complacency'. We experience the vast spectacles of nature as 'absolutely great' and of irresistible might and power. This perception is fearful, but by conquering this fear, and by regarding as small 'those things of which we are wont to be solicitous', we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.
Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack by the British philosophers George Edward Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of 'essentialism', stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would be apt to describe as my not wearing the hat now, we would strictly not be imagining me and the hat, but only some different individuals.
The doctrine bears some resemblance to the metaphysically based view of the German philosopher and mathematician Gottfried Leibniz (1646-1716) that if a person had any attributes other than the ones he has, he would not have been the same person. Leibniz thought that when asked what would have happened if Peter had not denied Christ, one is really asking what would have happened if Peter had not been Peter, since denying Christ is contained in the complete notion of Peter. But he allowed that by the name 'Peter' might be understood 'what is involved in those attributes [of Peter] from which the denial does not follow', in order to allow for external relations, these being relations which individuals could have or lack depending upon contingent circumstances. The term 'relations of ideas' is used by the Scottish philosopher David Hume (1711-76) in the first Enquiry: 'All the objects of human reason or enquiry may naturally be divided into two kinds, to wit, relations of ideas and matters of fact' (Enquiry Concerning Human Understanding). The terms reflect the belief that anything that can be known independently of experience must be internal to the mind, and hence transparent to us.
In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called 'Hume's Fork', is a version of the a priori/empirical distinction, but reflects the 17th- and early-18th-century belief that the a priori is established by chains of intuitive certainty in the comparison of ideas. It is extremely important that in the period between Descartes and J. S. Mill a demonstration was thought of not as a formal derivation, but as a chain of 'intuitive' comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration; Hume denies that they are, and also denies that scientific enquiry proceeds by demonstrating its results.
A mathematical proof is an argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.
The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the 5th-century BC Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers. But an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of 1 is the irrational number √2.
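The point about the diagonal can be illustrated computationally. The sketch below (an illustration under stated assumptions, not a proof: a finite search can never establish irrationality, which requires the classical argument by contradiction) exhaustively checks that no ratio of small whole numbers has a square equal to 2:

```python
from fractions import Fraction

def squares_to_two(max_den: int) -> bool:
    """Search for a fraction p/q with q <= max_den whose square is exactly 2."""
    for q in range(1, max_den + 1):
        # p/q <= 2 suffices, since the square root of 2 is less than 2.
        for p in range(1, 2 * q + 1):
            if Fraction(p, q) ** 2 == 2:
                return True
    return False

# No candidate is ever found; the classical proof shows none can exist.
print(squares_to_two(200))  # False
```

Exact rational arithmetic via `Fraction` matters here: floating-point squares could falsely match or miss 2 through rounding.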
The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.
In the 20th century, proofs have been written that are so complex that no one person understands every argument used in them. In 1976, a computer was used to complete the proof of the four-color theorem. This theorem states that four colors are sufficient to color any map in such a way that regions with a common boundary line have different colors. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proven if human beings have not actually checked every detail of the proof.
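The content of the four-color theorem can be made concrete with a small sketch. The map below is invented for illustration; note that the greedy strategy used here does not itself guarantee four colors on every planar graph (that is exactly what the computer-assisted proof established), it merely produces a coloring we can then check:

```python
def greedy_coloring(adjacency):
    """Assign each region the smallest color not used by an already-colored neighbor."""
    colors = {}
    for region in adjacency:
        used = {colors[n] for n in adjacency[region] if n in colors}
        colors[region] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

def is_proper(adjacency, colors):
    """A coloring is proper if no two adjacent regions share a color."""
    return all(colors[a] != colors[b] for a in adjacency for b in adjacency[a])

# A toy map as an adjacency structure: region A borders B, C, and D, etc.
MAP = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}

colors = greedy_coloring(MAP)
print(is_proper(MAP, colors))            # True: neighbors always differ
print(max(colors.values()) + 1 <= 4)     # True: this map needed at most 4 colors
```

Checking a given coloring is easy; the hard part of the 1976 proof was showing that some at-most-four coloring exists for every possible planar map, which required machine verification of a large case analysis.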
The study of the relations of deducibility among sentences in a logical calculus is proof theory. Deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel's second incompleteness theorem.
What is more, the use of a model to test for consistency in an axiomatized system is older than modern logic. Descartes' algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar representations were used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: Proof theory studies relations of deducibility between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to sentences that are false under that interpretation? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and of semantic consequence (a formula 'B' is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if 'B' is true in all interpretations in which A1 . . . An are true). Then the central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this turns into the question of whether the proof theory delivers as theorems all and only 'tautologies'. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that the first-order predicate calculus is complete: any formula that is true under every interpretation is a theorem of the calculus.
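For the propositional case, the semantic notions above can be checked mechanically by enumerating interpretations (truth-value assignments). This is a minimal sketch, assuming formulas are encoded as Python functions of their atoms; it is a decision procedure only because the propositional calculus has finitely many interpretations per formula:

```python
from itertools import product

def is_valid(formula, n_atoms):
    """A formula is valid (a tautology) if true under every interpretation."""
    return all(formula(*vals) for vals in product([True, False], repeat=n_atoms))

def entails(premises, conclusion, n_atoms):
    """{A1..An} |= B: B is true in every interpretation making all premises true."""
    return all(conclusion(*vals)
               for vals in product([True, False], repeat=n_atoms)
               if all(p(*vals) for p in premises))

implies = lambda a, b: (not a) or b

# p -> p is valid; {p -> q, p} semantically entails q (modus ponens);
# p on its own is not valid.
print(is_valid(lambda p: implies(p, p), 1))                              # True
print(entails([lambda p, q: implies(p, q), lambda p, q: p],
              lambda p, q: q, 2))                                        # True
print(is_valid(lambda p: p, 1))                                          # False
```

Soundness and completeness then say that this semantic test and a syntactic proof system agree on exactly the same formulas.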
Euclidean geometry is the greatest example of the pure 'axiomatic method', and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth postulate of his system (the parallel postulate) could be denied without inconsistency, leading to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made a geometrical application possible for some major abstractions of tensor analysis, leading to the patterns and concepts for general relativity later used by Albert Einstein in developing his theory. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid's Elements is attributed to the mathematician Eudoxus, and contains a precise development of the real numbers, work which remained unappreciated until rediscovered in the 19th century.
An axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regress in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.
The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.
The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behavior. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.
In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analyzed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.
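One standard way to make "distribution of power" precise in an n-person voting game is the Shapley-Shubik power index (an assumed, standard tool of cooperative game theory, not spelled out in the text above): a voter's power is the fraction of orderings of the voters in which that voter is pivotal, i.e., tips the growing coalition over the quota. The weights and quota below are invented for illustration:

```python
from itertools import permutations
from fractions import Fraction

def shapley_shubik(weights, quota):
    """Shapley-Shubik power index for a weighted voting game.

    weights: dict mapping voter -> vote weight; quota: votes needed to pass.
    Returns each voter's share of orderings in which that voter is pivotal.
    """
    voters = list(weights)
    pivots = {v: 0 for v in voters}
    orderings = list(permutations(voters))
    for order in orderings:
        total = 0
        for v in order:
            total += weights[v]
            if total >= quota:     # v is the pivotal voter in this ordering
                pivots[v] += 1
                break
    return {v: Fraction(pivots[v], len(orderings)) for v in voters}

# Three blocs with 4, 2, and 1 votes; 5 of the 7 votes are needed to pass.
print(shapley_shubik({"A": 4, "B": 2, "C": 1}, quota=5))
```

Note that power need not be proportional to weight: in this example the two smaller blocs come out equally powerful despite their different weights, since each is pivotal in the same number of orderings.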
Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles' where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complicated factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting for that term a term denoting a subset of the things the original denotes. For example, in 'all dogs bark' the term 'dogs' is distributed, since it entails 'all terriers bark', which is obtained from it by such a substitution. In 'not all dogs bark', the same term is not distributed, since the proposition may be true while 'not all terriers bark' is false.
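The two examples can be checked over a small finite model. This is an illustration under an invented domain (the individuals and extensions below are hypothetical), not a general proof about all models; terms become sets, and 'all F are G' becomes subset inclusion:

```python
# A toy domain: terriers are a subset of dogs.
dogs = {"rex", "fido", "spot"}
terriers = {"rex"}

# Case 1: 'all dogs bark' is true, and so is 'all terriers bark' --
# substituting the subset-denoting term preserves truth, so 'dogs'
# behaves as a distributed term here.
barkers = {"rex", "fido", "spot", "tweety"}
print(dogs <= barkers, terriers <= barkers)          # True True

# Case 2: 'not all dogs bark' is true while 'not all terriers bark'
# is false -- the substitution fails, so 'dogs' is not distributed
# in the negated proposition.
barkers2 = {"rex"}
print(not dogs <= barkers2, not terriers <= barkers2)  # True False
```

The set encoding makes the logical point visible: universal affirmatives are monotone (downward) in their subject term, while their negations are not.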
A model is a representation of one system by another, usually more familiar, system, whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful 'heuristic' role in science, there has been intense debate over whether a good model suffices for scientific explanation, or whether an organized structure of laws from which the phenomena can be deduced is also required. The debate was inaugurated by the French physicist Pierre Duhem (1861-1916) in The Aim and Structure of Physical Theory (tr. 1954). Duhem's conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but does not represent the deep underlying nature of reality. He also held that no hypothesis can be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system, although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.
Primary and secondary qualities are the division associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately reveals. The latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary qualities are less tied to the deliverance of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material, with a minimal listing of size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought in terms of identifying these powers with the textures of objects that, according to the corpuscularian science of the time, were the basis of an object's causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96).
Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.
Modal realism is the doctrine, advocated by the American philosopher David Lewis (1941-2002), that different possible worlds exist in exactly the way this one does: thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge either that the notion fails to fit with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.
The 'modality' of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things are: necessary as opposed to contingent propositions. Other qualifiers sometimes called 'modal' include the tense indicators, 'it will be the case that p' or 'it was the case that p', and there are affinities between the 'deontic' indicators, 'it ought to be the case that p' or 'it is permissible that p', and the notions of necessity and possibility.
The aim of a logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of answer is that if we do not we contradict ourselves (or, strictly speaking, we stand ready to contradict ourselves: someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something; however, he or she is not defended against adding the contradictory conclusion to his or her set of beliefs). There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find forms of reasoning such that anyone failing to conform to them will have improbable beliefs. Traditional logic dominated the subject until the 19th century, and although it has become increasingly recognized in the 20th century that fine work was done within that tradition, syllogistic reasoning is now generally regarded as a limited special case of the forms of reasoning that can be represented within the propositional and predicate calculus. These form the heart of modern logic. Their central notions of quantifiers, variables, and functions were the creation of the German mathematician Gottlob Frege, who is recognized as the father of modern logic, although his treatment of a logical system as an abstract mathematical or algebraic structure had been heralded by the English mathematician and logician George Boole (1815-64), whose pamphlet The Mathematical Analysis of Logic (1847) pioneered the algebra of classes. The work was developed further in An Investigation of the Laws of Thought (1854). Boole also published many works in pure mathematics and on the theory of probability.
His name is remembered in the title of Boolean algebra, and the algebraic operations he investigated are denoted by Boolean operations.
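Boole's algebra of classes can be illustrated in a few lines; the universe and the two classes below are arbitrary assumptions chosen only to exhibit the laws.

```python
# The algebraic operations Boole investigated, applied to classes
# (sets). The universe and the classes A and B are arbitrary
# illustrations.

universe = set(range(10))
A = {n for n in universe if n % 2 == 0}   # the even numbers
B = {n for n in universe if n < 5}

def complement(s):
    return universe - s

# De Morgan's laws hold in the algebra of classes:
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)

# Idempotence, and the class-theoretic law of excluded middle:
assert A | A == A
assert A | complement(A) == universe
```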
The syllogistic, or categorical, syllogism is the inference of one proposition from two premises. For example: 'all horses have tails; all things with tails are four-legged; so all horses are four-legged'. Each premise has one term in common with the conclusion, and one term in common with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So the first premise of the example is the minor premise, the second the major premise, and 'having a tail' is the middle term. This enables a classification of syllogisms according to the form of the premises and the conclusion. The other classification is by figure, or the way in which the middle term is placed in the premises.
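The validity of the example's form (the mood traditionally called Barbara) can be confirmed by brute force over a small domain, reading 'All S are P' extensionally as set inclusion; the three-element domain is an assumption, sufficient here only to illustrate the idea of checking every interpretation.

```python
from itertools import combinations

# Brute-force validity check, over a small domain, of the syllogism
# 'All A are B; all B are C; so all A are C' (the mood Barbara),
# read extensionally: 'All S are P' is true iff S is a subset of P.

def powerset(xs):
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

subsets = powerset({0, 1, 2})

def all_are(S, P):
    return S <= P

# Valid: every interpretation making both premises true also makes
# the conclusion true.
barbara_valid = all(
    all_are(A, C)
    for A in subsets for B in subsets for C in subsets
    if all_are(A, B) and all_are(B, C)
)
assert barbara_valid

# Contrast an invalid form: 'All B are A; all B are C; so all A are C'
# (a counter-interpretation: B empty, A = {0}, C empty).
invalid_form = all(
    all_are(A, C)
    for A in subsets for B in subsets for C in subsets
    if all_are(B, A) and all_are(B, C)
)
assert not invalid_form
```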
Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only a relatively small number of valid forms of argument. There have subsequently been attempts to extend it, but in general it has been eclipsed by the modern theory of quantification. The predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes '=' as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that x = y iff (∀F)(Fx ↔ Fy), which gives greater expressive power for less complexity.
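The higher-order definition of identity can be made concrete over a finite domain, where quantifying over 'all predicates F' amounts to quantifying over all subsets of the domain; the three-element domain below is an illustrative assumption.

```python
from itertools import combinations

# x = y iff (∀F)(Fx ↔ Fy), checked over a finite domain where the
# predicates F are modelled as the subsets of the domain.

domain = ["a", "b", "c"]
predicates = [set(c) for r in range(len(domain) + 1)
              for c in combinations(domain, r)]

def leibniz_equal(x, y):
    """True iff every predicate holding of x holds of y, and vice versa."""
    return all((x in F) == (y in F) for F in predicates)

assert leibniz_equal("a", "a")
assert not leibniz_equal("a", "b")   # the predicate {'a'} separates them
```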
Modal logic was of great importance historically, particularly in the light of various doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher Clarence Irving Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His two independent proofs showing that from a contradiction anything follows prompted the search for a relevance logic, using a notion of entailment stronger than that of strict implication.
Modal logic is formed by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written 'N' and 'M'), meaning necessarily and possibly, respectively. Uncontroversial principles like p → ◊p and □p → p will be wanted. Controversial ones include □p → □□p (if a proposition is necessary, it is necessarily necessary: characteristic of the system known as S4) and ◊p → □◊p (if a proposition is possible, it is necessarily possible: characteristic of the system known as S5). The classical model theory for modal logic, due to the American logician and philosopher Saul Kripke (1940-) and the Swedish logician Stig Kanger, involves valuing propositions not as true or false simpliciter, but as true or false at possible worlds, with necessity then corresponding to truth at all worlds, and possibility to truth at some world. Various different systems of modal logic result from adjusting the accessibility relation between worlds.
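A minimal sketch of this model theory, with an invented set of worlds, accessibility relation, and valuation:

```python
# Minimal Kripke-style evaluation of □p and ◊p. The worlds, the
# accessibility relation, and the valuation are invented examples.

access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w1", "w3"}}
true_at = {"p": {"w1", "w2"}}        # p holds at w1 and w2 only

def box(prop, w):
    """□p: true at w iff p holds at every world accessible from w."""
    return all(v in true_at[prop] for v in access[w])

def diamond(prop, w):
    """◊p: true at w iff p holds at some world accessible from w."""
    return any(v in true_at[prop] for v in access[w])

assert box("p", "w1")           # both worlds w1 sees satisfy p
assert not box("p", "w3")       # w3 sees itself, where p fails
assert diamond("p", "w3")       # but w3 also sees w1, where p holds
```

Systems such as S4 and S5 then correspond to conditions on `access` (transitivity, or transitivity plus symmetry and reflexivity).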
Saul Kripke also gives the classical modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching the name to its subject.
Semantics is one of the three branches into which 'semiotic' is usually divided: the study of the meaning of words, and of the relation of signs to the things to which they are applicable. In formal studies, a semantics is provided for a formal language when an interpretation or 'model' is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is to approach this by attempting to provide a truth definition for the language, which will involve giving a full account of the effect that the structures of different kinds have on the truth conditions of sentences containing them.
The basic case of reference is the relation between a name and the person or object which it names. The philosophical problems include trying to elucidate that relation, and trying to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between me and the word 'I', are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke's Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that determines the term's contribution to the truth conditions of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth conditions to sentences. Other approaches search for a more substantive relation, possibly causal, psychological, or social, between words and things.
Following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, Berry, Richard, etc., from the purely logical paradoxes in which no such notions are involved, such as Russell's paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is natural to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence 'All English sentences should have a verb' includes itself happily in the domain of sentences it is talking about), so the difficulty lies in framing a condition that excludes only pathological self-reference. Paradoxes of the second kind then need a different treatment. Whilst the distinction is convenient in allowing set theory to proceed by circumventing the latter paradoxes by technical means, even while there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There is still the possibility that, while there is no agreed solution to the semantic paradoxes, our understanding of Russell's paradox may be imperfect as well.
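The reasoning behind Russell's paradox can be mimicked computationally by modelling 'sets' as predicates (membership tests); the contradiction then surfaces as an infinite regress, which Python reports as a RecursionError. This is only an illustrative analogy, not set theory proper.

```python
import sys

# Russell's 'set' R = {x : x is not a member of itself}, with sets
# modelled as predicates. Asking whether R is a member of itself
# flips its own answer, so evaluation regresses forever; Python
# reports the regress as a RecursionError.

def russell(x):
    return not x(x)

sys.setrecursionlimit(500)       # keep the regress short
paradoxical = False
try:
    russell(russell)             # R ∈ R iff R ∉ R
except RecursionError:
    paradoxical = True
assert paradoxical
```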
Truth and falsity are the two classical truth-values that a statement, proposition, or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.) but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. A presupposition, in one sense, is any suppressed premise or background framework of thought necessary to make an argument valid or a position tenable; in a stricter sense, it is a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if 'p' presupposes 'q', 'q' must be true for 'p' to be either true or false. In the theory of knowledge, the English philosopher and historian R. G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of 'absolute presuppositions' which are not themselves properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means that either a third truth-value is found, 'intermediate' between truth and falsity, or classical logic is preserved, but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth or falsity without knowing more than the formation rules of the language.
Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, the examples are best handled by regarding the overall sentence as false when the existence claim fails, and by explaining the data that the English philosopher Peter Strawson (1919-2006) relied upon as the effects of 'implicature'.
Views about the meaning of terms will often depend on classifying the implications of sayings involving the terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may as a matter of convention carry an implicature: thus one of the relations between 'he is poor and honest' and 'he is poor but honest' is that they have the same content (are true in just the same conditions) but the second has implicatures (that the combination is surprising or significant) that the first lacks.
In classical logic a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter, the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called 'many-valued logics'.
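One such many-valued scheme, Kleene's strong three-valued logic, can be sketched numerically; encoding the intermediate value as 0.5 is an implementation convenience, not part of the logic.

```python
# Kleene's strong three-valued logic, with truth-values encoded as
# numbers: F = 0, N = 0.5 (the intermediate value), T = 1.

T, N, F = 1.0, 0.5, 0.0

def k_not(p):    return 1.0 - p
def k_and(p, q): return min(p, q)
def k_or(p, q):  return max(p, q)

assert k_or(T, N) == T           # a true disjunct settles the matter
assert k_and(F, N) == F          # a false conjunct settles it too
assert k_or(N, k_not(N)) == N    # excluded middle is no longer a law
```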
A semantic definition of truth provides a definition of the predicate '. . . is true' for a language that satisfies convention T, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83). His method of 'recursive' definition enables us to say for each sentence what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a 'metalanguage'; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth predicate. Whilst this enables the approach to avoid the semantic paradoxes, it conflicts with the idea that a language should be able to say everything that there is to be said, and other approaches have become increasingly important.
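The recursive style of the definition can be sketched for a toy object language; the atomic sentences and their interpretation below are invented, and Python plays the role of the metalanguage.

```python
# A toy recursive truth definition in the Tarskian style. Sentences
# of the object language are strings (atomic) or tuples (compound);
# the atomic facts are an assumed interpretation.

facts = {"snow is white": True, "grass is red": False}

def true(sentence):
    """Defined by recursion on the structure of the sentence."""
    if isinstance(sentence, str):          # atomic case
        return facts[sentence]
    op, *args = sentence
    if op == "not":
        return not true(args[0])
    if op == "and":
        return true(args[0]) and true(args[1])
    if op == "or":
        return true(args[0]) or true(args[1])
    raise ValueError(f"unknown connective: {op}")

# Instances of convention T: '...' is true iff ...
assert true("snow is white") == facts["snow is white"]
assert true(("not", "grass is red"))
assert true(("or", "grass is red", "snow is white"))
```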
The truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
Inferential semantics takes the role of sentences in inference to give a more important key to their meaning than their 'external' relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. Also known as functional role semantics or procedural semantics, the view bears a relation to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.
The semantic theory of truth is the view that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or about truth as shared across different languages. The view is similar to the disquotational theory.
The redundancy theory, also known as the 'deflationary' view of truth, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell's paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. Ramsey's name is also attached to a technique for handling theoretical terms: take all the sentences affirmed in a scientific theory that use some term, e.g., 'quark', and replace the term by a variable; instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the 'topic-neutral' structure of the theory, while removing any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be trivially interpretable, and the content of the theory may reasonably be felt to have been lost.
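Schematically, the construction replaces each theoretical term by an existentially bound variable. Writing the theory as a single sentence T in its theoretical terms and its observational vocabulary (the notation below is ours, not Ramsey's):

```latex
% The Ramsey sentence of a theory T: theoretical terms tau_i are
% replaced by existentially bound second-order variables X_i, while
% observational terms o_j are left intact.
\[
  T(\tau_1,\dots,\tau_n;\, o_1,\dots,o_m)
  \quad\longmapsto\quad
  \exists X_1 \cdots \exists X_n\,
  T(X_1,\dots,X_n;\, o_1,\dots,o_m)
\]
```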
Both Frege and Ramsey are agreed that the essential claim is that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence, redundancy); and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second may translate as '(∀p, q)((p & (p → q)) → q)', where there is no use of a notion of truth.
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Postmodern writing frequently advocates that we must abandon such norms, along with a discredited 'objective' conception of truth. Perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when not-p.
The simplest formulation of the disquotational theory is the claim that expressions of the form ''S' is true' mean the same as expressions of the form 'S'. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ''Dogs bark' is true' or whether they say 'dogs bark'. In the former representation of what they say, the sentence 'Dogs bark' is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it someone might know that ''Dogs bark' is true' without knowing what it means (for instance, if it occurs in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the 'redundancy theory of truth'.
Validity is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Many philosophers identify this with its being logically impossible that the premises should all be true yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for a stronger notion is the field of relevance logic.
From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise.
But this point of view by no means embraces the whole of the actual process, for it slurs over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a 'theory'. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the 'truth' of the theory lies.
Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution, rather than in providing a convincing mechanism for genetic change; and Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as 'neo-Darwinism' became the orthodox theory of evolution in the life sciences.
Evolutionary ethics is the 19th-century attempt to base ethical reasoning on the presumed facts about evolution; the movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). The premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competition and aggressive relations between people in society. Subsequently the relationship between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
Evolutionary psychology, once again, attempts to found psychology on evolutionary principles, on which a variety of higher mental functions are viewed as adaptations, forged in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who 'free-ride' on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself, and by William James, as well as by the sociobiology of E. O. Wilson. Such labels are applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.
Another assumption frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin's view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, however, cooperation appears to exist in complementary relation to competition. Complementary relationships of this kind yield emergent self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.
According to E. O. Wilson, the 'human mind evolved to believe in the gods' and people 'need a sacred narrative' to have a sense of higher purpose. Yet it is also clear that the 'gods' in his view are merely human constructs, and, therefore, there is no basis for dialogue between the world-views of science and religion. 'Science for its part', said Wilson, 'will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments.' The eventual result of the competition between the two, he suggests, will be the secularization of the human epic and of religion itself.
Man has come to the threshold of a state of consciousness regarding his nature and his relationship to the Cosmos in terms that reflect 'reality'. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing 'reality' as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide 'comprehensible' guides to living. In this way, Man's imagination and intellect play vital roles in his survival and evolution.
Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of 'logical positivist' approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the 'explanans' (that which does the explaining) and the 'explanandum' (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler's (1571-1630) laws of planetary motion were explained by being deducible from Newton's laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include querying whether covering laws are necessary to explanation (we explain many everyday events without overtly citing laws); querying whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and querying whether a purely logical relationship is adapted to capturing the requirements we make of explanations. These may include, for instance, that we have a 'feel' for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on; and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
The argument to the best explanation is the view that once we can select the best of the competing explanations of an event, we are justified in accepting it, or even believing it. The principle needs qualification, since it is sometimes unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
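The coin example can be made numerically vivid. The sketch below is a toy Bayesian calculation, not part of the original text, and the 1% prior probability of bias is an illustrative assumption: it compares how well the 'fair' and 'biased at 0.53' hypotheses fit 530 heads in 1,000 tosses, and shows that a modest antecedent improbability of bias can outweigh the better fit.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n tosses with heads-probability p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

k, n = 530, 1000
like_fair = binom_pmf(k, n, 0.5)     # likelihood under the 'fair coin' hypothesis
like_biased = binom_pmf(k, n, 0.53)  # likelihood under the 'biased at 0.53' hypothesis

# The bias hypothesis fits the data a few times better...
bayes_factor = like_biased / like_fair

# ...but if bias was antecedently improbable (say a 1% prior),
# the posterior odds still favour the fair coin.
prior_odds = 0.01 / 0.99
posterior_odds = bayes_factor * prior_odds
```

The point of the exercise is exactly the qualification in the text: a better-explaining hypothesis may still be the wrong one to accept once its antecedent improbability is priced in.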
The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It similarly mingles with metaphysics, in questions about truth and the relationship between sign and object. Much philosophy, especially in the 20th century, has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form, and the basis of the division between syntax and semantics, as well as problems of understanding the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
On the truth-conditional conception, to understand a sentence is to know its truth-conditions. The conception has remained so central that those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts contextually performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
The theorist of truth-conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. The axiom ''London' refers to the city in which there was a huge fire in 1666' is a true statement about the reference of 'London'. It is a consequence of a theory which substitutes this axiom for the simple axiom ''London' refers to London' in our truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name 'London' without knowing that last-mentioned truth-condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on the theorist of meaning as truth-conditions to state which axioms are meaning-specifying in a way which does not presuppose any prior, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth-conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom.
Since the content of a claim that the sentence 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth-conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its central claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth-conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A.J. Ayer, the later Wittgenstein, Quine, Strawson, Horwich, and - confusingly and inconsistently, if this article is correct - Frege himself. But is the minimal theory correct?
The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence; but in fact it seems that each instance of the equivalence principle can itself be explained. An instance such as ''London is beautiful' is true if and only if London is beautiful' can be derived from truths about the reference of the name 'London' and about the condition under which the predicate 'is beautiful' is true of an object. This would be a pseudo-explanation if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'.
The counterfactual conditional, sometimes known as the subjunctive conditional, is a conditional of the form 'if p were to happen, q would', or 'if p were to have happened, q would have happened', where the supposition of 'p' is contrary to the known fact that 'not-p'. Such assertions are nevertheless useful: 'if you had broken the bone, the X-ray would have looked different', or 'if the reactor were to fail, this mechanism would click in' are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals ('if the metal were to be heated, it would expand'), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever 'p' is false, so there would be no division between true and false counterfactuals.
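The point about material implication can be checked mechanically. A minimal sketch (illustrative only): the truth-functional conditional 'if p then q' is equivalent to 'not-p or q', so it is automatically true at every row of the truth table where the antecedent is false.

```python
def material_implication(p, q):
    """Truth-functional conditional: 'if p then q' read as 'not-p or q'."""
    return (not p) or q

# Full truth table: the only false row is antecedent true, consequent false.
table = {(p, q): material_implication(p, q)
         for p in (True, False) for q in (True, False)}
```

Since every conditional with a false antecedent comes out true on this reading, material implication cannot mark any division between true and false counterfactuals, whose antecedents are false by hypothesis.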
Although the subjunctive form indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: 'if you run out of water, you will be in trouble' seems equivalent to 'if you were to run out of water, you would be in trouble'. In other contexts there is a big difference: 'if Oswald did not kill Kennedy, someone else did' is clearly true, whereas 'if Oswald had not killed Kennedy, someone else would have' is most probably false.
The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether 'q' is true in the 'most similar' possible worlds to ours in which 'p' is true. The similarity ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and that categorizing them as counterfactuals or not may be of limited use.
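To see how much hangs on the similarity ranking, here is a toy sketch (an illustration of the idea, not Lewis's own formal semantics): worlds are modelled as sets of atomic facts, 'closeness' is crudely measured by counting atomic differences from the actual world, and 'if p were the case, q would be' counts as true when q holds at every closest p-world.

```python
def counterfactual(p, q, actual, worlds):
    """Toy Lewis-style evaluation: true iff q holds at all p-worlds
    that differ least (in atomic facts) from the actual world."""
    p_worlds = [w for w in worlds if p(w)]
    if not p_worlds:
        return True  # vacuously true: no world makes the antecedent true
    closest = min(len(w ^ actual) for w in p_worlds)
    return all(q(w) for w in p_worlds if len(w ^ actual) == closest)

# Atomic facts: a match may be 'struck', 'dry', 'lit'.
actual = frozenset({"dry"})  # the match is dry but never struck
worlds = [actual,
          frozenset({"struck", "dry", "lit"}),  # struck and it lights
          frozenset({"struck", "dry"}),         # struck, stays unlit
          frozenset({"struck"})]                # struck and somehow wet

struck = lambda w: "struck" in w
lit = lambda w: "lit" in w

verdict = counterfactual(struck, lit, actual, worlds)
```

On this crude atomic-difference ranking the closest struck-world is the one where nothing else changes, so the match fails to light and 'if it were struck, it would light' comes out false; a ranking that held the laws of nature fixed would give the opposite verdict. That sensitivity is precisely why the similarity ranking is controversial.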
A conditional is any proposition of the form 'if p then q'. The proposition hypothesized, 'p', is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, merely telling us that either not-p, or q. Stronger conditionals include elements of modality, corresponding to the thought that 'if p is true then q must be true'. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether this flexibility should be explained semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there would be one basic meaning, with surface differences arising from other implicatures.
We now turn to pragmatism, a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles Sanders Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as having only the meaning of a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including for example belief in God, are true if they work satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparently subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart' or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others. The implication that this is what makes it true that other persons have minds is the disturbing part.
Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1926-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant's doctrine of the primacy of practical over pure reason, and continues to play an influential role in the theory of meaning and of truth.
Functionalism, in the philosophy of mind, is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by the triplet of relations they stand in: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantages of functionalism include its fit with the way we know of mental states, both our own and those of others, which is via their effects on behaviour and other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds.
It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to people whose causal structure may be very different from our own. It may then seem as though beliefs and desires can be 'variably realized' in different causal architectures, just as much as they can be in different neurophysiological states.
The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Some Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by the American philosopher C.S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
The Association for International Conciliation first published William James's pacifist statement, 'The Moral Equivalent of War', in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism - a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution.
Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of earlier positivism that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.
The pragmatist tradition was revitalized in the 1980s by the American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest in the classic pragmatists - Peirce, James, and Dewey - has been renewed as an alternative to Rorty's interpretation of the tradition.
In defining certainty, it is crucial to note that the term has both an absolute and a relative sense. Roughly, we take a proposition to be certain when we have no doubt about its truth. We may do this in error or unreasonably, but objectively a proposition is certain when such absence of doubt is justifiable. The sceptical tradition in philosophy denies that objective certainty is often possible, or even possible at all, either for any proposition whatsoever, or for any proposition from some suspect family (ethics, theory, memory, empirical judgement, etc.). A major sceptical weapon is the possibility of upsetting events that can cast doubt back onto what were hitherto taken to be certainties. Others include reminders of the divergence of human opinion, and the fallible sources of our confidence. Foundationalist approaches to knowledge look for a basis of certainty upon which the structure of our systems of belief is built. Others reject the metaphor, looking for mutual support and coherence, without foundations.
In moral theory, absolutism is the view that there are inviolable moral standards or rules that do not vary with human desires, policies, or prescriptions.
In Kantian ethics, notoriously difficult to read as it is, a hypothetical imperative embeds a command which is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet'. The injunction to stay quiet applies only to those with the antecedent desire or inclination; if one has no desire to look wise, the injunction does not apply. A categorical imperative, by contrast, cannot be so avoided: it is a requirement that binds anybody, regardless of their inclination. It could be represented as, for example, 'tell the truth (regardless of whether you want to or not)'. The distinction is not always signalled by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
In the Grundlegung zur Metaphysik der Sitten (1785), Kant discussed five forms of the categorical imperative: (1) the formula of universal law: 'act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or considering 'the will of every rational being as a will which makes universal law'; (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.
A categorical proposition, by contrast, is a proposition that is not a conditional: a straightforward affirmation or denial. Modern opinion is wary of this distinction, since what appears categorical may vary with notation. Apparently categorical propositions may also turn out to be disguised conditionals: 'X is intelligent' (categorical?) = 'if X is given a range of tasks, she performs them better than many people' (conditional?). The problem, nonetheless, is not merely one of classification, since deep metaphysical questions arise when facts that seem to be categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
'Field' can mean a limited area of knowledge or endeavour to which pursuits, activities, and interests are confined, but it is also a central concept of physical theory. In this sense, a field is defined by the distribution of a physical quantity, such as temperature, mass density, or potential energy, at different points in space. In the particularly important example of force fields, such as gravitational, electrical, and magnetic fields, the field value at a point is the force which a test particle would experience if it were located at that point. The philosophical problem is whether a force field is to be thought of as purely potential, so that the presence of a field merely describes the propensity of masses to move relative to each other, or whether it should be thought of in terms of physically real modifications of a medium, whose properties result in such powers. That is, are force fields purely potential, fully characterized by dispositional statements or conditionals, or are they categorical or actual? The former option seems to require admitting ungrounded dispositions, or regions of space that differ only in what happens if an object is placed there. The law-like shape of these dispositions, apparent for example in the curved lines of force of the magnetic field, may then seem quite inexplicable. To atomists, such as Newton, it would represent a return to Aristotelian entelechies, or quasi-psychological affinities between things, which are responsible for their motions. The latter option requires understanding how forces of attraction and repulsion can be 'grounded' in the properties of the medium.
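The dispositional reading of a field can be made concrete in a short sketch (illustrative only; the function name and the sample mass are our assumptions): the gravitational field value at a point is the force per unit mass a test particle would experience there, falling off as the inverse square of the distance from the source.

```python
import math

G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def grav_field(source_mass, source_pos, point):
    """Field value at `point` (2-D): the force per unit mass a test
    particle *would* feel if placed there - a pure disposition until
    a particle actually occupies the point."""
    dx, dy = point[0] - source_pos[0], point[1] - source_pos[1]
    r = math.hypot(dx, dy)
    magnitude = G * source_mass / r**2                 # inverse-square law
    return (-magnitude * dx / r, -magnitude * dy / r)  # directed toward the source

# Field of an Earth-like mass at 1 m and at 2 m from the source:
g1 = grav_field(5.0e24, (0.0, 0.0), (1.0, 0.0))
g2 = grav_field(5.0e24, (0.0, 0.0), (2.0, 0.0))
# Doubling the distance quarters the field strength.
```

Note that nothing in the computation mentions any actual particle: the 'curved lines of force' the text describes are just this function's values at every point, which is exactly why the purely dispositional reading can seem ungrounded.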
The basic idea of a field is arguably present in Leibniz, who was certainly hostile to Newtonian atomism, although his equal hostility to 'action at a distance' muddies the waters. The idea is usually credited to the Jesuit mathematician and scientist Joseph Boscovich (1711-87) and to Immanuel Kant (1724-1804), both of whom influenced the scientist Michael Faraday, with whose work the physical notion became established. In his paper 'On the Physical Character of the Lines of Magnetic Force' (1852), Faraday suggested several criteria for assessing the physical reality of lines of force, such as whether they are affected by an intervening material medium, and whether their motion depends on the nature of what is placed at the receiving end. As far as electromagnetic fields go, Faraday himself inclined to the view that the mathematical similarity between heat flow, currents, and electromagnetic lines of force was evidence for the physical reality of the intervening medium.
The pragmatic theory of truth, especially associated with the American psychologist and philosopher William James (1842-1910), holds that the truth of a statement can be defined in terms of the 'utility' of accepting it. Put so baldly, the view is open to objection, since there are things that are false that it may be useful to accept, and conversely there are things that are true that it may be damaging to accept. Nevertheless, there are deep connections between the idea that a representation system is accurate and the likely success of the projects of whoever possesses it. The evolution of a system of representation, whether perceptual or linguistic, seems bound to connect success with evolutionary adaptation, or with utility in the widest sense. The Wittgensteinian doctrine that meaning is use points in the same direction, stressing the relations between belief and human attitude, emotion, and action. One way of cementing the connection between truth and success is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant's doctrine of the primacy of practical over pure reason, and continues to play an influential role in the theory of meaning and of truth.
James (1842-1910), with characteristic generosity, exaggerated his debt to Charles S. Peirce (1839-1914). Peirce charged that the Cartesian method of doubt encouraged people to pretend to doubt what they did not doubt in their hearts, and criticized its individualist insistence that the ultimate test of certainty is to be found in the individual's own consciousness.
From his earliest writings, James understood cognitive processes in teleological terms. Thought, he held, assists us in the satisfaction of our interests. His 'Will to Believe' doctrine, the view that we are sometimes justified in believing beyond the evidence, relies upon the notion that a belief's benefits are relevant to its justification. His pragmatic method of analysing philosophical problems, which requires that we find the meaning of terms by examining their application to objects in experimental situations, similarly reflects the teleological approach in its attention to consequences.
Such an approach, however, sets James's theory of meaning apart from verificationism, with its dismissal of metaphysics. Unlike the verificationist, who takes cognitive meaning to be a matter only of consequences in sensory experience, James took pragmatic meaning to include emotional and practical responses. Moreover, his pragmatic method offered a standard of value for metaphysical claims, not a way of dismissing them as meaningless. It should also be noted that in his more circumspect moments James did not hold that even this broad set of consequences was exhaustive of a term's meaning. 'Theism', for example, he took to have antecedent, definitional meaning in addition to its important pragmatic meaning.
James's theory of truth reflects his teleological conception of cognition: a true belief is one that is compatible with our existing system of beliefs and leads us to satisfactory interaction with the world.
Peirce's famous pragmatist principle, by contrast, is a rule of logic employed in clarifying our concepts and ideas. Consider the claim that the liquid in a flask is an acid: if we believe this, we expect that blue litmus paper placed in it would turn red; we expect an action of ours to have certain experimental results. The pragmatic principle holds that listing the conditional expectations of this kind that we associate with applications of a concept provides a complete and orderly clarification of the concept. This is relevant to the logic of abduction: for the clarificationist, the pragmatic principle provides all the information about the content of a hypothesis that is relevant to deciding whether it is worth testing.
More important still is the application of the pragmatic principle in Peirce's account of reality: when we take something to be real, we think it is 'fated to be agreed upon by all who investigate' the matter; in other words, if I believe that it is really the case that 'p', then I expect that if anyone were to inquire fully enough into whether 'p', they would arrive at the belief that 'p'. It is not part of the theory that the experimental consequences of our actions should be specified in a favoured empiricist vocabulary; Peirce insisted that perceptual judgments are theory-laden. Nor is it his view that the collected conditionals that clarify a concept are all analytic. In addition, in later writings, he argues that the pragmatic principle could only be made plausible to someone who accepted metaphysical realism: it requires that 'would-bes' are objective and, of course, real.
If realism itself can be given a fairly quick characterization, it is more difficult to chart the various forms of opposition to it. Opponents may deny that the entities posited by the relevant discourse exist, or at least exist independently: the standard example is idealism, the view that reality is somehow mind-dependent or mind-co-ordinated, that the real objects comprising the 'external world' do not exist independently of minds, but only as in some way correlative to mental operations. The doctrine of idealism centres on the conception that reality as we understand it is meaningful and reflects the workings of mind, and it construes this as meaning that the inquiring mind makes a formative contribution not merely to our understanding of the nature of the 'real' but even to the character we attribute to it.
The term 'real' is most straightforwardly used when qualifying another description: a real 'x' may be contrasted with a fake 'x', a failed 'x', a near 'x', and so on. To treat something as real, without qualification, is to suppose it to be part of the actual world. To reify something is to suppose that we are committed to its existence by some theory we accept. The central error in thinking of reality as the totality of existence is to think of the 'unreal' as a separate domain of things, deprived, as it were, of the benefits of existence.
The notion of the non-existence of all things, or 'nothing', is a product of the logical confusion of treating the term 'nothing' as itself a referring expression instead of a quantifier. (Stated informally, a quantifier is an expression that reports the quantity of times that a predicate is satisfied in some class of things, i.e., in a domain.) This confusion leads the unsuspecting to think that a sentence such as 'Nothing is all around us' talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate 'is all around us' has application. The feelings that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing are not properly the experience of nothing, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between existentialism and analytic philosophy, on this point, is that the former is afraid of Nothing, whereas the latter thinks that there is nothing to be afraid of.
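The quantifier reading can be made explicit in a standard first-order rendering (a conventional notation, not one used by the philosophers named above). 'Nothing is all around us' does not name a thing; it denies that a predicate has an instance:

```latex
% 'Nothing is all around us', with A(x) for 'x is all around us':
% the sentence denies that the predicate A applies to anything,
\neg \exists x\, A(x)
% which is equivalent to saying that everything fails to satisfy A:
\forall x\, \neg A(x)
```

On neither reading does 'nothing' pick out an entity; the apparent name disappears into the quantifier structure.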
A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Still other problems arise over conceptualizing empty space and time.
Realism names the standard opposition between those who affirm and those who deny the real existence of some kind of thing, or some kind of fact or state of affairs. Almost any area of discourse may be the focus of this dispute: the external world, the past and future, other minds, mathematical objects, possibilities, universals, and moral or aesthetic properties are examples. One influential suggestion, associated with the British philosopher of logic and language Michael Dummett (1925-2011), is borrowed from the 'intuitionistic' critique of classical mathematics, and proposes that the unrestricted use of the principle of bivalence is the trademark of realism. However, this has to overcome counter-examples both ways: although Aquinas was a moral realist, he held that moral reality was not sufficiently structured to make every moral claim true or false; whereas Kant believed that he could use the law of bivalence happily in mathematics, precisely because mathematics was our own construction. Realism can itself be subdivided: Kant, for example, combines empirical realism (within the phenomenal world the realist says the right things: surrounding objects really exist independently of us and our mental states) with transcendental idealism (the phenomenal world as a whole reflects the structures imposed on it by the activity of our minds as they render it intelligible to us). In modern philosophy the conceptual opposition to realism has come from philosophers such as Goodman, who is impressed by the extent to which we perceive the world through conceptual and linguistic lenses of our own making.
The modern treatment of existence in the theory of quantification is sometimes put by saying that existence is not a predicate. The idea is that the existential quantifier is itself an operator on a predicate, indicating that the property it expresses has instances. Existence is thereby treated as a second-order property, or a property of properties. In this it is like number, for when we say that there are three things of a kind, we do not describe the things (as we would if we said there are red things of the kind), but instead attribute a property to the kind itself. The parallel with numbers is exploited by the German mathematician and philosopher of mathematics Gottlob Frege in the dictum that affirmation of existence is merely denial of the number nought. A problem is nevertheless created by sentences like 'This exists', where some particular thing is indicated: such a sentence seems to express a contingent truth (for this might not have existed), yet no other predicate is involved. 'This exists' is therefore unlike 'Tamed tigers exist', where a property is said to have an instance, for the word 'this' does not pick out a property, but only an individual.
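The contrast can be put in standard quantificational notation (a conventional modern rendering, not Frege's own symbolism): the general sentence attributes instantiation to a property, and Frege's dictum ties existence to number.

```latex
% 'Tamed tigers exist': the property of being a tamed tiger has an instance.
\exists x\, \big( \mathrm{Tiger}(x) \wedge \mathrm{Tamed}(x) \big)
% Frege's dictum: affirmation of existence is denial of the number nought,
% i.e. to say that Fs exist is to say that the number of Fs is not zero:
\exists x\, F(x) \;\longleftrightarrow\; \#\{\, x : F(x) \,\} \neq 0
```

'This exists', by contrast, supplies no predicate F for the quantifier to operate on, which is why it resists the second-order analysis.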
Possible worlds seem able to differ from each other purely in the presence or absence of individuals, and not merely in the distribution of exemplification of properties.
Philosophers have pondered whether the unreal belongs to a domain of Being, yet little can be said about Being as such, and it is not apparent that there can be a subject of study called Being by itself. Nevertheless, the concept has had a central place in philosophy from Parmenides to Heidegger. The essential question, 'Why is there something and not nothing?', prompts logical reflection on what it is for a universal to have an instance, and a long history of attempts to explain contingent existence by reference to a necessary ground.
Ever since Plato, this ground has been conceived as a self-sufficient, perfect, unchanging, and eternal something, identified with the Good or God, but whose relation with the everyday world remains obscure. The celebrated ontological argument for the existence of God was first propounded by Anselm in his Proslogion. The argument proceeds by defining God as 'something than which nothing greater can be conceived'. God then exists in the understanding, since we understand this concept. However, if He existed only in the understanding, something greater could be conceived, for a being that exists in reality is greater than one that exists only in the understanding. But then we can conceive of something greater than that than which nothing greater can be conceived, which is contradictory. Therefore, God cannot exist only in the understanding, but exists in reality.
The cosmological argument, an influential argument (or family of arguments) for the existence of God, finds its premiss in the claim that all natural things are dependent for their existence on something else. The totality of dependent beings must then itself depend upon a non-dependent, or necessarily existent, being, which is God. Like the argument from design, the cosmological argument was attacked by the Scottish philosopher and historian David Hume (1711-76) and by Immanuel Kant.
Its main problem, nonetheless, is that it requires us to make sense of the notion of necessary existence. For if the answer to the question of why anything exists is that some other thing of a similar kind exists, the question merely arises again. So the 'God' that ends the regress must exist necessarily: it must not be an entity of which the same kind of question can be raised. The other problem with the argument is that of attributing concern and care to the deity, that is, of connecting the necessarily existent being it derives with human values and aspirations.
The ontological argument has been treated by modern theologians such as Barth, following Hegel, not so much as a proof with which to confront the unconverted, but as an explanation of the deep meaning of religious belief. Collingwood regards the argument as proving not that because our idea of God is that of id quo maius cogitari nequit, therefore God exists, but that because this is our idea of God, we stand committed to belief in its existence: its existence is a metaphysical point, or absolute presupposition, of certain forms of thought.
In the 20th century, modal versions of the ontological argument were propounded by the American philosophers Charles Hartshorne, Norman Malcolm, and Alvin Plantinga. One version defines something as unsurpassably great if it exists and is perfect in every possible world. It then asks us to allow that it is at least possible that an unsurpassably great being exists. This means that there is a possible world in which such a being exists. However, if it exists in one world, it exists in all (for the fact that such a being exists in one world entails that it exists and is perfect in every world), so it exists necessarily. The correct response to this argument is to disallow the apparently reasonable concession that it is possible that such a being exists. This concession is much more dangerous than it looks, since in the modal logic involved, from 'possibly necessarily p' we can derive 'necessarily p'. A symmetrical proof starting from the assumption that it is possible that such a being not exist would derive that it is impossible that it exists.
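The dangerous step can be sketched in the modal system S5, where the characteristic axiom makes 'possibly necessarily p' collapse into 'necessarily p' (a standard textbook reconstruction, not attributed to any one of the authors named above):

```latex
% Let G abbreviate 'an unsurpassably great being exists'.
% In S5 the collapse principle is valid:
\Diamond \Box p \;\rightarrow\; \Box p
% The modal ontological argument then runs:
% 1. \Diamond \Box G   (the 'reasonable' concession: possibly, G holds necessarily)
% 2. \Box G            (from 1, by the S5 collapse principle)
% The symmetrical proof: from \Diamond \neg \Box G, i.e. \neg \Box \Box G,
% S5 (where \Box G \rightarrow \Box \Box G) likewise yields \neg \Box G.
```

Everything therefore turns on which possibility premiss is granted, which is why the concession of mere possibility is where the argument must be resisted.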
The doctrine of acts and omissions holds that it makes an ethical difference whether an agent actively intervenes to bring about a result, or omits to act in circumstances in which it is foreseen that as a result of the omission the same result occurs. Thus, suppose that I wish you dead. If I act to bring about your death, I am a murderer; however, if I happily discover you in danger of death and fail to act to save you, I am not acting, and therefore, according to the doctrine of acts and omissions, not a murderer. Critics reply that omissions can be as deliberate and immoral as actions: if I am responsible for your food and fail to feed you, the omission is surely a killing. 'Doing nothing' can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and, depending on the context, may be a way of deceiving, betraying, or killing. Nonetheless, criminal law finds it convenient to distinguish discontinuing an intervention, which is permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears such general moral weight.
The doctrine of double effect is a principle attempting to define when an action that has both good and bad results is morally permissible. In one formulation such an action is permissible if (1) the action is not wrong in itself, (2) the bad consequence is not that which is intended, (3) the good is not itself a result of the bad consequence, and (4) the two consequences are commensurate. Thus, for instance, I might justifiably bomb an enemy factory, foreseeing but not intending the death of nearby civilians, whereas bombing nearby civilians intentionally would be disallowed. The principle has its roots in Thomist moral philosophy. St. Thomas Aquinas (1225-74) held that it is meaningless to ask whether a human being is two things (soul and body) or one, just as it is meaningless to ask whether the wax and the shape given to it by the stamp are one: on this analogy the soul is the form of the body. Life after death is possible only because a form itself does not perish (perishing is a loss of form).
The form is, therefore, in some sense available to reanimate a new body. It is thus not I who survive bodily death, but I may be resurrected if the same body becomes reanimated by the same form. On Aquinas's account, a person has no privileged self-understanding: we understand ourselves as we do everything else, by way of sense experience and abstraction, and knowing the principle of our own lives is an achievement, not a given.
The special way that we each have of knowing our own thoughts, intentions, and sensations has been denied by the many philosophical behaviourist and functionalist tendencies that have found it important to insist that there is no such special way, arguing that I know of my own mind in much the same way that I know of yours, e.g., by seeing what I say when asked. Others, however, point out that reporting the results of introspection is a particular and legitimate kind of behavioural access that deserves notice in any account of human psychology. The philosophy of history is reflection upon the nature of history, or of historical thinking. The term was used in the 18th century, e.g., by Voltaire, to mean critical historical thinking as opposed to the mere collection and repetition of stories about the past. In Hegel, however, it came to mean universal or world history. The Enlightenment confidence that science, reason, and understanding gave history a progressive moral thread was transformed, under the influence of Romanticism, in the speculative philosophies of history of Gottfried Herder (1744-1803) and Immanuel Kant. This essentially speculative philosophy of history is given an extra Kantian twist by the German idealist Johann Fichte, in whom the association of temporal succession with logical implication introduces the idea that concepts themselves are the dynamic engine of historical change. The idea is readily intelligible once the worlds of nature and of thought are identified. The work of Herder, Kant, Fichte and Schelling is synthesized by Hegel: history has a plot, namely the moral development of man, equated with freedom within the state; this in turn is the development of thought, or a logical development in which the various necessary moments in the life of the concept are successively achieved and improved upon.
Hegel's method is at its most successful when the object is the history of ideas, and the evolution of thinking may march in step with the logical oppositions, and their resolutions, encountered by various systems of thought.
With the revolutionary communism of Karl Marx (1818-83) and the German social philosopher Friedrich Engels (1820-95), there emerges a rather different kind of story, based upon Hegel's progressive structure but relocating the achievement of the goal of history to a future in which the political conditions for freedom come to exist, so that economic and political forces rather than 'reason' are in the engine room. Although speculations of this kind continued to be written, by the late 19th century large-scale speculation had largely given way to concern with the nature of historical understanding, and in particular with a comparison between the methods of natural science and those of the historian. For writers such as the German neo-Kantian Wilhelm Windelband and the German philosopher, literary critic and historian Wilhelm Dilthey, it was important to show that the human sciences, such as history, are objective and legitimate, yet in some way different from the enquiry of the scientist. Since the subject-matter is the past thought and actions of human beings, what is needed is an ability to re-live that past thought, knowing the deliberations of past agents as if they were the historian's own. The most influential British writer on this theme was the philosopher and historian R. G. Collingwood (1889-1943), whose The Idea of History (1946) contains an extensive defence of the verstehen approach: understanding others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living their situation and deliberating as they did.
The question of the form of historical explanation, and the fact that general laws have either no place or only a minor place in the human sciences, is also prominent in this tradition: we understand agents' actions not by subsuming them under laws, but by re-living their situation and thereby understanding what they experienced and thought.
The 'theory-theory' is the view that everyday attributions of intention, belief and meaning to other persons proceed via tacit use of a theory that enables one to construct these interpretations as explanations of their doings. The view is commonly held along with functionalism, according to which psychological states are theoretical entities, identified by the network of their causes and effects. The theory-theory has different implications, depending on which feature of theories is being stressed. Theories may be thought of as capable of formalization, as yielding predictions and explanations, as achieved by a process of theorizing, as answering to empirical evidence that is in principle describable without them, as liable to be overturned by newer and better theories, and so on. The main problem with seeing our understanding of others as the outcome of a piece of theorizing is the non-existence of a medium in which this theory can be couched, since the child learns simultaneously the minds of others and the meaning of terms in its native language.
On the rival view, our understanding of others is not gained by the tacit use of a 'theory' enabling us to infer what thoughts or intentions explain their actions, but by re-living the situation 'in their moccasins', or from their point of view, and thereby understanding what they experienced and thought, and therefore expressed. Understanding others is achieved when we can ourselves deliberate as they did, and hear their words as if they are our own. The suggestion is a modern development of the verstehen tradition associated with Dilthey, Weber and Collingwood.
In the theory of knowledge, Aquinas holds the Aristotelian doctrine that knowing entails some similarity between the knower and what is known: a human being's corporeal nature therefore requires that knowledge start with sense perception. The same limitation does not apply to beings higher in the hierarchy, such as the angels of the celestial heavens.
In the domain of theology Aquinas deploys the distinction, emphasized by Eriugena, between knowing that God exists and knowing what God is, and offers five arguments for the existence of God. They are: (1) motion is only explicable if there exists an unmoved first mover; (2) the chain of efficient causes demands a first cause; (3) the contingent character of existing things in the world demands a different order of existence, in other words something that has necessary existence; (4) the gradations of value in things in the world require the existence of something that is most valuable, or perfect; and (5) the orderly character of events points to a final cause, or end to which all things are directed, and the existence of this end demands a being that ordained it. All of these are physico-theological arguments: standing between reason and faith, Aquinas lays out proofs of the existence of God.
He readily recognizes that there are doctrines, such as the Incarnation and the nature of the Trinity, known only through revelation, and whose acceptance is more a matter of moral will. God's essence is identified with his existence, as pure actuality. God is simple, containing no potentiality. Nevertheless, we cannot obtain knowledge of what God is (his quiddity), and must remain content with descriptions that apply to him partly by way of analogy: what God reveals of himself is not himself.
A classic problem in ethics was posed by the English philosopher Philippa Foot in her 'The Problem of Abortion and the Doctrine of the Double Effect' (1967). A runaway trolley comes to a fork in the track. One person is working on one branch and five on the other, and the trolley will kill anyone working on the branch it enters. Clearly, to most minds, the driver should steer for the less populated branch. But now suppose that, left to itself, the trolley will enter the branch with the five workers, and you as a bystander can intervene, altering the points so that it veers onto the other. Is it right, or obligatory, or even permissible for you to do this, thereby apparently involving yourself in responsibility for the death of one person? After all, whom have you wronged if you leave it to go its own way? The situation is typical of others in which utilitarian reasoning seems to lead to one course of action, but a person's integrity or principles may oppose it.
Describing events that merely happen does not of itself permit us to talk of rationality and intention, which are the categories we may apply if we conceive of them as actions. We think of ourselves not only as passive, but as creatures that make things happen. Understanding this distinction gives rise to major problems concerning the nature of agency, the causation of bodily events by mental events, and the understanding of the 'will' and 'free will'. Other problems in the theory of action include drawing the distinction between an action and its consequence, and describing the structure involved when we do one thing 'by' doing another thing. Even the placing and dating of actions can be problematic: where someone shoots someone on one day and in one place, and the victim then dies on another day and in another place, where and when did the murderous act take place?
As for causation, it is not clear that only events are causally related. Kant cites the example of a cannonball at rest upon a cushion causing the cushion to be the shape that it is, suggesting that states of affairs, or objects, or facts may also be causally related. The central problem is to understand the element of necessitation or determination of the future. Events, Hume thought, are in themselves 'loose and separate': how then are we to conceive of the connection between them? The relationship seems not to be perceptible, for all that perception gives us (Hume argues) is knowledge of the patterns that events actually fall into, not any acquaintance with the connections determining those patterns. It is, however, clear that our conception of everyday objects is largely determined by their causal powers, and all our action is based on the belief that these causal powers are stable and reliable. Although scientific investigation can give us wider and deeper dependable patterns, it seems incapable of bringing us any nearer to the 'must' of causal necessitation. Particular puzzles about causation arise quite apart from the general problem of forming any conception of what it is: how are we to understand causal interaction between mind and body? How can the present, which exists, owe its existence to a past that no longer exists? How is the stability of the causal order to be understood? Is backward causation possible? Is causation a concept needed in science, or dispensable?
The problem of free will, nonetheless, is to reconcile our everyday consciousness of ourselves as agents with the best view of what science tells us that we are. Determinism is one part of the problem. It may be defined as the doctrine that every event has a cause. More precisely, for any event 'C', there will be some antecedent state of nature 'N', and a law of nature 'L', such that given L, N will be followed by C. But if this is true of every event, it is true of events such as my doing something or choosing to do something. So my choosing or doing something is fixed by some antecedent state N and the laws. Since determinism is universal, these in turn are fixed, and so on backwards to events for which I am clearly not responsible (events before my birth, for example). So no events can be voluntary or free, where that means that they come about purely because of my willing them when I could have done otherwise. If determinism is true, then there will be antecedent states and laws already determining such events: how then can I truly be said to be their author, or be responsible for them?
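The definition in the paragraph can be rendered schematically (a standard formalization of the same claim, with the symbols used above):

```latex
% Determinism: for any event C there exist an antecedent state of nature N
% and a law of nature L such that, given L, N is followed by C:
\forall C\, \exists N\, \exists L\; \big( L \wedge N \;\Rightarrow\; C \big)
% Iterating backwards: C is fixed by N and L; but N is itself an event,
% fixed by some earlier state N' and law L', and so on to states that
% obtained before the agent's birth.
```

The regress in the second comment is what generates the worry: if each link is fixed by the one before, the fixing ultimately rests on states for which the agent bears no responsibility.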
Reactions to this problem are commonly classified as: (1) hard determinism, which accepts the conflict and denies that you have real freedom or responsibility; (2) soft determinism, or compatibilism, whereby reactions in this family assert that everything you should want from a notion of freedom is quite compatible with determinism. In particular, if your actions are caused, it can often be true of you that you could have done otherwise if you had chosen, and this may be enough to render you liable to be held responsible (the fact that previous events will have caused you to choose as you did is deemed irrelevant on this option); (3) libertarianism, the view that while compatibilism is only an evasion, there is a more substantive, real notion of freedom that can yet be preserved in the face of determinism (or of indeterminism). In Kant, while the empirical or phenomenal self is determined and not free, the noumenal or rational self is capable of rational, free action. However, since the noumenal self exists outside the categories of space and time, this freedom seems to be of doubtful value. Other libertarian avenues include suggesting that the problem is badly framed, for instance because the definition of determinism breaks down, or suggesting that there are two independent but consistent ways of looking at an agent, the scientific and the humanistic, and that it is only through confusing them that the problem seems urgent. However these avenues are judged, it is an error to confuse determinism with fatalism.
The dilemma of determinism supposes that if an action is the end of a causal chain stretching back in time to events for which the agent has no conceivable responsibility, then the agent is not responsible for the action.
The dilemma adds that if an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent event brought it about, and in that case nobody is responsible for its occurrence. So, whether or not determinism is true, responsibility is shown to be illusory.
Still, to have a will is to be able to desire an outcome and to purpose to bring it about. Strength of will, or firmness of purpose, is supposed to be good, and weakness of will, or akrasia, bad.
A volition is a mental act of willing or trying, whose presence is sometimes supposed to make the difference between intentional or voluntary action and mere behaviour. The theory that there are such acts is problematic, and the idea that they make the required difference is a case of explaining a phenomenon by citing another that raises exactly the same problem, since the intentional or voluntary nature of the act of volition now needs explanation. For Kant, to act autonomously is to act in accordance with the law of autonomy or freedom, that is, in accordance with universal moral law and regardless of selfish advantage.
A categorical imperative, in Kantian ethics, is contrasted with a hypothetical imperative, which embeds a command that is in place only given some antecedent desire or project: 'If you want to look wise, stay quiet.' The injunction to stay quiet applies only to those with the antecedent desire or inclination: if one has no desire to look wise, the injunction or advice lapses. A categorical imperative cannot be so avoided; it is a requirement that binds anybody, regardless of their inclinations. It could be expressed as, for example, 'Tell the truth (regardless of whether you want to or not)'. The distinction is not unmistakably marked by the presence or absence of the conditional or hypothetical form: 'If you crave drink, don't become a bartender' may be regarded as an absolute injunction applying to anyone, although only activated in the case of those with the stated desire.
In the Grundlegung zur Metaphysik der Sitten (1785), Kant gave several formulations of the categorical imperative: (1) the formula of universal law: 'Act only on that maxim through which you can at the same time will that it should become a universal law'; (2) the formula of the law of nature: 'Act as if the maxim of your action were to become through your will a universal law of nature'; (3) the formula of the end-in-itself: 'Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end'; (4) the formula of autonomy, or the consideration of 'the will of every rational being as a will which makes universal law'; and (5) the formula of the Kingdom of Ends, which provides a model for the systematic union of different rational beings under common laws.
A central object in the study of Kant's ethics is to understand how these formulae express the same inescapable, binding requirement, and whether they are equivalent at some deep level. Kant's own applications of the notions are not always convincing. One cause of confusion, in relating Kant's ethics to theories such as expressivism, is that a categorical imperative cannot be the expression of a sentiment, but must derive from something 'unconditional' or 'necessary', such as the voice of reason. The imperative is the standard mood of sentences used to issue requests and commands. Outstanding questions include whether the need to issue commands is as basic as the need to communicate information (animal signalling systems may often be interpreted either way), and how commands relate to other action-guiding uses of language, such as ethical discourse. The ethical theory of 'prescriptivism' in fact equates the two functions. A further question is whether there is an imperative logic. 'Hump that bale' seems to follow from 'Tote that barge and hump that bale' in the way that 'It's raining' follows from 'It's windy and it's raining'. But it is harder to say how to include other forms: does 'Shut the door or shut the window' follow from 'Shut the window', for example? The usual way to develop an imperative logic is to work in terms of the possibility of satisfying one command without satisfying the other, thereby turning it into a variation of ordinary deductive logic.
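The satisfaction-based approach to imperative logic described above can be given a small illustrative model. This is only a toy sketch, not a formalism from the source: commands are modelled as sets of outcomes ("worlds") that satisfy them, and one command follows from another just when the first cannot be satisfied without satisfying the second. The names and the four-world state space are illustrative assumptions.

```python
# Toy model of satisfaction-based imperative logic.
# A command is the set of worlds that satisfy it; command Q follows
# from command P iff every world satisfying P also satisfies Q,
# i.e. P cannot be satisfied without satisfying Q.

def follows(p, q):
    """Q follows from P iff P's satisfying worlds are a subset of Q's."""
    return p <= q  # set inclusion

# Worlds are pairs (door_shut, window_shut).
worlds = {(d, w) for d in (True, False) for w in (True, False)}

shut_door   = {x for x in worlds if x[0]}
shut_window = {x for x in worlds if x[1]}
shut_both   = shut_door & shut_window   # "Shut the door and the window"
shut_either = shut_door | shut_window   # "Shut the door or the window"

print(follows(shut_both, shut_door))      # a conjunction entails each conjunct
print(follows(shut_window, shut_either))  # on this model, the disjunctive command follows
print(follows(shut_either, shut_window))  # but not conversely
```

On this model the answer to the question in the text is yes: any way of satisfying 'Shut the window' also satisfies 'Shut the door or shut the window', which is exactly the point some find counter-intuitive about treating imperative inference as ordinary deductive logic.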
Although the morality of people and their ethics amount to the same thing, there is a usage that restricts morality to systems such as that of Kant, based on notions such as duty, obligation, and principles of conduct, reserving ethics for the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and generally avoiding the separation of 'moral' considerations from other practical considerations. The scholarly issues are complex, with some writers seeing Kant as more Aristotelian, and Aristotle as more involved with a separate sphere of responsibility and duty, than the simple contrast suggests.
Moral motivation has been a major topic of philosophical inquiry, especially in Aristotle, and subsequently since the 17th and 18th centuries, when the 'science of man' began to probe into human motivation and emotion. For writers such as the French moralistes, or Hutcheson, Hume, Smith and Kant, a prime task was to delineate the variety of human reactions and motivations. Such an inquiry would locate our propensity for moral thinking among other faculties, such as perception and reason, and other tendencies, such as empathy, sympathy or self-interest. The task continues, especially in the light of a post-Darwinian understanding of ourselves.
In some moral systems, notably that of Immanuel Kant, real moral worth comes only with acting rightly because it is right. If you do what is right, but from some other motive, such as fear or prudence, no moral merit accrues to you. Yet this in turn seems to discount other admirable motivations, such as acting from sheer benevolence or sympathy. The question is how to balance these opposing ideas, and how to understand acting from a sense of obligation without duty or rightness beginning to seem a kind of fetish. A contrasting view, particularism, stands opposed to ethics relying on highly general and abstract principles, particularly those associated with the Kantian categorical imperative. The view may go so far as to say that, taken on its own, no consideration has weight in every situation; understanding can only proceed by identifying the salient features of a situation that weigh on one side or another.
Moral dilemmas are situations in which each possible course of action breaches some otherwise binding moral principle; they are serious dilemmas, and the stuff of many tragedies. The conflict can be described in different ways. One suggestion is that whichever action the subject undertakes, he or she does something wrong. Another is that this is not so, for the dilemma means that in the circumstances what he or she did was as right as any alternative. It is important to the phenomenology of these cases that action leaves a residue of guilt and remorse, even though it was not the subject's fault that he or she faced the dilemma, so that the rationality of these emotions can be contested. Any morality with more than one fundamental principle seems capable of generating dilemmas; however, dilemmas exist, such as where a mother must decide which of two children to sacrifice, in which no principles are pitted against each other, but only one principle applied to different objects. If we accept that dilemmas are real and important, this fact can be used to argue against theories, such as utilitarianism, that recognize only one sovereign principle. Alternatively, regretting the existence of dilemmas and the unordered jumble of principles that creates several of them, a theorist may use their occurrence to argue for the desirability of locating and promoting a single sovereign principle.
Nevertheless, some theories of ethics see the subject in terms of a number of laws (as in the Ten Commandments). The status of these laws may be that they are the edicts of a divine lawmaker, or that they are truths of reason, knowable a priori. Situational ethics and virtue ethics, by contrast, regard them as at best rules of thumb, which frequently disguise the great complexity of the practical reasoning that Kant gathered under the notion of the moral law.
Natural law theory is the view of the relation between law and morality especially associated with St Thomas Aquinas (1225-74), whose synthesis of Aristotelian philosophy and Christian doctrine was eventually to provide the main philosophical underpinning of the Catholic Church. More widely, the term covers any attempt to cement the moral and legal order together with the nature of the cosmos or the nature of human beings, in which sense it is also found in some Protestant writings, and arguably derives from a Platonic view of ethics, being implicit in Stoicism. Natural law stands above and apart from the activities of human lawmakers: it constitutes an objective set of principles that can be seen in and for themselves, by means of the 'natural light' or by reason itself, and that (in religious versions of the theory) express God's will for creation. Non-religious versions of the theory substitute objective conditions for human flourishing as the source of constraints upon permissible actions and social arrangements. Within the natural law tradition, different views have been held about the relationship between the rule of law and God's will. Grotius, for instance, sides with the view that the content of natural law is independent of any will, including that of God.
The German natural law theorist and historian Samuel von Pufendorf (1632-94), by contrast, took the opposite view. His great work was the De Jure Naturae et Gentium (1672), translated into English as Of the Law of Nature and Nations (1710). Pufendorf was influenced by Descartes, Hobbes and the scientific revolution of the 17th century; his ambition was to introduce a newly scientific, 'mathematical' treatment of ethics and law, free from the tainted Aristotelian underpinning of scholasticism. Like that of his contemporary Locke, his conception of natural law included rational and religious principles, making it only a partial forerunner of the more resolutely empiricist and political treatments of the Enlightenment.
The classic formulation of the underlying dilemma occurs in Plato's dialogue Euthyphro: are pious things pious because the gods love them, or do the gods love them because they are pious? The dilemma poses the question of whether value can be conceived as the upshot of the choice of any mind, even a divine one. On the first option the choice of the gods creates goodness and value. Even if this is intelligible, it seems to make it impossible to praise the gods, for it is then vacuously true that they choose the good. On the second option we have to understand a source of value lying behind or beyond the will even of the gods, and by which they can be evaluated. The elegant solution of Aquinas is that the standard is formed by God's nature, and is therefore distinct from his will, but not distinct from him.
The dilemma arises whatever the source of authority is supposed to be. Do we care about the good because it is good, or do we just call good those things that we care about? It also generalizes to affect our understanding of the authority of other things, mathematics or necessary truth, for example: are necessary truths necessary because we deem them to be so, or do we deem them to be so because they are necessary?
The natural law tradition may assume either a stronger form, in which it is claimed that various facts entail values, or a weaker form, in which it is claimed that reason by itself is capable of discerning moral requirements. As in the ethics of Kant, these requirements are supposed to be binding on all human beings, regardless of their desires.
The supposed natural or innate ability of the mind to know the first principles of ethics and moral reasoning is termed 'synderesis' (or synteresis). Although traced to Aristotle, the phrase came to the modern era through St Jerome, whose scintilla conscientiae (gleam of conscience) was a popular concept in early scholasticism. It is mainly associated with Aquinas, as an infallible, natural, simple and immediate grasp of first moral principles. Conscience, by contrast, is more concerned with particular instances of right and wrong, and can be in error.
On a conservative view, the enthusiasm for reform for its own sake, or for 'rational' schemes thought up by managers and theorists, is entirely misplaced. Major exponents of this theme include the British absolute idealist F.H. Bradley (1846-1924) and the Austrian economist and philosopher Friedrich Hayek. In the idealism of Bradley there is also the doctrine that change is contradictory and consequently unreal: the Absolute is changeless. A way of sympathizing a little with this idea is to reflect that any scientific explanation of change will proceed by finding an unchanging law operating, or an unchanging quantity conserved in the change, so that explanation of change always proceeds by finding that which is unchanged. The metaphysical problem of change is to shake off the idea that each moment is created afresh, and to obtain a conception of events or processes as having a genuinely historical reality, really extended and unfolding in time, as opposed to being composites of discrete temporal atoms. A step towards this end may be to see time itself not as an infinite container within which discrete events are located, but as a kind of logical construction from the flux of events. This relational view of time was advocated by Leibniz, and was a subject of the debate between him and Newton's absolutist pupil, Clarke.
The Galilean world view might have been expected to drain nature of its ethical content, but the term seldom loses its normative force, and the belief in universal natural laws provided its own set of ideals. In the 18th century, for example, a painter or writer could be praised as natural, where the qualities expected would include normal (universal) topics treated with simplicity, economy, regularity and harmony. Later on, nature becomes an equally potent emblem of irregularity, wildness and fertile diversity, but is also associated with the progress of human history, a conception elastic enough to fit many things, including transformation and ordinary human self-consciousness. That which contrasts with nature may include (1) that which is deformed or grotesque, or fails to achieve its proper form or function, or is just statistically uncommon or unfamiliar; (2) the supernatural, or the world of gods and invisible agencies; (3) the world of rationality and intelligence, conceived of as distinct from the biological and physical order; (4) that which is manufactured and artefactual, or the product of human invention; and (5), related to that, the world of convention and artifice.
Different conceptions of nature continue to have ethical overtones: for example, the conception of 'nature red in tooth and claw' often provides a justification for aggressive personal and political relations, or the idea that it is women's nature to be one thing or another is taken to be a justification for differential social expectations. The term functions as a fig-leaf for a particular set of stereotypes, and is a proper target of much feminist writing. Feminist epistemology has asked whether different ways of knowing, for instance with different criteria of justification and different emphases on logic and imagination, characterize male and female attempts to understand the world. Such concerns include awareness of the 'masculine' self-image, itself a socially variable and potentially distorting picture of what thought and action should be. Again, there is a spectrum of concerns from the highly theoretical to the relatively practical. In this latter area particular attention is given to the institutional biases that stand in the way of equal opportunities in science and other academic pursuits, or the ideologies that stand in the way of women seeing themselves as leading contributors to various disciplines. However, to more radical feminists such concerns merely exhibit women wanting for themselves the same power and rights over others that men have claimed, and failing to confront the real problem, which is how to live without such symmetrical powers and rights.
Biological determinism is the view that our biology not only influences but constrains and makes inevitable our development as persons with a variety of traits. At its silliest the view postulates such entities as a gene predisposing people to poverty, and it is the particular enemy of thinkers stressing the parental, social and political determinants of the way we are.
The philosophy of social science is more heavily intertwined with actual social science than is the case with other subjects such as physics or mathematics, since its central question is whether there can be such a thing as sociology. The idea of a 'science of man', devoted to uncovering scientific laws determining the basic dynamics of human interactions, was a cherished ideal of the Enlightenment, and reached its heyday with the positivism of writers such as the French philosopher and social theorist Auguste Comte (1798-1857), and the historical materialism of Marx and his followers. Sceptics point out that what happens in society is determined by people's own ideas of what should happen, and, like fashions, those ideas change in unpredictable ways, as self-consciousness is susceptible to change by any number of external events: unlike the solar system of celestial mechanics, a society is not a closed system evolving in accordance with a purely internal dynamic, but is constantly responsive to shocks from outside.
The sociobiological approach to human behaviour is based on the premise that all social behaviour has a biological basis, and seeks to understand that basis in terms of genetic encoding for features that are then selected for through evolutionary history. The philosophical problem is essentially one of methodology: of finding criteria for identifying features that can usefully be explained in this way, and for assessing the various genetic stories that might provide such explanations.
Among the features proposed for this kind of explanation are such things as male dominance, male promiscuity versus female fidelity, propensities to sympathy and other emotions, and the limited altruism characteristic of human beings. The strategy has proved controversial, with proponents accused of ignoring the influence of environmental and social factors in moulding people's characteristics, e.g., at the limit of silliness, by postulating a 'gene for poverty'. However, there is no need for the approach to commit such errors, since the feature explained sociobiologically may be indexed to an environment: for instance, it may be a propensity to develop some feature in some environments (or even a propensity to develop propensities . . .). The main problem is to separate genuine explanation from speculative 'just so' stories, which may or may not identify real selective mechanisms.
Subsequently, in the 19th century, attempts were made to base ethical reasoning on the presumed facts about evolution. The movement is particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). His first major work was the book Social Statics (1851), which advocated an extreme political libertarianism. The Principles of Psychology was published in 1855, and his very influential Education, advocating the natural development of intelligence, the creation of pleasurable interest, and the importance of science in the curriculum, appeared in 1861. His First Principles (1862) was followed over the succeeding years by volumes on the principles of biology, psychology, sociology and ethics. Although he attracted a large public following and attained the stature of a sage, his speculative work has not lasted well, and in his own time there were dissident voices. T.H. Huxley said that Spencer's idea of a tragedy was a deduction killed by a fact. The writer and social prophet Thomas Carlyle (1795-1881) called him a perfect vacuum, and the American psychologist and philosopher William James (1842-1910) wondered why half of England wanted to bury him in Westminster Abbey, and talked of the 'hurdy-gurdy' monotony of him, his whole system wooden, as if knocked together out of cracked hemlock.
The premise is that later elements in an evolutionary path are better than earlier ones; the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more 'primitive' social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called 'social Darwinism' emphasizes the struggle for natural selection, and draws the conclusion that we should glorify such struggle, usually by enhancing competitive and aggressive relations between people in society, or between societies themselves. More recently the relation between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin selection.
Evolutionary psychology is the study of the way in which a variety of higher mental functions may be adaptations, formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who free-ride on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify.
For all that, an essential part of the ethics of the British absolute idealist F.H. Bradley (1846-1924) was the doctrine that the self is individualized only through community, and that self-realization consists in contributing to social and other ideals. However, truth as formulated in language is always partial, and dependent upon categories that are themselves inadequate to the harmonious whole. Nevertheless, these self-contradictory elements somehow contribute to the harmonious whole, or Absolute, lying beyond categorization. Although absolute idealism maintains few adherents today, Bradley's general dissent from empiricism, his holism, and the brilliance and style of his writing continue to make him the most interesting of the late 19th-century writers influenced by the German philosopher Georg Wilhelm Friedrich Hegel (1770-1831).
Bradley's holism echoes a preference, voiced much earlier by the German philosopher, mathematician and polymath Gottfried Leibniz (1646-1716), for categorical monadic properties over relations. Leibniz was particularly troubled by the relation between that which is known and the mind that knows it. In philosophy, the Romantics took from the German philosopher and founder of critical philosophy Immanuel Kant (1724-1804) both the emphasis on free will and the doctrine that reality is ultimately spiritual, with nature itself a mirror of the human soul. In Friedrich Schelling (1775-1854), nature becomes a creative spirit whose aspiration is ever more complete self-realization. Romanticism, although a movement of more general scope, drew on the same intellectual and emotional resources as German idealism, which culminated in the philosophy of Hegel (1770-1831) and absolute idealism.
Most ethics is concerned with problems of human desires and needs: the achievement of happiness, or the distribution of goods. The central problem specific to thinking about the environment is the independent value to place on such things as the preservation of species, or the protection of the wilderness. Such protection can be supported as a means to ordinary human ends, for instance when animals are regarded as future sources of medicines or other benefits. Nonetheless, many would want to claim a non-utilitarian, absolute value for the existence of wild things and wild places: it is in their independence of human interests that their value consists. They put us in our proper place, and failure to appreciate this value is not only an aesthetic failure but a failure of due humility and reverence, a moral disability. The problem is one of expressing this value, and of mobilizing it against utilitarian agents bent on developing natural areas and exterminating species, more or less at will.
Many concerns and disputes cluster around the ideas associated with the term 'substance'. The substance of a thing may be considered as: (1) its essence, or that which makes it what it is. This will ensure that the substance of a thing is that which remains through change in its properties; in Aristotle, this essence becomes more than just the matter, but a unity of matter and form. (2) That which can exist by itself, or does not need a subject for existence, in the way that properties need objects; hence (3) that which bears properties, as a substance is then the subject of predication, that about which things are said as opposed to the things said about it. Substance in the last two senses stands opposed to modifications such as quantity, quality, relations, etc. It is hard to keep this set of ideas distinct from the doubtful notion of a substratum, something distinct from any of its properties, and hence incapable of characterization. The notion of substance tends to disappear in empiricist thought, with the sensible qualities of things, and the notion of that in which they inhere, giving way to an empirical notion of their regular concurrence. However, this in turn is problematic, since it only makes sense to talk of the occurrence of instances of qualities, not of qualities themselves. So the problem of what it is for a quality to have an instance remains.
Metaphysics inspired by modern science tends to reject the concept of substance in favour of concepts such as that of a field or a process, each of which may seem to provide a better example of a fundamental physical category.
The sublime is a concept deeply embedded in 18th-century aesthetics, but deriving from the 1st-century rhetorical treatise On the Sublime, by Longinus. The sublime is great, fearful, noble, calculated to arouse sentiments of pride and majesty, as well as awe and sometimes terror. According to Alexander Gerard, writing in 1759: 'When a large object is presented, the mind expands itself to the extent of that object, and is filled with one grand sensation, which totally possessing it, composes it into a solemn sedateness, and strikes it with deep silent wonder and admiration: it finds such a difficulty in spreading itself to the dimensions of its object, as enlivens and invigorates its frame: and having overcome the opposition which this occasions, it sometimes imagines itself present in every part of the scene which it contemplates; and from the sense of this immensity, feels a noble pride, and entertains a lofty conception of its own capacity.'
In Kant's aesthetic theory the sublime 'raises the soul above the height of vulgar complacency'. We experience the vast spectacles of nature as 'absolutely great' and of irresistible might and power. This perception is fearful, but by conquering this fear, and by regarding as small 'those things of which we are wont to be solicitous', we quicken our sense of moral freedom. So we turn the experience of frailty and impotence into one of our true, inward moral freedom as the mind triumphs over nature, and it is this triumph of reason that is truly sublime. Kant thus paradoxically places our sense of the sublime in an awareness of ourselves as transcending nature, rather than in an awareness of ourselves as a frail and insignificant part of it.
Nevertheless, the doctrine that all relations are internal was a cardinal thesis of absolute idealism, and a central point of attack by the British philosophers George Edward Moore (1873-1958) and Bertrand Russell (1872-1970). It is a kind of ‘essentialism’, stating that if two things stand in some relationship, then they could not be what they are did they not do so. If, for instance, I am wearing a hat now, then when we imagine a possible situation that we would be apt to describe as my not wearing the hat now, we would strictly not be imagining me and the hat, but only some different individual.
In Hume, objects of knowledge are divided into matters of fact (roughly, empirical things known by means of impressions) and relations of ideas. The contrast, also called ‘Hume’s fork’, is a version of the a priori/a posteriori distinction, but reflects the 17th- and early 18th-century belief that the a priori is established by chains of intuitive certainty in the comparison of ideas. It is important that in the period between Descartes and J. S. Mill a demonstration is not a merely formal derivation, but a chain of ‘intuitive’ comparisons of ideas, whereby a principle or maxim can be established by reason alone. It is in this sense that the English philosopher John Locke (1632-1704) believed that theological and moral principles are capable of demonstration, while Hume denies that they are, and also denies that scientific enquiry proceeds by demonstrating its results.
A mathematical proof is, formally, an argument used to show the truth of a mathematical assertion. In modern mathematics, a proof begins with one or more statements called premises and demonstrates, using the rules of logic, that if the premises are true then a particular conclusion must also be true.
The accepted methods and strategies used to construct a convincing mathematical argument have evolved since ancient times and continue to change. Consider the Pythagorean theorem, named after the 6th-century BC Greek mathematician and philosopher Pythagoras, which states that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides. Many early civilizations considered this theorem true because it agreed with their observations in practical situations. But the early Greeks, among others, realized that observation and commonly held opinion do not guarantee mathematical truth. For example, before the 5th century BC it was widely believed that all lengths could be expressed as the ratio of two whole numbers. But an unknown Greek mathematician proved that this was not true by showing that the length of the diagonal of a square with an area of 1 is the irrational number √2.
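The irrationality of √2 is established by the classical parity argument, not by computation; still, a finite search can illustrate the claim that no ratio of whole numbers squares to exactly 2. The following sketch (the function name is ours, chosen for illustration) uses exact rational arithmetic so that no false matches arise from floating-point error:

```python
from fractions import Fraction

def no_rational_sqrt2(max_q: int) -> bool:
    """Check that no fraction p/q with 1 <= q <= max_q squares to exactly 2.

    A finite search is of course not a proof of irrationality; it merely
    illustrates what the classical parity argument establishes in general.
    """
    for q in range(1, max_q + 1):
        # p must lie near q * sqrt(2); the two integer candidates suffice.
        p = int(2 ** 0.5 * q)
        for candidate in (p, p + 1):
            if Fraction(candidate, q) ** 2 == 2:
                return False
    return True

print(no_rational_sqrt2(10_000))  # True: no p/q with q up to 10000 works
```

The exact comparison with `Fraction` is the point of the design: a float test could report spurious equalities, whereas here any hit would be a genuine counterexample.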
The Greek mathematician Euclid laid down some of the conventions central to modern mathematical proofs. His book The Elements, written about 300 BC, contains many proofs in the fields of geometry and algebra. This book illustrates the Greek practice of writing mathematical proofs by first clearly identifying the initial assumptions and then reasoning from them in a logical way in order to obtain a desired conclusion. As part of such an argument, Euclid used results that had already been shown to be true, called theorems, or statements that were explicitly acknowledged to be self-evident, called axioms; this practice continues today.
In the 20th century, proofs have been written that are so complex that no one person understands every argument used in them. In 1976, a computer was used to complete the proof of the four-color theorem. This theorem states that four colors are sufficient to color any map in such a way that regions with a common boundary line have different colors. The use of a computer in this proof inspired considerable debate in the mathematical community. At issue was whether a theorem can be considered proven if human beings have not actually checked every detail of the proof.
Proof theory is the study of the relations of deducibility among sentences in a logical calculus. Deducibility is defined purely syntactically, that is, without reference to the intended interpretation of the calculus. The subject was founded by the mathematician David Hilbert (1862-1943) in the hope that strictly finitary methods would provide a way of proving the consistency of classical mathematics, but the ambition was torpedoed by Gödel’s second incompleteness theorem.
What is more, the use of a model to test for consistency in an axiomatized system is older than modern logic. Descartes’ algebraic interpretation of Euclidean geometry provides a way of showing that if the theory of real numbers is consistent, so is the geometry. Similar representations were used by mathematicians in the 19th century, for example to show that if Euclidean geometry is consistent, so are various non-Euclidean geometries. Model theory is the general study of this kind of procedure: proof theory studies relations of deducibility between formulae of a system, but once the notion of an interpretation is in place we can ask whether a formal system meets certain conditions. In particular, can it lead us from sentences that are true under some interpretation to sentences that are false under it? And if a sentence is true under all interpretations, is it also a theorem of the system? We can define a notion of validity (a formula is valid if it is true in all interpretations) and semantic consequence (a formula ‘B’ is a semantic consequence of a set of formulae, written {A1 . . . An} ⊨ B, if it is true in all interpretations in which they are true). Then the central questions for a calculus will be whether all and only its theorems are valid, and whether {A1 . . . An} ⊨ B if and only if {A1 . . . An} ⊢ B. These are the questions of the soundness and completeness of a formal system. For the propositional calculus this becomes the question of whether the proof theory delivers as theorems all and only ‘tautologies’. There are many axiomatizations of the propositional calculus that are consistent and complete. The mathematical logician Kurt Gödel (1906-78) proved in 1929 that every formula of the first-order predicate calculus that is valid under every interpretation is a theorem of the calculus.
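For the propositional case, validity and semantic consequence can be checked mechanically by enumerating interpretations, i.e., truth-value assignments to the atomic sentences. A minimal sketch (the encoding of formulas as Python functions of a valuation, and the helper names, are ours):

```python
from itertools import product

def interpretations(atoms):
    """Yield every truth-value assignment to the given atomic sentences."""
    for values in product([True, False], repeat=len(atoms)):
        yield dict(zip(atoms, values))

def valid(formula, atoms):
    """A formula is valid (a tautology) if true under every interpretation."""
    return all(formula(v) for v in interpretations(atoms))

def semantic_consequence(premises, conclusion, atoms):
    """{A1 ... An} |= B: B is true in every interpretation making all Ai true."""
    return all(conclusion(v)
               for v in interpretations(atoms)
               if all(p(v) for p in premises))

# 'p -> p' is valid; 'p' alone is not.
print(valid(lambda v: (not v['p']) or v['p'], ['p']))      # True
print(valid(lambda v: v['p'], ['p']))                      # False
# Modus ponens: {p, p -> q} |= q.
print(semantic_consequence([lambda v: v['p'],
                            lambda v: (not v['p']) or v['q']],
                           lambda v: v['q'], ['p', 'q']))  # True
```

Such brute-force enumeration is what makes the propositional calculus decidable; no analogous procedure exists for full first-order logic.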
Euclidean geometry is the greatest example of the pure ‘axiomatic method’, and as such had incalculable philosophical influence as a paradigm of rational certainty. It had no competition until the 19th century, when it was realized that the fifth postulate of Euclid’s system (the axiom of parallels) could be denied without inconsistency, leading among other things to Riemannian spherical geometry. The significance of Riemannian geometry lies in its use and extension of both Euclidean geometry and the geometry of surfaces, leading to a number of generalized differential geometries. Its most important effect was that it made a geometrical application possible for some major abstractions of tensor analysis, supplying the pattern and concepts later used by Albert Einstein in developing the general theory of relativity. Riemannian geometry is also necessary for treating electricity and magnetism in the framework of general relativity. The fifth book of Euclid’s Elements, attributed to the mathematician Eudoxus, contains a precise development of the theory of proportion, anticipating the real numbers; the work remained unappreciated until rediscovered in the 19th century.
The axiom, in logic and mathematics, is a basic principle that is assumed to be true without proof. The use of axioms in mathematics stems from the ancient Greeks, most probably during the 5th century BC, and represents the beginnings of pure mathematics as it is known today. Examples of axioms are the following: 'No sentence can be true and false at the same time' (the principle of contradiction); 'If equals are added to equals, the sums are equal'; 'The whole is greater than any of its parts'. Logic and pure mathematics begin with such unproved assumptions from which other propositions (theorems) are derived. This procedure is necessary to avoid circularity, or an infinite regression in reasoning. The axioms of any system must be consistent with one another, that is, they should not lead to contradictions. They should be independent in the sense that they cannot be derived from one another. They should also be few in number. Axioms have sometimes been interpreted as self-evident truths. The present tendency is to avoid this claim and simply to assert that an axiom is assumed to be true without proof in the system of which it is a part.
The terms 'axiom' and 'postulate' are often used synonymously. Sometimes the word axiom is used to refer to basic principles that are assumed by every deductive system, and the term postulate is used to refer to first principles peculiar to a particular system, such as Euclidean geometry. Infrequently, the word axiom is used to refer to first principles in logic, and the term postulate is used to refer to first principles in mathematics.
The applications of game theory are wide-ranging and account for steadily growing interest in the subject. Von Neumann and Morgenstern indicated the immediate utility of their work on mathematical game theory by linking it with economic behavior. Models can be developed, in fact, for markets of various commodities with differing numbers of buyers and sellers, fluctuating values of supply and demand, and seasonal and cyclical variations, as well as significant structural differences in the economies concerned. Here game theory is especially relevant to the analysis of conflicts of interest in maximizing profits and promoting the widest distribution of goods and services. Equitable division of property and of inheritance is another area of legal and economic concern that can be studied with the techniques of game theory.
In the social sciences, n-person game theory has interesting uses in studying, for example, the distribution of power in legislative procedures. This problem can be interpreted as a three-person game at the congressional level involving vetoes of the president and votes of representatives and senators, analyzed in terms of successful or failed coalitions to pass a given bill. Problems of majority rule and individual decision making are also amenable to such study.
Sociologists have developed an entire branch of game theory devoted to the study of issues involving group decision making. Epidemiologists also make use of game theory, especially with respect to immunization procedures and methods of testing a vaccine or other medication. Military strategists turn to game theory to study conflicts of interest resolved through 'battles' where the outcome or payoff of a given war game is either victory or defeat. Usually, such games are not examples of zero-sum games, for what one player loses in terms of lives and injuries is not won by the victor. Some uses of game theory in analyses of political and military events have been criticized as a dehumanizing and potentially dangerous oversimplification of necessarily complicating factors. Analysis of economic situations is also usually more complicated than zero-sum games because of the production of goods and services within the play of a given 'game'.
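The zero-sum case mentioned above admits a simple worked illustration: each player can compute a security level from the payoff matrix, and when the two levels coincide the game has a saddle point in pure strategies. A minimal sketch (function and variable names are ours, for illustration):

```python
def minimax_values(payoffs):
    """Security levels for a two-player zero-sum game.

    payoffs[i][j] is the row player's gain (equally, the column player's
    loss) when row chooses strategy i and column chooses strategy j.
    Returns (maximin, minimax) over pure strategies; when the two values
    coincide, the game has a saddle point at that common value.
    """
    row_security = max(min(row) for row in payoffs)   # row's maximin
    columns = list(zip(*payoffs))
    col_security = min(max(col) for col in columns)   # column's minimax
    return row_security, col_security

# A hypothetical 2x2 game with a saddle point: both security levels are 2,
# so rational play settles on the entry worth 2 to the row player.
v_row, v_col = minimax_values([[4, 2],
                               [3, 1]])
print(v_row, v_col)  # 2 2
```

When the two values differ, no saddle point exists in pure strategies and, by von Neumann’s minimax theorem, equality is restored only by allowing mixed (randomized) strategies.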
In the classical theory of the syllogism, a term in a categorical proposition is distributed if the proposition entails any proposition obtained from it by substituting a more specific term for the original. For example, in ‘all dogs bark’ the term ‘dogs’ is distributed, since the proposition entails ‘all terriers bark’, which is obtained from it by such a substitution. In ‘not all dogs bark’, the same term is not distributed, since it may be true while ‘not all terriers bark’ is false.
A model is a representation of one system by another, usually more familiar, system whose workings are supposed analogous to those of the first. Thus one might model the behaviour of a sound wave upon that of waves in water, or the behaviour of a gas upon that of a volume containing moving billiard balls. While nobody doubts that models have a useful ‘heuristic’ role in science, there has been intense debate over whether a good model is itself explanatory, or whether an organized structure of laws from which the phenomena can be deduced suffices for scientific explanation. The debate was inaugurated by the French physicist Pierre Maurice Marie Duhem (1861-1916) in The Aim and Structure of Physical Theory (trans. 1954). Duhem’s conception of science is that it is simply a device for calculating: science provides a deductive system that is systematic, economical, and predictive, but does not represent the deep underlying nature of reality. His related holistic thesis is that a single hypothesis cannot be tested in isolation, since other auxiliary hypotheses will always be needed to draw empirical consequences from it. The Duhem thesis implies that refutation is a more complex matter than might appear. It is sometimes framed as the view that a single hypothesis may be retained in the face of any adverse empirical evidence, if we are prepared to make modifications elsewhere in our system; although strictly speaking this is a stronger thesis, since it may be psychologically impossible to make consistent revisions in a belief system to accommodate, say, the hypothesis that there is a hippopotamus in the room when visibly there is not.
Primary and secondary qualities are the division associated with the 17th-century rise of modern science, with its recognition that the fundamental explanatory properties of things are not the qualities that perception most immediately concerns. The latter are the secondary qualities, or immediate sensory qualities, including colour, taste, smell, felt warmth or texture, and sound. The primary properties are less tied to the deliverances of one particular sense, and include the size, shape, and motion of objects. In Robert Boyle (1627-92) and John Locke (1632-1704) the primary qualities are the scientifically tractable, objective qualities essential to anything material, with a minimal listing of size, shape, and mobility, i.e., the state of being at rest or moving. Locke sometimes adds number, solidity, and texture (where this is thought of as the structure of a substance, or the way in which it is made out of atoms). The secondary qualities are the powers to excite particular sensory modifications in observers. Locke himself thought in terms of identifying these powers with the texture of objects which, according to the corpuscularian science of the time, was the basis of an object’s causal capacities. The ideas of secondary qualities are sharply different from these powers, and afford us no accurate impression of them. For René Descartes (1596-1650), this is the basis for rejecting any attempt to think of knowledge of external objects as provided by the senses. But in Locke our ideas of primary qualities do afford us an accurate notion of what shape, size, and mobility are. In English-speaking philosophy the first major discontent with the division was voiced by the Irish idealist George Berkeley (1685-1753), who probably took the basis of his attack from Pierre Bayle (1647-1706), who in turn cites the French critic Simon Foucher (1644-96).
Modern thought continues to wrestle with the difficulties of thinking of colour, taste, smell, warmth, and sound as real or objective properties of things independent of us.
Modal realism is the doctrine advocated by the American philosopher David Lewis (1941-2002), according to which different possible worlds are to be thought of as existing exactly as this one does. Thinking in terms of possibilities is thinking of real worlds where things are different. The view has been charged with making it impossible to see why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge either that the notion fails to fit with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them, but Lewis denied that any other way of interpreting modal statements is tenable.
The ‘modality’ of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things are: necessary as opposed to contingent propositions. Other qualifiers sometimes called ‘modal’ include the tense indicators, ‘it will be the case that p’ and ‘it was the case that p’, and there are affinities between the ‘deontic’ indicators, ‘it ought to be the case that p’ and ‘it is permissible that p’, and those of necessity and possibility.
The aim of a logic is to make explicit the rules by which inferences may be drawn, rather than to study the actual reasoning processes that people use, which may or may not conform to those rules. In the case of deductive logic, if we ask why we need to obey the rules, the most general form of answer is that if we do not we contradict ourselves (or, strictly speaking, we stand ready to contradict ourselves: someone failing to draw a conclusion that follows from a set of premises need not be contradicting him or herself, but only failing to notice something; however, he or she has no defence against adding the contradictory conclusion to his or her set of beliefs). There is no equally simple answer in the case of inductive logic, which is in general a less robust subject, but the aim will be to find forms of reasoning such that anyone failing to conform to them will have improbable beliefs. Traditional logic dominated the subject until the 19th century, and it has become increasingly recognized in the 20th century that fine work was done within that tradition; but syllogistic reasoning is now generally regarded as a limited special case of the forms of reasoning that can be represented within the propositional and predicate calculus. These form the heart of modern logic, as their central notions of quantifiers, variables, and functions were the creation of the German mathematician Gottlob Frege, who is recognized as the father of modern logic, although his treatment of a logical system as an abstract mathematical structure, or algebra, had been anticipated by the English mathematician and logician George Boole (1815-64), whose pamphlet The Mathematical Analysis of Logic (1847) pioneered the algebra of classes. The work was enlarged in An Investigation of the Laws of Thought (1854). Boole also published many works in pure mathematics, and on the theory of probability.
His name is remembered in the title of Boolean algebra, and the algebraic operations he investigated are called Boolean operations.
The syllogistic, or categorical, syllogism is the inference of one proposition from two premises. An example is: all horses have tails; all things with tails are four-legged; so all horses are four-legged. Each premise has one term in common with the conclusion or with the other premise. The term that does not occur in the conclusion is called the middle term. The major premise of the syllogism is the premise containing the predicate of the conclusion (the major term), and the minor premise contains its subject (the minor term). So the first premise of the example is the minor premise, the second the major premise, and ‘having a tail’ is the middle term. This gives one classification of syllogisms, according to the form of the premises and the conclusion. The other classification is by figure, or the way in which the middle term is placed in the premises.
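The validity of a form such as Barbara (the example above) can be illustrated by exhaustively searching small models: each term is read as a subset of a finite domain, and we check that no assignment of subsets makes the premises true and the conclusion false. A sketch under that assumption (helper names are ours):

```python
from itertools import product

def entails(premises_hold, conclusion_holds, size=4):
    """Brute-force check over all models on a small finite domain.

    Each term (e.g. horse, tailed, four-legged) is interpreted as a subset
    of the domain; every assignment of three subsets is tried.  Exhausting
    all small models illustrates validity, though strictly it only refutes
    (a countermodel found here is decisive; none found is strong evidence,
    and for syllogistic forms small domains in fact suffice).
    """
    domain = range(size)
    subsets = [frozenset(x for x, keep in zip(domain, mask) if keep)
               for mask in product([False, True], repeat=size)]
    for h, t, l in product(subsets, repeat=3):
        if premises_hold(h, t, l) and not conclusion_holds(h, t, l):
            return False  # countermodel: premises true, conclusion false
    return True

# All horses have tails; all tailed things are four-legged;
# therefore all horses are four-legged.  ('<=' is subset on frozensets.)
print(entails(lambda h, t, l: h <= t and t <= l,
              lambda h, t, l: h <= l))  # True
```

Replacing the premises with, say, only ‘all horses have tails’ produces a countermodel at once, showing how the middle term does the deductive work.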
Although the theory of the syllogism dominated logic until the 19th century, it remained a piecemeal affair, able to deal with only relatively simple forms of valid argument. There have subsequently been rearguard actions on its behalf, but in general it has been eclipsed by the modern theory of quantification. The predicate calculus is the heart of modern logic, having proved capable of formalizing the reasoning processes of modern mathematics and science. In a first-order predicate calculus the variables range over objects; in a higher-order calculus they may range over predicates and functions themselves. The first-order predicate calculus with identity includes ‘=’ as a primitive (undefined) expression; in a higher-order calculus it may be defined by the law that x = y iff (∀F)(Fx ↔ Fy), which gives greater expressive power for less complexity.
Modal logic was of great importance historically, particularly in the light of doctrines concerning the necessary properties of the deity, but it was not a central topic of modern logic in that subject’s golden period at the beginning of the 20th century. It was, however, revived by the American logician and philosopher Clarence Irving Lewis (1883-1964). Although he wrote extensively on most central philosophical topics, he is remembered principally as a critic of the extensional nature of modern logic, and as the founding father of modal logic. His independent proofs that from a contradiction anything follows helped to provoke relevance logic, which uses a notion of entailment stronger than that of strict implication.
Modal logic is obtained by adding to a propositional or predicate calculus two operators, □ and ◊ (sometimes written ‘N’ and ‘M’), meaning necessarily and possibly, respectively. Uncontroversial principles such as p ➞ ◊p and □p ➞ p will be wanted. Controversial ones include □p ➞ □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◊p ➞ □◊p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). The classical semantics for modal logic, due to the American logician and philosopher Saul Kripke (1940-) and the Swedish logician Stig Kanger, involves valuing propositions not as true or false simpliciter, but as true or false at possible worlds, with necessity then corresponding to truth in all worlds, and possibility to truth in some world. Various different systems of modal logic result from adjusting the accessibility relation between worlds.
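On this semantics, □ and ◊ are just quantifiers over accessible worlds, which makes the evaluation clauses easy to state directly. A minimal sketch of a hypothetical three-world model (the encoding of the accessibility relation as a set of pairs, and all names, are ours):

```python
def box(worlds, access, holds, w):
    """Box p is true at w iff p holds at every world accessible from w."""
    return all(holds(u) for u in worlds if (w, u) in access)

def diamond(worlds, access, holds, w):
    """Diamond p is true at w iff p holds at some world accessible from w."""
    return any(holds(u) for u in worlds if (w, u) in access)

# A hypothetical model: three worlds, a reflexive accessibility relation
# (every world sees itself), plus world 1 also sees world 2.
worlds = {1, 2, 3}
access = {(1, 1), (2, 2), (3, 3), (1, 2)}
p = {1, 2}.__contains__          # the proposition p is true at worlds 1 and 2

# With reflexive accessibility the T axiom, box p -> p, holds everywhere.
t_axiom = all((not box(worlds, access, p, w)) or p(w) for w in worlds)
print(t_axiom)  # True
```

Adjusting `access` changes which axioms come out valid: transitivity validates □p ➞ □□p (S4), and an equivalence relation validates ◊p ➞ □◊p (S5), exactly the correspondence the entry describes.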
Saul Kripke gives the classical modern treatment of the topic of reference, both clarifying the distinction between names and definite descriptions, and opening the door to many subsequent attempts to understand the notion of reference in terms of a causal link between the use of a term and an original episode of attaching a name to its subject.
Semantics is one of the three branches into which ‘semiotic’ is usually divided: the study of the meaning of words, and of the relation of signs to what they designate. In formal studies, a semantics is provided for a formal language when an interpretation or ‘model’ is specified. However, a natural language comes ready interpreted, and the semantic problem is not that of specification but of understanding the relationship between terms of various categories (names, descriptions, predicates, adverbs . . . ) and their meanings. An influential proposal is to attempt to provide a truth definition for the language, which will involve giving the contribution that terms of different kinds make to the truth conditions of sentences containing them.
The basic case of reference is the relation between a name and the person or object which it names. The philosophical problems include trying to elucidate that relation, and to understand whether other semantic relations, such as that between a predicate and the property it expresses, or that between a description and what it describes, or that between me and the word ‘I’, are examples of the same relation or of very different ones. A great deal of modern work on this was stimulated by the American logician Saul Kripke’s Naming and Necessity (1970). It would also be desirable to know whether we can refer to such things as abstract objects, and how to conduct the debate about each such issue. A popular approach, following Gottlob Frege, is to argue that the fundamental unit of analysis should be the whole sentence. The reference of a term becomes a derivative notion: it is whatever it is that defines the term’s contribution to the truth condition of the whole sentence. There need be nothing further to say about it, given that we have a way of understanding the attribution of meaning or truth conditions to sentences. Other approaches seek a more substantive account, possibly in terms of causal or psychological or social links between words and things.
However, following Ramsey and the Italian mathematician G. Peano (1858-1932), it has been customary to distinguish logical paradoxes that depend upon a notion of reference or truth (semantic notions), such as those of the Liar family, Berry, Richard, etc., from the purely logical paradoxes in which no such notions are involved, such as Russell’s paradox, or those of Cantor and Burali-Forti. Paradoxes of the first type seem to depend upon an element of self-reference, in which a sentence is about itself, or in which a phrase refers to something defined by a set of phrases of which it is itself one. It is tempting to feel that this element is responsible for the contradictions, although self-reference itself is often benign (for instance, the sentence ‘All English sentences should have a verb’ includes itself happily in the domain of sentences it is talking about), so the difficulty lies in framing a condition that excludes only pathological self-reference. Paradoxes of the second kind then need a different treatment. Whilst the distinction is convenient, in allowing set theory to proceed by circumventing the latter paradoxes by technical means even while there is no solution to the semantic paradoxes, it may be a way of ignoring the similarities between the two families. There is still the possibility that while there is no agreed solution to the semantic paradoxes, our understanding of Russell’s paradox may be imperfect as well.
Truth and falsity are the two classical truth-values that a statement, proposition, or sentence can take. It is supposed in classical (two-valued) logic that each statement has one of these values, and none has both. A statement is then false if and only if it is not true. The basis of this scheme is that to each statement there corresponds a determinate truth condition, or way the world must be for it to be true: if this condition obtains the statement is true, and otherwise false. Statements may indeed be felicitous or infelicitous in other dimensions (polite, misleading, apposite, witty, etc.), but truth is the central normative notion governing assertion. Considerations of vagueness may introduce greys into this black-and-white scheme. A presupposition is a suppressed premise or background framework of thought necessary to make an argument valid or a position tenable; more narrowly, a proposition whose truth is necessary for either the truth or the falsity of another statement. Thus if ‘p’ presupposes ‘q’, ‘q’ must be true for ‘p’ to be either true or false. In the theory of knowledge, the English philosopher and historian R. G. Collingwood (1889-1943) announced that any proposition capable of truth or falsity stands on a bed of ‘absolute presuppositions’ which are not themselves properly capable of truth or falsity, since a system of thought will contain no way of approaching such a question (a similar idea was later voiced by Wittgenstein in his work On Certainty). The introduction of presupposition therefore means that either a third truth-value must be found, ‘intermediate’ between truth and falsity, or classical logic is preserved, but it is impossible to tell whether a particular sentence expresses a proposition that is a candidate for truth or falsity without knowing more than the formation rules of the language.
Each suggestion carries costs, and there is some consensus that, at least where definite descriptions are involved, the examples are equally well handled by regarding the overall sentence as false when the existence claim fails, and explaining the data that the English philosopher P. F. Strawson (1919-2006) relied upon as the effects of ‘implicature’.
Views about the meaning of terms will often depend on classifying the implications of sayings involving the terms as implicatures or as genuine logical implications of what is said. Implicatures may be divided into two kinds: conversational implicatures and the more subtle category of conventional implicatures. A term may as a matter of convention carry an implicature: thus one of the relations between ‘he is poor and honest’ and ‘he is poor but honest’ is that they have the same content (are true in just the same conditions), but the second has an implicature (that the combination is surprising or significant) that the first lacks.
In classical logic a proposition may be true or false. If the former, it is said to take the truth-value true, and if the latter, the truth-value false. The idea behind the terminology is the analogy between assigning a propositional variable one or other of these values, as is done in providing an interpretation for a formula of the propositional calculus, and assigning an object as the value of any other variable. Logics with intermediate values are called ‘many-valued logics’.
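One standard many-valued scheme is the strong Kleene three-valued logic, where a third value sits between truth and falsity and can model truth-value gaps of the sort presupposition failure is said to create. A sketch, with the three values encoded as 1, ½, and 0 (the encoding and names are ours):

```python
# Strong Kleene three-valued connectives.  T and F behave classically;
# U ('undefined') is the intermediate value modelling a truth-value gap.
T, U, F = 1.0, 0.5, 0.0

def neg(a):        return 1.0 - a
def conj(a, b):    return min(a, b)
def disj(a, b):    return max(a, b)
def implies(a, b): return max(1.0 - a, b)

# Classically 'p or not p' is a tautology; here, when p is undefined,
# the whole disjunction is undefined as well.
print(disj(U, neg(U)))   # 0.5
print(disj(T, neg(T)))   # 1.0
```

The min/max encoding is the design point: it reduces to the classical truth tables whenever both arguments are classical, and propagates the gap otherwise.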
A semantic theory of truth provides a definition of the predicate ‘. . . is true’ for a language that satisfies convention T, the material adequacy condition laid down by Alfred Tarski, born Alfred Teitelbaum (1901-83). His method of ‘recursive’ definition enables us to say for each sentence what it is that its truth consists in, while giving no verbal definition of truth itself. The recursive definition of the truth predicate of a language is always provided in a ‘metalanguage’; Tarski is thus committed to a hierarchy of languages, each with its associated, but different, truth predicate. Whilst this enables the approach to avoid the contradictions of paradoxical sentences such as the Liar, it conflicts with the idea that a language should be able to say everything that there is to say, and other approaches have become increasingly important.
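The shape of such a recursive definition can be shown for a toy language, with Python standing in as the metalanguage: one clause per way of building a sentence, bottoming out in facts stated without the truth predicate. The sentence encoding and names below are our own illustrative choices:

```python
# A Tarski-style recursive truth definition for a toy propositional
# language.  Sentences are nested tuples; `atoms` plays the role of the
# base clauses, assigning truth-values to the atomic sentences.
def true_in(sentence, atoms):
    op = sentence[0]
    if op == 'atom':
        return atoms[sentence[1]]
    if op == 'not':
        return not true_in(sentence[1], atoms)
    if op == 'and':
        return true_in(sentence[1], atoms) and true_in(sentence[2], atoms)
    if op == 'or':
        return true_in(sentence[1], atoms) or true_in(sentence[2], atoms)
    raise ValueError(f'unknown connective: {op}')

# '"snow is white and grass is green" is true' unwinds, clause by clause,
# into claims that no longer mention truth at all.
atoms = {'snow is white': True, 'grass is green': True}
s = ('and', ('atom', 'snow is white'), ('atom', 'grass is green'))
print(true_in(s, atoms))  # True
```

Note that `true_in` is defined in the metalanguage for object-language sentences only; a sentence of the toy language cannot apply the predicate to itself, which is how the hierarchy blocks the Liar.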
The truth condition of a statement is thus the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of ‘snow is white’ is that snow is white; the truth condition of ‘Britain would have capitulated had Hitler invaded’ is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
On this latter view, inferential semantics takes the role of sentences in inference to give a more important key to their meaning than their ‘external’ relations to things in the world. The meaning of a sentence becomes its place in a network of inferences that it legitimates. Also known as functional role semantics or procedural semantics, the view is related to the coherence theory of truth, and suffers from the same suspicion that it divorces meaning from any clear association with things in the world.
Moreover, the semantic theory of truth holds that if a language is provided with a truth definition, this is a sufficient characterization of its concept of truth; there is no further philosophical chapter to write about truth itself or about truth as shared across different languages. The view is similar to the disquotational theory.
The redundancy theory, also known as the ‘deflationary’ view of truth, was fathered by Gottlob Frege and the Cambridge mathematician and philosopher Frank Ramsey (1903-30), who showed how the distinction between the semantic paradoxes, such as that of the Liar, and Russell’s paradox made unnecessary the ramified type theory of Principia Mathematica, and the resulting axiom of reducibility. A Ramsey sentence is formed by taking all the sentences affirmed in a scientific theory that use some term, e.g., ‘quark’, and replacing the term by a variable: instead of saying that quarks have such-and-such properties, the Ramsey sentence says that there is something that has those properties. If the process is repeated for all of a group of theoretical terms, the sentence gives the ‘topic-neutral’ structure of the theory, but removes any implication that we know what the terms so treated denote. It leaves open the possibility of identifying the theoretical item with whatever it is that best fits the description provided. However, it was pointed out by the Cambridge mathematician Newman that if the process is carried out for all except the logical bones of a theory, then by the Löwenheim-Skolem theorem the result will be interpretable in any domain of sufficient size, and the content of the theory may reasonably be felt to have been lost.
All the while, both Frege and Ramsey are agreed that the essential claim is that the predicate ‘. . . is true’ does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on two points: (1) that ‘it is true that p’ says no more nor less than ‘p’ (hence, redundancy); (2) that in less direct contexts, such as ‘everything he said was true’, or ‘all logical consequences of true propositions are true’, the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second may translate as ‘(∀p, q)((p & (p ➞ q)) ➞ q)’, where there is no use of a notion of truth.
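The proposed translation can be checked mechanically: a short truth-table sweep (a sketch, reading ‘➞’ as the material conditional) confirms that the formula holds under every assignment of truth values, with no truth predicate in sight.

```python
from itertools import product

def implies(a, b):
    """Material implication: false only when a is true and b is false."""
    return (not a) or b

# '(∀p, q)((p & (p ➞ q)) ➞ q)' checked over all four truth-value assignments.
holds = all(implies(p and implies(p, q), q)
            for p, q in product([True, False], repeat=2))
print(holds)  # True
```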
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as ‘science aims at the truth’, or ‘truth is a norm governing discourse’. Postmodern writing frequently advocates that we must abandon such norms, along with a discredited ‘objective’ conception of truth. Perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that ‘p’, then ‘p’; discourse is to be regulated by the principle that it is wrong to assert ‘p’ when ‘not-p’.
The disquotational theory can be brought in to justify such a position. Its simplest formulation is the claim that expressions of the form ‘S is true’ mean the same as expressions of the form ‘S’. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ‘‘Dogs bark’ is true’, or whether they say ‘dogs bark’. In the former representation of what they say the sentence ‘Dogs bark’ is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it someone might know that ‘Dogs bark’ is true without knowing what it means (for instance, if he finds it in a list of acknowledged truths, although he does not understand English), and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the ‘redundancy theory of truth’.
Entailment is the relationship between a set of premises and a conclusion when the conclusion follows from the premises. Many philosophers identify this with its being logically impossible that the premises should all be true, yet the conclusion false. Others are sufficiently impressed by the paradoxes of strict implication to look for a stronger relation, which would distinguish between valid and invalid arguments within the sphere of necessary propositions. The search for a stronger notion is the field of relevance logic.
From a systematic theoretical point of view, we may imagine the process of evolution of an empirical science to be a continuous process of induction. Theories are evolved and are expressed in short compass as statements of a large number of individual observations in the form of empirical laws, from which the general laws can be ascertained by comparison. Regarded in this way, the development of a science bears some resemblance to the compilation of a classified catalogue. It is, as it were, a purely empirical enterprise.
But this point of view by no means embraces the whole of the actual process, for it slurs over the important part played by intuition and deductive thought in the development of an exact science. As soon as a science has emerged from its initial stages, theoretical advances are no longer achieved merely by a process of arrangement. Guided by empirical data, the investigator rather develops a system of thought which, in general, is built up logically from a small number of fundamental assumptions, the so-called axioms. We call such a system of thought a ‘theory’. The theory finds the justification for its existence in the fact that it correlates a large number of single observations, and it is just here that the ‘truth’ of the theory lies.
Corresponding to the same complex of empirical data, there may be several theories, which differ from one another to a considerable extent. But as regards the deductions from the theories which are capable of being tested, the agreement between the theories may be so complete that it becomes difficult to find any deductions in which the theories differ from each other. As an example, a case of general interest is available in the province of biology, in the Darwinian theory of the development of species by selection in the struggle for existence, and in the theory of development which is based on the hypothesis of the hereditary transmission of acquired characters. The Origin of Species was principally successful in marshalling the evidence for evolution rather than in providing a convincing mechanism for genetic change; and Darwin himself remained open to the search for additional mechanisms, while also remaining convinced that natural selection was at the heart of it. It was only with the later discovery of the gene as the unit of inheritance that the synthesis known as ‘neo-Darwinism’ became the orthodox theory of evolution in the life sciences.
In the 19th century there arose the attempt to base ethical reasoning on the presumed facts about evolution, a movement particularly associated with the English philosopher of evolution Herbert Spencer (1820-1903). The premise is that later elements in an evolutionary path are better than earlier ones: the application of this principle then requires seeing western society, laissez-faire capitalism, or some other object of approval as more evolved than more ‘primitive’ social forms. Neither the principle nor the applications command much respect. The version of evolutionary ethics called ‘social Darwinism’ emphasises the struggle for natural selection, and draws the conclusion that we should glorify and assist such struggle, usually by enhancing competition and aggressive relations between people in society. More recently the relationship between evolution and ethics has been re-thought in the light of biological discoveries concerning altruism and kin-selection.
Once again, evolutionary psychology attempts to ground psychology in evolutionary principles, on which a variety of higher mental functions may be adaptations, forged in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive dispositions, our emotional repertoire, our moral reactions, including the disposition to detect and punish those who cheat on agreements or who ‘free-ride’ on the work of others, our cognitive structures, and many others. Evolutionary psychology goes hand-in-hand with neurophysiological evidence about the underlying circuitry in the brain which subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself and by William James, as well as by the sociobiology of E.O. Wilson. Such explanatory terms are applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.
Another assumption that is frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin’s view of natural selection as a war-like competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, cooperation appears to exist in a complementary relation to competition. It is the complementary relationships between these forces that produce emergent self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.
According to E.O. Wilson, the ‘human mind evolved to believe in the gods’ and people ‘need a sacred narrative’ to have a sense of higher purpose. Yet it is also clear that the ‘gods’ in his view are merely human constructs and, therefore, there is no basis for dialogue between the world-view of science and that of religion. ‘Science for its part’, said Wilson, ‘will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments.’ The eventual result of the competition between the two, he believes, will be the secularization of the human epic and of religion itself.
Man has come to the threshold of a state of consciousness, regarding his nature and his relationship to the Cosmos, in terms that reflect ‘reality’. By using the processes of nature as metaphor, to describe the forces by which it operates upon and within Man, we come as close to describing ‘reality’ as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide ‘comprehensible’ guides to living. In this way, Man’s imagination and intellect play vital roles in his survival and evolution.
Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of ‘logical positivist’ approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the ‘explanans’ (that which does the explaining) and the ‘explanandum’ (that which is to be explained). The approach culminated in the covering law model of explanation, or the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler’s (1571-1630) laws of planetary motion were explained by being deduced from Newton’s laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include querying whether covering laws are necessary to explanation (we seem to explain everyday events without overtly citing laws); querying whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and querying whether a purely logical relationship is adapted to capturing the requirements we make of explanations. These may include, for instance, that we have a ‘feel’ for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
The argument to the best explanation is the view that once we can select the best of the competing explanations of an event, then we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
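The coin example can be made quantitative. The sketch below uses the numbers from the text; the 99-to-1 prior in favour of fairness is an illustrative assumption. It shows that although the data favour the bias hypothesis by a likelihood ratio of roughly six, a modest antecedent preference for fairness still leaves the fair-coin hypothesis more probable.

```python
import math

n, k = 1000, 530  # 530 heads in 1,000 tosses, as in the example above

def log_binom_pmf(k, n, p):
    """Log-probability of exactly k heads in n tosses, heads-probability p."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

ll_fair = log_binom_pmf(k, n, 0.50)    # hypothesis: the coin is fair
ll_biased = log_binom_pmf(k, n, 0.53)  # hypothesis: bias of 0.53 toward heads

# The data favour the bias hypothesis, but only by a factor of about six.
likelihood_ratio = math.exp(ll_biased - ll_fair)

# With an illustrative prior of 99 to 1 in favour of fairness, the fair
# hypothesis remains far more probable despite fitting the data less well.
prior_fair = 0.99
posterior_fair = prior_fair * math.exp(ll_fair) / (
    prior_fair * math.exp(ll_fair) + (1 - prior_fair) * math.exp(ll_biased))

print(likelihood_ratio, posterior_fair)
```

This is just the qualification in the text made explicit: the ‘best’ explanation of the data need not be the most credible hypothesis once antecedent improbability is weighed in.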
The philosophy of language is the general attempt to understand the components of a working language, the relationship the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotics into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It also mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind, and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problems of logical form, the basis of the division between syntax and semantics, and the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule-following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
On this conception, to understand a sentence is to know its truth-conditions, and the conception has remained so central that those who offer opposing theories characteristically define their position by reference to it. The conception of meaning as truth-conditions need not and should not be advanced as being in itself a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts contextually performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
The meaning of a complex expression is a function of the meanings of its constituents. This is just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the terms in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
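The clause-by-clause account can be mimicked in a toy compositional semantics, where the semantic value of a complex expression is computed from the values assigned to its constituents: names get referents, predicates get extensions, and sentence-forming operators get functions of the values of the sentences they operate on. The lexicon below is an invented illustration, not a fixed theory.

```python
# Toy compositional semantics: the value of a complex expression is a
# function of the values of its constituents, as the text describes.

REFERENCE = {"London": "london", "Paris": "paris"}             # singular terms
EXTENSION = {"is beautiful": {"paris"},                         # predicates
             "is large": {"london", "paris"}}

def value(expr):
    """Truth-conditional contribution, computed by recursion on structure."""
    kind = expr[0]
    if kind == "atomic":                    # ("atomic", name, predicate)
        _, name, pred = expr
        return REFERENCE[name] in EXTENSION[pred]
    if kind == "and":                       # sentence-forming operator
        return value(expr[1]) and value(expr[2])
    if kind == "not":
        return not value(expr[1])
    raise ValueError(f"unknown expression kind: {kind}")

# 'Paris is beautiful and London is large' is true iff both conjuncts are.
print(value(("and", ("atomic", "Paris", "is beautiful"),
                    ("atomic", "London", "is large"))))
```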
The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Such is the axiom: ‘London’ refers to the city in which there was a huge fire in 1666, a true statement about the reference of ‘London’. It is a consequence of a theory which substitutes this axiom for the simple axiom of our truth theory that ‘London is beautiful’ is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name ‘London’ without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the requirement of fitness in a way which does not presuppose any previous, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person’s language to be truly describable by a semantic theory containing a given semantic axiom.
Since the content of a claim that the sentence ‘Paris is beautiful’ is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth. Its central claim is that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning. If the claim that the sentence ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher A.J. Ayer, the later Wittgenstein, Quine, Strawson, Horwich and - confusingly and inconsistently, if this article is correct - Frege himself. But is the minimal theory correct?
The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence; but in fact, it seems that each instance of the equivalence principle can itself be explained. An instance such as ‘‘London is beautiful’ is true if and only if London is beautiful’ follows from truths about the references of its parts, such as that ‘London’ refers to London and that ‘is beautiful’ is true of beautiful things. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth-condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’.
Sometimes, however, counterfactual conditionals are known as subjunctive conditionals. A counterfactual conditional is a conditional of the form ‘if p were to happen q would’, or ‘if p were to have happened q would have happened’, where the supposition of ‘p’ is contrary to the known fact that ‘not-p’. Such assertions are nevertheless useful: ‘if you had broken the bone, the X-ray would have looked different’, or ‘if the reactor were to fail, this mechanism would click in’, are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals (‘if the metal were to be heated, it would expand’), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever ‘p’ is false, so there would be no division between true and false counterfactuals.
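The point about material implication can be displayed directly: in the truth table below (a minimal sketch) the conditional comes out true in both rows where ‘p’ is false, so a false antecedent alone would make every counterfactual trivially true.

```python
from itertools import product

def material_implication(p, q):
    """'p -> q' in the propositional calculus: false only for p true, q false."""
    return (not p) or q

# Every row with a false antecedent comes out true, which is why the material
# conditional cannot distinguish true from false counterfactuals.
for p, q in product([True, False], repeat=2):
    print(f"p={p!s:<5} q={q!s:<5} p->q={material_implication(p, q)}")
```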
Although the subjunctive form indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive form or a simple conditional form: ‘If you run out of water, you will be in trouble’ seems equivalent to ‘if you were to run out of water, you would be in trouble’. In other contexts there is a big difference: ‘If Oswald did not kill Kennedy, someone else did’ is clearly true, whereas ‘if Oswald had not killed Kennedy, someone would have’ is most probably false.
The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether ‘q’ is true in the ‘most similar’ possible worlds to ours in which ‘p’ is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of the sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and categorizing them as counterfactuals or not may be of limited use.
A conditional is any proposition of the form ‘if p then q’. The condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is that of material implication, merely telling us that either not-p, or q. Stronger conditionals include elements of modality, corresponding to the thought that ‘if p is true then q must be true’. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether this flexibility should be explained semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning, with surface differences arising from other implicatures.
We now turn to a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles Sanders Peirce (1839-1914), and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted a theoretical sentence as only a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that beliefs, including, for example, belief in God, are true provided they work satisfactorily in the widest sense of the word. On James’s view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparently subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an ‘automatic sweetheart’ or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others. The implication that this is what makes it true that other persons have minds is the disturbing part.
Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1926-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant’s doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.
Functionalism in the philosophy of mind is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion. It could be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since according to it mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or ‘realization’ of the program the machine is running. The principal advantages of functionalism include its fit with the way we come to know of mental states, both of ourselves and of others, which is via their effects on behaviour and on other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds.
It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal structure is very different from our own. It may then seem as though beliefs and desires can be ‘variably realized’ in different causal architectures, just as much as they can be in different neurophysiological states.
The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notion that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by the American philosopher C. S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
The Association for International Conciliation first published William James’s pacifist statement, 'The Moral Equivalent of War', in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism - a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar represent standards of the time.
Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of British biologist Charles Darwin, which suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. Many of the logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the earlier positivist assertion that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called 'the will to believe' and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.
The pragmatist tradition was revitalized in the 1980s by American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has also renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty's interpretation of the tradition.
There are apparent facts to be explained about the distinction between knowing things and knowing about things. Knowledge about things is essentially propositional knowledge, where the mental states involved refer to specific things. This propositional knowledge can be more or less comprehensive, can be justified inferentially and on the basis of experience, and can be communicated. Knowing things, on the other hand, involves experience of things. This experiential knowledge provides an epistemic basis for knowledge about things, and in some sense is difficult or impossible to communicate, perhaps because it is more or less vague.
If one is unconvinced by James's and Russell's reasons for holding that experience of, and reference to, things is at least sometimes direct, it may seem preferable to join Helmholtz in asserting that knowing things and knowing about things both involve propositional attitudes. To do so would at least allow one the advantages of unified accounts of the nature of knowledge (propositional knowledge would be fundamental) and of the nature of reference: Indirect reference would be the only kind. The two kinds of knowledge might yet be importantly different if the mental states involved have different sorts of causal origins in the thinker's cognitive faculties, involve different sorts of propositional attitudes, and differ in other constitutive respects relevant to the relative vagueness and communicability of the mental states.
Foundationalism is a view concerning the structure of the system of justified belief possessed by a given individual. Such a system is divided into foundation and superstructure, so related that beliefs in the latter depend on beliefs in the former for their justification but not vice versa. The view is sometimes stated in terms of the structure of knowledge rather than of justified belief. If knowledge is true justified belief (plus, perhaps, some further condition), one may think of knowledge as exhibiting a Foundationalist structure by virtue of the justified belief it involves. In any event, the doctrine will here be treated primarily as a thesis about the structure of justified belief.
The first step toward a more explicit statement of the position is to distinguish between mediate (indirect) and immediate (direct) justification of belief. To say that a belief is mediately justified is to say that it is justified by some appropriate relation to other justified beliefs, i.e., by being inferred from other justified beliefs that provide adequate support for it, or, alternatively, by being based on adequate reasons. Thus, if my reason for supposing that you are depressed is that you look listless, speak in an unaccustomed flat tone of voice, exhibit no interest in things you are usually interested in, etc., then my belief that you are depressed is justified, if at all, by being adequately supported by my justified beliefs that you look listless, speak in a flat tone of voice. . . .
A belief is immediately justified, on the other hand, if its justification is of another sort, e.g., if it is justified by being based on experience or if it is self-justified. Thus my belief that you look listless may not be based on anything else I am justified in believing but just on the way you look to me. And my belief that 2 + 3 = 5 may be justified not because I infer it from something else I justifiably believe, but simply because it seems obviously true to me.
In these terms we can put the thesis of Foundationalism by saying that all mediately justified beliefs owe their justification, ultimately, to immediately justified beliefs. To get a more detailed idea of what this amounts to, it will be useful to consider the most important argument for Foundationalism, the regress argument. Consider a mediately justified belief that 'p' (we are using lowercase letters as dummies for belief contents). It is, by hypothesis, justified by its relation to one or more other justified beliefs, 'q' and 'r'. Now what justifies each of these, e.g., 'q'? If it too is mediately justified, that is because it is related in the appropriate way to one or more further justified beliefs, e.g., 's'. By virtue of what is 's' justified? If it is mediately justified, the same problem arises at the next stage. To avoid both circularity and an infinite regress, we are forced to suppose that in tracing back this chain we arrive at one or more immediately justified beliefs that stop the regress, since their justification does not depend on any further justified belief.
According to the infinite regress argument for Foundationalism, if every justified belief could be justified only by inferring it from some further justified belief, there would have to be an infinite regress of justifications. Because there can be no such regress, there must be justified beliefs that are not justified by appeal to some further justified belief. Instead, they are non-inferentially or immediately justified; they are basic or foundational, the ground on which all our other justified beliefs are to rest.
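The regress argument can be rendered schematically. Writing J(b) for 'belief b is justified' and b′ ⇒ b for 'b′ mediately justifies b' (notation introduced here purely for illustration, not part of the original argument's statement), the outline is:

```latex
% Premise under attack: every justified belief is justified only by a
% further justified belief
\forall b \, \bigl[\, J(b) \rightarrow \exists b' \, \bigl( J(b') \wedge (b' \Rightarrow b) \bigr) \,\bigr]
% This would give every justified belief an endless (or circular) chain
\cdots \Rightarrow s \Rightarrow q \Rightarrow p
% Since neither an infinite regress nor a circle can confer justification,
% the premise must fail: some beliefs are justified without resting on
% further justified beliefs
\exists b \, \bigl[\, J(b) \wedge \neg \exists b' \, \bigl( J(b') \wedge (b' \Rightarrow b) \bigr) \,\bigr]
```

The final line is just the Foundationalist's conclusion: there exist immediately justified, basic beliefs.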
Variants of this ancient argument have persuaded and continue to persuade many philosophers that the structure of epistemic justification must be foundational. Aristotle recognized that if we are to have knowledge of the conclusion of an argument on the basis of its premises, we must know the premises. But if knowledge of a premise always required knowledge of some further proposition, then in order to know the premise we would have to know each proposition in an infinite regress of propositions. Since this is impossible, there must be some propositions that are known, but not by demonstration from further propositions: There must be basic, non-demonstrable knowledge, which grounds the rest of our knowledge.
Foundationalist enthusiasm for regress arguments often overlooks the fact that they have also been advanced on behalf of scepticism, relativism, fideism, conceptualism and Coherentism. Sceptics agree with Foundationalists both that there can be no infinite regress of justifications and that, nevertheless, there would have to be one if every justified belief could be justified only inferentially, by appeal to some further justified belief. But sceptics think all genuine justification must be inferential in this way - the Foundationalist's talk of immediate justification merely obscures the absence of any rational justification properly so-called. Sceptics conclude that none of our beliefs is justified. Relativists follow essentially the same pattern of sceptical argument, concluding that our beliefs can only be justified relative to the arbitrary starting assumptions or presuppositions either of an individual or of a form of life.
Regress arguments are not limited to epistemology. In ethics there is Aristotle's regress argument (in the Nicomachean Ethics) for the existence of a single end of rational action. In metaphysics there is Aquinas's regress argument for an unmoved mover: If every mover were itself in motion, there would have to be an infinite sequence of movers, each moved by a further mover; since there can be no such sequence, there is an unmoved mover. A related argument has recently been given to show that not every state of affairs can have an explanation or cause of the sort posited by principles of sufficient reason, and that such principles are therefore false, for reasons having to do with their own concepts of explanation (Post, 1980; Post, 1987).
Foundationalism has been presented here as a view concerning the structure that is in fact exhibited by the justified beliefs of a particular person, but it has sometimes been construed in ways that deviate from each element of that characterization. Thus, it is sometimes taken to characterize the structure of our knowledge or of scientific knowledge, rather than the structure of the cognitive system of an individual subject. Again, Foundationalism is sometimes thought of as concerned with how knowledge (justified belief) is acquired or built up, rather than with the structure of what a person finds herself with at a certain point. Thus some people think of scientific inquiry as starting with the recording of observations (immediately justified observational beliefs), and then inductively inferring generalizations. Foundationalism has also been thought of not as a description of the finished product or of the mode of acquisition, but rather as a proposal for how the system could be reconstructed, an indication of how it could all be built up from immediately justified foundations. This last would seem to be the kind of Foundationalism we find in Descartes. However, Foundationalism is most usually thought of in contemporary Anglo-American epistemology as an account of the structure actually exhibited by an individual's system of justified belief.
It should also be noted that the term is used with a deplorable looseness in contemporary literary circles, and even in certain corners of the philosophical world, to refer to anything from realism - the view that reality has a definite constitution regardless of how we think of it or what we believe about it - to various kinds of absolutism in ethics, politics, or wherever, and even to the truism that truth is stable (if a proposition is true, it stays true).
Since Foundationalism holds that all mediate justification rests on immediately justified beliefs, we may divide variations in forms of the view into those that have to do with the immediately justified beliefs, the foundations, and those that have to do with the modes of derivation of other beliefs from these, how the superstructure is built up. The most obvious variation of the first sort has to do with what modes of immediate justification are recognized. Many treatments, both pro and con, are parochially restricted to one form of immediate justification - self-evidence, self-justification (self-warrant), justification by a direct awareness of what the belief is about, or whatever. It is then unwarrantably assumed by critics that disposing of that one form will dispose of Foundationalism generally (Alston, 1989). The emphasis historically has been on beliefs that simply record what is directly given in experience (Lewis, 1946) and on self-evident propositions (Descartes's clear and distinct perceptions and Locke's perception of the agreement and disagreement of ideas). But self-warrant has also recently received a great deal of attention (Alston, 1989), and there is also a reliabilist version according to which a belief can be immediately justified just by being acquired by a reliable belief-forming process that does not take other beliefs as inputs (BonJour, 1985).
Foundationalists also differ as to what further constraints, if any, are put on foundations. Historically, it has been common to require of the foundations of knowledge that they exhibit certain epistemic immunities, as we might put it - immunity from error, refutation or doubt. Thus Descartes, along with many other seventeenth- and eighteenth-century philosophers, took it that any knowledge worthy of the name would be based on cognitions the truth of which is guaranteed (infallible), that are maximally stable, immune from ever being shown to be mistaken (incorrigible), and concerning which no reasonable doubt could be raised (indubitable). Hence the search in the Meditations for a divine guarantee of our faculty of rational intuition. Criticisms of Foundationalism have often been directed at these constraints (Lehrer, 1974; Will, 1974; both responded to in Alston, 1989). It is important to realize, however, that a position that is Foundationalist in a distinctive sense can be formulated without imposing any such requirements on foundations.
There are various ways of distinguishing types of Foundationalist epistemology by the use of the variations we have been enumerating. Plantinga (1983) has put forward an influential conception of classical Foundationalism, specified in terms of limitations on the foundations. He construes this as a disjunction of ancient and medieval Foundationalism, which takes foundations to comprise what is self-evident and evident to the senses, and modern Foundationalism, which replaces 'evident to the senses' with 'incorrigible', a term that in practice was taken to apply only to beliefs about one's present states of consciousness. Plantinga developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what is variously called strong or extreme Foundationalism and moderate, modest or minimal Foundationalism, with the distinction depending on whether various epistemic immunities are required of foundations. Finally, there is the distinction between simple and iterative Foundationalism (Alston, 1989), depending on whether it is required of a foundation only that it be immediately justified, or whether it is also required that the higher-level belief that the former belief is immediately justified be itself immediately justified. It has been suggested that the plausibility of the stronger requirement stems from a 'level confusion' between beliefs on different levels.
The classic opposition is between Foundationalism and Coherentism. Coherentism denies any immediate justification. It deals with the regress argument by rejecting linear chains of justification and, in effect, taking the total system of belief to be epistemically primary. A particular belief is justified to the extent that it is integrated into a coherent system of belief. More recently, pragmatists influenced by John Dewey have developed a position known as contextualism, which avoids ascribing any overall structure to knowledge. Questions concerning justification can only arise in a particular context, defined in terms of assumptions that are simply taken for granted, though they can be questioned in other contexts, where other assumptions will be privileged.
Foundationalism can be attacked both in its commitment to immediate justification and in its claim that all mediately justified beliefs ultimately depend on the former. Though it is the latter that is the position's weakest point, most of the critical fire has been directed at the former. As pointed out above, much of this criticism has been directed against some particular form of immediate justification, ignoring the possibility of other forms. Thus, much anti-Foundationalist artillery has been directed at the 'myth of the given' - the idea that facts or things are given to consciousness in a pre-conceptual, pre-judgmental mode, and that beliefs can be justified on that basis (Sellars, 1963). The most prominent general argument against immediate justification is the 'level ascent' argument, according to which whatever is taken to immediately justify a belief can do so only if the subject is justified in supposing that the putative justifier has what it takes to do so; hence the justification of the original belief depends on the justification of this higher-level belief after all (BonJour, 1985). Foundationalists reply that we lack adequate support for any such higher-level requirement for justification, and that if it were imposed we would be launched on an infinite regress, for a similar requirement would hold equally for the higher-level belief that the original justifier was efficacious.
Coherence is a major player in the theater of knowledge. There are coherence theories of belief, truth, and justification. These combine in various ways to yield theories of knowledge. We will proceed from belief through justification to truth. Coherence theories of belief are concerned with the content of beliefs. Consider a belief you now have, the belief that you are reading a page in a book. What makes that belief the belief that it is? What makes it the belief that you are reading a page in a book rather than the belief that you have a centaur in the garden?
One answer is that the belief has a coherent place or role in a system of beliefs. Perception has an influence on belief. You respond to sensory stimuli by believing that you are reading a page in a book rather than believing that you have a centaur in the garden. Belief has an influence on action. You will act differently if you believe that you are reading a page than if you believe something about a centaur. Perception and action underdetermine the content of belief, however; the same stimuli may produce various beliefs, and various beliefs may produce the same action. The role that gives the belief the content it has is the role it plays in a network of relations to other beliefs, the role it plays in inference and implication. For example, I infer different things from believing that I am reading a page in a book than I infer from other beliefs, just as I infer that belief from different things than I infer other beliefs from.
The input of perception and the output of action supplement the central role of the systematic relations the belief has to other beliefs, but it is the systematic relations that give the belief the specific content it has. They are the fundamental source of the content of belief. That is how coherence comes in. A belief has the content that it does because of the way in which it coheres within a system of beliefs (Rosenberg, 1988). We might distinguish weak coherence theories of the content of belief from strong coherence theories. Weak coherence theories affirm that coherence is one determinant of the content of belief. Strong coherence theories affirm that coherence is the sole determinant of the content of belief.
When we turn from belief to justification, we confront a corresponding group of coherence theories. What makes one belief justified and another not? The answer is the way it coheres with the background system of beliefs. Again, there is a distinction between weak and strong theories of coherence. Weak theories tell us that the way in which a belief coheres with a background system of beliefs is one determinant of justification, other typical determinants being perception, memory and intuition. Strong theories, by contrast, tell us that justification is solely a matter of how a belief coheres with a system of beliefs. There is, however, another distinction that cuts across the distinction between weak and strong coherence theories of justification. It is the distinction between positive and negative coherence theories (Pollock, 1986). A positive coherence theory tells us that if a belief coheres with a background system of beliefs, then the belief is justified. A negative coherence theory tells us that if a belief fails to cohere with a background system of beliefs, then the belief is not justified. We might put this by saying that, according to a positive coherence theory, coherence has the power to produce justification, while according to a negative coherence theory, coherence has only the power to nullify justification.
A strong coherence theory of justification is a combination of a positive and a negative theory that tells us that a belief is justified if and only if it coheres with a background system of beliefs.
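The combination can be stated compactly. Writing C(b, S) for 'belief b coheres with background system S' and J(b) for 'b is justified' (notation introduced here for illustration only), the positive and negative theories together yield the biconditional that defines the strong theory:

```latex
% Positive coherence theory: coherence produces justification
C(b, S) \rightarrow J(b)
% Negative coherence theory: failure to cohere nullifies justification
\neg C(b, S) \rightarrow \neg J(b)
\quad\text{(equivalently, } J(b) \rightarrow C(b, S)\text{)}
% Their conjunction is the strong coherence theory of justification
J(b) \leftrightarrow C(b, S)
```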
Traditionally, belief has been of epistemological interest in its propositional guise: S believes that p, where p is a proposition toward which an agent, S, exhibits an attitude of acceptance. Not all belief is of this sort. If I trust what you say, I believe you. And someone may believe in Mrs. Thatcher, or in a free-market economy, or in God. It is sometimes supposed that all belief is reducible to propositional belief, belief-that. Thus, my believing you might be thought a matter of my believing, perhaps, that what you say is true, and your belief in free markets or in God a matter of your believing that a free-market economy is desirable or that God exists.
It is doubtful, however, that non-propositional believing can, in every case, be reduced in this way. Debate on this point has tended to focus on an apparent distinction between belief-that and belief-in, and the application of this distinction to belief in God. Some philosophers have followed Aquinas (1225-74) in supposing that to believe in God is simply to believe that certain truths hold: That God exists, that he is benevolent, etc. Others (e.g., Hick, 1957) argue that belief-in is a distinctive attitude, one that includes essentially an element of trust. More commonly, belief-in has been taken to involve a combination of propositional belief together with some further attitude.
H.H. Price (1969) defends the claim that there are different sorts of belief-in, some, but not all, reducible to belief-that. If you believe in God, you believe that God exists, that God is good, etc., but, according to Price, your belief involves, in addition, a certain complex pro-attitude toward its object. One might attempt to analyze this further attitude in terms of additional beliefs-that: 'S' believes in 'χ' just in case (1) 'S' believes that 'χ' exists (and perhaps holds further factual beliefs about 'χ'); (2) 'S' believes that 'χ' is good or valuable in some respect; and (3) 'S' believes that 'χ's' being good or valuable in this respect is itself a good thing. An analysis of this sort, however, fails adequately to capture the affective component of belief-in. Thus, according to Price, if you believe in God, your belief is not merely that certain truths hold; you possess, in addition, an attitude of commitment and trust toward God.
Notoriously, belief-in outruns the evidence for the corresponding belief-that. Does this diminish its rationality? If belief-in presupposes belief-that, it might be thought that the evidential standards for the former must be at least as high as the standards for the latter. And any additional pro-attitude might be thought to require a further layer of justification not required for cases of belief-that.
Some philosophers have argued that, at least for cases in which belief-in is synonymous with faith (or faith-in), evidential thresholds for constituent propositional beliefs are diminished. You may reasonably have faith in God or Mrs. Thatcher, even though beliefs about their respective attitudes, were you to harbour them, would be evidentially substandard.
Belief-in may be, in general, less susceptible to alteration in the face of unfavorable evidence than belief-that. A believer who encounters evidence against God's existence may remain unshaken in his belief, in part because the evidence does not bear on his pro-attitude. So long as this attitude remains united with his belief that God exists, the belief may survive epistemic buffeting - and reasonably so - in a way that an ordinary propositional belief-that would not.
At least two large sets of questions are properly treated under the heading of the epistemology of religious belief. First, there is a set of broadly theological questions about the relationship between faith and reason, between what one knows by way of reason, broadly construed, and what one knows by way of faith. These questions may be called theological because one will find them of interest only if one thinks that there is in fact such a thing as faith, and that we do know something by way of it. Secondly, there is a whole set of questions having to do with whether and to what degree religious beliefs have warrant, or justification, or positive epistemic status. This second set of questions is epistemological rather than theological.
Epistemology, so we are told, is the theory of knowledge: Its aim is to discern and explain that quality or quantity enough of which distinguishes knowledge from mere true belief. We need a name for this quality or quantity, whatever precisely it is; call it warrant. From this point of view, the epistemology of religious belief should center on the question whether religious belief has warrant, and if it does, how much it has and how it gets it. As a matter of fact, however, epistemological discussion of religious belief, at least since the Enlightenment (and in the Western world, especially the English-speaking Western world), has tended to focus not on the question whether religious belief has warrant, but on whether it is justified. More precisely, it has tended to focus on theistic belief - the belief that there exists a person like the God of traditional Christianity, Judaism and Islam: An almighty, all-knowing and wholly benevolent spiritual person who has created the world. The chief question, therefore, has been whether theistic belief is justified; the same question is often put by asking whether theistic belief is rational or rationally acceptable. Still further, the typical way of addressing this question has been by way of discussing arguments for and against the existence of God. On the pro side, there are the traditional theistic proofs or arguments: The ontological, cosmological and teleological arguments, using Kant's terms for them. On the other side, the anti-theistic side, the principal argument is the argument from evil, the argument that it is not possible, or at least not probable, that there be such a person as God, given all the pain, suffering and evil the world displays.
This argument is flanked by subsidiary arguments, such as the claim that the very concept of God is incoherent, because, for example, it is impossible that there be a person without a body, and the Freudian and Marxist claims that religious belief arises out of a sort of magnification and projection into the heavens of human attributes we think important.
But why has discussion centered on justification rather than warrant? And precisely what is justification? And why has the discussion of justification of theistic belief focussed so heavily on arguments for and against the existence of God?
As to the first question, we can see why once we see that the dominant epistemological tradition in modern Western philosophy has tended to identify warrant with justification. On this way of looking at the matter, warrant, that which distinguishes knowledge from mere true belief, just is justification. The justified-true-belief theory of knowledge - the theory according to which knowledge is justified true belief - has enjoyed the status of orthodoxy. According to this view, knowledge is justified true belief; therefore any of your beliefs has warrant for you if and only if you are justified in holding it.
But what is justification? What is it to be justified in holding a belief? To get a proper sense of the answer, we must turn to those twin towers of Western epistemology, René Descartes and, especially, John Locke. The first thing to see is that according to Descartes and Locke, there are epistemic or intellectual duties, or obligations, or requirements. Thus, Locke:
Faith is nothing but a firm assent of the mind: which, if it be regulated, as is our duty, cannot be afforded to anything but upon good reason; and so cannot be opposite to it. He that believes without having any reason for believing, may be in love with his own fancies; but neither seeks truth as he ought, nor pays the obedience due to his Maker, who would have him use those discerning faculties he has given him, to keep him out of mistake and error. He that does not this to the best of his power, however he sometimes lights on truth, is in the right but by chance; and I know not whether the luckiness of the accident will excuse the irregularity of his proceeding. This at least is certain, that he must be accountable for whatever mistakes he runs into: whereas he that makes use of the light and faculties God has given him, and seeks sincerely to discover truth by those helps and abilities he has, may have this satisfaction in doing his duty as a rational creature, that, though he should miss truth, he will not miss the reward of it. For he governs his assent right, and places it as he should, who, in any case or matter whatsoever, believes or disbelieves according as reason directs him. He that does otherwise, transgresses against his own light, and misuses those faculties which were given him.
Rational creatures, creatures with reason, creatures capable of believing propositions (and of disbelieving and being agnostic with respect to them), says Locke, have duties and obligations with respect to the regulation of their belief or assent. Now the central core of the notion of justification (as the etymology of the term indicates) is this: one is justified in doing something or in believing a certain way if in so doing one is innocent of wrongdoing and hence not properly subject to blame or censure. You are justified, therefore, if you have violated no duties or obligations, if you have conformed to the relevant requirements, if you are within your rights. To be justified in believing something, then, is to be within your rights in so believing, to be flouting no duty, to be satisfying your epistemic duties and obligations. This way of thinking of justification has been the dominant way of thinking about justification, and it has many important contemporary representatives. Roderick Chisholm, for example (as distinguished an epistemologist as the twentieth century can boast), in his earlier work explicitly explains justification in terms of epistemic duty (Chisholm, 1977).
The (or a) main epistemological question about religious belief, therefore, has been whether or not religious belief in general, and theistic belief in particular, is justified. And the traditional way to answer that question has been to inquire into the arguments for and against theism. Why this emphasis upon arguments? An argument is a way of marshalling your propositional evidence, the evidence from other propositions you believe, for or against a given proposition. And the reason for the emphasis upon argument is the assumption that theistic belief is justified if and only if there is sufficient propositional evidence for it. If there is not much by way of propositional evidence for theism, then you are not justified in accepting it. Moreover, if you accept theistic belief without having propositional evidence for it, then you are going contrary to epistemic duty and are therefore unjustified in accepting it. Thus W.K. Clifford trumpets that it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence; his is only the most strident voice in a vast chorus insisting that there is an intellectual duty not to believe in God unless you have propositional evidence for that belief. A few others in the choir: Sigmund Freud, Brand Blanshard, H.H. Price, Bertrand Russell and Michael Scriven.
Now, why has the justification of theistic belief been identified with there being propositional evidence for it? Justification is a matter of being blameless, of having done one's duty (in this context, one's epistemic duty): what, precisely, has this to do with having propositional evidence?
The answer, once again, is to be found in Descartes and, especially, Locke. As we saw, justification is the property your beliefs have when, in forming and holding them, you conform to your epistemic duties and obligations. But according to Locke, a central epistemic duty is this: to believe a proposition only to the degree that it is probable with respect to what is certain for you. What propositions are certain for you? First, according to Descartes and Locke, propositions about your own immediate experience: that you have a mild headache, or that it seems to you that you see something red. And second, propositions that are self-evident for you: necessarily true propositions so obvious that you cannot so much as entertain them without seeing that they must be true. (Examples would be simple arithmetical and logical propositions, together with such propositions as that the whole is at least as large as its parts, that red is a colour, and that whatever exists has properties.) Propositions of these two sorts are certain for you; as for other propositions, you are justified in believing one if and only if, and only to the degree to which, it is probable with respect to what is certain for you. According to Locke, therefore, and according to the whole modern foundationalist tradition initiated by Locke and Descartes (a tradition that until recently dominated Western thinking about these topics), there is a duty not to accept a proposition unless it is certain or probable with respect to what is certain.
In the present context, therefore, the central Lockean assumption is that there is an epistemic duty not to accept theistic belief unless it is probable with respect to what is certain for you; as a consequence, theistic belief is justified only if the existence of God is probable with respect to what is certain. Locke does not argue for this proposition; he simply announces it, and epistemological discussion of theistic belief has for the most part followed him in making this assumption. This enables us to see why epistemological discussion of theistic belief has tended to focus on the arguments for and against theism: on the view in question, theistic belief is justified only if it is probable with respect to what is certain, and the way to show that it is probable with respect to what is certain is to give arguments for it from premises that are certain, or sufficiently probable with respect to what is certain.
There are at least three important problems with this approach to the epistemology of theistic belief. First, the standards for theistic arguments have traditionally been set absurdly high (and perhaps part of the responsibility for this must be laid at the door of some who have offered these arguments and claimed that they constitute wholly demonstrative proofs). The idea seems to be that a good theistic argument must start from what is self-evident and proceed majestically by way of self-evidently valid argument forms to its conclusion. It is no wonder that few if any theistic arguments meet that lofty standard, particularly in view of the fact that almost no philosophical arguments of any sort meet it. (Think of your favourite philosophical argument: does it really start from premises that are self-evident and move by way of self-evidently valid argument forms to its conclusion?)
Secondly, attention has been mostly confined to three theistic arguments: the traditional ontological, cosmological and teleological arguments. In fact, there are many more good arguments: arguments from the nature of proper function, and from the nature of propositions, numbers and sets; arguments from intentionality, from counterfactuals, from the confluence of epistemic reliability with epistemic justification, from reference, simplicity, intuition and love; arguments from colours and flavours, from miracles, play and enjoyment, morality, from beauty and from the meaning of life. There is even a theistic argument from the existence of evil.
But there is a third and deeper problem here. The basic assumption is that theistic belief is justified only if it is or can be shown to be probable with respect to some body of evidence or propositions, perhaps those that are self-evident or about one's own mental life; but is this assumption true? The idea is that theistic belief is very much like a scientific hypothesis: it is acceptable if and only if there is an appropriate balance of propositional evidence in favour of it. But why believe a thing like that? Perhaps the theory of relativity or the theory of evolution is like that; such a theory has been devised to explain the phenomena and gets all its warrant from its success in so doing. Other beliefs, however, e.g., memory beliefs or belief in other minds, are not like that; they are not hypotheses at all, and are not accepted because of their explanatory power. They are, instead, the propositions from which one starts in attempting to give evidence for a hypothesis. Now, why assume that theistic belief, belief in God, is in this regard more like a scientific hypothesis than like, say, a memory belief? Why think that the justification of theistic belief depends upon the evidential relation of theistic belief to other things one believes? According to Locke and the beginnings of this tradition, it is because there is a duty not to assent to a proposition unless it is probable with respect to what is certain for you; but is there really any such duty? No one has succeeded in showing that, say, belief in other minds, or the belief that there has been a past, is probable with respect to what is certain for us. Suppose it is not: does it follow that you are living in epistemic sin if you believe that there are other minds? Or a past?
There are urgent questions about any view according to which one has duties of the sort do not believe p unless it is probable with respect to what is certain for you. First, if this is a duty, is it one to which I can conform? My beliefs are for the most part not within my control; certainly they are not within my direct control. I believe that there has been a past and that there are other people; even if these beliefs are not probable with respect to what is certain for me (and even if I came to know this), I could not give them up. Whether or not I accept such beliefs is not really up to me at all, for I can no more refrain from believing these things than I can refrain from conforming to the law of gravity. Second, is there really any reason for thinking I have such a duty? Nearly everyone recognizes such duties as that of not engaging in gratuitous cruelty, taking care of one's children and one's aged parents, and the like; but do we also find ourselves recognizing that there is a duty not to believe what is not probable (or what we cannot see to be probable) with respect to what is certain for us? It hardly seems so. It is therefore hard to see why being justified in believing in God requires that the existence of God be probable with respect to some such body of evidence as the set of propositions certain for you. Perhaps theistic belief is properly basic, i.e., such that one is perfectly justified in accepting it without accepting it on the evidential basis of other propositions one believes.
Taking justification in that original etymological fashion, therefore, there is every reason to doubt that one is justified in holding theistic belief only if one has evidence for it. Of course, the term justification has undergone various analogical extensions in the work of various philosophers; it has been used to name various properties that are different from justification etymologically so-called, but analogically related to it. In one such use, the term means propositional evidence: to say that a belief is justified for someone is to say that he has propositional evidence (or sufficient propositional evidence) for it. So taken, however, the question whether theistic belief is justified loses some of its interest; for it is not clear (given this use) that there is anything wrong with holding beliefs that are unjustified in that sense. Perhaps there is no propositional evidence for one's memory beliefs; if so, that would not be a mark against them, and would not suggest that there is something wrong with holding them.
Another analogically connected way to think about justification (a way favoured by the later Chisholm) is to think of it as simply a relation of fittingness between a given proposition and one's epistemic base, which includes the other things one believes as well as one's experience. Perhaps that is the way justification is to be thought of; but then it is no longer at all obvious that theistic belief has this property of justification only if it is probable with respect to some other body of evidence. Perhaps, again, it is like memory beliefs in this regard.
To recapitulate: the dominant Western tradition has been inclined to identify warrant with justification; it has been inclined to understand the latter in terms of duty and the fulfilment of obligation, and hence to suppose that there is an epistemic duty not to believe in God unless you have good propositional evidence for the existence of God. Epistemological discussion of theistic belief, as a consequence, has concentrated on the propositional evidence for and against theistic belief, i.e., on arguments for and against theistic belief. But there is excellent reason to doubt that there are epistemic duties of the sort the tradition appeals to here.
And perhaps it was a mistake to identify warrant with justification in the first place. The beliefs of someone who, through madness, thinks he is Napoleon have little warrant for him; his problem, however, need not be dereliction of epistemic duty. He is in difficulty, but it is not, or not necessarily, that of failing to fulfil epistemic duty. He may be doing his epistemic best; he may be doing his epistemic duty in excelsis; but his madness prevents his beliefs from having much by way of warrant. His lack of warrant is not a matter of being unjustified, i.e., failing to fulfil epistemic duty. So warrant and being epistemically justified are not the same thing. Another example: suppose (to use the favourite twentieth-century variant of Descartes' evil-demon example) I have been captured by Alpha Centaurian super-scientists running a cognitive experiment; they remove my brain, keep it alive in some artificial nutrient, and by virtue of their advanced technology induce in me the beliefs I might otherwise have if I were going about my usual business. Then my beliefs would not have much by way of warrant; but would that be because I was failing to do my epistemic duty?
As a result of these and other problems, another, externalist way of thinking about knowledge has appeared in recent epistemology. A theory of justification is internalist if and only if it requires that all of the factors needed for a belief to be epistemically justified be cognitively accessible to the person, internal to his cognitive perspective; it is externalist if it allows that at least some of the justifying factors need not be thus accessible, in that they can be external to the believer's cognitive perspective, beyond his ken. However, epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explanation of it.
Or perhaps the thing to say is that it has reappeared, for the dominant strains in epistemology prior to the Enlightenment were really externalist. According to this externalist way of thinking, warrant does not depend upon satisfaction of duty, or upon anything else to which the knower has special cognitive access (as he does to what is about his own experience and to whether he is trying his best to do his epistemic duty): it depends instead upon factors external to the epistemic agent, such factors as whether his beliefs are produced by reliable cognitive mechanisms, or whether they are produced by epistemic faculties functioning properly in an appropriate epistemic environment.
How shall we think about the epistemology of theistic belief from this more externalist perspective (which is at once both satisfyingly traditional and agreeably up to date)? I think the ontological question whether there is such a person as God is in a way prior to the epistemological question about the warrant of theistic belief. It is natural to think that if in fact we have been created by God, then the cognitive processes that issue in belief in God are indeed reliable belief-producing processes; and if in fact God created us, then no doubt the cognitive faculties that produce belief in God are functioning properly in an epistemologically congenial environment. On the other hand, if there is no such person as God, if theistic belief is an illusion of some sort, then things are much less clear. Then belief in God, taken in the basic way, would no doubt be produced by wishful thinking or some other cognitive process not aimed at truth; thus it would have little or no warrant. And belief in God on the basis of argument would be like belief in false philosophical theories on the basis of argument: do such beliefs have warrant? The custom of discussing the epistemological questions about theistic belief as if they could be profitably discussed independently of the ontological issue as to whether or not theism is true is therefore misguided. These two issues are intimately intertwined.
This externalist turn connects with what has come to be called virtue epistemology. Its central idea is that justification and knowledge arise from the proper functioning of our intellectual virtues or faculties in an appropriate environment.
Finally, considerations concerning the reliability of mental faculties point to the importance of an appropriate environment. The idea is that cognitive mechanisms might be reliable in some environments but not in others. Consider an example from Alvin Plantinga. On a planet revolving around Alpha Centauri, cats are invisible to human beings. Moreover, Alpha Centaurian cats emit a type of radiation that causes humans to form the belief that there is a dog barking nearby. Suppose now that you are transported to this Alpha Centaurian planet, a cat walks by, and you form the belief that there is a dog barking nearby. Surely you are not justified in believing this. However, the problem here is not with your intellectual faculties but with your environment. Although your faculties of perception are reliable on Earth, they are unreliable on the Alpha Centaurian planet, which is an inappropriate environment for those faculties.
The central idea of virtue epistemology, as expressed in (J) above, has a high degree of initial plausibility. By making the idea of faculty reliability central, virtue epistemology explains quite neatly why beliefs caused by perception and memory are often justified, while beliefs caused by wishful thinking and superstition are not. Secondly, the theory gives us a basis for answering certain kinds of scepticism. Specifically, we may agree that if we were brains in a vat, or victims of a Cartesian demon, then we would not have knowledge even in those rare cases where our beliefs turned out true. But virtue epistemology explains that what is important for knowledge is that our faculties are in fact reliable in the environment in which we are. And so we do have knowledge so long as we are in fact not victims of a Cartesian demon, or brains in a vat. Finally, Plantinga argues that virtue epistemology deals well with Gettier problems. The idea is that Gettier problems give us cases of justified belief that is true by accident. Virtue epistemology, Plantinga argues, helps us to understand what it means for a belief to be true by accident, and provides a basis for saying why such cases are not knowledge. Beliefs are true by accident when they are caused by otherwise reliable faculties functioning in an inappropriate environment. Plantinga develops this line of reasoning in Plantinga (1988).
The Humean problem of induction supposes that there is some property A pertaining to an observational or experimental situation, and that of n observed instances of A, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property B. Suppose further that the background circumstances have been varied to a substantial degree, and that there is no collateral information available concerning the frequency of B's among A's or concerning causal or nomological connections between instances of A and instances of B.
In this situation, an enumerative or instantial inductive inference would move from the premise that m/n of observed A's are B's to the conclusion that approximately m/n of all A's are B's. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of A's should be taken to include not only unobserved A's and future A's, but also possible or hypothetical A's. (An alternative conclusion would concern the probability or likelihood of the very next observed A being a B.)
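The schema itself is mechanical enough to sketch in code. The following Python fragment is a minimal illustration, not anything from the literature: it computes the observed frequency m/n and projects it as the inductive conclusion, and the `next_instance_probability` helper (a hypothetical name) estimates the alternative "very next A" conclusion using Laplace's rule of succession, (m+1)/(n+2).

```python
from fractions import Fraction

def observed_frequency(sample):
    """Fraction m/n of observed A's that are also B's.
    `sample` is a list of booleans: True if that observed A was a B."""
    m = sum(sample)
    n = len(sample)
    return Fraction(m, n)

def project(sample):
    """Enumerative induction: project the observed frequency m/n
    onto all (unobserved, future, hypothetical) A's."""
    return observed_frequency(sample)

def next_instance_probability(sample):
    """The alternative conclusion: probability that the very next
    observed A is a B, estimated by Laplace's rule of succession."""
    m = sum(sample)
    n = len(sample)
    return Fraction(m + 1, n + 2)

sample = [True] * 9 + [False]        # 9 of 10 observed A's were B's
print(project(sample))               # 9/10
print(next_instance_probability(sample))  # 5/6
```

Of course, nothing in the code answers Hume: it merely restates the leap from observed frequency to projected frequency that his argument puts in question.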
The traditional or Humean problem of induction, often referred to simply as the problem of induction, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premise is true, or even that their chances of truth are significantly enhanced?
Hume's discussion of this problem deals explicitly with cases where all observed A's are B's, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: inductive inferences are not rationally justified, but are instead the result of an essentially arational process, custom or habit. Hume challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma to show that there can be no such reasoning. Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas, or experimental, i.e., empirical, reasoning concerning matters of fact and existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is not a contradiction to suppose that the course of nature may change, that an order that was observed in the past will not continue in the future. But it also cannot be the latter, since any empirical argument would appeal to the success of such reasoning in previous experience, and the justifiability of generalizing from previous experience is precisely what is at issue, so that any such appeal would be question-begging. Hence there can be no such reasoning.
An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past, or that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premise, in which case the issue is obviously how such a premise can be justified. Hume's argument is then that no such justification is possible: the principle cannot be justified demonstratively, as it is not contradictory to deny it; and it cannot be justified by appeal to its having held in previous experience without obviously begging the question.
The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume's argument, viz. that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find some other sort of justification for induction.
The term induction is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative argument, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc, and so on, where a, b and c are all of some kind G, it is inferred that G's from outside the sample, such as future G's, will be F. Thus, if some people deceive them, children may well infer that everyone is a deceiver. Different but similar inferences are those from the past possession of a property by some object to the same object's future possession of it, or from the constancy of some law-like pattern in events and states of affairs to its future constancy: all objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.
The rational basis of any such inference was challenged by David Hume (1711-76), who argued that induction reflected merely a habit or custom of the mind. Hume was not therefore sceptical about the propriety of processes of induction, but sceptical about the role of reason in either explaining or justifying them. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties, for which the inference is plausible (often called projectable properties), from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both ordinary life and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on. Nevertheless, the fundamental problem remains that experience shows us only events occurring within a very restricted part of the vast spatial and temporal order about which we then come to believe things.
All the same, the classical problem of induction is often phrased in terms of finding some reason to expect that nature is uniform. In Fact, Fiction and Forecast (1954) Goodman showed that we need in addition some reason for preferring some uniformities to others, for without such a selection the uniformity of nature is vacuous. Thus, suppose that all examined emeralds have been green. Uniformity would lead us to expect that future emeralds will be green as well. But now define the predicate grue: x is grue if and only if x is examined before time T and is green, or x is examined after T and is blue. Let T refer to some time around the present. Then if newly examined emeralds are like previous ones in respect of being grue, they will be blue. We prefer greenness as a basis of prediction to grueness, but why?
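Goodman's definition is mechanical, and writing it out makes its formal innocence vivid: nothing in the syntax marks grue as worse than green. A minimal sketch, in which the cutoff T and the sample data are arbitrary assumptions for illustration:

```python
from datetime import datetime

# Hypothetical cutoff time T, chosen arbitrarily for illustration.
T = datetime(2030, 1, 1)

def is_grue(colour, examined_at):
    """Goodman's predicate: x is grue iff x is examined before T
    and is green, or x is examined after T and is blue."""
    if examined_at < T:
        return colour == "green"
    return colour == "blue"

# Every emerald examined so far (before T) has been green,
# so every one of them has also been grue:
past = [("green", datetime(2020, 6, 1)), ("green", datetime(2025, 3, 2))]
assert all(is_grue(c, t) for c, t in past)

# Yet projecting grue onto an emerald examined after T predicts
# that it will be blue, not green:
assert is_grue("blue", datetime(2031, 1, 1))
assert not is_grue("green", datetime(2031, 1, 1))
```

The evidence equally supports "all emeralds are green" and "all emeralds are grue", while the two hypotheses diverge about the future; that divergence, not any syntactic defect, is what a theory of projectability must explain.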
Goodman argued that although his new predicate appears to be gerrymandered, and itself involves a reference to a difference of time, this is just a parochial or language-relative judgement, there being no language-independent standard of similarity to which to appeal. Other philosophers have not been convinced by this degree of linguistic relativism. What remains clear is that the possibility of these bent predicates puts a decisive obstacle in the face of purely logical and syntactical approaches to problems of confirmation.
Even so, confirmation theory is the theory of the measure to which evidence supports a theory; a fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is the German philosopher, mathematician and polymath Gottfried Wilhelm Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the twentieth century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific.
The principal developments were due to the German logical positivist Rudolf Carnap (1891-1970), culminating in his Logical Foundations of Probability (1950). Carnap's idea was that the measure needed would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared to the number in which the evidence itself holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left open by the evidence. This range theory of probability was originally reached by the French mathematician Pierre Simon de Laplace (1749-1827), and has guided confirmation theory ever since, for example in the work of Carnap. The difficulty with the range theory lies in identifying sets of possibilities so that they admit of measurement. Laplace appealed to the principle of indifference, supposing that possibilities have an equal probability unless there is reason for distinguishing them. However, unrestricted appeal to this principle introduces inconsistency. Treating possibilities as equally probable may be regarded as depending upon metaphysical or logical choices, as in the view of the English economist and philosopher John Maynard Keynes (1883-1946), or on semantic choices, as in the work of Carnap. In any event, it is hard to find an objective source for the authority of such a choice, and this is one of the principal difficulties facing the formalization of confirmation theory.
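The range conception can be illustrated with a toy model: treat the logically possible states of affairs as truth assignments to a few atomic sentences, weight them equally (the principle of indifference), and measure a proposition's probability on evidence as the proportion of evidence-worlds in which it also holds. This is only a schematic sketch under those assumptions, not Carnap's actual apparatus, and the function name is invented for illustration.

```python
from itertools import product

# Toy logical space: all truth assignments to three atomic sentences.
atoms = ["p", "q", "r"]
worlds = [dict(zip(atoms, vals)) for vals in product([True, False], repeat=3)]

def range_probability(proposition, evidence):
    """Range-style conditional probability under the principle of
    indifference: the proportion of evidence-worlds in which the
    proposition also holds. Both arguments are predicates on worlds."""
    e_worlds = [w for w in worlds if evidence(w)]
    both = [w for w in e_worlds if proposition(w)]
    return len(both) / len(e_worlds)

# Evidence: p holds (4 of the 8 worlds); hypothesis: p and q.
prob = range_probability(lambda w: w["p"] and w["q"], lambda w: w["p"])
print(prob)  # 0.5
```

The toy also shows where the trouble starts: the measure depends on which atomic sentences one chooses, i.e., on the language, which is precisely the language-relativity problem noted below.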
The approach therefore demands that we can put a measure on the range of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone. Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what looks plausible.
Both Frege and Carnap, represented as analyticity's best friends in this century, did as much to undermine it as its worst enemies. Quine (1908-), whose early work was on mathematical logic and issued in A System of Logistic (1934), Mathematical Logic (1940) and Methods of Logic (1950), achieved wide philosophical recognition with the collection of papers From a Logical Point of View (1953). Putnam (1926-), whose concern in his later period has largely been to deny any serious asymmetry between truth and knowledge as obtained in natural science and as obtained in morals and even theology, has written books including Philosophy of Logic (1971), Representation and Reality (1988) and Renewing Philosophy (1992); collections of his papers include Mathematics, Matter, and Method (1975), Mind, Language, and Reality (1975), and Realism and Reason (1983). Both of them, represented as having refuted the analytic/synthetic distinction, not only did no such thing, but in fact contributed significantly to undoing the damage done by Frege and Carnap. Finally, the epistemological significance of the distinction is nothing like what it is commonly taken to be.
Locke's account of analytic propositions was, for its time, everything that a succinct account of analyticity should be (Locke, 1924, pp. 306-8). He distinguished two kinds of analytic propositions: identity propositions, in which we affirm the said term of itself, e.g., Roses are roses; and predicative propositions, in which a part of the complex idea is predicated of the name of the whole, e.g., Roses are flowers. Locke calls such sentences trifling, because a speaker who uses them trifles with words. A synthetic sentence, in contrast, such as a mathematical theorem, states a truth and conveys with it instructive real knowledge. Correspondingly, Locke distinguishes two kinds of necessary consequences: analytic entailment, where validity depends on the literal containment of the conclusion in the premise, and synthetic entailment, where it does not. (Locke did not originate this concept-containment notion of analyticity. It appears in discussions by Arnauld and Nicole, and it is safe to say it has been around for a very long time (Arnauld, 1964).)
Kant's account of analyticity, which received opinion tells us is the consummate formulation of this notion in modern philosophy, is actually a step backward. What is valid in his account is not novel, and what is novel is not valid. Kant presents Locke's account of concept-containment analyticity, but introduces certain alien features, the most important being his characterization of analytic propositions as propositions whose denials are logical contradictions (Kant, 1783). This characterization suggests that analytic propositions based on Locke's part-whole relation or Kant's explicative copula are a species of logical truth. But the containment of the predicate concept in the subject concept in sentences like 'Bachelors are unmarried' is a different relation from the containment of the consequent in the antecedent in a sentence like 'If John is a bachelor, then John is a bachelor or Mary read Kant's Critique'. The former is literal containment, whereas the latter is, in general, not. Talk of the containment of the consequent of a logical truth in its antecedent is metaphorical, a way of saying 'logically derivable'.
Kant's conflation of concept containment with logical containment caused him to overlook the question of whether logical truths are synthetic a priori and the problem of how he can say that mathematical truths are synthetic a priori when they cannot be denied without contradiction. Historically, the conflation set the stage for the disappearance of the Lockean notion. Frege, whom received opinion portrays as second only to Kant among the champions of analyticity, and Carnap, whom it portrays as just behind Frege, were jointly responsible for the disappearance of concept-containment analyticity.
Frege was clear about the difference between concept containment and logical containment, expressing it as the difference between the containment of 'beams in a house' and the containment of 'a plant in the seed' (Frege, 1953). But he found the former, as Kant formulated it, defective in three ways: it explains analyticity in psychological terms, it does not cover all cases of analytic propositions, and, perhaps most important for Frege's logicism, its notion of containment is unfruitful as a definitional mechanism in logic and mathematics (Frege, 1953). In an invidious comparison between the two notions of containment, Frege observes that with logical containment we are not simply taking out of the box again what we have just put into it. His definition makes logical containment the basic notion. Analyticity becomes a special case of logical truth, and, even in this special case, the definitions employ the full power of definition in logic and mathematics rather than mere concept combination.
Carnap, attempting to overcome what he saw as a shortcoming in Frege's account of analyticity, took the remaining step necessary to do away explicitly with Lockean-Kantian analyticity. As Carnap saw things, it was a shortcoming of Frege's explanation that it seems to suggest that the definitional relations underlying analytic propositions can be extra-logical in some sense, say, in resting on linguistic synonymy. To Carnap, this represented a failure to achieve a uniform formal treatment of analytic propositions and left us with a dubious distinction between logical and extra-logical vocabulary. Hence, he eliminated the reference to definitions in Frege's account of analyticity by introducing meaning postulates, e.g., statements such as '(∀x)(x is a bachelor → x is unmarried)' (Carnap, 1965). Like the standard logical postulates on which they were modelled, meaning postulates express nothing more than constraints on the admissible models with respect to which sentences and deductions are evaluated for truth and validity. Thus, despite their name, meaning postulates have no more to do with meaning than any other statements expressing a necessary truth. In defining analytic propositions as consequences of (an expanded set of) logical laws, Carnap explicitly removed the one place in Frege's explanation where there might be room for concept containment, and with it the last trace of Locke's distinction between semantic and other necessary consequences.
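Carnap's device can be put in standard notation. The following sketch (the particular predicate pair is merely illustrative) shows how a meaning postulate functions as a constraint on models rather than as a statement about meaning:

```latex
% A Carnapian meaning postulate for 'bachelor' (illustrative example):
\forall x\,\bigl(\mathrm{Bachelor}(x)\rightarrow \mathrm{Unmarried}(x)\bigr)
% Model-theoretic reading: a model M is admissible only if the
% extension of 'Bachelor' in M is a subset of the extension of
% 'Unmarried' in M. 'Bachelors are unmarried' then comes out true
% in every admissible model, exactly as a logical truth does.
```

This makes vivid why, as the paragraph above notes, the postulate itself says nothing about meaning: it merely restricts which interpretations count when truth and validity are evaluated.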
Quine, the staunchest critic of analyticity of our time, performed an invaluable service on its behalf, although one that has gone almost completely unappreciated. Quine made two devastating criticisms of Carnap's meaning-postulate approach that expose it as both irrelevant and vacuous. It is irrelevant because, in using particular words of a language, meaning postulates fail to explicate analyticity for sentences and languages generally; that is, the approach does not define analyticity for variable 'S' and 'L' (Quine, 1953). It is vacuous because, although meaning postulates tell us what sentences are to count as analytic, they do not tell us what it is for them to be analytic.
Received opinion has it that Quine did much more than refute the analytic/synthetic distinction as Carnap tried to draw it. Received opinion has it that Quine demonstrated there is no distinction, however anyone might try to draw it. This, too, is incorrect. To argue for this stronger conclusion, Quine had to show that there is no way to draw the distinction outside logic, in particular within a theory in linguistics corresponding to Carnap's; his argument therefore had to take an entirely different form. Some inherent feature of linguistics had to be exploited to show that no theory in this science can deliver the distinction. The feature Quine chose was a principle of operationalist methodology characteristic of the school of Bloomfieldian linguistics. Quine succeeds in showing that meaning cannot be made objective sense of in linguistics if making sense of a linguistic concept requires, as that school claims, operationally defining it in terms of substitution procedures that employ only concepts unrelated to that linguistic concept. But Chomsky's revolution in linguistics replaced the Bloomfieldian taxonomic model of grammars with the hypothetico-deductive model of generative linguistics, and, as a consequence, such operational definition was removed as the standard for concepts in linguistics. The standard of theoretical definition that replaced it was far more liberal, allowing the members of a family of linguistic concepts to be defined with respect to one another within a set of axioms that state their systematic interconnections, the entire system being judged by whether its consequences are confirmed by the linguistic facts. Quine's argument does not even address theories of meaning based on this hypothetico-deductive model (Katz, 1988 and 1990).
Putnam, the other staunch critic of analyticity, performed a service on behalf of analyticity fully on a par with, and complementary to, Quine's. Whereas Quine refuted Carnap's formalization of Frege's conception of analyticity, Putnam refuted that conception itself. Putnam put an end to the entire attempt, initiated by Frege and completed by Carnap, to construe analyticity as a logical concept.
However, as with Quine, received opinion has it that Putnam did much more. Putnam is credited with having devised science-fiction cases, from the robot-cat case to the Twin Earth cases, that are counterexamples to the traditional theory of meaning. Again, received opinion is incorrect. These cases are only counterexamples to Frege's version of the traditional theory of meaning. Frege's version claims both (1) that senses determine reference, and (2) that there are instances of analyticity, say, typified by 'Cats are animals', and of synonymy, say, typified by 'water' in English and 'water' in Twin Earth English. Given (1) and (2), what we call 'cats' could not be non-animals, and what we call 'water' could not differ from what Twin Earthers call 'water'. But, as Putnam's cases show, what we call 'cats' could be Martian robots, and what they call 'water' could be something other than H2O. Hence, the cases are counterexamples to Frege's version of the theory.
The remaining Fregean criticism points to a genuine incompleteness of the traditional account of analyticity. There are analytic relational sentences, for example, 'Jane walks with those with whom she strolls' and 'Jack kills those he himself has murdered', and analytic entailments with existential conclusions, for example, 'I think, therefore I exist'. The containment in these sentences is just as literal as that in an analytic subject-predicate sentence like 'Bachelors are unmarried'. A theory of meaning construed as a hypothetico-deductive systematization of sense, as defined in (D), can be shown to overcome the incompleteness of the traditional account in the case of such relational sentences.
Such a theory of meaning makes the principal concern of semantics the explanation of sense properties and relations like synonymy, antonymy, redundancy, analyticity, ambiguity, etc. Furthermore, it makes grammatical structure, specifically sense structure, the basis for explaining them. This leads directly to the discovery of a new level of grammatical structure, and this, in turn, makes possible a proper definition of analyticity. To see this, consider two simple examples. It is a semantic fact that 'male bachelor' is redundant and that 'spinster' is synonymous with 'woman who never married'. In the case of the redundancy, we have to explain the fact that the sense of the modifier 'male' is already contained in the sense of its head 'bachelor'. In the case of the synonymy, we have to explain the fact that the sense of 'spinster' is identical to the sense of 'woman who never married' (compositionally formed from the senses of 'woman', 'never' and 'married'). But insofar as such facts concern relations involving the components of the senses of 'bachelor' and 'spinster', and insofar as these words are syntactic simples, there must be a level of grammatical structure at which syntactic simples are semantically complex. This, in brief, is the route by which we arrive at a level of decompositional semantic structure that is the locus of sense structures masked by syntactically simple words.
Once again, the fact that (A) itself makes no reference to logical operators or logical laws indicates that analyticity for subject-predicate sentences can be extended to simple relational sentences without treating analytic sentences as instances of logical truth. Further, the source of the incompleteness is no longer explained, as Frege explained it, as the absence of fruitful logical apparatus, but is now explained as the mistake of treating what is only a special case of analyticity as if it were the general case. The inclusion of the predicate in the subject is the special case (where n = 1) of the general case of the inclusion of an n-place predicate (and its terms) in one of its terms. Note that the defects Quine complained of in connection with Carnap's meaning-postulate explication are absent in (A). (A) contains no words from a natural language. It explicitly uses variables 'S' and 'L' because it is a definition in linguistic theory. Moreover, (A) tells us what property it is in virtue of which a sentence is analytic, namely, redundant predication: the predication structure of an analytic sentence is already found in the content of its term structure.
Received opinion has been anti-Lockean in holding that necessary consequences in logic and language belong to one and the same species. This seems wrong because the property of redundant predication provides a non-logical explanation of why true statements made in the literal use of analytic sentences are necessarily true. Since the property ensures that the objects of the predication in the use of an analytic sentence are chosen on the basis of the features to be predicated of them, the truth conditions of the statement are automatically satisfied once its terms take on reference. The difference between such a linguistic source of necessity and the logical and mathematical sources vindicates Locke's distinction between two kinds of necessary consequence.
Received opinion concerning analyticity contains another mistake: the idea that analyticity is inimical to science. In part, the idea developed as a reaction to certain dubious uses of analyticity, such as Frege's attempt to establish logicism and the attempts of Schlick, Ayer and other logical positivists to deflate claims to metaphysical knowledge by showing that alleged synthetic a priori truths are analytic (Schlick, 1948, and Ayer, 1946). In part, it developed as a response to a number of cases where alleged analytic, and hence necessary, truths, e.g., the law of excluded middle, seem subsequently to have been taken to be open to revision. Such cases convinced philosophers like Quine and Putnam that the analytic/synthetic distinction is an obstacle to scientific progress.
The problem, if there is one, is not analyticity in the concept-containment sense, but the conflation of it with analyticity in the logical sense. This conflation made it seem as if there is a single concept of analyticity that can serve as the ground for a wide range of a priori truths. But, just as there are two analytic/synthetic distinctions, so there are two concepts of concept. The narrow Lockean/Kantian distinction is based on a narrow notion of concept on which concepts are senses of expressions in the language. The broad Fregean/Carnapian distinction is based on a broad notion of concept on which concepts are conceptions, often scientific ones, about the nature of the referent(s) of expressions (Katz, 1972; and, curiously, Putnam, 1981). Conflation of these two notions of concept produced the illusion of a single concept with the content of philosophical, logical and mathematical conceptions, but with the status of linguistic concepts. This encouraged philosophers to think that they were in possession of concepts with the content to express substantive philosophical claims, such as Frege's, Schlick's and Ayer's, and with a status that trivializes the task of justifying those claims by requiring only linguistic grounds for the propositions in question.
Finally, there is an important epistemological implication of separating the broad and narrow notions of analyticity. Frege and Carnap took the broad notion of analyticity to provide foundations for necessity and apriority, and hence for some form of rationalism, and nearly all rationalistically inclined analytic philosophers followed them in this. Thus, when Quine dispatched the Frege-Carnap position on analyticity, it was widely believed that necessity, apriority and rationalism had also been dispatched and that, as a consequence, Quine had ushered in an empiricism without dogmas and a naturalized epistemology. But given that there is still a notion of analyticity that enables us to pose the problem of how necessary, synthetic a priori knowledge is possible (moreover, one whose narrowness makes logical and mathematical knowledge part of the problem), Quine did not undercut the foundations of rationalism. Hence, a serious reappraisal of the new empiricism and naturalized epistemology is, to say the least, very much in order (Katz, 1990).
The a priori/a posteriori distinction has been applied to a wide range of objects, including concepts, propositions, truths and knowledge. Our primary concern here is, however, with the epistemic distinction between a priori and a posteriori knowledge. The most common way of marking the distinction is by reference to Kant's claim that a priori knowledge is absolutely independent of all experience. It is generally agreed that S's knowledge that p is independent of experience just in case S's belief that p is justified independently of experience. Some authors (Butchvarov, 1970, and Pollock, 1974) have, however, found this negative characterization of a priori knowledge unsatisfactory and have opted for a positive characterization in terms of the type of justification on which such knowledge depends. Finally, others (Putnam, 1983, and Chisholm, 1989) have attempted to mark the distinction by introducing concepts such as necessity and rational unrevisability rather than in terms of the type of justification relevant to a priori knowledge.
One who characterizes a priori knowledge in terms of justification that is independent of experience is faced with the task of articulating the relevant sense of 'experience'. Proponents of the a priori typically cite intuition or intuitive apprehension as the source of a priori justification, and they maintain that these terms refer to a distinctive type of experience that is both common and familiar to most individuals. Hence, there is a broad sense of 'experience' in which a priori justification is dependent on experience. An initially attractive strategy is to suggest that a priori justification must be independent of sense experience. But this account is too narrow since memory, for example, is not a form of sense experience, yet justification based on memory is presumably not a priori. There appear to remain only two options: provide a general characterization of the relevant sense of 'experience', or enumerate those sources that are experiential. General characterizations of experience often maintain that experience provides information specific to the actual world while non-experiential sources provide information about all possible worlds. This approach, however, reduces the concept of non-experiential justification to the concept of being justified in believing a necessary truth. Accounts by enumeration face two problems: (1) there is some controversy about which sources to include in the list, and (2) there is no guarantee that the list is complete. It is generally agreed that perception and memory should be included. Introspection, however, is problematic: beliefs about one's conscious states and about the manner in which one is appeared to are plausibly regarded as experientially justified, yet some, such as Pap (1958), maintain that experiments in imagination are a source of a priori justification.
Even if this contention is rejected and a priori justification is characterized as justification independent of the evidence of perception, memory and introspection, it remains possible that there are other sources of justification. If it should turn out that clairvoyance, for example, is a source of justified beliefs, such beliefs would count as justified a priori on the enumerative account.
The most common approach to offering a positive characterization of a priori justification is to maintain that, in the case of basic a priori propositions, understanding the proposition is sufficient to justify one in believing that it is true. This approach faces two pressing issues. First, what is it to understand a proposition in the manner that suffices for justification? Proponents of the approach typically distinguish understanding the words used to express a proposition from apprehending the proposition itself, and maintain that it is the latter which is relevant to a priori justification. But this move simply shifts the problem to that of specifying what it is to apprehend a proposition. Without a solution to this problem, it is difficult, if not impossible, to evaluate the account, since one cannot be sure that the requisite sense of apprehension does not justify paradigmatic a posteriori propositions as well. Second, even less is said about the manner in which apprehending a proposition justifies one in believing that it is true. Proponents are often content with the bald assertion that one who understands a basic a priori proposition can thereby 'see' that it is true. But what requires explanation is how understanding a proposition enables one to see that it is true.
Difficulties in characterizing a priori justification in terms either of independence from experience or of its source have led some to introduce the concept of necessity into their accounts, although this appeal takes various forms. Some have employed necessity as a necessary condition for a priori justification, others have employed it as a sufficient condition, while still others have employed it as both. In claiming that necessity is a criterion of the a priori, Kant held that necessity is a sufficient condition for a priori justification. This claim, however, needs further clarification. There are three theses regarding the relationship between the a priori and the necessary that can be distinguished: (i) if p is a necessary proposition and S is justified in believing that p is necessary, then S's justification is a priori; (ii) if p is a necessary proposition and S is justified in believing that p is necessarily true, then S's justification is a priori; and (iii) if p is a necessary proposition and S is justified in believing that p, then S's justification is a priori. Many proponents of the a priori contend, for example, that all knowledge of necessary propositions is a priori. (ii) and (iii) have the shortcoming of settling by stipulation the issue of whether a posteriori knowledge of necessary propositions is possible. (i) does not have this shortcoming, since the recent examples offered by Kripke (1980) and others have been cases where it is alleged that the truth value of a necessary proposition is knowable a posteriori. (i) has the shortcoming, however, of either ruling out the possibility of being justified in believing that a proposition is necessary on the basis of testimony or else sanctioning such justification as a priori. (ii) and (iii), of course, suffer from an analogous problem.
These problems are symptomatic of a general shortcoming of the approach: it attempts to provide a sufficient condition for a priori justification solely in terms of the modal status of the proposition believed, without making reference to the manner in which it is justified. This shortcoming can be avoided by incorporating necessity as a necessary but not sufficient condition for a priori justification, as, for example, in Chisholm (1989). Here there are two theses that must be distinguished: (1) if S is justified a priori in believing that p, then p is necessarily true; (2) if S is justified a priori in believing that p, then p is a necessary proposition. (1) precludes the possibility of being justified a priori in believing a false proposition; (2), however, allows this possibility. A further problem with both (1) and (2) is that it is not clear whether they permit a priori justified beliefs about the modal status of a proposition. For they require that, in order for S to be justified a priori in believing that p is a necessary proposition, it must be necessary that p is a necessary proposition. But the status of iterated modal propositions is controversial. Finally, (1) and (2) both preclude by stipulation the position advanced by Kripke (1980) and Kitcher (1980) that there is a priori knowledge of contingent propositions.
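The difference between the two Chisholm-style theses can be made vivid in modal notation; the operator J_a below is our own shorthand for the justification claim, not Chisholm's notation:

```latex
% Let J_a p abbreviate 'S is justified a priori in believing that p'.
\text{(1)}\quad J_a p \rightarrow \Box p
% p is necessarily true: on this thesis, a priori justification
% guarantees truth, so no false proposition is justified a priori.
\text{(2)}\quad J_a p \rightarrow (\Box p \lor \Box\neg p)
% p is a necessary (non-contingent) proposition: this weaker thesis
% leaves open the possibility of a priori justified false belief.
```

Either way, both theses rule out by fiat the Kripke-Kitcher position that some contingent propositions are knowable a priori, since a contingent p satisfies neither consequent.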
The concept of rational unrevisability has also been invoked to characterize a priori justification, though its precise sense has been presented in different ways. Putnam (1983) takes rational unrevisability to be both a necessary and sufficient condition for a priori justification, while Kitcher (1980) takes it to be only a necessary condition. Two different senses of rational unrevisability have also been associated with the a priori: (i) a proposition is weakly unrevisable just in case it is rationally unrevisable in light of any future experiential evidence, and (ii) a proposition is strongly unrevisable just in case it is rationally unrevisable in light of any future evidence. Let us consider the plausibility of requiring either form of rational unrevisability as a necessary condition for a priori justification. The view that a proposition is justified a priori only if it is strongly unrevisable entails that if a non-experiential source of justified beliefs is fallible but self-correcting, it is not an a priori source of justification. Casullo (1988) has argued that it is implausible to maintain that a proposition that is justified non-experientially is not justified a priori merely because it is revisable in light of further non-experiential evidence. The view that a proposition is justified a priori only if it is weakly unrevisable is not open to this objection, since it excludes only revision in light of experiential evidence. It does, however, face a different problem. To maintain that S's justified belief that p is justified a priori is to make a claim about the type of evidence that justifies S in believing that p.
On the other hand, to maintain that S's justified belief that p is rationally revisable in light of experiential evidence is to make a claim about the type of evidence that can defeat S's justification for believing that p, not a claim about the type of evidence that justifies S in believing that p. Hence, it has been argued by Edidin (1984) and Casullo (1988) that to hold that a belief is justified a priori only if it is weakly unrevisable is either to confuse supporting evidence with defeating evidence or to endorse some implausible thesis about the relationship between the two, such as that if evidence of kind 'A' can defeat the justification conferred on S's belief that p by evidence of kind 'B', then S's justification for believing that p is based on evidence of kind 'A'.
The most influential idea in the theory of meaning in the past hundred years is the thesis that the meaning of an indicative sentence is given by its truth conditions. On this conception, to understand a sentence is to know its truth conditions. The conception was first clearly formulated by Frege, was developed in a distinctive way by the early Wittgenstein, and is a leading idea of Donald Davidson (1917–), who is also known for his rejection of the idea of a conceptual scheme, thought of as something peculiar to one language or one way of looking at the world, arguing that where the possibility of translation stops, so does the coherence of the idea that there is anything to translate. His papers are collected in Essays on Actions and Events (1980) and Inquiries into Truth and Interpretation (1983). The conception of meaning as truth conditions has remained so central that those who offer opposing theories characteristically define their position by reference to it.
Wittgenstein's main achievement in his early work is a uniform theory of language that yields an explanation of logical truth. A factual sentence achieves sense by dividing the possibilities exhaustively into two groups: those that would make it true and those that would make it false. A truth of logic does not divide the possibilities but comes out true in all of them. It therefore lacks sense and says nothing, but it is not nonsense: it is a self-cancellation of sense, necessarily true because it is a tautology, the limiting case of factual discourse, like the figure '0' in mathematics. Language takes many forms, and even factual discourse does not consist entirely of sentences like 'The fork is placed to the left of the knife'. In his later work, the first thing he gave up was the idea that this sentence itself needed further analysis into basic sentences mentioning simple objects with no internal structure. He came to concede that a descriptive word will often get its meaning partly from its place in a system, and he applied this idea to colour-words, arguing that the essential relations between different colours do not indicate that each colour has an internal structure that needs to be taken apart. On the contrary, analysis of our colour-words would only reveal the same pattern, ranges of incompatible properties, recurring at every level, because that is how we carve up the world.
Indeed, it may even be the case that the logic of our ordinary language is created by moves that we ourselves make. If so, the philosophy of language leads into the connection between the meaning of a word and the applications of it that its users intend to make. There is also an obvious need for people to understand each other's meanings. There are many links between the philosophy of language and the philosophy of mind, and it is not surprising that the impersonal examination of language in the Tractatus was replaced by a very different, anthropocentric treatment in the Philosophical Investigations.
If the logic of our language is created by moves that we ourselves make, various kinds of realism are threatened. First, the way in which our descriptive language carves up the world will not be forced on us by the natures of things, and the rules for the application of our words, which feel like external constraints, will really come from within us. That is a concession to nominalism that is, perhaps, readily made. The idea that logical and mathematical necessity is also generated by what we ourselves do is more paradoxical. Yet that is the conclusion of Wittgenstein (1956) and (1976), and here his anthropocentrism has carried less conviction. However, a paradox is not a sure sign of error, and it is possible that what is needed here is a more sophisticated concept of objectivity than Platonism provides.
In his later work Wittgenstein brings the great problems of philosophy down to earth and traces them to very ordinary origins. His examination of the concept of following a rule takes him back to a fundamental question about counting things and sorting them into types: what qualifies as doing the same again? Many would regard this question as inconsequential and would suggest that we forget it and get on with the subject. But Wittgenstein's question is not so easily dismissed. It has the naive profundity of questions that children ask when they are first taught a new subject. Such questions remain unanswered without detriment to their learning, but they point the only way to complete understanding of what is learned.
That the meaning of a complex expression is a function of the meanings of its constituents is, indeed, just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth conditions of sentences in which it occurs. For singular terms, proper names, indexicals, and certain pronouns, this is done by stating the reference of the term in question.
The truth condition of a statement is the condition the world must meet if the statement is to be true. To know this condition is equivalent to knowing the meaning of the statement. Although this sounds as if it gives a solid anchorage for meaning, some of the security disappears when it turns out that the truth condition can only be defined by repeating the very same statement: the truth condition of 'snow is white' is that snow is white; the truth condition of 'Britain would have capitulated had Hitler invaded' is that Britain would have capitulated had Hitler invaded. It is disputed whether this element of running-on-the-spot disqualifies truth conditions from playing the central role in a substantive theory of meaning. Truth-conditional theories of meaning are sometimes opposed by the view that to know the meaning of a statement is to be able to use it in a network of inferences.
Language is whatever it is that makes what would otherwise be mere sounds and inscriptions into instruments of communication and understanding. The philosophical problem is to demystify this power and to relate it to what we know of ourselves and the world. Contributions to this study include the theory of speech acts and the investigation of communication and of the relationships between words and ideas and between words and the world. An older picture carried a general bias towards the sensory: what lies in the mind was thought of as something like images, and thinking was explained as the manipulation of images. This picture gave way to the view that ideas need to be thought of more in terms of rules and organizing principles than as any kind of copy of what is given in experience.
It has become more common to think of ideas, or concepts, as dependent upon social and especially linguistic structures, rather than as the self-standing creations of an individual mind, but the tension between the objective and the subjective aspects of the matter lingers on, for instance in debates about the possibility of objective knowledge, of 'indeterminacy' in translation, and of identity between the thoughts people entertain at one time and those that they entertain at another.
Apparent facts to be explained about the distinction between knowing things and knowing about things are these. Knowledge about things is essentially propositional knowledge, where the mental states involved refer to specific things; this propositional knowledge can be more or less complete, can be justified inferentially and on the basis of experience, and can be communicated. Knowing things, on the other hand, involves experience of things. This experiential knowledge provides an epistemic basis for knowledge about things, and in some sense is difficult or impossible to communicate, perhaps because it is more or less vague; it is a sort of knowledge by acquaintance that amounts to knowing what an experience is like.
What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. Some causal theories of knowledge have it that a true belief that p is knowledge just in case it stands in the right sort of causal connection to the fact that p. Such a criterion can be applied only to cases where the fact that p is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject's environment.
A contrast relating the more general (colour) to the more specific (red). It was originally introduced by W.E. Johnson, and, in one kind of usage, the contrast differs from that of genus to species, in that the specific differences identifying a determinate are themselves modifications of the determinable. Thus, what differentiates red from blue is just colour, whereas many different properties may differentiate a member of one species, for instance of animals, from those of another.
What is more, determinism is the doctrine that every event has a cause. The usual explanation of this is that for every event, there is some antecedent state, related in such a way that it would break a law of nature for this antecedent state to exist yet the event not to happen. This is a purely metaphysical claim, and carries no implications for whether we can in principle predict the event. The main interest in determinism has been in assessing its implications for free will; however, quantum physics is essentially indeterministic, yet the view that our actions are subject to quantum indeterminism hardly encourages a sense of our own responsibility for them. It is often supposed that if an action is the end of a causal chain, i.e., determined, and the causes stretch back in time to events for which the agent is not conceivably responsible, then the agent is not responsible for the action. The dilemma adds that if an action is not the end of such a chain, then either it or one of its causes occurs at random, in that no antecedent event brought it about, and in that case nobody is responsible for its occurrence either. So whether or not determinism is true, responsibility is shown to be illusory.
The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom:
'London' refers to the city in which there was a huge fire in 1666.
This is a true statement about the reference of 'London'. It is a consequence of a theory that substitutes this axiom for the original axiom for 'London' in our simple truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand the name 'London' without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraints on the acceptability of axioms in a way that does not presuppose a prior, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity. Second, the theorist must offer an account of what it is for a person's language to be correctly describable by a semantic theory containing a given semantic axiom.
We can take the charge of triviality first. In more detail, it would run thus: since the content of the claim that the sentence 'Paris is beautiful' is true is no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth conditions, but this gives us no substantive account of understanding whatsoever. Something other than a grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory that, somewhat more discriminatingly, Horwich calls the minimal theory of truth, or the deflationary view of truth, as fathered by Frege and Ramsey. The essential claim is that the predicate '. . . is true' does not have a sense, i.e., expresses no substantive or profound or explanatory concept that ought to be the topic of philosophical enquiry. The approach admits of different versions, but centres on the points (1) that 'it is true that p' says no more nor less than 'p' (hence redundancy), and (2) that in less direct contexts, such as 'everything he said was true' or 'all logical consequences of true propositions are true', the predicate functions as a device enabling us to generalize rather than as an adjective or predicate describing the things he said, or the kinds of propositions that follow from true propositions. For example, the second claim may be translated as (∀p)(∀q)((p & (p → q)) → q), where there is no use of a notion of truth.
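The generalizing role assigned to the truth predicate by point (2) can be made explicit. On one common sketch (our notation, assuming quantification over propositions), the two example sentences come out as:

```latex
% "Everything he said was true":
\forall p\,\bigl(\text{he said that } p \rightarrow p\bigr)

% "All logical consequences of true propositions are true":
\forall p\,\forall q\,\bigl((p \wedge (p \rightarrow q)) \rightarrow q\bigr)
```

In neither paraphrase does a truth predicate appear; the work it seemed to do is done by the quantifiers binding propositional variables.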
There are technical problems in interpreting all uses of the notion of truth in such ways, but they are not generally felt to be insurmountable. The approach needs to explain away apparently substantive uses of the notion, such as 'science aims at the truth' or 'truth is a norm governing discourse'. Indeed, postmodernist writing frequently advocates that we must abandon such norms, along with a discredited objective conception of truth. But perhaps we can have the norms even when objectivity is problematic, since they can be framed without mention of truth: science wants it to be so that whenever science holds that 'p', then 'p'; discourse is to be regulated by the principle that it is wrong to assert 'p' when 'not-p'.
The disquotational theory of truth finds its simplest formulation in the claim that expressions of the form ''S' is true' mean the same as expressions of the form 'S'. Some philosophers dislike the idea of sameness of meaning, and if this is disallowed, then the claim is that the two forms are equivalent in any sense of equivalence that matters. That is, it makes no difference whether people say ''Dogs bark' is true' or whether they say that dogs bark. In the former representation of what they say the sentence 'Dogs bark' is mentioned, but in the latter it appears to be used, so the claim that the two are equivalent needs careful formulation and defence. On the face of it someone might know that 'Dogs bark' is true without knowing what it means, for instance if one were to find it in a list of acknowledged truths although one does not understand English, and this is different from knowing that dogs bark. Disquotational theories are usually presented as versions of the redundancy theory of truth.
The minimal theory states that the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning (Davidson, 1990; Dummett, 1959; Horwich, 1990). If the claim that the sentence 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson, Horwich and, confusingly and inconsistently if it is correct, Frege himself. But is the minimal theory correct?
The minimal or redundancy theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact it seems that each instance of the equivalence principle can itself be explained. An instance such as ''London is beautiful' is true if and only if London is beautiful' can be explained by the facts that 'London' refers to London and that 'is beautiful' is true of beautiful things. This would be a pseudo-explanation only if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth condition it does. But that is very implausible: it is, after all, possible to understand the name 'London' without understanding the predicate 'is beautiful'. The idea that facts about the reference of particular words can be explanatory of facts about the truth conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something that is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression's having the particular reference it does to be partially explanatory of the particular truth condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something that is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth that has, among the many links that hold it in place, systematic connections with the semantic values of subsentential expressions.
A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth that go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic, be it utterances, sentence types in a language, or whatever, then the equivalence schema will not cover all cases, but only those in the theorist's own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The defender of the minimal theory is likely to say that if a sentence 'S' of a foreign language is best translated by our sentence 'p', then the foreign sentence 'S' is true if and only if 'p'. Now the best translation of a sentence must preserve the concepts expressed in the sentence. But constraints involving a general notion of truth are pervasive in any plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exists what is called a Determination Theory for that account - that is, a specification of how the account contributes to fixing the semantic value of that concept. The notion of a concept's semantic value is the notion of something that makes a certain contribution to the truth conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.
It is also plausible that there are general constraints on the form of such Determination Theories, constraints that involve truth and that are not derivable from the minimalist's conception. Suppose that concepts are individuated by their possession conditions. A concept is something that is capable of being a constituent of such contents - a way of thinking of something: a particular object, or property, or relation, or another entity. A possession condition may in various ways make a thinker's possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker's perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject's environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though a thinker's non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker's social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker's social relations, in particular his linguistic relations.
One such plausible general constraint is the requirement that when a thinker forms beliefs involving a concept in accordance with its possession condition, a semantic value is assigned to the concept in such a way that the belief is true. Some general principles involving truth can indeed, as Horwich has emphasized, be derived from the equivalence schema using minimal logical apparatus. Consider, for instance, the principle that 'Paris is beautiful and London is beautiful' is true if and only if 'Paris is beautiful' is true and 'London is beautiful' is true. This follows logically from three instances of the equivalence principle: ''Paris is beautiful and London is beautiful' is true if and only if Paris is beautiful and London is beautiful'; ''Paris is beautiful' is true if and only if Paris is beautiful'; and ''London is beautiful' is true if and only if London is beautiful'. But no logical manipulations of the equivalence schema will allow the derivation of the general constraint governing possession conditions, truth and the assignment of semantic values. That constraint can of course be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
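The derivation just mentioned can be sketched compactly. Writing T for the truth predicate and corner quotes for the names of sentences (notation ours), the three premises are instances of the equivalence schema, and the conclusion follows by propositional logic alone:

```latex
% Premises: instances of the equivalence schema
T\ulcorner P \wedge Q \urcorner \leftrightarrow (P \wedge Q),
\qquad
T\ulcorner P \urcorner \leftrightarrow P,
\qquad
T\ulcorner Q \urcorner \leftrightarrow Q

% Conclusion, by substitution and propositional logic:
T\ulcorner P \wedge Q \urcorner
\;\leftrightarrow\;
\bigl(T\ulcorner P \urcorner \wedge T\ulcorner Q \urcorner\bigr)
```

The point in the text is that derivations of this kind, however many of them the minimalist can produce, never yield the constraint linking possession conditions to the assignment of semantic values.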
We now turn to the other question: what is it for a person's language to be correctly describable by a semantic theory containing a particular axiom, such as the axiom A6 above for conjunction? This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person's possession of the concept of conjunction, and be concerned with what has to be true for the axiom correctly to describe his language. At a deeper level, an answer should not duck the issue of what it is to possess the concept. The answers to both questions are of great interest; we will take the shallower level of generality first.
When a person means conjunction by 'and', he is not necessarily capable of formulating the axiom explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word 'and' as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences he utters containing the word 'and'. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the past thirteen years, thanks particularly to the work of Davies and Evans, a conception has evolved according to which an axiom is true of a person's language only if there is a common component in the explanation of his understanding of each sentence containing the word 'and', a common component that explains why each such sentence is understood as meaning something involving conjunction (Davies, 1987). This conception can also be elaborated in computational terms: for an axiom to be true of a person's language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form 'A and B' is true if and only if 'A' is true and 'B' is true (Peacocke, 1986). Many different algorithms may equally draw on this information. The psychological reality of a semantic theory thus involves, in Marr's (1982) famous classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed.
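The computational picture can be illustrated with a toy model (ours, not from the literature): a miniature truth theory in which the clause for 'and' mirrors axiom A6, and evaluating a conjunction amounts to the evaluator drawing on exactly that A6-shaped information. The sentence forms and the sample model below are invented purely for illustration.

```python
# A toy compositional truth theory (illustrative only). Atomic sentences
# get their truth values from a "model" (the set of atomic sentences held
# true); the clause for And mirrors axiom A6:
# 'A and B' is true iff 'A' is true and 'B' is true.

from dataclasses import dataclass

@dataclass
class Atom:
    name: str          # an atomic sentence, e.g. "Paris is beautiful"

@dataclass
class And:
    left: "Atom | And"
    right: "Atom | And"

def is_true(sentence, model):
    """Recursively evaluate a sentence against a model."""
    if isinstance(sentence, Atom):
        return sentence.name in model
    if isinstance(sentence, And):
        # Axiom A6, read computationally:
        return is_true(sentence.left, model) and is_true(sentence.right, model)
    raise TypeError("unknown sentence form")

model = {"Paris is beautiful"}
s = And(Atom("Paris is beautiful"), Atom("London is beautiful"))
print(is_true(s, model))  # prints False: the second conjunct fails
```

Many different algorithms (evaluating right-to-left, caching subresults, and so on) would draw on the same A6-shaped information, which is the point of locating the psychological reality of the theory between Marr's level one and level two.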
This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories. Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms that the language user employs. The identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them, for these linguistic theories do make commitments about the information drawn upon by mechanisms in the language user.
This answer to the question of what it is for an axiom to be true of a person's language clearly takes for granted the person's possession of the concept expressed by the word treated by the axiom. In the example of the axiom A6, the information drawn upon is that sentences of the form 'A and B' are true if and only if 'A' is true and 'B' is true. This informational content employs, as it has to if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing 'and'. So the computational answer we have returned needs further elaboration if we are to address the deeper question, which does not want to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of concepts. It is plausible that the concept of conjunction is individuated by the following condition for a thinker to possess it.
Finally, this response to the deeper question allows us to answer two challenges to the conception of meaning as truth conditions. First, there was the question left hanging earlier, of how the theorist of truth conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person's language will be different accounts. Second, there is the challenge, repeatedly made by minimalist theorists of truth, that the theorist of meaning as truth conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand any sentence containing that expression. The combined accounts for each of the expressions that comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth conditions fully to meet the challenge.
Content is that which is expressed by an utterance or sentence: the proposition or claim made about the world. By extension, the content of a predicate or other subsentential component is what it contributes to the content of sentences that contain it. The nature of content is a central concern of the philosophy of language, and mental states also have contents: a belief may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents.
Several different concepts may each be ways of thinking of the same object. A person may think of himself in the first-person way, or think of himself as the spouse of Mary Smith, or as the person located in a certain room now. More generally, a concept c is distinct from a concept d if it is possible for a person rationally to believe that something falls under c without believing that it falls under d. As words can be combined to form structured sentences, concepts have also been conceived as combinable into structured complex contents. When these complex contents are expressed in English by 'that . . .' clauses, as in our opening examples, they will be capable of being true or false, depending on the way the world is.
The general system of concepts with which we organize our thoughts and perceptions constitutes a conceptual scheme. Outstanding elements of our everyday conceptual scheme include spatial and temporal relations between events and enduring objects, causal relations, other persons, meaning-bearing utterances of others, and so on. To see the world as containing such things is to share this much of our conceptual scheme. A controversial argument of Davidson's urges that we would be unable to interpret speech from a different conceptual scheme as even meaningful; Davidson daringly goes on to argue that since translation proceeds according to a principle of charity, and since it must be possible for an omniscient translator to make sense of us, we can be assured that most of the beliefs formed within the commonsense conceptual framework are true.
Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money; nonetheless, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen's Pictures, was a spy. We can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, an individual's conception of a just arrangement for resolving disputes might involve something like contemporary Western legal systems. But whether or not that conception would be correct, someone may intelligibly reject it by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.
Basically, a concept is that which is understood by a term, particularly a predicate. To possess a concept is to be able to deploy a term expressing it in making judgements: the ability connects with such things as recognizing when the term applies, and being able to understand the consequences of its application. The term 'idea' was formerly used in the same way, but is avoided because of its associations with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term; yet some such notion is needed to explain the unity of a sentence, which would otherwise be a mere categorized list of items.
A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy are open to the accusation of not having fully respected the distinction between the two kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought 'I think', containing the first-person way of thinking, to conclusions about the nonmaterial nature of the object he himself was. But though the goals of a theory of concepts and a theory of objects are distinct, each theory is required to have an adequate account of its relation to the other. A theory of concepts is unacceptable if it gives no account of how a concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.
German philosopher Gottfried Wilhelm Leibniz, like Spinoza before him, worked in the rationalist (reason-based) tradition to produce a brilliant solution to the problems raised by dualism. Leibniz, a mathematician and statesman as well as a philosopher, developed a remarkably subtle and original system of philosophy that combined the mathematical and physical discoveries of his time with the organic and religious conceptions of nature found in ancient and medieval thought. Leibniz viewed the world as an infinite number of infinitely small units of force, called monads, each of which is a closed world but mirrors all the other monads through its activity, which is perception. All the monads are spiritual entities, but they can combine to form material bodies. Leibniz conceived of God as the Monad of Monads, which creates all other monads and predestines their development.
Leibniz’s theory of the predestination of monads, also called the theory of preestablished harmony, entailed a radical rejection of causality—the view that every effect must have a cause. According to Leibniz, monads do not interact with each other at all, and the appearance of mechanical causality in the natural world is unreal, akin to an illusion. Likewise, there is no room in the universe for free will: Even though we enjoy the illusion of acting freely, all human actions are predetermined by God. Despite these gloomy conclusions, Leibniz’s philosophy was profoundly optimistic because he argued that ours was the best of all possible worlds. He based this belief on considerations about the nature of truth and necessity. French writer Voltaire mocked this viewpoint in Candide (1759), a satirical novel that examines the woes heaped on the world in the name of God.
In the 18th century Irish philosopher and Anglican churchman George Berkeley, like Spinoza before him, rejected both Cartesian dualism and the assertion by Hobbes that only matter is real. Berkeley maintained that spirit is substance, and that only spiritual substance is real. Extending Locke’s doubts about knowledge of an external world, outside the mind, Berkeley argued that no evidence exists for the existence of such a world, because the only things that we can observe are our own sensations, and these are in the mind. The very notion of matter, he maintained, is incoherent and impossible. To exist, he claimed, means to be perceived (“esse est percipi”), and in order for things to exist when we are not observing them, they must continue to be perceived by God. By claiming that sensory phenomena are the only objects of human knowledge, Berkeley established the view known as phenomenalism, a theory of perception that suggests that matter can be analysed in terms of sensations.
Irish-born philosopher and clergyman George Berkeley (1685-1753) argued that everything that human beings conceive of exists as an idea in a mind, a philosophical focus that is known as idealism. Berkeley reasoned that because one cannot control one’s thoughts, they must come directly from a larger mind: that of God. In this excerpt from his Treatise Concerning the Principles of Human Knowledge, written in 1710, Berkeley explained why he believed that it is “impossible - that there should be any such thing as an outward object.”
Irish philosopher and clergyman George Berkeley set out to challenge what he saw as the atheism and skepticism inherent in the prevailing philosophy of the early 18th century. His initial publications, which asserted that no objects or matter existed outside the human mind, were met with disdain by the London intelligentsia of the day. Berkeley aimed to explain his “Immateriality” theory, part of the school of thought known as idealism, to a more general audience in Three Dialogues between Hylas and Philonous (1713). This passage is from the close of the third dialogue.
Scottish philosopher David Hume is considered one of the greatest skeptics in the history of philosophy. Hume thought that one can know nothing outside of experience, and experience-based on one’s subjective perceptions—never provides true knowledge of reality. Even the law of cause and effect was, for Hume, an unjustified belief: If one drops a ball, one cannot be certain it will fall to the ground. Rather, it is only possible to recognize through experience that certain pairs of events (dropping a ball, the ball striking the ground) have always accompanied one another.
Whereas Berkeley argued against materialism by denying the existence of matter, 18th-century Scottish philosopher David Hume questioned the existence of the mind itself. Hume’s sceptical philosophy also cast doubt on the idea of cause as understood in all previous philosophies and seriously disputed earlier arguments for the existence of God. His most important philosophical work, A Treatise of Human Nature, was published in three volumes in 1739 and 1740.
Scottish philosopher David Hume’s An Enquiry Concerning Human Understanding (1748) distilled the ideas of his earlier work, A Treatise of Human Nature (1739-1740). The Enquiry demonstrates Hume’s extreme skepticism toward “objective” thinking. Like Irish philosopher George Berkeley, Hume rejected the idea that humans could gain an objective knowledge of facts and events. In this excerpt, Hume drew a contrast between ideas and impressions. Impressions, he argued, are the experiences that come directly from our senses. Ideas are “less lively” experiences that are derived from our previous impressions or experiences. Therefore, despite the power of human imagination, “ideas” are still not as forceful as actual sensation or experience.
All metaphysical assertions about things that cannot be directly perceived are equally meaningless, Hume claimed, and should be “committed to the flames.” In his analyses of causality and induction, Hume revealed that there is no logical justification for believing that any two events that occur together are connected by cause and effect or for making any inference from past to future. Hume noted that we depend on our experience whenever we form beliefs about anything that we do not directly perceive and whenever we make predictions about the future. According to the empiricist doctrine of Bacon, Locke, and Berkeley, we can do this because experience teaches us what particular things belong together as causes and effects. Hume, however, argued that this attempt to learn from experience is not at all rational, thus calling into question the reliability of our memories, our reasoning processes, and our ability to learn from experiences or to make even the smallest predictions about the future—for example, that the sun will rise tomorrow. Though extreme, Hume’s skepticism about philosophical empiricism raised problems about the possibility of knowledge that contemporary philosophers still struggle to resolve.
Eighteenth-century German philosopher Immanuel Kant explored the possibilities of what reason can tell about the world of experience. In his critiques of science, morality, and art, Kant attempted to derive universal rules to which, he claimed, every rational person should subscribe. In Critique of Pure Reason (1781), Kant argued that people cannot understand the nature of the things in the universe, but they can be rationally certain of what they experience themselves. Within this realm of experience, fundamental notions such as space and time are certain.
German philosopher Immanuel Kant was among the first to appreciate Hume’s skepticism, and in response he published the Critique of Pure Reason (1781), widely considered the greatest single work in modern philosophy. In this work Kant made a thorough and systematic analysis of the conditions for knowledge. As an example of genuine knowledge, he had in mind the contributions to physics of English scientist Isaac Newton. In the case of Newtonian physics, reason seemed to have done an effective job of understanding the data supplied by the senses and to have succeeded in postulating universal and necessary laws of nature, such as the law of gravitation and the laws of motion. Kant proposed to explain how such knowledge is possible, thereby providing a complete reply to Hume’s skepticism and answering many of the problems that had plagued Western philosophers since the time of Descartes.
The 18th-century German philosopher Immanuel Kant published his influential work The Critique of Pure Reason in 1781. Three years later, he expanded on his study of the modes of thinking with an essay entitled “What is Enlightenment?” In this 1784 essay, Kant challenged readers to “dare to know,” arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.
Kant started by making a fresh analysis of the elements of knowledge, asking for the first time an extremely basic question, “How is our experience possible in the first place?” Kant’s predecessors had taken experience for granted. Thus Descartes agreed that we seem to have sensory knowledge of the world but asked whether this knowledge was true or the result of a dream. Similarly, Hume’s skepticism about causation arose when he concluded that we do not encounter causality in our ordinary experience of the world and that any inferences about it, beyond immediate experience, were questionable. Kant’s answer to the skepticism of Descartes and Hume involved certain categories, such as space, time, substance, and causality, which he maintained are essential to our thinking and to our experience of phenomena in the world. These categories he called transcendental. All objects of our knowledge, he concluded, must conform to the human mind’s essential ways of perceiving and understanding—ways that involve the transcendental categories—if they are to be knowable at all. Kant maintained that he had developed a revolutionary hypothesis about knowledge and reality, one he believed to be as significant for the future of philosophy as the hypothesis of Copernicus—that the planets orbit the Sun—had been for science.
Kant’s claim that causality, substance, space, and time are forms imposed by the mind gave support to the idealism of Leibniz and Berkeley. Kant, however, made his view a more critical form of idealism by granting the empiricist claim that things-in-themselves (that is, things as they exist outside human experience) are unknowable. Kant therefore limited knowledge to the “phenomenal world” of experience, maintaining that metaphysical beliefs about the soul, the cosmos, and God (the “noumenal world” transcending human experience) are matters of faith rather than of scientific knowledge.
In his ethical writings Kant held that moral principles are categorical imperatives, absolute commands of reason that permit no exceptions and are not related to pleasure or practical benefits. Kant argued that human beings should act as members of an ideal “kingdom of ends” in which every person is treated as an end in himself or herself, and never as a means to someone else’s ends. In addition, everyone should govern their conduct as if their actions were to be made law—a law that applies equally to all without exception. Kant thereby postulated a freedom of action based on moral order and equality. His moral philosophy contributed to modern political ideas about freedom and democracy. Kant was a leading figure of the movement for reason and liberty against tradition and authority, and in his religious teachings he emphasized individual conscience and represented God primarily as a moral ideal.
Kant’s writings constituted a high point of the Enlightenment, a fertile intellectual and cultural period that helped stimulate the social changes that produced the French Revolution (1789-1799). Other leading thinkers of this movement included Voltaire, Jean Jacques Rousseau, and Denis Diderot. Voltaire, developing the tradition of Deism begun by Locke and other liberal thinkers, reduced religious beliefs to those that can be justified by rational inference from the study of nature. Rousseau criticized civilization as a corruption of humanity’s nature and developed Hobbes’s doctrine that the state is based on a social contract with its citizens and represents the popular will. Diderot published his 35-volume work, known as the Encyclopédie, to which many scientists and philosophers contributed. Diderot and his Encyclopaedists, as they were known, associated the progress and the happiness of humankind with science and knowledge, whereas Rousseau criticized such ideas along with the very notion of civilization.
Philosophers of the 19th century generally developed their views with reference to the work of Kant. In Germany, Kant’s influence led subsequent philosophers to explore idealism and ethical voluntarism, a philosophical tradition that places a strong emphasis on human will. Whereas philosophers before Kant had explored the objects of knowledge, German philosophers who followed Kant on the path of idealism turned to the subject of knowledge, known variously as the ego, the I, the mind, and human consciousness.
Johann Gottlieb Fichte transformed Kant’s critical idealism into absolute idealism by eliminating Kant’s “things-in-themselves” (external reality) and making the self, or the ego, the ultimate reality. Fichte maintained that the world is created by an absolute ego, which is conscious first of itself and only later of non-self, or the otherness of the world. The human will, a partial manifestation of self, gives human beings freedom to act. Friedrich Wilhelm Joseph von Schelling moved still further toward absolute idealism by construing objects or things as the works of the imagination and Nature as an all-embracing being, spiritual in character. Schelling became the leading philosopher of the movement known as romanticism, which in contrast to the Enlightenment placed its faith in feeling and the creative imagination rather than in reason. The romantic view of the divinity of nature influenced the American transcendentalist movement, led by poet and essayist Ralph Waldo Emerson.
German philosopher Georg Wilhelm Friedrich Hegel proposed that truth is reached by a continuing dialectic, in which a concept (thesis) always gives rise to its opposite (antithesis), and the interaction between these two leads to the creation of a new concept (synthesis). Hegel employed this dialectical method in such works as Phenomenology of the Mind (1807) to explain history and the evolution of ideas.
The most powerful philosophical mind of the 19th century was the German philosopher Georg Wilhelm Friedrich Hegel, whose system of absolute idealism, although influenced greatly by Kant and Schelling, was based on a new conception of logic and philosophical method. Hegel believed that absolute truth, or reality, existed and that the human mind can know it. This is so because “whatever is real is rational,” according to Hegel. He conceived the subject matter of philosophy to be reality as a whole, a reality that he referred to as Absolute Spirit, or cosmic reason. The world of human experience, whether subjective or objective, he viewed as the manifestation of Absolute Spirit.
In his effort to develop an all-encompassing philosophical system, German philosopher Georg Wilhelm Friedrich Hegel wrote on topics ranging from logic and history to art and literature. He considered art to be one of the supreme developments of spiritual and absolute knowledge, surpassed only by religion and philosophy. In this excerpt from Introductory Lectures on Aesthetics, which was based on lectures that Hegel delivered between 1820 and 1829, Hegel discussed the relationship of poetry to other arts, particularly music, and explained that poetry was one mode of expressing the “Idea of beauty” that Hegel believed resided in all art forms. For Hegel, poetry was “the universal realization of the art of the mind.”
Philosophy’s task, according to Hegel, is to chart the development of Absolute Spirit from abstract, undifferentiated being into more and more concrete reality. Hegel believed this development occurs by a dialectical process (that is, a process through which conflicting ideas become resolved) which consists of a series of stages that occur in triads (sets of three). Each triad involves (1) an initial state (or thesis), which might be an idea or a movement; (2) its opposite state (or antithesis); and (3) a higher state, or synthesis, that combines elements from the two opposites into a new and superior arrangement. The synthesis then becomes the thesis of the next triad in an unending progress toward the ideal.
Hegel argued that this dialectical logic applies to all knowledge, including science and history. His discussion of history was particularly influential, especially because it supported the political and social philosophy later developed by Karl Marx. According to Hegel, human history demonstrates the dialectical development of Absolute Spirit, which can be observed by studying conflicts and wars and the rise and fall of civilizations. He maintained that political states are real entities, the manifestation of Spirit in the world, and participants in history. In every epoch a particular state is the bearer or agent of spiritual advance, and it thereby gathers power to itself. Because the dialectic means opposition and conflict, war must be expected, and it has value as evidence of the health of a state.
Hegel’s philosophy stimulated interest in history by representing it as a deeper penetration into reality than the natural sciences provide. His conception of the national state as the highest social embodiment of the Absolute Spirit was for some time believed to be a main source of 20th-century totalitarianism, although Hegel himself advocated a large measure of individual freedom.
German philosopher Arthur Schopenhauer considered the will as the basic reality and the source of human unhappiness, beliefs he set forth in 1819 in his major work, The World as Will and Idea. Only through reason and resignation can human beings overcome the strivings and desires of the will and achieve happiness.
German philosophers of the 19th century who came after Hegel rejected Hegel’s faith in reason and progress. Arthur Schopenhauer in The World as Will and Idea (1819) argued that existence is fundamentally irrational and an expression of a blind, meaningless force—the human will, which encompasses the will to live, the will to reproduce, and so forth. Will, however, entails continuous striving and results in disappointment and suffering. Schopenhauer offered two avenues of escape from irrational will: through the contemplation of art, which enables one to endure the tragedy of life, and through the renunciation of will and of the striving for happiness.
German philosopher Arthur Schopenhauer developed a philosophy of pessimism that focussed on the nature of the “will,” a term Schopenhauer used to mean both a person’s individual desires as well as the overall essence of being alive. Schopenhauer believed that although “will” was essential to life, it was also the source of endless striving and discontent. In this excerpt from Parerga und Paralipomena (1851, translated as Essays and Aphorisms), Schopenhauer contemplated the role of suffering in human life, and argued that pain was an inescapable part of life. Schopenhauer’s acceptance of human suffering reflected the influence of both Christian and Indian Buddhist religious traditions.
Schopenhauer was one of the first Western philosophers to be influenced by Indian philosophy, which was then appearing in Europe in translation. The influence of Buddhist thought, for example, appears in his sense that the world is full of evil and suffering which can be overcome only through resignation and renunciation. Schopenhauer’s own view that an irrational force lies at the centre of life subsequently influenced voluntaristic psychology, a school of psychology that emphasized the causes for our choices; sociological studies that examine nonrational factors affecting people; and cultural attitudes that play down the value of reason in life.
During the 19th century, Friedrich Nietzsche, in a return to the classical ideals of ancient Greece, attacked Christianity and moral philosophy. According to Nietzsche, the ideals of Christianity and moral philosophy were “slave moralities” that sought to restrict individuals of talent and vision from rising above the masses. He acclaimed the “will to power” and praised the creative accomplishments of great individuals.
German philosopher Friedrich Nietzsche continued the revolt against reason initiated by the romantic movement, but he scornfully repudiated Schopenhauer’s negative, resigned attitude. Instead, Nietzsche affirmed the value of vitality, strength, and the supremacy of an existence that is purely egoistic. He also scorned the Christian and democratic ideas of the equal worth of human beings, maintaining that it is up to a few aristocrats to refuse to subordinate themselves to a state or cause, and thereby achieve self-realization and greatness. For Nietzsche the power to be strong was the greatest value in life. Although Nietzsche valued geniuses over dictators, his beliefs helped bolster the ideas of the National Socialists (Nazis) who gained control of Germany in the 1930s.
One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.
Nineteenth-century Danish philosopher Søren Kierkegaard helped found existentialism and explored the inherent paradoxes in Christianity. In his book Fear and Trembling, Kierkegaard discussed Genesis 22, in which God commanded Abraham to kill his only son, Isaac. Abraham was prepared to heed God’s unreasonable request, and Kierkegaard regards Abraham’s “leap of faith” in this matter as the essence of religion.
Danish philosopher Søren Kierkegaard developed another distinctive philosophy of life. Kierkegaard’s ideas, which were not appreciated until a century after their appearance, were literary, religious, and self-revealing rather than systematic in character. They stressed the importance of experiences that the intellectual mind judges as absurd, including the experiences of angst (“anxiety”) and “fear and trembling.” (The latter phrase is the title of one of his books.) Such experiences, in his view, lead first to despair and eventually to religious faith. Kierkegaard discussed this process in terms of the religious person who is commanded by God to sacrifice his own most cherished treasures, as in the example of Abraham and the sacrifice of Isaac in the Old Testament. Although Abraham cannot understand this absurd request from God, he decides to obey his commitment to God. Through such terrible experiences, Kierkegaard claimed, we learn that humanity’s relationship to God is absolute and all else relative. What is most significant in a person’s life, Kierkegaard concluded, are the decisions made in such ethical crises.
Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.
Kierkegaard’s ideas came to have importance in the 20th century. The concepts of existence, dread, the absurd, and decision were influential in Germany, France, and English-speaking countries. The condition of humankind during an epoch with two world wars gave these ideas a new relevance; the philosophers who developed them founded the movement now known as existentialism.
In the 18th century British philosopher Jeremy Bentham founded the ethical, legal, and political doctrine of utilitarianism, which states that correct actions are those that result in the greatest happiness for the greatest number of people. For Bentham, happiness is precisely quantifiable and reducible to units of pleasure minus units of pain. Bentham was strongly opposed to then-dominant theories of natural rights, in which human beings are believed to possess certain inherent and unalterable social requirements.
Jeremy Bentham and John Stuart Mill, both economists as well as philosophers, dominated philosophy in England during the 19th century. Bentham originated the ethical principle of utilitarianism (that what is useful is good) and Mill developed and refined the doctrine. The utilitarians argued for an ethical principle that would be superior to the self-interest of the individual, just as Kant had established a rational principle of moral law superior to individual desire, by which people’s conduct ought to be governed. The utilitarians based their principle on the theory that everyone desires his or her own happiness, that people have to find that happiness in society, and that consequently we all have an interest in the general happiness. They took the position that whatever produces the greatest happiness for the greatest number of people is what is most useful for all. This is the meaning of the principle of utility, or benefit, from which utilitarianism takes its name.
English philosopher-economist John Stuart Mill was one of the most important thinkers of the 19th century. The son of English philosopher James Mill, he refined and elaborated on the work of his father and of English philosopher-economist Jeremy Bentham in his book Utilitarianism (1863). Utilitarian philosophers argued that all decisions could be made according to the principle of the greatest “utility,” or benefit, to the greatest number of people. In this section from the end of the work, Mill discussed the issue of criminal punishment and examined how it related to concepts of justice and fairness and to the doctrine of utility.
In evaluating happiness, Bentham believed it possible to measure quantitatively the pleasures resulting from each action—the pleasures of oneself and the pleasure of others—and thus to decide in any instance what promoted the greatest amount of happiness. Mill partly abandoned that idea and maintained that one should consider the quality, or type, of pleasure as well as the quantity. Mill applied utilitarian principles to social justice, and the principle of utility influenced legislation that brought about social and economic reforms in Great Britain.
British philosopher and economist John Stuart Mill, though a leading proponent of utilitarianism during the 19th century, came to understand that utilitarian thought was flawed because it failed to take account of people’s emotions. He became outspoken on the subject of equality for women, an unpopular cause at the time. His essay The Subjection of Women (1869) sought to shift the law and public perceptions in order to free women from what was effectively slavery, and to allow them to live as individuals.
British philosopher and economist Jeremy Bentham (1748-1832) was the originator of the doctrine known as utilitarianism. He declared that in order to come into accord with the laws of nature, government and citizens should act to increase the overall happiness of the community. The utilitarian principles of Bentham and others who shared his beliefs, including British philosopher-economists James Mill (1773-1836) and his son, John Stuart Mill (1806-1873), helped to bring about social and political reform in Britain.
The most influential achievement in political philosophy during the 19th century was the development of Marxism (see Political Theory). German political philosopher Karl Marx, who created the system known as Marxism, and his collaborator Friedrich Engels accepted the basic form of Hegel’s dialectic of history, but they made crucial modifications. For them history was a matter of the development not of Absolute Spirit but of the material conditions governing humanity’s economic existence. In their view, later known as historical materialism, the history of society is a history of class struggle in which the ruling class uses religion and other traditions and institutions, as well as its economic power, to reinforce its domination over the working classes. Human culture, according to Marx, is dependent on economic (material) conditions and serves economic ends. Religion, he concluded, is “the opiate of the masses” that serves the political end of suppressing mass revolution. Marxism is a theory of revolution, of history, of economics, and of politics, and it served as the ideology for Communism. Although he was a philosopher, Marx had disdain for merely theoretical intellectual work, stating, “The philosophers have only interpreted the world in different ways; the point is to change it.”
Many Socialist and Communist tracts were published in the early and mid-19th century, but the succinct expression of Socialist ideas and the dramatic rhetoric helped make The Communist Manifesto (1848) the central text of modern Socialism. In this excerpt from Part I, German political philosopher Karl Marx advanced the idea that the history of society is the history of struggle between the oppressed and their oppressors. Marx based the Manifesto in part on a draft prepared by German revolutionary political economist Friedrich Engels.
Marx’s view of human history is both profoundly pessimistic and profoundly optimistic. Its pessimism lies in his belief that history reflects the oppression of the many by a small minority, who thereby secure economic and political power. It is optimistic on two counts. First, Marx believed that technical innovations bring about new ways of meeting human needs and make it increasingly possible for people to satisfy their deepest wants and to develop and perfect their individual capacities. Second, Marx claimed to have proved that the long history of oppression would soon end when the masses rose up and ushered in a revolution that would create a classless utopian society. The first idea enabled Marx to bring attention in the modern era to Aristotle’s idealistic conception of human flourishing, which called upon people to develop and manifest many different abilities, including intellectual, artistic, and physical skills. The second idea motivated much radical activity during the 20th century, including the Russian Revolutions of 1917, the Communist victory in China in 1949, and the Cuban Revolution of 1959.
American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by American philosopher C. S. Peirce, James held that ‘truth is what works’, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
Toward the end of the 19th century, pragmatism became the most vital school of thought within American philosophy. It continued the empiricist tradition of grounding knowledge on experience and stressing the inductive procedures of experimental science. The pragmatists believed in the progress of human knowledge and that ideas are tools whose validity and significance are established as people adapt and test them in physical and social settings. For pragmatists, ideas demonstrate their value insofar as they enrich human experience.
At the turn of the century, American psychologist and philosopher William James gave a series of lectures on religion at Scotland’s University of Edinburgh. In the 20 lectures he delivered between 1901 and 1902, published together as The Varieties of Religious Experience (1902), James discussed such topics as the existence of God, religious conversions, and immortality. In his lectures on mysticism, excerpted here, James defined the characteristics of a mystical experience—a state of consciousness in which God is directly experienced. He also quoted accounts of mystical experiences as given by important religious figures from many different religious traditions.
The three most important American philosophers of the pragmatic movement were Charles Sanders Peirce, who founded pragmatism and gave the movement its name; psychologist and religious thinker William James; and psychologist and educator John Dewey. Their work continued into the 20th century. Peirce formulated a pragmatic theory of knowledge and advocated “laboratory philosophy” whereby researchers investigate and clarify the kinds of knowledge that can be gained either through everyday experience or through scientific inquiry. By limiting the realm of meaningful questions to those that concern possible experience, Peirce hoped to introduce scientific logic into metaphysics. He advanced a theory of truth that defined truth as that upon which an ideal community of researchers could agree. Peirce concluded that many traditional philosophical concepts have no practical use and thus are meaningless.
Whereas Peirce sought to determine the meaning of terms and ideas and thereby make metaphysics a precise and pragmatic discipline, James and Dewey applied the principles of pragmatism in developing a comprehensive philosophy. Like Peirce, James maintained that the meaning of ideas lies in their practical consequences. If an idea has no practical uses, then it is meaningless. James focussed on the power of true ideas to offer individuals, rather than scientific researchers, practical guidance in handling problems that arise in everyday experience. Truth, according to James, resides in those experiences that enable people to successfully navigate the challenges and demands of the world.
Dewey emphasized the cooperative process in which human beings, as intelligent and social beings, create and revise ideas about the world. One such process was scientific inquiry; another was participation in just and democratic social and political communities. Dewey concluded that science and democracy are the only sure guides for intelligent behaviour. His progressive social philosophy communicates a vision of a world in which science, education, and social reform demonstrate the benefits of pragmatic ideas for human life.
A diversity of methods, interests, and styles of argumentation marked 20th-century philosophy and proved both fruitful and destructive. This diversity, and the divisions that arose, proved fruitful as new topics arose and new ways developed for discussing these topics philosophically. It proved destructive, however, as philosophers wrote increasingly for a narrow audience and often ignored or derided philosophical styles different from their own.
In the decades following World War II (1939-1945), significant divisions arose between so-called continental philosophers, who worked on the European continent, and philosophers in the United States, the United Kingdom, and Australia. Deconstruction and other postmodern theories followed existentialism and phenomenology on the continent, whereas the Americans, Britons, and Australians worked in the analytic tradition. In the final decades of the century, the divisions between continental and analytic philosophy eased as interest moved away from the old disputes, and more and more philosophers became interested in exploring common roots of the two traditions in the history of Western philosophy.
German philosopher Edmund Husserl founded the 20th-century movement of phenomenology. Husserl said that philosophers must attempt to describe and analyze phenomena as they occur, setting aside such considerations as whether the phenomena are objective or subjective. He emphasized careful observation and interpretation of our conscious perceptions of things. First, we must attend to what we are conscious of, observing our perceptions far more carefully and intensely than we do in everyday life. Second, we must reflect upon these observations and interpret them without preconceptions. Husserl maintained that we arrive at meaning and the key to solving philosophical problems through a logical analysis of the data that emerges from such a “phenomenological study” of the contents of the mind.
German philosopher Martin Heidegger was instrumental in the development of the 20th-century philosophical school of existential phenomenology, which examines the relationship between phenomena and individual consciousness. His inquiries into the meaning of “authentic” or “inauthentic” existence greatly influenced a broad range of thinkers, including French existentialist Jean-Paul Sartre. Author Michael Inwood explores Heidegger’s key concept of Dasein, or “being,” which was first expounded in his major work Being and Time (1927).
French philosopher Maurice Merleau-Ponty and German philosopher Martin Heidegger further developed phenomenology and its emphasis on pure description. For Merleau-Ponty, however, all perceptual experience carries along with it a reference to something beyond and independent of our perception of it. Heidegger, too, sought to return to what he claimed had become unfamiliar: Sein (German for “being” or “existence”).
French philosopher and author Simone de Beauvoir often addressed existential themes such as the necessity of being responsible for oneself. Two of her best-known works were nonfiction: The Second Sex, a study of women in society, published in 1949; and The Coming of Age, a condemnation of society’s attitude toward the aged, published in 1970. She is shown here in a Parisian café with fellow French existentialist and writer Jean-Paul Sartre. He was her close companion for almost 60 years.
Heidegger was also a key figure in the 20th-century movement known as existentialism. Existentialists focussed on the personal: on individual existence, subjectivity, and choice. Two central existential doctrines claim that there is no fixed human essence structuring our lives and that our choices are never determined by anything except our own free will. In making choices in life, we determine our individual selves. These doctrines imply that human beings have enormous freedom. Existentialists maintained that the human ability to make free choices is so great that it overwhelms many individuals, who experience a “flight from freedom” by falsely treating religion, science, or other external factors as constraints and limits on individual freedom. In addition to Heidegger, the main existentialist thinkers include French feminist philosopher Simone de Beauvoir and her companion, the philosopher, novelist, and playwright Jean-Paul Sartre.
In the early 20th century British mathematician and philosopher Bertrand Russell, along with British mathematician and philosopher Alfred North Whitehead, attempted to demonstrate that mathematics and numbers can be understood as groups of concepts, or classes. Russell and Whitehead tried to show that mathematics is closely related to logic and, in turn, that ordinary sentences can be logically analysed using mathematical symbols for words and phrases. This idea resulted in a new symbolic language, used by Russell in a field he termed philosophical logic, in which philosophical propositions were reformulated and examined according to his symbolic logic.
Analytic philosophy rose to prominence in the United Kingdom after the end of World War I (1914-1918). This movement heralded a linguistic shift according to which the philosophical study of language became the central task of philosophy. Many analytic philosophers concluded that a number of issues prominent in the history of philosophy are unimportant or even meaningless because they arose when philosophers misunderstood or misused language. Analytic philosophy is based upon the assumption that the careful analysis of language and concepts can clear up these problems and confusions. The key figures at the beginning of the movement were British philosopher and mathematician Bertrand Russell and Austrian-born British philosopher Ludwig Wittgenstein.
Russell, strongly influenced by the precision of mathematics, wished to construct a logical language that would reflect the nature of the world. He argued that what he called the “surface grammar” of everyday language masks a true “logical grammar,” knowledge of which is essential for understanding the true meaning of statements. Russell and many philosophers influenced by him asserted that complex statements can be reduced to simple components; if their logic does not permit such reduction, then the statements are meaningless.
Russell’s view was central to the development of the so-called Vienna Circle, a group of analytic philosophers active from about 1920 to 1950, who were led by Rudolf Carnap and Moritz Schlick. The members of the Vienna Circle were scientists or mathematicians as well as philosophers, and they originated the movement known as logical positivism. They believed that the clarification of meaning is the task of philosophy, and that all meaningful statements are either scientifically verifiable statements about the world or else logical tautologies (self-evident propositions). According to the logical positivists the discovery of new facts belongs to science, and metaphysics, the construction of comprehensive truths about reality, is a pretentious pseudo-science.
Wittgenstein, who studied with Russell at Cambridge University, was perhaps the most important analytic philosopher. Like Russell, he distrusted ordinary language. In his Tractatus Logico-Philosophicus (1921) Wittgenstein stated that “philosophy aims at the logical clarification of thoughts.” Philosophy’s function, he believed, is to monitor the use of language by reducing complex statements to their elementary components and by rebuffing all attempts to misuse words in creating the illusion of philosophical depth. “What can be said at all can be said clearly, and what we cannot talk about we must consign to silence.” The Tractatus made important contributions to the philosophy of language, logic, and the philosophy of mathematics. The account of language in Wittgenstein’s later work was much richer and more sophisticated than that in the Tractatus. However, Wittgenstein never abandoned his radical early views on the nature of philosophy.
Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-Philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
As the analytic movement developed, different ideas emerged about how philosophical analysis should proceed. A group called constructivists was inspired by Russell, the early writings of Wittgenstein, and the logical positivists. The solutions to philosophical problems, the constructivists argued, lie in using tools of logic to create more precise technical vocabularies. Two leading representatives of this movement were the American philosophers Nelson Goodman and W. V. Quine. Quine saw language and logic as themselves embodying theories about reality, rather than consisting of theory-neutral tools of analysis. By contrast, the descriptivists maintained that philosophical analysis should focus on the careful study of the everyday usage of crucial terms. This group was inspired by the 20th-century British philosophers G. E. Moore, Gilbert Ryle, and John Austin.
German philosopher Rudolf Carnap attempted to introduce the methodology and precision of mathematics into the study of philosophy. This approach is now known as logical positivism or logical empiricism.
Although the radical formulations of analytic philosophy from the first half of the 20th century no longer hold sway, analytic philosophy continues to flourish. Many contemporary philosophers have adopted ideas, methods, or values from the movement, including the Americans Donald Davidson, Hilary Putnam, and Saul Kripke. Analytic philosophy also has widely influenced the training and practices of philosophers today. On the one hand, its influence has led to a renewed commitment to clarity, concision, incisiveness, and depth in philosophical thinking and writing. On the other hand, it has also caused many philosophers to embrace difficult and obscure technical language to such an extent that their ideas are accessible to only a small community of specialists.
French philosopher Michel Foucault became world famous for his research into the shifting patterns of power in society. In 1970 he was elected to France’s highest academic post, the Collège de France.
Inaccessible ideas and impenetrable prose also characterize many postmodern philosophical texts, although the difficulties in this case are often intentional and reflect specific postmodern claims about the nature of language and meaning. The literal meaning of postmodernism is “after modernism,” and in many ways postmodernism constitutes an attack on modernist claims about the existence of truth and value, claims that stem from the European Enlightenment of the 18th century. In disputing past assumptions postmodernists generally display a preoccupation with the inadequacy of language as a mode of communication. Among the major postmodern theorists are French philosophers Jacques Derrida and Michel Foucault, and psychoanalyst Jacques Lacan.
Derrida originated the philosophical method of deconstruction, a system of analysis that assumes a text has no single, fixed meaning, both because of the inadequacy of language to express the author’s original intention and because a reader’s understanding of the text is culturally conditioned—that is, influenced by the culture in which the reader lives. Thus texts have many possible legitimate interpretations brought about by the “play” of language. Derrida stresses the philosophical importance of pun, metaphor, ambiguity, and other playful aspects of language traditionally disregarded in philosophy. His method of deconstruction involves close and careful readings of central texts of Western philosophy that bring to light some of the conflicting forces within the text and that highlight the devices the text uses to claim legitimacy and truth for itself, many of which may lie beyond the intention of its author. Although some of Derrida’s ideas about language resemble views held by the analytic philosophers Wittgenstein, Quine, and Davidson, many philosophers schooled in the analytic tradition have dismissed Derrida’s work as destructive of philosophy.
Algerian-born French philosopher Jacques Derrida challenged established ideas on the analysis of texts during the 1960s and 1970s. His deconstructivist approach to reading a text opens it up to many different interpretations, each of which, he argued, is legitimate within its context.
Foucault created a searing critique of the ideals of the Enlightenment, such as reason and truth. Like Derrida, Foucault used close readings of historical texts to challenge assumptions, demonstrating how ideas about human nature and society, which we assume to be permanent truths, have changed over time. From an array of historical texts Foucault created “philosophical anthropologies” that reveal the evolution of concepts such as reason, madness, responsibility, punishment, and power. By examining the origins of these concepts, he maintained, we see that attitudes and assumptions that today seem natural or even inevitable are historical phenomena dependent upon time and place. He further claimed that the historical development of these ideas demonstrates that seemingly humane and liberal Enlightenment ideals are in reality coercive and destructive.
Lacan agreed with Derrida and Foucault about the need to overturn crucial cultural and philosophical assumptions, but he arrived at this conclusion by a different method altogether. Influenced by Swiss linguist Ferdinand de Saussure and the psychoanalytic theories of Sigmund Freud, Lacan claimed that the unconscious portion of the mind operates with structures and rules analogous to those of a language. He used this claim to criticize both psychoanalytic theory and philosophy. On the one hand, he believed that concepts from linguistics could clarify and correct Freud’s picture of the mind and provide the field of psychoanalysis with greater philosophical depth. On the other hand, he maintained that applying psychoanalytic methods and theories to linguistics would radically revise traditional philosophical views of language and reason.
Feminist philosophers also challenge basic principles of traditional Western philosophy, investigating how philosophical inquiry would change if women conducted it and if it incorporated women’s experiences as well as their viewpoints. In interpreting the history of Western philosophy, feminists study texts by male philosophers for their depiction of women, masculine values, and biases toward men. Feminist philosophers also write about women’s experiences of subjectivity, their relationship to their bodies, and feminist concepts of language, knowledge, and nature. They explore connections between feminism in philosophy and other emerging feminist disciplines, such as feminist legal theory, feminist theology, and ecological feminism. Central to feminist philosophy is the concept of the oppression of women who live in patriarchal (male-controlled) societies; much of the work of feminist philosophers has gone into understanding patriarchy and developing alternatives to it. Prominent feminist philosophers include French postmodern philosophers Luce Irigaray and Hélène Cixous and American philosopher of law Catharine MacKinnon.
Environmental philosophy is concerned with issues that arise when human beings interact with the environment. For instance, is a transformation of society necessary for the survival of living organisms and the environment? How is the exploitation of nature related to the subjugation of women and other oppressed humans? How can the philosophical study of the environment guide and inspire effective environmental activism? Most environmental philosophers seek to apply philosophical methods and ideas in collaboration with academics and activists working in the environmental sciences, theology, and feminism.
Two figures who played a prominent role in founding environmental philosophy are Norwegian philosopher Arne Naess and American naturalist, conservationist, and philosopher Aldo Leopold. Naess founded the so-called deep-ecology movement in the 1970s. The movement distinguishes between shallow ecology, which views nature in terms of its value to human beings, and deep ecology, which values nature independently of its usefulness to humanity. Leopold, in his influential book A Sand County Almanac (1949), called for the extension of ethical concern to include all life on Earth, not just human life. Other contemporary environmental philosophers include American ecological theologian Thomas Berry and American ecological feminist Karen Warren.
British political philosopher Isaiah Berlin is best known as a proponent of secular liberalism. In his book Four Essays on Liberty, Berlin advocates “negative” liberty, that is, freedom from restrictions on the individual.
Political philosophy dates back to Plato and Aristotle, who discussed the nature of the ideal government and the ideal society. It continued in theories of individual liberty and political institutions put forth by Hobbes, Mill, and Rousseau. Political philosophy today features a lively dialogue between defenders of the liberal position and defenders of the communitarian position. The former place the highest value on individual liberties, whereas the latter argue that extreme individual freedom undermines shared community values.
According to liberalism the chief goods (benefits) of government and society are personal and political freedoms, such as freedom of speech, freedom of association, and freedom of conscience (belief). Many liberal theorists view the freedom to make moral choices as the most important freedom; they argue that political and social systems should be organized to allow individuals the freedom to pursue their own ideas about “the good life.” Communitarians respond that granting individuals this extreme freedom of choice ultimately limits human experience by undermining shared communal values. They claim that by ignoring the importance of community, liberalism disregards humanity’s social nature.
Prominent communitarians include Scottish philosopher Alasdair MacIntyre and American philosopher Michael Sandel. Important liberal theorists include British philosopher Isaiah Berlin and American philosophers Ronald Dworkin and John Rawls. Rawls is the author of A Theory of Justice (1971), considered to be the most significant work of political philosophy in the 20th century. In it Rawls presents the idea of “justice as fairness,” a principle that promotes the equal distribution of the benefits and burdens of society among individuals. Any advantages that society confers should benefit those who are most disadvantaged, Rawls believes. From this and other principles he has developed theories about political and social relations within liberal democracies and between those democracies and certain illiberal states. Rawls’s ideas remain a major inspiration for much current work in political philosophy.
The advent of new medical and reproductive technologies in recent years has complicated how ethical decisions are made in medical research and practice. This slide show highlights some of the most prominent issues in medical ethics: assisted reproductive technologies, human fetal tissue transplantation, cloning, and abortion.
Although most contemporary philosophy is highly technical and inaccessible to nonspecialists, some contemporary philosophers concern themselves with practical questions and strive to influence today’s culture. Practitioners of feminist philosophy, environmental philosophy, and some areas of contemporary political philosophy seek to use the tools of philosophy to resolve current issues directly related to people’s lives. Nowhere have philosophers more enthusiastically embraced practical relevance than in contemporary applied ethics, a field that has developed since the 1960s. Most of the questions applied ethicists address are variations on the general question “How should we live and die?”, a question central to ancient Greek philosophy.
Separate areas of specialization, such as biomedical ethics and business ethics, have emerged within applied ethics. Biomedical ethics deals with questions arising from the life sciences and human health care, and has two subspecialties: bioethics and medical ethics. Bioethicists study the ethical implications of advances in genetics and biotechnology, such as genetic testing, genetic privacy, cloning, and new reproductive technologies. For example, they consider the consequences for individuals who learn they have inherited a fatal genetic disease, or the consequences of technology that enables parents to choose the sex of a baby. Bioethicists then offer advice to legislators, researchers, and physicians active in these areas. Specialists in medical ethics offer advice to physicians, other health care personnel, and patients on a wide variety of issues, including abortion, euthanasia, fertility treatments, medical confidentiality, and the allocation of scarce medical resources. Much of the work in medical ethics directly affects the everyday practice of medicine, and most nursing students and medical students now take courses in this field.
Business ethicists bring ethical theories and techniques to bear on moral issues that arise in business. For example, what are the responsibilities of corporations to their employees, their customers, their shareholders, and the environment? Most business students take courses in business ethics, and many large corporations regularly consult with specialists in the field. Business ethicists also address larger topics, such as the ethics of globalization and the moral justification of various economic systems, such as capitalism and socialism.
The 6th-century-BC Greek mathematician and philosopher Pythagoras was not only an influential thinker, but also a complex personality whose doctrines addressed the spiritual as well as the scientific. Some of the legends about Pythagoras were collected by Aristotle in his lost work On the Pythagoreans. Here is a representative sample: Pythagoras, the son of Mnesarchus, first studied mathematics and numbers but later also indulged in the miracle-mongering of Pherecydes. When at Metapontum a cargo ship was entering harbour and the onlookers were praying that it would dock safely because of its cargo, he stood up and said: ‘You will see that this ship is carrying a corpse.’ Again, in Caulonia, as Aristotle says, he foretold the appearance of the white she-bear; and Aristotle in his writings about him tells many stories including the one about the poisonous snake in Tuscany which bit him and which he bit back and killed. And he foretold to the Pythagoreans the coming strife - which is why he left Metapontum without being observed by anybody. And while he was crossing the river Casas in company with others he heard a superhuman voice saying ‘Hail, Pythagoras’ - and those who were there were terrified. And once he appeared both in Croton and in Metapontum on the same day and at the same hour. Once, when he was sitting in the theatre, he stood up, so Aristotle said, and revealed to the audience his own thigh, which was made of gold. Several other paradoxical stories are told of him; but since I do not want to be a mere transcriber, enough of Pythagoras.
A large body of teachings came to be ascribed to Pythagoras. They divide roughly into two categories, the mathematico-metaphysical and the moral - as the poet Callimachus put it, Pythagoras was the first to draw triangles and polygons and bisect the circle - and to teach men to abstain from living things.
Most modern scholars are properly sceptical of these ascriptions, and their scepticism is nothing new. The best ancient commentary on Pythagoras’ doctrines is to be found in a passage of Porphyry: Pythagoras acquired a great reputation: he won many followers in the city of Croton itself (both men and women, one of whom, Theano, achieved some fame), and many from the nearby foreign territory, both kings and noblemen. What he said to his associates no-one can say with any certainty; for they preserved no ordinary silence. But it became very well known to everyone that he said, first, that the soul is immortal; then, that it changes into other kinds of animals; and further, that at certain periods whatever has happened happens again, there being nothing absolutely new; and that all living things should be considered as belonging to the same kind. Pythagoras seems to have been the first to introduce these doctrines into Greece.
The theory of metempsychosis, or the transmigration of the soul, is implicitly ascribed to Pythagoras by Xenophanes in the text quoted above. Herodotus also mentions it: The Egyptians were the first to advance the idea that the soul is immortal and that when the body dies it enters another animal that is then being born; when it has gone round all the creatures of the land, the sea and the air, it again enters the body of a man that is then being born; and this cycle takes it three thousand years. Some of the Greeks - some earlier, some later - put forward this idea as though it were their own: I know their names but I do not transcribe them.
The names Herodotus coyly refrains from transcribing will have included that of Pythagoras. Two later passages are worth quoting even though they belong to the legendary material. Heraclides of Pontus reports that [Pythagoras] tells the following story of himself: he was once born as Aethalides and was considered to be the son of Hermes. Hermes invited him to choose whatever he wanted, except immortality; so he asked that, alive and dead, he should remember what happened to him. Thus in his life he remembered everything, and when he died he retained the same memories. Some time later he became Euphorbus and was wounded by Menelaus. Euphorbus used to say that he had once been Aethalides and had acquired the gift from Hermes and learned of the circulation of his soul - how it had circulated, into what plants and animals it had passed, what his soul had suffered in Hades and what other souls experienced. When Euphorbus died, his soul passed into Hermotimus, who himself wanted to give a proof and so went to Branchidae, entered the temple of Apollo and pointed to the shield that Menelaus had dedicated (he said that he had dedicated the shield to Apollo when he sailed back from Troy); it had by then decayed and all that was left was the ivory boss. When Hermotimus died, he became Pyrrhus, the Delian fisherman; and again he remembered everything - how he had been first Aethalides, then Euphorbus, then Hermotimus, then Pyrrhus. When Pyrrhus died, he became Pythagoras and remembered everything.
Pythagoras believed in metempsychosis and thought that eating meat was an abominable thing, saying that the souls of all animals enter different animals after death. He himself used to say that he remembered being, in Trojan times, Euphorbus, Panthus’ son, who was killed by Menelaus. They say that once when he was staying at Argos he saw a shield from the spoils of Troy nailed up, and burst into tears. When the Argives asked him the reason for his emotion, he said that he himself had borne that shield at Troy when he was Euphorbus. They did not believe him and judged him to be mad, but he said he would provide a true sign that it was indeed the case: on the inside of the shield there had been inscribed in archaic lettering ‘Euphorbus’. Because of the extraordinary nature of his claim they all urged that the shield be taken down - and the inscription was indeed found on the inside.
The idea of eternal recurrence had a wide currency in later Greek thought. It is ascribed to ‘the Pythagoreans’ in a passage from Simplicius: The Pythagoreans too used to say that numerically the same things occur again and again. It is worth setting down a passage from the third book of Eudemus’ Physics in which he paraphrases their views:
One might wonder whether or not the same time recurs as some say it does. Now we call things ‘the same’ in different ways: things the same in kind plainly recur - e.g. summer and winter and the other seasons and periods; again, motions recur the same in kind - for the sun completes the solstices and the equinoxes and the other movements. But if we are to believe the Pythagoreans, and hold that things the same in number recur - in that you will be sitting here and I shall talk to you, holding this stick, and so on for everything else - then it is plausible that the same time too recurs.
THE GREEKS
Greek Philosophy, body of philosophical concepts developed by the Greeks, particularly during the flowering of Greek civilization between 600 and 200 BC. Greek philosophy formed the basis of all later philosophical speculation in the Western world. The intuitive hypotheses of the ancient Greeks foreshadowed many theories of modern science, and many of the moral ideas of pagan Greek philosophers have been incorporated into Christian moral doctrine. The political ideas set forth by Greek thinkers influenced political leaders as different as the framers of the US Constitution and the founders of various 20th-century totalitarian states.
The School of Athens (1510-1511) by Italian Renaissance painter Raphael adorns a room in the Vatican Palace. The artist depicts several philosophers of classical antiquity and portrays each with a distinctive gesture, conveying complex ideas in simple images. In the centre of the composition, Plato and Aristotle dominate the scene. Plato points upward to the world of ideas, where he believes knowledge lies, whereas Aristotle holds his forearm parallel to the earth, stressing observation of the world around us as the source of understanding. In addition, Raphael draws comparisons with his illustrious contemporaries, giving Plato the face of the Renaissance genius Leonardo da Vinci, and Heraclitus, who rests his elbow on a large marble block, the face of the Renaissance sculptor Michelangelo. Euclid, bending down at the right, resembles the Renaissance architect Bramante. Raphael paints his own portrait on the young man in a black beret at the far right. In accordance with Renaissance ideas, artists belong to the ranks of the learned and the fine arts have the stature and merit of the written word.
Greek philosophy may be divided between those philosophers who sought an explanation of the world in physical terms and those who stressed the importance of nonmaterial forms or ideas. The first important school of Greek philosophy, the Ionian or Milesian, was largely materialistic. Founded by Thales of Miletus in the 6th century BC, it began with Thales' belief that water is the basic substance out of which all matter is created. A more elaborate view was offered by Anaximander, who held that the raw material of all matter is an eternal substance that changes into the commonly experienced forms of matter. These forms in turn change and merge into one another according to the rule of justice, that is, balance and proportion. Heraclitus taught that fire is the primordial source of matter, but he believed that the entire world is in a constant state of change or flux and that most objects and substances are produced by a union of opposite principles. He regarded the soul, for example, as a mixture of fire and water. The concept of nous (“mind”), an infinite and unchanging substance that enters into and controls every living object, was developed by Anaxagoras, who also believed that matter consisted of infinitesimally small particles, or atoms. He epitomized the philosophy of the Ionian school by suggesting a nonphysical governing principle and a materialistic basis of existence.
The 6th-century-BC Greek mathematician and philosopher Pythagoras was not only an influential thinker, but also a complex personality whose doctrines addressed the spiritual as well as the scientific. The following is a collection of short excerpts from studies of Pythagorean teachings and from anecdotes about Pythagoras written by later Greek thinkers, such as the philosopher Aristotle, the historians Herodotus and Diodorus Siculus, and the biographer Diogenes Laërtius.
The division between idealism and materialism became more distinct. Pythagoras stressed the importance of form rather than matter in explaining material structure. The Pythagorean school also laid great stress on the importance of the soul, regarding the body only as the soul's “tomb.” According to Parmenides, the leader of the Eleatic school, the appearance of movement and the existence of separate objects in the world are mere illusions; they only seem to exist. The beliefs of Pythagoras and Parmenides formed the basis of the idealism that was to characterize later Greek philosophy.
A more materialistic interpretation was made by Empedocles, who accepted the belief that reality is eternal but declared that it is composed of chance combinations of the four primal substances: fire, air, earth, and water. Such materialistic explanations reached their climax in the doctrines of Democritus, who believed that the various forms of matter are caused by differences in the shape, size, position, and arrangement of component atoms. Materialism applied to daily life inspired the philosophy of a group known as the Sophists, who were active in the 5th century BC. With their stress on the importance of human perception, such Sophists as Protagoras doubted that humanity would ever be able to reach objective truth through reason and taught that material success rather than truth should be the purpose of life.
Socrates, an influential philosopher of ancient Greece, never took notes on his own teachings; rather the notes of his pupils, including Plato, are the only record of his work. Socrates championed the ideal of reason and required that people act in accordance with their reasoned values. His criticism of injustice in Athenian society led to his prosecution for corrupting the youth of Athens. True to his principles, Socrates refused the opportunity to recant his criticisms and accepted the death sentence passed at his trial. Despite his followers’ plans for his escape, he died in confinement, calmly drinking a lethal dose of hemlock, in 399 BC.
In contrast were the ideas of Socrates, with whom Greek philosophy attained its highest level. His avowed purpose was “to fulfill the philosopher's mission of searching into myself and other men.” After a proposition had been stated, the philosopher asked a series of questions designed to test and refine the proposition by examining its consequences and discovering whether it was consistent with the known facts. Socrates described the soul not in terms of mysticism but as “that in virtue of which we are called wise or foolish, good or bad.” In other words, Socrates considered the soul a combination of an individual's intelligence and character.
Greek philosopher Socrates chose to die rather than cease teaching his philosophy, declaring that “no evil can happen to a good man, either in life or after death.” In 399 BC Socrates was accused and convicted of impiety and moral corruption of the youth of Athens, Greece. At his trial, he presented a justification of his life. The substance of his speech was recorded by Greek philosopher Plato, a disciple of Socrates, in Plato’s Apology.
Plato, one of the most famous philosophers of ancient Greece, was the first to use the term philosophy, which means “love of wisdom.” Born around 428 BC, Plato investigated a wide range of topics. Chief among his ideas was the theory of forms, which proposed that objects in the physical world merely resemble perfect forms in the ideal world, and that only these perfect forms can be the object of true knowledge. The goal of the philosopher, according to Plato, is to know the perfect forms and to instruct others in that knowledge.
A student of ancient Greek philosopher Plato, Aristotle shared his teacher’s reverence for human knowledge but revised many of Plato’s ideas by emphasizing methods rooted in observation and experience. Aristotle surveyed and systematized nearly all the extant branches of knowledge and provided the first ordered accounts of biology, psychology, physics, and literary theory. In addition, Aristotle invented the field known as formal logic, pioneered zoology, and addressed virtually every major philosophical problem known during his time. Known to medieval intellectuals as simply “the Philosopher,” Aristotle is possibly the greatest thinker in Western history and, historically, perhaps the single greatest influence on Western intellectual development.
The idealism of Socrates was organized by Plato into a systematic philosophy. In his theory of Ideas, Plato regarded the objects of the real world as merely shadows of eternal Forms or Ideas. Only these changeless, eternal Forms can be the object of true knowledge; the perception of their shadows, that is, the real world as heard, seen, and felt, is merely opinion. The goal of the philosopher, he said, is to know the eternal Forms and to instruct others in that knowledge.
What is the nature of knowledge? And of ignorance? The 4th-century-BC Greek philosopher Plato used the myth, or allegory, of the cave to illustrate the difference between genuine knowledge and opinion or belief. This distinction is at the heart of one of Plato’s most important works, The Republic. In the first part of the myth of the cave, excerpted here, Plato constructs a dialogue in which he considers the difficult transition from belief based on appearances to true understanding founded in reality.
Plato's theory of knowledge is implicit in his theory of Ideas. He argued that both the material objects perceived and the individual perceiving them are constantly changing; but, since knowledge must be concerned only with unchangeable and universal objects, knowledge and perception are fundamentally different.
The Nicomachean Ethics, by ancient Greek philosopher Aristotle, examines the nature of eudaimonia, or happiness. The philosopher identifies happiness with goodness, but the problem of defining goodness then arises. This excerpt from Book I of the Ethics, consisting of chapters 5, 6, and 7, discusses the nature of happiness and asserts that human happiness derives from “self-sufficiency,” by which Aristotle means the application of reason to fulfill one’s innate abilities.
In place of Plato's doctrine of Ideas with a separate and eternal existence of their own, Aristotle proposed a group of universals that represent the common properties of any group of real objects. The universals, unlike Plato's Ideas, have no existence outside of the objects they represent. Closer to Plato's thought was Aristotle's definition of form as a distinguishing property of objects, but without an independent existence apart from the objects in which it is found. Describing the material universe, Aristotle stated that it consists of the four elements (fire, air, earth, and water) and a fifth element that exists everywhere and is the sole constituent of the heavenly bodies “above” the moon.
In the writings of Plato and Aristotle the dominant strains of idealism and materialism in Greek philosophy reached, respectively, their highest expression, producing a body of thought that continues to influence philosophical inquiry. Subsequent Greek philosophy, reflecting a historical period of civil unrest and individual insecurity, was less concerned with the nature of the world than with the problems of the individual. During this period four major schools of largely materialistic, individualistic philosophy arose: that of the Cynics, and those espousing Epicureanism, Skepticism, and Stoicism.
PLATO (428?-347 BC)
The School of Athens (1510-1511) by Italian Renaissance painter Raphael adorns a room in the Vatican Palace. The artist depicts several philosophers of classical antiquity and portrays each with a distinctive gesture, conveying complex ideas in simple images. In the centre of the composition, Plato and Aristotle dominate the scene. Plato points upward to the world of ideas, where he believes knowledge lies, whereas Aristotle holds his forearm parallel to the earth, stressing observation of the world around us as the source of understanding. In addition, Raphael draws comparisons with his illustrious contemporaries, giving Plato the face of the Renaissance genius Leonardo da Vinci, and Heraclitus, who rests his elbow on a large marble block, the face of the Renaissance sculptor Michelangelo. Euclid, bending down at the right, resembles the Renaissance architect Bramante. Raphael paints his own portrait on the young man in a black beret at the far right. In accordance with Renaissance ideas, artists belong to the ranks of the learned and the fine arts have the stature and merit of the written word.
Plato was born to an aristocratic family in Athens. His father, Ariston, was believed to have descended from the early kings of Athens. Perictione, his mother, was distantly related to the 6th-century BC lawmaker Solon. When Plato was a child, his father died, and his mother married Pyrilampes, who was an associate of the statesman Pericles.
As a young man Plato had political ambitions, but he became disillusioned by the political leadership in Athens. He eventually became a disciple of Socrates, accepting his basic philosophy and dialectical style of debate: the pursuit of truth through questions, answers, and additional questions. Plato witnessed the death of Socrates at the hands of the Athenian democracy in 399 BC. Perhaps fearing for his own safety, he left Athens temporarily and travelled to Italy, Sicily, and Egypt.
In 387 Plato founded the Academy in Athens, the institution often described as the first European university. It provided a comprehensive curriculum, including such subjects as astronomy, biology, mathematics, political theory, and philosophy. Aristotle was the Academy’s most prominent student.
Pursuing an opportunity to combine philosophy and practical politics, Plato went to Sicily in 367 to tutor the new ruler of Syracuse, Dionysius the Younger, in the art of philosophical rule. The experiment failed. Plato made another trip to Syracuse in 361, but again his engagement in Sicilian affairs met with little success. The concluding years of his life were spent lecturing at the Academy and writing. He died at about the age of 80 in Athens in 348 or 347 BC.
Plato’s writings were in dialogue form; philosophical ideas were advanced, discussed, and criticized in the context of a conversation or debate involving two or more persons. The earliest collection of Plato’s work includes 35 dialogues and 13 letters. The authenticity of a few of the dialogues and most of the letters has been disputed.
Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues - the dialectical method, used most famously by his teacher Socrates - has led to difficulties in interpreting some of the finer points of his thoughts. The issue of what exactly Plato meant to say is addressed in the following excerpt by author R. M. Hare.
The dialogues may be divided into early, middle, and later periods of composition. The earliest represent Plato’s attempt to communicate the philosophy and dialectical style of Socrates. Several of these dialogues take the same form. Socrates, encountering someone who claims to know much, professes to be ignorant and seeks assistance from the one who knows. As Socrates begins to raise questions, however, it becomes clear that the one reputed to be wise really does not know what he claims to know, and Socrates emerges as the wiser one because he at least knows that he does not know. Such knowledge, of course, is the beginning of wisdom. Included in this group of dialogues are Charmides (an attempt to define temperance), Lysis (a discussion of friendship), Laches (a pursuit of the meaning of courage), Protagoras (a defence of the thesis that virtue is knowledge and can be taught), Euthyphro (a consideration of the nature of piety), Crito (Socrates’ defence of obedience to the laws of the state), and the Apology (Socrates’ defence of himself at his trial against the charges of atheism and corrupting Athenian youth).
The dialogues of the middle and later periods of Plato’s life reflect his own philosophical development. The ideas in these works are attributed by most scholars to Plato himself, although Socrates continues to be the main character in many of the dialogues. Two dialogues are considered to belong to a transitional time between Plato’s early and middle periods. They are Gorgias (a consideration of several ethical questions) and Meno (a discussion of the nature of knowledge). The writings of the middle period include Phaedo (the death scene of Socrates, in which he discusses the theory of Forms, the nature of the soul, and the question of immortality), the Symposium (Plato’s outstanding dramatic achievement, which contains several speeches on beauty and love), and the Republic (Plato’s supreme philosophical achievement, a detailed discussion of the nature of justice).
The works of the later period include the Theaetetus (a denial that knowledge is to be identified with sense perception), Parmenides (a critical evaluation of the theory of Forms), Sophist (further consideration of the theory of Ideas, or Forms), Philebus (a discussion of the relationship between pleasure and the good), Timaeus (Plato’s views on natural science and cosmology), and the Laws (a more practical analysis of political and social issues).
At the heart of Plato’s philosophy is his theory of Forms, or Ideas. Ultimately, his view of knowledge, his ethical theory, his psychology, his concept of the state, and his perspective on art must be understood in terms of this theory.
Plato’s theory of Forms and his theory of knowledge are so interrelated that they must be discussed together. Influenced by Socrates, Plato was convinced that knowledge is attainable. He was also convinced of two essential characteristics of knowledge. First, knowledge must be certain and infallible. Second, knowledge must have as its object that which is genuinely real as contrasted with that which is an appearance only. Because that which is fully real must, for Plato, be fixed, permanent, and unchanging, he identified the real with the ideal realm of being as opposed to the physical world of becoming. One consequence of this view was Plato’s rejection of empiricism, the claim that knowledge is derived from sense experience. He thought that propositions derived from sense experience have, at most, a degree of probability. They are not certain. Furthermore, the objects of sense experience are changeable phenomena of the physical world. Hence, objects of sense experience are not proper objects of knowledge.
Plato’s own theory of knowledge is found in the Republic, particularly in his discussion of the image of the divided line and the myth of the cave. In the former, Plato distinguishes between two levels of awareness: opinion and knowledge. Claims or assertions about the physical or visible world, including both commonsense observations and the propositions of science, are opinions only. Some of these opinions are well founded and some are not; but none of them counts as genuine knowledge. The higher level of awareness is knowledge, because there reason, rather than sense experience, is involved. Reason, properly used, results in intellectual insights that are certain, and the objects of these rational insights are the abiding universals, the eternal Forms or substances that constitute the real world.
The myth of the cave describes individuals chained deep within the recesses of a cave. Bound so that vision is restricted, they cannot see one another. The only thing visible is the wall of the cave upon which appear shadows cast by models or statues of animals and objects that are passed before a brightly burning fire. Breaking free, one of the individuals escapes from the cave into the light of day. With the aid of the sun, that person sees for the first time the real world and returns to the cave with the message that the only things they have seen heretofore are shadows and appearances and that the real world awaits them if they are willing to struggle free of their bonds. The shadowy environment of the cave symbolizes for Plato the physical world of appearances. Escape into the sun-filled setting outside the cave symbolizes the transition to the real world, the world of full and perfect being, the world of Forms, which is the proper object of knowledge.
The theory of Forms may best be understood in terms of mathematical entities. A circle, for instance, is defined as a plane figure composed of a series of points, all of which are equidistant from a given point. No one has ever actually seen such a figure, however.
What people have actually seen are drawn figures that are more or less close approximations of the ideal circle. In fact, when mathematicians define a circle, the points referred to are not spatial points at all; they are logical points. They do not occupy space. Nevertheless, although the Form of a circle has never been seen - indeed, could never be seen - mathematicians and others do in fact know what a circle is. That they can define a circle is evidence that they know what it is. For Plato, therefore, the Form “circularity” exists, but not in the physical world of space and time. It exists as a changeless object in the world of Forms or Ideas, which can be known only by reason.
Forms have greater reality than objects in the physical world both because of their perfection and stability and because they are models, resemblance to which gives ordinary physical objects whatever reality they have. Circularity, squareness, and triangularity are excellent examples, then, of what Plato meant by Forms. An object existing in the physical world may be called a circle or a square or a triangle only to the extent that it resembles (“participates in” is Plato’s phrase) the Form “circularity” or “squareness” or “triangularity.”
Plato extended his theory beyond the realm of mathematics. Indeed, he was most interested in its application in the field of social ethics. The theory was his way of explaining how the same universal term can refer to so many particular things or events. The word justice, for example, can be applied to hundreds of particular acts because these acts have something in common, namely, their resemblance to, or participation in, the Form “justice.” An individual is human to the extent that he or she resembles or participates in the Form “humanness.” If “humanness” is defined in terms of being a rational animal, then an individual is human to the extent that he or she is rational. A particular act is courageous or cowardly to the extent that it participates in its Form. An object is beautiful to the extent that it participates in the Idea, or Form, of beauty. Everything in the world of space and time is what it is by virtue of its resemblance to, or participation in, its universal Form. The ability to define the universal term is evidence that one has grasped the Form to which that universal refers.
Plato conceived the Forms as arranged hierarchically; the supreme Form is the Form of the Good, which, like the sun in the myth of the cave, illuminates all the other Ideas. There is a sense in which the Form of the Good represents Plato’s movement in the direction of an ultimate principle of explanation. Ultimately, the theory of Forms is intended to explain how one comes to know and also how things have come to be as they are. In philosophical language, Plato’s theory of Forms is both an epistemological (theory of knowledge) and an ontological (theory of being) thesis.
The Republic, Plato’s major political work, is concerned with the question of justice and therefore with the questions “What is a just state?” and “Who is a just individual?”
The ideal state, according to Plato, is composed of three classes. The economic structure of the state is maintained by the merchant class. Security needs are met by the military class, and political leadership is provided by the philosopher-kings. A particular person’s class is determined by an educational process that begins at birth and proceeds until that person has reached the maximum level of education compatible with interest and ability. Those who complete the entire educational process become philosopher-kings. They are the ones whose minds have been so developed that they are able to grasp the Forms and, therefore, to make the wisest decisions. Indeed, Plato’s ideal educational system is primarily structured so as to produce philosopher-kings.
Plato associates the traditional Greek virtues with the class structure of the ideal state. Temperance is the unique virtue of the artisan class; courage is the virtue peculiar to the military class; and wisdom characterizes the rulers. Justice, the fourth virtue, characterizes society as a whole. The just state is one in which each class performs its own function well without infringing on the activities of the other classes.
Plato divides the human soul into three parts: the rational part, the will, and the appetites. The just person is the one in whom the rational element, supported by the will, controls the appetites. An obvious analogy exists here with the threefold class structure of the state, in which the enlightened philosopher-kings, supported by the soldiers, govern the rest of society.
Plato’s ethical theory rests on the assumption that virtue is knowledge and can be taught, which has to be understood in terms of his theory of Forms. As indicated previously, the ultimate Form for Plato is the Form of the Good, and knowledge of this Form is the source of guidance in moral decision making. Plato also argued that to know the good is to do the good. The corollary of this is that anyone who behaves immorally does so out of ignorance. This conclusion follows from Plato’s conviction that the moral person is the truly happy person, and because individuals always desire their own happiness, they always desire to do that which is moral.
Plato had an essentially antagonistic view of art and the artist, although he approved of certain religious and moralistic kinds of art. Again, his approach is related to his theory of Forms. A beautiful flower, for example, is a copy or imitation of the universal Forms “floweriness” and “beauty.” The physical flower is one step removed from reality, that is, the Forms. A picture of the flower is, therefore, two steps removed from reality. This also meant that the artist is two steps removed from knowledge, and, indeed, Plato’s frequent criticism of the artists is that they lack genuine knowledge of what they are doing. Artistic creation, Plato observed, seems to be rooted in a kind of inspired madness.
Plato’s influence throughout the history of Western philosophy has been monumental. When he died, Speusippus became head of the Academy. The school continued in existence until AD 529, when it was closed by the Byzantine emperor Justinian I, who objected to its pagan teachings. Plato’s impact on Jewish thought is apparent in the work of the 1st-century Alexandrian philosopher Philo Judaeus. Neoplatonism, founded by the 3rd-century philosopher Plotinus, was an important later development of Platonism. The theologians Clement of Alexandria, Origen, and Saint Augustine were early Christian exponents of a Platonic perspective. Platonic ideas have had a crucial role in the development of Christian theology and also in medieval Islamic thought.
During the Renaissance, the primary focus of Platonic influence was the Florentine Academy, founded in the 15th century near Florence. Under the leadership of Marsilio Ficino, members of the Academy studied Plato in the original Greek. In England, Platonism was revived in the 17th century by Ralph Cudworth and others who became known as the Cambridge Platonists. Plato’s influence extended into the 20th century. The British philosopher Alfred North Whitehead once paid tribute to him by describing the history of philosophy as simply “a series of footnotes to Plato.”
Socrates was a Greek philosopher who profoundly affected Western philosophy through his influence on Plato. Born in Athens, the son of Sophroniscus, a sculptor, and Phaenarete, a midwife, he received the regular elementary education in literature, music, and gymnastics. Later he familiarized himself with the rhetoric and dialectics of the Sophists, the speculations of the Ionian philosophers, and the general culture of Periclean Athens. Initially, Socrates followed the craft of his father; according to one tradition, he executed a statue group of the three Graces, which stood at the entrance to the Acropolis until the 2nd century AD. In the Peloponnesian War with Sparta he served as an infantryman with conspicuous bravery at the battles of Potidaea in 432-430 BC, Delium in 424 BC, and Amphipolis in 422 BC. Socrates believed in the superiority of argument over writing and therefore spent the greater part of his mature life in the marketplace and public places of Athens, engaging in dialogue and argument with anyone who would listen or who would submit to interrogation. Socrates was reportedly unattractive in appearance and short of stature but was also extremely hardy and self-controlled. He enjoyed life immensely and achieved social popularity because of his ready wit and a keen sense of humour that was completely devoid of satire or cynicism.
Socrates was a Greek philosopher and teacher who lived in Athens, Greece, in the 400s BC. He profoundly altered Western philosophical thought through his influence on his most famous pupil, Plato, who passed on Socrates’s teachings in his writings known as dialogues. Socrates taught that every person has full knowledge of ultimate truth contained within the soul and needs only to be spurred to conscious reflection in order to become aware of it. His criticism of injustice in Athenian society led to his prosecution and a death sentence for allegedly corrupting the youth of Athens.
Socrates was obedient to the laws of Athens, but he generally steered clear of politics, restrained by what he believed to be divine warning. He believed that he had received a call to pursue philosophy and could serve his country best by devoting himself to teaching, and by persuading the Athenians to engage in self-examination and in tending to their souls. He wrote no books and established no regular school of philosophy. All that is known with certainty about his personality and his way of thinking is derived from the works of two of his most distinguished pupils: Plato, who at times ascribed his own views to his master, and the historian Xenophon, a prosaic writer who probably failed to understand many of Socrates's doctrines. Plato portrayed Socrates as hiding behind an ironical profession of ignorance, known as Socratic irony, and possessing a mental acuity and resourcefulness that enabled him to penetrate arguments with great facility.
Socrates's contribution to philosophy was essentially ethical in character. Belief in a purely objective understanding of such concepts as justice, love, and virtue, and the self-knowledge that he inculcated, were the basis of his teachings. He believed that all vice is the result of ignorance, and that no person is willingly bad; correspondingly, virtue is knowledge, and those who know the right will act rightly. His logic placed particular emphasis on rational argument and the quest for general definitions, as evidenced in the writings of his younger contemporary and pupil, Plato, and of Plato's pupil, Aristotle. Through the writings of these philosophers, Socrates profoundly affected the entire subsequent course of Western speculative thought.
Another thinker befriended and influenced by Socrates was Antisthenes, the founder of the Cynic school of philosophy. Socrates was also the teacher of Aristippus, who founded the Cyrenaic philosophy of experience and pleasure, from which developed the more lofty philosophy of Epicurus. To such Stoics as the Greek philosopher Epictetus, the Roman philosopher Seneca the Younger, and the Roman emperor Marcus Aurelius, Socrates appeared as the very embodiment and guide of the higher life.
Although a patriot and a man of deep religious conviction, Socrates was nonetheless regarded with suspicion by many of his contemporaries, who disliked his attitude toward the Athenian state and the established religion. He was charged in 399 BC with neglecting the gods of the state and introducing new divinities, a reference to the daemonion, or mystical inner voice, to which Socrates often referred. He was also charged with corrupting the morals of the young, leading them away from the principles of democracy; and he was wrongly identified with the Sophists, possibly because he had been ridiculed by the comic poet Aristophanes in his play The Clouds as the master of a “thinking-shop” where young men were taught to make the worse reason appear the better reason.
Plato's Apology gives the substance of the defence made by Socrates at his trial; it was a bold vindication of his whole life. He was condemned to die, although the vote was carried by only a small majority. When, according to Athenian legal practice, Socrates made an ironic counterproposition to the court's death sentence, proposing only to pay a small fine because of his value to the state as a man with a philosophic mission, the jury was so angered by this offer that it voted by an increased majority for the death penalty.
Socrates' friends planned his escape from prison, but he preferred to comply with the law and die for his cause. His last day was spent with his friends and admirers, and in the evening he calmly fulfilled his sentence by drinking a cup of hemlock according to a customary procedure of execution. Plato described the trial and death of Socrates in the Apology, the Crito, and the Phaedo.
ARISTOTLE (384-322 BC)
Aristotle was a Greek philosopher and scientist who shares with Plato and Socrates the distinction of being the most famous of ancient philosophers. A student of Plato, Aristotle shared his teacher’s reverence for human knowledge but revised many of Plato’s ideas by emphasizing methods rooted in observation and experience, surveying and systematizing nearly all the extant branches of knowledge. Known to medieval intellectuals as simply “the Philosopher,” Aristotle is possibly the greatest thinker in Western history and, historically, perhaps the single greatest influence on Western intellectual development.
Aristotle was born at Stagira, in Macedonia, the son of a physician to the royal court. At the age of 17, he went to Athens to study at Plato's Academy. He remained there for about 20 years, as a student and then as a teacher.
When Plato died in 347 BC, Aristotle moved to Assos, a city in Asia Minor, where a friend of his, Hermias, was ruler. There he counselled Hermias and married his niece and adopted daughter, Pythias. After Hermias was captured and executed by the Persians in 345 BC, Aristotle went to Pella, the Macedonian capital, where he became the tutor of the king's young son Alexander, later known as Alexander the Great. In 335, when Alexander became king, Aristotle returned to Athens and established his own school, the Lyceum. Because much of the discussion in his school took place while teachers and students were walking about the Lyceum grounds, Aristotle's school came to be known as the Peripatetic (“walking” or “strolling”) school. Upon the death of Alexander in 323 BC, strong anti-Macedonian feeling developed in Athens, and Aristotle retired to a family estate in Euboea (Évvoia). He died there the following year.
Aristotle's Poetics may be one of the most influential documents ever produced on the art of the drama. The text is probably a transcription of lectures on the art of dramatic literature given to a group of students. In this excerpt, Aristotle defines the nature of tragic drama, discusses the six essential elements of drama, states his opinion on the best type of tragic plot, and suggests the most effective means to arouse essential emotions such as pity and fear. For centuries, scholars have regarded the Poetics as the definitive statement on playwriting, although the precise meaning of Aristotle's ideas is debated to this day.
Aristotle, like Plato, made regular use of the dialogue in his earliest years at the Academy, but lacking Plato's imaginative gifts, he probably never found the form congenial. Apart from a few fragments in the works of later writers, his dialogues have been wholly lost. Aristotle also wrote some short technical notes, such as a dictionary of philosophic terms and a summary of the doctrines of Pythagoras. Of these, only a few brief excerpts have survived. Still extant, however, are Aristotle's lecture notes for carefully outlined courses treating almost every branch of knowledge and art. The texts on which Aristotle's reputation rests are largely based on these lecture notes, which were collected and arranged by later editors.
Among the texts are treatises on logic, called Organon (“instrument”), because they provide the means by which positive knowledge is to be attained. His works on natural science include Physics, which gives a vast amount of information on astronomy, meteorology, plants, and animals. His writings on the nature, scope, and properties of being, which Aristotle called First Philosophy (Prōtē philosophia), were given the title Metaphysics in the first published edition of his works (60? BC), because in that edition they followed Physics. His treatment of the Prime Mover, or first cause, as pure intellect, perfect in unity, immutable, and, as he said, “the thought of thought,” is given in the Metaphysics. To his son Nicomachus he dedicated his work on ethics, called the Nicomachean Ethics. Other essential works include his Rhetoric, his Poetics (which survives in incomplete form), and his Politics (also incomplete).
Perhaps because of the influence of his father's medical profession, Aristotle's philosophy laid its principal stress on biology, in contrast to Plato's emphasis on mathematics. Aristotle regarded the world as made up of individuals (substances) occurring in fixed natural kinds (species). Each individual has its built-in specific pattern of development and grows toward proper self-realization as a specimen of its type. Growth, purpose, and direction are thus built into nature. Although science studies general kinds, according to Aristotle, these kinds find their existence in particular individuals. Science and philosophy must therefore balance, not simply choose between, the claims of empiricism (observation and sense experience) and formalism (rational deduction).
One of the most distinctive of Aristotle's philosophic contributions was a new notion of causality. Each thing or event, he thought, has more than one “reason” that helps to explain what, why, and where it is. Earlier Greek thinkers had tended to assume that only one sort of cause can be really explanatory; Aristotle proposed four. (The word Aristotle uses, aition, “a responsible, explanatory factor,” is not synonymous with the word cause in its modern sense.)
These four causes are the material cause, the matter out of which a thing is made; the efficient cause, the source of motion, generation, or change; the formal cause, which is the species, kind, or type; and the final cause, the goal, or full development, of an individual, or the intended function of a construction or invention. Thus, a young lion is made up of tissues and organs, its material cause; the efficient cause is its parents, who generated it; the formal cause is its species, lion; and its final cause is its built-in drive toward becoming a mature specimen. In different contexts, while the causes are the same four, they apply analogically. Thus, the material cause of a statue is the marble from which it was carved; the efficient cause is the sculptor; the formal cause is the shape the sculptor realized - Hermes, perhaps, or Aphrodite; and the final cause is its function, to be a work of fine art.
In each context, Aristotle insists that something can be better understood when its causes can be stated in specific terms rather than in general terms. Thus, it is more informative to know that a sculptor made the statue than to know that an artist made it; and even more informative to know that Polycleitus chiseled it rather than simply that a sculptor did so.
In astronomy, Aristotle proposed a finite, spherical universe, with the earth at its centre. The central region is made up of four elements: earth, air, fire, and water. In Aristotle's physics, each of these four elements has a proper place, determined by its relative heaviness, its “specific gravity.” Each moves naturally in a straight line—earth down, fire up—toward its proper place, where it will be at rest. Thus, terrestrial motion is always linear and always comes to a halt. The heavens, however, move naturally and endlessly in a complex circular motion. The heavens, therefore, must be made of a fifth and different element, which he called aither. A superior element, aither is incapable of any change other than change of place in a circular movement. Aristotle's theory that linear motion always takes place through a resisting medium is in fact valid for all observable terrestrial motions. He also held that heavier bodies of a given material fall faster than lighter ones when their shapes are the same, a mistaken view that was accepted as fact until the Italian physicist and astronomer Galileo conducted his experiment with weights dropped from the Leaning Tower of Pisa.
In zoology, Aristotle proposed a fixed set of natural kinds (“species”), each reproducing true to type. An exception occurs, Aristotle thought, when some “very low” worms and flies come from rotting fruit or manure by “spontaneous generation.” The typical life cycles are epicycles: The same pattern repeats, but through a linear succession of individuals. These processes are therefore intermediate between the changeless circles of the heavens and the simple linear movements of the terrestrial elements. The species form a scale from simple (worms and flies at the bottom) to complex (human beings at the top), but evolution is not possible.
For Aristotle, psychology was a study of the soul. Insisting that form (the essence, or unchanging characteristic element in an object) and matter (the common undifferentiated substratum of things) always exist together, Aristotle defined a soul as a “kind of functioning of a body organized so that it can support vital functions.” In considering the soul as essentially associated with the body, he challenged the Pythagorean doctrine that the soul is a spiritual entity imprisoned in the body. Aristotle's doctrine is a synthesis of the earlier notion that the soul does not exist apart from the body and of the Platonic notion of a soul as a separate, nonphysical entity. Whether any part of the human soul is immortal, and, if so, whether its immortality is personal, are not entirely clear in his treatise On the Soul.
Through the functioning of the soul, the moral and intellectual aspects of humanity are developed. Aristotle argued that human insight in its highest form (nous poiētikos, “active mind”) is not reducible to a mechanical physical process. Such insight, however, presupposes an individual “passive mind” that does not appear to transcend physical nature. Aristotle clearly stated the relationship between human insight and the senses in what has become a slogan of empiricism—the view that knowledge is grounded in sense experience. “There is nothing in the intellect,” he wrote, “that was not first in the senses.”
The Nicomachean Ethics, by ancient Greek philosopher Aristotle, examines the nature of eudaimonia, or happiness. The philosopher identifies happiness with goodness, but the problem of defining goodness then arises. This excerpt from Book I of the Ethics, consisting of chapters 5, 6, and 7, discusses the nature of happiness and asserts that human happiness derives from “self-sufficiency,” by which Aristotle means the application of reason to fulfill one’s innate abilities.
It seemed to Aristotle that the individual's freedom of choice made an absolutely accurate analysis of human affairs impossible. “Practical science,” then, such as politics or ethics, was called science only by courtesy and analogy. The inherent limitations on practical science are made clear in Aristotle's concepts of human nature and self-realization. Human nature certainly involves, for everyone, a capacity for forming habits; but the habits that a particular individual forms depend on that individual's culture and repeated personal choices. All human beings want “happiness,” an active, engaged realization of their innate capacities, but this goal can be achieved in a multiplicity of ways.
Aristotle's Nicomachean Ethics is an analysis of character and intelligence as they relate to happiness. Aristotle distinguished two kinds of “virtue,” or human excellence: moral and intellectual. Moral virtue is an expression of character, formed by habits reflecting repeated choices. A moral virtue is always a mean between two less desirable extremes. Courage, for example, is a mean between cowardice and thoughtless rashness; generosity, between extravagance and parsimony. Intellectual virtues are not subject to this doctrine of the mean. Aristotle argued for an elitist ethics: Full excellence can be realized only by the mature male adult of the upper class, not by women, or children, or barbarians (non-Greeks), or salaried “mechanics” (manual workers) for whom, indeed, Aristotle did not want to allow voting rights.
In politics, many forms of human association can obviously be found; which one is suitable depends on circumstances, such as the natural resources, cultural traditions, industry, and literacy of each community. Aristotle did not regard politics as a study of ideal states in some abstract form, but rather as an examination of the way in which ideals, laws, customs, and property interrelate in actual cases. He thus approved the contemporary institution of slavery but tempered his acceptance by insisting that masters should not abuse their authority, since the interests of master and slave are the same. The Lyceum library contained a collection of 158 constitutions of the Greek and other states. Aristotle himself wrote the Constitution of Athens as part of the collection, and after being lost, this description was rediscovered in a papyrus copy in 1890. Historians have found the work of great value in reconstructing many phases of the history of Athens.
In logic, Aristotle developed rules for chains of reasoning that would, if followed, never lead from true premises to false conclusions (validity rules). In reasoning, the basic links are syllogisms: pairs of propositions that, taken together, give a new conclusion. For example, “All humans are mortal” and “All Greeks are humans” yield the valid conclusion “All Greeks are mortal.” Science results from constructing more complex systems of reasoning. In his logic, Aristotle distinguished between dialectic and analytic. Dialectic, he held, only tests opinions for their logical consistency; analytic works deductively from principles resting on experience and precise observation. This is clearly an intended break with Plato's Academy, where dialectic was supposed to be the only proper method for science and philosophy alike.
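The set-theoretic reading of the syllogism above can be sketched in a few lines of code; the individuals named below are illustrative inventions, not drawn from Aristotle:

```python
# Aristotle's syllogism "Barbara": all A are B; all B are C; so all A are C.
# Here each categorical term is modelled as a Python set of individuals.

def all_are(xs, ys):
    """'All X are Y' holds when every member of X is also a member of Y."""
    return xs <= ys  # subset test

greeks = {"Socrates", "Plato", "Aristotle"}
humans = greeks | {"Cicero"}           # every Greek, plus a non-Greek human
mortals = humans | {"Bucephalus"}      # every human, plus a mortal non-human

premise_1 = all_are(humans, mortals)   # "All humans are mortal"
premise_2 = all_are(greeks, humans)    # "All Greeks are humans"
conclusion = all_are(greeks, mortals)  # "All Greeks are mortal"

# Validity: because subset inclusion is transitive, the conclusion can
# never be false while both premises are true.
assert premise_1 and premise_2 and conclusion
```

On this reading, the validity rule is simply the transitivity of set inclusion: no choice of sets can make both premises true and the conclusion false.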
In his metaphysics, Aristotle argued for the existence of a divine being, described as the Prime Mover, who is responsible for the unity and purposefulness of nature. God is perfect and therefore the aspiration of all things in the world, because all things desire to share perfection. Other movers exist as well - the intelligent movers of the planets and stars (Aristotle suggested that the number of these is “either 55 or 47”). The Prime Mover, or God, described by Aristotle is not very suitable for religious purposes, as many later philosophers and theologians have observed. Aristotle limited his “theology,” however, to what he believed science requires and can establish.
Aristotle's works were lost in the West after the decline of Rome. During the 9th century AD, Arab scholars introduced Aristotle, in Arabic translation, to the Islamic world. The 12th-century Spanish-Arab philosopher Averroës is the best known of the Arabic scholars who studied and commented on Aristotle. In the 13th century, the Latin West renewed its interest in Aristotle's work, and Saint Thomas Aquinas found in it a philosophical foundation for Christian thought. Church officials at first questioned Aquinas's use of Aristotle; in the early stages of its rediscovery, Aristotle's philosophy was regarded with some suspicion, largely because his teachings were thought to lead to a materialistic view of the world. Nevertheless, the work of Aquinas was accepted, and the later philosophy of scholasticism continued the philosophical tradition based on Aquinas's adaptation of Aristotelian thought.
The influence of Aristotle's philosophy has been pervasive; it has even helped to shape modern language and common sense. His doctrine of the Prime Mover as final cause played an important role in theology. Until the 20th century, logic meant Aristotle's logic. Until the Renaissance, and even later, astronomers and poets alike admired his concept of the universe. Zoology rested on Aristotle's work until British scientist Charles Darwin modified the doctrine of the changelessness of species in the 19th century. In the 20th century a new appreciation has developed of Aristotle's method and its relevance to education, literary criticism, the analysis of human action, and political analysis.
Not only the discipline of zoology, but also the world of learning as a whole, seems to amply justify Darwin's remark that the intellectual heroes of his own time “were mere schoolboys compared with old Aristotle.”
The Sophists took their name from a Greek word meaning “expert, master craftsman, man of wisdom.” The name was originally applied by the ancient Greeks to learned men, such as the Seven Wise Men of Greece; in the 5th century BC it came to denote itinerant teachers who provided instruction in several higher branches of learning for a fee.
Individuals sharing a broad philosophic outlook rather than a school, the Sophists popularized the ideas of various early philosophers; but based on their understanding of this prior philosophic thought, most of them concluded that truth and morality were essentially matters of opinion. Thus, in their own teaching, they tended to emphasize forms of persuasive expression, such as the art of rhetoric, which provided pupils with skills useful for achieving success in life, particularly public life.
The Sophists were popular for a time, especially in Athens; however, their sceptical view on absolute truth and morality eventually provoked sharp criticism. Socrates, Plato, and Aristotle challenged the philosophic basis of the Sophists' teaching, and Plato and Aristotle further condemned them for taking money. Later, they were accused by the state of lacking morality. As a result, the word sophist acquired a derogatory meaning, as in the modern term sophistry, which can be defined as subtle and deceptive or false argumentation or reasoning.
The Sophists were of minor importance in the development of Western philosophic thought. They were, however, the first to systematize education. Leading 5th-century Sophists included Protagoras, Gorgias, Hippias of Elis, and Prodicus of Ceos.
The first Greek philosophers were interested in theoretical science. They lived in the Ionia region of western Asia Minor and learned from earlier Middle Eastern thinkers, especially those from Babylonia. The Greek philosophers Thales and Anaximander, who lived in the 6th century BC, reached the revolutionary conclusion that the physical world was governed by laws of nature, not by the whims of the gods. Pythagoras, who also lived in the 6th century BC, taught that numbers explained the world and started the study of mathematics in Greece. These philosophers called the universe cosmos, meaning “a beautiful thing,” because it had order based on scientific rules, not mythology. They therefore put their faith in logic, and their insistence that people produce evidence for their beliefs opened the way to modern science and philosophy.
Philosophers called Sophists upset many people in the 5th century BC by teaching relativism, the belief that there is no universal truth or right and wrong. The most famous Sophist was Protagoras, who said, “Man is the measure of all things.” Socrates (469-399 BC) insisted that the Sophists were wrong and that well-informed people would never do wrong on purpose. His pupil Plato (428-347 BC) became Greece's most famous philosopher. Plato's complicated works argued that universal truths did exist and that the human soul made the body unimportant. Plato founded an academy in Athens that remained in business until AD 529. His pupil Aristotle (384-322 BC) turned away from theoretical philosophy to teach about practical ethics, self-control, logic, and science. Alexander the Great (whom Aristotle once tutored) sent him information on plants and animals encountered on the march to India. Aristotle's works became so influential that they determined the course of Western scientific thought until modern times.
Hellenistic philosophers concentrated on ethics, helping people achieve tranquillity in a period of change when things seemed out of their control. In the 3rd century BC, Epicurus taught that people should not be afraid because everything, including our bodies, consists of microscopic atoms that dissolve painlessly at death. Zeno of Citium, who also lived in the 3rd century BC, founded Stoicism, which taught that life was ruled by fate but that people should still live morally to be in harmony with nature.
The Golden Age of Greek science came in the Hellenistic period, with the greatest advances in mathematics. The geometry theorems published by Euclid about 300 BC still endure. Archimedes (287-212 BC) calculated the value of pi (the ratio of the circumference of a circle to its diameter) and invented fluid mechanics. Aristarchus, early in the 3rd century BC, argued that the earth revolved around the sun, while Eratosthenes accurately calculated the circumference of the earth. Also in the 3rd century BC, Ctesibius invented machines operated by air and water pressure; Hero later built a rotating sphere powered by steam. These inventions did not lead to practical uses because the technology did not yet exist to produce the pipes, fittings, and screws needed to build powerful machines. Military technology vaulted ahead with the invention of huge catapults and wheeled towers to batter down city walls. Finally, medical scientists made many discoveries, such as the significance of the pulse and the nervous system.
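Archimedes obtained his bounds on pi by trapping the circle between inscribed and circumscribed regular polygons and repeatedly doubling their sides. A minimal sketch of that procedure in modern notation (the stopping point at 96 sides follows his account; the floating-point arithmetic is, of course, an anachronism):

```python
import math

# Archimedes' method for a circle of diameter 1: the circumference (= pi)
# lies between the perimeters of inscribed and circumscribed polygons.
inscribed = 3.0                    # perimeter of the inscribed hexagon
circumscribed = 2 * math.sqrt(3)   # perimeter of the circumscribed hexagon

sides = 6
while sides < 96:                  # Archimedes stopped at the 96-sided polygon
    # Doubling the sides: the new circumscribed perimeter is the harmonic
    # mean of the old pair; the new inscribed perimeter is the geometric
    # mean of the new circumscribed and old inscribed perimeters.
    circumscribed = 2 * circumscribed * inscribed / (circumscribed + inscribed)
    inscribed = math.sqrt(circumscribed * inscribed)
    sides *= 2

# The 96-gon yields Archimedes' famous bounds, roughly 223/71 < pi < 22/7.
assert inscribed < math.pi < circumscribed
```

Each doubling tightens the squeeze; four doublings take the hexagon to the 96-gon and pin pi down to within about two parts in a thousand.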
THE PROBLEM OF CONSCIOUSNESS
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In this article from a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explain how experiments on vision might deepen our understanding of consciousness.
The overwhelming question in neurobiology today is the relation between the mind and the brain. Everyone agrees that what we know as mind is closely related to certain aspects of the behaviour of the brain, not to the heart, as Aristotle thought. Its most mysterious aspect is consciousness or awareness, which can take many forms, from the experience of pain to self-consciousness. In the past the mind (or soul) was often regarded, as it was by Descartes, as something immaterial, separate from the brain but interacting with it in some way. A few neuroscientists, such as Sir John Eccles, still assert that the soul is distinct from the body. But most neuroscientists now believe that all aspects of mind, including its most puzzling attribute, consciousness or awareness, are likely to be explainable in a more materialistic way as the behaviour of large sets of interacting neurons. As William James, the father of American psychology, said a century ago, consciousness is not a thing but a process.
Exactly what the process is, however, has yet to be discovered. For many years after James penned The Principles of Psychology, consciousness was a taboo concept in American psychology because of the dominance of the behaviorist movement. With the advent of cognitive science in the mid-1950s, it became possible once more for psychologists to consider mental processes as opposed to merely observing behaviour. In spite of these changes, until recently most cognitive scientists ignored consciousness, as did almost all neuroscientists. The problem was felt to be either purely "philosophical" or too elusive to study experimentally. It would not have been easy for a neuroscientist to get a grant just to study consciousness.
In our opinion, such timidity is ridiculous, so a few years ago we began to think about how best to attack the problem scientifically. How can we explain mental events as being caused by the firing of large sets of neurons? Although there are those who believe such an approach is hopeless, we feel it is not productive to worry too much over aspects of the problem that cannot be solved scientifically or, more precisely, cannot be solved solely by using existing scientific ideas. Radically new concepts may indeed be needed—recall the modifications of scientific thinking forced on us by quantum mechanics. The only sensible approach is to press the experimental attack until we are confronted with dilemmas that call for new ways of thinking.
There are many possible approaches to the problem of consciousness. Some psychologists feel that any satisfactory theory should try to explain as many aspects of consciousness as possible, including emotion, imagination, dreams, mystical experiences and so on. Although such an all-embracing theory will be necessary in the long run, we thought it wiser to begin with the particular aspect of consciousness that is likely to yield most easily. What this aspect may be is a matter of personal judgment. We selected the mammalian visual system because humans are very visual animals and because so much experimental and theoretical work has already been done on it.
It is not easy to grasp exactly what we need to explain, and it will take many careful experiments before visual consciousness can be described scientifically. We did not attempt to define consciousness itself because of the dangers of premature definition. (If this seems like a copout, try defining the word "gene" - you will not find it easy.) Yet the experimental evidence that already exists provides enough of a glimpse of the nature of visual consciousness to guide research. In this article, we will attempt to show how this evidence opens the way to attack this profound and intriguing problem.
Visual theorists agree that the problem of visual consciousness is ill posed. The mathematical term "ill posed" means that additional constraints are needed to solve the problem. Although the main function of the visual system is to perceive objects and events in the world around us, the information available to our eyes is not sufficient by itself to provide the brain with its unique interpretation of the visual world. The brain must use past experience (either its own or that of our distant ancestors, which is embedded in our genes) to help interpret the information coming into our eyes. An example would be the derivation of the three-dimensional representation of the world from the two-dimensional signals falling onto the retinas of our two eyes or even onto one of them.
Visual theorists also would agree that seeing is a constructive process, one in which the brain has to carry out complex activities (sometimes called computations) in order to decide which interpretation to adopt of the ambiguous visual input. "Computation" implies that the brain acts to form a symbolic representation of the visual world, with a mapping (in the mathematical sense) of certain aspects of that world onto elements in the brain.
Ray Jackendoff of Brandeis University postulates, as do most cognitive scientists, that the computations carried out by the brain are largely unconscious and that what we become aware of is the result of these computations. But while the customary view is that this awareness occurs at the highest levels of the computational system, Jackendoff has proposed an intermediate-level theory of consciousness.
What we see, Jackendoff suggests, relates to a representation of surfaces that are directly visible to us, together with their outline, orientation, colour, texture and movement. (This idea has similarities to what the late David C. Marr of the Massachusetts Institute of Technology called a "2 1/2-dimensional sketch." It is more than a two-dimensional sketch because it conveys the orientation of the visible surfaces. It is less than three-dimensional because depth information is not explicitly represented.) In the next stage this sketch is processed by the brain to produce a three-dimensional representation. Jackendoff argues that we are not visually aware of this three-dimensional representation.
An example may make this process clearer. If you look at a person whose back is turned to you, you can see the back of the head but not the face. Nevertheless, your brain infers that the person has a face. We can deduce as much because if that person turned around and had no face, you would be very surprised.
The viewer-centred representation that corresponds to the visible back of the head is what you are vividly aware of. What your brain infers about the front would come from some kind of three-dimensional representation. This does not mean that information flows only from the surface representation to the three-dimensional one; it almost certainly flows in both directions. When you imagine the front of the face, what you are aware of is a surface representation generated by information from the three-dimensional model.
It is important to distinguish between an explicit and an implicit representation. An explicit representation is something that is symbolized without further processing. An implicit representation contains the same information but requires further processing to make it explicit. The pattern of coloured dots on a television screen, for example, contains an implicit representation of objects (say, a person's face), but only the dots and their locations are explicit. When you see a face on the screen, there must be neurons in your brain whose firing, in some sense, symbolizes that face.
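The dots-versus-face distinction can be made concrete with a toy sketch; the 4x4 "screen" and the crude pattern test below are invented for illustration and stand in for the brain's far more elaborate processing:

```python
# Toy contrast between implicit and explicit representation.  The grid
# makes only dots and their locations explicit; the pattern they form is
# implicit until further processing symbolizes it.
grid = [
    "0110",   # two "eyes"
    "0000",
    "1001",   # corners of a "mouth"
    "0110",   # curve of the "mouth"
]

def looks_like_smiley(pixels):
    """Further processing that turns the implicit pattern into a symbol."""
    eyes = pixels[0] == "0110" and pixels[1] == "0000"
    mouth = pixels[2] == "1001" and pixels[3] == "0110"
    return eyes and mouth

# The explicit representation: a single symbol, needing no further work.
label = "smiley face" if looks_like_smiley(grid) else "unknown"
```

The same information is present in `grid` and in `label`, but only the latter is explicit in the article's sense: a symbol that can be used directly, without further processing.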
We call this pattern of firing neurons an active representation. A latent representation of a face must also be stored in the brain, probably as a special pattern of synaptic connections between neurons. For example, you probably have a representation of the Statue of Liberty in your brain, a representation that is usually inactive. If you do think about the Statue, the representation becomes active, with the relevant neurons firing away.
An object, incidentally, may be represented in more than one way—as a visual image, as a set of words and their related sounds, or even as a touch or a smell. These different representations are likely to interact with one another. The representation is likely to be distributed over many neurons, both locally and more globally. Such a representation may not be as simple and straightforward as uncritical introspection might indicate. There is suggestive evidence, partly from studying how neurons fire in various parts of a monkey's brain and partly from examining the effects of certain types of brain damage in humans, that different aspects of a face—and of the implications of a face—may be represented in different parts of the brain.
First, there is the representation of a face as a face: two eyes, a nose, a mouth and so on. The neurons involved are usually not too fussy about the exact size or position of this face in the visual field, nor are they very sensitive to small changes in its orientation. In monkeys, there are neurons that respond best when the face is turning in a particular direction, while others seem to be more concerned with the direction in which the eyes are gazing.
Then there are representations of the parts of a face, as separate from those for the face as a whole. Further, the implications of seeing a face, such as that person's sex, the facial expression, the familiarity or unfamiliarity of the face, and in particular whose face it is, may each be correlated with neurons firing in other places.
What we are aware of at any moment, in one sense or another, is not a simple matter. We have suggested that there may be a very transient form of fleeting awareness that represents only rather simple features and does not require an attentional mechanism. From this brief awareness the brain constructs a viewer-centred representation—what we see vividly and clearly—that does require attention. This in turn probably leads to three-dimensional object representations and thence to more cognitive ones.
Representations corresponding to vivid consciousness are likely to have special properties. William James thought that consciousness involved both attention and short-term memory. Most psychologists today would agree with this view. Jackendoff writes that consciousness is "enriched" by attention, implying that whereas attention may not be essential for certain limited types of consciousness, it is necessary for full consciousness. Yet it is not clear exactly which forms of memory are involved. Is long-term memory needed? Some forms of acquired knowledge are so embedded in the machinery of neural processing that they are almost certainly used in becoming aware of something. On the other hand, there is evidence from studies of brain-damaged patients that the ability to lay down new long-term episodic memories is not essential for consciousness to be experienced.
It is difficult to imagine that anyone could be conscious if he or she had no memory whatsoever of what had just happened, even an extremely short one. Visual psychologists talk of iconic memory, which lasts for a fraction of a second, and working memory (such as that used to remember a new telephone number) that lasts for only a few seconds unless it is rehearsed. It is not clear whether both of these are essential for consciousness. In any case, the division of short-term memory into these two categories may be too crude.
If these complex processes of visual awareness are localized in parts of the brain, which processes are likely to be where? Many regions of the brain may be involved, but it is almost certain that the cerebral neocortex plays a dominant role. Visual information from the retina reaches the neocortex mainly by way of a part of the thalamus (the lateral geniculate nucleus); another significant visual pathway from the retina is to the superior colliculus, at the top of the brain stem.
The cortex in humans consists of two intricately folded sheets of nerve tissue, one on each side of the head. These sheets are connected by a large tract of about half a billion axons called the corpus callosum. It is well known that if the corpus callosum is cut, as is done for certain cases of intractable epilepsy, one side of the brain is not aware of what the other side is seeing. In particular, the left side of the brain (in a right-handed person) appears not to be aware of visual information received exclusively by the right side. This shows that none of the information required for visual awareness can reach the other side of the brain by travelling down to the brain stem and, from there, back up. In a normal person, such information can get to the other side only by using the axons in the corpus callosum.
A different part of the brain—the hippocampal system—is involved in one-shot, or episodic, memories that, over weeks and months, it passes on to the neocortex. This system is so placed that it receives inputs from, and projects to, many parts of the brain. Thus, one might suspect that the hippocampal system is the essential seat of consciousness. This is not the case: evidence from studies of patients with damaged brains shows that this system is not essential for visual awareness, although naturally a patient lacking one is severely handicapped in everyday life because he cannot remember anything that took place more than a minute or so in the past.
In broad terms, the neocortex of alert animals probably acts in two ways. By building on crude and somewhat redundant wiring, produced by our genes and by embryonic processes, the neocortex draws on visual and other experience to slowly "rewire" itself to create categories (or "features") it can respond to. A new category is not fully created in the neocortex after exposure to only one example of it, although some small modifications of the neural connections may be made.
The second function of the neocortex (at least of the visual part of it) is to respond extremely rapidly to incoming signals. To do so, it uses the categories it has learned and tries to find the combinations of active neurons that, on the basis of its past experience, are most likely to represent the relevant objects and events in the visual world at that moment. The formation of such coalitions of active neurons may also be influenced by biases coming from other parts of the brain: for example, signals telling it what best to attend to or high-level expectations about the nature of the stimulus.
Consciousness, as James noted, is always changing. These rapidly formed coalitions occur at different levels and interact to form even broader coalitions. They are transient, lasting usually for only a fraction of a second. Because coalitions in the visual system are the basis of what we see, evolution has seen to it that they form as fast as possible; otherwise, no animal could survive. The brain is handicapped in forming neuronal coalitions rapidly because, by computer standards, neurons act very slowly. The brain compensates for this relative slowness partly by using very many neurons, simultaneously and in parallel, and partly by arranging the system in a roughly hierarchical manner.
If visual awareness at any moment corresponds to sets of neurons firing, then the obvious question is: Where are these neurons located in the brain, and in what way are they firing? Visual awareness is highly unlikely to occupy all the neurons in the neocortex that are firing above their background rate at a particular moment. We would expect that, theoretically, at least some of these neurons would be involved in doing computations, trying to arrive at the best coalitions, whereas others would express the results of these computations, in other words, what we see.
Fortunately, some experimental evidence can be found to back up this theoretical conclusion. A phenomenon called binocular rivalry may help identify the neurons whose firing symbolizes awareness. This phenomenon can be seen in dramatic form in an exhibit prepared by Sally Duensing and Bob Miller at the Exploratorium in San Francisco.
Binocular rivalry occurs when each eye has a different visual input relating to the same part of the visual field. The early visual system on the left side of the brain receives an input from both eyes but sees only the part of the visual field to the right of the fixation point. The converse is true for the right side. If these two conflicting inputs are rivalrous, one sees not the two inputs superimposed but first one input, then the other, and so on in alternation.
In the exhibit, called "The Cheshire Cat," viewers put their heads in a fixed place and are told to keep the gaze fixed. By means of a suitably placed mirror, one of the eyes can look at another person's face, directly in front, while the other eye sees a blank white screen to the side. If the viewer waves a hand in front of this plain screen at the same location in his or her visual field occupied by the face, the face is wiped out. The movement of the hand, being visually very salient, has captured the brain's attention. Without attention the face cannot be seen. If the viewer moves the eyes, the face reappears.
In some cases, only part of the face disappears. Sometimes, for example, one eye, or both eyes, will remain. If the viewer looks at the smile on the person's face, the face may disappear, leaving only the smile. For this reason, the effect has been called the Cheshire Cat effect, after the cat in Lewis Carroll's Alice's Adventures in Wonderland.
Although it is very difficult to record activity in individual neurons in a human brain, such studies can be done in monkeys. A simple example of binocular rivalry has been studied in a monkey by Nikos K. Logothetis and Jeffrey D. Schall, both then at M.I.T. They trained a macaque to keep its eyes still and to signal whether it sees upward or downward movement of a horizontal grating. To produce rivalry, upward movement is projected into one of the monkey's eyes and downward movement into the other, so that the two images overlap in the visual field. The monkey signals that it sees up and down movements alternately, just as humans would. Even though the motion stimulus coming into the monkey's eyes is always the same, the monkey's percept changes every second or so.
Cortical area MT (which some researchers prefer to label V5) is an area mainly concerned with movement. What do the neurons in MT do when the monkey's percept is sometimes up and sometimes down? (The researchers studied only the monkey's first response.) The simplified answer—the actual data are rather more messy—is that whereas the firing of some of the neurons correlates with the changes in the percept, for others the average firing rate is relatively unchanged and independent of which direction of movement the monkey is seeing at that moment. Thus, it is unlikely that the firing of all the neurons in the visual neocortex at one particular moment corresponds to the monkey's visual awareness. Exactly which neurons do correspond to awareness remains to be discovered.
We have postulated that when we clearly see something, there must be neurons actively firing that stand for what we see. This might be called the activity principle. Here, too, there is some experimental evidence. One example is the firing of neurons in a specific cortical visual area in response to illusory contours. Another and perhaps more striking case is the filling in of the blind spot. The blind spot in each eye is caused by the lack of photoreceptors in the area of the retina where the optic nerve leaves the retina and projects to the brain. Its location is about 15 degrees from the fovea (the visual centre of the eye). Yet if you close one eye, you do not see a hole in your visual field.
Philosopher Daniel C. Dennett of Tufts University is unusual among philosophers in that he is interested both in psychology and in the brain. This interest is much to be welcomed. In a recent book, Consciousness Explained, he has argued that it is wrong to talk about filling in. He concludes, correctly, that "an absence of information is not the same as information about an absence." From this general principle he argues that the brain does not fill in the blind spot but rather ignores it.
Dennett's argument by itself, however, does not establish that filling in does not occur; it only suggests that it might not. Dennett also states that "your brain has no machinery for [filling in] at this location." This statement is incorrect. The primary visual cortex lacks a direct input from one eye, but normal "machinery" is there to deal with the input from the other eye. Ricardo Gattass and his colleagues at the Federal University of Rio de Janeiro have shown that in the macaque some of the neurons in the blind-spot area of the primary visual cortex do respond to input from both eyes, probably assisted by inputs from other parts of the cortex. Moreover, in the case of simple filling in, some of the neurons in that region respond as if they were actively filling in.
Thus, Dennett's claim about blind spots is incorrect. In addition, psychological experiments by Vilayanur S. Ramachandran have shown that what is filled in can be quite complex, depending on the overall context of the visual scene. How, he argues, can your brain be ignoring something that is in fact commanding attention?
Filling in, therefore, is not to be dismissed as nonexistent or unusual. It probably represents a basic interpolation process that can occur at many levels in the neocortex. It is, incidentally, a good example of what is meant by a constructive process.
How can we discover the neurons whose firing symbolizes a particular percept? William T. Newsome and his colleagues at Stanford University have done a series of brilliant experiments on neurons in cortical area MT of the macaque's brain. By studying a neuron in area MT, we may discover that it responds best to very specific visual features having to do with motion. A neuron, for instance, might fire strongly in response to the movement of a bar in a particular place in the visual field, but only when the bar is oriented at a certain angle, moving in one of the two directions perpendicular to its length within a certain range of speed.
It is technically difficult to excite just a single neuron, but it is known that neurons that respond to roughly the same position, orientation and direction of movement of a bar tend to be located near one another in the cortical sheet. The experimenters taught the monkey a simple task in movement discrimination using a mixture of dots, some moving randomly, the rest all in one direction. They showed that electrical stimulation of a small region in the right place in cortical area MT would bias the monkey's motion discrimination, almost always in the expected direction.
Thus, the stimulation of these neurons can influence the monkey's behaviour and probably its visual percept. Such experiments do not, however, show decisively that the firing of such neurons is the exact neural correlate of the percept. The correlate could be only a subset of the neurons being activated. Or perhaps the real correlate is the firing of neurons in another part of the visual hierarchy that are strongly influenced by the neurons activated in area MT.
These same reservations apply also to cases of binocular rivalry. Clearly, the problem of finding the neurons whose firing symbolizes a particular percept is not going to be easy. It will take many careful experiments to track them down even for one kind of percept.
It seems obvious that the purpose of vivid visual awareness is to feed into the cortical areas concerned with the implications of what we see; from there the information shuttles on the one hand to the hippocampal system, to be encoded (temporarily) into long-term episodic memory, and on the other to the planning levels of the motor system. But is it possible to go from a visual input to a behavioural output without any relevant visual awareness?
That such a process can happen is demonstrated by the remarkable class of patients with "blindsight." These patients, all of whom have suffered damage to their visual cortex, can point with fair accuracy at visual targets or track them with their eyes while vigorously denying seeing anything. In fact, these patients are as surprised as their doctors by their abilities. The amount of information that "gets through," however, is limited: blindsight patients have some ability to respond to wavelength, orientation and motion, yet they cannot distinguish a triangle from a square.
It is naturally of great interest to know which neural pathways are being used in these patients. Investigators originally suspected that the pathway ran through the superior colliculus. Recent experiments suggest that a direct, though weak, connection between the lateral geniculate nucleus and other visual areas in the cortex may be involved. It is unclear whether an intact primary visual cortex region is essential for immediate visual awareness. Conceivably the visual signal in blindsight is so weak that the neural activity cannot produce awareness, although it remains strong enough to get through to the motor system.
Normal-seeing people regularly respond to visual signals without being fully aware of them. In automatic actions, such as swimming or driving a car, complex but stereotypical actions occur with little, if any, associated visual awareness. In other cases, the information conveyed is either very limited or very attenuated. Thus, while we can function without visual awareness, our behaviour without it is rather restricted.
Clearly, it takes a certain amount of time to experience a conscious percept. It is difficult to determine just how much time is needed for an episode of visual awareness, but one aspect of the problem that can be demonstrated experimentally is that signals received close together in time are treated by the brain as simultaneous.
A disk of red light is flashed for, say, 20 milliseconds, followed immediately by a 20-millisecond flash of green light in the same place. The subject reports that he did not see a red light followed by a green light. Instead he saw a yellow light, just as he would have if the red and the green light had been flashed simultaneously. Yet the subject could not have experienced yellow until after the information from the green flash had been processed and integrated with the preceding red one.
Experiments of this type led psychologist Robert Efron, now at the University of California at Davis, to conclude that the processing period for perception is about 60 to 70 milliseconds. Similar periods are found in experiments with tones in the auditory system. It is always possible, however, that the processing times may be different in higher parts of the visual hierarchy and in other parts of the brain. Processing is also more rapid in trained, compared with naive, observers.
Because it appears to be involved in some forms of visual awareness, it would help if we could discover the neural basis of attention. Eye movement is a form of attention, since the area of the visual field in which we see with high resolution is remarkably small, roughly the area of the thumbnail at arm's length. Thus, we move our eyes to gaze directly at an object in order to see it more clearly. Our eyes usually move three or four times a second. Psychologists have shown, however, that there appears to be a faster form of attention that moves around, in some sense, when our eyes are stationary.
The exact psychological nature of this faster attentional mechanism is at present controversial. Several neuroscientists, however, including Robert Desimone and his colleagues at the National Institute of Mental Health, have shown that the rate of firing of certain neurons in the macaque's visual system depends on what the monkey is attending to in the visual field. Thus, attention is not solely a psychological concept; it also has neural correlates that can be observed. A number of researchers have found that the pulvinar, a region of the thalamus, appears to be involved in visual attention. We would like to believe that the thalamus deserves to be called "the organ of attention," but this status has yet to be established.
The major problem is to find what activity in the brain corresponds directly to visual awareness. It has been speculated that each cortical area produces awareness of only those visual features that are "columnar," or arranged in the stack or column of neurons perpendicular to the cortical surface. Thus, the primary visual cortex could code for orientation and area MT for motion. So far experimentalists have not found one particular region in the brain where all the information needed for visual awareness appears to come together. Dennett has dubbed such a hypothetical place "The Cartesian Theatre." He argues on theoretical grounds that it does not exist.
Awareness seems to be distributed not just on a local scale, but more widely over the neocortex. Vivid visual awareness is unlikely to be distributed over every cortical area because some areas show no response to visual signals. Awareness might, for example, be associated with only those areas that connect back directly to the primary visual cortex or alternatively with those areas that project into one another's layer 4. (The latter areas are always at the same level in the visual hierarchy.)
The key issue, then, is how the brain forms its global representations from visual signals. If attention is indeed crucial for visual awareness, the brain could form representations by attending to just one object at a time, rapidly moving from one object to the next. For example, the neurons representing all the different aspects of the attended object could all fire together very rapidly for a short period, possibly in rapid bursts.
This fast, simultaneous firing might not only excite those neurons that symbolized the implications of that object but also temporarily strengthen the relevant synapses so that this particular pattern of firing could be quickly recalled—a form of short-term memory. If only one representation needs to be held in short-term memory, as in remembering a single task, the neurons involved may continue to fire for a period.
A problem arises if it is necessary to be aware of more than one object at exactly the same time. If all the attributes of two or more objects were represented by neurons firing rapidly, their attributes might be confused. The colour of one might become attached to the shape of another. This happens sometimes in very brief presentations.
Some time ago Christoph von der Malsburg, now at the Ruhr-Universität Bochum, suggested that this difficulty would be circumvented if the neurons associated with any one object all fired in synchrony (that is, if their times of firing were correlated) but out of synchrony with those representing other objects. Recently two groups in Germany reported that there does appear to be correlated firing between neurons in the visual cortex of the cat, often in a rhythmic manner, with a frequency in the 35- to 75-hertz range, sometimes called 40-hertz, or gamma, oscillation.
Von der Malsburg's proposal prompted us to suggest that this rhythmic and synchronized firing might be the neural correlate of awareness and that it might serve to bind together activity concerning the same object in different cortical areas. The matter is still undecided, but at present the fragmentary experimental evidence does rather little to support such an idea. Another possibility is that the 40-hertz oscillations may help distinguish figure from ground or assist the mechanism of attention.
Are there some particular types of neurons, distributed over the visual neocortex, whose firing directly symbolizes the content of visual awareness? One very simplistic hypothesis is that the activities in the upper layers of the cortex are largely unconscious ones, whereas the activities in the lower layers (layers 5 and 6) mostly correlate with consciousness. We have wondered whether the pyramidal neurons in layer 5 of the neocortex, especially the larger ones, might play this latter role.
These are the only cortical neurons that project right out of the cortical system (that is, not to the neocortex, the thalamus or the claustrum). If visual awareness represents the results of neural computations in the cortex, one might expect that what the cortex sends elsewhere would symbolize those results. Moreover, the neurons in layer 5 show a rather unusual propensity to fire in bursts. The idea that layer 5 neurons may directly symbolize visual awareness is attractive, but it still is too early to tell whether there is anything in it.
Visual awareness is clearly a difficult problem. More work is needed on the psychological and neural basis of both attention and very short-term memory. Studying the neurons when a percept changes, even though the visual input is constant, should be a powerful experimental paradigm. We need to construct neurobiological theories of visual awareness and test them using a combination of molecular, neurobiological and clinical imaging studies.
We believe that once we have mastered the secret of this simple form of awareness, we may be close to understanding a central mystery of human life: how the physical events occurring in our brains while we think and act in the world relate to our subjective sensations—that is, how the brain relates to the mind.
Postscript: There have been several relevant developments since this article was first published. It now seems likely that there are rapid "on-line" systems for stereotyped motor responses such as hand or eye movement. These systems are unconscious and lack memory. Conscious seeing, on the other hand, seems to be slower and more subject to visual illusions. The brain needs to form a conscious representation of the visual scene that it then can use for many different actions or thoughts. Exactly how all these pathways work and how they interact is far from clear.
There have been more experiments on the behaviour of neurons that respond to bistable visual percepts, such as binocular rivalry, but it is probably too early to draw firm conclusions from them about the exact neural correlates of visual consciousness. We have suggested on theoretical grounds based on the neuroanatomy of the macaque monkey that primates are not directly aware of what is happening in the primary visual cortex, even though most of the visual information flows through it. This hypothesis is supported by some experimental evidence, but it is still controversial.
AN INTRODUCTION TO ANALYTIC AND LINGUISTIC PHILOSOPHY
Analytic and Linguistic Philosophy, 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language, or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focussed on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th century in the English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical view based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts—the propositions of science—are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920s initiated the movement known as logical positivism (see Positivism). Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle began one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.
The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Additional contributions within the analytic and linguistic movement include the work of the British philosophers Gilbert Ryle, John Austin, and P. F. Strawson and the American philosopher W. V. Quine. According to Ryle, the task of philosophy is to restate “systematically misleading expressions” in forms that are logically more accurate. He was particularly concerned with statements the grammatical form of which suggests the existence of nonexistent objects. For example, Ryle is best known for his analysis of mentalistic language, language that misleadingly suggests that the mind is an entity in the same way as the body.
Austin maintained that one of the most fruitful starting points for philosophical inquiry is attention to the extremely fine distinctions drawn in ordinary language. His analysis of language eventually led to a general theory of speech acts, that is, to a description of the variety of activities that an individual may be performing when something is uttered. Strawson is known for his analysis of the relationship between formal logic and ordinary language. The complexity of the latter, he argued, is inadequately represented by formal logic. A variety of analytic tools, therefore, is needed in addition to logic in analysing ordinary language.
Quine discussed the relationship between language and ontology. He argued that language systems tend to commit their users to the existence of certain things. For Quine, the justification for speaking one way rather than another is a thoroughly pragmatic one.
The commitment to language analysis as a way of pursuing philosophy has continued as a significant contemporary dimension in philosophy. A division also continues to exist between those who prefer to work with the precision and rigour of symbolic logical systems and those who prefer to analyze ordinary language. Although few contemporary philosophers maintain that all philosophical problems are linguistic, the view continues to be widely held that attention to the logical structure of language, and to how language is used in everyday discourse, can often help resolve philosophical problems.
Edmund Husserl (1859-1938), German philosopher, was the founder of phenomenology, the 20th-century philosophical movement dedicated to the description of phenomena as they present themselves through perception to the conscious mind.
Husserl was born in Prossnitz, Moravia (now in the Czech Republic), on April 8, 1859. He studied science, philosophy, and mathematics at the universities of Leipzig, Berlin, and Vienna and wrote his doctoral thesis on the calculus of variations. He became interested in the psychological basis of mathematics and, shortly after becoming a lecturer in philosophy at the University of Halle, wrote his first book, Philosophie der Arithmetik (1891). At that time he maintained that the truths of mathematics have validity regardless of the way people come to discover and believe in them.
Husserl then argued against his early position, which he called psychologism, in Logical Investigations (1900-1901; trans. 1970). In this book, regarded as a radical departure in philosophy, he contended that the philosopher's task is to contemplate the essences of things, and that the essence of an object can be arrived at by systematically varying that object in the imagination. Husserl noted that consciousness is always directed toward something. He called this directedness intentionality and argued that consciousness contains ideal, unchanging structures called meanings, which determine what object the mind is directed toward at any given time.
During his tenure (1901-1916) at the University of Göttingen, Husserl attracted many students, who began to form a distinct phenomenological school, and he wrote his most influential work, Ideas: A General Introduction to Pure Phenomenology (1913; trans. 1931). In this book Husserl introduced the term phenomenological reduction for his method of reflection on the meanings the mind employs when it contemplates an object. Because this method concentrates on meanings that are in the mind, whether or not the object present to consciousness actually exists, Husserl said the method involved “bracketing existence,” that is, setting aside the question of the real existence of the contemplated object. He proceeded to give detailed analyses of the mental structures involved in perceiving particular types of objects, describing in detail, for instance, his perception of the apple tree in his garden. Thus, although phenomenology does not assume the existence of anything, it is nonetheless a descriptive discipline; according to Husserl, phenomenology is devoted, not to inventing theories, but rather to describing the “things themselves.”
After 1916 Husserl taught at the University of Freiburg. Phenomenology had been criticized as an essentially solipsistic method, confining the philosopher to the contemplation of private meanings, so in Cartesian Meditations (1931; trans. 1960), Husserl attempted to show how the individual consciousness can be directed toward other minds, society, and history. Husserl died in Freiburg on April 26, 1938.
Husserl's phenomenology had a great influence on a younger colleague at Freiburg, Martin Heidegger, who developed existential phenomenology, and on Jean-Paul Sartre and French existentialism (see Existentialism). Phenomenology remains one of the most vigorous tendencies in contemporary philosophy, and its impact has also been felt in theology, linguistics, psychology, and the social sciences.
The founder of phenomenology, German philosopher Edmund Husserl, introduced the term in his book Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (1913; Ideas: A General Introduction to Pure Phenomenology, 1931). The early followers of Husserl such as German philosopher Max Scheler, influenced by his previous book, Logische Untersuchungen (two volumes, 1900 and 1901; Logical Investigations, 1970), claimed that the task of phenomenology is to study essences, such as the essence of emotions. Although Husserl himself never gave up his early interest in essences, he later held that only the essences of certain special conscious structures are the proper object of phenomenology. As formulated by Husserl after 1910, phenomenology is the study of the structures of consciousness that enable consciousness to refer to objects outside itself. This study requires reflection on the content of the mind to the exclusion of everything else. Husserl called this type of reflection the phenomenological reduction. Because the mind can be directed toward nonexistent as well as real objects, Husserl noted that phenomenological reflection does not presuppose that anything exists, but rather amounts to a “bracketing of existence” - that is, setting aside the question of the real existence of the contemplated object.
German philosopher Martin Heidegger greatly influenced the modern philosophy movements of phenomenology and existentialism. According to Heidegger, humankind has fallen into a crisis by taking a narrow, technological approach to the world and by ignoring the larger question of existence. People, if they wish to live authentically, must broaden their perspectives. Instead of taking their existence for granted, people should view themselves as part of Being (Heidegger's term for that which underlies all existence).
Heidegger was born in Messkirch, Baden. He studied Roman Catholic theology and then philosophy at the University of Freiburg, where he was an assistant to Edmund Husserl, the founder of phenomenology. Heidegger began teaching at Freiburg in 1915. From 1923 to 1928 he taught at Marburg University. He then returned to Freiburg in 1928, inheriting Husserl's position as professor of philosophy. Because of his public support of Adolf Hitler and the Nazi Party in 1933 and 1934, Heidegger's professional activities were restricted in 1945, and controversy surrounded his university standing until his retirement in 1959.
German philosopher Martin Heidegger was instrumental in the development of the 20th-century philosophical school of existential phenomenology, which examines the relationship between phenomena and individual consciousness. His inquiries into the meaning of “authentic” or “inauthentic” existence greatly influenced a broad range of thinkers, including French existentialist Jean-Paul Sartre. Author Michael Inwood explores Heidegger’s key concept of Dasein, or “being,” which was first expounded in his major work Being and Time (1927).
Besides Husserl, Heidegger was especially influenced by the pre-Socratics (see Greek Philosophy; Philosophy), by Danish philosopher Søren Kierkegaard, and by German philosopher Friedrich Nietzsche. In developing his theories, Heidegger rejected traditional philosophic terminology in favour of an individual interpretation of the works of past thinkers. He applied original meanings and etymologies to individual words and expressions, and coined hundreds of new, complex words. In his most important and influential work, Sein und Zeit (Being and Time, 1927), Heidegger was concerned with what he considered the essential philosophical question: What is it, to be? This led to the question of what kind of “being” human beings have. They are, he said, thrown into a world that they have not made but that consists of potentially useful things, including cultural as well as natural objects. Because these objects come to humanity from the past and are used in the present for the sake of future goals, Heidegger posited a fundamental relation between the mode of being of objects, of humanity, and of the structure of time.
The individual is, however, always in danger of being submerged in the world of objects, everyday routine, and the conventional, shallow behaviour of the crowd. The feeling of dread (Angst) brings the individual to a confrontation with death and the ultimate meaninglessness of life, but only in this confrontation can an authentic sense of Being and of freedom be attained.
After 1930, Heidegger turned, in such works as Einführung in die Metaphysik (An Introduction to Metaphysics, 1953), to the interpretation of particular Western conceptions of being. He felt that, in contrast to the reverent ancient Greek conception of being, modern technological society has fostered a purely manipulative attitude that has deprived Being and human life of meaning—a condition he called nihilism. Humanity has forgotten its true vocation and must recover the deeper understanding of Being (achieved by the early Greeks and lost by subsequent philosophers) to be receptive to new understandings of Being.
Heidegger's original treatment of such themes as human finitude, death, nothingness, and authenticity led many observers to associate him with existentialism, and his work had a crucial influence on French existentialist Jean-Paul Sartre. Heidegger, however, eventually repudiated existentialist interpretations of his work. His thought directly influenced the work of French philosophers Michel Foucault and Jacques Derrida and of German sociologist Jürgen Habermas. Since the 1960s his influence has spread beyond continental Europe and has had an increasing impact on philosophy in English-speaking countries worldwide.
EXISTENTIALISM
Existentialism, philosophical movement or tendency, emphasizing individual existence, freedom, and choice, that influenced many diverse writers in the 19th and 20th centuries. Because of the diversity of positions associated with existentialism, the term is impossible to define precisely. Certain themes common to virtually all existentialist writers can, however, be identified. The term itself suggests one major theme: the stress on concrete individual existence and, consequently, on subjectivity, individual freedom, and choice.
Most philosophers since Plato have held that the highest ethical good is the same for everyone; insofar as one approaches moral perfection, one resembles other morally perfect individuals. The 19th-century Danish philosopher Søren Kierkegaard, who was the first writer to call himself existential, reacted against this tradition by insisting that the highest good for the individual is to find his or her own unique vocation. As he wrote in his journal, “I must find a truth that is true for me . . . the idea for which I can live or die.” Other existentialist writers have echoed Kierkegaard's belief that one must choose one's own way without the aid of universal, objective standards. Against the traditional view that moral choice involves an objective judgment of right and wrong, existentialists have argued that no objective, rational basis can be found for moral decisions. The 19th-century German philosopher Friedrich Nietzsche further contended that the individual must decide which situations are to count as moral situations.
All existentialists have followed Kierkegaard in stressing the importance of passionate individual action in deciding questions of both morality and truth. They have insisted, accordingly, that personal experience and acting on one's own convictions are essential in arriving at the truth. Thus, the understanding of a situation by someone involved in that situation is superior to that of a detached, objective observer. This emphasis on the perspective of the individual agent has also made existentialists suspicious of systematic reasoning. Kierkegaard, Nietzsche, and other existentialist writers have been deliberately unsystematic in the exposition of their philosophies, preferring to express themselves in aphorisms, dialogues, parables, and other literary forms. Despite their antirationalist position, however, most existentialists cannot be said to be irrationalists in the sense of denying all validity to rational thought. They have held that rational clarity is desirable wherever possible, but that the most important questions in life are not accessible to reason or science. Furthermore, they have argued that even science is not as rational as is commonly supposed. Nietzsche, for instance, asserted that the scientific assumption of an orderly universe is for the most part a useful fiction.
Perhaps the most prominent theme in existentialist writing is that of choice. Humanity's primary distinction, in the view of most existentialists, is the freedom to choose. Existentialists have held that human beings do not have a fixed nature, or essence, as other animals and plants do; each human being makes choices that create his or her own nature. In the formulation of the 20th-century French philosopher Jean-Paul Sartre, existence precedes essence. Choice is therefore central to human existence, and it is inescapable; even the refusal to choose is a choice. Freedom of choice entails commitment and responsibility. Because individuals are free to choose their own path, existentialists have argued, they must accept the risk and responsibility of following their commitment wherever it leads.
Kierkegaard held that it is spiritually crucial to recognize that one experiences not only a fear of specific objects but also a feeling of general apprehension, which he called dread. He interpreted it as God's way of calling each individual to make a commitment to a personally valid way of life. The word anxiety (German Angst) has a similarly crucial role in the work of the 20th-century German philosopher Martin Heidegger; anxiety leads to the individual's confrontation with nothingness and with the impossibility of finding ultimate justification for the choices he or she must make. In the philosophy of Sartre, the word nausea is used for the individual's recognition of the pure contingency of the universe, and the word anguish is used for the recognition of the total freedom of choice that confronts the individual at every moment.
Existentialism as a distinct philosophical and literary movement belongs to the 19th and 20th centuries, but elements of existentialism can be found in the thought (and life) of Socrates, in the Bible, and in the work of many premodern philosophers and writers.
The first to anticipate the major concerns of modern existentialism was the 17th-century French philosopher Blaise Pascal. Pascal rejected the rigorous rationalism of his contemporary René Descartes, asserting, in his Pensées (1670), that a systematic philosophy that presumes to explain God and humanity is a form of pride. Like later existentialist writers, he saw human life in terms of paradoxes: The human self, which combines mind and body, is itself a paradox and contradiction.
Nineteenth-century Danish philosopher Søren Kierkegaard played a major role in the development of existentialist thought. Kierkegaard criticized the popular systematic method of rational philosophy advocated by German philosopher Georg Wilhelm Friedrich Hegel. He emphasized the absurdity inherent in human life and questioned how any systematic philosophy could apply to the ambiguous human condition. In Kierkegaard’s deliberately unsystematic works, he explained that each individual should attempt an intense examination of his or her own existence.
Kierkegaard, generally regarded as the founder of modern existentialism, reacted against the systematic absolute idealism of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, who claimed to have worked out a total rational understanding of humanity and history. Kierkegaard, on the contrary, stressed the ambiguity and absurdity of the human situation. The individual's response to this situation must be to live a totally committed life, and this commitment can only be understood by the individual who has made it. The individual therefore must always be prepared to defy the norms of society for the sake of the higher authority of a personally valid way of life. Kierkegaard ultimately advocated a “leap of faith” into a Christian way of life, which, although incomprehensible and full of risk, was the only commitment he believed could save the individual from despair.
Danish religious philosopher Søren Kierkegaard rejected the all-encompassing, analytical philosophical systems of such 19th-century thinkers as German philosopher G. W. F. Hegel. Instead, Kierkegaard focussed on the choices the individual must make in all aspects of his or her life, especially the choice to maintain religious faith. In Fear and Trembling (1843; trans. 1941), Kierkegaard explored the concept of faith through an examination of the biblical story of Abraham and Isaac, in which God demanded that Abraham demonstrate his faith by sacrificing his son.
One of the most controversial works of 19th-century philosophy, Thus Spake Zarathustra (1883-1885) articulated German philosopher Friedrich Nietzsche’s theory of the Übermensch, a term translated as “Superman” or “Overman.” The Superman was an individual who overcame what Nietzsche termed the “slave morality” of traditional values, and lived according to his own morality. Nietzsche also advanced his idea that “God is dead,” or that traditional morality was no longer relevant in people’s lives. In this passage, the sage Zarathustra came down from the mountain where he had spent the last ten years alone to preach to the people.
Nietzsche, who was not acquainted with the work of Kierkegaard, influenced subsequent existentialist thought through his criticism of traditional metaphysical and moral assumptions and through his espousal of tragic pessimism and the life-affirming individual will that opposes itself to the moral conformity of the majority. In contrast to Kierkegaard, whose attack on conventional morality led him to advocate a radically individualistic Christianity, Nietzsche proclaimed the “death of God” and went on to reject the entire Judeo-Christian moral tradition in favour of a heroic pagan ideal.
Heidegger, like Pascal and Kierkegaard, reacted against an attempt to put philosophy on a conclusive rationalistic basis—in this case the phenomenology of the 20th-century German philosopher Edmund Husserl. Heidegger argued that humanity finds itself in an incomprehensible, indifferent world. Human beings can never hope to understand why they are here; instead, each individual must choose a goal and follow it with passionate conviction, aware of the certainty of death and the ultimate meaninglessness of one's life. Heidegger contributed to existentialist thought an original emphasis on being and ontology (see Metaphysics) as well as on language.
Twentieth-century French intellectual Jean-Paul Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of Sartre’s work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.