An explanation is a set of statements, usually constructed to describe a set of facts, that clarifies the causes, context, and consequences of those facts. It may establish rules or laws, and may clarify existing rules or laws in relation to the objects or phenomena examined.[1]
The tendency in much of the recent philosophical literature has been to assume that there is a substantial continuity between the sorts of explanations found in science and at least some forms of explanation found in more ordinary, non-scientific contexts, with the latter embodying, in a more or less inchoate way, features that are present in a more detailed, precise, and rigorous form in the former. It is further assumed that it is the task of a theory of explanation to capture what is common to both scientific and at least some more ordinary forms of explanation.[3]
A notable theory of scientific explanation is Hempel's deductive-nomological (DN) model. This model has been widely criticized, but it remains the starting point for discussion of most theories of explanation.
The difference between explanations and arguments reflects a difference in the kind of question that arises. In the case of an argument, we start from a doubted fact, which we try to support with reasons. In the case of an explanation, we start from an accepted fact, and the question is why that fact obtains or what caused it. The answer here is the explanation.[4]
While arguments attempt to show that something is, will be, or should be the case, explanations try to show why or how something is or will be. For instance, if Fred and Joe address the issue of whether or not Fred's cat has fleas, Joe may state: "Fred, your cat has fleas. Observe that the cat is scratching right now." Joe has made an argument that the cat has fleas. However, if Fred and Joe agree that the cat has fleas, they may further ask why this is so and put forth an explanation: "The reason the cat has fleas is that the weather has been damp." The difference is that the attempt is not to settle whether some claim is true, but to show why it is true. In this sense, arguments aim to contribute knowledge, whereas explanations aim to contribute understanding.
The term explanation is sometimes used in the context of justification, e.g., an explanation as to why a belief is true. Justification may be understood as an explanation of why a belief is true, or as an account of how one knows what one knows. It is important to recognize when an explanation is not a justification: one may give a detailed and believable account of something without offering any proof of it.
There are many and varied events, objects, and facts that require explanation, and likewise many different things that can be used to explain them. Aristotle recognized four archetypes of explanation, which were long thought to be universal and distinct 'kinds' of explanation that together comprise all the ways of explaining something; however, there is much confusion about their precise definition and about how they relate to each other. Different types of explanation call for correspondingly different types of reasoning, such as deductive-nomological, functional, historical, psychological, reductive, teleological, and methodological explanation.[1]
Consider two generalizations: (1) all members of the Greensbury school board for 1964 are bald, and (2) all gases expand when heated under constant pressure. Only the second is a law. Thus, according to the DN model, the latter generalization can be used, in conjunction with the information that some particular sample of gas has been heated under constant pressure, to explain why it has expanded. By contrast, the former generalization (1), in conjunction with the information that a particular person n is a member of the 1964 Greensbury school board, cannot be used to explain why n is bald.
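As a rough sketch of the schema involved here (the labels L, C, and E follow the standard presentation of the DN model and are not defined elsewhere in this text), a DN explanation is a sound deductive argument from laws and particular conditions to the explanandum:

\[
\begin{array}{ll}
L_1, \ldots, L_n & \text{laws (e.g., all gases expand when heated under constant pressure)} \\
C_1, \ldots, C_k & \text{particular conditions (e.g., this sample of gas was heated under constant pressure)} \\
\hline
E & \text{explanandum (e.g., this sample of gas expanded)}
\end{array}
\]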
There is considerable disagreement over whether such generalizations are laws. Some philosophers (e.g., Woodward 2000) suggest that such generalizations satisfy too few of the standard criteria to count as laws but can nevertheless figure in explanations; if so, it apparently follows that we must abandon the DN requirement that all explanations must appeal to laws. Others (e.g., Mitchell 1997), emphasizing different criteria for lawfulness, conclude instead that generalizations like (M) are laws and hence no threat to the requirement that explanations must invoke laws. In the absence of a more principled account of laws, it is hard to evaluate these competing claims and hence hard to assess the implications of the DN model for the special sciences. At the very least, providing such an account is an important item of unfinished business for advocates of the DN model.
One can think of IS explanation as involving a natural generalization of this idea. While an IS explanation does not show that the explanandum-phenomenon was to be expected with certainty, it does the next best thing: it shows that the explanandum-phenomenon is at least to be expected with high probability and in this way provides understanding. Stated more generally, both the DN and IS models share the common idea that, as Salmon (1989) puts it, successful explanation is a matter of nomic expectability.
As explained above, examples like (3) are potential counterexamples to the claim that the DN model provides necessary conditions for explanation. There are also a number of well-known counterexamples to the claim that the DN model provides sufficient conditions for successful scientific explanation. Here are two illustrations.
Explanatory Irrelevancies. A derivation can satisfy the DN criteria and yet be a defective explanation because it contains irrelevancies besides those associated with the directional features of explanation. Consider an example due to Wesley Salmon (1971a: 34), the birth control case taken up again below, in which a man's failure to become pregnant is "explained" by the fact that he regularly takes birth control pills.
It is arguable that (L) meets the criteria for lawfulness imposed by Hempel and many other writers. (If one wants to deny that (L) is a law, one needs some principled, generally accepted basis for this judgment and, as explained above, it is unclear what this basis is.) Moreover, (5) is certainly a sound deductive argument in which (L) occurs as an essential premise. Nonetheless, most people judge that (L) and (K) are no explanation of (E). There are many other similar illustrations. For example (Kyburg 1965), it is presumably a law (or at least an exceptionless, counterfactual-supporting generalization) that all samples of table salt that have been hexed by being touched with the wand of a witch dissolve when placed in water. One may use this generalization as a premise in a DN derivation which has as its conclusion that some particular hexed sample of salt has dissolved in water. But again the hexing is irrelevant to the dissolving, and such a derivation is no explanation.
There are two possible reactions one might have to this observation. One is that the idea that explanation is a matter of nomic expectability is correct as far as it goes, but that something more is required as well. According to this assessment, the DN/IS model does state a necessary condition for successful explanation and, moreover, a condition that is a non-redundant part of a set of conditions that are jointly sufficient for explanation. However, some other, independent feature, X (which will account for the directional features of explanation and ensure the kind of explanatory relevance that is apparently missing in the birth control example), must be added to the DN model to achieve a successful account of explanation. The idea is thus that Nomic Expectability + X = Explanation. Something like this idea is endorsed by the unificationist models of explanation developed by Friedman (1974) and Kitcher (1989), which are discussed in Section 5 below.
Suggested Readings. The most authoritative and comprehensive statement of the DN and IS models is probably Hempel (1965b). This is reprinted in Hempel 1965a, along with a number of other papers that touch on various aspects of the problem of scientific explanation. In addition to the references cited in this section, Salmon (1989: 46ff.) describes a number of well-known counterexamples to the DN/IS models and discusses their significance.
Taking birth control pills is statistically irrelevant to pregnancy in the population of males but not in the population of females, assuming that not all women in the population take birth control pills. In other words, if you are a male in this population, taking birth control pills is statistically irrelevant to whether you become pregnant, while if you are a female it is relevant. Thus taking birth control pills is explanatorily irrelevant to pregnancy among males but not among females.
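Put in simple probabilistic terms (a minimal formalization of the relevance claim just stated, with P standing for probability), statistical irrelevance among males and relevance among females amount to:

\[
P(\text{pregnancy} \mid \text{male, takes pills}) = P(\text{pregnancy} \mid \text{male}),
\qquad
P(\text{pregnancy} \mid \text{female, takes pills}) \neq P(\text{pregnancy} \mid \text{female}).
\]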