Sorry, all of the previous discussions are unreadable, so I'll make it easy. While using impact factor alone to determine reliability is not acceptable, is impact factor a valid thing to look at to determine reliability? I've set up an easy and clear yes/no option.
I think this entire debate is a red herring that arose from the belief of some editors that Wikipedia's decision not to include very recent outlier papers in global warming (an article that is under continual pressure to include everybody's favorite hobby horse) was based on unreliability of the paper. Not so. The pertinent policy is the neutral point of view. I think it got caught up with this policy because there are one or two words about weight of sources in this policy. Obviously newer papers, especially those that have not had time to percolate, and especially those that contradict much of the science, tend not to carry an attributable weight, but given a few months we have a clearer view. We're writing an encyclopedia, not a newspaper, so we can afford to wait to see secondaries such as review articles and the like, especially in a situation where there is a problem with recentism. --TS 01:41, 14 March 2010 (UTC)
This is where you and I differ (and you'll be aware that I am the editor who raised FAQ Q22 at talk:global cooling). You continually conflate impact and impact factor, but others have addressed that. Sources in this case means journals, papers, and authors, all three and attempts to reduce the meaning to journals only are unlikely to prevail.
It is of course correct to say that new, as-yet unevaluated papers are far less suitable for inclusion than those whose impact (not impact factor, which is just one metric) is known. This speaks to due weight. --TS 16:11, 14 March 2010 (UTC)
"Using impact factor alone to determine reliability" of WHAT? If this is supposed to be a simplified restatement of an unreadable question, it must contain whatever it is that one is trying to evaluate the reliability of. Jc3s5h (talk) 14:51, 14 March 2010 (UTC)
Sorry, when someone uses the unique turn of phrase "citation index scores," I assume they mean the only time citation indices are looked at and things scored - i.e., impact factor. If what TMLutas was saying when he edited this policy was that it is inappropriate to determine whether other sources have used the source in question - i.e., do other papers cite this source - then the page itself says, under "Usage by other sources": "How accepted, high-quality reliable sources use a given source provides evidence, positive or negative, for its reliability and reputation." That is true for every source - if a paper is frequently cited by other papers, that is evidence that it has reliability and reputation. If the paper is uncited by all, that is evidence it does not have reliability and reputation. If TMLutas is trying to usurp that by saying that we should not look at whether other papers cite a given paper at all, he's clearly ignored the bulk of this policy. Hipocrite (talk) 14:08, 15 March 2010 (UTC)
For purposes of this guideline, a reliable source is one that is good enough to consider using in an article. Reputable academic journals are reliable sources, and that includes all the papers that they contain. If someone justifies a new claim in an article based on an unreliable source, such as an anonymous personal website, any editor would be justified in removing the claim without even reading the source, or spending 5 seconds thinking about whether the source might be right. Sure, some papers that appear in reputable journals turn out to be wrong, but unlike unreliable sources, they are worthy of examination and comparison to other reliable sources. Unreliable sources deserve no consideration at all. Jc3s5h (talk) 01:31, 17 March 2010 (UTC)
I looked at 2.1 in toto anew and noticed something that might resolve the issue a bit better. 2.1(2) says "Material that has been vetted by the scholarly community is regarded as reliable; this means published in reputable peer-reviewed sources or by well-regarded academic presses." This has been around since at least February of 2009 and not touched by me so can we agree that there's a consensus on this? If papers are "material" and not a "source" then the problem posed by 2.1(4) seems to go away. The problem ends up being that papers are being misdefined as sources and coming under 2.1(4) instead of material and handled under 2.1(2). I don't feel too badly about that since a number of other editors missed the exact same thing.
There is a long backstory to this which can be found further up and on further referenced pages and policies at the link. The nub of the issue is not *whether* to deny inclusion on a page but how it is to be denied: whether a study that is peer reviewed but new and a bit out there should be deemed unreliable anywhere in Wikipedia until its influence on its field is determined, or whether it should be shunted off to specialist pages dealing directly with its minority position and kept off the main topic summary page because too much balancing text would be required, ballooning the article beyond any reasonable limit. TMLutas (talk) 17:11, 22 March 2010 (UTC)
Why does it say "...that is challenged or likely to be challenged..."? Why not just say "everything"? What kind of thing are we talking about that would not be likely to be challenged? Stuff so obvious that no one could honestly disagree, or that no one would bother stating in a reliable source? I'm trying to understand the point of this clause. Is it trying to open some leeway for statements so obvious that no one notable would ever have published a source for them? Well, I can think of several examples, but first I think it's best if someone else gives me an example of the kind of statement that they had in mind when this clause was written, or failing that, some explanation of the purpose of this clause. Chrisrus (talk) 05:53, 20 March 2010 (UTC)
Experienced editors should try to do as Blueboar says. But this is a wiki. We let people add unsourced knowledge as they feel motivated to contribute such. We don't preemptively require sourcing. To do so would break our wiki editing model of incremental improvement. There is evidence that most of our content isn't written by people who carefully add sources; it's written by people contributing their knowledge freely as they feel motivated to do so. If someone adds something dubious, it can be removed, and if anyone wants to re-add it, then they are forced to find sources or leave it out. That's the basic framework for how things get written. It's nice when people do add citations when they add information, but to require it would go against the wiki process of imperfection leading to gradual improvement. A commonly cited principle is that we "tolerate things that we do not condone". That used to be in one of our policies, but I can't locate which one right now. In any case, I think it's still good guidance for those who can't fathom why we do what we do. Gigs (talk) 20:52, 23 March 2010 (UTC)
I decided to return to this topic after reading Wikipedia:BLUE. I think that readers of this article should be directed to this page for more information about where no citation is needed. Chrisrus (talk) 20:20, 13 May 2010 (UTC)
Back to the original topic, about whether news organizations are only good for names and dates, well, there's all kinds of news organizations. The top tier, such as the Washington Post, BBC, Wall Street Journal, Agence-France Presse, Stratfor, Economist, and so forth tend to publish a lot of analysis and background pieces, not just breaking news. For certain topics, like articles about political or business practices, these types of sources are usually the best ones. The mix of how much to cite to news sources depends on the article; a botany article might cite very few news sources, while an article about a current event would cite mostly breaking news reports. Squidfryerchef (talk) 16:18, 28 March 2010 (UTC)
Enterprise in the USA grew strong on the idea that good quality work (and a solid education) is a waste of time: nobody needs to produce things to last a thousand years if they get thrown away in five (years, months, days). WP implements this principle (lousy work only means: room for improvement) with some success. An encyclopedia used to be written by people who knew everything about their subject; half of WP (I am not talking about the hard sciences or entertainment, sports, tv, manga, rock etc.) is written by people who know either nothing or very few things about their subject. So you have to have ISBN numbers to be sure the books are not only imagined, and to give sources for every fact, because the suspicion is that the WP authors simply make them all up. Quality control in industry means controlling that quality is not so high that it will cost too much. But this is not all wrong; WP can only be a success by giving (nearly) all its amateurs free rein (only obvious vandalism is not really tolerated).--Radh (talk) 07:47, 24 March 2010 (UTC)
Does this diatribe have any point to make, beyond "please use ISBN numbers when possible"? The rest of it seems to be just (insultingly) repeating parts of Wikipedia:Why Wikipedia is not so great, which we already know... -- Quiddity (talk) 18:21, 28 March 2010 (UTC)
(edit conflict) I agree, project lists of good sources are useful for projects such as video games, music, etc. However, news sources range from high quality top end publications such as The Times and the Washington Post to some extremely low quality publications such as the Daily Star (United Kingdom) and The National Enquirer. Consideration has to be given on a case by case basis as to whether a particular source is reliable for a particular statement. For instance, this story in The Guardian reports that The Times had apologised for a story about Charles Kennedy and a drinking problem. So in that case it was shown that The Times story was wrong, although in fact Kennedy later admitted a drink problem and stood down as party leader.