Why doesn't the score voting community try harder for peer review?

Clay Shentrup

Dec 20, 2014, 5:07:00 PM
to electio...@googlegroups.com
I encounter a huge number of people who, being either too lazy or too lacking in math skills to understand the primary data and logical arguments at sites like ScoreVoting.net, insist on peer review. I grant that the "peers" in this field are, generally, horrendously incompetent (like, less knowledgeable than many smart people who've spent an afternoon studying voting methods on Wikipedia). However, there is an obvious pragmatic issue here.

Warren, how much effort do you put into reaching out to whatever scholarly community there is?

More generally, the fields of social choice and economics are full of naive fallacies. For instance, Kahneman and others have argued that the Allais Paradox is somehow an indictment of utilitarianism, when in fact it's just a demonstration that people use imperfect/approximate heuristics to determine the value of something. That such obviously wrong conclusions persist in the academic world is a sign that more work needs to be done to penetrate those insular bubbles with modern thinking.

david.cou...@drake.edu

Dec 22, 2014, 6:45:16 PM
to electio...@googlegroups.com
I would definitely be one of those who feel that papers are more convincing once they have gone through a peer-review process, for a number of reasons. First, there are a lot of people who are interested in figuring out the truth about how voting systems compare. However, I would wager that the vast majority of folks are not good at following prose like:

"The “range voting” system is as follows. In a c-candidate election, you select a vector of c real numbers, each of absolute value ≤ 1, as your vote."

This is the very first thing that a reader gets when they jump into Smith 2000, and later parts are far more opaque than this. So, the typical lay reader (in fact, the typical expert reader who is not a mathematician by training) is likely to give up on attempting to get through this paper. What that means is that they have no way of knowing whether to believe what Smith has written, because they have no way to verify it. There is all kinds of crap that gets put up on the internet, so if you can't investigate something yourself you are going to either need some kind of help or, I would argue, you *should* discount the paper. That's where peer review comes in. Now, don't get me wrong, the peer review process lets in lots of crap, and there are plenty of weak journals where peer review means very little. But at the very least the lay reader can look at an argument and say to themselves "Someone other than the author has looked at this work skeptically, and decided that it passes muster. Therefore, I can give this claim at least some credence." So, I think it's a huge service to the lay reader to go through the peer review process.
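To be fair to Smith, the underlying idea is simple; as far as I can tell, the quoted definition amounts to nothing more than the following few lines (my own illustration, not code from the paper):

def range_voting_winner(ballots):
    # Each ballot is a list of c scores, one per candidate, each
    # score in [-1, 1]; the candidate with the highest total wins.
    totals = [sum(scores) for scores in zip(*ballots)]
    return max(range(len(totals)), key=lambda i: totals[i])

ballots = [
    [1.0, -0.3, 0.5],  # three voters scoring three candidates
    [0.2, 1.0, -1.0],
    [0.9, 0.0, 0.4],
]
print(range_voting_winner(ballots))  # candidate 0 wins, total 2.1

A reader who could follow that but not the formal prose is exactly the reader I'm worried about.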

Next, peer review often finds mistakes. *Everyone* makes errors, and even friendly peer review will make a paper better. It also helps to adjust the prose to the right audience.

As a professor, if a student cited a paper that lived only on the internet and had not been through peer review, I would mark them down incrementally. One thing students have to learn is how to distinguish between reliable and unreliable sources. Now, there is lots of great stuff in the papers that you guys write, but the internet is so full of scientific-sounding crap (just check out climate and evolution denial websites) that most non-experts don't have the faculties to adequately judge an argument. So, I just flat out tell students that I want their citations to come from the literature, not from secondary sources or random websites.

Finally, it makes the work easier to find long-term. As you have found out, sometimes internet servers change. If you have a full citation, then all that breaks is an explicit link to a pdf, but others can go find the work themselves. If there's no journal, then interested readers may be SOL.

david.cou...@drake.edu

Dec 22, 2014, 9:27:07 PM
to electio...@googlegroups.com
One other quick thought on the peer review idea. Part of the reason that engaging with the academic community is critical is that it keeps you honest. It's too easy to tell yourself that you know what you're talking about if you allow yourself to write off your critics. If you don't have to convince *them* that your work has value it's just too easy to say they're all a bunch of morons who aren't smart enough to see your own genius. We all do it, of course--I have had lots of great ideas shot down by ignoramuses (to my eyes) in the field. The genius of the scientific enterprise is that these ignoramuses almost always make my work better.

My sense is that when you write about peers in the field being "horrendously incompetent" or dismiss Kahneman's arguments out of hand, you're enjoying the luxury of an insular community that will let you get away with stuff that you shouldn't. Look, I definitely don't agree with everything that Kahneman says, but the man's a Nobel-winning economist, so you want to be pretty self-critical before suggesting that his conclusions are "obviously wrong." It's fair to say that the paradox is obviously *consistent* with the use of imperfect heuristics--and he wouldn't disagree--but it's also fair to say that it's consistent with a decision system so filled with this type of approximation that the concept of utilitarianism ceases to be of any practical value. Kahneman has done so much work showing that people do not act in ways that are consistent with utilitarian optimization that I think he's earned the right to wonder about its applicability--it's not just this one paradox. The fact that there is an alternative explanation that you prefer is a wonderful starting point for debate--that's how *you* will keep other people honest. But it's not proof that people who disagree are morons. *That* is what peer review does for you and for your argument: it makes you get beat up on enough by very smart people with whom you disagree that eventually your arguments become *more correct*. Avoiding peer review allows for advocacy at the expense of truth.

Warren D Smith

Dec 23, 2014, 2:05:27 PM
to electio...@googlegroups.com
Peer review

Well, speaking for myself... I think official peer review in science is broken. I like the idea in principle, though. It's usually capable of rejecting obviously incompetent papers, but usually not capable of anything deeper, such as actually detecting errors whose discovery would require the referee to seriously comprehend the paper. This is unfortunate, but it is my experience. I think good referees are rare nowadays. Peer review was a system designed for amateurs that worked pretty well 100 years ago, when science was small and, e.g., all quantum physicists fit in one room. Mainly it involves the refs being anonymous, unpaid, and unaccountable, but the authors being non-anonymous and paying, and ref-author interactions not being done interactively... all exactly the opposite of how it should be. Also, it is unbelievably inefficient, often taking years.

In my case, with my voting-related work, I find that if I publish on the web, then (1) it gets published immediately; (2) it gets peer review, in the sense of people emailing me comments, questions, corrections, redoing my work, etc., to a far greater total extent than any official peer review I ever had; and (3) it gets vastly more readership than any official scientific journal publication I ever had. Indeed, my web site gets readership comparable to the best-selling books Harvard University Press ever published.

Also, I did submit several of my voting-related works, including an entire book I wrote, to official publishers, and so far it has all been rejected. Amazingly, though, I do not think those rejecting it pointed out even a single specific error. If I kept fighting for years I probably could have gotten it published, but I do not have the stomach for that.

I like the idea that journals are archival, but many are now giving even that up, becoming all-electronic.

My faith was shaken yet more by this: here's a paper I wrote and in fact am still working on (consider it partially finished):
http://rangevoting.org/CombinedTestFail.html
You probably do not want to read the math, but if you skip to the part where I discuss the work of Ioannidis 2005, he claimed that "most published research findings are false." Wow! And it looks like he's probably right. What a massive failure of The Official System. My god.

Here's a suggestion if you want peer review. The "arXiv" is largely taking over science right now:
http://arxiv.org/
It's a big repository of online scientific papers. Instant publishing, instant access, no refereeing at all. But a lot of these papers are then re-submitted to official on-paper journals, for peer review and reprinting on paper, often years later. I now read the vast majority of what I read on the arXiv, only going to paper journals if I cannot find something online, and it is harder and harder for me to get hold of the stuff that is on paper but not online; about half the time I cannot.

So, the arXiv is great. However, it would be better if the arXiv added rating and commenting facilities. If they were added, then everything would be effectively peer reviewed, and would in total get VASTLY more peer reviewing than official referees of official journals provide, and in PUBLIC (unlike those official reports, which are almost always kept secret, which is crazy). And the most interested readers in the world would provide this, not some schmuck who often is not even interested.

But it has been many years, and I have begged those behind the arXiv to add this, and they refused. Which leads me to my question. If the scientific community is supposedly all for peer review, then how come they, for years and years, still refuse to add what would be far greater peer review than ever before available to the greatest sci-paper resource ever (the arXiv)?

And I believe the answer, sadly, is that the scientific community is not actually interested in peer review for the right reasons, the ones that make review an inherently good idea. They really want peer review for the reasons it is a bad idea -- to keep an unaccountable "old boys club" in power.

If the scientific community really wishes to have good peer review, and really wants to democratize science, then it should do it on the arXiv by adding rating and commenting. And you, as an activist, can add your voice to those saying so, or at least try. I think/hope this will eventually happen, but it will sure be a tremendous waste if it takes 100 years.



Warren D Smith

Dec 23, 2014, 5:16:57 PM
to electio...@googlegroups.com
And here's another thing about peer review.

See, even after I became convinced peer review is pretty broken, I still hoped/thought that it worked well for the really great shining examples of science. I mean, when Joe Average Scientist writes a paper, the refereeing on it probably sucks. But when it's a Great Breakthrough Famous paper, then the top guys would be happy to referee it, and it'd really get worked over, and everything would be top quality.

I thought/hoped.

However, my faith in that was shaken by Shinichi Mochizuki, a Japanese math professor who in 2012 claimed to have proven the "ABC conjecture." This was one of the big outstanding conjectures in Number Theory, and if he's really done it, it's definitely one of the top 5 accomplishments in Number Theory in the last 25 years; it has a ton of consequences, and everybody knows it. Supposedly Mochizuki has a reputation as brilliant, etc.

Did it get refereed by top guys? No. Did it get refereed at all, in 2 years? No. As far as I understand, the journal went to top guy after top guy asking them to referee it, and all refused. They could not find anybody willing to referee it.

What the hell?

Well, here's a partial explanation of what went wrong. See, when Y. Zhang found an infinite number of near-twin primes (a breakthrough in prime numbers), when A. Wiles proved Fermat's Last Theorem, and when G. Perelman proved Thurston's Geometrization Conjecture (a breakthrough in topology)... all those, while great, employed mathematical ingredients that were mostly pretty well known. So lots of people could check various parts of them, and there was lots of motivation to do so. So they all got acclaimed and trumpeted as true, in fairly short order. Perelman actually never submitted his stuff to a journal, merely putting it on the internet, but others then redid all his stuff in books and papers.

But with Mochizuki, you need to read a bunch of his papers and preparatory papers totalling 500-1000 pages, all laying out a new branch of mathematics he calls "Inter-universal Teichmuller Theory," which is then used to prove ABC. Very few, maybe zero, people besides Mochizuki know IUTT. So nobody is an expert. And with referees being UNPAID, who's willing to read 500-1000 pages developing a completely new kind of mathematics that lies outside everybody's area?

So, the system totally broke in this case, in a highly visible way, since this was allegedly a really tip-top accomplishment.

So now what? Well, Mochizuki posted his stuff online for all to read; there are a few online fora where a few people are sort of "refereeing it in the open" via discussions, and as far as I can see none of them are getting very far... and there are a few Japanese guys close to Mochizuki, sort of his students or acolytes, who are sort of trying to work in this area now. So, anyhow, the official system has totally failed in this case, and I do not know what the ultimate result will be. But as far as I can see, nobody has learned anything from this experience, as far as modifying the Official System is concerned. It's pitiful.

Mochizuki himself, trying to summarize the status:
http://www.kurims.kyoto-u.ac.jp/~motizuki/IUTeich%20Verification%20Report%202013-12.pdf

Clay Shentrup

Dec 24, 2014, 2:12:28 AM
to electio...@googlegroups.com
Warren,

I think you should summarize these last two posts into a response at ScoreVoting.net, like ScoreVoting.net/peer-review.html




david.cou...@drake.edu

Dec 24, 2014, 1:19:39 PM
to electio...@googlegroups.com
Warren,

I absolutely agree that peer review has significant problems--many of which you mention. In the case of Mochizuki, I guess I'm not terribly concerned because I can't think of any better solution. If nobody is really competent to do the checking, then that's the problem (rather than peer review). For some reason he hasn't been able to convince enough people that this area is worth getting involved in. If you are so brilliant that nobody else can understand what you are doing, and you're working in theoretical math, then for me it's an open question about whether it matters. It just seems to me like at some point you ought to be able to explain to other experts why you're right. 

Anyway, that's really neither here nor there, as cases where nobody else in the world can understand what you're doing are rare. Generally when that happens it's because the writer is delusional, though I'll grant that isn't what's going on in Mochizuki's case.

If you don't mind me saying, I would posit that the reason you have had trouble getting your work published is that it is in some cases unnecessarily opaque. I don't mean this in a critical way, as you are clearly extremely good at switching between theoretical and applied work. But let's take Smith 2000 as an example. In this paper, I would argue that the result the community would be most interested in is the simulation. That should definitely be publishable. However, the people who are going to be most interested generally work in applied areas and will find the results much more useful than the theoretical underpinnings. The problem is that the theoretical background you develop is not really necessary for setting up the model, and could be separated into another publication that would go into a game theory journal instead. Now, obviously you need the results of your theoretical work to explain how you put the model together, but those could be explained in terms that can be understood by the non-specialized expert.

For example, you start this paper talking about the probability that a person's vote will "count." You provide three different possible answers to this question: one with an inverse square root, one exponential, and one a simple inverse. Honestly, I have no idea what the difference is or why the choice produces such shockingly different results. I can't decide whether it matters, or which formula is going to be important in which cases. Now, maybe I'm just stupid and you actually give a sufficient explanation of what you're doing in the third paragraph. I am not saying there is an error. But I have a PhD in physical chemistry from Stanford, so if I can't figure out what you're doing, then my guess is that the vast majority of your intended audience is also going to be stymied, because they're the computer modelers and poli sci folks, not the mathematicians.
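Just to illustrate how differently those three shapes behave (the functional forms here are my own guesses based on your description, not the actual formulas from the paper), a few lines of Python:

import math

# Three hypothetical decay laws for P(my vote matters) as the number
# of voters n grows -- purely illustrative guesses at the shapes.
def p_inverse_sqrt(n):
    return 1.0 / math.sqrt(n)

def p_exponential(n, a=1e-4):
    return math.exp(-a * n)  # 'a' is an arbitrary made-up constant

def p_inverse(n):
    return 1.0 / n

for n in (10**3, 10**5, 10**7):
    print(f"n={n:>8}: 1/sqrt(n)={p_inverse_sqrt(n):.2e}  "
          f"exp={p_exponential(n):.2e}  1/n={p_inverse(n):.2e}")

At million-voter scales these differ by orders of magnitude, so the reader really needs to be told, in words, which regime applies when and why.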

The issue is that there aren't a lot of journals that are read by both communities, and so you need to write for your audience.

The way to do that is to figure out what from sections 1-8 is important for putting together the model. Then, explain your assumptions in putting that model together in clear language aimed at experts but not mathematicians. Don't provide the proofs, provide the logic behind the theorems that you have proven, and then cite the other work for those who want to go deeper (or put it in an appendix). That way, most people who want to use your work will be able to read the paper, which is what you need if you want it to get published. This is sort of similar to the situation with Mochizuki: it's not enough to be right, you have to be able to communicate your ideas to the people who have to understand them, and if you can do that then you ought to be able to get published fairly easily. 

This should probably go in a separate conversation, but one problem with the "internet peer review" is that I'm guessing most of those readers also glossed over the math (at least, they appear to have from the responses that I saw posted). If I were a reviewer, I would have been confused right from the beginning of section 1, because you find that a person is twice as likely to have an impact when the number of voters is odd as when it's even. The logic behind this is that if there is an even number of voters, then the best a voter can do is create a tie, and there is still a 50% chance that the tie goes the other way. If there is an odd number of voters, then for the voter to have an effect they will necessarily be *breaking* a tie, leading to certainty for their candidate. However, had the voter not participated, there was a 50% chance that the tie they broke would have broken their way anyway. So it strikes me that the situations are mirror images: in each case, the voter has a 50% chance of swaying an election that would not have gone their way into one that did. Unless I am just being daft (always a significant possibility), this point probably would have gotten cleared up in peer review. At the very least, other daft reviewers might have brought the point up, and you could have explained it in a quick sentence so that they might proceed untroubled.
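For what it's worth, here is a quick numerical check of that mirror-image intuition -- a sketch under my own assumptions (two candidates, every other voter flipping a fair coin, exact ties resolved by a fair coin), which may well measure "impact" differently than the paper does:

from math import comb

def p_outcome_changed(n_others):
    # Expected change in my candidate's win probability caused by my
    # vote, when each of n_others votes for A or B with prob 1/2 and
    # an exact tie is resolved by a fair coin.
    total = 0.0
    for a in range(n_others + 1):  # a = votes for A among the others
        p = comb(n_others, a) / 2**n_others
        b = n_others - a
        win_without = 1.0 if a > b else 0.5 if a == b else 0.0
        win_with = 1.0 if a + 1 > b else 0.5 if a + 1 == b else 0.0
        total += p * (win_with - win_without)
    return total

for n in (10, 11, 1000, 1001):
    print(n, p_outcome_changed(n))

Under that accounting, adjacent odd and even electorate sizes come out nearly identical, so if the factor of two is real it must come from a different way of counting "impact." Exactly the kind of thing a quick reviewer exchange would settle.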

-- Dave

Warren D Smith

Dec 24, 2014, 3:21:42 PM
to electio...@googlegroups.com
On 12/24/14, david.cou...@drake.edu
<david.cou...@drake.edu> wrote:
> Warren,
>
> I absolutely agree that peer review has significant problems--many of which you mention. In the case of Mochizuki, I guess I'm not terribly concerned because I can't think of any better solution. If nobody is really competent to do the checking, then that's the problem (rather than peer review).

--A better system would be refereeing in the open, on some bulletin board, by a team of non-anonymous reviewers, with interactive responses by the author (here Mochizuki), as well as comments & questions by whoever wanted to comment; i.e., they could ask him questions, he'd respond, etc.

You've got to take advantage of the internet and the power of interaction. If this kind of interaction is done, it leads to faster understanding than a non-interactive approach where supplying questions & answers is forbidden (or delayed by months or years, by which time everybody has forgotten whatever the question was). It also creates an extra resource for any later reader.

I once refereed a paper where I insisted on non-anonymity for me, and insisted on interaction with the author via email. (In some cases when I tried to insist on that, the editor told me to go to hell, but in this case, he was fine with it.) Result: I refuted the paper. Meanwhile, a team at MIT was simultaneously trying to review the same paper by holding a seminar about it, but without interacting with the author. They did not succeed in deciding on the validity of the paper. I conclude interaction works.

However, the author in this case did not like the result, which was: he was busted.

> For some reason he hasn't been able to convince enough people that this area is worth getting involved in. If you are so brilliant that nobody else can understand what you are doing, and you're working in theoretical math, then for me it's an open question about whether it matters. It just seems to me like at some point you ought to be able to explain to other experts why you're right.

--In his case there is an incentive problem. Should I (a hypothetical potential referee) devote 6 months of my life to learning his theory for no pay? Hard to justify, especially if there is a good chance Mochizuki is wrong. If, however, somebody DID do that, and announced Mochizuki was right, THEN the incentives change: now there is more motivation for me to burn the 6 months, because I'm pretty confident I'm learning something useful.

> Anyway, that's really neither here nor there, as cases where nobody else in the world can understand what you're doing are rare.

--True, Mochizuki is an extreme case. But quite likely extreme in the direction of being awesome.

> Generally when that happens it's because the writer is delusional, though I'll grant that isn't what's going on in Mochizuki's case.
>
> If you don't mind me saying, I would posit that the reason you have had trouble getting your work published is that it is in some cases unnecessarily opaque. I don't mean this in a critical way, as you are clearly extremely good at switching between theoretical and applied work. But let's take Smith 2000 as an example.

--Well, I actually got the referee reports, so I can try to know why they rejected it. In the case of Smith 2000, there was only 1 ref, who had a lot of derogatory adjectives for it, e.g. my work was "unprofessional." But in what WAY was it unprofessional, or wrong? He did not say; it evidently was so obvious to him that there was no need to say. Then he claimed that related work (which he implied I should have cited) was by Myerson. He did not say what work. So I then contacted Myerson, and he also did not know what work it was; he suspected it might have been some as-yet-unpublished work of his, and told me I could come to a summer course he was offering someplace over 1000 miles away, and pay him like $600 to take said course, and then maybe I'd know. Oh.

In the case of my book, I actually thought the ref reports on it were pretty complimentary, but the editor's issue with the book was that it contains (a) mathematics and (b) politics/history, and he thought I needed to pick one, not discuss both. Doing both, in his view, was not allowed in the same book. Oh.

More recently I wrote this:
http://rangevoting.org/EnvyFree.html
which was rejected without refereeing by the editor of the same journal that had just published the paper this followed up on. Apparently he felt that too many articles on the same subject in too short a timespan should not be allowed to happen.

> In this paper, I would argue that the result the community would be most interested in is the simulation. That should definitely be publishable.

--Other people have followed up on my 2000 work by doing their own sims similar to mine, and they arrived at results similar to mine.

david.cou...@drake.edu

Dec 25, 2014, 5:20:27 PM
to electio...@googlegroups.com
> --A better system would be refereeing in the open, on some bulletin board, by a team of non-anonymous reviewers, with interactive responses by the author (here Mochizuki), as well as comments & questions by whoever wanted to comment; i.e., they could ask him questions, he'd respond, etc.

Yup. It's hard with that model to figure out when something is really "accepted," but there ought to be a way to make it work. Good work on the interactive refereeing, by the way! 
 

> --True, Mochizuki is an extreme case. But quite likely extreme in the direction of being awesome.

Yeah. Went to Princeton at 16 and had his doctorate by 23. Probs not a faker!


> --Well, I actually got the referee reports, so I can try to know why they rejected it. In the case of Smith 2000, there was only 1 ref, who had a lot of derogatory adjectives for it, e.g. my work was "unprofessional." But in what WAY was it unprofessional, or wrong? He did not say; it evidently was so obvious to him that there was no need to say.

Yeah, some reviewers suck. Too bad it was only the one.

> In the case of my book, I actually thought the ref reports on it were pretty complimentary, but the editor's issue with the book was that it contains (a) mathematics and (b) politics/history, and he thought I needed to pick one, not discuss both. Doing both, in his view, was not allowed in the same book. Oh.

Yeah, that's what I was saying as well. It ought not to be that way, but the math is specialized enough that only a relatively small audience could read it, and that audience wouldn't necessarily be the one that is interested in the results. You could get a much larger audience going with one or the other.


> More recently I wrote this:
> http://rangevoting.org/EnvyFree.html
> which was rejected without refereeing by the editor of the same journal that had just published the paper this followed up on. Apparently he felt that too many articles on the same subject in too short a timespan should not be allowed to happen.

Ha! The editor probably just didn't want to look stupid publishing two opposing papers. I once had a paper panned in Nature. The author claimed that we had done a naive thing and made an error, and then went on to claim credit for the very thing that we had come up with in that paper. We wrote a response to Nature pointing out the places where we said the exact opposite of what the other piece claimed we had said, and made lots of very good points, imo. :) That got rejected because Nature has a policy of not publishing refutations (which seems like a crazy policy!). The plus side is that my paper now gets a lot of citations, but most of them are just parroting what the other guys said (it was in Nature, after all, and ours was just in Biophysical Journal), so forevermore I will be known in that field as the guy who built the wrong model before the guys who built the right one. Oh well, I left that field anyway...


> > In this paper, I would argue that the result the community would be most interested in is the simulation. That should definitely be publishable.
>
> --Other people have followed up on my 2000 work by doing their own sims similar to mine, and they arrived at results similar to mine.

Great! Do you have links for those?

-- Dave

Frank

Dec 26, 2014, 8:41:23 AM
to electio...@googlegroups.com
Consistent with Warren's experience: http://news.sciencemag.org/scientific-community/2014/12/does-journal-peer-review-miss-best-and-brightest



Warren D Smith

Dec 26, 2014, 12:21:14 PM
to electio...@googlegroups.com
There have also been (probably unethical, but it happened) studies of the form:

1. Gather a selection of already-published papers from good journals.
2. Re-submit them with names & titles changed.
3. What happens?

What happened was: (a) the large majority of them were rejected, and (b) in not even a single case was the plagiarism detected.

Frank

Dec 26, 2014, 11:10:33 PM
to electio...@googlegroups.com
I guess the question is why the papers were rejected. Plagiarism? Or something else?
--
You received this message because you are subscribed to the Google Groups "The Center for Election Science" group.
To unsubscribe from this group and stop receiving emails from it, send an email to electionscien...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Warren D Smith

Dec 26, 2014, 11:58:44 PM
to electio...@googlegroups.com
On 12/26/14, Frank <frankdm...@gmail.com> wrote:
> I guess the question is why the papers were rejected. Plagiarism? Or something else?

--Nah, they never spotted the plagiarisms. The conclusion seemed to be that the vast majority of already-refereed, already-accepted, already-published papers are so bad that, in the opinion of referees, they should be rejected. It also follows that essentially 100% of referees are not capable of spotting complete wholesale plagiarism in their own area (or anyhow were not capable of it back whenever that study was done). Anyhow, that study didn't give me a lot of faith in peer review and the wonderfulness of science.

Anyhow, in the modern era I see no reason why paper journals should be accorded any importance. They should be considered unimportant. There are people who consider them important. They are idiots, self-aggrandizing power brokers, and academic bullies, and/or those who wish to make huge profits charging extremely high prices for journals. Some journals cost over $30K per year; I am not making that up.

Instead, said journals should be 95% replaced by the arXiv, but with commenting and rating added. Publication will be immediate worldwide, free, and unreviewed, but will eventually accumulate reviewing far greater than ever previously available. The cost will be far lower than ever before, the publication speed far greater. Limitations set by costs and page limits should be abolished. Almost no editors and referees will be needed.

Comments should by default be by named, not anonymous, commenters, and the comments should be rated, so that good commenters get good ratings and their work can be accorded credit. Conspiracies to generate bogus comments etc. will thus be partially cured, because those commenters will be de-credited.
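The mechanics of this are trivial. Here is a sketch of the kind of data model I mean -- details obviously to taste, this is just to show how little machinery is needed:

from dataclasses import dataclass, field

@dataclass
class Commenter:
    name: str            # named by default, not anonymous
    credit: float = 0.0  # standing accumulated from ratings

@dataclass
class Comment:
    author: Commenter
    text: str
    ratings: list = field(default_factory=list)  # e.g. +1/-1 from readers

def apply_ratings(comment):
    # Good comments raise the author's credit; bogus ones de-credit
    # the author, partially deterring comment-generating conspiracies.
    comment.author.credit += sum(comment.ratings)

Everything beyond that is ordinary web-site engineering.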

It is all extremely simple and obvious how it should be done, but they refuse to do it, because of academics stuck in the stone age who do not wish to give up their pathetic little fiefdoms. One interesting example was some academic who was writing me and praising my work on voting, and he wanted to know how he should cite it... but then he said, sorry, his editor told him he was NOT ALLOWED to cite my work, since the work in question was not published in a paper journal; it was electronic. I'm serious. It's kind of like a mafia protection scheme by the paper journals -- work is not citable unless it goes through us. Because we say so.

Clay Shentrup

Dec 27, 2014, 12:16:57 PM
to electio...@googlegroups.com
We need the GitHub of academic papers.