The NY Democratic Primary Quiz


Richard Charnin

Apr 24, 2016, 11:35:59 AM
to Election Integrity

There were 1307 NY exit poll respondents at 9pm and 1391 at the final - an increase of just 84 respondents. 
The percentage changes favoring Clinton in those final 84 respondents, which were forced to match the recorded vote in all exit poll categories, were mathematically impossible.

Therefore, the recorded vote was also mathematically impossible. 
The impossible adjustments to the exit poll are irrefutable proof of election fraud.

The Exit Poll spreadsheet cross tabs: Final adjusted vs. 9pm vs. True Vote estimate...


The NY Primary Quiz:

Verified...@aol.com

Apr 24, 2016, 12:40:13 PM
to richard...@comcast.net, Election...@googlegroups.com
As much as I would like to agree (and I certainly agree that electronic fraud is taking place and is chronic), I have to provide a caveat: this particular method of proof (impossibilities based on # of EP respondents) relies on a misconception of what "# of respondents" indicates in a published exit poll.
 
As ridiculous as it may seem, "# respondents" is a term of art in exit polling; to my understanding, it does not refer to an actual number of respondents; it refers to the virtual number of respondents after any given stratification or weighting has been done.
 
The weighting process goes more or less as follows: the pollsters intentionally oversample certain groups (because, having chosen a particular precinct, they suspect the response rate there may be too low--there are all sorts of issues with both selection and response biases); then, when they find out how many whites/blacks/women/men/old/young/R/D/rich/poor etc. they have actually got responses from, they re-weight using any or all of those factors to bring the proportions in line with what they think the electorate composition should be (we believe that this may include composites of pre-election polling and certainly includes demographics drawn from prior elections' adjusted EPs, almost always right-shifted; there's a lot of art mixed in with the science--this is a big part of what the Mitofskys of the world were afraid would come out). When a given questionnaire is downweighted it counts as < 1 respondent and when it is upweighted it counts as > 1 respondent.
 
Those weightings yield the first published poll (they may also work in early votecounts gathered prior to poll closing). When they have, say, double the percentage of black respondents they believe they should have, they take every black respondent's questionnaire and cut the value of every response on it in half (as if each black respondent were a half a person); this of course includes the "Who did you vote for?" questions, but it also affects income, urban/rural, etc, AND the #respondents. So far, so good: it may seem hinky, but they're just trying to get a representative sample of the electorate, not an easy thing, even when exit polling (and especially in an era where so much of the voting is early/absentee). 
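[Editor's illustration] The "half a person" downweighting described above can be sketched with toy numbers. Everything here is invented for illustration (sample sizes, vote choices, the 20%-vs-10% oversample); it is not Edison/Mitofsky's actual procedure, only a minimal model of how a weight-sum "respondent count" and every crosstab move together:

```python
# Each questionnaire is a row of answers plus a weight; every published
# statistic is a weight-sum, so reweighting one group moves them all,
# including the virtual "# respondents".
sample = (
    [{"race": "black", "vote": "Kerry", "weight": 1.0} for _ in range(200)]
    + [{"race": "white", "vote": "Kerry", "weight": 1.0} for _ in range(350)]
    + [{"race": "white", "vote": "Bush", "weight": 1.0} for _ in range(450)]
)

# Suppose blacks were oversampled at double their believed share of the
# electorate: cut every black questionnaire in half ("half a person").
for q in sample:
    if q["race"] == "black":
        q["weight"] *= 0.5

virtual_n = sum(q["weight"] for q in sample)  # 900.0, though 1,000 were interviewed

def share(key, value):
    """Weighted share of questionnaires matching a given answer."""
    return sum(q["weight"] for q in sample if q[key] == value) / virtual_n

kerry_share = share("vote", "Kerry")  # falls from 55% raw to 50% weighted
```

Note how the single demographic adjustment simultaneously changes the vote split and the virtual respondent total, which is the crux of the "term of art" point.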
 
BUT . . ., when the votecounts start coming in (or when they get early wind of them before poll closing -- another dark secret Mitofsky was trying to protect), they begin the "adjustment" process, which is really just a re-weighting to bring everything in tune with the "actual" votecount. SO . . . if they have "too many" Sanders voters (let's say), they take every "Sanders" questionnaire and downweight it by a percentage necessary to match the "actual" Sanders vote. This "forced" reweighting also, like the regular demographic weightings, ripples through the whole questionnaire, changing the %s of blacks/whites/men/women/rich/poor/old/young etc., AND the #respondents. So, when we see the "# respondents" pegged to the first posted EP (which we know has been multi-stratified), the actual number of respondents has already been transformed by that weighting process into a different number (rounded to an integer for obvious reasons). That number then changes in the course of the adjustment process as successive votecount-based re-weightings occur. It generally goes up, though there have been some cases in which it goes down.  This does not mean that more questionnaires are being added to the raw data (although, given the vagaries of the relay and reportage systems, it is possible that they could be).
 
The distorting effects of adjustment are easier to see in D/R races like Bush/Kerry where it results in distorted PartyID %s that are glaring and can be compared in many cases with party registration numbers and with optical evidence of turnout and shown to be absurd (not that the media cares to do this). BUT simple proofs based on a literal reading of "# of respondents" and the impossibilities entailed are misleading because they take at face value numbers that are not literal but virtual and essentially meaningless from a quantitative forensics standpoint.
 
More generally (and I apologize for the length of this, but EPing is a rather dense thicket), there's a lot of guesswork in exit polling because it is dependent on a non-random sample (it is clustered for one thing and on top of that beset with all sorts of executional, as opposed to mathematical, distortions, including primarily selection biases and response biases that are very difficult to neuter), which has to be corrected in a process that involves estimations (which party's or candidate's voters turned out in greater proportion, which race, which gender, urban, rural, etc.?); and the bases for these estimations are generally drawn from prior elections' adjusted (i.e., distorted) polls. And this in an era where at-poll voting may not even be the way the majority of votes are cast in a given election; and it's no easy feat to sample early and absentee voters, groups which often vary significantly from the at-poll electorate.
 
So it is no great surprise that exit pollsters would "miss" an election here or there--even if the elections were honest.  What is very hard, verging on impossible, to account for is the pervasive pattern of virtually unilateral misses, some very large.
 
Pollsters are no idiots, have a lot of resources, are presumably trying to get it right, and have all this experiential input to help them correct their mistakes.  And they keep coming out to the left (or non-Karl Rove) side of the votecounts??  And much more likely to do so in competitive than in noncompetitive elections?  And they wind up going the opposite way in one state in 2016 (other than uncontested VT and super-scrutinized WI), and that state is Oklahoma, which just happens to be the only one of the states where the state rather than the vendors programs the computers??? 
 
It's those kinds of meta-patterns that seal the deal for me, and we've been seeing them, pervasively, since at least 2002.  It would be great to nail the bastards with a simple "proof," but we have to be careful about over-reaching with questionable definitions or data.  Something like "# of respondents" certainly seems solid and straightforward enough but there is very strong reason to believe that, like so many other parameters in this hall of mirrors, it is deceptive. We've more than made the case to examine a goddamn memory card. The big picture pattern of evidence makes that case a thousand times over. Unfortunately the blind or willful protectors of secrecy and fraud have not been moved, though things do seem to be heating up.  Easily debunkable "proofs" of fraud are double-edged swords: on the one hand, they do get more people stirred up and paying attention; on the other hand they risk loss of credibility for our whole movement and our collective efforts. -- Jonathan
 
 
--
To post, send email to Election...@googlegroups.com. Please review the "Posting Guidelines" page.
 
Please forward EI messages widely and invite members to join the group at http://groups.google.com/group/ElectionIntegrity/members_invite.
 
If you're not a member and would like to join, go to http://groups.google.com/group/ElectionIntegrity and click on the "join" link at right. For delivery and suspension options, use the "Edit my membership" link.
---
You received this message because you are subscribed to the Google Groups "Election Integrity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ElectionIntegr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Michael Keefer

Apr 27, 2016, 9:10:25 AM
to Election...@googlegroups.com, Jonathan Simon, richard...@comcast.net
Jonathan,

Thanks for this interesting post (and may I add, thanks for all your important work since 2004!). However, I have doubts about your dismissal of Richard Charnin's view--which I share--that the "mathematical impossibility" of the 'adjusted' exit poll figures points to complicity in fraud. I think the question of whether the number of respondents listed for an exit poll refers to actual or to "virtual" respondents doesn't invalidate the use of "mathematical impossibility" as an indicator of fraud.  

There are, as you suggest, a frustrating number of uncertainties and imponderables about the practices of exit pollsters. From their own statements and explanations, we do know quite a bit about how, on the basis of past voting patterns, they choose sampling precincts in such a manner as to obtain as representative as possible an overall (clustered) sample; and about how they deliberately over-sample minority groups in the hope of getting statistically significant results (and then, as you indicate, re-weight their figures, according to their best estimate of the actual percentage each group amounts to in the total number of people who voted). The demographic calculations involved are, as you say, systematically right-shifted in the US: that would be because the vote tallies from prior elections are marked by systemic patterns of vote suppression and fraud directed at Democratic-trending minorities, and because pre-election polling (where that is incorporated into the preparatory planning for exit polls) typically has a similar rightward deflection (remarked on by Steve Freeman and others, with reference to Gallup's use of the "likely voter" category).

I have no more than amateur status in these matters--but is it correct to assert that the late Warren Mitofsky and his ilk have feared any awareness on the public's part of these general processes of clustered sampling and re-weighting? My impression, on the contrary, has been that boasting about the large knowledge base, skill, and artful statistical procedures (or, if you prefer, manipulations) required for exit polling has always been part of their self-promotion--though they would of course indignantly reject the notion of a systematic right-shift built into the process, as well as any suggestion that vote tallies have been marked by systemic fraud since at least the 1980s.   

The early contamination of exit polling data by vote-tally information is another matter. That's something the pollsters would firmly deny--though Mitofsky acknowledged making unspecified use of what he called "quick counts." (I think this term refers to information collected--in parallel with the full exit poll questionnaires--by samplers who asked voters to indicate simply, without further elaboration, which presidential candidate they've just voted for. From what I read, years ago, about "quick counts," I understood them to be used by Mitofsky as a source of supplementary early information about vote trending that he provided to his media-consortium subscribers.) Data from the 2004 election might suggest that Mitofsky and Joseph Lenski (the principals of the 2004 Edison-Mitofsky exit polling consortium) would have had good reason to avoid stirring the early officially announced vote tallies into their calculations. (By the time polls were closing in the eastern states on November 2, 2004, the vote-tally figures provided by CNN at 8:50 pm EST showed Bush ahead of Kerry by 6,590,476 to 5,239,414 votes, and at 9:06 EST by 9,257,135 to 7,652,510--a lead shrinking from 11 percent to 9 percent--while pre-election polling would have led Edison-Mitofsky to expect a close race, with a probable Kerry victory; and the national exit poll figures available on CNN at 9:06 pm, with a stated 13,047 respondents, showed Kerry leading by nearly 3 percent.) But we don't know, in this and other elections, what procedures were followed in the exit-poll calculations. Are we faced here with an imponderable? Some kind of early contamination of exit polling data is widely suspected, but I'm not aware of hard information on the subject.    

The key issue, however, is the subsequent adjusting or forcing of the exit poll data, undertaken once the official vote tallies are largely complete.

Here we have clear evidence, at least in the 2004 election, of deliberate deception on the part of the exit pollsters. The national exit poll figures posted by CNN on the evening of November 2, 2004 were replaced at 1:36 am EST on November 3 by new figures, based on a stated 13,531 respondents, showing Bush ahead of Kerry by nearly 1.5 percent. A rise of 3.7 percent in the stated number of respondents was accompanied by a swing of 4.5 percent from Kerry to Bush in voters' reports of their choices. In the state exit polls that I was tracking during that election, similar effects were observable: in Ohio a 2.8 percent increase in the stated number of respondents, posted at 1:41 am on November 3, was accompanied by a 6.5 percent swing from Kerry to Bush; and in Florida, a 0.55 percent increase in stated respondents was accompanied by a 4 percent swing to Bush.
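[Editor's illustration] The arithmetic behind the "impossibility" reading of these national figures can be made explicit. This takes the stated respondent counts literally (which, per Jonathan's caveat earlier in the thread, may be a misreading), and the final-wave shares below are assumptions chosen to be consistent with the stated ~1.5-point Bush lead, not published numbers:

```python
# Back-computing the composition of the respondents added between the
# 9:06 pm posting (13,047, Kerry 51 / Bush 48) and the 1:36 am posting
# (13,531), reading "# respondents" literally.
n1, kerry1, bush1 = 13_047, 0.51, 0.48
n2 = 13_531
kerry2, bush2 = 0.490, 0.505  # assumed final shares (~1.5-point Bush lead)

added_total = n2 - n1                 # 484 new respondents
added_bush = bush2 * n2 - bush1 * n1  # ~571 additional Bush votes needed
# More Bush votes than respondents added: impossible if the counts are
# literal tallies of questionnaires rather than weight-sums.
```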

Lenski told Richard Morin of the Washington Post that an Edison-Mitofsky server had malfunctioned shortly before 11 pm--"barely minutes before the consortium was to update its exit polling with the results of later interviewing that found Bush with a one-point lead"--and this "glitch prevented access to any exit poll results until technicians got a backup system operational at 1:33 am" on November 3. But this explanation appears to be false: the adjusted or forced Florida figures were posted at 1:01 am.

Lenski offered Jim Rutenberg of the New York Times an equally deceptive explanation of the divergences between the November 2 and the subsequent exit poll figures, saying "that it was possible that more Democrats and more women were voting earlier, perhaps skewing the data in the afternoon. But, he said, by the end of the night the system's polling data basically tracked with the actual results. 'Sophisticated users of this data know the limitations of partial survey results,' he said." In fact, the data sets released to CNN and the other media participants in the National Election Pool at 3:59 pm, 7:33 pm, and around 9 pm (based, respectively, on 8,349, 11,027, and 13,047 stated respondents) consistently showed Kerry leading Bush by 51 to 48 percent. If by "polling data" we mean numbers actually derived from the (demographically re-weighted) responses of voters to the exit poll questionnaires, as opposed to altered figures substituted for the data to make it appear to conform to the actual vote tally, there was no point on November 2 or afterwards at which the "polling data ... tracked with the actual results." 

Warren Mitofsky participated in this deception. When Keith Olbermann referred on the November 23, 2004 MSNBC Countdown program to "the variance among the early and late exit polls, and the voting," Countdown received what Olbermann described on the November 24 program as a "strident" email from Mitofsky protesting against the program's "misinformation," and insisting that "no early exit polls" had been released by his company or by Lenski's Edison Media Research: "the early release came from unauthorized leaks to bloggers who posted misinformation."

There were indeed unauthorized leaks, presumably from within the National Election Pool. (But this was not raw data: the figures had been demographically re-weighted.) Mitofsky may have thought he could wish away the figures that he and his colleagues had supplied to the NEP on the afternoon and evening of November 2: after all, those percentages had been erased after midnight when CNN and other subscribers replaced them with corrupted ones. He was perhaps hoping that people would forget that the Washington Post had published the final November 2 exit poll data in the morning edition of November 3, and would be unaware that you and Steve Freeman had preserved and circulated screen shots of the November 2 data. But neither Olbermann's remark nor the leaked early data posted by bloggers were "misinformation." 

It's important to recognize that the process of adjusting or forcing exit polls to bring them into conformity with the official vote tallies involves the conflation of two categorically different sets of data: one which is rich in demographic information, and one which contains no such information whatsoever. You quite rightly say that a forced re-weighting "ripples through" the whole exit poll data set, producing "distorting effects" that can be "glaring." But one should distinguish between these effects and those produced by the prior demographic re-weighting undertaken in order to remove distortions such as those introduced by the deliberate oversampling of minorities. The demographic re-weighting involves changes that (if we concede to exit polling a place among the social sciences) are scientifically justifiable; in contrast, the forced re-weighting is necessarily arbitrary: it involves fudging the data in whatever ways seem convenient in order to have it add up to a total that is more or less closely aligned with the official vote tally.

In 2004, the distorting effects that resulted from this fudging were indeed glaring. One of these is something you allude to: a distortion in the party IDs of voters. There is evidence in the 2004 exit poll figures of a sampling bias that favoured the Republican Party. Although Al Gore won the popular vote in 2000 by 540,000 votes, or 0.5 percent, the successive waves of November 2, 2004 exit polls show 3 percent more Bush than Gore voters among respondents who said they had voted in 2000. This difference was inflated to fully 6 percent in the "forced" November 3 figures, according to which 43 percent of 2004 voters had supported Bush in 2000, and only 37 percent had voted for Gore. These percentages generate the absurd conclusion that the active 2004 electorate included 52.6 million people who had voted for Bush in 2000--an election in which he received 50.5 million votes.
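[Editor's illustration] The absurdity in the preceding paragraph can be checked arithmetically. The 43 percent figure and Bush's 2000 total come from the text above; the total 2004 turnout (~122.3 million) is an outside estimate I am supplying:

```python
# If 43% of the 2004 electorate had voted for Bush in 2000, the implied
# count of 2000 Bush voters exceeds his entire actual 2000 vote.
turnout_2004 = 122_300_000          # approx. total 2004 presidential vote (editor's figure)
bush_2000_actual = 50_500_000       # Bush's 2000 total, as stated in the text
implied_bush_2000_voters = 0.43 * turnout_2004   # ~52.6 million
```

The implied figure exceeds the actual 2000 total even before accounting for 2000 Bush voters who died or stayed home in 2004, which makes the contradiction worse.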

The forcing produced similar effects with respect to minority voters. For example, what appears to have been a pro-Bush sampling bias in the November 2 exit poll's reporting of Hispanic votes was exacerbated in the November 3 figures. Although an exit poll by the William C. Velasquez Institute showed Bush receiving less than the 35 percent of Hispanic votes he received in 2000, the November 2 exit poll credited him with 41 percent of such votes, a figure raised on November 3 to 44 percent. A month later, NBC News took the unprecedented step of revising its exit poll estimates, reducing Bush's Hispanic support to 40 percent, and, in Texas, changing an 18-point win by Bush into a 2 percent win for Kerry among Hispanics.

But the largest distortion was produced by the claim, embedded in the forced data, that Bush's 2004 victory was based on a massive 66 percent increase in voter turnout in the major urban centres, led by an increase of more than four million in the number of white voters. However, as Michael Collins demonstrated, there is strong evidence that the supposed surge in big-city white Republican voters never occurred: the actual increase in turnout in big cities was more on the order of 13 percent, and the likelihood that most of the people in this group supported Bush is vanishingly small.

As these examples suggest, the forcing of exit poll data to fit divergent vote tallies amplifies existing errors, and makes the corrupted exit polls useless for any honest purpose. (A number of academic political scientists made use of this corrupted data in the years following the election and were--one hopes unwittingly--led into various follies as a result.)

Turning now to the issue you raise of 'virtual' as opposed to actual exit poll respondents, I confess myself puzzled. First, then, a request for help: can you point to texts in which practitioners use "# respondents" in the "term of art" manner that you describe?

Here's one source of my puzzlement. It seems to me that a very simple thought-experiment can demonstrate the superfluity, when it comes to legitimate re-weighting of exit polls, of a distinction between 'actual' and 'virtual' respondents. Let's imagine an exit poll carried out in a jurisdiction in which one-tenth of the registered voting population is African-American and nine-tenths is 'white'. The sample size is, for convenience, 1,000; and for the sake of statistical validity in relation to minority voters, 20 percent of that sample is African-American. Re-weighting the sample is simple: divide the numerical responses of the 200 black respondents by 2, and multiply the numerical responses of the 800 white respondents by 1.125. (This produces the effect of a sample divided 90 percent to 10 percent between white and black respondents.) One notes in passing that the resulting number of 'virtual' respondents here would be the same as the number of 'actual' respondents: 1,000. The distinction has no apparent function.
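[Editor's illustration] The thought-experiment arithmetic checks out, and a few lines make the conclusion explicit:

```python
# 1,000 respondents: 200 black (oversampled to 20%), 800 white.
# Target electorate: 10% black, 90% white.
reweighted_black = 200 * 0.5     # 100.0
reweighted_white = 800 * 1.125   # 900.0

virtual_total = reweighted_black + reweighted_white  # 1000.0: unchanged
black_share = reweighted_black / virtual_total       # 0.10, as intended
```

A pure reproportioning like this preserves the total exactly, which is the point of the example: the 'virtual' and 'actual' counts coincide, so the distinction does no work here.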

I'd prefer to remain skeptical as to the use of this notion of 'virtual' respondents until I've seen evidence that it is actually and not just hypothetically in play. And in defence of a commonsense notion that when pollsters indicate a number of respondents they are making reference to a determinate sequence of actual encounters, mediated by exit poll questionnaires, between samplers and real people who have just cast votes, I'm tempted to quote the philosopher David Hume. In controversy with Calvinist theologians who supposed that, the deity being mysterious and incomprehensible, divine attributes such as justice and mercy could not have any determinate meaning, Hume declared that if our ideas on the subject, "so far as they go," are not just, adequate, and in line with actuality, then "I know not what there is in this subject worth insisting on." 

One might think of expanding my little thought-experiment by adding gender breakdowns within the samples of black and white respondents, by assuming (we're still in 2004) a vote split between Kerry and Bush of 55 percent for the former and 45 percent for the latter, and by imagining an Evil Manipulator whose job it would be to reproduce schematically the effects observable in 2004 in the national exit poll and the state exit polls in Ohio and Florida by bringing about a 10 percent swing in the vote with a 5 percent increase in the number of exit poll respondents. (The Evil Manipulator could be assured that no outsider would have access to his manipulations of real or imaginary data, and his only governing criterion would be one of plausibility.) But the fatuity of such an exercise is immediately obvious, because the array of possible linked data manipulations is a garden of forking paths. The precise fraudulent re-weightings that are carried out are of no interest beside the basic point: fraud is manifest in the fact that a small change in sample size produces a larger percentage shift in voters' choices.

I want to quibble with your claim that the ripple effect of re-weightings carries through the entire data field including "the #respondents." That would only be the case if the person doing the re-weighting decided to include the number of respondents as a category within the data matrix--and there is no reason to do so. Unlike the other figures, which make up the content of the exit poll and are properly interdependent (e.g., re-weighting the sample of black voters will alter the figures reported for the incomes of Democratic Party supporters), the number of respondents is of interest only as an indicator of the poll's margin of error.     
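[Editor's illustration] Since the respondent count's forensic relevance runs mainly through the margin of error, here is a minimal sketch using the simple-random-sample formula. The design-effect value is an illustrative assumption: clustered exit poll designs like the NEP's inflate the SRS figure, and the 1.7 below is not a published NEP parameter:

```python
import math

def margin_of_error(n, p=0.5, z=1.96, design_effect=1.0):
    """95% margin of error for a proportion p with n respondents.

    Simple-random-sample formula; a design_effect > 1 approximates the
    inflation caused by clustered sampling.
    """
    return z * math.sqrt(design_effect * p * (1 - p) / n)

srs_moe = margin_of_error(1391)                           # ~2.6% under SRS assumptions
clustered_moe = margin_of_error(1391, design_effect=1.7)  # larger, ~3.4%
```

Either way, the NY final sample's nominal margin of error is far smaller than the exit-poll-versus-votecount discrepancies discussed in this thread.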

I guess my puzzlement or confusion comes down to this (with apologies if I'm being repetitive). Where we have a pattern of post-election alterations of exit polls, accompanied (as in the 2004 examples I've given, and in this year's Massachusetts and New York Democratic primaries) by statements of increases in the number of respondents that are not nearly large enough for the percentage changes in voters' choices to be legitimate, then we have, as I first wrote on November 5, 2004, "footprints of electoral fraud." To dismiss this as not forensically interesting doesn't make sense to me. Whatever manipulations were carried out by the exit pollsters during the process of corrupting their data by conflating it with vote tally percentages remain--in the absence of whistleblowers--occult to us. In the language of classical epistemology, those manipulations inhabit the realm of the Kantian noumenal, the Ding an sich to which our perceptions can't by definition give unmediated access. But so what? We still have unmistakable evidence of illegitimate alterations of data within the instrument that--bar tampering of this kind--we know to be a reliable indicator of corruption in the vote count.

Michael Keefer  

Theodore de Macedo Soares

Apr 27, 2016, 2:55:55 PM
to Michael Keefer, Election...@googlegroups.com, Jonathan Simon, richard...@comcast.net
Michael,

Excellent and thoughtful post. Leaving aside, for the moment, the obvious unreasonableness of the widespread implicit or explicit blind faith in unverified computer vote counts--at a time when the massive chicanery surrounding our electoral process (mainly very messy voter suppression activities) is in full bloom and in full view, and when the mechanics for undetected manipulation of the computer vote count have been proven time and again to be so much easier to accomplish--it seems important, for the sake of credibility and effectiveness, not to make overstatements that are easy to criticize.

Richard Charnin's statement "[t]he impossible adjustments to the exit poll are irrefutable proof of election fraud" seems just such an overstatement. In your post you tone down the overstatement by viewing the "'mathematical impossibility' as an indicator of fraud." One may even quibble with your restatement and call it a possible indicator of fraud. Edison Research and the members of the NEP consortium, with their rationale of adjusting the "wrong" exit poll data to match the "correct" unverified computer vote count, would certainly deny fraud as their motive. The "mathematical impossibility," given their rationale and taken at face value, only demonstrates their blind faith in the unverified computer vote counts, coupled with a willingness to disregard the considerable effort involved in producing exit polls by altering them as necessary, even arbitrarily, to have them match the computer counts.

Unreasonable blind faith in unverified computer counts, yes. Intellectually dishonest willingness to trash their own exit polls, yes. Irrefutable proof of election fraud, no.

Arguments are stronger absent overstatements.

Ted




Verified...@aol.com

Apr 27, 2016, 7:56:13 PM
to mhke...@gmail.com, Election...@googlegroups.com, richard...@comcast.net
Michael --
 
OK, I've mulled this enough to try to answer re. the "forcing" adjustment process and "#respondents."  First, thank you for putting the thought you did into these murky questions. I was hoping that David Moore might be able to shed some light, but have not yet heard back from him, so here goes.
 
The forcing process is, in pure form (we'll come back to this), a reweighting of responses to congruence with the votecounts. So the "who did you just vote for?" question becomes the independent variable in the EP crosstabs and everything else becomes dependent variables. So, to use made up numbers for illustration, if, after the demographic weighting process is completed, Kerry has 60% in the "who did you vote for" EP Q and Bush has 40%, but CNN is reporting 100% precincts in and the votecount is Kerry 50% Bush 50%, the forcing algorithm proceeds to make the EP come out 50%/50% (congruent) by downweighting every "I voted for Kerry" questionnaire by multiplying it by a factor of 0.833 (5/6) and upweighting every "I voted for Bush" questionnaire by multiplying it by a factor of 1.25 (5/4). Kerry's and Bush's EP results will now be 50%/50%.
 
This forcing process overrides any previous estimations and calculations of the electorate's profile. It is based on the bedrock assumption that the votecounts are accurate (unrigged) and would result in a congruent EP were the EP not affected by random error, clustering, and selection and response biases.  In other words, it trusts the votecount absolutely and distrusts the EP design and/or execution relatively. We have more than ample reason to challenge that assumption, but we have to be careful what we base that challenge upon. When the forcing algorithm does its work--at least in theory, and if it is allowed to operate without any other tweaking--it also downweights and upweights every other response on the affected questionnaires. So if an "I voted for Kerry" questionnaire was filled out by a respondent who indicated that she was black, female, a Democrat, 27 years old, and voted for Gore in 2000, every one of those responses would be downweighted from 1.0 to 0.833; conversely for the "I voted for Bush" questionnaire: if the respondent indicated he was white, male, a Republican, 55 years old, and had voted for Bush in 2000, every one of those responses would be upweighted from 1.0 to 1.25. When the forcing dust settles, you can see how the sample would now almost certainly have a lower % of blacks, a lower percentage of females, a lower percentage of Democrats, a lower percentage of 18 - 29 year-olds, and a lower percentage of "Gore voters" than it did before the forcing process kicked in (and conversely higher for the categories of the Bush respondent). How much lower (or higher) depends upon the extent to which the demographic characteristic is correlated with the vote choice--so a lot of change in party ID, race, prior election's vote, less for gender and age.
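[Editor's illustration] Jonathan's made-up 60/40-to-50/50 forcing example can be run end to end. The demographic composition below is invented purely to show the ripple; it is not based on any actual exit poll data:

```python
# Made-up sample: Kerry 60% (of whom a third are black), Bush 40%.
sample = (
    [{"vote": "Kerry", "race": "black", "weight": 1.0} for _ in range(200)]
    + [{"vote": "Kerry", "race": "white", "weight": 1.0} for _ in range(400)]
    + [{"vote": "Bush", "race": "white", "weight": 1.0} for _ in range(400)]
)

# EP reads Kerry 60 / Bush 40; the votecount says 50/50, so force it:
# downweight Kerry questionnaires by 5/6, upweight Bush ones by 5/4.
factors = {"Kerry": 50 / 60, "Bush": 50 / 40}
for q in sample:
    q["weight"] *= factors[q["vote"]]

total = sum(q["weight"] for q in sample)  # still 1000: forcing is respondent-neutral

def share(key, value):
    """Weighted share of questionnaires matching a given answer."""
    return sum(q["weight"] for q in sample if q[key] == value) / total

# Vote shares are now forced to 50/50, and the black share has rippled
# down from 20% to ~16.7% without a single questionnaire being re-tallied.
```

Note that the virtual respondent total is unchanged by the forcing, while every crosstab correlated with vote choice shifts, which matches the description above.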
 
Here's where I think it gets interesting. If the forcing process leads to implausible demographics (way too few blacks or Democrats; way too many women or return X voters), well that's embarrassing.  So embarrassing that anyone not on the system's payroll and covering for it would say, "Whoa! Something's wrong here with that forcing process; it's giving us crazy demographics which we know can't be accurate."  But what they do instead is one of two things: either publish the wacky demographics (e.g., the 37%D/37%R or urban hordes for Bush in 2004) and let the pundits attribute it all to turnout ("looks like the GOP won the turnout battle bigtime again, Jim" or "those inner-city voters just don't seem motivated in these midterms, Judy"). Or, quite possibly, just massage the more outlandish demographic markers up/down to the plausibility zone. No raw data so who is ever going to know?  I think it's fair to say that competitive professionals will do a lot to avoid embarrassment, and what else is there to do, other than blowing the cover on America's dirtiest secret?
 
So the forcing process often leaves a numerical/demographic slime trail that Richard, among a few others, has examined forensically and found various high improbabilities, sometimes bordering on if not crossing over into the impossible. However this slime trail does not involve the #respondents for the very reason you give, Michael--which is that the upweightings and downweightings of the forcing process should be #respondent-neutral. What we see when Sanders lost and Clinton gained an "impossible" number of EP votes while the #respondents stayed constant or barely moved is simply the direct result of the forcing (or any weighting) process. In other words the candidate %s are not changing because a whole bunch of new questionnaires are being tallied but because E/M is using some data--either demographic or, in the case of forcing, tabulated votecounts--to change them. There is no great mystery to that: forcing is going to change the "I voted for X" %s directly and all the other demographics (seen in the crosstabs) indirectly. There is nothing legitimate about forcing but there is nothing mysteriously illegitimate either. It is simply illegitimate, in plain view as it were. Linking it to an "impossibility" relative to #respondents actually misses the point of the problem, as it has nothing to do with #respondents but everything to do with distorted demographics (which may or may not be presented with good-faith fidelity).
 
I hope that we can turn our attention to the question that has been haunting me all day: why all of a sudden, spot-on EPs in the Dem primaries, virtually no adjustment? Were they also spot-on on the Republican side? Was Rove able to predict however long in advance that it would be in the bag after NY and no more rigging needed? Did E/M give up and just pre-adjust 100%?  If so, based on what?  Early access to votes from AP? Pattern analysis from earlier primaries?  A lot to think about and try to figure out.
 
--Jonathan

Theodore de Macedo Soares

Apr 28, 2016, 10:41:31 AM
to Theodore de Macedo Soares, Michael Keefer, Election...@googlegroups.com, Jonathan Simon, richard...@comcast.net
Just to clarify and elaborate, I don't mean to absolve Edison Research and the NEP consortium from being possibly complicit (actively or passively) with election fraud. Just that the example provided by Richard Charnin is not sufficient as an "irrefutable proof of election fraud." 

The discrepancies between the exit polls and the unverified computer vote counts that I documented in the Democratic Party primaries in various states of 8.0%, 8.3%, 9.3%, 9.9%, 10.0%, 11.6%, 12.2%, and 14.0% all in favor of Clinton are huge and alarm bells of election fraud should be ringing everywhere.

If, as Mark Blumenthal asserted, the first published exit polls may be a “composite” (see 13th paragraph in link) of the exit poll delivered by Edison with the average of pre-election polls, and as these pre-election polls generally overpredict Clinton’s vote count, the discrepancies between the unadulterated Edison exit poll data and the unverified computer vote counts may be even larger and more pervasive.

The fact that Edison Research and the major news networks comprising the NEP consortium are not transparent with their handling of exit poll data and keep their methodology and unadulterated exit poll data under lock and key is, to me, more damning than their rationalized arbitrary jiggering with their exit polls to match the unverified computer vote counts.

Ted
