
[CM] Troll spotting algorithm


RS Wood

Apr 14, 2015, 10:52:43 AM
From the «none of that here» department:
Title: Stanford Computer Scientists Develop "Troll-Spotting Algorithm,"...
Author:
Date: Sat, 11 Apr 2015 07:04:46 -0400
Link: http://www.topix.net/science/computer-science/2015/04/1504119NENVT?fromrss=1

A group of Stanford computer scientists has developed an algorithm that
supposedly can detect trolls by analyzing as few as five comments. Today,
Justin Cheng at Stanford University in California and a few pals say they have
created just such a tool by analyzing the behavior of trolls on several
well-known websites and creating an algorithm that can accurately spot them
after as few as 10 posts.


--
Posting to comp.misc, sci.misc, and misc.news.internet.discuss

Dan Espen

Apr 14, 2015, 2:02:10 PM
I look forward to the day said algorithm becomes part of Usenet.
With all its warts, it's better than what we face now.

--
Dan Espen
Message has been deleted

Dan Espen

Apr 14, 2015, 2:28:40 PM
Hils <hi...@saynotospam.net> writes:
> It's only a matter of time before someone uses the algorithm to produce
> trollbots.

And the ensuing battles will be known as the Troll Wars.

--
Dan Espen

Shadow

Apr 14, 2015, 4:21:39 PM
So that's what all those spots were. I thought I'd come down
with measles!!
;)
[]'s
--
Don't be evil - Google 2004
We have a new policy - Google 2012
Message has been deleted

RS Wood

Apr 15, 2015, 8:28:14 AM
On 2015-04-14, Hils <hi...@saynotospam.net> wrote:
>> And the ensuing battles will be known as the Troll Wars.
>
> It could be a way of developing AI systems, with troll bots and troll
> detectives playing cat and mouse among the human population, scoring
> points for evading detection or soliciting human replies, losing
> points for false accusations. They could be the Turing Troll Wars.

That would make this classic XKCD germane:
https://xkcd.com/810/

"Spammers are breaking traditional captchas with AI, so I've built a new
system. It asks users to rate a slate of comments as 'constructive' or
'not constructive' ... "

I /do/ miss the days of really clever trolling: subtle remarks designed
to casually cause a conflagration. These days so much of it is simple
ass-hattery.

I thought a bit about what this type of algorithm would look like: the
science ought to be something like Bayesian filtering, probabilities of
word order, and so on. But it's hard to pick out sarcasm, nuances of
humor, and the like. Post length is no easy indicator. No single
vocabulary word gives it away. IP address might help these days now
that so few are on dial-up and user addresses stay the same for longer.
On Usenet, user name often helps (ignore any post from a user named
"obama...@donkeyballs.org" or whatever on the grounds a fake ID with
an inflammatory address is likely to be out to stir the pot?). You
could look for patterns of a user consistently replying after a post
about certain subjects (which would draw out the single-issue haters).
Beyond that, this isn't easy stuff and I'd think the false positives
would be ridiculous.
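
Something along these lines is what I have in mind, a back-of-the-envelope
Bayesian word-frequency score over a user's first few posts. All of it
(the training posts, the tokenizer, the threshold) is invented for
illustration, and it's certainly nothing like whatever Stanford actually
built:

  import math
  from collections import Counter

  def tokenize(text):
      return text.lower().split()

  def train(posts, labels):
      """Count word frequencies per class ("troll" / "ok") from labeled posts."""
      counts = {"troll": Counter(), "ok": Counter()}
      totals = {"troll": 0, "ok": 0}
      for text, label in zip(posts, labels):
          for word in tokenize(text):
              counts[label][word] += 1
          totals[label] += 1
      return counts, totals

  def looks_like_troll(first_posts, counts, totals, threshold=0.0):
      """Sum log-odds of troll vs. ok over a user's first posts."""
      log_odds = math.log((totals["troll"] + 1) / (totals["ok"] + 1))
      troll_words = sum(counts["troll"].values())
      ok_words = sum(counts["ok"].values())
      for text in first_posts:
          for word in tokenize(text):
              p_troll = (counts["troll"][word] + 1) / (troll_words + 2)
              p_ok = (counts["ok"][word] + 1) / (ok_words + 2)
              log_odds += math.log(p_troll / p_ok)
      return log_odds > threshold

  # Toy usage with made-up training data:
  counts, totals = train(
      ["you are all idiots", "nice writeup, thanks"],
      ["troll", "ok"],
  )
  print(looks_like_troll(["idiots idiots idiots"], counts, totals))  # True

Word counts alone would of course miss the sarcasm and nuance mentioned
above, which is exactly where the false positives would come from.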

There are frequent, long discussions on soylentnews about how to tweak
the karma and scoring algorithms, but it always comes down to users
trying to adjust the math so they only hear from the people they want to
hear from and never see posts from the people they don't.
Usenet, for all its shortcomings, does away with the issue by letting
anyone post anything and leaving it to the user with his killfile and
filters to sort out the damage. Maybe the only way to win the game is
not to play it ...
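
Roughly, a killfile boils down to client-side rules like the sketch
below; the header fields and patterns are just invented examples, not
any particular newsreader's killfile format:

  import re

  KILL_RULES = [
      ("From", re.compile(r"donkeyballs\.org", re.IGNORECASE)),
      ("Subject", re.compile(r"\bobama\b", re.IGNORECASE)),
  ]

  def keep(article):
      """Return False if any kill rule matches the article's headers."""
      for field, pattern in KILL_RULES:
          if pattern.search(article.get(field, "")):
              return False
      return True

  articles = [
      {"From": "obama...@donkeyballs.org", "Subject": "wake up sheeple"},
      {"From": "hi...@saynotospam.net", "Subject": "Re: Troll spotting"},
  ]
  visible = [a for a in articles if keep(a)]  # only the second survives

Crude, but it keeps the decision entirely in the reader's hands instead
of in some site-wide scoring committee.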

Sylvia Else

Apr 16, 2015, 9:16:03 AM
One has to wonder how they validate their data.

After all, before one can check the algorithm, one has to have some
other way of identifying trolls.

A more accurate statement of the accomplishment would be that the
scientists have created an algorithm that replicates their own
assessment of who is a troll.
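
To put it concretely: however you split the data, the reported accuracy
is just agreement with whoever did the labelling. A toy illustration
(the labels and predictions below are invented):

  human_labels = ["troll", "ok", "ok", "troll", "ok"]     # the researchers' own judgments
  predictions  = ["troll", "ok", "troll", "troll", "ok"]  # classifier output

  agreement = sum(h == p for h, p in zip(human_labels, predictions)) / len(human_labels)
  print(f"agreement with the labellers: {agreement:.0%}")  # 80%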

Sylvia.

Message has been deleted
Message has been deleted