Police use Experian Marketing Data for AI Custody Decisions

Ian Jackson

Apr 23, 2018, 4:01:58 PM
https://bigbrotherwatch.org.uk/all-media/police-use-experian-marketing-data-for-ai-custody-decisions/

Durham Police has paid global data broker Experian for UK postcode
stereotypes built on 850 million pieces of information to feed into an
artificial intelligence (AI) tool used in custody decisions, a Big
Brother Watch investigation has revealed.

...

This is quite scary. What can be done to stop it ?

--
Ian Jackson <ijac...@chiark.greenend.org.uk> These opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is
a private address which bypasses my fierce spamfilter.

Chris R

Apr 24, 2018, 4:17:43 AM
On 23/04/2018 17:53, Ian Jackson wrote:
> https://bigbrotherwatch.org.uk/all-media/police-use-experian-marketing-data-for-ai-custody-decisions/
>
> Durham Police has paid global data broker Experian for UK postcode
> stereotypes built on 850 million pieces of information to feed into an
> artificial intelligence (AI) tool used in custody decisions, a Big
> Brother Watch investigation has revealed.
>
> ...
>
> This is quite scary. What can be done to stop it ?
>
I don't see any reason to stop it. Data protection should focus on the
use to which data is put, not on who has it. For AI to make sensible
predictions from big data it needs as many inputs as possible.
Depriving it of inputs just makes good outcomes less likely.
--
Chris R

Robin

Apr 24, 2018, 6:23:56 AM
I agree.

ISTM no different in principle from the use of "soft intelligence" - eg
that gathered by community police teams. There's freedom to explore and
debate the balance between the right to privacy under Article 8 and the
interests of the prevention of crime and the protection of the rights
and the freedoms of others. But I fail to see the problem if custody
officers with the data in the AI tool make better decisions than they do
without.

Ditto if the same data is used to help GPs identify people at risk of
certain health conditions, teachers those with special needs, ...

--
Robin
reply-to address is (intended to be) valid

Bill

Apr 24, 2018, 10:01:49 AM
On 23/04/2018 17:53, Ian Jackson wrote:
> https://bigbrotherwatch.org.uk/all-media/police-use-experian-marketing-data-for-ai-custody-decisions/
>
> Durham Police has paid global data broker Experian for UK postcode
> stereotypes built on 850 million pieces of information to feed into an
> artificial intelligence (AI) tool used in custody decisions, a Big
> Brother Watch investigation has revealed.
>
> ...
>
> This is quite scary. What can be done to stop it ?
>

It seems to me reasonable that the police know who we are.

As I see it the real problem with AI/big data is malicious authority
cherry-picking data and misrepresenting the significance of such data.

So the problem is not that the police need to know less about us, but
that we need to know more about the police.

Speaking of which did you see:
<http://www.bbc.co.uk/news/uk-england-bristol-43849453>

Who would have guessed that would happen?

Jon Ribbens

Apr 24, 2018, 10:02:22 AM
On 2018-04-24, Robin <rb...@hotmail.com> wrote:
> On 24/04/2018 08:49, Chris R wrote:
>> On 23/04/2018 17:53, Ian Jackson wrote:
>>>    Durham Police has paid global data broker Experian for UK postcode
>>>    stereotypes built on 850 million pieces of information to feed into an
>>>    artificial intelligence (AI) tool used in custody decisions, a Big
>>>    Brother Watch investigation has revealed.
>>>
>>>    ...
>>>
>>> This is quite scary.  What can be done to stop it ?
>>>
>> I don't see any reason to stop it. Data protection should focus on the
>> use to which data is put, not on who has it. For AI to make sensible
>> predictions from big data it needs as many inputs as possible.
>> Depriving it of inputs just makes good outcomes less likely.
>
> I agree.
>
> ISTM no different in principle from the use of "soft intelligence" - eg
> that gathered by community police teams.

The difference is that that data is gathered by the police, who know
to what use it's likely to be put.

Credit reference agencies aren't run with the expectation that the
information they collect and distribute is going to be used for
decisions affecting people's liberty. If an incorrect decision is
made, the police are going to hide behind "it's not our fault,
the data was wrong and it wasn't our data" and the credit agency
is going to hide behind "we do not recommend making custody decisions
based on our data". It allows all parties to disclaim all
responsibility.

> But I fail to see the problem if custody officers with the data in
> the AI tool make better decisions than they do without.

That "if" in your sentence there is doing an awful lot of work.

Martin Brown

Apr 24, 2018, 10:02:48 AM
On 24/04/2018 08:49, Chris R wrote:
I don't see a way to stop it but one example where postcode stereotypes
go wrong happens near me. There is a crime hotspot including some
residences and a motorway services in the same postcode where petrol
theft and other petty car crime is a problem. The other denizens of that
postcode have trouble with their insurance premiums as a result (even
though the services is completely isolated from the rest of it).

--
Regards,
Martin Brown

Robin

Apr 24, 2018, 10:51:21 AM
On 24/04/2018 11:59, Jon Ribbens wrote:
<snip>
>
> Credit reference agencies aren't run with the expectation that the
> information they collect and distribute is going to be used for
> decisions affecting people's liberty.


Point of information: the vast majority of information in Mosaic has
nothing to do with Experian's work as a "credit reference agency". It
comes from other sources (and much of it publicly available) such as:

Electoral register
Lifestyle survey responses
Family/personal names linked to ethnicity
Directors from Companies House
Neighbourhood information
Rurality/Urbanisation
Shopping accessibility measures
Commercial/Residential mix
Census data
Land Registry data from 1995 onwards
Registers of Scotland Transaction data
Council Tax band
PAF

Chris R

Apr 24, 2018, 11:20:11 AM
So what they need is more and better data?

AI is only ever going to come up with a probability based on collective
experience. Hopefully that is better than relying on the lifetime
collection of prejudices of an individual officer making the decision.
You would not be making the decision just on postcodes, but every extra
bit of information contributes to making the decision better. Big
problems will come from AI if it is relied upon when its model has
insufficient data. If it turns out that postcodes are of no help at all
in making decisions, AI will establish that far more efficiently than a
human would.
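
To illustrate that last point with a toy sketch (synthetic data and
scikit-learn, nothing to do with the actual Durham tool): a fitted model
drives the weight on a pure-noise "postcode" feature towards zero, which
is one way a model can establish that an input is of no help.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
prior_offences = rng.normal(size=n)   # synthetic, genuinely informative input
postcode_score = rng.normal(size=n)   # synthetic, pure-noise "postcode" input
y = (prior_offences + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([prior_offences, postcode_score])
model = LogisticRegression().fit(X, y)
print(model.coef_)  # weight on the noise input comes out near zero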

I haven't heard it suggested that custody decisions are going to be
made entirely by a computer.
--
Chris R

Bill

Apr 24, 2018, 11:49:17 AM
On 24/04/2018 15:51, Robin wrote:
> On 24/04/2018 11:59, Jon Ribbens wrote:
> <snip>
>>
>> Credit reference agencies aren't run with the expectation that the
>> information they collect and distribute is going to be used for
>> decisions affecting people's liberty.
>
>
> Point of information: the vast majority of information in Mosaic has
> nothing to do with Experian's work as a "credit reference agency".

For much of the lending/credit industry, establishing identity is, if
not the main concern, very close to it.

Comparing a credit reference agency to the police is comparing an
organisation that makes huge efforts to establish correct identity with
an organisation that makes very little effort. Credit reference agencies
are also highly regulated and audited; the police seem to be a law unto
themselves.

It is not a case of the police giving people the benefit of the doubt
before locking them up. It is more a case of them locking people up
until they can prove who they are.

This measure should mean fewer people locked up, not more.

> It
> comes from other sources (and much of it publicly available) such as:

It will also come from private information shared between companies
associated with Experian. Credit companies, mobile phone companies,
banks, utilities, etc.

Robin

Apr 24, 2018, 12:09:49 PM
On 24/04/2018 16:04, Chris R wrote:
<snip>
>>
> So what they need is more and better data?
>
> AI is only ever going to come up with a probability based on collective
> experience. Hopefully that is better than relying on the lifetime
> collection of prejudices of an individual officer making the decision.
> You would not be making the decision just on postcodes, but every extra
> bit of information contributes to making the decision better. Big
> problems will come from AI if it is relied upon when its model has
> insufficient data. If it turns out that postcodes are of no help at all
> in making decisions, AI will establish that far more efficiently than a
> human would.
>
> I haven't heard it suggested that custody decisions are going to be
> made entirely by a computer.

And of course no one plans that they should be (outside the realm of
fake news). In the words of the academics behind it:

"It’s 3am on Saturday morning. The man in front of you has been caught
in possession of drugs. He has no weapons, and no record of any violent
or serious crimes. Do you let the man out on police bail the next
morning, or keep him locked up for two days to ensure he comes to court
on Monday?

"The police officers who make these custody decisions are highly
experienced,” explains Barnes. “But all their knowledge and policing
skills can’t tell them the one thing they need to know most about the
suspect – how likely is it that he or she is going to cause major harm
if they are released? This is a job that really scares people – they are
at the front line of risk-based decision-making."

"Imagine a situation where the officer has the benefit of a hundred
thousand, and more, real previous experiences of custody decisions?”
says Sherman. “No one person can have that number of experiences, but a
machine can."

http://www.cam.ac.uk/research/features/helping-police-make-custody-decisions-using-artificial-intelligence

In the light of that and many other published accounts of the system and
its early testing[1] I wonder how Big Brother Watch justify "a Big
Brother Watch investigation has revealed". Perhaps "cribbing from one
or more publicly available accounts" counts as an "investigation" in
their editorial guide?



[1] eg http://www.bbc.co.uk/news/technology-39857645 from May 2017

Roland Perry

Apr 24, 2018, 12:20:48 PM
In message <pbnifc$6q4$1...@dont-email.me>, at 16:28:45 on Tue, 24 Apr
2018, Bill <Bill.Inval...@outlook.com> remarked:

>the police seem to be a law unto themselves.

Until RIPA, perhaps, which regulated their investigatory powers.

19 years into that project, I still get them complaining about how much
their hands were tied.

PACE, I've never claimed any expertise in, but it's probably regarded
much the same.
--
Roland Perry

Jon Ribbens

Apr 24, 2018, 2:04:08 PM
On 2018-04-24, Robin <rb...@hotmail.com> wrote:
> Point of information: the vast majority of information in Mosaic has
> nothing to do with Experian's work as a "credit reference agency". It
> comes from other sources (and much of it publicly available) such as:
>
> Electoral register
> Lifestyle survey responses
> Family/personal names linked to ethnicity
> Directors from Companies House
> Neighbourhood information
> Rurality/Urbanisation
> Shopping accessibility measures
> Commercial/Residential mix
> Census data
> Land Registry data from 1995 onwards
> Registers of Scotland Transaction data
> Council Tax band
> PAF

I'm not sure I follow you - all of those things sound like they have
a great deal to do with Experian's work as a credit reference agency.

Robin

Apr 24, 2018, 3:31:09 PM
Sorry, I thought you were making the point that Experian by virtue of
being a credit reference agency had access to data (eg the full register
of electors) not available to others. So I thought it worth showing how
most of what is in Mosaic is neither generated by them nor confidential
(It has only the Open register.)

And if the police are to use only data generated for law and order
purposes, and whose accuracy they are accountable for, then they will be
locked out of a vast swathe of data used by other agencies to aid decisions.

GB

Apr 24, 2018, 5:26:45 PM
On 24/04/2018 17:09, Robin wrote:

> "Imagine a situation where the officer has the benefit of a hundred
> thousand, and more, real previous experiences of custody decisions?”
> says Sherman. “No one person can have that number of experiences, but a
> machine can."


One drawback with AI is that you can't ask it why it made a particular
decision. It just assembles a set of parameters from vast amounts of
data, and it spews out a score. It's almost impossible to check that
process.

Mark Goodge

Apr 24, 2018, 5:27:24 PM
Indeed.

I once worked for a retailer which wanted to use this kind of stuff to
screen online orders for fraud potential. Experian wanted an arm and a
leg for their data. I put together a version based entirely on open
and public data, which was just as effective (at least for our
purposes). Postcode-based segmentation can be done entirely on data
from the Office for National Statistics. If you take it down to
individual addresses you can add in council tax data and Land Registry
price paid data to get a very good analysis of residential
demographics. Name-based analysis is fuzzier, but there's a lot of
freely available data that can be very useful.
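
A rough sketch of that kind of open-data join, in Python with pandas;
every file name and column name below is an invented stand-in for real
ONS, council tax and Land Registry extracts:

import pandas as pd

# Postcode -> census output area lookup (the ONS publishes such lookups).
lookup = pd.read_csv("postcode_to_output_area.csv")    # hypothetical extract
census = pd.read_csv("output_area_demographics.csv")   # aggregated census stats
prices = pd.read_csv("land_registry_price_paid.csv")   # price-paid extract

# Mean recent sale price per postcode as a crude affluence proxy.
price_by_postcode = (prices.groupby("postcode", as_index=False)["price"]
                           .mean()
                           .rename(columns={"price": "mean_price_paid"}))

# Stitch the area-level and postcode-level signals together.
profile = (lookup.merge(census, on="output_area")
                 .merge(price_by_postcode, on="postcode", how="left"))
print(profile.head())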

Mark

Mark Goodge

Apr 24, 2018, 5:27:45 PM
On Tue, 24 Apr 2018 12:16:30 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

>On 24/04/2018 08:49, Chris R wrote:
>> On 23/04/2018 17:53, Ian Jackson wrote:
>>> https://bigbrotherwatch.org.uk/all-media/police-use-experian-marketing-data-for-ai-custody-decisions/
>>>
>>>    Durham Police has paid global data broker Experian for UK postcode
>>>    stereotypes built on 850 million pieces of information to feed into an
>>>    artificial intelligence (AI) tool used in custody decisions, a Big
>>>    Brother Watch investigation has revealed.
>>>    ...
>>>
>>> This is quite scary.  What can be done to stop it ?
>>>
>> I don't see any reason to stop it. Data protection should focus on the
>> use to which data is put, not on who has it. For AI to make sensible
>> predictions from big data it needs as many inputs as possible.
>> Depriving it of inputs just makes good outcomes less likely.
>
>I don't see a way to stop it but one example where postcode stereotypes
>go wrong happens near me. There is a crime hotspot including some
>residences and a motorway services in the same postcode where petrol
>theft and other petty car crime is a problem.

That's one of the side effects of using postcodes for anything other
than delivering post. But we still don't have a better alternative for
most typical uses.

>The other denizens of that
>postcode have trouble with their insurance premiums as a result (even
>though the services is completely isolated from the rest of it).

It can't be completely isolated, or it wouldn't have the same
postcode.

Mark

newshound

Apr 25, 2018, 2:27:54 AM
Well, the connection is presumably via the private service road to the
Services, which are surely covered by CCTV, as well as being easy to
block. Petrol thieves etc wouldn't be daft enough to use these; or would
they?

Roland Perry

Apr 25, 2018, 2:56:47 AM
In message <pbn3me$i9f$2...@gioia.aioe.org>, at 12:16:30 on Tue, 24 Apr
2018, Martin Brown <'''newspam'''@nezumi.demon.co.uk> remarked:

>one example where postcode stereotypes go wrong happens near me.
>There is a crime hotspot including some residences and a motorway
>services in the same postcode where petrol theft and other petty car
>crime is a problem. The other denizens of that postcode have trouble
>with their insurance premiums as a result (even though the services is
>completely isolated from the rest of it).

I'm very surprised a business as big as a motorway services doesn't have
its own unique postcode. Or are you talking about just the first two
letters and the number?
--
Roland Perry

Martin Brown

Apr 25, 2018, 3:25:37 AM
On 24/04/2018 20:30, Mark Goodge wrote:
> On Tue, 24 Apr 2018 12:16:30 +0100, Martin Brown

>> The other denizens of that
>> postcode have trouble with their insurance premiums as a result (even
>> though the services is completely isolated from the rest of it).
>
> It can't be completely isolated, or it wouldn't have the same
> postcode.

There is a fence all around it and a service access road that only
employees can use to go to work.

Rural postcodes are quirky too. There are some very strange postcodes in
Yorkshire caused by prehistoric field boundaries or something. There is
a single house near me that shares a postcode with the Farm Shop about a
mile away by road - you can tell which one it is because people driving
on satnav screech to a halt in front of it and then look very puzzled.

--
Regards,
Martin Brown

Martin Brown

Apr 25, 2018, 3:26:06 AM
On 24/04/2018 15:51, Robin wrote:
> On 24/04/2018 11:59, Jon Ribbens wrote:
> <snip>
>>
>> Credit reference agencies aren't run with the expectation that the
>> information they collect and distribute is going to be used for
>> decisions affecting people's liberty.

Mainly they are concerned with not letting people get credit who cannot
pay it back - the problem is when the thieves are clever and steal some
poor unfortunate's ID or use hacked data obtained from Experian itself.

Although it may help flag people who are no real threat to the public
even if they were a nuisance whilst D&D after a night on the town.

> Point of information: the vast majority of information in Mosaic has
> nothing to do with Experian's work as a "credit reference agency".  It

From looking at my Experian data file I have marked with ** the lines
below which it seems they have most obviously strip-mined.

> comes from other sources (and much of it publicly available) such as:
>
**> Electoral register
> Lifestyle survey responses
> Family/personal names linked to ethnicity
**> Directors from Companies House
**> Neighbourhood information
> Rurality/Urbanisation
> Shopping accessibility measures
> Commercial/Residential mix
> Census data
**> Land Registry data from 1995 onwards
> Registers of Scotland Transaction data
**> Council Tax band
> PAF

+ Every mobile phone contract, bank account and credit card you have
ever had in the past 6 years and full payment history/balances.

+ Everywhere you have ever lived in the UK going back to age 16.

+ Any county court or debtors' judgements against you.

+ some other things I have forgotten

I guess it depends how much of these data Experian will share with the
police and what safeguards there are to prevent fishing expeditions. It
would be a travesty if law enforcement became a pathway for ID theft.

BTW modern census data is not publicly available under the 100-year rule
(give or take a few years' early-release slack).

--
Regards,
Martin Brown

Martin Brown

Apr 25, 2018, 3:34:34 AM
It wasn't always that big. It happens where former trunk roads get
upgraded to full motorway status, but when the postcodes were allocated
in the 1970s it was nothing more than a petrol station and a diner.

It's a historical quirk based on the geographical size of post rounds rather
than anything planned. If they were being kind they would allocate a
clean postcode to the services to isolate it from the residences (there
are unused gaps in the local postcode sequence). They may have done by
now for all I know - but it was an issue in the past.

--
Regards,
Martin Brown

Chris R

Apr 25, 2018, 4:14:20 AM
Which is why, until you are sure it is very reliable, you don't let it
make the decision, but provide it as an input to the human decision.
It's true that the lack of transparency makes it hard for the human to
know how much weight to give it, and having the AI generate a report of
the contributing factors might help. If the AI says there is a 90% chance
this person will abscond, the officer still has to consider whether he
is in fact satisfied that the person is in the other 10%.
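
For a simple linear scorer, that "report of the contributing factors"
is easy to sketch. Everything below is invented for illustration (the
real tool need not be linear, and these feature names are assumptions):

import numpy as np

feature_names = ["prior_arrests", "age_under_25", "postcode_risk_band"]
weights = np.array([0.9, 0.6, 0.3])   # assumed learned coefficients
bias = -2.0
x = np.array([3.0, 1.0, 2.0])         # one synthetic suspect's inputs

contributions = weights * x           # per-feature contribution to the score
risk = 1 / (1 + np.exp(-(contributions.sum() + bias)))  # logistic probability

# Print factors in order of influence, then the headline figure.
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.2f}")
print(f"estimated risk: {risk:.0%}")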

One of the things I suspect big data AI will find most difficult to cope
with is changes of behaviour patterns, which may be due to the use of
the AI itself. If the AI identifies that 90% of men nicked at night
wearing striped jerseys are burglars, the burglars may stop wearing
striped jerseys. Then the 10% innocents become 100%, and will get locked
up until the datasets catch up with the change. Or fashions change, and
all men start wearing striped jerseys. Years of historical data can
become useless overnight.

Once people know that driverless cars always stop, they may just walk
out in front of them. The cars will never get anywhere in towns, and the
passengers will have a very uncomfortable ride.
--
Chris R

Robin

Apr 25, 2018, 4:46:12 AM
On 24/04/2018 19:38, GB wrote:
Yes, that's one of the ways AI doesn't fit easily with the way judges
have developed administrative law (to require that decisions take account
of only relevant information). Another, patently, is the fettering of
discretion. And even EU data protection regulations (and soon the GDPR)
put an oar into solely automated decisions.

I predict a powerful lobby for statutory rights to an explanation and
to a review by a human: it'll provide work for the lawyers made
redundant by AI ;)

Roland Perry

Apr 25, 2018, 4:48:21 AM
In message <pbpahd$2tg$1...@gioia.aioe.org>, at 08:25:33 on Wed, 25 Apr
2018, Martin Brown <'''newspam'''@nezumi.demon.co.uk> remarked:
>>> The other denizens of that
>>> postcode have trouble with their insurance premiums as a result (even
>>> though the services is completely isolated from the rest of it).
>> It can't be completely isolated, or it wouldn't have the same
>> postcode.
>
>There is a fence all around it and a service access road that only
>employees can use to go to work.

What's to stop criminals using the road too?
--
Roland Perry

Handsome Jack

Apr 25, 2018, 5:15:19 AM
Robin <rb...@hotmail.com> posted
>In the words of the academics behind it:
>
>"It’s 3am on Saturday morning. The man in front of you has been
>caught in possession of drugs. He has no weapons, and no record of any
>violent or serious crimes. Do you let the man out on police bail the
>next morning, or keep him locked up for two days to ensure he comes to
>court on Monday?
>
>"The police officers who make these custody decisions are highly
>experienced,” explains Barnes. “But all their knowledge and
>policing skills can’t tell them the one thing they need to know most
>about the suspect – how likely is it that he or she is going to cause
>major harm if they are released?

Why would a man found in possession of drugs be likely to cause someone
major harm if he is released?


--
Jack

Tim Woodall

Apr 25, 2018, 5:15:54 AM
On 2018-04-24, Chris R <invalid...@invalid.invalid.com> wrote:
> So what they need is more and better data?
>
> AI is only ever going to come up with a probability based on collective
> experience. Hopefully that is better than relying on the lifetime
> collection of prejudices of an individual officer making the decision.
> You would not be making the decision just on postcodes, but every extra
> bit of information helps contributes to making the decision better. Big
> problems will come from AI if it is relied upon when it model had
> insufficient data. If it turns out that postcodes are of no help at all
> in making decisions, AI will establish that far more efficiently than a
> human would.
>

AI won't necessarily be using the right information.

For example, I would guess that 'the arresting officer' is
correlated with risk of harm if the suspect is released on bail.

I'd also expect that some police officers are more likely to give a
verbal warning and send minor criminals on their way.

So you get unlucky, get stopped for a failed brake light by a 'soft cop'
on a bad day. And the AI says 'don't release this guy on bail' because
almost everyone that that cop brings in is a serious felon.

> I haven't heard it suggested that custody decisions are going to be
> made entirely by a computer.

I'm sure that the police will have discretion to override the decision.
And I'm sure they will only exercise it when they chose not to release.
After all, let someone go when the computer says no and get it wrong...


Look at the child abuse stuff. Apparently, 25% of non-mobile babies will
suffer non-deliberate bruising in an 8-week period. IIRC 5 councils have
a policy that even a single bruise must trigger an investigation.

Or sex abuse, where 'victims' are incapable of lying.


The decisions that have to be made can be catastrophically damaging
when they are wrong. But we seem to have got into a state where
only errors in one direction are considered bad. Whatever happened to
Blackstone's ratio? 'Better that 1000 innocent men be punished than even
one guilty man be able to reoffend'

The problem is that 'harmful' crime is thankfully very rare. So
punishing 10 innocent men for each criminal still leaves the majority of
innocent men unharmed. So we suffer the 'no smoke without fire' fallacy.

Martin Brown

Apr 25, 2018, 5:16:29 AM
The barrier and cameras same as on other motorway staff only access
roads. I think you need a PIN or magic card to go through it.

--
Regards,
Martin Brown

Roland Perry

Apr 25, 2018, 5:21:47 AM
In message <wdVrBmDN...@none.demon.co.uk>, at 09:34:21 on Wed, 25
Apr 2018, Handsome Jack <Ja...@nowhere.com> remarked:

>>"It’s 3am on Saturday morning. The man in front of you has been
>>caught in possession of drugs. He has no weapons, and no record of any
>>violent or serious crimes. Do you let the man out on police bail the
>>next morning, or keep him locked up for two days to ensure he comes to
>>court on Monday?
>>
>>"The police officers who make these custody decisions are highly
>>experienced,” explains Barnes. “But all their knowledge and
>>policing skills can’t tell them the one thing they need to now most
>>about the suspect – how likely is it that he or she is going to
>>cause major harm if they are released?
>
>Why would a man found in possession of drugs be likely to cause someone
>major harm if he is released?

Because he's lost his stash and either needs to steal replacements or
rob someone to get the cash to pay for replacements.
--
Roland Perry

Roland Perry

Apr 25, 2018, 5:21:48 AM
In message <slrnpe0a06....@einstein.home.woodall.me.uk>, at
07:02:30 on Wed, 25 Apr 2018, Tim Woodall <new...@woodall.me.uk>
remarked:
>So you get unlucky, get stopped for a failed brake light by a 'soft cop'
>on a bad day. And the AI says 'don't release this guy on bail' because
>almost everyone that that cop brings in is a serious felon.

The AI will obviously take the nature of the offence into account.

In any event, when did they start locking people up overnight for failed
brake lights?
--
Roland Perry

Roland Perry

Apr 25, 2018, 5:25:06 AM
In message <pbph19$dk8$2...@gioia.aioe.org>, at 10:16:26 on Wed, 25 Apr
2018, Martin Brown <'''newspam'''@nezumi.demon.co.uk> remarked:
>>>>> The other denizens of that
>>>>> postcode have trouble with their insurance premiums as a result (even
>>>>> though the services is completely isolated from the rest of it).
>>>>  It can't be completely isolated, or it wouldn't have the same
>>>> postcode.
>>>
>>> There is a fence all around it and a service access road that only
>>>employees can use to go to work.
>> What's to stop criminals using the road too?
>
>The barrier and cameras same as on other motorway staff only access
>roads.

Many of them don't have barriers, and the cameras are useless against
criminals with false plates or stolen cars.

>I think you need a PIN or magic card to go through it.

So you just need a mate who is one of the hundreds of workers with
access to those.
--
Roland Perry

Adam Funk

Apr 25, 2018, 6:09:59 AM
If he's that kind of drug user. Of course, he might be the "weekend
stoner, otherwise respectable citizen" kind.

Robin

Apr 25, 2018, 6:23:29 AM
What if "no record of any violent or serious crimes" reflects not "a
record of absence of violence or serious offences" but "a complete
absence of records"? When that's combined with the presence of gang
tats? It seems to me a custody officer would be remiss if there was not
even consideration of the risk of an offence while on bail - bearing in
mind we weren't told dealing was out of the question. ISTM asking a lot
for every custody officer to know the odds.

Tim Woodall

Apr 25, 2018, 6:29:26 AM
On 2018-04-25, Roland Perry <rol...@perry.co.uk> wrote:
> In message <slrnpe0a06....@einstein.home.woodall.me.uk>, at
> 07:02:30 on Wed, 25 Apr 2018, Tim Woodall <new...@woodall.me.uk>
> remarked:
>>So you get unlucky, get stopped for a failed brake light by a 'soft cop'
>>on a bad day. And the AI says 'don't release this guy on bail' because
>>almost everyone that that cop brings in is a serious felon.
>
> The AI will obviously take the nature of the offence into account.
>
Why obviously?

If the arresting officer correlates better with risk of reoffending
than the seriousness of the offence does, then the AI will give greater
weight to the arresting officer than to the offence.

Google's AI thought a 3d printed turtle looks like a rifle because
whatever it decided was the discriminator for a rifle was also in the
image of the turtle. (And with hindsight I can see why a turtle could be
mistaken for a rifle but it would never have occurred to me without
prompting)

The US army wanted to use a neural net to spot camouflaged tanks in
trees. They trained the network, apparently successfully, using
supervised learning. They validated against a set of images that had not
been used in training the neural net.

When the pentagon did their own tests, they found the AI no better than
chance at detecting a tank.

It turned out that when the camouflaged tanks had been photographed it
had been cloudy while the no tank images had been taken on a sunny day.
The net learned to distinguish cloudy from sunny rather than tank from
no tank.
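
That failure is easy to reproduce in miniature (a synthetic stand-in,
not the original study): make the label perfectly confounded with
"cloudy" in the training set, and the model learns the weather, scoring
no better than chance once the confound is broken.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 1000
cloudy = rng.integers(0, 2, size=n)
tank = cloudy.copy()            # the flaw: every tank photo is a cloudy photo
texture = rng.normal(size=n)    # the input that should matter; here just noise

X_train = np.column_stack([cloudy, texture])
clf = DecisionTreeClassifier().fit(X_train, tank)

# Fresh data where weather and tanks are independent:
cloudy_new = rng.integers(0, 2, size=n)
tank_new = rng.integers(0, 2, size=n)
X_test = np.column_stack([cloudy_new, rng.normal(size=n)])
print("train accuracy:", clf.score(X_train, tank))    # 1.0
print("test accuracy:", clf.score(X_test, tank_new))  # about 0.5, i.e. chance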

> In any event, when did they start locking people up overnight for failed
> brake lights?

When the AI starts saying 'don't release this guy because there's a high
risk of him hurting someone'

Roland Perry

Apr 25, 2018, 6:37:39 AM
In message <qtv5rex...@news.ducksburg.com>, at 10:55:38 on Wed, 25
Apr 2018, Adam Funk <a24...@ducksburg.com> remarked:

>>>>"It’s 3am on Saturday morning. The man in front of you has been
>>>>caught in possession of drugs. He has no weapons, and no record of any
>>>>violent or serious crimes. Do you let the man out on police bail the
>>>>next morning, or keep him locked up for two days to ensure he comes to
>>>>court on Monday?
>>>>
>>>>"The police officers who make these custody decisions are highly
>>>>experienced,” explains Barnes. “But all their knowledge and
>>>>policing skills can’t tell them the one thing they need to know most
>>>>about the suspect – how likely is it that he or she is going to
>>>>cause major harm if they are released?
>>>
>>>Why would a man found in possession of drugs be likely to cause someone
>>>major harm if he is released?
>>
>> Because he's lost his stash and either needs to steal replacements or
>> rob someone to get the cash to pay for replacements.
>
>If he's that kind of drug user. Of course, he might be the "weekend
>stoner, otherwise respectable citizen" kind.

As those wouldn't be at risk of causing major harm, the population being
discussed in the above is therefore most likely to be "that kind of drug
user" (as you put it) *but* who has managed to avoid getting into
trouble before. To that extent I suspect it's also an edge case, and not
a very good one to be illustrating a proposed wide-ranging policing
strategy.
--
Roland Perry

Roland Perry

Apr 25, 2018, 6:37:39 AM
In message <slrnpe0m3u....@einstein.home.woodall.me.uk>, at
10:29:18 on Wed, 25 Apr 2018, Tim Woodall <new...@woodall.me.uk>
remarked:
>On 2018-04-25, Roland Perry <rol...@perry.co.uk> wrote:
>> In message <slrnpe0a06....@einstein.home.woodall.me.uk>, at
>> 07:02:30 on Wed, 25 Apr 2018, Tim Woodall <new...@woodall.me.uk>
>> remarked:
>>>So you get unlucky, get stopped for a failed brake light by a 'soft cop'
>>>on a bad day. And the AI says 'don't release this guy on bail' because
>>>almost everyone that that cop brings in is a serious felon.
>>
>> The AI will obviously take the nature of the offence into account.
>>
>Why obviously?

Because it would be completely and utterly broken otherwise.

>If the arresting officer correlates better with risk of reoffending
>than the seriousness of the offence does, then the AI will give greater
>weight to the arresting officer than to the offence.
>
>Google's AI thought a 3d printed turtle looks like a rifle because
>whatever it decided was the discriminator for a rifle was also in the
>image of the turtle. (And with hindsight I can see why a turtle could be
>mistaken for a rifle but it would never have occurred to me without
>prompting)
>
>The US army wanted to use a neural net to spot camouflaged tanks in
>trees. They trained the network, apparently successfully, using
>supervised learning. They validated against a set of images that had not
>been used in training the neural net.
>
>When the pentagon did their own tests, they found the AI no better than
>chance at detecting a tank.
>
>It turned out that when the camouflaged tanks had been photographed it
>had been cloudy while the no tank images had been taken on a sunny day.
>The net learned to distinguish cloudy from sunny rather than tank from
>no tank.

Those are broken AI systems.

>> In any event, when did they start locking people up overnight for failed
>> brake lights?
>
>When the AI starts saying 'don't release this guy because there's a high
>risk of him hurting someone'

The AI can't learn that as an option if historically it never happens.
--
Roland Perry

Graeme

Apr 25, 2018, 6:49:26 AM
In message <OrZOgUkR...@perry.co.uk>, Roland Perry
<rol...@perry.co.uk> writes
>In message <wdVrBmDN...@none.demon.co.uk>, at 09:34:21 on Wed, 25
>Apr 2018, Handsome Jack <Ja...@nowhere.com> remarked:
>>
>>Why would a man found in possession of drugs be likely to cause
>>someone major harm if he is released?
>
>Because he's lost his stash and either needs to steal replacements or
>rob someone to get the cash to pay for replacements.

Because he may be a pusher, perhaps supplying local school children?
--
Graeme

Mark Goodge

Apr 25, 2018, 7:44:45 AM
On Wed, 25 Apr 2018 08:26:02 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:


>BTW Modern census data is not publicly available under the 100 year rule
>(give or take a few years early release slack).

Individual returns aren't. But aggregate data is. For example, you can
find out how many people in an output area reported themselves as a
particular ethnic group or living in a particular tenure. Which is all
useful for big data analysis.

Mark

Mark Goodge

Apr 25, 2018, 8:31:44 AM
On Wed, 25 Apr 2018 09:14:06 +0100, Chris R
<invalid...@invalid.invalid.com> wrote:
>
>One of the things I suspect big data AI will find most difficult to cope
>with is changes of behaviour patterns, which may be due to the use of
>the AI itself. If the AI identifies that 90% of men nicked at night
>wearing striped jerseys are burglars, the burglars may stop wearing
>striped jerseys. Then the 10% innocents become 100%, and will get locked
>up until the datasets catch up with the change. Or fashions change, and
>all men start wearing striped jerseys. Years of historical data can
>become useless overnight.

This is a known problem with using historic data. If something that
people commonly do suddenly becomes impractical, too expensive,
illegal or simply impossible, then all the historic data you've built
up around that thing becomes meaningless. For example, your choice of
daily newspaper (assuming you buy one[1]) is a very good indicator of
a whole host of other societal attitudes. When the News of the World
closed down in the wake of the phone hacking[2] scandal their former
readers didn't all migrate en masse to another title (not even to the
successor title from the same stable, the Sun on Sunday), so a whole
slew of historic readership data became useless literally overnight
and had to be built up again from scratch.

Dealing with this is one of Experian's (and their competitors')
main selling points. As has already been pointed out in this thread,
the majority of the data they use is already publicly available or can
be obtained at reasonable cost - much less than you'd pay Experian for
it. But Experian's marketing promise is that their demographic
categorisations will keep up with behavioural changes, and that's one
of the things that makes it worth paying them for the data rather than
rolling your own.

>Once people know that driverless cars always stop, they may just walk
>out in front of them. The cars will never get anywhere in towns, and the
>passengers will have a very uncomfortable ride.

A real life example of this is at Pelican crossings. At a normal
Pelican, the flashing green man indicates that pedestrians still have
priority on the crossing, but those not yet crossing should not start
to cross. But people became used to the fact that, if you see the
green man start to flash, you've generally still got plenty of time to
cross even if it starts flashing before you're on the crossing. Which
led to people simply treating the flashing green man as an invitation
to cross, even if they don't know they've got time left to do so,
something with obvious potential for danger. And hence the move away
from Pelicans and towards other forms of crossing, including the
Puffin and, particularly in London, the Countdown crossing which
significantly reduce the likelihood of pedestrians crossing when the
lights are green for vehicles.

[1] Which is, of course, another data point.

[2] Yes, I know it wasn't actually hacking, in the computer sense, but
it's the media's preferred term and there's not a lot of point being
pedantic about it.

Mark

Mark Goodge

Apr 25, 2018, 8:33:59 AM
On Wed, 25 Apr 2018 10:23:39 +0100, Roland Perry <rol...@perry.co.uk>
wrote:

>In message <pbph19$dk8$2...@gioia.aioe.org>, at 10:16:26 on Wed, 25 Apr
>2018, Martin Brown <'''newspam'''@nezumi.demon.co.uk> remarked:
>>>>>> The other denizens of that
>>>>>> postcode have trouble with their insurance premiums as a result (even
>>>>>> though the services is completely isolated from the rest of it).
>>>>>  It can't be completely isolated, or it wouldn't have the same
>>>>> postcode.
>>>>
>>>> There is a fence all around it and a service access road that only
>>>>employees can use to go to work.
>>> What's to stop criminals using the road too?
>>
>>The barrier and cameras same as on other motorway staff only access
>>roads.
>
>Many of them don't have barriers, and the cameras are useless against
>criminals with false plates or stolen cars.

And I would imagine that most criminals simply walk though, anyway.

Mark

Roland Perry

Apr 25, 2018, 9:31:18 AM
In message <9mr0edllee70m0a74...@4ax.com>, at 13:31:41 on
Wed, 25 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>Once people know that driverless cars always stop, they may just walk
>>out in front of them. The cars will never get anywhere in towns, and the
>>passengers will have a very uncomfortable ride.
>
>A real life example of this is at Pelican crossings. At a normal
>Pelican, the flashing green man indicates that pedestrians still have
>priority on the crossing, but those not yet crossing should not start
>to cross. But people became used to the fact that, if you see the
>green man start to flash, you've generally still got plenty of time to
>cross even if it starts flashing before you're on the crossing. Which
>led to people simply treating the flashing green man as an invitation
>to cross, even if they don't know they've got time left to do so,
>something with obvious potential for danger.

It's only dangerous if turning-green light to motorised traffic means
"full steam ahead and plough through any remaining pedestrians", which
it doesn't.

>other forms of crossing, including the Puffin

"An on-crossing detector ensures that the signal for vehicles
remains red until pedestrians have finished crossing (within
practical limits)."

There's a reverse-Puffin [Niffup?] near me (ostensibly a Toucan), which
never gives pedestrians the green light until all the traffic has
disappeared. There may be a very long timeout, as there is with Puffins,
where it will eventually give up and stop the stream of cars, but I very
rarely see it in that mode.

Pressing the button to cross is therefore virtually futile, as it's
quicker to spot a gap in the traffic oneself, and you'll be across
before the Niffup has changed the lights.
--
Roland Perry

Tim Woodall

Apr 25, 2018, 9:47:26 AM
On 2018-04-25, Mark Goodge <use...@listmail.good-stuff.co.uk> wrote:
>
> A real life example of this is at Pelican crossings. At a normal
> Pelican, the flashing green man indicates that pedestrians still have
> priority on the crossing, but those not yet crossing should not start
> to cross. But people became used to the fact that, if you see the
> green man start to flash, you've generally still got plenty of time to
> cross even if it starts flashing before you're on the crossing. Which
> led to people simply treating the flashing green man as an invitation
> to cross, even if they don't know they've got time left to do so,

I disagree with this reasoning. The reason I cross is because there
appears to be a long *minimum* time before the lights can change again.
Worse, it appears that that minimum time only starts when the button is
pressed.

As a result, on quieter roads, people have learned not to push the
button but to cross when there's a gap in the traffic. And,
unfortunately, it's getting more and more common not to press the button
even on busy roads (which further increases the waiting time) and
further encouraging people to cross whenever they can.

One particularly frustrating case is staggered crossings where it's a
predominantly one way flow of pedestrians.

The no-push brigade gather at the edge of the road until someone does
manage to get to, and push, the button. Then pedestrians continue to
gather for 45 seconds. The lights change and a huge group of people,
fronted by the no-button-pushers, cross to the central island and
proceed to stand there where it's hard or impossible to actually get to
the button to push it. Eventually, someone does push it, and it's then
another 45 seconds before you will be able to cross to the other side.

And the crossings that turn the green man on early if they detect a gap
in the traffic don't help - I can see the gap and I'm usually across
before the lights change.

So prevalent is the problem of people crossing ahead of the lights (or
giving up crossing altogether, I guess) that some crossings now detect if
there's no one to cross and shorten, or possibly even cancel, the green
man phase. (And they lengthen the green man if someone is slow crossing.)


While there needs to be a minimum time between green men, there's no
reason (other than 'motorists are soooo important') why the lights
cannot change immediately (within 10 seconds) of a button push when the
lights have been green for motorists for at least a minute already.
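
That rule is simple enough to write down. A minimal sketch, with
invented timings:

MIN_TRAFFIC_GREEN = 60  # seconds traffic must have had before being interrupted
RESPONSE_DELAY = 10     # seconds from button push to stopping traffic

def seconds_until_pedestrian_green(traffic_green_elapsed: float) -> float:
    """Delay between a button push and the start of the pedestrian phase."""
    if traffic_green_elapsed >= MIN_TRAFFIC_GREEN:
        return RESPONSE_DELAY
    # Otherwise let traffic finish its minimum green first.
    return (MIN_TRAFFIC_GREEN - traffic_green_elapsed) + RESPONSE_DELAY

print(seconds_until_pedestrian_green(90))  # 10 -> near-immediate change
print(seconds_until_pedestrian_green(20))  # 50 -> waits out the minimum green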


Pedestrian phases at junctions are more difficult - but IMO, either
there should be a green man between each traffic flow, or the green man
should be long enough to cross to the final destination.

Pedestrian arrives, pushes button, waits for flow A to stop. Now waits
for flow B to stop. Gets green man.
Pedestrian crosses flow A. (or Flow B)
Pedestrian pushes button. Waits for flow A to stop. Waits for flow B to
stop. Gets green man.
Pedestrian crosses flow B. (or flow A)

It's no wonder that the fleet of foot cross in the all-red time
(red-light-jumping motorists permitting).

In the worst cases you get junctions where for certain crossing patterns
you can have to wait through 3 complete cycles of the lights (sometimes
you are forced to cross three roads as there's no green man on one arm
of the junction. And each road is split into two halves.)

or you walk on the wrong side of the barrier and cross in the obvious
way except that to have allowed that crossing there would have to be a
time when the other three directions were all on red at the same time.


And, as a cyclist and pedestrian but rarely motorist, it's even more
frustrating that the 'only change if there's a gap in the traffic'
lights can't detect cyclists, meaning you're far more likely to be
stopped by them than a motorist is. But as cyclists never stop on red...

Mark Goodge

Apr 25, 2018, 9:51:29 AM
On Wed, 25 Apr 2018 14:21:26 +0100, Roland Perry <rol...@perry.co.uk>
wrote:

>In message <9mr0edllee70m0a74...@4ax.com>, at 13:31:41 on
>Wed, 25 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
>remarked:
>>>Once people know that driverless cars always stop, they may just walk
>>>out in front of them. The cars will never get anywhere in towns, and the
>>>passengers will have a very uncomfortable ride.
>>
>>A real life example of this is at Pelican crossings. At a normal
>>Pelican, the flashing green man indicates that pedestrians still have
>>priority on the crossing, but those not yet crossing should not start
>>to cross. But people became used to the fact that, if you see the
>>green man start to flash, you've generally still got plenty of time to
>>cross even if it starts flashing before you're on the crossing. Which
>>led to people simply treating the flashing green man as an invitation
>>to cross, even if they don't know they've got time left to do so,
>>something with obvious potential for danger.
>
>It's only dangerous if turning-green light to motorised traffic means
>"full steam ahead and plough through any remaining pedestrians", which
>it doesn't.

Most of the time it doesn't. But sometimes, drivers start moving in
the expectation that the crossing is clear, and simply fail to see a
late-arriving pedestrian. Pelicans do have a significantly higher
accident rate than other forms of signalised crossing. But they
didn't use to. It's precisely because behaviour has changed with
experience that they have, possibly somewhat paradoxically, become
more dangerous. Which is a good illustration of Chris R's point.

Mark

Roland Perry

Apr 25, 2018, 10:16:01 AM
In message <3o11eddlb511vbj1a...@4ax.com>, at 14:51:26 on
Wed, 25 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>>A real life example of this is at Pelican crossings. At a normal
>>>Pelican, the flashing green man indicates that pedestrians still have
>>>priority on the crossing, but those not yet crossing should not start
>>>to cross. But people became used to the fact that, if you see the
>>>green man start to flash, you've generally still got plenty of time to
>>>cross even if it starts flashing before you're on the crossing. Which
>>>led to people simply treating the flashing green man as an invitation
>>>to cross, even if they don't know they've got time left to do so,
>>>something with obvious potential for danger.
>>
>>It's only dangerous if turning-green light to motorised traffic means
>>"full steam ahead and plough through any remaining pedestrians", which
>>it doesn't.
>
>Most of the time it doesn't. But sometimes, drivers start moving in
>the expectation that the crossing is clear, and simply fail to see a
>late-arriving pedestrian.

I'm certain the drivers don't get a green light while the pedestrian's
green man is still flashing. Next time I'm at one, I'll see how many
seconds there are after the green man has stopped flashing before the
traffic gets a green light.
--
Roland Perry

tim...

Apr 25, 2018, 10:55:50 AM


"Ian Jackson" <ijac...@chiark.greenend.org.uk> wrote in message
news:kOs*Gp...@news.chiark.greenend.org.uk...
> https://bigbrotherwatch.org.uk/all-media/police-use-experian-marketing-data-for-ai-custody-decisions/
>
> Durham Police has paid global data broker Experian for UK postcode
> stereotypes built on 850 million pieces of information to feed into an
> artificial intelligence (AI) tool used in custody decisions, a Big
> Brother Watch investigation has revealed.

This is not artificial intelligence.

It's "we collect a large amount of data in a database and use that
data to make a preselected decision".

(Oh, it does so annoy me.)

tim

Mark Goodge

Apr 25, 2018, 11:51:44 AM
On Wed, 25 Apr 2018 15:05:20 +0100, Roland Perry <rol...@perry.co.uk>
wrote:
No, the drivers get a green light after the pedestrian green man stops
flashing and goes red. But someone who started to cross while it was
still flashing may then be stranded.

Mark

Roland Perry

Apr 25, 2018, 2:53:07 PM
In message <hu81ed5gt8lvg7696...@4ax.com>, at 16:51:39 on
Wed, 25 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>>>It's only dangerous if turning-green light to motorised traffic means
>>>>"full steam ahead and plough through any remaining pedestrians", which
>>>>it doesn't.
>>>
>>>Most of the time it doesn't. But sometimes, drivers start moving in
>>>the expectation that the crossing is clear, and simply fail to see a
>>>late-arriving pedestrian.
>>
>>I'm certain the drivers don't get a green light while the pedestrian's
>>green man is still flashing. Next time I'm at one, I'll see how many
>>seconds there are after the green man has stopped flashing before the
>>traffic gets a green light.
>
>No, the drivers get a green light after the pedestrian green man stops
>flashing and goes red. But someone who started to cross while it was
>still flashing may then be stranded.

It's only dangerous if turning-green light to motorised traffic means
"full steam ahead and plough through any remaining pedestrians", which
it doesn't.
--
Roland Perry

Mark Goodge

Apr 25, 2018, 4:09:10 PM
On Wed, 25 Apr 2018 19:45:29 +0100, Roland Perry <rol...@perry.co.uk>
wrote:

Most of the time it doesn't. But sometimes, drivers start moving in
expectation that the crossing is clear, and simply fail to see a
late-arriving pedestrian.

Mark

Handsome Jack

Apr 25, 2018, 5:29:44 PM
Roland Perry <rol...@perry.co.uk> posted
The phrase "iatrogenic disease" springs to mind.

--
Jack

Martin Brown

Apr 26, 2018, 3:49:59 AM
On 25/04/2018 19:45, Roland Perry wrote:

>>>> Most of the time it doesn't. But sometimes, drivers start moving in
>>>> the expectation that the crossing is clear, and simply fail to see a
>>>> late-arriving pedestrian.
>>>
>>> I'm certain the drivers don't get a green light while the pedestrian's
>>> green man is still flashing. Next time I'm at one, I'll see how many
>>> seconds there are after the green man has stopped flashing before the
>>> traffic gets a green light.
>>
>> No, the drivers get a green light after the pedestrian green man stops
>> flashing and goes red. But someone who started to cross while it was
>> still flashing may then be stranded.
>
> It's only dangerous if turning-green light to motorised traffic means
> "full steam ahead and plough through any remaining pedestrians", which
> it doesn't.

There was a time in the dim and distant past when Pelican crossings
showed a flashing amber to traffic whilst flashing the green man with
the meaning that you can go again if no-one is crossing. A fairly common
occurrence when people press the button then cross early during a gap.

These days they are mostly Puffins and lack the flashing amber
stage, so you cannot go even though the crossing is entirely empty. I am
not convinced they do sense pedestrians in transit as is claimed.

I think the timed countdown for pedestrians isn't a bad idea.

--
Regards,
Martin Brown

Adam Funk

Apr 26, 2018, 5:00:10 AM
On 2018-04-25, Roland Perry wrote:

> In message <9mr0edllee70m0a74...@4ax.com>, at 13:31:41 on
> Wed, 25 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
> remarked:
>>>Once people know that driverless cars always stop, they may just walk
>>>out in front of them. The cars will never get anywhere in towns, and the
>>>passengers will have a very uncomfortable ride.
>>
>>A real life example of this is at Pelican crossings. At a normal
>>Pelican, the flashing green man indicates that pedestrians still have
>>priority on the crossing, but those not yet crossing should not start
>>to cross. But people became used to the fact that, if you see the
>>green man start to flash, you've generally still got plenty of time to
>>cross even if it starts flashing before you're on the crossing. Which
>>led to people simply treating the flashing green man as an invitation
>>to cross, even if they don't know they've got time left to do so,
>>something with obvious potential for danger.
>
> It's only dangerous if turning-green light to motorised traffic means
> "full steam ahead and plough through any remaining pedestrians", which
> it doesn't.

Well, you know that & I know that, but not everyone seems to.

Roland Perry

Apr 26, 2018, 9:40:46 AM
In message <30o1edlbd24uiabh3...@4ax.com>, at 21:09:04 on
Wed, 25 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>>>>>It's only dangerous if turning-green light to motorised traffic means
>>>>>>"full steam ahead and plough through any remaining pedestrians", which
>>>>>>it doesn't.
>>>>>
>>>>>Most of the time it doesn't. But sometimes, drivers start moving in
>>>>>the expectation that the crossing is clear, and simply fail to see a
>>>>>late-arriving pedestrian.
>>>>
>>>>I'm certain the drivers don't get a green light while the pedestrian's
>>>>green man is still flashing. Next time I'm at one, I'll see how many
>>>>seconds there are after the green man has stopped flashing before the
>>>>traffic gets a green light.
>>>
>>>No, the drivers get a green light after the pedestrian green man stops
>>>flashing and goes red. But someone who started to cross while it was
>>>still flashing may then be stranded.
>>
>>It's only dangerous if turning-green light to motorised traffic means
>>"full steam ahead and plough through any remaining pedestrians", which
>>it doesn't.
>
>Most of the time it doesn't. But sometimes, drivers start moving in
>expectation that the crossing is clear, and simply fail to see a
>late-arriving pedestrian.

The flashing Amber aspect (for 6 seconds if my observation is typical)
before the light turns green is a clue to look out for pedestrians!

If these crossings are getting more accidents I'm sure it's pedestrians
failing to wait for their phase, which as others have remarked sometimes
feels like it's never going to happen.
--
Roland Perry

Mark Goodge

Apr 26, 2018, 1:46:30 PM
On Thu, 26 Apr 2018 14:36:28 +0100, Roland Perry <rol...@perry.co.uk>
wrote:

>In message <30o1edlbd24uiabh3...@4ax.com>, at 21:09:04 on
>Wed, 25 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
>remarked:

>>Most of the time it doesn't. But sometimes, drivers start moving in
>>expectation that the crossing is clear, and simply fail to see a
>>late-arriving pedestrian.
>
>The flashing Amber aspect (for 6 seconds if my observation is typical)
>before the light turns green is a clue to look out for pedestrians!

Yes, but not everybody does, reliably, all of the time. And, more
importantly, a pedestrian who starts to cross while the green man is
still flashing can, if they are very late doing so, still be on the
crossing after the vehicle lights have gone green.

Mark

Roland Perry

Apr 26, 2018, 4:31:42 PM
In message <m044ed17hnlr4qdtr...@4ax.com>, at 18:46:27 on
Thu, 26 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>>Most of the time it doesn't. But sometimes, drivers start moving in
>>>expectation that the crossing is clear, and simply fail to see a
>>>late-arriving pedestrian.
>>
>>The flashing Amber aspect (for 6 seconds if my observation is typical)
>>before the light turns green is a clue to look out for pedestrians!
>
>Yes, but not everybody does, reliably, all of the time. And, more
>importantly, a pedestrian who starts to cross while the green man is
>still flashing can, if they are very late doing so, still be on the
>crossing after the vehicle lights have gone green.

Of course they can be, but then you have a motorist who is prepared to
risk being convicted of causing death by dangerous driving.
--
Roland Perry

Mark Goodge

unread,
Apr 27, 2018, 5:16:42 AM4/27/18
to
On Thu, 26 Apr 2018 21:25:49 +0100, Roland Perry <rol...@perry.co.uk>
wrote:
And, indeed, they do occasionally take that risk. Often enough for it
to result in a coming together between vehicle and pedestrian.
Although, in practice, it's more likely to be injury rather than death
and careless, rather than dangerous, driving. But either way, it's an
unacceptable risk, especially given that it can effectively be
designed out by changes to the way that light-controlled pedestrian
crossings work.

Mark

Martin Brown

unread,
Apr 27, 2018, 5:31:35 AM4/27/18
to
On 27/04/2018 10:16, Mark Goodge wrote:
> On Thu, 26 Apr 2018 21:25:49 +0100, Roland Perry <rol...@perry.co.uk>
> wrote:
>
>> In message <m044ed17hnlr4qdtr...@4ax.com>, at 18:46:27 on
>> Thu, 26 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
>> remarked:
>>>
>>> Yes, but not everybody does, reliably, all of the time. And, more
>>> importantly, a pedestrian who starts to cross while the green man is
>>> still flashing can, if they are very late doing so, still be on the
>>> crossing after the vehicle lights have gone green.
>>
>> Of course they can be, but then you have a motorist who is prepared to
>> risk being convicted of causing death by dangerous driving.
>
> And, indeed, they do occasionally take that risk. Often enough for it
> to result in a coming together between vehicle and pedestrian.

I'm not convinced that is a major source of collisions on crossings.

Most of the problems I have encountered are with cars not stopping at
all for newly changed red traffic lights, and in some cases speeding up
as they approach. I have seen plenty of near misses, some on dashcam.

> Although, in practice, it's more likely to be injury rather than death
> and careless, rather than dangerous, driving. But either way, it's an
> unacceptable risk, especially given that it can effectively be
> designed out by changes to the way that light-controlled pedestrian
> crossings work.

Unless you could put up an impenetrable barrier to protect the crossing
pedestrians, I don't see how you can ever make that work.

--
Regards,
Martin Brown

Mark Goodge

unread,
Apr 27, 2018, 6:34:17 AM4/27/18
to
On Fri, 27 Apr 2018 10:31:30 +0100, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

>On 27/04/2018 10:16, Mark Goodge wrote:
>> On Thu, 26 Apr 2018 21:25:49 +0100, Roland Perry <rol...@perry.co.uk>
>> wrote:
>>
>>> In message <m044ed17hnlr4qdtr...@4ax.com>, at 18:46:27 on
>>> Thu, 26 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
>>> remarked:
>>>>
>>>> Yes, but not everybody does, reliably, all of the time. And, more
>>>> importantly, a pedestrian who starts to cross while the green man is
>>>> still flashing can, if they are very late doing so, still be on the
>>>> crossing after the vehicle lights have gone green.
>>>
>>> Of course they can be, but then you have a motorist who is prepared to
>>> risk being convicted of causing death by dangerous driving.
>>
>> And, indeed, they do occasionally take that risk. Often enough for it
>> to result in a coming together between vehicle and pedestrian.
>
>I'm not convinced that is a major source of collisions on crossings.

It's more than it used to be. And it's a problem which can be shown to
be mitigated by a simple change of design.

Mark

Roland Perry

unread,
Apr 27, 2018, 6:55:36 AM4/27/18
to
In message <e3v5edhjob9v799dd...@4ax.com>, at 11:34:13 on
Fri, 27 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>>>> Yes, but not everybody does, reliably, all of the time. And, more
>>>>> importantly, a pedestrian who starts to cross while the green man is
>>>>> still flashing can, if they are very late doing so, still be on the
>>>>> crossing after the vehicle lights have gone green.
>>>>
>>>> Of course they can be, but then you have a motorist who is prepared to
>>>> risk being convicted of causing death by dangerous driving.
>>>
>>> And, indeed, they do occasionally take that risk. Often enough for it
>>> to result in a coming together between vehicle and pedestrian.
>>
>>I'm not convinced that is a major source of collisions on crossings.
>
>It's more than it used to be. And it's a problem which can be shown to
>be mitigated by a simple change of design.

A change to what?
--
Roland Perry

Tim Woodall

unread,
Apr 27, 2018, 7:04:05 AM4/27/18
to
On 2018-04-27, Mark Goodge <use...@listmail.good-stuff.co.uk> wrote:
>>
[pedestrians crossing so late they fail to complete the crossing before
the man stops flashing]
>>I'm not convinced that is a major source of collisions on crossings.
>
> It's more than it used to be. And it's a problem which can be shown to
> be mitigated by a simple change of design.
>

The change of design might mitigate the problem, but (anecdotally) I'm
not convinced the problem is pedestrians failing to complete the
crossing during the flashing phase.

There's a significant proportion of drivers who will attempt to
intimidate pedestrians in order to get them to hurry up during the
flashing phase - either by moving forwards or sounding their horn.

Also, drivers will start if the pedestrian isn't yet halfway across when
the flashing starts. Where there is a central island this is, I believe,
allowed, but it happens where there is no island too.

And motorbikes will go in front of pedestrians too.

Most of the time this is aggressive, rather than dangerous. But it's
inevitable that occasionally drivers will fail to notice someone on the
crossing, especially when 'feeling cross about the behaviour of another'.

I don't notice a difference in 'last second attempts to cross' between
the different crossings. What is different is the 'determination' to
proceed on flashing amber as compared to 'willingness to proceed' on red
and amber.

It was interesting that in the very early days of the countdown
crossings (Finsbury Square in case the others were different) the
traffic lights went to red+amber as the countdown went to zero. And it
was noticeable how few drivers actually looked rather than attempting an
F1 start on the green light.

There's now a short delay between the zero and the red+amber, and the
problem is much reduced. (I'm actually surprised at the difference it
made, as I think the delay is a fixed (short) time, so it's still
possible to anticipate the lights changing.)

I guess the delay is just long enough to scan the crossing and then look
at the lights. Previously drivers' eyes went straight from the countdown
to the lights while their clutch was coming up and throttle down.
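
As a sketch of the change described above - a short fixed delay inserted
between the countdown reaching zero and the red+amber aspect - the
timeline below may help. The 3-second delay and the other durations are
assumptions for illustration; the post only says the delay is short and
fixed.

def countdown_events(countdown_from=10, delay_after_zero=3):
    """Yield (time, event) pairs for one pedestrian phase.

    delay_after_zero is the short fixed gap described above; the
    3-second default is an assumption, not a measured value.
    """
    t = 0
    for remaining in range(countdown_from, 0, -1):
        yield t, f"countdown shows {remaining}"
        t += 1
    yield t, "countdown reaches zero"
    # The gap gives drivers time to scan the crossing before their
    # attention moves to the lights.
    t += delay_after_zero
    yield t, "vehicle lights go red+amber"
    t += 2  # assumed red+amber duration
    yield t, "vehicle lights go green"

if __name__ == "__main__":
    for t, event in countdown_events():
        print(f"t={t:2d}s  {event}")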


Mark Goodge

unread,
Apr 27, 2018, 7:06:00 AM4/27/18
to
On Fri, 27 Apr 2018 11:53:11 +0100, Roland Perry <rol...@perry.co.uk>
wrote:
The change I described in the original post which prompted this
sub-thread!

That is, a move away from Pelican crossings, and towards other forms,
such as Puffin crossings and Countdown crossings, which have been
shown to be statistically safer.

Mark

Martin Brown

unread,
Apr 27, 2018, 7:48:54 AM4/27/18
to
Are there many true Pelican crossings left? I can't remember when I last
saw a flashing amber on a crossing, which makes them pseudo-Puffins at
best. They typically lack the pedestrian sensors of a full Puffin,
although they do have the mimic at waist height near the button.

I still reckon the main problem is drivers who drive through the red
traffic lights at speed rather than those that start too early at green.

--
Regards,
Martin Brown

Roland Perry

unread,
Apr 27, 2018, 8:45:40 AM4/27/18
to
In message <pbv2n0$13e1$1...@gioia.aioe.org>, at 12:48:48 on Fri, 27 Apr
2018, Martin Brown <'''newspam'''@nezumi.demon.co.uk> remarked:

>Are there many true Pelican crossings left? I can't remember when I
>last saw a flashing amber on a crossing which makes them pseudo Puffin
>at best.

There are vast numbers of Pelicans still installed. I surveyed one for
input to this thread only a couple of days ago.
--
Roland Perry

Roland Perry

unread,
Apr 27, 2018, 8:45:48 AM4/27/18
to
In message <it06ed91pljj8jnjv...@4ax.com>, at 12:05:57 on
Fri, 27 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>>it's a problem which can be shown to be mitigated by a simple change
>>>of design.
>>
>>A change to what?
>
>The change I described in the original post which prompted this
>sub-thread!
>
>That is, a move away from Pelican crossings, and towards other forms,
>such as Puffin crossings and Countdown crossings, which have been
>shown to be statistically safer.

I've earlier debunked Niffup crossings, and where's the budget to roll
out Countdowns nationwide?

Paging Nick Hewer.
--
Roland Perry

Mark Goodge

unread,
Apr 27, 2018, 8:47:10 AM4/27/18
to
No. And this is one of the reasons.

Mark

Mark Goodge

unread,
Apr 27, 2018, 8:49:34 AM4/27/18
to
On Fri, 27 Apr 2018 13:33:23 +0100, Roland Perry <rol...@perry.co.uk>
wrote:
There doesn't have to be an immediate replacement. But no new Pelican
crossings are allowed to be installed. Any new crossing, or any
routine replacement of an existing crossing (eg, one that simply needs
replacing due to life-expired equipment, or one that needs to be
replaced following damage) must be of a safer design.

Mark

Roland Perry

unread,
Apr 27, 2018, 8:59:35 AM4/27/18
to
In message <req5ed97hjn4c6i6l...@4ax.com>, at 10:16:36 on
Fri, 27 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>On Thu, 26 Apr 2018 21:25:49 +0100, Roland Perry <rol...@perry.co.uk>
>wrote:
>
>>In message <m044ed17hnlr4qdtr...@4ax.com>, at 18:46:27 on
>>Thu, 26 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
>>remarked:
>>>>>Most of the time it doesn't. But sometimes, drivers start moving in
>>>>>expectation that the crossing is clear, and simply fail to see a
>>>>>late-arriving pedestrian.
>>>>
>>>>The flashing Amber aspect (for 6 seconds if my observation is typical)
>>>>before the light turns green is a clue to look out for pedestrians!
>>>
>>>Yes, but not everybody does, reliably, all of the time. And, more
>>>importantly, a pedestrian who starts to cross while the green man is
>>>still flashing can, if they are very late doing so, still be on the
>>>crossing after the vehicle lights have gone green.
>>
>>Of course they can be, but then you have a motorist who is prepared to
>>risk being convicted of causing death by dangerous driving.
>
>And, indeed, they do occasionally take that risk. Often enough for it
>to result in a coming together between vehicle and pedestrian.
>Although, in practice, it's more likely to be injury rather than death
>and careless, rather than dangerous, driving.

Is running over a pedestrian at a crossing when they are in plain sight
only "careless"?

>But either way, it's an unacceptable risk, especially given that it can
>effectively be designed out by changes to the way that light-controlled
>pedestrian crossings work.

We keep getting new versions, which if anything make the problem worse
(eg the rank stupidity of the ones where the pedestrian signage is on
the same side of the road, not "across the road"; closely followed by the
tardy-by-design Niffups).
--
Roland Perry

Mark Goodge

unread,
Apr 27, 2018, 9:15:11 AM4/27/18
to
On Fri, 27 Apr 2018 11:52:34 +0100, Roland Perry <rol...@perry.co.uk>
That would be for a court to decide, based on the evidence. But
"dangerous" has a high bar. You can kill someone and it still only be
"careless".

Mark

Roland Perry

unread,
Apr 27, 2018, 9:39:04 AM4/27/18
to
In message <7v66ed1n06jkc14mr...@4ax.com>, at 13:49:31 on
Fri, 27 Apr 2018, Mark Goodge <use...@listmail.good-stuff.co.uk>
remarked:
>>>>>it's a problem which can be shown to be mitigated by a simple change
>>>>>of design.
>>>>
>>>>A change to what?
>>>
>>>The change I described in the original post which prompted this
>>>sub-thread!
>>>
>>>That is, a move away from Pelican crossings, and towards other forms,
>>>such as Puffin crossings and Countdown crossings, which have been
>>>shown to be statistically safer.
>>
>>I've earlier debunked Niffup crossings, and where's the budget to roll
>>out Countdowns nationwide.
>
>There doesn't have to be an immediate replacement. But no new Pelican
>crossings are allowed to be installed. Any new crossing, or any
>routine replacement of an existing crossing (eg, one that simply needs
>replacing due to life-expired equipment, or one that needs to be
>replaced following damage) must be of a safer design.

Allegedly safer. Is there any evidence that at the ones with no
red/green man for pedestrians to see on the other side of the road,
they are less likely to be mown down as a result?
--
Roland Perry

Sara Merriman

unread,
Apr 27, 2018, 11:47:33 AM4/27/18
to
In article <7vBDhymy...@perry.co.uk>, Roland Perry
<rol...@perry.co.uk> wrote:

> We keep getting new versions, which if anything make the problem worse
> (eg the rank stupidity of the ones where the pedestrian signage is the
> same side of the road, not "across the road";

Those bloody things drive me potty.

Carl Baxter

unread,
Apr 29, 2018, 2:16:51 AM4/29/18
to
Certainly is. Paint scratches, broken hood ornaments, that dented wing -
a moment's inattention can be so costly.

Andy Burns

unread,
May 6, 2018, 7:44:10 AM5/6/18
to

spuorg...@gowanhill.com

unread,
May 6, 2018, 11:52:49 AM5/6/18
to
On Tuesday, 24 April 2018 15:02:22 UTC+1, Jon Ribbens wrote:
> Credit reference agencies aren't run with the expectation that the
> information they collect and distribute is going to be used for
> decisions affecting peoples' liberty.

And I never agreed to my financial information being passed to a credit reference agency for this purpose.

Owain

Fredxx

unread,
May 7, 2018, 2:49:11 AM5/7/18
to
Any information held by a credit agency would have been obtained with
your blessing. Perhaps you should start reading the small print from your
banks and other suppliers.

Handsome Jack

unread,
May 7, 2018, 6:43:07 AM5/7/18
to
Fredxx <fre...@nospam.com> posted
Actually the Information Commissioner's Office explicitly states (GDPR
consent guidance March 2017) that this does *not* constitute valid
consent.

"Example: A company that provides credit cards asks its customers to
give consent for their personal data to be sent to credit reference
agencies for credit scoring. However, if a customer refuses or withdraws
their consent, the credit card company will still send the data to the
credit reference agencies on the basis of ‘legitimate interests’. So
asking for consent is misleading and inappropriate - there is no real
choice. The company should have relied on 'legitimate interests' from
the start. To ensure fairness and transparency, the company should still
tell customers this will happen, but this is very different from giving
them a choice."

--
Jack

R. Mark Clayton

unread,
May 7, 2018, 7:02:26 AM5/7/18
to
On Tuesday, 24 April 2018 15:02:22 UTC+1, Jon Ribbens wrote:
> On 2018-04-24, Robin <rb...@hotmail.com> wrote:
> > On 24/04/2018 08:49, Chris R wrote:
> >> On 23/04/2018 17:53, Ian Jackson wrote:
> >>>    Durham Police has paid global data broker Experian for UK postcode
> >>>    stereotypes built on 850 million pieces of information to feed into an
> >>>    artificial intelligence (AI) tool used in custody decisions, a Big
> >>>    Brother Watch investigation has revealed.
> >>>
> >>>    ...
> >>>
> >>> This is quite scary.  What can be done to stop it ?
> >>>
> >> I don't see any reason to stop it. Data protection should focus on the
> >> use to which data is put, not on who has it. For AI to make sensible
> >> predictions from big data it needs as many inputs as necessary.
> >> Depriving it of inputs just makes good outcomes less likely.
> >
> > I agree.
> >
> > ISTM no different in principle from the use of "soft intelligence" - eg
> > that gathered by community police teams.
>
> The difference is that that data is gathered by the police, who know
> to what use it's likely to be put.
>
> Credit reference agencies aren't run with the expectation that the
> information they collect and distribute is going to be used for
> decisions affecting peoples' liberty. If an incorrect decision is
> made, the police are going to hide behind "it's not our fault,
> the data was wrong and it wasn't our data" and the credit agency
> is going to hide behind "we do not recommend making custody decisions
> based on our data". It allows all parties to disclaim all
> responsibility.
>
> > But I fail to see the problem if custody officers with the data in
> > the AI tool make better decisions than they do without.
>
> That "if" in your sentence there is doing an awful lot of work.

And the police have a DPA exemption for the prevention and detection of crime.

The police can access the electoral register as well. It may be inaccurate (people not telling their town hall they have moved), but that's not their problem.

spuorg...@gowanhill.com

unread,
May 7, 2018, 5:18:42 PM5/7/18
to
On Monday, 7 May 2018 07:49:11 UTC+1, Fredxx wrote:
> > And I never agreed to my financial information being passed to a credit
> > reference agency for this purpose.
> Any information held by a credit agency would have been with your
> blessing. Perhaps you should start reading the small print from your
> banks and other suppliers.

I do read the small print, and I've never seen any mention of my data being used by the police for custody decisions. I don't think that's covered by "for the prevention and detection of fraud".

Owain

Robin

unread,
May 7, 2018, 7:28:32 PM5/7/18
to
What makes you think Experian are drawing upon data provided for
those purposes in the data they are providing to the police? IIRC both
Mark Goodge and I pointed out some time ago in this thread that Experian
list a wide range of other sources for Mosaic.

--
Robin
reply-to address is (intended to be) valid

Fredxx

unread,
May 7, 2018, 7:29:17 PM5/7/18
to
On 07/05/2018 19:56, spuorg...@gowanhill.com wrote:
I take your point. The only issue with your argument is that
you would have agreed to the sharing of your data with 'agencies'.

R. Mark Clayton

unread,
May 8, 2018, 9:44:01 AM5/8/18
to
No, but the police, and passing data to the police, are covered by the exemption for the prevention and detection of crime.

spuorg...@gowanhill.com

unread,
May 8, 2018, 1:02:37 PM5/8/18
to
On Tuesday, 8 May 2018 14:44:01 UTC+1, R. Mark Clayton wrote:
> No, but the police and passing data to the police is covered by their
> exemption for the prevention and detection of crime.

Making a custody decision is neither preventing nor detecting crime, and certainly not in the context of processing information about my financial affairs.

Owain



Roland Perry

unread,
May 8, 2018, 1:21:56 PM5/8/18
to
In message <7c2df6fb-4b93-4174...@googlegroups.com>, at
10:02:32 on Tue, 8 May 2018, spuorg...@gowanhill.com remarked:

>> No, but the police and passing data to the police is covered by their
>> exemption for the prevention and detection of crime.
>
>Making a custody decision is neither preventing nor detecting crime

It's preventing crime if there's a suspicion that a non-custodial
decision would result in the alleged perp committing further offences.
--
Roland Perry