On Monday, August 25, 2014 11:35:17 PM UTC+2, Pedro Worcel wrote:
> I thought "safe" was a nice word for "not porn".
So a Windows machine in a hotel lobby (which might use parental controls to restrict users to IE and the like) means that you're not allowed to look at content that might be considered 'explicit'?
In discussions elsewhere on the net, people jumped on this feature and said "YES! My local popular newspaper always includes nude girls on page X; this feature can now make them NOT show that content".
Ignoring the ignorance and the naivety, the setup above means that a mature person might get a filtered view. Not because the administrators of the network want that: they didn't opt in; Mozilla thought it was a great idea to make that decision for them, based on an unrelated restriction ("only able to run IE"). That seems off. People can argue numbers ("How many restricted hotel lobbies versus how many restricted accounts of minors?"), but I'd turn that right around ("How many restricted accounts somehow, magically, want to have a filtered internet outside of their control?").
> I.e. it will not prevent kids from accessing pages with racially
> discriminatory content, but rather, it would prevent kids from accessing
> porn. If the porn industry really wanted to respect the wishes of the adult
> population, then they would see this header and reject the user, similarly
> to how now they have a prompt asking whether you are older than 18.
Right. So... IF that were the idea behind this feature, why
- isn't the header called No-Porn: Yes?
- is it tied into completely unrelated features of the underlying OS?
- is that considered a good idea? I mean, I don't know a single person that had no access to porn in their teen years. Parents might decide that they want to tilt at windmills here, but Mozilla's not in a position to deduce that intention from a random/unrelated OS setting. "Cannot run Counter-Strike" doesn't translate to "Should not be allowed to view porn online". It's not the same thing. It's not related. Both are decisions that the parents have to make for their kids, either by regulation (proxies, filters) or by education/social rules/open discussions ("Really, these sites are cheap and that's not real sex. Feel free to check it out and laugh about it, but don't mistake it for the Real Thing(tm)").
The underlying problem remains: The IETF draft already admits that the whole concept is muddy and unclear, that a site 'may' have a 'safe' version and 'should' serve a safe version if that header is present.
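To make the mechanics concrete, here's a minimal sketch of what the draft seems to expect from a cooperating server (hypothetical function names, in Python; the draft contains no code): look for 'Prefer: safe' on the request, serve whatever the site decides its 'safe' variant is, and key caches on the hint via 'Vary: Prefer'. Note how little the server can actually know about what 'safe' is supposed to mean:

```python
def apply_safe_hint(request_headers):
    """Sketch of the draft's server-side behavior (hypothetical code).

    If the client sent 'Prefer: safe', serve the site-defined 'safe'
    variant and signal that back; either way, tell caches that the
    response varies on the Prefer header.
    """
    prefers_safe = any(
        name.lower() == "prefer" and "safe" in value.lower()
        for name, value in request_headers.items()
    )
    response_headers = {"Vary": "Prefer"}  # caches must key on the hint
    if prefers_safe:
        # 'safe' is entirely site-defined -- the draft leaves it undefined.
        response_headers["Preference-Applied"] = "safe"
        body = "safe variant"
    else:
        body = "full variant"
    return response_headers, body
```

The only thing the protocol nails down is the plumbing; what 'safe variant' means is left to every site operator's imagination, which is exactly the problem.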
Again: The administrator of the machine that is running Fx (=> Parent, in the most common example) is never asked if that is intended. I'd say it probably isn't by default, no.
The IETF draft MIGHT (yeah, not really. Not in my world. But I'm trying to play along and be nice..) make sense in a shared network environment (the draft mentions school networks), where a central proxy might inject that header.
I'd still argue that this is utter BS, because random sites on the web won't (and cannot) guess the school's rules of conduct and what is or isn't okay there. But that would at least be a somewhat conscious (if misguided) decision made by an admin. The Fx feature forces this crap on random people on the internet, because "It's better for you". This feature cannot increase safety (the IETF draft explicitly states that 'safe' is undefined and that there are risks of disclosing information about the user) or trust (Trust? In what? That website operators make sure the internet is 'clean' for my kids?).
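For completeness, the school-proxy scenario from the draft boils down to something this trivial (again a hypothetical sketch, not code from the draft): the network appliance appends 'safe' to the Prefer header of every forwarded request, since Prefer can carry several comma-separated preferences:

```python
def inject_safe_hint(outgoing_headers):
    """Sketch of a network proxy injecting the hint (hypothetical code)."""
    headers = dict(outgoing_headers)  # don't mutate the caller's dict
    existing = headers.get("Prefer")
    if existing:
        # Append rather than clobber: Prefer allows multiple preferences.
        tokens = [p.strip().lower() for p in existing.split(",")]
        if "safe" not in tokens:
            headers["Prefer"] = existing + ", safe"
    else:
        headers["Prefer"] = "safe"
    return headers
```

That's the entire "conscious admin decision": one header, stamped onto everyone's traffic, with the actual filtering delegated to every website's guess at the school's rules.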
Putting my obvious disgust aside, I would be honestly interested to hear about a use case that led to this feature. A use case that scales to the world-wide population of Fx users, that is. Why is 'Prefer: Safe' a good idea and a reasonable default for the Fx users in the US of A, Germany, Russia, China, Israel, Iran and Iceland? What study led to the discovery that "if people enable local OS restrictions, they want to share that with the world and would prefer a filtered internet experience, obviously with a magic crystal ball that helps identify the content that isn't 'safe'" in random locations like the ones I mentioned above?
I'm prepared to offer excuses/admit that I'm wrong, but at this point I'd bet that such a study a) doesn't exist and b) cannot exist, ever.
It's like listening to a US radio show where everything explicit is replaced with a beep. Or like looking at various 'hide your face' rules in Muslim countries. Judging them is easy. I can call out the US for being prudish, can bash Islamic states for backwards ideas, but the fact is that I'm an outsider. I have no voice and should just shut up and stop judging other people's lives. This latest feature means that Mozilla is now trying to step into territory where it has no say. Mozilla stands for freedom, choice and diversity, not for cannot-even-opt-out headers for censorship.
What on earth (and again, despite the attitude and my obvious stance in this post, I'm curious) ever made this look like a good idea?