It liberates nothing for everyone if it requires payment in exchange for scientific knowledge. We have all paid, and continue to pay, for the infrastructure that distributes information; we are using it right now. There is no reason people should pay more than they already pay to see your blog post in order to see anything else we want to share.

I understand the spirit in which you offer your solution, but I must point out that we already have a system capable of liberating all scientific knowledge: basic HTML and the infrastructure of the internet. We are missing the participation of the scientific research community because publicly funded research has been permitted to operate privately, like a private investment fund or a corporate R&D budget. Research organizations that funnel valuable information and discoveries into private ownership should simply be de-funded, because they operate counter to the purpose of public research funding.

In short, a good way to liberate scientific knowledge would be to stop locking it up behind paywalls. Building a better paywall seems like a fortification of the dysfunction. Scientific papers should be as easy for everyone to access as your blog post. Scientific papers should be a content format, not a commodity or a security. Shouldn't they be immune from the flow of commercial value between consumers and producers?
--
You received this message because you are subscribed to the Google Groups "science-liberation-front" group.
To unsubscribe from this group and stop receiving emails from it, send an email to science-liberation...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Richard, please excuse my knee-jerk response. I was mistaken: you are not proposing a better paywall. Your architecture is compelling, and I am going to review it further; it looks like a novel and timely approach to content distribution at scale. I apologize for skimming after the first few paragraphs and speaking without fully grokking what you propose. I'll forward your post to some people who I think may be interested, and suggest that anyone intending to skim it start at the end of your post and work backwards. I have a practice of reviewing my critical posts, and tonight, skimming through a second time, I see that I made a huge mistake: I completely missed the point. Please disregard my previous assertions; the system you propose may very well address them all, though it may raise a few legal issues.
Again, I apologize for being so verbose before fully getting what you were proposing. Thank you for posting this.
Regards,
Sid Gabriel Hubbard
--
Keen to hear thoughts from this community.
Just to get a handle on the size of the issue with scientific papers: does anyone have a projected size in bytes, or a total character count, for "all scientific papers" ever written? Can we calculate a probable sum for the size of that? The format for papers is quite light. Borrowing a bit of compression technique from the video codecs could possibly yield a complete volume of scientific research that weighs in smaller than an episode of "How'd They Do That?" Then you'd just need to fetch it and do something like: apt-get upgrade liberated-science, or sudo port upgrade liberatedscience.

If encoded well, all scientific papers ever written (text and line graphics) should not be much larger than an episode of The Colbert Report in HD, assuming each frame could hold at least a single paper's text. The total number of known scientific papers, multiplied by the average character/line count for a statistically significant sample, would at least give us an idea of what volume should be planned for (give or take an order of magnitude; I'll consult WolframAlpha and get back to ya).

Encoding all liberated scientific papers in a distributed, local, package-manager-style infrastructure: great. The next problem I see is who gets to add what, and how fake papers, disinformation and spam would be managed.
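The estimate described above (total paper count times average character count, then a compression factor) can be sketched in a few lines. Every input figure here is a placeholder assumption, not a measured value; swap in real counts once someone digs them up:

```python
# Back-of-envelope estimate of the total size of "all scientific papers".
# All three inputs are assumptions for illustration, not measured values.

NUM_PAPERS = 50_000_000        # assumed total count of published papers
AVG_CHARS_PER_PAPER = 40_000   # assumed average characters of text per paper
COMPRESSION_RATIO = 5          # assumed text compression factor (gzip-like)

# Roughly 1 byte per character for plain text.
raw_bytes = NUM_PAPERS * AVG_CHARS_PER_PAPER
compressed_bytes = raw_bytes // COMPRESSION_RATIO

def fmt(n):
    """Render a byte count in human-readable units."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} PB"

print(f"raw text:   {fmt(raw_bytes)}")
print(f"compressed: {fmt(compressed_bytes)}")
```

With these particular guesses the corpus lands in the hundreds-of-gigabytes-to-terabytes range (text only, no figures), so the real answer to "give or take an order of magnitude" depends heavily on the paper count and compression factor chosen.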
--