How to liberate all scientific knowledge


Richard Smith

unread,
Feb 17, 2014, 7:08:20 AM2/17/14
to science-libe...@googlegroups.com
Hi all,

I recently wrote a blog post with a basic outline for software that could enable complete liberation of all journal articles in a distributed way.

A reader suggested I post it here, so here it is:


Keen to hear thoughts from this community.

Richard

SGH

unread,
Feb 22, 2014, 5:05:00 PM2/22/14
to science-libe...@googlegroups.com

For everyone, it liberates nothing if it requires a payment in exchange for scientific knowledge. We have all paid, and continue to pay, for the infrastructure that distributes information; we are using it right now. There is no reason people should pay more than they already pay to see your blog post in order to see anything else we want to share.

I understand the spirit in which you offer your solution, but I must point out that we already have a system capable of liberating all scientific knowledge: basic HTML and the infrastructure of the internet. We are missing the participation of the scientific research community, because publicly funded research has been permitted to operate privately, like a private investment fund or a corporate R&D budget. Research organizations that funnel valuable information and discoveries into private ownership should simply be de-funded, because they operate counter to the purpose of public research funding.

In short, a good way to liberate scientific knowledge would be to stop locking it up behind paywalls. Building a better paywall seems like a fortification of the dysfunction. Scientific papers should be as easy for everyone to access as your blog post. Scientific papers should be a content format, not a commodity or a security. Shouldn't they be immune from the flow of commercial value between consumers and producers?

--
You received this message because you are subscribed to the Google Groups "science-liberation-front" group.
To unsubscribe from this group and stop receiving emails from it, send an email to science-liberation...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Sid

unread,
Feb 23, 2014, 12:28:42 AM2/23/14
to science-libe...@googlegroups.com

Richard, please excuse my knee-jerk response. I was mistaken: you are not proposing a better paywall. Your architecture is compelling and I am going to review it further; it looks like a novel and timely approach to content distribution at scale. I apologize for skimming after the first few paragraphs and speaking without fully grokking what you propose. I'll forward your post to some people who I think may be interested, and suggest that if people intend to skim it, they start at the end of your post and work backwards. I have a practice of reviewing my critical posts, and tonight, skimming through a second time, I see that I made a huge mistake. I completely missed the point. Please disregard my previous assertions; the system you propose may very well address them all, though it may raise a few legal issues.

Again, I apologize for being so verbose without fully getting what you were proposing. Thank you for posting this.

Regards,

Sid Gabriel Hubbard

On Feb 17, 2014 4:08 AM, "Richard Smith" <richard...@gmail.com> wrote:

Bryan Bishop

unread,
Feb 23, 2014, 10:36:50 AM2/23/14
to science-libe...@googlegroups.com, Richard Smith, Bryan Bishop
On Mon, Feb 17, 2014 at 6:08 AM, Richard Smith <richard...@gmail.com> wrote:
Keen to hear thoughts from this community.

I was working on an alternative to the Zotero translators that doesn't require Gecko or Firefox:

I was talking with Simon and explained that the basic idea behind papermonk is that all of the translators can basically be JavaScript plugins that should be able to function independently of Firefox; they can then get packaged into a format that Zotero could easily consume. The actual architecture of the current Zotero translators is highly suspicious and messy. :-(
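To illustrate the shape such a plugin contract might take (this is a hypothetical sketch, not the papermonk or Zotero API; the names "register", "TRANSLATORS", and "translate" are invented, and Python stands in for the JavaScript a real plugin would use):

```python
# Sketch of a translator-plugin contract: each translator declares which
# URLs it handles and how to extract citation metadata, so the host
# application (browser-based or not) only needs this small interface.
import re

TRANSLATORS = []

def register(url_pattern):
    """Decorator: map a URL regex to a metadata-extraction function."""
    def wrap(fn):
        TRANSLATORS.append((re.compile(url_pattern), fn))
        return fn
    return wrap

@register(r"^https?://example-journal\.org/article/")
def example_journal(html):
    # A real translator would parse the full page; here we extract one field.
    title = re.search(r"<title>(.*?)</title>", html)
    return {"title": title.group(1) if title else None}

def translate(url, html):
    """Run the first registered translator whose pattern matches the URL."""
    for pattern, fn in TRANSLATORS:
        if pattern.match(url):
            return fn(html)
    return None
```

Because each translator is just a pattern plus a function, a bundle of them could be serialized and shipped to any host that honors the same contract.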

I don't think torrents work. Nearly nobody has been seeding the Library Genesis torrents, and they've been around for years. What makes this any different? Why would these people suddenly start seeding in this context when they haven't been so far?

- Bryan
http://heybryan.org/
1 512 203 0507

SGH

unread,
Feb 23, 2014, 5:45:51 PM2/23/14
to science-libe...@googlegroups.com

Just to get a handle on the size of the issue with scientific papers: does anyone have a projected size in bytes, or total number of characters, for "all scientific..." papers ever written? Can we calculate a probable sum for the size of that? The format for papers is quite light. Borrowing a bit of compression technique from the video codecs could possibly yield a complete volume of scientific research that weighs in smaller than an episode of "How'd They Do That?" Then you'd just need to get it and do something like: apt-get upgrade liberated science. Or sudo port upgrade liberatedscience. If encoded well, all scientific papers ever written (text and line graphics) should not be much larger than an episode of The Colbert Report in HD, assuming each frame could at the very least hold a single paper's text.

The total number of known scientific papers, multiplied by the average character/line count for a statistically significant sample, would at least give us an idea of what volume should be planned for (give or take one order of magnitude; I'll consult WolframAlpha and get back to ya).

Encoding all liberated scientific papers in a distributed, local, package-manager infrastructure: great. The next problem I see would be how [who] adds [what], and how fake papers, disinformation, and spam would be managed.


Bryan Bishop

unread,
Feb 23, 2014, 6:06:38 PM2/23/14
to science-libe...@googlegroups.com, Bryan Bishop, SGH
On Sun, Feb 23, 2014 at 4:45 PM, SGH <s...@sidgabriel.com> wrote:
> Just to get a handle on the size of the issue with scientific papers, does anyone have a
> projected size in bytes or total number of characters for "all scientific..." papers ever
> written?

You can safely estimate about 1 MB/paper and something on the order of 50 million papers, so 50 TB is a good mark.
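Writing that back-of-envelope arithmetic out (both inputs are rough assumptions: ~50 million papers is a commonly cited order-of-magnitude estimate, and 1 MB/paper averages text-only articles against scanned PDFs):

```python
# Back-of-envelope corpus size: papers * bytes-per-paper, reported in TB.
papers = 50_000_000                   # assumed count of all papers ever
bytes_per_paper = 1_000_000           # assumed ~1 MB average per paper
total_bytes = papers * bytes_per_paper
print(total_bytes / 1e12)             # → 50.0 (terabytes)
```

Even if either assumption is off by a factor of a few, the answer stays in the tens-of-terabytes range, which is large for a home seeder but small for institutional mirrors.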

> Can we calculate a probable sum for the size of that? The format for papers is quite light. Just
> borrowing a bit of compression technique from the video codecs could possibly yield a
> complete volume of scientific research that weighs in smaller than an episode of "How'd
> They Do That?"

I don't recommend applying video codecs to this problem.

> Then you'd just need to get it and do something like : apt-get upgrade liberated science. Or
> sudo port upgrade liberatedscience.

Yes, but where would it be hosted?

> Next problem I see would be how [who] adds [what] and how fake papers, disinformation and
> spam would be managed.

Probably a curated collection.
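One minimal sketch of what "curated" could mean in practice (all names and the DOI here are invented for illustration; this is not a description of any existing system): curators maintain a manifest mapping each paper ID to a checksum, and mirrors verify their copies against it, so spam or tampered files fail verification regardless of who hosts them.

```python
# Curated-collection sketch: a maintained manifest of paper-ID -> SHA-256,
# against which any mirror can verify the bytes it is about to serve.
import hashlib

MANIFEST = {
    "doi:10.1000/example": hashlib.sha256(b"paper contents").hexdigest(),
}

def verify(paper_id, data):
    """Return True iff data matches the curated checksum for paper_id."""
    expected = MANIFEST.get(paper_id)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected
```

The curation question then reduces to who signs and updates the manifest, which is a much smaller trust problem than policing every copy of every file.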

SGH

unread,
Nov 4, 2014, 5:13:50 AM11/4/14
to Bryan Bishop, science-libe...@googlegroups.com
Hi, I'm just reviewing all my open data work in honor of The Internet Archive's Aaron Swartz Day (Nov 8th: http://blog.archive.org/2014/10/30/invitation-to-aaron-swartz-day-nov-8-in-sf/).

Did anything move forward on this thread? If so, where can I help?

In this world there is private data and public data, and the internet, as it is, presents a litigious death trap for those who want to ensure it knows which is which. Alas, the internet isn't going to correct itself (yet), and the longer groups can use the internet to exploit the delta between private data and public data, the more the public will suffer.

Publicly funded research yields public data, yet it is allowed to be placed behind a private paywall. This is a critical flaw in the fundamental discipline of science. Is anyone working on this? Can someone (other than Google) point me to where the action is?


