-------- Original Message --------
Subject: FCC Order on Comcast - a good job (FCC decision link
corrected)
Date: Wed, 20 Aug 2008 14:23:13 -0400
From: "David P. Reed" <dpr...@reed.com>
To: Seth Johnson <seth.j...@RealMeasures.dyndns.org>
Friends - I just posted this on my blog, regarding the FCC opinion and
order about Comcast RST injection. Feel free to send a pointer to it to
anyone interested. The comment I sent to the Commissioners is also
linked there.
-David P. Reed
----------------------------------
Permalink: http://www.reed.com/blog-dpr/?p=12
FCC Order on Comcast - a good job <http://www.reed.com/blog-dpr/?p=12>
The FCC today issued its formal opinion and order in regard to
Comcast’s
degrading of P2P and other traffic using DPI and RST injection
<http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-08-183A1.doc>.
Of course, I’ve been very interested in this, especially since I was
asked by the Commission to testify as a witness at the en banc hearing
at Harvard Law School in February.
After reading the order this morning, I felt like commending the FCC -
so I filed a formal comment with the FCC, and I posted it on my site
<http://www.reed.com/blog-dpr/?page_id=10> as well. The decision is a
good decision for the Internet. In short, here's why:
The decision shows that the agency understands the importance of the
technological principles of the Internet’s design.
The Internet is a /world-wide system that does not belong to any one
operator/, whether providing access lines or backbone transport.
The design of the Internet Protocols specifies clear limits on what
operators can and cannot do to Internet Protocol datagrams when those
operators are acting as part of the Internet.
Not obeying those limits poses a serious risk to the continued success
of the world-wide Internet. Happily, the FCC recognized and exposed
Comcast’s transgressions of those limits.
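For readers who want to see what RST injection looks like on the wire: a
forged reset copies the victim flow's addresses and ports, but it
originates at a middlebox partway along the path, so its IP TTL often
differs sharply from the TTLs the real peer has been using on that flow.
The fragment below is a minimal illustrative sketch of that heuristic in
Python, assuming the third-party scapy library and packet-capture
privileges; it is not the method the FCC or the investigators in this
proceeding relied on.

    # Crude heuristic sketch, not a production detector: flag TCP RSTs
    # whose IP TTL is far from the TTLs previously seen on the same flow.
    from collections import defaultdict
    from scapy.all import sniff, IP, TCP   # assumes scapy is installed

    ttl_seen = defaultdict(list)  # (src, dst, sport, dport) -> TTLs of non-RST segments

    def check(pkt):
        if IP not in pkt or TCP not in pkt:
            return
        ip, tcp = pkt[IP], pkt[TCP]
        flow = (ip.src, ip.dst, tcp.sport, tcp.dport)
        if tcp.flags & 0x04:  # RST bit set
            ttls = ttl_seen.get(flow)
            # An injected RST spoofs the peer's address but is emitted
            # mid-path, so its TTL rarely matches the peer's usual TTLs.
            if ttls and min(abs(ip.ttl - t) for t in ttls) > 5:
                print(f"suspicious RST on {flow}: ttl={ip.ttl}, "
                      f"seen={sorted(set(ttls))}")
        else:
            ttl_seen[flow].append(ip.ttl)

    sniff(filter="tcp", prn=check, store=False)  # needs root; Ctrl-C to stop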
Though Internet design is not a law, the Commission’s order respects
the
importance of that design, and rejects Comcast’s misbehavior and
deception in applying technologies that go against the principles of
that design.
> http://lessig.org/blog/2FCC.pdf
VIA ECFS
Ms. Marlene H. Dortch
Secretary
Federal Communications Commission
445 12th Street SW
Washington DC 20554
Re: Broadband Industry Practices, WC Docket No. 07-52
Dear Ms. Dortch:
I am writing to commend the Commission on its order released today
regarding Comcast. In all of my experience reviewing government
decisions affecting the Internet, I have read none that are more
subtle and sophisticated in their understanding of the Internet, and
few that are as important for setting the conditions under which
innovation and competition on the Internet will flourish.
As the Order makes clear, the Commission has recognized the
importance of the Internet as a platform for technological growth and
innovation. It is also an extraordinarily important platform for free
speech. Innovation and technological growth are essential components
of economic prosperity. Free speech is the single most important
element in a democracy.
Platforms depend upon common and public standards. The next Larry Page
or Sergey Brin needs to know that the "Internet" they build the next
Google for is actually the "Internet" the next Google will run upon.
The open standards process that the IETF has developed provides this
assurance. By clearly articulating the rules by which data will be
managed on the Internet, innovators can build applications and deploy
content that rely upon those rules. There’s no need for a negotiation
between innovators in their garage and the largest network providers
for those innovators to develop the next "killer app." Like the
electricity grid, innovators know that they can simply plug their
application into the Internet and -- so long as the providers of
access to that platform respect the platform's standards -- the
innovation will run. This was the purpose of the Internet’s
"end-to-end design," as network architects Jerome Salzer, David Clark
and David Reed first described it: To enable innovation at the edge of
the network without the innovators concerning themselves about
complexity at the core. Comcast’s behavior, at least as detailed in
the very careful and comprehensive order the Commission released
today, poisons this environment for economic growth and innovation in
at least three ways:
First, as the Order notes, by implementing non-standard network
management technologies, Comcast weakens the value of the platform for
all. If Comcast’s behavior became common among broadband service
providers, innovators developing new applications for the Internet
would be required to tailor those applications to the specific local
rules of the major carriers. That tailoring would increase costs and
uncertainty, thereby reducing the return to Internet-based innovation.
Second, and again, as the Order notes, by keeping these modifications
to the basic Internet protocol secret, Comcast’s behavior only
increases the cost that their nonstandard implementation imposes upon
Internet innovation. Rather than simply reading a technical document
that explains the local deviations from standard practices on
Comcast’s network, innovators who want to assure that their
innovations actually run on the Comcast platform would be forced to
run expensive tests of the applications or services on the Comcast
network, essentially bearing the costs of reverse engineering a
service that advertised itself as a standard Internet connection.
Again, imagine the burden to GE if local electricity grids varied the
voltage of their local electricity networks -- sometimes running at
120v, sometimes at 220v. And then imagine the burden if those same
grids varied the voltage secretly, without any notice to GE, or other
innovators. The costs to innovation and economic growth obvious in
this example are exactly the costs Comcast creates by its behavior.
Third, as the Order notes, by keeping these modifications secret,
Comcast’s behavior imposes a particularly harsh burden on new
innovators. Anyone trying out a new application from a new company or
developer begins with some skepticism about the quality of that
application or innovation. Failures in the execution of that new
application will be attributed by the user to the developer, not to
the broadband service. Most users have no clue about the capacity of
the broadband provider to interfere with the functioning of an
application. Most would therefore assume that any failure is a failure
in the application. Comcast’s behavior would therefore particularly
burden these start-up innovators.
These costs, of course, are obviously relevant to the Commission’s
concern of assuring the Internet remains a platform for growth and
innovation. But the Order nicely illustrates how these costs are also
linked to anticompetitive concerns. As the D.C. Circuit indicated in
the Microsoft case, regulators have a particularly strong reason to
police the behavior of a platform provider when that behavior is aimed
at protecting the platform provider from new competition. This was
also the Commission’s concern in the Madison River Matter, where a DSL
provider was alleged to have blocked VOIP service, thereby protecting
the telecom company’s profits from traditional telephone markets.
In this case, the Commission has identified a legitimate concern that
Comcast’s behavior is directed towards interfering with a developing
market of alternative video service. To the extent consumers find a
reliable means for collecting and supplying video content to others,
through, for example, applications such as Miro, these alternatives
will provide competition to traditional, cable-television based models
for delivering video content. The Commission in particular, and the
U.S. government more generally, has an obvious interest in encouraging
precisely this type of competition. For it is precisely this sort of
competition that will continue to drive the costs of communication
down, and widen the opportunities for speakers -- from documentary
filmmakers, to local priests sharing sermons -- to make their speech
available to others.
The need for the Commission to play this role as an ultimate check on
private behavior that might pollute the environment for innovation is
particularly acute on a free, public network such as the Internet.
Obviously, there are plenty of private innovation platforms that don’t
require direct government oversight to protect the platform.
Microsoft’s Windows operating system is a ready example. If Dell
started tinkering with Windows, disabling or modifying certain
operating system functions, Microsoft would have an obvious interest
in stopping Dell. Private law would give Microsoft adequate tools to
protect its platform from the intermeddling by Dell. Through
trademark, copyright, and patent law, Microsoft could use government
power to force Dell to either comply with the Windows standards, or
forbid Dell from distributing Windows on its PCs. No doubt that would
be a kind of government regulation, but exercised by a private actor
to advance its own private interests.
There is no "owner" of the Internet, however, who can likewise invoke
private law to protect the platform of the Internet from the same sort
of interference. The IETF doesn’t own a trademark on "the Internet."
The protocols and standards that it and other equivalent bodies have
deployed don't carry with them the power to enforce particular
implementations. Indeed, an important slice of that innovation
environment, free software (governed, for example, by the Free
Software Foundation’s GPL) explicitly grants to everyone the right to
modify the code however they want, so long as they abide by the
requirement to make that modification available to others.
Comcast didn’t invent the Internet. Indeed, it, and most other cable
companies, were relatively slow to recognize the important value the
Internet would provide both to the public and to companies providing
Internet service. Instead, Comcast now seeks to benefit from the
extraordinary economy that has developed around the Internet. It gets
to enjoy that benefit "freely," meaning without paying anyone a
licensing fee, or without securing permission from anyone to deploy
resources that link into this extraordinary network. That it can do so
is, of course, a great benefit, not just to Comcast, but to the Nation. The
free resource of the Internet has produced enormous commercial and
economic value.
But if Comcast is to benefit from the Internet, it is perfectly
reasonable that it be required to do so in a manner that doesn’t
pollute the value of the Internet for everyone else. Yet that is what
Comcast has done here. By secretly adding a layer of secret sauce into
the Internet that interferes with legitimate applications and network
services, Comcast has injured the value of the Internet to other
innovators. By denying that it has done this, it has added insult to
that injury. The Commission has done us all a great service by stating
clearly that it will assure that the platform for innovation the
Internet represents will not be compromised by such behavior. It was also
important that the Commission clearly addressed a common slogan that
Comcast had deployed in this matter that has no relation to the actual
history of the Internet -- namely, that the Internet was born free of
regulation. It might be acceptable in a political campaign to continue
that obvious canard. But in the context of this Commission, which was
the enforcer of the very rules that created the opportunity for the
narrowband Internet to take off, it is extraordinary that a party
would suggest that the Internet was a regulation-free zone. The
Internet was made possible by a minimal mix of platform regulation.
Even the presumptive Republican nominee for President enumerates a
list of contexts in which "regulation is warranted." And while there
will always be argument about the proper mix, and while any sensible
policy-maker would want to keep that regulation at an absolute
minimum, the suggestion that any sensible policy-maker, including
Congress, has ever suggested that the Internet "not be regulated" is
either ignorance or deception.
Finally, let me note one other feature of this proceeding that has
particularly troubled me. These are complicated questions. There’s no
doubt that networks will have to manage traffic. There’s no doubt that
certain types of content and applications will have to be regulated.
Child pornography is the obvious case. But illegal activity extends
far beyond that paradigmatic case.
In developing the standards for effecting (1) the public interest
in an open and vibrant Internet, as well as the public interest in
controlling illegal content and activity, and (2) the private interest
that companies such as Comcast can profit from Internet service, so
that many companies such as Comcast choose to provide Internet
service, the Commission has rightly chosen to move carefully through
adjudication, against the background of clear standards articulated
first by Chairman Powell, and then adopted by the Commission under
Chairman Martin’s leadership. That process will require at a minimum
the good faith cooperation of anyone providing significant Internet
service.
The most striking feature of the current proceeding to me, at least,
was the character of the interaction between Comcast and the FCC about
these matters. Of course anyone in dealing with the government has a
right to defend his own interests. But no one has a right to mislead.
That the Commission has identified statements made by Comcast that
were, at a minimum, not true, raises significant questions about
Comcast’s behavior. Whether or not the Commission has the authority it
claims in this particular case (and I am confident that it does), no
company has the right to mislead the Commission in its proceedings.
Obviously, there are not yet sufficient facts known to explain why
statements that were not true were nonetheless made by Comcast. It
could well be that Comcast’s management didn’t fully understand what
its own technicians were doing. But when a company provides millions
of Americans with access to the most important infrastructure of the
digital age, at the very minimum, that company has an ethical
obligation to deal truthfully with the regulator charged primarily
with protecting that infrastructure from harmful behavior.
The Commission’s order today has done a great service to our Nation.
It will set a context that makes clear that those who wish to profit
from the Internet must do so without harming the Internet. And it will
advance the objective of securing this infrastructure for innovation
with the minimum regulatory oversight possible. On behalf of many, I
am sure, let me express our thanks.
Sincerely,
Lawrence Lessig
New post on my blog
Permalink: http://www.reed.com/blog-dpr/?p=36
ISPs should own your eyes and ears, say AT&T, Cisco, McCurry
The hottest new faux-digerati lobby firm in DC in the communications
field is Mike McCurry’s new firm Arts+Labs. McCurry is an old
political hand, Bill Clinton’s press secretary, looking for a second
career after the Clintons. Apparently there’s no big cash to be had
protecting our freedom of speech, but Cisco and AT&T are happy to fund
him to run a firm to defend ISPs' right to do "deep packet inspection"
(DPI).
Only Arts+Labs doesn’t dare call it DPI, which sounds just a bit scary
and Big Brotherish. Instead they call it the "intelligent network"
that will smooth our experience, cleansing it of all those uneven
experiences. Those of us who are as old as I am - 56 - might
remember that the term "Intelligent Network" was a Bell Labs idea that
failed due to the success of the Internet. As David Isenberg told it,
the Internet was the "Rise of the Stupid Network".
The Internet is a simple network, a stupid network, that just connects
your computer to another computer with no interference. That’s opposed
to old smarty-pants networks that tried to limit users to those things
that maximized the operators’ monopoly profits, by taxing the content
providers and preventing innovators from attaching new devices,
inventing new services at the edges, etc. The Internet won, for a
good reason: it enabled innovation, and it kept busybody operators
from tinkering with or spying on their users' traffic. It
delighted users, rather than holding them hostage.
The Arts+Labs site looks cool, very Web 2.0-ish. But hidden in that
beautiful design, behind the slick and seductive words, is a dangerous
idea, one that the founders of the United States rejected in the First
Amendment. The Arts+Labs site tries to convince you (and Congress) of
the idea that it’s a "good thing" to allow your ISP to decide what you
can see or hear or use. That's the same ISP that is granted, by
federal, state, or local regulators, a monopoly or oligopoly over your
ability to connect at high speed to the Internet. And it wants that
monopoly to examine your traffic, make guesses as to what it means,
and to decide for you which services you should connect to, using what
protocols.
Don’t believe Mike McCurry, AT&T and Cisco’s new shill. He may be
connected, but it’s pretty clear that he wants to disconnect us.
SPECIAL TIMES EDITION BLANKETS U.S. CITIES, PROCLAIMS END TO WAR
* PDF: http://www.nytimes-se.com/pdf
* For video updates: http://www.nytimes-se.com/video
* Contact: mailto:writers@...
Early this morning, commuters nationwide were delighted to find out
that while they were sleeping, the wars in Iraq and Afghanistan had
come to an end.
If, that is, they happened to read a "special edition" of today's New
York Times.
In an elaborate operation six months in the planning, 1.2 million
papers were printed at six different presses and driven to prearranged
pickup locations, where thousands of volunteers stood ready to pass
them out on the street.
Articles in the paper announce dozens of new initiatives including the
establishment of national health care, the abolition of corporate
lobbying, a maximum wage for C.E.O.s, and, of course, the end of the
war.
The paper, an exact replica of The New York Times, includes
International, National, New York, and Business sections, as well as
editorials, corrections, and a number of advertisements, including a
recall notice for all cars that run on gasoline. There is also a
timeline describing the gains brought about by eight months of
progressive support and pressure, culminating in President Obama's
"Yes we REALLY can" speech. (The paper is post-dated July 4, 2009.)
"It's all about how at this point, we need to push harder than ever,"
said Bertha Suttner, one of the newspaper's writers. "We've got to
make sure Obama and all the other Democrats do what we elected them to
do. After eight, or maybe twenty-eight years of hell, we need to start
imagining heaven."
Not all readers reacted favorably. "The thing I disagree with is how
they did it," said Stuart Carlyle, who received a paper in Grand
Central Station while commuting to his Wall Street brokerage. "I'm all
for freedom of speech, but they should have started their own paper."
# 30 #
CONTACT:
wri...@nytimes-se.com
917-202-5479
718-208-0684
415-533-3961
"SPECIAL" NEW YORK TIMES BLANKETS CITIES WITH MESSAGE OF HOPE AND
CHANGE
Thousands of volunteers behind elaborate operation
* PDF: http://www.nytimes-se.com/pdf
* Ongoing video releases: http://www.nytimes-se.com/video
* The New York Times responds:
http://cityroom.blogs.nytimes.com/2008/11/12/pranksters-spoof-the-times/
Hundreds of independent writers, artists, and activists are claiming
credit for an elaborate project, 6 months in the making, in which 1.2
million copies of a "special edition" of the New York Times were
distributed in cities across the U.S. by thousands of volunteers.
The papers, dated July 4th of next year, were headlined with
long-awaited news: "IRAQ WAR ENDS". The edition, which bears the same
look and feel as the real deal, includes stories describing what the
future could hold: national health care, the abolition of corporate
lobbying, a maximum wage for CEOs, etc. There was also a spoof site,
at http://www.nytimes-se.com/.
"Is this true? I wish it were true!" said one reader. "It can be
true, if we demand it."
"We wanted to experience what it would look like, and feel like, to
read headlines we really want to read. It's about what's possible, if
we think big and act collectively," said Steve Lambert, one of the
project's organizers and an editor of the paper.
"This election was a massive referendum on change. There's a lot of
hope in the air, but there's a lot of uncertainty too. It's up to all
of us now to make these headlines come true," said Beka Economopoulos,
one of the project's organizers.
"It doesn't stop here. We gave Obama a mandate, but he'll need mandate
after mandate after mandate to do what we elected him to do. He'll
need a lot of support, and yes, a lot of pressure," said Andy
Bichlbaum, another project organizer and editor of the paper.
The people behind the project are involved in a diverse range of
groups, including The Yes Men, the Anti-Advertising Agency, CODEPINK,
United for Peace and Justice, Not An Alternative, May First/People
Link, Improv Everywhere, Evil Twin, and Cultures of Resistance.
In response to the spoof, the New York Times said only, "We are
looking into it." Alex S. Jones, former Times reporter who is an
authority on the history of the paper, says: "I would say if you've
got one, hold on to it. It will probably be a collector's item."
-30-
Whatever the numbers, it's an impressive achievement. (Whether you
agree with the politics of it or not.) I've seen spoof newspapers
before (I still have one from the May Day protests in London years
ago) but this really seems to have caught people's imagination and
international media attention.
- Rob.
It is non-commercial, it is satire, it doesn't compete with the
original, and I don't know how much original NYT content it uses. So
IANAL but I think the NYT staffers are right that it's covered by fair
use.
> It is conceivable that you could pull
> off this prank with any other newspaper brand, and maybe even with a purely
> fictional newspaper itself.
I think that it *had* to be the NYT. It's the paper of record for the
US. And it's the only one that would achieve this degree of
international media impact.
Satire should have fair use/fair dealing protection. :-/
> Satire should have fair use/fair dealing protection. :-/
It doesn't? Erk.
Yeah, I don't think there's any question there of fair use (and forgive me if it seemed like I was implying such a thing), but I guess I just have an increasingly dim view of fair use in the mainstream media -- the NYTimes obviously takes these things well (and has in the past) and is good-natured about media criticism, so I wouldn't expect anything else.
The more I think about it, the more I wonder if it is satire and not parody -- I don't want to get pedantic here, but I think it's more about mocking the state of America rather than the NYTimes itself. Arguably the NYTimes declaring the war is over is the heart of the joke, but a substantial amount of the other work is closer to satire. It is conceivable that you could pull off this prank with any other newspaper brand, and maybe even with a purely fictional newspaper itself.
In the US, less so than parody:
http://grove.ufl.edu/~techlaw/vol9/issue1/collado.html
In the UK it should if the government ever actually act on Gowers. :-/
- Rob.
Reboot the FCC
We'll stifle the Skypes and YouTubes of the future if we don't
demolish the regulators that oversee our digital pipelines.
By Lawrence Lessig | Newsweek Web Exclusive
Dec 23, 2008
Economic growth requires innovation. Trouble is, Washington is
practically designed to resist it. Built into the DNA of the most
important agencies created to protect innovation is an almost
irresistible urge to protect the most powerful instead.
The FCC is a perfect example. Born in the 1930s, at a time when the
utmost importance was put on stability, the agency has become the
focal point for almost every important innovation in technology. It is
the presumptive protector of the Internet, and the continued regulator
of radio, TV and satellite communications. In the next decades, it
could well become the default regulator for every new communications
technology, including, and especially, fantastic new ways to use
wireless technologies, which today carry television, radio, internet,
and cellular phone signals through the air, and which may soon provide
high-speed internet access on-the-go, something that Google cofounder
Larry Page calls "wifi on steroids."
If history is our guide, these new technologies are at risk, and with
them, everything they make possible. With so much in its reach, the
FCC has become the target of enormous campaigns for influence. Its
commissioners are meant to be "expert" and "independent," but they've
never really been expert, and are now openly embracing the political
role they play. Commissioners issue press releases touting their own
personal policies. And lobbyists spend years getting close to members
of this junior varsity Congress. Think about the storm around former
FCC Chairman Michael Powell's decision to relax media ownership rules,
giving a green light to the concentration of newspapers and television
stations into fewer and fewer hands. This is policy by committee,
influenced by money and power, and with no one, not even the
President, responsible for its failures.
The solution here is not tinkering. You can't fix DNA. You have to
bury it. President Obama should get Congress to shut down the FCC and
similar vestigial regulators, which put stability and special
interests above the public good. In their place, Congress should
create something we could call the Innovation Environment Protection
Agency (iEPA), charged with a simple founding mission: "minimal
intervention to maximize innovation." The iEPA's core purpose would be
to protect innovation from its two historical enemies—excessive
government favors, and excessive private monopoly power.
Since the birth of the Republic, the U.S. government has been in the
business of handing out "exclusive rights" (a.k.a., monopolies) in
order to "promote progress" or enable new markets of communication.
Patents and copyrights accomplish the first goal; giving away slices
of the airwaves serves the second. No one doubts that these monopolies
are sometimes necessary to stimulate innovation. Hollywood could not
survive without a copyright system; privately funded drug development
won't happen without patents. But if history has taught us anything,
it is that special interests—the Disneys and Pfizers of the world—have
become very good at clamoring for more and more monopoly rights.
Copyrights last almost a century now, and patents regulate "anything
under the sun that is made by man," as the Supreme Court has put it.
This is the story of endless bloat, with each round of new monopolies
met with a gluttonous demand for more.
The problem is that the government has never given a thought to when
these monopolies help, and when they're merely handouts to companies
with high-powered lobbyists. The iEPA's first task would thus be to
reverse the unrestrained growth of these monopolies. For example, much
of the wireless spectrum has been auctioned off to telecom monopolies,
on the assumption that only by granting a monopoly could companies be
encouraged to undertake the expensive task of building a network of
cell towers or broadcasting stations. The iEPA would test this
assumption, and essentially ask the question: do these monopolies do
more harm than good? With a strong agency head, and a staff absolutely
barred from industry ties, the iEPA could avoid the culture of
favoritism that's come to define the FCC. And if it became credible in
its monopoly-checking role, the agency could eventually apply this
expertise to the area of patents and copyrights, guiding Congress's
policymaking in these special-interest hornets' nests.
The iEPA's second task should be to assure that the nation's basic
communications infrastructure -- the spectrum, wires, cables, and
cellular towers that serve as the highways of the information
economy -- remains
open to new innovation, no matter who owns them. For example, "network
neutrality" rules, when done right, aim simply to keep companies like
Comcast and Verizon from skewing the rules in favor of or against
certain types of content and services that run over their networks.
The investors behind the next Skype or Amazon need to be sure that
their hard work won't be thwarted by an arbitrary decision on the part
of one of the gatekeepers of the Net. Such regulation need not, in my
view, go as far as some Democrats have demanded. It need not put
extreme limits on what the Verizons of the world can do with their
network—they did, after all, build it in the first place—but no doubt
a minimal set of rules is necessary to make sure that the Net
continues to be a crucial platform for economic growth.
Beyond these two tasks, what's most needed from the iEPA is benign
neglect. Certainly, it should keep competition information flowing
smoothly and limit destructive regulation at the state level, and it
might encourage the government to spend more on public communications
infrastructure, for example in the rural areas which private companies
often ignore. But beyond these limited tasks, whole phone books' worth
of regulation could simply be erased. And with it, we would remove
many of the levers that lobbyists use to win favors to protect today's
monopolists.
America's economic future depends upon restarting an engine of
innovation and technological growth. A first step is to remove the
government from the mix as much as possible. This is the biggest
problem with communication innovation around the world, as too many
nations that should know better continue to privilege legacy
communication monopolies. It is a growing problem in our own country
as well, as corporate America has come to believe that investments in
influencing Washington pay more than investments in building a better
mousetrap. That will only change when regulation is crafted as
narrowly as possible. Only then can regulators serve the public good,
instead of private protection. We need to kill a philosophy of
regulation born with the 20th century, if we're to make possible a
world of innovation in the 21st.
Lessig is a professor at Stanford Law School and the author of five
books, including most recently "Remix: Making Art and Commerce Thrive
in the Hybrid Economy."
I think this is misguided. Pegging Federal legislation to the
commerce clause has been successful (from the standpoint of having
such legislation upheld, whatever the legislation's substantive
merits), but it has also meant the legislation was subject to
arbitrary judicial doctrines related to the commerce clause.
Just a note on where this strikes me.
Seth
--
RIAA is the RISK! Our NET is P2P!
http://www.nyfairuse.org/action/ftc
DRM is Theft! We are the Stakeholders!
New Yorkers for Fair Use
http://www.nyfairuse.org
[CC] Counter-copyright: http://realmeasures.dyndns.org/cc
I reserve no rights restricting copying, modification or distribution
of this incidentally recorded communication. Original authorship
should be attributed reasonably, but only so far as such an
expectation might hold for usual practice in ordinary social discourse
to which one holds no claim of exclusive rights.
Seth
ACTA draft leaks: nonprofit P2P faces criminal penalties
By Nate Anderson
Last updated February 4, 2009 10:03
Public interest groups and scholars are piecing together the
Anti-Counterfeiting Trade Agreement through leaks and sources. While
ISP filtering and "three strikes" rules are nowhere to be found,
noncommercial file-swapping done on a "commercial scale" could get
criminal penalties.
It's becoming clear that the Anti-Counterfeiting Trade Agreement
(http://arstechnica.com/old/content/2008/06/the-real-acta-threat-its-not-ipod-scanning-border-guards.ars)
is not, as backers have suggested, just a minor tuneup to worldwide
intellectual property law, one done for the purpose of cracking down
on fake DVD imports or Coach handbag ripoffs.
Such a law -- one that amounted essentially to some streamlining and
coordination in the fight against actual pirates -- might well be
hashed out between nations operating in secret. But a treaty that
seeks to apply criminal penalties to peer-to-peer file-sharing? Let's
open a window and let the sunlight in.
Rightsholders, especially in the music and movie businesses, have been
upfront about what they want
(http://arstechnica.com/old/content/2008/07/abig-wishlist-for-a-scary-secret-anticounterfeiting-pact.ars),
and it's a long and sometimes scary list. But it has been hard to know
if any of these ideas are actually moving forward in the ACTA
negotiating sessions, since none of the countries involved have seen
fit to release much in the way of useful information on the process --
to the public, anyway. Based on sources and leaked documents,
Knowledge Ecology International now asserts that ACTA drafts are in
fact "formally available to cleared corporate lobbyists and informally
distributed to corporate lawyers and lobbyists in Europe, Japan, and
the US"
(http://www.keionline.org/blogs/2009/02/03/details-emerge-of-secret-acta/).
As for what's in these drafts, which are too secret to be seen by the
public paying the negotiators' salaries, it's a long and mostly boring
list of items intended to stop or slow shipments of counterfeit goods.
But the ACTA proposals currently include language that would make
copyright infringement on a "commercial scale," even when done with
"no direct or indirect motivation of financial gain," into a criminal
matter.
Both KEI and Canadian law professor Michael Geist, who has been
working his own sources
(http://www.michaelgeist.ca/content/view/3660/125/), say that the
current proposals require all signatories to "establish a laundry list
of penalties -- including imprisonment -- sufficient to deter future
acts of infringement." Geist believes that P2P use is the obvious
target here, though such provisions might well be enacted only against
file-sharing hubs rather than end-users.
Camcording in theaters gets its own special section and must also be
considered a "criminal act" by countries that adopt ACTA. Handling
fake movie and movie packages would also get the deluxe criminal
treatment.
Also interesting, though of less concern to end users, are the
proposed "border measures" that would allow customs agents to act
against counterfeit goods. According to Geist:
The proposals call for provisions that would order
authorities to suspend the release of infringing goods
for at least one year, based only on a prima facie claim
by the rights holder. Customs officers would be able to
block shipments on their own initiative, supported by
information supplied by rights holders. Those same
officers would have the power to levy penalties if the
goods are infringing. Moreover, the US would apparently
like a provision that absolves rights holders of any
financial liability for storage or destruction of the
infringing goods.
ACTA will also become a sort of institution, with the creation of an
ACTA Oversight Council to coordinate enforcement, schedule yearly
meetings, etc.
Taking the hint
Some governments appear to be listening to the public criticism of the
process. Michael Geist filed an Access to Information Act request with the
Canadian government and dug up a few new interesting documents
(http://www.michaelgeist.ca/content/view/3653/125/).
One shows that Canadian negotiators at least read the documents
submitted during public consultations; in it, negotiators noted "that
the issue of transparency in the formal negotiation process is
important for many of our stakeholders. Stakeholders have also
requested additional information on the scope of the proposed ACTA, as
well as further information on why the Agreement is being negotiated
in this manner, as opposed to in existing multilateral fora such as
WIPO or the WTO."
Such concerns are apparently being aired to the other ACTA countries,
though it's clear that not everyone involved shares the Canadian
public's interest in transparent process. The US Trade Representative
has held a public consultation and a public meeting in DC to address
concerns about ACTA, but little information has been forthcoming.
Just how little? Public Knowledge today detailed its struggle to get
some ACTA documents using a Freedom of Information Act Request
(http://www.publicknowledge.org/node/1975). The government turned over
159 pages of information, then indicated it had found another 1,390
pages.
But, according to PK attorney Sherwin Siy, "Of these, 1,390 will be
withheld under various statutory exemptions to the FOIA. Yes, that's
all of them."
Things haven't been much better in the EU, where a Dutch group has
tried to access documents and gotten nowhere
(http://arstechnica.com/old/content/2008/11/eu-denies-acta-document-request-democracy-undermined.ars).
When have we ever lied to you?
So what we have to go on are largely leaked documents and the bland
reassurances of governments. The European Union is a good example of
the latter. "ACTA is about tackling large scale, criminal activity,"
we are told. "It is not about limiting civil liberties or harassing
consumers."
It goes on to say, "ACTA is not designed to negatively affect
consumers: the EU legislation (2003 Customs Regulation) has a de
minimis clause that exempts travellers from checks if the infringing
goods are not part of large scale traffic. EU customs, frequently
confronted with traffics of drugs, weapons or people, do neither have
the time nor the legal basis to look for a couple of pirated songs on
an i-Pod music player or laptop computer, and there is no intention to
change this."
(i-Pod?)
And if you think that these negotiations are taking place in smoky
backrooms to which only corporate lobbyists have access, you couldn't
be more wrong. "It is alleged that the negotiations are undertaken
under a veil of secrecy," says the EU. "This is not correct. For
reasons of efficiency, it is only natural that intergovernmental
negotiations dealing with issues that have an economic impact, do not
take place in public and that negotiators are bound by a certain level
of discretion."
Let's hope it's all true, and that the concerns of civil society
groups and citizens really are considered. Perhaps they are; perhaps
they will be. But it's certainly hard to tell at the moment. The EU
and everyone else involved can issue all the say-nothing press
releases they like
(http://ec.europa.eu/trade/issues/sectoral/intell_property/pr091008_en.htm),
but those who care about the issue would feel a lot better if they
could see the evolving drafts and have some process for providing
input on them; asking people to turn in one document months ago
without having seen even a set of proposals hardly qualifies as an
"open process."
Still, when you consider that RIAA-backed measures like ISP filtering
(http://arstechnica.com/old/content/2008/06/inside-the-riaas-acta-wishlist.ars)
don't appear in the current leaked drafts and MPAA-backed "three
strikes" laws also appear to be absent, perhaps the governments truly
are attempting to keep ACTA focused on industrial-scale piracy. It
would just be comforting to have something other than their words to
go by.
RIAA Takes Over DOJ
February 6, 2009
Posted by Alan Wexelblat
OK, enough with the funny stuff. The new Obama administration is
shaping up to be a disaster for Copyfighters everywhere. In particular,
the new Department of Justice is stacked with lawyers who've been on
the wrong side of copyright and intellectual property lawsuits for the
last eight years.
First off, there's the #3 man at Justice, Thomas Perrelli, accurately
described by CNET as "beloved by the RIAA"
(http://news.cnet.com/8301-13578_3-10133425-38.html). Not only has
this guy been on the wrong side in the courtroom, he's fingered as
instrumental in convincing the Copyright Board to strangle Web radio
in its crib by imposing impossible fee structures
(http://copyfight.corante.com/archives/2007/03/05/copyright_office_set_to_kill_web_radio.php).
Then there's Neil MacBride, who used to be the Business Software
Alliance's general counsel
(http://www.bsa.org/country/Public%20Policy/Copyright.aspx). The BSA,
to its credit, hasn't been suing teenagers. Generally their name is
associated with large-scale raids on companies that are mass-producing
illegal copies of software
(http://www.nationmultimedia.com/2009/02/03/technology/technology_30094856.php).
Still, it's an industry flak group.
Then there's the #2 man, currently slated to be David Ogden. If that
name only rings a faint bell it's because you have to cast your mind
back to Eldred v Ashcroft, the argument on whether retroactive
copyright term extensions were legal
(http://eldred.cc/eldredvashcroft.html). Sitting over there on
Ashcroft's side? That's Mr. Ogden. For extra-bonus ick points, Ogden
also was involved in defending the heinous COPA legislation,
fortunately now dead and buried (but not forgotten)
(http://epic.org/free_speech/censorship/copa.html).
The capper on this line-up of suspicious characters is Donald
Verrilli, now up for Associate Deputy Attorney General. This specimen
of legal acumen is front and center in the Cartel's jihad, having
appeared for Viacom when it sued YouTube
(http://news.cnet.com/Viacom-sues-Google-over-YouTube-clips/2100-1030_3-6166668.html),
for the RIAA against Jammie Thomas, single mother
(http://copyfight.corante.com/archives/2007/10/05/cartel_gets_big_money_to_fill_in_big_hole.php).
And if we peer back a little farther, we find Verrilli's dirty
fingerprints on MGM v Grokster
(http://copyfight.corante.com/archives/2005/06/27/mgm_v_grokster_what_happened.php).
So what does all this portend? Well, if you ask Julian Sanchez over at
Portfolio.com he thinks it's a tempest in a teapot
(http://www.portfolio.com/views/blogs/the-tech-observer/2009/02/06/obamas-doj-picks-cause-for-concern).
He thinks they'll all behave and recuse themselves properly, and that
just because a lawyer consistently goes to bat for a certain kind of
client doesn't mean much about the lawyer's own professional views.
Lawyers are paid guns, after all, and the Cartel's side has
consistently paid well.
Declan McCullagh, over at CNET, is much less sanguine, pointing out
that many of these cases are still ongoing (e.g. big lawsuits against
YouTube) and further noting that Vice President Biden showed a great
deal of hostility toward free use when he was in the Senate
(http://news.cnet.com/8301-13578_3-10157381-38.html).
I'm on Declan's side. To the extent that someone has to set the tone
of this administration in dealing with intellectual property matters,
it's looking pretty grim.
Hollywood's lobbyists are running all over the Hill trying to sneak a
copyright filtering provision into the stimulus package. The
amendment would allow ISPs to "deter" child pornography and copyright
infringement through network management techniques. The amendment is
very, very controversial for a couple of reasons:
1. First, infringement can't be found through "network
management" techniques. There are legal uses for
copyrighted works even without permission of the owner.
2. Second, it would require Internet companies to examine
every bit of information everyone puts on the Web in
order to find those allegedly infringing works, without
a hint of probable cause. That would be a massive
invasion of privacy, done at the request of one
industry, violating the rights of everyone who is
online.
Right now, we need you to contact a few key Senators -- Majority Leader
Harry Reid, Chairman of the Appropriations Committee Daniel Inouye,
Chairman of the Commerce Committee Jay Rockefeller, Chairman of the
Finance Committee Max Baucus, and senior member of the Appropriations
Committee Senator Barbara Mikulski -- and tell them to leave out this
controversial provision.
**Fax a message to these Senators NOW from:**
<http://www.publicknowledge.org/alertfax/1983>
or,
**Call these Senators NOW via Cause Caller at:**
-----
Copyright: Public Knowledge
1875 Connecticut Ave, NW
Suite 650
Washington, DC 20009
License: CC-BY-SA http://creativecommons.org/licenses/by-sa/3.0/
> http://www.fsf.org/news/reoppose-tls-authz-standard
Send comments opposing TLS-authz standard by February 11
Last January, the Free Software Foundation issued an alert about
efforts at the Internet Engineering Task Force (IETF) to sneak a
patent-encumbered standard for "TLS authorization" through a back-door
approval process that was referenced as "experimental" or
"informational"
(http://www.fsf.org/news/reoppose-tls-authz-standard/newsitem_view).
The many comments sent to IETF at that time alerted committee members
to this attempt and successfully prevented the standard gaining
approval.
Unfortunately, attempts to push through this standard have been
renewed and have become more of a threat. The proposal now before the
IETF has had its status changed from "experimental" to "proposed
standard". The FSF
is again issuing an alert and request for comments to be sent urgently
and prior to the February 11 deadline to ie...@ietf.org. Please
include us in your message by a CC to camp...@fsf.org. You should
also expect an automated reply from ie...@ietf.org, which you will need
to answer to confirm your original message.
The patent in question is claimed by RedPhone Security
(https://datatracker.ietf.org/ipr/1026/). RedPhone has given a
license to anyone who implements the protocol, but it still threatens
to sue anyone who uses it.
If our voice is strong enough, the IETF will not approve this standard
on any level unless the patent threat is removed entirely with a
royalty-free license for all users.
Further background for your comment
See the IETF summary:
> http://www.ietf.org/mail-archive/web/ietf-announce/current/msg05617.html
Much of the communication on the Internet happens between computers
according to standards that define common languages. If we are going
to live in a free world using free software, our software must be
allowed to speak these languages.
Unfortunately, discussions about possible new standards are tempting
opportunities for people who would prefer to profit by extending
proprietary control over our communities. If someone holds a software
patent on a technique that a programmer or user has to use in order to
make use of a standard, then no one is free without getting permission
from and paying the patent holder
(http://www.gnu.org/philosophy/fighting-software-patents.html). If we
are not careful, standards can become major barriers to computer users
having and exercising their freedom.
We depend on organizations like the Internet Engineering Task Force
(IETF) and the Internet Engineering Steering Group (IESG) to evaluate
new proposals for standards and make sure that they are not encumbered
by patents or any other sort of restriction that would prevent free
software users and programmers from participating in the world they
define.
In February 2006, a standard for "TLS authorization" was introduced in
the IETF for consideration
(http://tools.ietf.org/wg/tls/draft-housley-tls-authz-extns-07.txt).
Very late in the discussion, a company called RedPhone Security
disclosed (this disclosure has since been removed from the IETF
website) that it had applied for a patent which would need to be
licensed to anyone wanting to practice the standard
(https://datatracker.ietf.org/ipr/833/). After this disclosure, the
proposal was rejected.
Despite claims that RedPhone has offered a license for implementation
of this protocol, users of the protocol would still be threatened by
the patent. The IETF should continue to oppose this standard until
RedPhone provides a royalty-free license for all users.
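For readers curious about the wire-level object under dispute: the
draft rides on TLS's ordinary hello-extension mechanism, in which each
extension is framed as a two-byte type, a two-byte length, and opaque
data; the patent claims concern the authorization technique carried
inside, not this framing. The sketch below shows just that generic
framing in Python. The client_authz code point 7 and the
x509_attr_cert format value 0 are the numbers later published in RFC
5878 and are assumptions as far as the -07 draft itself is concerned.

    # Illustrative framing of a TLS hello extension; the constants are
    # assumptions taken from RFC 5878 rather than from the -07 draft.
    import struct

    def tls_extension(ext_type: int, data: bytes) -> bytes:
        """Frame one extension for a ClientHello/ServerHello extensions block."""
        return struct.pack("!HH", ext_type, len(data)) + data

    CLIENT_AUTHZ = 7      # extension type later assigned in RFC 5878
    X509_ATTR_CERT = 0    # one AuthzDataFormat value from the same registry

    # extension_data: a length-prefixed list of one-byte authz format codes
    authz_formats = bytes([1, X509_ATTR_CERT])

    print(tls_extension(CLIENT_AUTHZ, authz_formats).hex())  # -> 000700020100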
Media Contacts
Peter T. Brown
Executive Director
Free Software Foundation
(617)542-5942
camp...@fsf.org
---
> http://www.computerworlduk.com/community/blogs/index.cfm?blogid=14&entryid=1845
Help Fight This Patent-Encumbered IETF Standard
February 10, 2009
Posted by: Glyn Moody
I've written numerous times about the importance of writing to
governments about their hare-brained schemes, but this one is rather
different. In this case, it's the normally sane Internet Engineering
Task Force that wants to do something really daft. The FSF explains:
Last January, the Free Software Foundation issued an alert about
efforts at the Internet Engineering Task Force (IETF) to sneak a
patent-encumbered standard for "TLS authorization" through a back-door
approval process that was referenced as "experimental" or
"informational". The many comments sent to IETF at that time alerted
committee members to this attempt and successfully prevented the
standard gaining approval.
Unfortunately, attempts to push through this standard have been
renewed and have become more of a threat. The proposal now before the
IETF has had its status changed from "experimental" to "proposed
standard".
This is a throwback to the bad old days of sneaking patents into
nominal standards. It is yet another reason why such patents should
not be given in the first place. But until such time as the patent
offices around the world come to their senses, the only option is to
fight patent-encumbered standards on an individual basis. Here are the
details for doing so:
The FSF is again issuing an alert and request for comments to be sent
urgently and prior to the February 11 deadline to ie...@ietf.org.
Please include us in your message by a CC to camp...@fsf.org. You
should also expect an automated reply from ie...@ietf.org, which you
will need to answer to confirm your original message.
Here's what I've sent:
I am writing to ask you not to approve the proposed patent-encumbered
standard for TLS authorisation. To do so would fly in the face of the
IETF's fundamental commitment to openness. It would weaken not just
the standard itself, but the IETF's authority in this sphere.
---
> http://www.ietf.org/mail-archive/web/ietf-announce/current/msg05617.html
Fourth Last Call: draft-housley-tls-authz-extns
* To: IETF-Announce <ietf-announce at ietf.org>
* Subject: Fourth Last Call: draft-housley-tls-authz-extns
* From: The IESG <iesg-secretary at ietf.org>
* Date: Wed, 14 Jan 2009 08:18:20 -0800 (PST)
* List-archive: <http://www.ietf.org/pipermail/ietf-announce>
* Reply-to: ietf at ietf.org
On June 27, 2006, the IESG approved "Transport Layer Security (TLS)
Authorization Extensions," (draft-housley-tls-authz-extns) as a
proposed standard. On November 29, 2006, Redphone Security (with whom
Mark Brown, a co-author of the draft, is affiliated) filed IETF IPR
disclosure 767.
Because of the timing of the IPR Disclosure, the IESG withdrew its
approval of draft-housley-tls-authz-extns. A second IETF Last Call
was initiated to determine whether the IETF community still had
consensus to publish draft-housley-tls-authz-extns as a proposed
standard given the IPR claimed. Consensus to publish as a standards
track document was not demonstrated, and the document was withdrawn
from IESG consideration.
A third IETF Last Call was initiated to determine whether the IETF
community had consensus to publish draft-housley-tls-authz-extns as an
experimental track RFC with knowledge of the IPR disclosure from
Redphone Security. Consensus to publish as experimental was not
demonstrated; a substantial segment of the community objected to
publication on any track in light of the IPR terms.
Since the third Last Call, RedPhone Security filed IETF IPR disclosure
1026. This disclosure statement asserts in part that "the techniques
for sending and receiving authorizations defined in TLS Authorizations
Extensions (version draft-housley-tls-authz-extns-07.txt) do not
infringe upon RedPhone Security's intellectual property rights". The
full text of IPR disclosure 1026 is available at:
https://datatracker.ietf.org/ipr/1026/
This Last Call is intended to determine whether the IETF community has
consensus to publish draft-housley-tls-authz-extns as a proposed
standard given IPR Disclosure 1026.
The IESG is considering approving this draft as a standards track RFC.
The IESG solicits final comments on whether the IETF community has
consensus to publish draft-housley-tls-authz-extns as a proposed
standard. Comments can be sent to ietf at ietf.org or exceptionally to
iesg at ietf.org. Comments should be sent by 2009-02-11.
A URL of this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-housley-tls-authz-extns-07.txt
Has Divestiture Worked?
A 25th Anniversary Assessment of the Break Up of AT&T.
DATE: FRIDAY, MARCH 6th, 2009 TIME: 6PM-9PM
LOCATION: New York University, Warren Weaver Hall (251 Mercer), Room
109
PRICE: ADMISSION IS FREE. (RSVP requested, rs...@bway.net)
In 1984, AT&T, then the largest company in the U.S., was broken up
because of the monopoly controls "Ma Bell" had over
telecommunications. Known as "Divestiture", we have reached the 25th
anniversary of the AT&T breakup and it is time to look carefully and
critically at the deregulation of telecommunications to evaluate the
effectiveness of this important
economic policy.
The Open Infrastructure Alliance (OIA), together with the Internet
Society (ISOC) New York chapter, is convening a series of panels to
discuss the deregulation of the telecommunications industry. Among
the key issues to be considered are:
. Has divestiture worked? A careful examination of the consequences
of divestiture and deregulation over the last 25 years.
. America is ranked 15th in the world in broadband. What role do
America's closed broadband networks (e.g., Verizon's FiOS and AT&T's
U-Verse) play in such a ranking? Do closed networks fulfill last mile
requirements of the Telecom Act of 1996?
. The Obama administration and Congress have put together a massive
economic stimulus package, including broadband infrastructure
projects. Does this new legislation address the major issues or are
other steps necessary?
The dialogue will assess whether deregulation has helped or harmed
America's digital future. What role should a new, reconstituted FCC
play? What policies and programs are needed to make America #1 again
in technology, broadband and the Internet?
Confirmed Speakers:(More to Come)
. Tom Allibone, LTC Consulting
. Jonathan Askin, Esq, Brooklyn Law School
. Dave Burstein, DSL Prime
. Frank A. Coluccio, Cirrant Partners Inc
. Mark Cooper, Consumer Federation of America
. Alex Goldman, ISP Planet
. Fred Goldstein, Ionary Consulting
. Bruce Kushnick, New Networks Institute
. Dean Landsman, Landsman Communications Group
. Scott McCollough, Esq.
. Joe Plotkin, Bway.net
. David Rosen, Consultant
. Dana Spiegel, NYCwireless
Market:
. A 25 year analysis of the Age of the Bell companies.
. How did America become 15th in the world in broadband?
. What is the role of the cable and phone companies?
. What happened to the price of phone service?
. Is wireless overtaking wireline services?
Regulation:
. Has deregulation helped or harmed America's digital future?
. How do we deal with corporate controls over the FCC, or should we
scrap the FCC?
. How do we fund and create open, ubiquitous, high-speed networks?
. What should happen next with wireless services?
. What is the status of competition today, and what needs to be
changed for the future?
. What applications are going to drive the next generation?
. Is it time for another divestiture or other regulatory changes?
For More Information:
Joe Plotkin
T: 646-502-9796
E: bwa...@bway.net
Internet Society, NY Chapter
E: pres...@isoc-ny.org
Complete agenda and speaker bios at:
http://25thanniversaryofthebreakupofatt.blogspot.com/
EVENT: Has Divestiture Worked?
A 25th Anniversary Assessment of the Breakup of AT&T
DATE: *TOMORROW* FRIDAY, MARCH 6th, 2009 TIME: 6PM-9PM
LOCATION: New York University, Warren Weaver Hall 251 Mercer St. Room
109.
PRICE: ADMISSION IS FREE. (RSVP requested, by email to: rs...@bway.net
-or- Facebook, LinkedIn, or MeetUp ISOC-NY)
If you're concerned with the future of the Internet:
How does America get gigabit, open, and ubiquitous broadband telecom
infrastructure?
The goal of this conference is to outline the history of the last 25
years, discuss the current market issues, then give a view of the
future of broadband and telecom in the US that has been mostly untold
in the media. It is a future that leads to ubiquitous, very high speed
networks based on an infrastructure that is open to all competitors --
giving customers choice, lower prices and new quality products and
innovative services. Such an infrastructure is widely acknowledged as
critical for long-term economic growth.
*** NEW SPEAKERS ADDED ***
Carl Mayer, Esq. The State of Privacy in the US.
Kenneth Levy Former FCC telecom lawyer at the time of
Divestiture.
Lou Klepner The New York City Co-op Fiber Network
AGENDA (subject to change)
PANEL 1: Historical perspective:
* Bruce Kushnick An overview and leading financial
indicators. What happened over the
last 25 years?
* Tom Allibone Consumers: telephony costs and other
& Dean Landsman issues of telephony and broadband.
* Ken Levy Living history, perspective from
within FCC during the Break Up!
* Alex Goldman ISP/CLEC industry: regulatory
follies over the past decade
* Mark Cooper The Failure of Market Fundamentalism
in the Telecom Sector: How
Deregulation Derailed Divestiture or
The Operation was Successful, but
the Patient Died
PANEL 2: The Present State:
* Jonathan Askin The legal/regulatory environment
then and now.
* Dave Burstein Broadband market roundup
* Joe Plotkin Small business broadband needs, and
surviving as a small competitive
provider.
* David Rosen What filmmakers and other creators
need to know
* Carl Mayer Privacy and the latest on the
wiretapping case
PANEL 3: The Future State and Alternative Approaches:
* Fred Goldstein The current state of fiber optic
networks. Are new models like
Structural Separation needed now?
* Lou Klepner NYC co-op fiber network
* Dana Spiegel The future of broadband spectrum
* W. Scott McCollough Legally rewiring telecom
infrastructure: What is possible?
Divestiture2? Separation?
===================
For More Information:
Joe Plotkin
T: 646-502-9796
E: bwa...@bway.net
Internet Society, New York Chapter
E: pres...@isoc-ny.org
===================
FOR THOSE NOT IN NEW YORK CITY we've arranged a webcast.
ALSO
*** NEW SPEAKERS ADDED ***
Carl Mayer, Esq. The State of Privacy in the US.
Kenneth Levy Former FCC telecom lawyer at the time of
Divestiture.
Lou Klepner The New York City Co-op Fiber Network
For Immediate Release
(CONTACT INFO BELOW)
EVENT: Has Divestiture Worked?
A 25th Anniversary Assessment of the Breakup of AT&T
WEBCAST AVAILABLE: For those not in New York City, a
live webcast of the event will be available.
http://www.communityfiberproject.net/stream/
Complete agenda and speaker bios (SEE NEW SPEAKERS) at:
http://25thanniversaryofthebreakupofatt.blogspot.com/
DATE: * FRIDAY, MARCH 6th, 2009 TIME: 6PM-9PM
LOCATION: New York University, Warren Weaver Hall 251
Mercer St. Room 109.
PRICE: ADMISSION IS FREE. (RSVP requested, by email to:
rs...@bway.net -or- Facebook, LinkedIn, or MeetUp
ISOC-NY)
_______________________________________________
> http://techdailydose.nationaljournal.com/2009/03/onewebday-founder-tapped-by-ob.php
OneWebDay Founder Tapped By Obama
Monday, March 23, 2009
Internet law expert Susan Crawford has joined President Barack Obama's
lineup of tech policy experts at the White House, according to several
sources. She will likely hold the title of special assistant to the
president for science, technology, and innovation policy, they said.
Crawford, who was most recently a visiting professor at the University
of Michigan and at Yale Law School, was tapped by Obama's transition
team in November to co-chair its FCC review process with University of
Pennsylvania professor Kevin Werbach. Her official administration
appointment has not been formally announced. Crawford may be best
known for her work with the Internet Corporation for Assigned Names
and Numbers, the California-based nonprofit group that manages the
Internet address system. She served on ICANN's board for three years
beginning in December 2005. She also founded OneWebDay, a global Earth
Day for the Internet that takes place every Sept. 22. Crawford, a Yale
graduate, clerked for U.S. District Judge Raymond Dearie before
joining Wilmer, Cutler & Pickering where she worked until the end of
2002.
--
RIAA is the RISK! Our NET is P2P!
http://www.nyfairuse.org/action/ftc
DRM is Theft! We are the Stakeholders!
New Yorkers for Fair Use
http://www.nyfairuse.org
[CC] Counter-copyright: http://realmeasures.dyndns.org/cc
I reserve no rights restricting copying, modification or distribution
of this incidentally recorded communication. Original authorship
should be attributed reasonably, but only so far as such an
expectation might hold for usual practice in ordinary social discourse
to which one holds no claim of exclusive rights.
_______________________________________________
Reading Rights Coalition (member orgs listed below):
> http://www.readingrights.org/
> http://www.readingrights.org/take-action-now
Petition (5,528 signatures at this point) (text pasted below):
> http://www.thepetitionsite.com/1/We-Want-To-Read
Call the Authors Guild: 1-212-563-5904
Next Demo:
LA Times Festival of Books
http://www.latimes.com/extras/festivalofbooks/
Saturday April 25 and Sunday April 26
Time: TBA
Location: UCLA
405 Hilguard Avenue,
Los Angeles, CA 90095
The sign I chose :-) :
> http://www.keionline.org/blogs/wp-content/uploads/2009/04/newyorker3-150x150.jpg
Video of the April 7 Demo:
> http://abraham.omnicypher.com/2009/04/authors-guild-protest-thoughts-24-hours.html
> http://i.gizmodo.com/5202554/photos-and-video-from-the-national-federation-of-the-blinds-kindle-2-protest
> http://www.cnn.com/video/?JSONLINK=/video/ireports/2009/04/09/irpt.publisher.protest.cnn
Accounts of the April 7 demo:
> http://www.keionline.org/blogs/2009/04/08/notes-from-kindle2-protest/
> http://www.eff.org/deeplinks/2009/04/protest-kindle-drm
> http://www.betanews.com/article/Protesters-confront-Authors-Guild-over-Kindle-texttospeech/1239308961
> http://www.phillyburbs.com/news/news_details/article/222/2009/april/10/cant-hear-what-others-can-see.html
Authors' Guild: Protest "Unfortunate and Unnecessary":
> http://authorsguild.org/advocacy/articles/kindle-accessibility.html
James Love cites statements of petition signatories:
> http://www.huffingtonpost.com/james-love/people-vs-the-authors-gui_b_183533.html
Chronology:
Feb 9, 2009. Release of Kindle 2
Feb 24, 2009. Roy Blount Jr., President of the Authors
Guild (AG) wrongly claims TTS would be an infringement
of copyright and a threat to audio books in a New York
Times op-ed.
Feb 27, 2009. Under pressure from the Authors Guild,
Amazon announced it would modify its system so authors
and publishers could turn off TTS on a title-by-title
basis.
The National Federation of the Blind initiated a
dialogue with the AG.
The Authors Guild proposed a separate registration
system, which was rejected by representatives of
reading-disabled persons.
The Authors Guild then proposed to make e-book TTS
available at additional cost.
March 16, 2009. Letter from the coalition to the six
main publishers.
March 19, 2009. Amazon announced on its Kindle Blog that
it will make the menus and controls on the device
fully accessible to blind people.
April 7, 2009. The Reading Rights Coalition kicks off its
campaign to reverse the stance of authors and
publishers who have disabled text-to-speech with a
protest in New York City (see pictures at the end of
the post).
Current Reading Rights Coalition Members:
Please use the Contact Us form (http://www.readingrights.org/contact)
if your organization wants to join this effort.
1. AbilityNet
2. American Association of People with Disabilities
3. American Council of the Blind
4. American Foundation for the Blind
5. Arc of the United States
6. Association of Blind Citizens
7. Association on Higher Education And Disability
8. Bazelon Center for Mental Health Law
9. Burton Blatt Institute
10. DAISY Consortium
11. Disability 411
12. Disability Rights Education and Defense Fund
13. IDEAL Group, Inc.
14. International Center for Disability Resources on the Internet
15. International Dyslexia Association
16. International Dyslexia Association – New York Branch
17. Jewish Guild for the Blind
18. Knowledge Ecology International
19. Learning Disabilities Association of America
20. Lighthouse International
21. LightHouse – San Francisco
22. National Association of Law Students with Disabilities
23. National Center for Learning Disabilities
24. National Disability Rights Network
25. National Federation of the Blind
26. NISH (formerly National Institute for the Severely Handicapped)
27. National Spinal Cord Injury Association
28. Smart Kids with Learning Disabilities
29. United Cerebral Palsy
30. Xavier Society for the Blind
---
Text of the petition:
> http://www.thepetitionsite.com/1/We-Want-To-Read
We the undersigned insist that the Authors Guild and Amazon not
disable the text-to-speech capability for the Kindle 2.
There are 15 million Americans who are blind or dyslexic, or who have
spinal cord injuries or other disabilities that impede their ability
to read visually. The print-disabled have for years utilized
text-to-speech technology to read and access information. As
technology advances and more books move from hard-copy print to
electronic formats, people with print disabilities have for the first
time in history the opportunity to enjoy access to books on an equal
basis with those who can read print.
Authors and publishers who elect to disallow text-to-speech for their
e-books on the Kindle 2 prevent the print-disabled from enjoying these
e-books.
Denying universal access will result in more and more people with
disabilities being left out of education, employment, and the societal
conversation. We will all suffer from the absence of diverse
participation and contribution to the debates that occupy us as a
society.
Furthermore, we oppose the Authors Guild's demand that this capability
be turned off, because many more books would be sold if
text-to-speech remained available. Not only does this feature benefit
persons with disabilities, but it also helps persons for whom English
is not their native language. In an increasingly mobile society,
flexible access to content improves the quality of life for everyone.
There can be no doubt that access to the written word is the
cornerstone of education and democracy. New technologies must serve
individuals with disabilities, not impede them. Our homes, schools,
and ultimately our economy rely on support for the future, not
discriminatory practices and beliefs from the past.
Thank you for your time and consideration in this important matter.
On Mon, 13 Apr 2009, Kevin Donovan <kdono...@gmail.com> wrote:
> Has anyone who has been following this carefully considered blogging it?
>
> And has the board considered contacting them to join the coalition?
One of the most important things is to show up at every rally.
Every partisan makes a difference. Signs which conform to the
strict government rules should be brought.
oo--JS.
> (Join the Reading Rights Coalition in opposing the Authors' Guild's
> attempt to claim the power to control your right to read with a device
> that parses and processes text. Last Tuesday a coalition led by
> concerned disabilities constituencies stood up for our right to own
> and use fully functional computing devices. They are looking for
> 10,000 signatures on their petition -- let's push it past that -- and
> are planning to continue demonstrations, now on both coasts. I can't
> gun outreach right now, but we can all forward the following links and
> sign on . . . -- Seth)
>
We'd like to help get some Defective By Design people to the April 25/26
event.
I'm not sure about the petition text; it stops a lot shorter than I'd
like.
Can you coordinate with me/us on ways we can promote this on
defectivebydesign.org? More photos, other posts to link to, anything we
could help with...
--
John Sullivan
Free Software Foundation
ISOC-Philippines statement on the jail sentence for The Pirate Bay
founders and the criminal charges against philosophy professor Horacio
Potel
By isoc-ph, on April 20, 2009, 2:05 am
The Internet Society Philippines’ (ISOC-PH) Public Policy Principles
and activities are based upon a fundamental belief that “The Internet
is for everyone.” ISOC-PH upholds and defends core values that allow
people throughout the world to enjoy the benefits of the Internet.
Recent developments, however, demonstrate an alarming trend towards a
“license culture” on the Internet, imposed by the criminalization of
those whose cultures and societies advance creativity, innovation and
economic opportunity through the values of openness, sharing,
education and collaboration.
Philosophy professor Horacio Potel from Argentina is facing criminal
charges for maintaining a personal and educational website devoted to
Spanish translations of works by French philosopher Jacques Derrida.
A court in Sweden has found the four men behind “The Pirate Bay”, a
file-sharing website, guilty of breaking copyright law; they were
sentenced to a year in jail and ordered to pay $4.5m (£3m) in damages.
The Ability to Share is one of ISOC’s core values. The many-to-many
architecture of the Internet makes it a powerful tool for sharing,
education, and collaboration. It has enabled the global open source
community to develop and enhance many of the key components of the
Internet, such as the Domain Name System and the World-Wide Web, and
has made the vision of digital libraries a reality. To preserve these
benefits we will oppose technologies and legislation that would
inhibit the freedom to develop and use open source software or limit
the well-established concept of fair use, which is essential to
scholarship, education, and collaboration.
We will also oppose excessively restrictive governmental or private
controls on computer hardware or software, telecommunications
infrastructure, or Internet content. Such controls and restrictions
substantially diminish the social, political, and economic benefits of
the Internet.
The wire-tapping, searches and seizures, the removal of website
content and the criminal charges against professor Potel of the
University of Buenos Aires are an onslaught on human rights and
academic freedom in Argentina and on the Internet.
The police seizures of servers, the enormous bill for damages and the
jail sentences imposed on Fredrik Neij, Gottfrid Svartholm Warg, Carl
Lundstrom and Peter Sunde are a defiance of the social and cultural
institution of file-sharing in Sweden and on the Internet.
ISOC-PH founding member and lawyer Michael Dizon writes, “Putting
greater emphasis on the development of social or community norms and
how people can actively participate in the creation of these norms …
may be more advantageous in advancing creative culture than resorting
to contractual agreements. Ideally, laws (and the licenses that seek
to enforce rights based on these laws) should embody and uphold the
norms and values of a community, and not the other way around.”
As founding president of the newly rejuvenated ISOC-Philippines
Chapter, I would like to dispute some of the statements being made
regarding the Pirate Bay trials, in particular, by John Kennedy,
Chairman and CEO of the International Federation of the Phonographic
Industry. Mr Kennedy says,
“This is good news for everyone, in Sweden and internationally, who is
making a living or a business from creative activity and who needs to
know their rights will be protected by law.”
In keeping with the ISOC-PH mandate, I find the claim that the global
model of copyright protection being imposed upon the developers and
users of the Internet is “good news for everyone” offensive to the
diversity of cultures on the Internet.
I also find it hard to accept the sincerity of Mr Kennedy’s statement
about “making a living or a business from creative activity.” In fact
only a handful of media corporations have effectively taken over what
used to be a very diverse field of creative activity.
Such a process of consolidation and privatization has created gross
inequality between artists and the big media corporations: relations
between artists and recording companies are replete with exploitative
contracts and bitter legal struggles for control; and royalties and
other earnings from copyright constitute only a fraction of the income
of most active professional artists.
The Pirate Bay trials and the criminal charges against professor Potel
are a threat to academic freedom and free speech, and they undermine
the Internet core value of the Ability to Share. If we envision a
future in which people in all parts of the world can use the Internet
to improve their quality of life, then freedom, and not a “license
culture”, must be obtained for professor Potel, the Pirate Bay
founders and the Internet communities of sharing.
ISOC-PH calls on all Internet citizens to demand freedom.
Fatima Lasay
President
Internet Society Philippines Chapter
http://isoc.ph/portal/
Quezon City, Philippines
April 20, 2009
Considering how deeply many free software orgs have delved into
various ways to modify and prohibit certain types of reuse, there is a
lot in common between Free Software legal mavens and the people
thinking about how to let author license text but not audio
digitizations of their work.
I think both parties, in this case, have fallen into a pit built into
our notions of 'copy' and 'derivative' and our acceptance of viral
licensing.
SJ
On Wed, 22 Apr 2009, Samuel Klein <met...@gmail.com> wrote:
> They don't want to control your right to read, they want to uphold an
> author's right to prohibit certain types of reuse. I hope the
> demonstrators, as well as both sides of the debate, grok the
> distinction.
>
> Considering how deeply many free software orgs have delved into
> various ways to modify and prohibit certain types of reuse, there is a
> lot in common between Free Software legal mavens and the people
> thinking about how to let author license text but not audio
> digitizations of their work.
>
> I think both parties, in this case, have fallen into a pit built into
> our notions of 'copy' and 'derivative' and our acceptance of viral
> licensing.
>
> SJ
Copyright law does not grant root on my machine to the author of
a file I have on my machine.
DRM is theft. It is theft of my computer.
Free software licenses deal with copying and distribution. Free
software licenses do not impede my use of my computer in my house.
oo--JS.
The particular control being protested against here prevents the book
being read using text-to-speech. So even if it was true that some
forms of control of texts do not affect reading, this one definitely
does.
> Considering how deeply many free software orgs have delved into
> various ways to modify and prohibit certain types of reuse,
If they did so then their software would not qualify as free.
> there is a
> lot in common between Free Software legal mavens and the people
> thinking about how to let author license text but not audio
> digitizations of their work.
Both groups must consider copyright law as a factor in achieving their
desired outcome but free software does so in order to protect the
freedom to use software rather than to remove the freedom to read.
Protecting and attacking freedom are not the same thing.
> I think both parties, in this case, have fallen into a pit built into
> our notions of 'copy' and 'derivative'
The publishers wish to protect their ability to prevent people from
reading (!), the public wish to protect free use and free speech.
These are very different ends despite being phrased in the legal
language of the state.
> and our acceptance of viral
> licensing.
Copyleft is not viral, nothing can catch a copyleft license through
mere proximity.
- Rob.
> http://www.thepirategoogle.com/
Bit Torrent Search
Please Note: This site is not affiliated with Google;
it simply makes use of Google Custom Search to
restrict your searches to Torrent files. You can do
this with any regular Google search by appending
filetype:torrent to your query. This technique can be
used for any type of file supported by Google
(http://www.google.com/).
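(For the curious, a minimal sketch of the technique the note
describes: building an ordinary Google search URL with the documented
filetype: operator appended. The function name and example query are
mine, purely for illustration.)

  from urllib.parse import urlencode

  def torrent_search_url(query):
      # Append Google's documented filetype: operator to the query,
      # then build an ordinary search URL around it.
      return ("https://www.google.com/search?" +
              urlencode({"q": query + " filetype:torrent"}))

  print(torrent_search_url("ubuntu 9.04"))
  # -> https://www.google.com/search?q=ubuntu+9.04+filetype%3Atorrent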
The intention of this site is to demonstrate the
double standard that was exemplified in the recent
Pirate Bay Trial. Sites such as Google offer much the
same functionality as The Pirate Bay and other Bit
Torrent sites but are not targeted by media
conglomerates such as the IFPI as they have the
political and legal clout to defend themselves unlike
these small independent sites.
This site is created in support of an open, neutral
internet, accessible and equitable to all regardless
of political or financial standing.
Cheers! (Contact: in...@thepirategoogle.com)
---
(Also see my comments to the FTC's Workshop on "P2P Risks" -- Seth)
> http://www.ftc.gov/os/comments/p2pfileshare/OL-100037.pdf
"The description of "P2P filesharing applications" presented in this
workshop's call for participation offers nothing to distinguish KaZaA,
Grokster or Gnutella from the basic functions of the Internet and
ordinary, generally used operating systems. It also makes no mention
of the core functionality that these applications actually do provide:
search and discovery of the locations of files. Sharing files among a
group of users is a basic network capability that operating systems
and networks already provide."
Workshop:
http://www.ftc.gov/bcp/workshops/filesharing/index.htm
Public Comments:
http://www.ftc.gov/os/comments/p2pfileshare/index.shtm
(Sorry if this gets sent out twice, having some mailing list issues.)
I wouldn't call it a double standard. Google provides a service that
aims much higher than simply enabling copyright infringement. However
you feel about copyright law, I don't think you can deny that TPB
induced copyright infringement (see
http://en.wikipedia.org/wiki/MGM_Studios,_Inc._v._Grokster,_Ltd.
[although the Grokster case was over software and not an online
service]) and would probably have been liable in the U.S. for the
infringement of its users.
On the other hand, Google indexes the web regardless of content. Their
search function has been found by the courts to be transformative (for
example see http://en.wikipedia.org/wiki/Perfect_10_v._Google and
http://en.wikipedia.org/wiki/Transformativeness) and aimed at
providing a kind of (clearly beneficial) public service.
Yes.
> but it does Free Culture more harm
> than good when you try to collapse the function of a service within a
> limited domain and its overall societal impact. Google is much more than a
> torrent search site, and just because it could be used as one in some
> limited domain doesn't mean that TPB ought to have any less liability for
> inducing infringement within the scope of their service. Questions of scope
> are legally important, and appropriately so IMHO.
Not just important, but essential. If you are a serious anarchist,
please advocate clearly for doing away with ridiculous 'laws' that
purport to limit what people and organizations can and cannot do with
public goods such as 'the tubes of the Internets'. I can respect
that.
If you believe that law as an institution has value and should be
supported, then collapsing Google and TPB to parallel cases actually
weakens our current excellent freedoms of network neutrality (one
solution is to monitor content!) and global access to content-neutral
hosting and cataloging (another is to require strong ISPs or enforced
blacklists).
The best argument against cracking down on torrent sites is to make
one that provides a significantly better and neutral search interface!
I myself have lots of things I want to torrent and share, and because
they aren't movies, shows, popular music, or collections of skinful
photos, it's surprisingly (alarmingly? unhelpfully) difficult to find
them.
SJ
who has only ever made one original torrent :
http://www.mininova.org/tor/1564237
I have an essay in a book that is available in torrent form through the
Pirate Bay as its official electronic distribution.
In addition to shameless self-promotion, I mention this to show that
there is at least some non-infringing use of TPB...
- Rob.
I would like to draw attention to the fact that The Pirate Bay is
called "The /Pirate/ Bay". It is pretty clear that its primary
intention is to facilitate software piracy. Does it have incidental
uses that are not for copyright infringement? Sure. But are we really
arguing that a website with the word "pirate" in its name wasn't
intending to aid people in piracy?
--
RM
rsmason.net
dreamersoften.blogspot.com
> I would like to draw attention to the fact that The Pirate Bay is
> called "The /Pirate/ Bay". It is pretty clear that its primary
> intention is to facilitate software piracy. Does it have incidental
> uses that are not for copyright infringement? Sure. But are we really
> arguing that a website with the word "pirate" in its name wasn't
> intending to aid people in piracy?
Piracy is the act of attacking ships.
If you're talking about unauthorized copying, there is a difference
between individual unauthorized copying for personal use and the
commercial widescale copying for profit.
I don't think there's anything to suggest that the pirate bay operators
actually made money from what they've done, in fact I'm sure they've
made a loss -- operating servers, maintaining them, etc costs money.
If you want to claim that DRM to limit digital copies of the text of a
work is OK in a way that DRM to limit automatic audio-transformations
of the work is not, I would like to hear your justification for that.
I think that a much broader swath of modern interpretation of
'copyright' is primitive, scale-limiting, and counterproductive to its
original intent, but there's nothing about this particular wrinkle
that I find worrisome in a novel way. (And focusing on trivia to the
exclusion of real problems can derail movements for years.)
Rob Myers writes:
> The particular control being protested against here prevents the book
> being read using text-to-speech. So even if it was true that some
> forms of control of texts do not affect reading, this one definitely
> does.
No more than the Kindle in general 'prevents the book from being used
to distribute digital copies of texts'. In particular, your Kindle
does indeed parse and process text, just as it does connect to the
internet and know how to transfer files. It simply doesn't let you
run certain operations (such as 'copy this file and email it to my
aunt' or 'use the built-in tts converter and read it out loud').
>> Considering how deeply many free software orgs have delved into
>> various ways to modify and prohibit certain types of reuse,
>
> If they did so then their software would not qualify as free.
I agree with your statement, under 'reasonable' definitions of "free".
The FSF definition does not qualify, for instance.
To the contrary, copyleft, including FSF "freedom", requires the full
strength of copyright-law-backed restrictions on reuse, for a minimum
of life+70 years. Some copyleft supporters would use infinite-term
copyleft if it were available. This imposing definition applied to a
very unimposing word (and held up as the standard for free sharing of
knowledge) is something I find rather offensive...
> Both groups must consider copyright law as a factor in achieving their
> desired outcome but free software does so in order to protect the
> freedom to use software rather than to remove the freedom to read.
> Protecting and attacking freedom are not the same thing.
Free software protects the right of the author to indefinitely and
granularly restrict future use of a work. It protects the freedom of
the author at the expense of the freedoms of reusers.
Designers of auto-text-to-speech restrictions protect the right of the
author to granularly restrict future use of a work; again protecting
this specific freedom of the author at the expense of the freedom of
reusers. Defining the freedoms protected and those restricted in each
situation is a good exercise; you may prefer one bundle of freedoms to
the other, but they are similar in form.
> Copyleft is not viral, nothing can catch a copyleft license through
> mere proximity.
Standard copyright and copyleft are both viral. You have to work at
it to make a license that fits within Copyright that is /not/ viral.
We can argue about what 'proximity' means, but in our gangly,
adolescent concept of copyright, all of the following are sufficient
proximity to pick up such a virus, [even though for instance many do
not involve significant creative alteration]:
http://en.wikipedia.org/wiki/User:Sj/rights#Acting_on_a_work
SJ
Yes, it was a dumb marketing move. But it's not legally or socially
much different from having a DRM book reader in the first place, imo.
> reader in general. #2 Is it really a derivative?
Good question. It sure is under current law: transformation from text
to audio is just one of many ways one can produce a derivative of a
work. My opinion: it should not be (but your examples below should).
I'd like to fix copyright law so that minimally-creative
transformations cannot be separately licensed, and so that
single-party licenses are limited in what sorts of hoops they can make
users jump through to be considered 'in compliance'. But note that
this is about how granular author's rights over 'derivatives' and
'interpretation' can be, not about the DRM itself.
> Suppose I scoured youtube
> and created a script that would play certain sections of videos one after
> another in sequence, which when listened to had the words for Cory
> Doctorow's book Little Brother. Now, Cory Doctorow is a pretty nice guy, and
> would probably post it to his blog, but is what I have created a derivative
> of his book?
My preferred rule of thumb is:
  IF you produced (or could reasonably produce?) the final work by
    applying a script or function that was designed without
    knowledge of the original works
    to the original work
    with minimal custom or creative tuning or alteration,
  THEN the resulting work is not derivative in a significantly
  creative way, and any rights you have to use, share, interpret,
  transform, or reuse w.r.t. the original work you should also have
  w.r.t. the final work.
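(A toy restatement of that rule of thumb as a predicate, purely to
make its shape explicit. The names are mine, and whether a
transformation is "generic" or "custom-tuned" is of course a human
judgment that no code can make.)

  def is_creative_derivative(designed_for_this_work, custom_tuning):
      # Per the rule of thumb above: a generic transformation (a
      # script designed without knowledge of the original, applied
      # with minimal custom tuning) adds no creative authorship, so
      # rights in the original should carry over to the result.
      return designed_for_this_work or custom_tuning

  # Kindle-style TTS: generic engine, no per-work tuning.
  print(is_creative_derivative(False, False))  # False: not creative
  # A script built specifically to reassemble one book from clips.
  print(is_creative_derivative(True, False))   # True: creative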
> Is the script a derivative of Doctorow's book? Is it a derivative of the original
> videos?
I think in your case you created a special script to do what you
wanted for this work, so yes, it and these other elements are
creatively derivative by my rule of thumb.
> Isn't this much like what happens when the Kindle "reads"? It scours
> for prerecorded sounds and slaps them together on the fly.
No, the Kindle presumably uses a tts program which has not been
specially trained and customized on its input texts, and is being
uncreatively applied to those texts. Custom readings by talented
readers who first grasped the emotion and importance of the passage
would be significantly different.
> I for one am not convinced a program, hardware, or script constitutes a true derivative work.
> I'm sure many will disagree.
There's no consensus on a lot of this, and not even a clear sense of
philosophical direction that I can see (stark tell me if you have good
examples!), just lots of case law.
SJ
> Piracy is the act of attacking ships.
>
> If you're talking about unauthorized copying, there is a difference
> between individual unauthorized copying for personal use and the
> commercial widescale copying for profit.
>
> I don't think there's anything to suggest that the pirate bay operators
> actually made money from what they've done, in fact I'm sure they've
> made a loss -- operating servers, maintaining them, etc costs money.
>
>
>
Actually, the OAD's third definition for piracy reads "the
unauthorized use or reproduction of another's work." This use of the
term dates back to the 17th century.
Regarding free culture meaning we don't have to pay for everything,
the OAD's first definition is "not under the control or in the power
of another; able to act or be done as one wishes." Actually, it
doesn't speak of the sense of "not costing anything" until definition
five. When the uninitiated hear of the free culture movement, they
assume that it is probably a movement involving a removal of
restrictions of some variety, presumably restrictions related to
culture. This is not an unreasonable assumption to make--when we, as
humans, name things, we frequently give them relevant names. It helps
us remember, and it helps us quickly identify the purpose. Names, and
words, matter. The men behind TPB were flippant--indeed, they are
famous and roundly admired for this fact. Why wouldn't they name their
website something suitably flippant? Not to have done so would seem
utterly out of character.
Of course, I'm not exclusively basing my assessment of TPB's
functionality on its name--I'm also basing it on what everyone I know
who used it used it for--I'm merely pointing out what I feel is a
pretty significant incongruity.
--
RM
rsmason.net
dreamersoften.blogspot.com
I agree that criticism of this feature should be tied to a more
general critique.
> Rob Myers writes:
>
>> The particular control being protested against here prevents the book
>> being read using text-to-speech. So even if it was true that some
>> forms of control of texts do not affect reading, this one definitely
>> does.
>
> No more than the Kindle in general 'prevents the book from being used
> to distribute digital copies of texts'.
Distribution is not use. This is similar to the difference between
positive and negative freedom. But the Kindle could be used *to*
distribute, so I agree that this is an issue whichever way you look at
it. And this means that both are examples of the freedom to read being
restricted.
> In particular, your Kindle
I don't own a kindle. ;-)
> does indeed parse and process text, just as it does connect to the
> internet and know how to transfer files. It simply doesn't let you
> run certain operations (such as 'copy this file and email it to my
> aunt' or 'use the built-in tts converter and read it out loud').
"Simply not letting" is control.
>>> Considering how deeply many free software orgs have delved into
>>> various ways to modify and prohibit certain types of reuse,
>>
>> If they did so then their software would not qualify as free.
>
> I agree with your statement, under 'reasonable' definitions of "free".
> The FSF definition does not qualify, for instance.
The FSF's definition of freedom is reasonable for non-teleological
definitions of "free".
Both public domain dedication and the BSD licence are free under the
FSF's definition. Copyleft is a useful addition which protects that
freedom, but it is not the definition of that freedom. (Although I
personally think it is the best way of ensuring it.)
> To the contrary, copyleft, including FSF "freedom", requires the full
> strength of copyright-law-backed restrictions on reuse,
Free use of software requires the neutralization of copyright on
software. But copyright law cannot be neutralized without using
copyright law. If copyright law went away tomorrow, the need for and
mechanism of copyleft would go with it.
Copyleft is an ironization of copyright, it is a legal judo throw that
uses copyright's own strength against it.
> for a minimum
> of life+70 years. Some copyleft supporters would use infinite-term
> copyleft if it were available.
Crosbie Fitch always tells me off if it even looks like I am going to
treat copyleft as an end in itself, and I am worried about
OpenStreetMap trying to use "share-alike" for data, so I do recognize
that the desire to use legal hacks to protect freedom can become
counter-productive.
> This imposing definition applied to a
> very unimposing word (and held up as the standard for free sharing of
> knowledge) is something I find rather offensive...
If you are objecting to copyleft going too far, I agree that people
can try to take it too far.
>> Both groups must consider copyright law as a factor in achieving their
>> desired outcome but free software does so in order to protect the
>> freedom to use software rather than to remove the freedom to read.
>> Protecting and attacking freedom are not the same thing.
>
> Free software protects the right of the author to indefinitely and
> granularly restrict future use of a work. It protects the freedom of
> the author at the expense of the freedoms of reusers.
It protects everyone equally.
> Designers of auto-text-to-speech restrictions protect the right of the
> author to granularly restrict future use of a work; again protecting
> this specific freedom of the author at the expense of the freedom of
> reusers.
The "freedom" to control others is not freedom, it is control.
> Defining the freedoms protected and those restricted in each
> situation is a good exercise; you may prefer one bundle of freedoms to
> the other, but they are similar in form.
The fact that two opposing objectives can be realized using the same
methods does not make them similar *except* formally.
Similarity of form is not similarity of content.
>> Copyleft is not viral, nothing can catch a copyleft license through
>> mere proximity.
>
> Standard copyright and copyleft are both viral.
If we must use that metaphor then copyleft is the attenuated version
of the virus required to vaccinate against copyright.
But I'd say that "heritable" is a better description.
> You have to work at
> it to make a license that fits within Copyright that is /not/ viral.
> We can argue about what 'proximity' means, but in our gangly,
> adolescent concept of copyright, all of the following are sufficient
> proximity to pick up such a virus, [even though for instance many do
> not involve significant creative alteration]:
If copyright is such a problem, then it needs neutralizing as
effectively as possible. Neutralizing copyright requires use of the
law, because copyright is a legal construct. The relevant area of law
here is license law. Since it is, as you point out, difficult to make
an effective copyright license without heritability, the most
effective way of neutralizing copyright is with a heritable license.
This is what copyleft is.
- Rob.
[...]
I think Rob's reply is pretty well stated. One problem with copyright
is that it's automatically granted, so you have to create some
construct to catch things so that they don't fall into the default
system. If they *can* fall in, how free are they?
(Are you more free if there are no rules, but you can be thrown into
jail by anyone who decides to put you there, or if you have to follow
a few rules, but you can't be put in jail unless you break them? (5
points, all answers receive credit.))
Copyleft is an overlay onto existing law and depends on it to exist --
I would argue that copyleft licenses become meaningless when the work
falls into the public domain; it's never a loss when compared with the
defaults.
There is *no way to win* with these hacky legal overlays if you favor
maximal availability -- to create a work that has absolutely no
restrictions on use and cannot be re-trapped by the default system.
Copyleft is a tradeoff. I can't think of a better one.
-Kat
--
Your donations keep Wikipedia online: http://donate.wikimedia.org/en
Wikimedia, Press: k...@wikimedia.org * Personal: k...@mindspillage.org
http://en.wikipedia.org/wiki/User:Mindspillage * (G)AIM:Mindspillage
mindspillage or mind|wandering on irc.freenode.net * email for phone
And more to the point, the Kindle behaves as if *I* can't parse and
process text.
Bogus.
> >>> Considering how deeply many free software orgs have delved into
> >>> various ways to modify and prohibit certain types of reuse,
> >>
> >> If they did so then their software would not qualify as free.
> >
> > I agree with your statement, under 'reasonable' definitions of "free".
> > The FSF definition does not qualify, for instance.
>
> The FSF's definition of freedom is reasonable for non-teleological
> definitions of "free".
>
> Both public domain dedication and the BSD licence are free under the
> FSF's definition. Copyleft is a useful addition which protects that
> freedom, but it is not the definition of that freedom. (Although I
> personally think it is the best way of ensuring it.)
>
> > To the contrary, copyleft, including FSF "freedom", requires the full
> > strength of copyright-law-backed restrictions on reuse,
>
> Free use of software requires the neutralization of copyright on
> software. But copyright law cannot be neutralized without using
> copyright law. If copyright law went away tomorrow, the need for and
> mechanism of copyleft would go with it.
>
> Copyleft is an ironization of copyright, it is a legal judo throw that
> uses copyright's own strength against it.
Man, why does it seem like this always ends up having to be
explained? It is tres cool, but you'd think the lesson would have
landed by now . . . maybe it's a trick to get it, but it's a way old
trick.
Seth
broken, a bit
Forbidden
Your client does not have permission to get URL /custom?... from this
server. (Client IP address: ....)
We apologize for your inconvenience, but this request could not be processed.
Please click here to continue your search on Google.
> http://somalipirate.livejournal.com/3339.html
I know I'm a bit lagged, but:
"The day we see a movie executive on the high seas is the day we take him
hostage and show him what real piracy looks like."
Can someone design a good shirt with that sentence on it?
That's *fantastic*.
-- Asheesh.
--
Use what talents you possess: the woods would be very silent if no birds
sang there except those that sang best.
-- Henry Van Dyke
"The Internet derives its disruptive quality from a very special
property: IT IS PUBLIC. The core of the Internet is a body of simple,
public agreements, called RFCs, that specify the structure of the
Internet Protocol packet. These public agreements don't need to be
ratified or officially approved - they just need to be widely adopted
and used.
"The Internet's component technologies - routing, storage,
transmission, etc. - can be improved in private. But the Internet
Protocol itself is hurt by private changes, because its very strength
is its public-ness."
---
> http://isen.com/blog/2009/04/broadband-without-internet-ain-worth.html
Thursday, April 30, 2009
Broadband without Internet ain't worth squat
by David S. Isenberg
Keynote address delivered at Broadband Properties Summit
(http://www.bbpmag.com/2009s/9fullagenda.php#gs71)
4/28/09
We communications professionals risk forgetting why the networks we
build and run are valuable. We forget what we're connecting to what.
We get so close to the ducts and splices and boxes and protocols that
we lose the big picture.
Somewhere in the back of our mind, we know that we're building
something big and new and fundamental. We know, at some level, there's
more than business and economics at stake.
This talk is a 30,000-foot view of why our work is important. I'm
going to argue that the Internet is the main value creator here - not
our ability to digitize everything, not high speed networking, not
massive storage - the Internet. With this perspective, maybe you'll
go back to work with a slight attitude adjustment, and maybe one
or two concrete things to do.
In the big picture, we're building interconnectedness. We're
connecting every person on this planet with every other person. We're
creating new ways to share experience. We're building new ways for
buyers to find sellers, for manufacturers to find raw materials, for
innovators to rub up against new ideas. We're creating a new means to
distribute our small planet's limited resources.
Let's take a step back from the ducts and splices and boxes and
protocols. Let's go on an armchair voyage in the opposite direction --
to a strange land . . . to right here, right now, but without the
Internet.
In this world we have all the technology of today, but no Internet
Protocol, that is, there's no packet protocol that all proprietary
networks can understand.
In this alternate reality, every form of information can be digitized,
BUT there's not necessarily a connection between all this information
and all the users and services that might discover it and use it to
their advantage.
This was the world envisioned by the movie, The President's Analyst,
where The Phone Company secretly ran the world. It's from 1967, the
same year that Larry Roberts published the original ArpaNet spec.
Roll Clip (http://www.youtube.com/v/uUa3np4CKC4)
In a world without the Internet, it's not clear that we'd actually
have a thought transducer in our brain. But if we did, I'd bet we
couldn't program it ourselves. I'd bet we couldn't shut it off. I'd
bet we couldn't decide who could receive its signal and who could not.
What WOULD we have?
We would have super-clear telephony. We'd have cable TV with lots and
lots of channels. We'd have lower op-ex and higher def. We'd probably
have some kind of telephone-to-TV integration so we could order from
Dominos while we watched Gunsmoke. Our cell phones would make really,
really good phone calls . . . and we'd have another half-dozen bungled
attempts to convince us that picturephones were the next great leap
forward.
Surprisingly, we might not have email. The first generation of
Internet Researchers only discovered human-to-human email in 1972 -
the subsequent growth of "People-to-People" applications was a big
surprise to them. Now, without email, there'd be no reason to
invent the Blackberry or the iPhone. Without the Internet, it would be
a voice, voice, voice, voice world.
This voice, voice, voice would be expensive. Without the Internet -
specifically without Voice over IP -- we'd still be paying fifteen
cents a minute for long distance, because VocalTec would not have
commercialized VOIP, Vonage and Skype wouldn't exist, and even the
major telcos would not have used VOIP to destroy the international
settlement system.
Data service? Think ISDN. Actually, think about a dozen different
so-called Integrated Services Networks, each with its own access and
login, with no good way for one to connect to another. Metcalfe's Law
would suggest there'd be orders of magnitude less traffic overall.
Would we have Search? Perhaps. Imagine what Encyclopedia Britannica On
Line would look like in a non-Wikipedia world . . . at a buck a
lookup.
Digital photography? Perhaps . . . but the medium would be paper and
the biggest company would be Kodak.
What about Amazon? EBay? YouTube? Weather.com? Google Maps?
Travelocity? Yahoo Finance? iTunes? Twitter? Facebook? CraigsList?
Blogging? On-Line Banking?
We wouldn't even have Web sites. Sure we could probably buy some kind
of proprietary on-line presence, but it would be so expensive that
only GE, GM and GQ could afford it, and so inaccessible they probably
wouldn't want to pay.
Web 2.0 - the ability of a single computer to reach across the
Internet in a dozen different directions at once to build a
customized web page on the fly - would be worse than unavailable, it
would be unthinkable.
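(As an aside, a minimal sketch of the pattern just described: one
page assembled on the fly from several independent Internet sources.
The URLs are placeholders, not real endpoints.)

  from concurrent.futures import ThreadPoolExecutor
  from urllib.request import urlopen

  SOURCES = ["http://example.com/weather",   # placeholder endpoints
             "http://example.com/news",
             "http://example.com/stocks"]

  def fetch(url):
      with urlopen(url) as resp:   # each fetch crosses the Internet
          return resp.read().decode("utf-8", "replace")

  def build_page():
      # Reach in several directions at once, then compose the result.
      with ThreadPoolExecutor() as pool:
          parts = list(pool.map(fetch, SOURCES))
      return "\n".join(parts)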
But it's not all bad. Without the Internet, we would still get our
news from newspapers, the corner bookstore would still be down on the
corner, the Post Office would be thriving, your friendly travel agent
would still be booking your trips, Dan Rather would still be on TV,
perverts would still get their sick pix in inconvenient plain brown
wrappers, and the NSA would not know the books I bought at Amazon or
who I email with.
Tough. We lost a lot of skilled leather-smiths when they invented the
horseless carriage. We'll find ways to deal with the Internet's
changes too.
Without the Internet, the minor improvements in telephony and TV
certainly would not drive the buildout of a whole new infrastructure.
The best way to do telephony would still be twisted pair. The best way
to do Cable TV would be coax.
Now I'm a huge Fiber to the Home enthusiast! But I'm also part of the
Reality Based Community. So let's face it, even WITH the Internet,
including Verizon's amazingly ambitious FiOS buildout, the business
case for fiber is so weak that 97 percent of US homes still aren't on
fiber. We are still in "Law of Small Numbers" territory. The Internet
is the only thing standing between our limited success and abject
failure.
Notice: I have not, until now, used the word BROADBAND.
But before I talk about broadband, I want to talk about Synecdoche.
Synecdoche is when you say, "The Clock" but you mean Time. Synecdoche
is when you say, "Eyeballs," but you mean The Customer's Attention.
Synecdoche is when you say, Dallas, but you mean, "The Mavericks."
Most of the time Broadband is synecdoche. When we say, "Broadband,"
most of the time we mean, "High Speed Connections to the Internet."
I repeat: most of the time when we say Broadband we mean High Speed
Connections to the Internet. Broadband is synecdoche.
Without the Internet, "Broadband" is just another incremental
improvement. It makes telephony and TV better. It makes the Internet
better too. But the key driver of all the killer apps we know and love
is the Internet, not Broadband. And, of course, the Internet is
enabled by lots of technologies - computers, storage, software, audio
compression, video display technology, AND high-speed wired and
wireless networking.
Now, Broadband is a very important enabler. The United States has
slower, more expensive connections to the Internet than much of the
developed world. And that's embarrassing to me as a US citizen.
Imagine if a quirk of US policy caused us to have dimmer displays.
That would be a quick fix, unless the display terminal industry
demanded that we disable the Internet in other ways before it gave us
brighter displays. Or insisted "all your screens are belong to us."
High-speed transmission does not, by itself, turn the wheel of
creative destruction so central to the capitalist process. The
Internet does that. Broadband, by itself, does not fuel the rise of
new companies and the destruction of old ones. The Internet does that.
Broadband by itself is not disruptive; the Internet is.
The Internet derives its disruptive quality from a very special
property: IT IS PUBLIC. The core of the Internet is a body of simple,
public agreements, called RFCs, that specify the structure of the
Internet Protocol packet. These public agreements don't need to be
ratified or officially approved - they just need to be widely adopted
and used.
The Internet's component technologies - routing, storage,
transmission, etc. - can be improved in private. But the Internet
Protocol itself is hurt by private changes, because its very strength
is its public-ness.
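(To make "public agreement" concrete: the fixed 20-byte IPv4 header
layout from RFC 791 is exactly the kind of structure anyone can
parse. A minimal sketch; real stacks also handle options, fragments
and checksum validation.)

  import struct

  def parse_ipv4_header(packet):
      # Fixed 20-byte header per RFC 791: version/IHL, TOS, total
      # length, ID, flags+fragment offset, TTL, protocol, checksum,
      # source address, destination address.
      (ver_ihl, tos, total_len, ident, flags_frag, ttl, proto,
       checksum, src, dst) = struct.unpack("!BBHHHBBHII", packet[:20])
      dotted = lambda a: ".".join(str((a >> s) & 0xFF)
                                  for s in (24, 16, 8, 0))
      return {"version": ver_ihl >> 4,
              "header_bytes": (ver_ihl & 0x0F) * 4,
              "ttl": ttl,
              "protocol": proto,        # e.g. 6 = TCP, 17 = UDP
              "src": dotted(src),
              "dst": dotted(dst)}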
Because it is public, device makers, application makers, content
providers and network providers can make stuff that works together.
The result is completely unprecedented; instead of a special-purpose
network - with telephone wires on telephone poles that connect
telephones to telephone switches, or a cable network that connects TVs
to content - we have the Internet, a network that connects any
application - love letters, music lessons, credit card payments,
doctor's appointments, fantasy games - to any network - wired,
wireless, twisted pair, coax, fiber, wi-fi, 3G, smoke signals, carrier
pigeon, you name it. Automatically, no extra services needed. It just
works.
This allows several emergent miracles.
First, the Internet grows naturally at its edges, without a master
plan. Anybody can connect their own network, as long as the connection
follows the public spec. Anybody with their own network can improve it
-- in private if they wish. As long as they follow the public
agreement that is the Internet, the result grows the Internet.
Another miracle: The Internet lets us innovate without asking
anybody's permission. Got an idea? Put it on the Internet, send it to
your friends. Maybe they'll send it to their friends.
Another miracle: It's a market-discovery machine. Text messaging
wasn't new in 1972. What surprised the Internet Researchers was
email's popularity. Today a band that plays Parisian cafe music can
discover its audience in Japan and Louisiana and Rio.
It's worth summarizing. The miracles of the Internet - any-app over
any infrastructure, growth without central planning, innovation
without permission, and market discovery. If the Internet Protocol
lost its public nature, we'd risk shutting these miracles off.
One of the public agreements about the Internet Protocol lays out a
process for changing the agreements. If somebody changes their part of
the Internet in private, they put the Internet's miracles at risk.
Comcast tried to do that by blocking BitTorrent. Fortunately, we
persuaded Comcast to stop. If it had continued, it would have put a
whole family of Internet applications at risk, not only for Comcast
Internet customers, but also for everybody who interacts with
Comcast's customers.
The whole fight over Network Neutrality is about preserving what's
valuable about the Internet - its public-ness.
The Internet threatens the telephone business and the cable TV
business. So of course there's a huge propaganda battle around the
Internet.
The propaganda says Network Neutrality is about treating every packet
exactly the same, but the Internet has never done that. The propaganda
says that Network Neutrality is about regulating the Internet, but we
know that the Internet exists thanks to the government's ArpaNet, and
subsequent wise government regulation.
Look who's calling for regulation anyway! The only reason telcos and
cablecos exist is that there's a whole body of franchises and tariffs
and licenses and FCCs and PUCs keeping them in business.
Cut through the propaganda. Network Neutrality is about preserving the
public definition of the Internet Protocol, the structure of the
Internet packet, and the way it is processed. If there are reasons to
change the Internet Protocol, we can do it in public - that's part of
the Internet too.
It's the Internet, smart people. Your property already has telephone
and TV. So does everybody else's. Broadband without the Internet isn't
worth squat. You're building those fast connections to The Internet.
So please remember that the essence of the Internet is a body of
public agreements. Anti-Network Neutrality attacks on the public
nature of the Internet are attacks on the value of the infrastructure
improvements you've made to your property. So you can't be neutral on
Network Neutrality. Take a stand.
If you install advanced technology that makes your property more
valuable, you deserve your just rewards. But the potential of the
Internet is much, much bigger than your property.
Like other great Americans on whose shoulders I stand, I have a dream.
In my dream the Internet becomes so capable that I am able to be with
you as intimately as I am right now without leaving my home in
Connecticut.
In my dream the Internet becomes so good that we think of the people
in Accra or Baghdad or Caracas much as we think of the people of
Albuquerque, Boston and Chicago, as "us" not "them".
In my dream, the climate change problem will be solved thanks to
trillions of smart vehicles, heaters and air conditioners connected to
the Internet to mediate real-time auctions for energy, carbon credits,
and transportation facilities.
In my dream, we discover that one of the two billion who live on less
than a dollar a day is so smart as to be another Einstein, that
another is so compassionate as to be another Gandhi, that another is
so charismatic as to be another Mandela . . . and we will be able to
comment on their blog, subscribe to their flickr stream and follow
their twitter tweets.
But I also have a nightmare . . .
In my nightmare, the telephone company has convinced us that it needs
to monitor every Internet transaction, so it can -- quote-unquote --
manage -- what it calls "my pipes".
Maybe it says it needs to stop terrorism, or protect the children, or
pay copyright holders. Maybe there's a genuine emergency -- a pandemic
or a nuclear attack or a 9.0 earthquake.
In my nightmare, whatever the excuse -- or the precipitating
real-world event -- once the telephone company gains the ability to
know which apps are generating which packets, it begins charging more
for applications we value more.
In my nightmare, once the telephone company has some applications that
generate more revenues because they're subject to management -- and
others that don't -- the former get all the newest, shiniest, fastest
network upgrades, while the latter languish in what soon becomes
Yesterday's Network.
In my nightmare, new innovations that need the newest fastest network,
but don't yet have a revenue stream, are consigned to second-class
service. Or they're subject to lengthy engineering studies and other
barriers that keep them off the market. In other words, in my
nightmare, all but the most mundane innovation dies.
So it's up to you. When you make high-speed networks part of your real
estate, if you insist that these connect to the REAL Internet, the
un-mediated, un-filtered publicly defined Internet, you're part of a
global miracle that's much bigger than your property. Please ask
yourself what's valuable in the long run, and act accordingly.
posted by isen @ 9:48 AM
> http://www.copyright.gov/1201/hearings/2009
The following submissions and testimony from the 2006 and 2003
proceedings should convey the kinds of "exemptions" we really need.
All of these texts are pasted below.
New Yorkers for Fair Use's 2006 Request for an Exemption (drafted by
Jay
Sulzberger):
> http://www.copyright.gov/1201/2006/reply/10sultzberger_NYFU.pdf
Jay's testimony at the hearing itself begins on page 36 here (the text
is pasted below as well):
> http://www.copyright.gov/1201/2006/hearings/transcript-mar31.pdf
Jay's testimony at the 2003 hearing may also be found to be salient.
Here's a good transcript (also pasted below):
> http://thread.gmane.org/gmane.org.dmca-activists/570/focus=572
It starts at the bottom of page 171 of the official transcript here:
> http://www.copyright.gov/1201/2003/hearings/transcript-may2.pdf
See the copyright.gov transcript links above to track the further
discussion each year.
Part 2: Developments in "Trusted Computing"
In the meantime, note the following "Trusted Computing" developments.
First, a workshop at CMU. Plus Microsoft has recently hired Jonathan
Shapiro, lead developer for the EROS/Coyotos/Bit-C projects. These
are projects to produce a fully virtualized operating system, which is
a key part of palladiated computers -- a form of "DRM" that succeeds
completely in robbing you of the ability to control your own
computer. "Virtualization" means making *all* parts of the computer
virtual -- you cannot directly address a port, a bit in RAM,
anything. Essentially, every single operation on the computer is
public-key encrypted, and you must route all operations through an
impregnable kernel. A palladiated computer uses *somebody else's*
private key on your own computer's motherboard, creating a system that
gives outsiders complete control over what you can do.
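To make the "somebody else's key" point concrete, here is a minimal
Python sketch of the sealed-storage idea behind such designs. It is a
toy model, not any vendor's actual API: the class name is invented,
and an HMAC-derived XOR stream stands in for the real hardware
cipher. The essential feature it shows is that the device key lives
where the owner cannot read it, so data unseals only for software
states the outside party has approved.

import hashlib
import hmac
import os

class SealedStorageChip:
    """Toy model of a trusted-computing chip: the device key is
    created inside the chip and never exposed to the owner."""

    def __init__(self):
        self._device_key = os.urandom(32)  # the owner cannot read this

    def _sealing_key(self, software_hash: bytes) -> bytes:
        # Bind the key to a measured software state.
        return hmac.new(self._device_key, software_hash,
                        hashlib.sha256).digest()

    def _xor(self, data: bytes, key: bytes) -> bytes:
        stream = (key * (len(data) // len(key) + 1))[:len(data)]
        return bytes(a ^ b for a, b in zip(data, stream))

    def seal(self, data: bytes, software_hash: bytes) -> bytes:
        # XOR stands in for a real cipher, purely for illustration.
        return self._xor(data, self._sealing_key(software_hash))

    def unseal(self, blob: bytes, sealed_hash: bytes,
               current_hash: bytes) -> bytes:
        if current_hash != sealed_hash:
            # The owner changed the software: the chip refuses.
            raise PermissionError("software state not approved; "
                                  "data stays sealed")
        return self._xor(blob, self._sealing_key(sealed_hash))

chip = SealedStorageChip()
approved = hashlib.sha256(b"vendor-approved player v1").digest()
blob = chip.seal(b"licensed song bytes", approved)

modified = hashlib.sha256(b"player with DRM removed").digest()
try:
    chip.unseal(blob, approved, modified)
except PermissionError as err:
    print(err)  # the owner's own data is withheld from the owner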
"Trusted Infrastructure" Workshop
> http://www.cylab.cmu.edu/TIW/
Microsoft hires Jonathan Shapiro:
> http://blogs.zdnet.com/microsoft/?p=2463
> http://www.coyotos.org/pipermail/bitc-dev/2009-April/001784.html
Richard Stallman on Treacherous Computing:
> http://www.gnu.org/philosophy/can-you-trust.html
"Virtualization" technology proceeds apace while our policy channels
fail to distinguish private interest concerns from the true concerns
and nature of copyright.
Seth
---
Request for an Exemption, 2006:
> http://www.copyright.gov/1201/2006/reply/10sultzberger_NYFU.pdf
This is a comment on the class of works proposed by Edward W. Felten
and
Deirdre K. Mulligan to be exempt from the prohibition on circumvention
of DRM under the DMCA.
Our comment is that the Felten-Mulligan class is drawn too narrowly.
We
present an amended definition of the Felten-Mulligan class of works,
with brief arguments.
0. The class of works which should be exempt from the
Anti-Circumvention
Clauses of the DMCA consists of all malicious software, including
viruses, worms, spywares, trojan horses, remote controllers, rootkits,
and more. The phrase "malicious software" designates programs which
cause harms to a computer and/or its owner, and which are placed on
the
computer against the owner's wishes and without the owner's express
consent. Malicious software might be delivered with a computer or be
installed later. Some malicious software may be contained in, or make
use of, components installed as hardware.
1. Harms from not granting the exemption: Millions of home and
business
computer owners have had to remove malicious software from their
computers. Many computer owners have had credit card numbers and bank
passwords appropriated and compromised. If the circumvention of
Technological Protective Measures preventing malicious software from
being detected, analyzed, or removed, were illegal, then the DMCA
would
be used as a shield against computer owners' rights to maintain
control
over their computers.
The numbers here are easy to estimate: losses caused by malicious
software run to billions of dollars per year, and the number of
people adversely affected is in the millions.
2. Harms from granting the exemption: Some malicious software works
are
under copyright. The malicious software author would lose an apparent
right of concealment, and thus, often, the practical ability to commit
a
crime, or crimes, against the intended victim or victims. In some
cases
the author, or other rightsholder, might be unable to make a living by
making and distributing malicious software, or software which is in
part
malicious.
The numbers here are harder to estimate, since we know of no
successful
suit by a malicious software rightsholder against a person who has
discovered the malicious software and removed it, on the basis of
copyright infringement, or DMCA violation. Perhaps a thousand, or
perhaps ten thousand, malicious software authors/rightsholders might
lose their chance to sue their victims under the DMCA
Anti-Circumvention
Clauses.
3. General argument for exemption: Decrypting lists of blocked sites
in
filtering software presently enjoys an exemption to the
anti-circumvention provisions of the DMCA. Computer owners throughout
the world are today at great risk of infestation by malicious
software.
If an exemption were not available for circumvention of malicious
software, the scale of harm that would ensue would be far greater than
for filtering software. Fewer computer owners are at risk of
missing/seeing some sites due to false positives and false negatives
on
blocked sites lists. The danger from malicious software is in most
cases
much higher.
The harms our exemption would defend against are not hypothetical:
Recently many computers have been infested by the Sony BMG rootkit,
and
the rootkit has been used by other distributors of malicious software
to
compromise home and business computers. The Sony BMG rootkit attempts
to
conceal itself, is under copyright (though it likely also infringes
others' copyrights) and is itself malicious software, in that it is
installed without consent and damages the computer. Our exemption
would
prevent Sony BMG from successfully claiming that the computer owner
who
gains access to the rootkit has violated the Anti-Circumvention
Clauses
of the DMCA.
For information on the Sony BMG rootkit see:
http://www.eff.org/IP/DRM/Sony-BMG
The Sony BMG rootkit is an example of a kind of DRM which Microsoft,
in
cooperation with Intel, IBM, and various computer vendors, intends
to place in many home computers in the next few years. The Sony BMG
rootkit
is weak in practice, in that an expert in Microsoft OSes, if hired to
find, analyze, and craft defenses against it, would almost surely
succeed pretty quickly. The system of DRM once called by Microsoft
"Palladium", and today called by Microsoft "NGSCB", would offer to
licensees of Microsoft the same cloaking capabilities as the Sony BMG
rootkit does today. But Palladium is much harder to crack open and
remove than the Sony BMG rootkit. And Palladium offers other services
to
authors of malicious software beyond what the Sony BMG rootkit has
made
available.
Here is a quote which briefly conveys part of the threat Palladium
poses
to owners of home computers:
From
http://zgp.org/linux-elitists/2003121117...@cannabis.html#2003121116...@shaitan.lightconsulting.com
Re: [linux-elitists] Monday 15 Dec: first all-Open Source
System-on-Chip
Jason Spence <jsp...@lightconsulting.com>
Thu, 11 Dec 2003 16:49:11 -0800
On Thu, Dec 11, 2003 at 01:23:33PM -0600, D. Joe Anderson wrote:
>
> w00t! Here's a good start to the back-up plan if
> TCPA/Longhorn/Palladium/"Fritz-chips"* get out of hand.
You know, the black hat community is drooling over the possibility of
a
secure execution environment that would allow applications to run in a
secure area which cannot be attached to via debuggers and such.
-Jason
Last known location: 2.5 miles northwest of MOUNTAIN VIEW, CA
Under a government which imprisons any unjustly, the true place for a
just man is also a prison.
--Henry David Thoreau
End quote.
Our exemption would, in part, lift the burden of legal risk a computer
owner would face in the attempt to remove malicious software that lies
behind the cloak of Palladium.
For information about Palladium see
http://en.wikipedia.org/wiki/Trusted_computing
http://en.wikipedia.org/wiki/Talk:Next-Generation_Secure_Computing_Base
4. Our proposed exemption differs from some proposed exemptions in
that
our exemption is not aimed at preserving decades-old textbook
examples of fair use rights, such as the right to quote a work in
argument, the right of parody, etc. Rather, our exemption, if
granted, would defend
important personal property, that is, the home computer. The exemption
would also defend privacy and free speech rights, because of the use
of
home computers to communicate using the world's Net. The dangers our
exemption defends against cannot be classed as picayune inconveniences
nor as negligible impairments of rights. Our exemption would help
defend
fundamental human rights.
New Yorkers for Fair Use
http://www.nyfairuse.org
Jay Sulzberger
ja...@panix.com
US Mail Address:
New Yorkers for Fair Use
622A President Street
Brooklyn, NY 11215
---
2006 Opening Testimony:
MR. SULZBERGER: My name is Jay Sulzberger, and I’m a working member of
New Yorkers for Fair Use. I’d like to address Matthew Schruers’ last
statement and expand on it. I think lawyers are terribly important
here
and, of course, the part of the law that is terribly important in
these
considerations is not copyright law. It’s the law of private property.
It’s the law of privacy. Those are the parts of the law.
Now, Matthew also raised the question: should we be handing the entire
computer and communications infrastructure of the United States and
the
world over to copyright holders in cooperation with hardware
manufacturers and Microsoft? And the answer is of course not. But we
have to first be clear on this. This is so obvious when stated in
those
terms that I believe there’s not a single person in this -- just a
moment. Is there anybody here who is disabled from understanding the
concept of private property? If anybody is not clear on it, and I know
lawyers will raise all sorts of objections because there’s a too
simple
notion of a perfect freehold, a perfect ownership of a chattel. But
look. Your computer and your house, your relationship and ownership to
it, if you’ve bought it and are legally running it and you’re not
violating, you’re not committing copyright infringement by publishing
for profit other people’s works for which you don’t have a license,
copyright holders should not be inside your computer, and they
shouldn’t
have pieces of code that you can’t look at to get control of your
computer.
And I had a sentence in my comment up on Professor Felten’s proposal
for
an exemption, and, of course, people would think, "Oh, he’s being
witty."
I’m not being witty. Who are the copyright holders? For whom do you
have
to give authorization under the Section -- I'll have to check it --
1201(j), I think, of the DMCA, you have to get authorization from
people who’ve written a piece of malware that’s gotten on your machine
without your express consent that’s damaging your machine. I think
there’s no member of the panel and I think there’s no member of the
people up on the dais who can possibly defend the concept that United
States copyright law is going to require me to go and get permission
from somebody who’s invaded my machine, done damage to my machine,
cost
me hours of effort, and, if I’m a business, perhaps cost me thousands
and thousands of dollars. These are the issues.
Now, why are we unclear on this? It’s because we don’t know what a
computer is. Copyright has already been misused to allow Microsoft and
Apple to place stuff in our machine when we go to the store we’re not
allowed to look at. It’s my right to look at every darn piece of code.
It’s my right to publish what the code does. It’s my right to
decompile.
You might find me agreeing it’s not my right to sell an improved
version
of their operating systems without getting a copyright license for it,
but that’s quite a separate issue. The issue here is private ownership
and wiretapping. And this is ridiculous that the DMCA should be
misinterpreted so as to actually defend people who write malware. We
have heard testimony from people who have tried to get the people who
wrote the malware to do something about it, and their response was
nothing or, "We promise not to sue you," or, "Maybe we’ll sue you."
This
isn’t okay.
Every lawyer here has taken a course or one or two or more on the law
of
private property. And, my gosh, copyright law can never say that I
lose
my right of ownership of a computer because some copyright holder
appeals to the DMCA after they’ve written a trojan, a virus, whatever
it
is they’ve written, something that goes into my machine, a rootkit.
Now, I was going to explain more, but I think I’ve come to the end of
my
time. I see these introductory comments are short. And what I wanted
to
do was explain how Sony BMG rootkit is negligible in its damage
compared
to what the DMCA anticircumvention clauses are enabling in the near
future. They’re enabling Microsoft, as announced, it announced in 2002
that it was going to install and license a rootkit to anybody who paid
the money. The system, the OS, and the hardware together, let’s
briefly
call them Palladium -- they’ve changed the name, I think I made the
same
joke three years ago, into mom’s apple pie and the anti-terrorist
loveable operating system with lots of bright, shiny colors. I’ve
forgotten if that’s their latest name for it.
Look. They’ve got something called the curtain. When you pay Microsoft
a
certain amount of money in the future, they claim they will let you
write programs that are hidden behind the curtain. You can never look
at
them. The Sony BMG rootkit is a joke today. It’s based on the
Microsoft
operating system. You can get around it in a few weeks, if you’re
really
competent and have hotshot students, or if you're a professional and
know what you're doing and know about the Microsoft operating
system. You can
get
right around it, and, of course, it always has the joke get-around
that
I think if you press the shift key while the thing is loading there’s
certain circumstances it doesn’t get installed.
Look. That’s nothing. You should hardly be concerned about it, except
we
know that people who write viruses and trojans that damage your
machines
will appeal to the anticircumvention clauses in the DMCA. It’s a joke
how little damage it’s caused compared to what’s coming down the pike
real soon unless you act.
I know it seems ridiculous. You’re specialists in copyright. You’re
specialists in learning, publication, making sure authors get paid,
what
are the rights here, what are the rights there. It’s because the
country
has gone crazy and because people don’t know what ownership of
computers
means that we have this thing.
I think I’ve come to the end of my opening statement. I’m sorry to
rant
so hard, but I know that you’re prepared for it.
---
2003 Opening Testimony:
I'm Jay Sulzberger, and I'm here to represent New Yorkers for Fair
Use.
Well, I was a little bit puzzled as to what to say on this panel,
because seemingly this particular panel is about very specific harms
of
a very specific part of a big, complex law.
But as a matter of fact, I've been provided by the first three
panelists
with a parade of horribles. Mr. Montoro seems to have an 86-page
parade
of horribles, and of course CERT has an extraordinary parade of
horribles -- things that one would not have thought could happen in
America, things that one would have expected in the old Russian
Communist empire. And of course, Mr. Band has just brought up the
problem of the looting, spontaneous or planned, of ancient libraries
of
Earth's heritage [as had been reported in Iraq -- Seth].
I will just try to make what I thought was a difficult argument: We
should not be discussing particular exemptions of particular clauses
of
the DMCA. But I think that with the three panelists before me, the
pattern is clear: There's no excuse for any anticircumvention law in
the
United States of America. Because in each and every case, it is not
just that we have a parade of particular offenses against good
sense, offenses against our freedom, attacks on free markets,
attacks on scientific research, attacks on artists' rights, attacks
on our right to free speech; most important, there is a fundamental,
general and effective attack upon our present right of private
ownership of computers.
Computers today are printing presses -- and it's shocking! I have
certain conservative tendencies; I am also sympathetic to the
socialists. But the idea that everybody who's a member of the middle
classes can pick up a computer for 300 bucks, and pay their 20 bucks a
month and get Internet access, and set up a web page -- it's shocking!
Democracy is one thing, but mob rule is another. But yet, there's
nothing that America can do about this. I hope there isn't.
But it looks as though there is. The DMCA anticircumvention clauses,
in
combination with the loose association, the alliance of cartels,
oligopolies and monopolies which I term the englobulators, is in
process
of placing spy machinery and remote control machinery at this very
moment, into every single Intel motherboard that's going to be sold in
the next year. When Microsoft completes the software part of its
system
of DRM called Palladium, this will end, completely, your right of
ownership, your right of private use of your Palladiated computer.
Now, the question arises: This can't be true, what I'm saying. I'm a
nut, I'm an extremist, I'm strident. Yes. (Laughter) But I'm not
nearly as much of a nut, I'm not nearly as much of an extremist, and
I'm
not nearly as crazy, vicious and strident, as the englobulators.
The question arises as: Why hasn't the press picked up on the fact
that
I'm the less extreme of the extremists? I believe in the Constitution
-- even though I didn't sign it; that's my anarchist side. I think
there's something to the first ten Amendments. And I think we should
take the Fourth Amendment very seriously. I think also the Fifth has
something to say about takings.
Why doesn't the press get it? It's a very simple reason -- I'm
talking
about rights and powers. I'm talking about fundamental rights of
ownership, fundamental rights of free speech, fundamental rights of
free
association using our Internet and our computers. Why doesn't the
press
get it? Because in practice today, most people run a damaged,
malfunctioning and obsolete operating system, usually called Microsoft
Windows -- there's several versions.
Copyright law has already been, I think, dreadfully misapplied for the
last twenty years, to prevent people from gaining control of their own
property in their own homes. This is important property. We know
that
Microsoft -- and as a matter of fact all other vendors and makers of
source-secret operating systems -- it's almost impossible not to give
in
to the temptation to spy somewhat on your users, particularly if
they're
connected to the Internet. Sun has done it; other companies have done
it. It's mainly Microsoft because it was only interested in the
Internet after 1990, although some of us have used the Net since 1970.
Now most people have a computer. It is their means of personal
communication; it's also their means of authorship, and their means of
publication.
Now, let me deal with the accusation of copyright infringement. Yeah,
sure -- there's going to be a heck of a lot more very serious
copyright infringement -- of the most dreadful sort -- because there
are computers on the
Internet, and I don't give a good gosh-darn about it. The invention
of
writing was dreadful to the ancient and honorable profession of the
singing poet. The invention of the printing press did terrible things
to the Catholic Church's position in Europe, particularly once the
Bible
was translated and then printed.
Things change. And the cries of a small, unimportant industry -- I
mean
the whole of the "content providers" side -- who of course refuse to
admit there are any more content providers -- I really enjoy my own
stuff much more than anything Disney has made since 1935. I stand
equal
to them, by the way. New Yorkers for Fair Use, one of our favorite
tropes is: "Nonsense! We're not consumers; we're owners and we're
makers."
Okay. Let me try and outline what anticircumvention laws do, and what
they're about. This is one of our standard pieces of propaganda;
we've
been handing it out since last summer (Shows flyer).
"We are the Stakeholders" -- why do we say we're the stakeholders?
This
is an old joke, everybody knows it, I'm sure I'm not the first person
to
say this. In Washington parlance they say, what is a stakeholder?
It's
some organized group that can afford a full-time lobbyist, that's all.
The bizarre spectacle of seeing small private interests -- when I say
small, I mean small: the cotton subsidies last year in the United
States
were about, I think, 40% of the gross of Hollywood. You don't see
huge
articles about particular wrongs and a huge struggle on the basic
principles over how much of a subsidy they should get.
Okay. I'm not sure I'm actually going to read this whole thing, but
--
"Freedom One: You may buy a copy of a movie recorded on DVD, you may
watch this movie whenever you please, you may make copies of this
movie,
some of which may be exact copies, others of which may be variant
copies." We all know that the legal underpinnings of DRM is
anticircumvention. In the future, you won't be able to do that.
Now, this is an assault on private ownership of computers. This is
absurd. There's no need to say it, you all know this: Ernest Miller
and
Joan Feigenbaum, both at Yale, suggested that this is just a mistake,
it's going to be corrected. Copyright law shouldn't say anything
about
private copies. In the first place, technically it's going to be very
hard. You're going to have an endless line of the most difficult,
subtle things. For example, something on a news spool. Is that a
copy
or is it something in transmission?
The natural point which will defend us against the dreadful assault on
private property which is all the anticircumvention clauses of the
DMCA,
is to draw a natural line. Inside your house, you've got a copy of
something, if you've lawfully obtained it -- Oh, by the way, we're not
copyright extremists. I myself am a big supporter of the GPL, which
is
a somewhat strict copyright license, and I consider it actually one of
the main foundations of the defense of free software.
If you don't draw the line, if you seek for exemptions, you'll have to
make hundreds of exemptions -- and even if you enforce them -- and you
could enforce them -- the principle would remain: you don't have
control
over your machine. You'd have to get lobbyists, or a grassroots
organization to come to Washington, appear before you every three
years,
and beg, on bended knee, for particular exemptions.
You don't have to do that. You are allowed to turn to Congress and
say,
we've seen the parade of horribles. And not just one parade. All of
the people here, arguing for exemptions -- the principle is the same:
These people can't reach into your house and tell you what to do!
It's
absurd!
I'm going to try to avoid discussing the other side of the bundle of
rights that these people want to take away from us: the right to free
publication, the right to free dissemination -- which are of course
restricted by copyright, which I support strongly. I don't think it
right that I should be allowed to go down and steal a movie without
paying for it and set up a movie house and charge admission for it.
I'm sorry, I lost my track in one of my sentences -- You know, the
Xerox
machine -- it's always the same structure, we all know this here: the
people who have the old methods for publication think their methods
have
to go on forever; always the words "business model" are used. Well,
you
know, we're not worried about their business models. We're worried
about our computers and our rights.
And I believe it is within your commission to turn and then say,
"We've
had it." What are we going to do, have to have these hearings every
six
months? We're going to have to have ten of you up there, and a
hundred
of us here, explaining the absolute terrible things that
anticircumvention laws in the United States do to markets, do to
freedom
of speech, do to development of better computers, etc., etc., etc.
I think you can turn and say, "We've heard enough. We suggest that
Congress reconsider the entire bundle of anticircumvention clauses of
the DMCA."
And if I'm asked a specific question, I will be happy to try and
connect
by at most three half steps, any particular anticircumvention measure
to
truly horrible and very large scale things.
Thank you.
---
> http://www.cylab.cmu.edu/TIW/
(via posting to David Farber's Interesting People list)
From: David Farber <da...@farber.net>
To: "ip" <i...@v2.listbox.com>
Date: 04/28/2009 04:33 AM
Subject: [IP] Advanced Workshop and Summer School on Architectures
for
Trustworthy Computing
TIW 2009: TRUSTED INFRASTRUCTURE WORKSHOP: ADVANCED SUMMER SCHOOL ON
ARCHITECTURES FOR TRUSTWORTHY COMPUTING
JUNE 8-12, 2009, Carnegie Mellon University, Pittsburgh, PA, USA
When IT infrastructure technologies fail to keep pace with emerging
threats, we can no longer trust them to sustain the applications we
depend on in both business and society at large.
Ranging from Trusted Computing, to machine virtualization, new
hardware architectures, and new network security architectures,
trusted infrastructure technologies attempt to place security into the
very design of commercial off-the-shelf technologies.
The TIW is an open innovation event modelled as a highly interactive
summer school, consisting of lectures, workshops, and other lab
sessions. It is aimed at bringing together researchers in the field of
IT security with an interest in systems and infrastructure security,
as well as younger Master's or PhD students who are new to the
field. Funding is available to support student attendance.
AGENDA HIGHLIGHTS
- 4 keynote lectures
- 7 technology lectures: Trusted computing architecture, TPM module,
attestation, SW-based attestation, virtualization security, network
security, and trusted storage.
- 4 research workshops: HW security, attestation in practice, OS
security, verification and formal methods.
- 3 hands-on labs: TPM, trusted virtualization, trusted network
connect.
Several social events and networking with other researchers are
planned.
For more details on the workshop and how to register, please visit
http://www.cylab.cmu.edu/TIW
TIW SPONSORS
- Carnegie Mellon CyLab
- Fujitsu
- HP Labs
- IBM
- NSA
- NSF
- Seagate
CONTACTS
Workshop details: Michael Willett <michael...@seagate.com>
Registration details: Tina Yankovich <ti...@andrew.cmu.edu>
SPEAKERS
Leaders from academia, industry, and government are delivering the
lectures, labs, and workshops.
VENUE
CyLab, Carnegie Mellon University
CIC Building
4720 Forbes Avenue
Pittsburgh, PA 15213
---
> http://www.gnu.org/philosophy/can-you-trust.html
Can You Trust Your Computer?
by Richard Stallman
Who should your computer take its orders from? Most people think their
computers should obey them, not obey someone else. With a plan they
call “trusted computing”, large media corporations (including the
movie companies and record companies), together with computer
companies such as Microsoft and Intel, are planning to make your
computer obey them instead of you. (Microsoft's version of this scheme
is called “Palladium”.) Proprietary programs have included malicious
features before, but this plan would make it universal.
Proprietary software means, fundamentally, that you don't control what
it does; you can't study the source code, or change it. It's not
surprising that clever businessmen find ways to use their control to
put you at a disadvantage. Microsoft has done this several times: one
version of Windows was designed to report to Microsoft all the
software on your hard disk; a recent “security” upgrade in Windows
Media Player required users to agree to new restrictions. But
Microsoft is not alone: the KaZaa music-sharing software is designed
so that KaZaa's business partner can rent out the use of your computer
to their clients. These malicious features are often secret, but even
once you know about them it is hard to remove them, since you don't
have the source code.
In the past, these were isolated incidents. “Trusted computing” would
make it pervasive. “Treacherous computing” is a more appropriate name,
because the plan is designed to make sure your computer will
systematically disobey you. In fact, it is designed to stop your
computer from functioning as a general-purpose computer. Every
operation may require explicit permission.
The technical idea underlying treacherous computing is that the
computer includes a digital encryption and signature device, and the
keys are kept secret from you. Proprietary programs will use this
device to control which other programs you can run, which documents or
data you can access, and what programs you can pass them to. These
programs will continually download new authorization rules through the
Internet, and impose those rules automatically on your work. If you
don't allow your computer to obtain the new rules periodically from
the Internet, some capabilities will automatically cease to function.
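A minimal sketch, not from the essay, of the control loop just
described: the application fetches vendor-signed rules and enforces
them with a default-deny policy; only the vendor's key can produce
rules the program will accept, so the owner cannot change them. The
rule format, names, and the use of HMAC in place of a hardware-held
signature key are all illustrative assumptions.

import hashlib
import hmac
import json

VENDOR_KEY = b"held-by-the-vendor-not-the-owner"  # stands in for a signing key

def fetch_rules() -> bytes:
    # Stand-in for the periodic download the essay describes.
    rules = json.dumps({"play song.mp3": "allow",
                        "read report.doc": "deny"})
    tag = hmac.new(VENDOR_KEY, rules.encode(), hashlib.sha256).hexdigest()
    return json.dumps({"rules": rules, "tag": tag}).encode()

def permitted(action: str, signed_blob: bytes) -> bool:
    blob = json.loads(signed_blob)
    expected = hmac.new(VENDOR_KEY, blob["rules"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["tag"]):
        return False  # tampered rules: refuse everything
    rules = json.loads(blob["rules"])
    # Default-deny: every operation needs explicit permission.
    return rules.get(action, "deny") == "allow"

blob = fetch_rules()
print(permitted("play song.mp3", blob))    # True: the vendor allows it
print(permitted("read report.doc", blob))  # False: the vendor forbids it
print(permitted("copy song.mp3", blob))    # False: no rule, no permission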
Of course, Hollywood and the record companies plan to use treacherous
computing for “DRM” (Digital Restrictions Management), so that
downloaded videos and music can be played only on one specified
computer. Sharing will be entirely impossible, at least using the
authorized files that you would get from those companies. You, the
public, ought to have both the freedom and the ability to share these
things. (I expect that someone will find a way to produce unencrypted
versions, and to upload and share them, so DRM will not entirely
succeed, but that is no excuse for the system.)
Making sharing impossible is bad enough, but it gets worse. There are
plans to use the same facility for email and documents—resulting in
email that disappears in two weeks, or documents that can only be read
on the computers in one company.
Imagine if you get an email from your boss telling you to do something
that you think is risky; a month later, when it backfires, you can't
use the email to show that the decision was not yours. “Getting it in
writing” doesn't protect you when the order is written in disappearing
ink.
Imagine if you get an email from your boss stating a policy that is
illegal or morally outrageous, such as to shred your company's audit
documents, or to allow a dangerous threat to your country to move
forward unchecked. Today you can send this to a reporter and expose
the activity. With treacherous computing, the reporter won't be able
to read the document; her computer will refuse to obey her.
Treacherous computing becomes a paradise for corruption.
Word processors such as Microsoft Word could use treacherous computing
when they save your documents, to make sure no competing word
processors can read them. Today we must figure out the secrets of Word
format by laborious experiments in order to make free word processors
read Word documents. If Word encrypts documents using treacherous
computing when saving them, the free software community won't have a
chance of developing software to read them—and if we could, such
programs might even be forbidden by the Digital Millennium Copyright
Act.
Programs that use treacherous computing will continually download new
authorization rules through the Internet, and impose those rules
automatically on your work. If Microsoft, or the US government, does
not like what you said in a document you wrote, they could post new
instructions telling all computers to refuse to let anyone read that
document. Each computer would obey when it downloads the new
instructions. Your writing would be subject to 1984-style retroactive
erasure. You might be unable to read it yourself.
You might think you can find out what nasty things a treacherous
computing application does, study how painful they are, and decide
whether to accept them. It would be short-sighted and foolish to
accept, but the point is that the deal you think you are making won't
stand still. Once you come to depend on using the program, you are
hooked and they know it; then they can change the deal. Some
applications will automatically download upgrades that will do
something different—and they won't give you a choice about whether to
upgrade.
Today you can avoid being restricted by proprietary software by not
using it. If you run GNU/Linux or another free operating system, and
if you avoid installing proprietary applications on it, then you are
in charge of what your computer does. If a free program has a
malicious feature, other developers in the community will take it out,
and you can use the corrected version. You can also run free
application programs and tools on non-free operating systems; this
falls short of fully giving you freedom, but many users do it.
Treacherous computing puts the existence of free operating systems and
free applications at risk, because you may not be able to run them at
all. Some versions of treacherous computing would require the
operating system to be specifically authorized by a particular
company. Free operating systems could not be installed. Some versions
of treacherous computing would require every program to be
specifically authorized by the operating system developer. You could
not run free applications on such a system. If you did figure out how,
and told someone, that could be a crime.
There are proposals already for US laws that would require all
computers to support treacherous computing, and to prohibit connecting
old computers to the Internet. The CBDTPA (we call it the Consume But
Don't Try Programming Act) is one of them. But even if they don't
legally force you to switch to treacherous computing, the pressure to
accept it may be enormous. Today people often use Word format for
communication, although this causes several sorts of problems (see “We
Can Put an End to Word Attachments”). If only a treacherous computing
machine can read the latest Word documents, many people will switch to
it, if they view the situation only in terms of individual action
(take it or leave it). To oppose treacherous computing, we must join
together and confront the situation as a collective choice.
For further information about treacherous computing, see
http://www.cl.cam.ac.uk/users/rja14/tcpa-faq.html.
To block treacherous computing will require large numbers of citizens
to organize. We need your help! The Electronic Frontier Foundation and
Public Knowledge are campaigning against treacherous computing, and so
is the FSF-sponsored Digital Speech Project. Please visit these Web
sites so you can sign up to support their work.
You can also help by writing to the public affairs offices of Intel,
IBM, HP/Compaq, or anyone you have bought a computer from, explaining
that you don't want to be pressured to buy “trusted” computing systems
so you don't want them to produce any. This can bring consumer power
to bear. If you do this on your own, please send copies of your
letters to the organizations above.
Postscripts
1. The GNU Project distributes the GNU Privacy Guard, a
program that implements public-key encryption and
digital signatures, which you can use to send secure
and private email. It is useful to explore how GPG
differs from treacherous computing, and see what makes
one helpful and the other so dangerous.
When someone uses GPG to send you an encrypted
document, and you use GPG to decode it, the result is
an unencrypted document that you can read, forward,
copy, and even re-encrypt to send it securely to
someone else. A treacherous computing application
would let you read the words on the screen, but would
not let you produce an unencrypted document that you
could use in other ways. GPG, a free software package,
makes security features available to the users; they
use it. Treacherous computing is designed to impose
restrictions on the users; it uses them.
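For contrast, the GPG workflow the postscript describes can be
driven from Python. A small sketch, assuming the third-party
python-gnupg package, a working gpg binary, and an existing keypair
for alice@example.org with the passphrase shown (all assumptions for
illustration):

import gnupg  # third-party wrapper around gpg: pip install python-gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

# Encrypt a message to a recipient whose public key is on the keyring.
encrypted = gpg.encrypt("meet at noon", recipients=["alice@example.org"])
assert encrypted.ok, encrypted.status

# The recipient decrypts with her *own* key and gets ordinary
# plaintext she can read, forward, copy, or re-encrypt. The keys
# serve the user, which is the postscript's point.
decrypted = gpg.decrypt(str(encrypted), passphrase="alice-passphrase")
print(str(decrypted))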
2. The supporters of treacherous computing focus their
discourse on its beneficial uses. What they say is
often correct, just not important.
Like most hardware, treacherous computing hardware can
be used for purposes which are not harmful. But these
uses can be implemented in other ways, without
treacherous computing hardware. The principal
difference that treacherous computing makes for users
is the nasty consequence: rigging your computer to
work against you.
What they say is true, and what I say is true. Put
them together and what do you get? Treacherous
computing is a plan to take away our freedom, while
offering minor benefits to distract us from what we
would lose.
3. Microsoft presents palladium as a security measure,
and claims that it will protect against viruses, but
this claim is evidently false. A presentation by
Microsoft Research in October 2002 stated that one of
the specifications of palladium is that existing
operating systems and applications will continue to
run; therefore, viruses will continue to be able to do
all the things that they can do today.
When Microsoft speaks of “security” in connection with
palladium, they do not mean what we normally mean by
that word: protecting your machine from things you do
not want. They mean protecting your copies of data on
your machine from access by you in ways others do not
want. A slide in the presentation listed several types
of secrets palladium could be used to keep, including
“third party secrets” and “user secrets”—but it put
“user secrets” in quotation marks, recognizing that
this is somewhat of an absurdity in the context of
palladium.
The presentation made frequent use of other terms that
we frequently associate with the context of security,
such as “attack”, “malicious code”, “spoofing”, as
well as “trusted”. None of them means what it normally
means. “Attack” doesn't mean someone trying to hurt
you, it means you trying to copy music. “Malicious
code” means code installed by you to do what someone
else doesn't want your machine to do. “Spoofing”
doesn't mean someone fooling you, it means you fooling
palladium. And so on.
4. A previous statement by the palladium developers
stated the basic premise that whoever developed or
collected information should have total control of how
you use it. This would represent a revolutionary
overturn of past ideas of ethics and of the legal
system, and create an unprecedented system of control.
The specific problems of these systems are no
accident; they result from the basic goal. It is the
goal we must reject.
Ryan
(See webpage for internal links. I just noticed there's some "rich"
language in this, so be advised -- Seth)
‘DPI is necessary’ - Sandvine
DPI, Deep Privacy Invasion (or Deep Packet Inspection) is the tool
used by disgraced ‘behavioural targeting’ firm Phorm on behalf of
giant UK provider BT, as well as other companies.
British government approval of the technology has drawn the UK into
a costly and politically disastrous lawsuit with the European
Commission.
In Canada, its use inspired the federal privacy commissioner to launch
an anti-DPI site which states clearly and unequivocally »»»
Deep packet inspection is just one seemingly neutral technological
application that can have a significant impact on privacy rights and
other basic civil liberties, especially as market forces, the
enthusiasm of technologists and the influence of national security
interests grow stronger.
DPI is employed by a company called Sandvine, based in Waterloo,
Ontario, which has now submitted a CRTC filing on Network Management
(TPN 2008-19, Review of Internet Traffic Management Practices of
Internet Service Providers) in which it claims “DPI is necessary,”
says Sandvine Fluff in a dslreports comment post.
In it, “DPI is necessary for the identification of traffic today
because the historically-used ‘honour-based’ port system of
application classification no longer works,” says Sandvine.
“Essentially, some application developers have either intentionally or
unintentionally designed their applications to obfuscate the identity
of the application. Today, DPI technology represents the only
effective way to accurately identify different types of
applications.”
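For background, the "honour-based" port system is just the
convention that applications use their registered ports (80 for
HTTP, 25 for SMTP, and so on), while DPI matches byte patterns
inside the payload. A minimal Python sketch of both approaches, with
the port table and names as illustrative assumptions (the 20-byte
BitTorrent handshake prefix, though, is real):

# Port-based classification trusts the port number: the honour system.
WELL_KNOWN_PORTS = {80: "http", 25: "smtp", 53: "dns"}

# Payload signatures are what DPI boxes match instead. The BitTorrent
# peer handshake genuinely begins with this byte string.
PAYLOAD_SIGNATURES = {b"\x13BitTorrent protocol": "bittorrent"}

def classify(dst_port: int, payload: bytes) -> str:
    # Honour system first: works only while apps keep to their ports.
    if dst_port in WELL_KNOWN_PORTS:
        return WELL_KNOWN_PORTS[dst_port]
    # DPI fallback: look inside the packet body.
    for signature, app in PAYLOAD_SIGNATURES.items():
        if payload.startswith(signature):
            return app
    return "unknown"

handshake = b"\x13BitTorrent protocol" + b"\x00" * 8
print(classify(6881, handshake))  # "bittorrent": caught by the signature
print(classify(80, handshake))    # "http": hiding on port 80 fools the port table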
Really?
‘Policy management’
Whenever you see a corporate product with ‘fair’ in the name, you can
be 100% sure it’ll be the exact opposite, p2pnet posted a little less
than a year ago, going on »»»
Apple’s FairPlay DRM is a shining example, and now ace Canadian
digital restrictions management company Sandvine has come out with a
product sure to make the likes of Bell Canada and Rogers glow.
Sandvine, which coined the notable phrase ‘policy management,’ is now
touting Sandvine FairShare to, “enhance its suite of Traffic
Optimization solutions”.
For ‘Traffic Optimization’ read bandwidth throttling, and Sandvine’s
new consumer control technology ‘empowers’ ISPs, enabling, “fair usage
in the shared access network” with “advanced techniques” to “ensure
equitable allocation of network resources during periods of
congestion,” it says.
And it’s “fully application-agnostic,” meaning BitTorrent isn’t the
only P2P file sharing application it’ll target.
We continued »»»
“FairShare automatically responds to the changing network environment
and subscriber usage patterns in real-time,” says Sandvine.
To do that, it must be constantly spying on users and although DPI
isn’t mentioned, one wonders if it figures in Sandvine’s FairShare.
DPI = Deep Packet Inspection which, says the Wikipedia, “enables
advanced security functions as well as internet data mining,
eavesdropping, censorship, etc”.
CAIP (Canadian Association of Internet Providers) said in a submission
to Canadian regulators, “Bell is using DPI to sequester or ‘hijack’
certain data packets as they pass through the network, and hold these
packets hostage until certain pre-conditions are met …”
And CIPPIC (Canadian Internet Policy and Public Interest Clinic) is
asking the Canadian privacy commissioner to open an investigation
because, it says, Bell has not only, “failed to obtain the consent of
its retail and wholesale internet customers in applying its
deep-packet inspection technology, which tells the company what
subscribers are using their connections for,” it’s using Deep Packet
Inspection to, “find and limit the use of peer-to-peer applications
such as BitTorrent, which it says are congesting its network”.
Sandvine says, blandly, its FairShare, “collects subscriber usage
metrics from various sources and analyzes the data according to
sophisticated, configurable parameters”.
Then it, “dynamically modifies policies to balance available bandwidth
and resources among subscribers”.
It actively throttles bandwidth, in other words.
According to Sandvine in its submission to the CRTC, “DPI is necessary
for the identification of traffic today because the historically-used
“honour-based” port system of application classification no longer
works. Essentially, some application developers have either
intentionally or unintentionally designed their applications to
obfuscate the identity of the application. Today, DPI technology
represents the only effective way to accurately identify different
types of applications.”
Now, in the first of what’s certain to be a long series of posts and
arguments deconstructing Sandvine’s claims of innocence, “Boy, this
makes me glad I gave up the free beer and ended up working elsewhere,”
says shepd in dslreports (http://www.dslreports.com/profile/933870),
going on »»»
Sandvine (6) : Sandvine submits that the true “content” of an Internet
transmission is represented as the body of your e-mail message; the
music or movie you are downloading; the video you are streaming; the
words in your VoIP call, etc. As explained in Sandvine’s initial
comments to the Notice, Sandvine’s congestion management solutions,
including those that employ DPI, do not inspect content as the content
is not relevant to a congestion management solution. To be clear,
they: Do not read your e-mail; Do not listen to your voice calls; Do
not watch the video you are streaming, etc.
shepd: Point 6 is (or will be) a lie. The best DPI systems would
implement caching for streaming video, I’m guessing Sandvine doesn’t
do this (yet).
Sandvine (16): Because typical congestion management solutions do not
inspect the actual content of users’ Internet traffic, they also
cannot record, report on, or store such personal information. As
explained in paragraph 62 of Sandvine’s original comments, the most
“personal” information that Sandvine’s congestion management solutions
record for an Internet account (i.e., not a particular individual, but
the IP address attached to an Internet account, which may include
access for many individuals) is aggregate volume usage data, by
application or protocol. For example, a typical congestion management
solution could report the number of bytes of a VoIP protocol sent
and/or received by a given Internet account over a fixed period.
shepd: I know personally that this is an absolute and complete utter
lie. One of
Sandvine’s most popular solutions was to combine logging activities
with their DPI hardware. You could buy several TB log servers just for
this purpose. The idea was that when you call up support they could
check your account on this log server and see if you have viruses or
are running P2P so they could weed out people who just can’t fix their
PCs vs. people with bad connections.
Sandvine (17): As described above, Sandvine submits that the use of
DPI-based congestion management solutions do not create a privacy
concern in that they do not inspect content for the purposes of
traffic classification, nor is any such information stored within such
solutions. Despite this fact, certain respondents claim that somehow
the mere presence of DPI-based technology itself raises privacy
issues, and have called for an outright ban on any such technology.
Imagine if this approach were applied to other technologies, such as
those supporting cameras. Single Lens Reflex (SLR) technology
underlies cameras that take photos at family birthday parties. The
same technology has been applied for surveillance of individuals and
public spaces. One use of the technology raises privacy issues, the
other does not. Nobody questions the value or validity of the camera
technology. So why question DPI technology? Privacy concerns properly
attach to applications or uses of technologies, not to the
technologies themselves.
shepd: 17 is just plain stupid. Encrypted communications are private
by their very nature. If I walk into most museums and start taking
pictures (especially with an SLR) I’ll be escorted out by the police,
because it’s trespassing. I’ll probably be served, too, if it’s
obvious I was intending to be a douche about it.
Sandvine (18): Banning the use of DPI would have far-reaching and
damaging consequences across the Internet, where the technology is
used extensively. The wireless router in your home probably uses DPI
to make sure that time-sensitive packets like VoIP or gaming are
delivered quickly, while delaying less time-sensitive packets like
e-mail. Firewalls, some built right into popular PC operating
systems, use DPI to analyze packets for malicious intent like
viruses, trojans, and Spam. Libraries, schools and government
institutions rely on their firewalls to protect themselves and their
users from attacks. Those firewalls use DPI technology. Load
balancers and routers, indispensable hardware that distribute traffic
on the Internet and private networks, use DPI to identify where a
given packet or URL should be routed and what priority it should be
given.
shepd: Yes, that’s why we want DPI banned for PUBLIC usage, not
PRIVATE. Duh.
Sandvine (19): DPI is also a key part of the innovation in allowing a
migration from IPv4 to IPv6 allowing a network operator to convert
from one to the other using a carrier-grade
network-address-translation (NAT) and keeping protocols such as VoIP
operational.
shepd: WTF??? How the hell can inspecting a packet help you take an
IPv4 address and put it on an IPv6 network without modifying the
contents of the packet? And I thought you just said in point 6 you
don’t inspect the content? How do you even know it’s an IPv4 packet
then?
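For what it's worth, the kernel of truth in Sandvine's point 19 is
the application-layer gateway: protocols like SIP carry literal IP
addresses inside the payload (the SDP "c=" line), so a translator
that rewrites only headers breaks the call. Whether that counts as
the DPI under debate is another question. A toy Python sketch, with
the addresses and mapping invented for illustration:

import re

def natpt_rewrite_sdp(payload: str, v4_to_v6: dict[str, str]) -> str:
    """Toy application-layer gateway: rewrite IPv4 literals that
    SIP/SDP carries inside the payload so media still flows after
    IPv4->IPv6 translation. Header-only NAT cannot do this without
    reading the application data."""
    def swap(match: re.Match) -> str:
        v4 = match.group(1)
        return "c=IN IP6 " + v4_to_v6.get(v4, v4)
    return re.sub(r"c=IN IP4 (\d+\.\d+\.\d+\.\d+)", swap, payload)

sdp = "v=0\r\nc=IN IP4 10.0.0.5\r\nm=audio 49170 RTP/AVP 0\r\n"
print(natpt_rewrite_sdp(sdp, {"10.0.0.5": "2001:db8::5"}))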
Sandvine (20): As described above, Sandvine submits that typical
congestion management practices (which the Company believes is the
subject of the Notice) do not raise personal privacy issues. However,
Sandvine recognizes that other Internet solutions that are in high
demand from consumers, governments and society in general may raise
personal privacy considerations. Examples, raised by certain
respondents include lawful intercept, copyright enforcement, and
targeted advertising.
shepd: See 19
… and, “22, 24, 26 — Contradict point 6, again,” he says. [22 -- To
continue the earlier analogy, surveillance of individuals or public
spaces could be achieved through a SLR-supported still frame camera or
through video recorders supported by a variety of technologies.
Similarly, solutions like lawful intercept, copyright enforcement and
targeted advertising are achieved through a variety of technologies,
not just, or even predominantly, DPI. 24 -- DPI technology can
comprise a component of targeted advertising solutions, but it has
been very rarely used this way. Instead, other technologies have
dominated. Google is one of the leaders in targeted advertising, but
to Sandvine's knowledge its targeted advertising solutions do not use
DPI. According to Google's own Advertising and Privacy notice in
connection with its enormously popular Gmail e-mail application,
Google reads your mail to make decisions on targeted advertising:
"The Gmail filtering system also scans for keywords in users' emails
which are then used to match and serve ads. When a user opens an
email message, computers scan the text and then instantaneously
display relevant information that is matched to the text of the
message." 26 -- Lawful intercept provides another example of how
privacy-sensitive solutions can be enabled by a wide variety of
technologies. In the United States under the Communications
Assistance for Law Enforcement Act (CALEA), service providers are
required to identify and intercept criminal data traffic under a
lawful warrant provided by law enforcement agencies. DPI technology
could be used in a solution designed to support the collection of that
data, but so too could a home computer "tapped" into the
communications of the individual that is the subject of the warrant.]
Sandvine (25): According to the Google Toolbar Privacy Notice, the Web
History service available through the popular Google Toolbar, “records
information about the web pages you visit and your activity on Google,
including your search queries, the results you click on, and the date
and time of your searches in order to improve your search experience
and display your web activity. Over time, the service may also use
additional information about your activity on Google or other
information you provide us in order to deliver a more personalized
experience.” According to the same Privacy Notice, Google’s PageRank
service also sends Google “the addresses or other information about
sites when you visit them.” According to Google’s Privacy FAQ,
Google stores search engine log data for each user for 18 months
prior to anonymizing it. Again, to Sandvine’s knowledge, none of these
solutions use DPI.
shepd: So, because Google does it differently, that’s how it’s all
done, right? I use a 1541 disk drive (Commodore), so *OBVIOUSLY* my PC
can read the disks, you know, because *I* do it that way. Yup. Awesome
argument.
Sandvine (27): In many cases, questions around privacy-sensitive
Internet solutions will ultimately come down to the ability to secure
sufficient user consent. To date, vendors of privacy-sensitive
solutions like targeted advertising have struggled with providing
reliable mechanisms for managing user consent. The mechanisms,
whether designed as opt-in (where the user must proactively consent to
being subject to the solution) or opt-out (where the user must
proactively demand NOT to be subject to the solution) have typically
been cookies-based. Cookies are “small pieces of text, stored by a
user’s web browser, that contain the user’s settings, shopping cart
contents, or other data used by websites.” 29 -- Fortunately, a
better solution to the consent problem is available, through a
network-level association between the subscriber’s account and his
permission settings related to the privacy-sensitive solutions.
Regardless of the
computer he uses to access his Internet account or the browser that he
uses on those computers, the permissions follow the user. Only if
the user intentionally changes his account-level privacy permissions
could a previously opted-out user be opted-in. Such a solution can be
implemented through the use of DPI technology.
shepd: 27 - 29 — Nothing at all to do with DPI.
Sandvine (30, 32): Service providers are just beginning to explore
other uses of DPI that can make their service offerings more
attractive to consumers in an increasingly competitive Internet access
market. High-speed Internet services are largely offered in the form
of flat rate, monthly, unlimited plans. Consumers may be interested
in other types of service plans that better reflect the unique ways
that they use their Internet connections. Such plans would likely
necessitate the ability to differentiate between types of traffic and
applications, which in turn would necessitate the use of DPI
technology
as well as other network intelligence tools. 32 — Other consumers may
be interested in a service package that guarantees a high quality of
service for certain frequently-used, latency-sensitive applications,
like Internet video gaming or VoIP. A DPI-supported policy solution
that can distinguish between different types of traffic and
applications is necessary to enable this type of service package.
shepd: 30 - 32 — Direct assault on net neutrality. Pay more for real
internet, pay less for fake internet. Why go through the effort with
DPI? Just dump in a forced proxy server and you’re gold if you just
want to provide KIRF internet.
Sandvine (35): In response to point “a” (and as already described in
paragraph 55 of Sandvine’s initial comments) a policy that is targeted
at disproportionate users of bandwidth can become more targeted by
applying an application-specific policy as well. For example, by
their nature, applications like VoIP, online video gaming and others
do not contribute meaningfully to network congestion, but because they
are time-sensitive applications, their usefulness to the consumer is
greatly impacted by any delays in their delivery. Congestion
management solutions allow service providers to create a
narrowly-targeted policy that affects: only disproportionate users;
only applications that contribute disproportionately to bandwidth
consumption; and only applications that are not time-sensitive.
shepd: 35 — In my opinion, nothing is a bigger hog than work VPNs. So
let’s boot off these corporate hogs. Oh wait, this is all
opinion-based and therefore total BS, right?
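(Paragraph 35’s policy reduces to a three-way conjunction. A toy Python
predicate, with invented thresholds and classifications, shows the
shape of it:)

    # Toy predicate for the congestion-management policy in paragraph 35:
    # act only on heavy users, only on applications contributing heavily
    # to congestion, and only on traffic that is not time-sensitive.
    # All thresholds are illustrative assumptions.
    def should_manage(user_gb_per_day, app_share_of_congestion,
                      time_sensitive, user_threshold=10.0,
                      app_threshold=0.3):
        return (user_gb_per_day > user_threshold
                and app_share_of_congestion > app_threshold
                and not time_sensitive)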
Sandvine (36): Such a policy would minimally impact users’ quality of
experience, while achieving the congestion management goal. Sandvine
is focused on maximizing the user’s Internet experience.
shepd: 36 — “Maximizing” their experience the way they did with
Comcast, yes? Yes, I sure do feel people had their experience with
tech-support “maximized”.
Sandvine (41): Further, many IETF standards implicitly require the use
of DPI, such as RFC 3489, “Simple Traversal of User Datagram Protocol
(UDP) Through Network Address Translators (NATs)”, and RFC 2766,
“Network Address Translation - Protocol Translation (NAT-PT)”
shepd: 41 — If my IP started with 192.168, 172.16-31, or 10., this
would be right. Guess what, that’s not what any of this is about.
Sandvine (42): One of the DPI-supported congestion management policies
that Sandvine has historically offered service providers is “session
management” of P2P file-sharing traffic through the use of TCP Reset
packets (RST packets) (see paragraph 53 of Sandvine’s initial
comments). Despite the claims of certain respondents, there are
simply no IETF standards on when or how RST packets should be used.
It is further claimed that the RST packets used in session management
are in some way “forged” because an RST packet is supposed to mean
that “the other end of the connection has failed.” While original
implementations of RST packets were for this purpose, as with much on
the Internet, their use has evolved. For example, most webservers use
RST packets today as a mechanism for tearing down TCP connections
because it is much more efficient than a four-way connection teardown.
In short, RST packets are broadly used today and for purposes other
than communicating that “the other end of the communication has
failed.”
shepd: 42 — The US Government, of all people, has told you, Sandvine,
that you *are* impersonating people on the internet by injecting RST
packets. STFU already!
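(For readers who have not seen RST injection up close: the reason these
resets are called “forged” is that a reset crafted by a middlebox is
indistinguishable, to the endpoints, from one sent by the other party --
it only has to match the connection’s addresses, ports and expected
sequence number. A minimal sketch in Python with scapy; every address,
port and sequence number here is hypothetical:)

    from scapy.all import IP, TCP, send

    def inject_rst(src_ip, dst_ip, sport, dport, seq):
        # Forge a reset that appears to come from src_ip; the receiver
        # tears down the connection as if the peer had aborted it.
        rst = IP(src=src_ip, dst=dst_ip) / TCP(sport=sport, dport=dport,
                                               flags="R", seq=seq)
        send(rst, verbose=False)

    # Hypothetical usage, resetting a P2P session “from” the peer:
    # inject_rst("203.0.113.5", "198.51.100.7", 6881, 51413, 123456789)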
“Sandvine, you are embarrassing my hometown,” says shepd, adding, “If
you are going to write shit, at least make it coherent shit.”
And guess who’ll be taking notes avidly, if it isn’t already in
behind-closed-doors communication with Sandvine?
Phorm, anyone?
Definitely stay tuned.
Jon Newton - p2pnet
(Thanks, Marc)
Government Still Blocking Information on Secret IP Enforcement Treaty
Broken Promises from the Obama Administration Keep Americans in the
Dark About ACTA
May 6th, 2009
Washington, D.C. - Two public interest groups today called on the
government to stop blocking the release of information about a secret
intellectual property trade agreement with broad implications for
privacy and innovation around the world.
The Electronic Frontier Foundation (EFF) and Public Knowledge said
that the April 30th release of 36 pages of material by the United
States Trade Representative (USTR) was the second time the government
had the opportunity to provide some public insight into the
Anti-Counterfeiting Trade Agreement (ACTA), but declined to do so.
More than a thousand pages of material about ACTA are still being
withheld, despite the Obama administration's promises to run a more
open government.
"We are very disappointed with the USTR's decision to continue to
withhold these documents," said EFF Senior Counsel David Sobel. "The
president promised an open and transparent administration. But in this
case and others we are litigating at EFF, we've found that the new
guidelines liberalizing implementation of the Freedom of Information
Act haven't changed a thing."
EFF and Public Knowledge filed suit in September of 2008, demanding
that background documents on ACTA be disclosed under the Freedom of
Information Act (FOIA). Initially, USTR released 159 pages of
information about ACTA and withheld more than 1300 additional pages,
claiming they implicate national security or reveal the USTR's
"deliberative process." After reconsidering the release under the
Obama administration's new transparency policies, the USTR disclosed
the additional pages last week, most of which contain no substantive
information.
However, one of the documents implies that treaty negotiators are
zeroing in on Internet regulation. A discussion of the challenges for
the pact includes "the speed and ease of digital reproductions" and
"the growing importance of the Internet as a means of distribution."
Other publicly available information shows that the treaty could
establish far-reaching customs regulations over Internet traffic in
the guise of anti-counterfeiting measures. Additionally,
multi-national IP industry companies have publicly requested that ISPs
be required to engage in filtering of their customers' Internet
communications for potentially copyright-infringing material, force
mandatory disclosure of personal information about alleged copyright
infringers, and adopt "Three Strikes" policies requiring ISPs to
automatically terminate customers' Internet access upon a repeat
allegation of copyright infringement.
"What we've seen tends to confirm that the substance of ACTA remains a
grave concern," said Public Knowledge Staff Attorney Sherwin Siy. "The
agreement increasingly looks like an attempt by Hollywood and the
content industries to perform an end-run around national legislatures
and public international forums to advance an aggressive, radical
change in the way that copyright and trademark laws are enforced."
"The USTR's official summary of the process, released last month,
recognized the lack of transparency so far while doing nothing to
broaden stakeholder input or engage public debate," said International
Affairs Director Eddan Katz. "The radical proposals being considered
under the Internet provisions deserve a more transparent process with
greater public participation."
Litigation in the case will now continue, with USTR asking U.S.
District Judge Rosemary M. Collyer to uphold its decision to conceal
virtually all of the information that EFF and PK seek concerning the
ACTA negotiations.
For the documents released so far:
http://www.eff.org/fn/directory/6661/329
For more on ACTA:
http://www.eff.org/issues/acta/
Contacts:
Rebecca Jeschke
Media Relations Director
Electronic Frontier Foundation
pr...@eff.org
Art Brodsky
Communications Director
Public Knowledge
abro...@publicknowledge.org
> http://news.idg.no/cw/art.cfm?id=16272025-1A64-6A71-CE42CECA725771BA
Reform of EU telecom laws rejected by European Parliament
Paul Meller
06.05.2009 kl 13:23
Wide-ranging reforms of European Union telecom laws were rejected by
the European Parliament on Wednesday because of one clause that would
have compromised citizens' rights of access to the Internet.
The reforms were designed to take account of advances in technology
and the rapid growth of high speed Internet access.
The Parliament supported all other aspects of the reforms, including
the creation of an E.U.-wide telecommunications regulator with powers
to police competition in the single market, a plan for distributing
radio spectrum among emerging mobile technologies, and enhancing
citizens' privacy rights and online data protection.
However, failure to agree one element in the so-called telecom package
of legislation means the whole reform is stalled, Members of the
European Parliament (MEPs) said after the vote in Strasbourg, France,
on Wednesday.
Initial reactions to the vote lauded the MEPs for not bowing to
pressure from national governments, in particular France and the U.K.,
which wanted greater power to restrict people's internet access if
they are found to have been downloading copyrighted content illegally.
The cable industry was one of the first to react.
"This is ultimately a consumer issue and the European Parliament stood
up to be counted on behalf of its citizens and our 70 million European
customers," said Manuel Kohnstam, president of the trade group Cable
Europe.
"Europe has chosen to ignore a reflex to police the net in the name of
one business model. In the end, there was support to protect the
European fundamental right to access information," he said.
Representatives of the European Commission, which wrote the reforms
and pushed hard for their adoption in recent weeks, weren't
immediately available to comment. Telecom Commissioner Viviane Reding
will hold a news conference later Wednesday, together with key MEPs
involved in the law reform effort.
Some observers still believe the bulk of the reforms can be adopted
even without support for the full package. However, representatives
for the Commission and the Parliament were not immediately available
to confirm this.
The failure to adopt the whole package came as a surprise to many
observers, who believed the reform package was "in the bag", as one
person close to Reding put it.
"The basis of our governments being the opinion of the people, the
very first object should be to keep that right; and were it left to me
to decide whether we should have a government without newspapers or
newspapers without a government, I should not hesitate a moment to
prefer the latter. But I should mean that every man should receive
those papers and be capable of reading them."
-- Thomas Jefferson, 1787
(I nevertheless still look upon the Obama administration with a
considerable and appropriate measure of trepidation and querulousness.
I don't exactly see the words "citizen-powered approaches" regarding
international concerns as quite conveying what needs to be heard.
(If "citizen-powered approaches" means according the sort of regard
for the autonomy of their own representative organs of government that
the United States heralded, from its very inception, in the history of
the human race -- specifically as opposed to undue regard for
transnational entities acting to overpower those organs and their
states that originally granted those entities their limited liability
privileges -- then we would be hearing something I feel would be worth
lending some degree of my trust to. This is the key factor I believe
should be borne in mind as one approaches the policy table, even as
one nurtures an "open mind." And one has to remain conscious that
this administration has taken a good number of steps in certain areas
that already simply have to be reversed/rectified. Just how things
look to me. -- Seth)
Crawford: Tech Agenda Just Beginning
> http://techdailydose.nationaljournal.com/2009/06/crawford-tech-agenda-just-begi.php
Obama Team Stumps for Tech Policy
> http://www.internetnews.com/security/article.php/3823091/Obama+Team+Stumps+for+Tech+Policy.htm
Broadband Still at Top of Obama’s List, Says Crawford
> http://broadbandcensus.com/2009/06/tech-policy-broadband-still-at-top-of-obamas-list-says-crawford/
---
> http://techdailydose.nationaljournal.com/2009/06/crawford-tech-agenda-just-begi.php
Tuesday, June 2, 2009
Crawford: Tech Agenda Just Beginning
Even though the Obama administration has made important, early strides
in its first 133 days as part of its technology policy agenda, a key
adviser to the president on Tuesday said the White House has a long
way to go. "We need your criticism, your engagement, your involvement,
and your help," Susan Crawford, special assistant to the president for
science, technology and innovation policy, told the Computers Freedom
& Privacy conference (http://www.cfp.org/). After "timely, targeted
and tapered" economic stimulus package implementation, the
administration's focus will turn to job creation -- and that weighs
heavily on high-tech investment, said Crawford who is also a member of
the National Economic Council. Innovation is tied to a range of
priorities from diminishing the country's carbon footprint and
creating clean energy jobs to reducing the cost of healthcare and
educating the next generation.
Crawford also spoke about the need to bolster broadband deployment and
bridge the gaps between urban and rural areas and rich and poor
Americans. She said the United States is "definitively behind" its
international counterparts and Obama cares deeply about the issue.
"This is not about national pride. This is about restoring American
competitiveness for the future," Crawford stressed. Addressing the
problem will require "civility, thoughtfulness and attention" and that
work has begun at the FCC and elsewhere in the administration, she
added. On the international front, Crawford pointed out that the State
Department is using technology to expand its traditional
government-to-government outreach to incorporate citizen-centered
approaches to advancing U.S. diplomatic and developmental goals. "A
networked public can meaningfully shape international politics,"
Crawford said. Also from CFP: White House Aide Warns Online
Advertisers To Be Monitored (Dow Jones)
(http://online.wsj.com/article/BT-CO-20090602-708994.html).
---
> http://www.internetnews.com/security/article.php/3823091/Obama+Team+Stumps+for+Tech+Policy.htm
Obama Team Stumps for Tech Policy
June 2, 2009
By Kenneth Corbin: More stories by this author:
White House official Susan Crawford kicks off a D.C. conference with
an emphatic call for progressive tech policy at home and abroad.
WASHINGTON -- The Obama administration's tech policy team has been on
a bit of a publicity junket of late.
In the four months since taking office, President Obama has added
several tech advisory positions to the White House staff, made a
series of splashy overtures to new media, and last week, he lent his
imprimatur to an ambitious program to overhaul national cybersecurity.
And in an effort to demonstrate the administration's commitment to
technology, members of the team in recent weeks have become regulars
on the tech policy conference circuit.
One is Susan Crawford, the president's special assistant for science,
technology and innovation policy, who was on hand to kick off this
year's Computers, Freedom and Privacy conference here at George
Washington University.
"Tech policy is at the heart of this administration's plans for the
future," she declared.
Crawford held forth on a host of areas the administration has
identified as priorities, including balancing security with privacy --
one of the prevailing themes of this week's conference -- as well as
Net neutrality and universal access to high-speed networks.
"To be connected is increasingly essential," Crawford said, noting
that much of the administration's domestic agenda, including
healthcare and energy reform, hinges on ubiquitous Internet
connectivity.
"Our broadband connections as a country are slow and expensive," she
said. "We're not falling behind, we are definitively behind."
The administration put what it described as a down payment on the
country's digital infrastructure with the February economic stimulus
bill, which directed $7.2 billion for broadband projects. But that
money, the administration acknowledges, is only a start.
"There is no one easy answer, but without adequate high-speed
connections, we will miss tremendous opportunities to pioneer the
great innovation of the future," Crawford said. "Because pioneers
these days are working on data-intensive, collaborative projects that
require high-speed connections that we may not have."
Crawford went so far this morning as to compare Obama to Lincoln in
his commitment to the cutting-edge technology of the day. Just as
Obama is forging a new digital infrastructure, she said, Lincoln was a
driving force behind the expansion of the railroad. And when Lincoln
arrived at the White House, he installed a telegraph line to receive
real-time updates from his advisors in the field, much like the
tech-savvy transition team set about modernizing the IT facilities when it
set up shop in January.
Critics have charged that the administration's actions haven't kept
pace with its rhetoric about open and transparent government. The
president had pledged to post all non-emergency bills online for five
days before signing them into law, for instance, but in most cases
that hasn't happened.
Nevertheless, the administration has taken the initiative to create or
overhaul several Web sites in an effort to make government information
more accessible to the public.
One was Data.gov, a project led by newly minted federal CIO Vivek
Kundra and Beth Noveck, who holds the title of deputy CTO and is
charged with promoting open government initiatives. The Web site
houses vast stores of government information, organized and formatted
in a way the administration hopes will make it easy for the public to
access and analyze.
When administration officials talk about their open government
initiatives, they frame them in the crowd-sourced model of Web 2.0
technologies, like Wikipedia, or the software development kits and
APIs companies like Apple (NASDAQ: AAPL) or Facebook have made
available to the developer community. The aim is to make information
available to as broad an audience as possible, unleashing a flood of
innovation that runs in unpredictable directions.
"We have no idea how this data will be used, and that's the point,"
Crawford said.
Crawford also touched on the administration's efforts to promote
collaborative technologies abroad as a vehicle for advancing the
country's diplomatic agenda. All foreign service officers now receive
training in new media technologies to better connect directly with the
citizens of foreign countries as the State Department pursues a
strategy it calls 21st century statecraft. Crawford also cited Obama's
move to ease restrictions for U.S. telecom companies looking to do
business in Cuba, an effort seen in part to sow the seeds of democracy
by making it easier for Cubans to connect with the outside world.
---
> http://broadbandcensus.com/2009/06/tech-policy-broadband-still-at-top-of-obamas-list-says-crawford/
Broadband Still at Top of Obama’s List, Says Crawford
By Andrew Feinberg, Deputy Editor, BroadbandCensus.com
Tuesday, June 2nd, 2009
WASHINGTON, June 2, 2009 - Just 133 days into the Obama
administration, technology policy and broadband deployment are issues
“at the heart of this administration’s plans for the future,” Special
Assistant to the President for Science, Technology and Innovation
Policy Susan Crawford said Tuesday during opening remarks at the
Computers, Freedom, Privacy conference in Washington.
Broadband deployment remains a linchpin of the Obama agenda,
particularly for the nation’s short and long term economic health. The
nation’s broadband connections are “slow and expensive” compared to
the rest of the world, Crawford said.
"We are not falling behind," she warned. "We are definitely behind."
High speed networks can bridge economic, racial and cultural divides,
Crawford said. Even the homeless now need access to the internet,
Crawford said, referencing a recent article in The New York Times.
"We’re talking about…the human need to connect," she said.
More importantly, broadband will be key to the nation’s economic
recovery and future stability. "The president cares deeply about
[broadband]," she said. "Without adequate high speed connections, we
will miss opportunities."
The Federal Communications Commission’s forthcoming national broadband
strategy will be a key tool to help aid the recovery effort, even
after the stimulus programs have ended, Crawford said.
"We have to focus on creating jobs after the stimulus," she said.
Network neutrality will almost certainly be an important element of
that plan to encourage economic prosperity, she suggested.
But no matter the specifics of the FCC plan, Crawford was emphatic
about the importance of a coherent national strategy.
"[The plan] is not about national pride . . . but about economic
competitiveness for the future."
It’s The Internet Stupid
A Comment on Notice of Inquiry, FCC GN Docket No. 09-51
Comments on A National Broadband Plan For Our Future,
Notice of Inquiry, FCC GN Docket No. 09-51.
The American Recovery and Reinvestment Act (ARRA) aims at building a
new economic foundation for the United States by providing, “job
preservation and creation, infrastructure investment, energy
efficiency and science, assistance to the unemployed,” et cetera. As
one step towards these goals, the ARRA mandates that the FCC deliver a
National Broadband Plan to Congress by February 17, 2010.
The National Broadband Plan mandated in Section 6001(k)(2) of the ARRA
makes clear that its objectives are for, “all people of the United
States . . . the public . . . [for] advancing consumer welfare, civic
participation, public safety and homeland security, community
development, health care delivery, energy independence and efficiency,
education, worker training, private sector investment, entrepreneurial
activity, job creation and economic growth, and other national
purposes.” It would be impossible to achieve most of these benefits
without the Internet. The most direct, most immediate way to reach
these objectives is via broadband connections to the Internet.
Broadband has other uses, to be sure. It is used for cellular
backhaul, in cable TV systems, for proprietary financial transaction
networks and for other proprietary enterprise networks. While cellcos,
cablecos and enterprises may need better broadband technologies for
their own proprietary purposes, these uses don’t rise to the level
that would require a National Broadband Plan for “all people of the
United States.” The people of the United States already have
reasonable telephone and television services; they need faster, more
affordable, more ubiquitous, more reliable connections to the
Internet.
Broadband is not the Internet. Broadband is shorthand for a diverse
class of wired and wireless digital transmission technologies. The
Internet, in contrast, is a set of public protocols for
inter-networking systems that specifies how data packets are
structured and processed. Broadband technologies, at their essence,
are high-capacity and always-on. The essence of the Internet is (a)
that it carries all packets that follow its protocols regardless of
what kinds of data the packets carry, (b) that it can interconnect all
networks that follow those protocols, and (c) that its protocols are
defined via well-established public processes.
There’s risk in confusing broadband and Internet. If the National
Broadband Plan starts from the premise that the U.S. needs the
innovation, increased productivity, new ideas and freedoms of
expression that the Internet affords, then the Plan will be shaped
around the Internet. If, instead, the Plan is premised on a need for
broadband, it fails to address the ARRA’s mandated objectives
directly. More importantly, the premise that broadband is the primary
goal entertains the remaking of the Internet in ways that could put
its benefits at risk. The primary goal of the Plan should be broadband
connections to the Internet.
The FCC’s Internet Policy Statement of 2005 is a first attempt to
codify important aspects of the Internet independent of access
technology. It advocates end-user access to content, and end-user
choice of applications, services and devices. It says that Internet
users are, “entitled to competition,” but it does not spell out the
entitlement to the benefits of competition, such as increased choice,
lower price and diversity of offers. It fails to provide for
information about whether advertised services perform as specified. It
doesn’t address packet inspection, packet discrimination, data
collection or end-user privacy. It is not clear that all of these are
within the FCC’s purview, but it is abundantly clear that all of these
factors should be critical to a National Broadband Plan that addresses
broadband connections to the Internet.
Therefore, we urge that the FCC’s National Broadband Plan emphasize
that broadband connection to the Internet is the primary goal. In
addition, we strongly suggest that the Plan incorporate the FCC
Internet Policy Statement of 2005 and extend it to (a) include
consumer information that meaningfully specifies connection
performance and identifies any throttling, filtering, packet
inspection, data collection, et cetera, that the provider imposes upon
the connection, (b) prohibit discriminatory or preferential treatment
of packets based on sender, recipient or packet contents. Finally, we
suggest that the Internet is such a critical infrastructure that
enforcement of mandated behavior should be accompanied by penalties
severe enough to deter those behaviors.
Signatories
John Perry Barlow, co-founder Electronic Frontier Foundation,
bar...@eff.org
Scott Bradner, University Technology Security Officer, Harvard
University, s...@harvard.edu
Dave Burstein, Editor, DSL Prime, da...@dslprime.com
Robin Chase, Meadow Networks, rch...@alum.mit.edu
Judi Clark, independent consultant, ju...@manymedia.com
Gordon Cook, Editor & Publisher, Cook Report on Internet Protocol,
co...@cookreport.com
Steve Crocker, Author RFC #1, CEO Shinkuro, st...@shinkuro.com
Susan Estrada, President, FirstMile.US, su...@firstmile.us
Harold Feld, blogger http://wetmachine.com, harol...@gmail.com
Tom Freeburg, CTO Memorylink, t...@memorylink.com
Dewayne Hendricks, CEO Tetherless Access, dew...@tetherless.com
David S. Isenberg, isen.com, LLC & F2C:Freedom to Connect,
is...@isen.com
Jeff Jarvis, City University of New York Graduate School of
Journalism; Author of What Would Google Do, je...@buzzmachine.com
Mitch Kapor, co-founder Electronic Frontier Foundation,
mi...@kapor.com
Larry Lessig, Professor at Harvard Law School & Director of Harvard
University Edmond J. Safra Foundation Center for Ethics,
les...@pobox.com
Sascha Meinrath, Open Technology Initiative, New America Foundation,
mein...@newamerica.net
Jerry Michalski, independent consultant, je...@sociate.com
Elliott Noss, CEO Tucows, en...@tucows.com
Leslie Nulty, Principal, Focal Point Advisory Services/Project
Coordinator, East Central Vermont Community Fiber Network Project; and
Treasurer, Vermont Businesses for Social Responsibility,
nulty_...@yahoo.com
Tim Nulty, CEO, East Central Vermont Community Fiber Network Project
t_n...@yahoo.com
Tim O’Reilly, founder and CEO of O’Reilly Media, t...@oreilly.com
Andrew Rasiej, Personal Democracy Forum, and...@fon.com
David P. Reed, early contributor to the Internet architecture, MIT
Media Laboratory, dpr...@reed.com
Howard Rheingold, Author of The Virtual Community and Smart Mobs,
how...@rheingold.com
Roy Russell, GoLoco, Inc., r...@alum.mit.edu
Doc Searls, Harvard Berkman Center for Internet & Society,
dse...@cyber.law.harvard.edu
Micah L. Sifry, Personal Democracy Forum, msi...@gmail.com
Dana Spiegel, Executive Director, NYCwireless, Da...@NYCwireless.net
Aaron Swartz, Co-Founder, BoldProgressives.org, m...@aaronsw.com
Katrin Verclas, Co-Founder, MobileActive.org, katrin...@gmail.com
David Weinberger, Harvard Berkman Center for Internet & Society,
se...@evident.com
Stanton Williams, Board Chair, ValleyNet, stan.w...@valley.net
Brian Worobey, CEO, openairboston.net, br...@openairboston.net
Esme Vos Yu, founder of Muniwireless.com, es...@muniwireless.com
Seth
> http://www.eff.org/press/archives/2009/06/17
EFF and Public Knowledge Reluctantly Drop Lawsuit for Information
About ACTA
Government's 'National Security' Claims Keep IP Treaty Under Wraps
June 17th, 2009
Washington, D.C. - The Obama Administration's decision to support
Bush-era concealment policies has forced the Electronic Frontier
Foundation (EFF) and Public Knowledge (PK) to drop their lawsuit about
the proposed Anti-Counterfeiting Trade Agreement (ACTA). EFF and PK
had been seeking important documents about the secret intellectual
property enforcement treaty that has broad implications for global
privacy and innovation.
Federal judges have very little discretion to overrule Executive
Branch decisions to classify information on "national security"
grounds, and the Obama Administration has recently informed the court
that it intends to defend the classification claims originally made by
the Bush Administration.
"We're extremely disappointed that we have to end our lawsuit, but
there is no point in continuing it if we're not going to obtain
information before ACTA is finalized," said EFF International Policy
Director Gwen Hinze. "There's a fundamental fairness issue at stake
here. It's now clear that the negotiating texts and background
documents for this trade agreement have been made available to
representatives of major media copyright owners and pharmaceutical
companies on the Industry Trade Advisory Committee on Intellectual
Property. Yet private citizens -- who stand to be greatly affected by
ACTA -- have had to rely on unofficial leaks for any substantive
information about the treaty and have had no opportunity for
meaningful input into the negotiation process. This can hardly be
described as transparent or balanced policy-making."
"Even though we have reluctantly dropped this lawsuit, we will
continue to press the U.S. Trade Representative and the Obama
Administration on the ACTA issues," said Public Knowledge Deputy Legal
Director Sherwin Siy. "The issues are too far-reaching and too
important to allow this important agreement to be negotiated behind
closed doors," he added.
Very little is known about ACTA, currently under negotiation between
the U.S. and more than a dozen other countries, other than that it is
not limited to anti-counterfeiting measures. Leaked documents indicate
that it could establish far-reaching customs regulations governing
searches over personal computers and iPods. Multi-national IP
corporations have publicly requested mandatory filtering of Internet
communications for potentially copyright-infringing material, as well
as the adoption of "Three Strikes" policies requiring the termination
of Internet access after repeat allegations of copyright infringement,
like the legislation recently invalidated in France. Last year, more
than 100 public interest organizations around the world called on ACTA
country negotiators to make the draft text available for public
comment.
EFF and Public Knowledge first filed suit against the Office of the
U.S. Trade Representative in September of 2008 demanding that
background documents on ACTA be disclosed under the Freedom of
Information Act (FOIA). Rather than pursuing a lawsuit with little
chance of forcing the disclosure of key ACTA documents, EFF and Public
Knowledge will devote their efforts to advocating for consumer
representation on the U.S. Industry Trade Advisory Committee on IP,
the creation of a civil society trade advisory committee, and greater
government transparency about what ACTA means for citizens.
For more on this case:
http://www.eff.org/cases/eff-and-public-knowledge-v-ustr
For more on ACTA:
http://www.eff.org/issues/acta
Contacts:
David Sobel
Senior Counsel
Electronic Frontier Foundation
so...@eff.org
Gwen Hinze
International Policy Director
Electronic Frontier Foundation
gw...@eff.org
Art Brodsky
Communications Director
Public Knowledge
abro...@publicknowledge.org
_______________________________________________
> http://toc.oreilly.com/2009/06/four-roles-for-publishers-stay.html
Four roles for publishers: staying relevant when you are no longer a
gatekeeper
By Andy Oram
June 17, 2009
Bookbuilders of Boston, a nonprofit membership organization for
publishing professionals, held a panel on June 11 about open
publishing. It attracted an unusually large number of attendees--about
60--revealing the curiosity its members have toward the potential
changes created by this movement.
I was one of the panelists, along with managers from MIT Press and
Harvard University Press. In addition to a discussion of the core
topic of open publishing--that is, distributing documents free of
charge, often under a license that permits free alteration and
distribution--I laid out a larger vision that places the publisher in
a context where contributors hold conversations online and share large
amounts of material freely among themselves. That vision is the center
of the following remarks.
When trade publishers are invited to speak, we seem to be expected to
follow a certain script. We must stress the importance of finding new
ways to distribute and market our material online. We have to point
out that only 15% of a book's cost goes to shipping and printing. We
champion the importance of supporting authors financially, shed a tear
or two for our sister industry, journalism, and so on.
When staff from O'Reilly Media are invited to speak, we defy
expectations by throwing out all of that stuff, talking instead about
the excitement exploring new technologies that can change people's
lives, about working together to educate each other, about how sharing
information in communities can help us all grow. This is the open
source movement in a nutshell, as it were.
Tonight I'll take a somewhat in-between position: I'll talk about
business models, but from the standpoint of open online content.
The bedrock principle in this environment is that the publisher is no
longer a gatekeeper. Anything can go online to be linked to, rated,
berated, or anything else people want to do with it. Since we are no
longer gatekeepers, publishers have to focus on how we add quality.
Sounds nice--but that puts us in a real quandary, because the elements
of quality we have seized on so proudly over the decades no longer
matter as much. We have to recognize the new environment we're in and
find new meaning for ourselves.
This is a classic application of the principles from The Innovator's
Dilemma, Clayton M. Christensen's book, where he talks
about changes caused by disruptive technologies. In our case,
disruptive social norms are just as important.
In many areas of publishing--including certainly my own, computer
books--there are enormous resources of free online material and
innumerable forums where individuals can quickly and conveniently post
their own observations. Much of the material can be edited and
redisplayed instantly, particularly on wikis. That is the context in
which we have to define the publisher's new roles.
I won't discuss marketing in this talk because I'm not a marketing
person and because the rules are changing so fast that I'm afraid of
making any predictions about what works. Focusing instead on content
production, I've divided the roles publishers play in adding quality
into four parts. For each one, I'll discuss how we're affected by the
presence of so much online material.
Proofing for grammar, syntax, and consistency of usage
Publishers spend a lot of time making documents look professional and
enforcing standards. We're obsessed with getting every comma and
semi-colon right, ensuring that capitalization is consistent, and so
on.
I think of this as a valuable contribution to quality. Sometimes someone
reading an article will stop and ask me, "Here's an abbreviation
spelled two different ways--does it refer to the same thing or two
different things?" And sometimes I'll read a sentence that's missing a
word, and have to go over it two or three times to see how the parts
fit together. Proofreading can resolve real problems in comprehension.
But many modern readers don't value proofreading, because it comes at
a cost. This cost, of course, is the extra time proofreading adds to
publication. The modern reader would rather have the document right
now, so he can get his tweet out before his colleague does. First
tweet wins.
Proofreading is also like cleaning the Augean stables. I've found
myself in the situation where I edit a whole book and get it looking
really professional, then find that someone goes in the files the next
day to make some updates--and there goes all my hard work.
But publishers can still offer professional proofreading. The time
this is useful is when an organization needs a professional looking
document--for instance, when it wants to print an online book in order
to show off the organization's capabilities to a potential client. In
the same situation where you take off your T-shirt and don a
pants-suit, you want a professional-looking text. And publishers may
be able to get revenue in such situations.
Fact-checking
A more significant contribution publishers make to quality is
fact-checking. Many newspapers and magazines hire staff to do it;
technical journals and book publishers such as O'Reilly pay outside
experts a few hundred or couple thousand dollars to perform the same
service.
Few authors and readers online hold the view expressed by a blogger in
last Sunday's New York Times who said, "Getting it right is expensive.
Getting it first is cheap." But there is an attitude among responsible
bloggers--which I adopt myself--that if you've gathered enough of the
facts to propound a valid opinion, you can go ahead and put the
opinion out for debate. If other people see errors or have evidence
that weakens your argument, they can cite them in comments. If you
write a wiki, they can edit it. In any case, you're encouraged to
express yourself so long as you're sure you're heading in the right
direction.
This approach is more limited than many of its adherents think,
though. In the computer field I work in, especially, a lot of online
participants hold to an essential philosophy of logical positivism.
They believe that if enough facts are brought to bear and enough
people comment, we will all converge on the truth. If this were the
case, most of the articles in Wikipedia would be perfect by now.
But of course this is not the case, because new information, new
opinions, new interpretations get added all the time, and with them
new errors are introduced as well.
So there may be a role for publishing professionals in fact checking.
It will probably not be a large part of our work, though, because in
the Internet age fact checking is a lot easier than it used to be.
Just don't rely on Wikipedia.
Editing unclear and ambiguous passages
This task is probably where publishers create the most value, and
where they can make some of their biggest contributions to Internet
content. I find it sad when I read a document by someone who is
clearly brilliant and knows his material well, and come across a
passage that doesn't make sense because no editor said, "You have to
work on this."
And every editor knows the work involved in making text comprehensible
by ripping up paragraphs, rearranging points in the proper order,
introducing connecting or transitional material, and even adding facts
that the author took for granted but that the editor knows have to be
explicitly told to the reader.
I've noticed that the give and take of modern online media compensates
even for poorly argued text. If someone doesn't understand a point,
she can just post a question. The author can come back to cover it in
more detail, and after a couple rounds of discussion they work out the
meaning. Other people can join in to offer explanations.
Still, I look at these exchanges and think, "A lot of people could
have saved a lot of time if someone had just edited the document." And
some projects are recognizing the value of having an expert eye look
over a document, something few amateurs know how to do or take the
time to do.
Integrating facets of a large-scale text
We all know the difference between reading an anthology of diverse
articles for different audiences, written from different points of
view in different tones of voice, and reading a 250-page book so well
integrated that you start on page 1 and can't put it down till you
reach the end. Achieving this quality is where publishers shine, and I
haven't found any process or mechanism in collaborative, online
document production that can carry it off.
But even this has diminished value in the Internet world, because
hardly anyone reads a 250-page book at once. No one has time. If we
read chunks of a few thousand words at a time, we could just as well
read documents the way they usually appear on the Internet: many small
contributions by different people scattered among different web sites.
(This very article, topping 1,500 words, is about as long a text as
most people would tolerate.)
That doesn't mean the problem of integration has disappeared; it has
just shifted. Now the public needs help finding their way among the
different documents. Hints are needed as to what to read first, where
to go when they encounter a new concept they need to learn, and how to
harmonize documents that use different terms or approach a problem
from different angles.
I think publishers can play a major role helping to organize content
culled from around the Internet. But the process is a lot different
from organizing material into a book. It requires new online tools
and a different type of interaction between experts and those tools. I
will leave you with a pointer to an article I wrote proposing some
tools, and another pointer to my collection of articles about
community educational efforts.
In summary, publishers still have roles to play when we are no longer
gatekeepers. But we have to renew our relevance in environments where
enormous amounts of information are put online by different
participants, with ample facilities for commenting and linking. These
new technologies and norms force us to look at every area where we
traditionally boast of adding quality, and to find new ways to apply
our skills.
--
Glenn "Channel6" Kerbein
United States Pirate Party
"Burn, Hollywood, Burn"
Free and Open-Source Software & Services
Welcome to the Open NYSenate
To pursue its commitment to transparency and openness, the New York
State Senate (http://nysenate.gov/) is undertaking a cutting-edge
program to not only release data, but help empower citizens and give
back to the community. Under this program the New York Senate will,
for the first time ever, give developers and other users direct access
to its data through APIs and release its original software to the
public. By placing the data and technological developments generated
by the Senate in the public domain, the New York Senate hopes to
invigorate, empower and engage citizens in policy creation and
dialogue.
What You'll Find Here
* Application Programming Interfaces (APIs) for building your own
applications and services
* Embeddable widgets for easy sharing on your site, profile or
blog
* Original Software such as Drupal Modules and Java libraries
* Data sets in a variety of formats, along with Plain Language and
graphical explanations of important documents and definitions
* The legal rules and licenses adopted by the Senate guaranteeing
that the information and tools here can be used freely
Please take the data and tools offered here, mash them up, improve
them and re-distribute them to help the Senate educate and engage the
citizens of New York.
APIs
The New York Senate has created a developer API to help organizations
and individuals compile the Senate data they want the way they want.
The basic goal was to parse the flat file records from the Legislative
Retrieval System (LRS) (http://public.leginfo.state.ny.us/menuf.cgi)
into an open format. This data can then be queried via a simple
RESTful API to produce output in any number of desired formats or
standards - XML (RSS, ATOM, custom schemas), JSON, CSV, HTML
(widgets). A minimal client sketch follows the links below.
* Browse the Open Leg Service (http://open.nysenate.gov/openleg)
* View the Developer API documentation
(http://open.nysenate.gov/openleg/doc)
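A minimal client sketch in Python, using only the standard library. The
query path below is a hypothetical placeholder; the real routes,
parameters and output formats are in the Developer API documentation
linked above.

    import json
    import urllib.request

    BASE = "http://open.nysenate.gov/openleg"

    def fetch_json(path):
        # Assumes a route that emits JSON; the API also offers XML
        # (RSS, ATOM, custom schemas), CSV and HTML widget output.
        with urllib.request.urlopen(BASE + path) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # Hypothetical usage -- substitute a real route from the API docs:
    # results = fetch_json("/api/example-search?term=broadband")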
Embeddable Widgets
Widgets are a fun and easy way to reach people with important
information. Please view our widget library and help yourself to the
tools that will enhance the experience of your users, and ours, all
over the web. If you have a widget of your own using our API or data,
please contact us and we may feature it here.
*coming soon*
Original Software
As a user of Open-Source software the New York Senate wants to help
give back to the community that has given it so much - including this
website. To meet its needs the Senate is constantly developing new
code and fixing existing bugs. Not only does the Senate recognize that
it has a responsibility to give back to the Open Source community, but
public developments, made with public money, should be public.
You can find all the NY Senate source code published on Github at
http://github.com/nysenatecio
Data Sets
The New York Senate's Open Data page
(http://www.nysenate.gov/open-data) is the official repository of all
government data. There you can browse through data produced by and
considered by the Senate in their original forms as well as various
other file types created for your convenience, including but not
limited to Excel spreadsheets, .csv, text files and PDFs. To
supplement the source data it is making available, the Senate has also
created the Plain Language Initiative designed to help explain complex
data sets and legal terms in plain language.
* Browse Open Data (http://www.nysenate.gov/open-data)
* Browse Plain Language Initiative
(http://www.nymtasolutions.org/2009/)
Open-Source Software & Software Licenses
In order to make the Senate's information and software as public as
possible, it has adopted a unique system using two types of licenses
- GNU General Public License (http://www.gnu.org/copyleft/gpl.html) as
well as the BSD License
(http://www.opensource.org/licenses/bsd-license.php). This system is
meant to ensure the most public license is used in each specific case
such that:
(i) Any Software released containing components with preexisting GPL
copyrights must be released pursuant to a GPL v3 copyright
restriction.
(ii) Any Software created independently by the Senate without any
preexisting licensing restrictions on any of its components shall be
released under dual licensing and take one of two forms: (a) a BSD
license, or (b) a GPL v3 license. The ultimate user of such Software
shall choose which form of licensing makes the most sense for his or
her project.
(iii) Regarding Software containing preexisting copyright restrictions
other than GPL, the CIO shall make the determination how he or she
wishes to release such Software.
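(Read as a decision procedure, rules (i)-(iii) amount to a small
function. A toy Python encoding, purely illustrative:)

    def senate_license(preexisting):
        # preexisting: set of license tags found on the software's
        # components, e.g. {"GPL"} or an empty set
        if "GPL" in preexisting:
            return ["GPLv3"]             # rule (i)
        if not preexisting:
            return ["BSD", "GPLv3"]      # rule (ii): the user picks one
        return ["CIO decides"]           # rule (iii)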
To a certain very great extent, I believe this is really "the cat out of
the bag" for this area of concern: free software and "content" in
government. As I repeat ad nauseam, the nature of information will do
the rest and it's only repression that can stop it (as it has been since
the 80's). All we needed was for it to be registered in an official,
public legislative venue. Now we can fight for the rest with near
absolute certainty of success just by standing on principle.
Seth
This literally went down the Friday before the contretemps broke out
upstate. The only question is to what extent the fabulous enlightened
sponsorship that produced this will continue under whatever develops out
of that situation. We're very fortunate that this came out, because it
sets the right stage and even if the promise here is pulled back in some
way, the right discussion has now arisen in the right venue (for the
first time in a legislature), discussion of this key area of concern, on
the right terms, that we've needed to occur somewhere, somehow, since
the 80's.
noel
On 18 Jun 2009, at 20:23, Seth Johnson wrote:
> The only question is to what extent the fabulous enlightened
> sponsorship that produced this will continue under whatever develops
> out
> of that situation.
DOJ Opens Review of Telecom Industry
By AMOL SHARMA
JULY 6, 2009, 12:42 P.M. ET
The Department of Justice has begun an initial review to determine
whether large U.S. telecom companies such as AT&T Inc. and Verizon
Communications Inc. have abused the market power they've amassed in
recent years, according to people familiar with the matter.
The review of potential anti-competitive practices is in its very
early stages, and it isn't a formal investigation of any specific
company at this point, the people said. It isn't clear whether the
agency intends to launch an official inquiry.
Among the areas the Justice Department could explore is whether
wireless carriers are hurting smaller competitors by locking up
popular phones through exclusive agreements with handset makers,
according to the people. In recent weeks lawmakers and regulators have
raised questions about deals such as AT&T's exclusive right to provide
service for Apple Inc.'s popular iPhone in the U.S.
The Justice Department may also review whether telecom carriers are
unduly restricting the types of services other companies can offer on
their networks, one person familiar with the situation said.
The scrutiny of the telecom industry is an indication of the Obama
administration's aggressive stance on antitrust enforcement. The
Justice Department's antitrust chief, Christine Varney, has said she
wants to reassert the government's role in policing monopolistic and
anti-competitive practices by powerful companies.
The statute that governs such behavior – the Sherman Antitrust Act –
was used by the government in cases against giants ranging from
Standard Oil to Microsoft Corp. But it lay essentially dormant during
the Bush years, with the agency bringing no major case.
Now Ms. Varney plans to revive that area of U.S. law, and the telecom
industry is among several sectors – including health care and
agriculture – that are coming under scrutiny, the people familiar with
the matter said. She is already tackling one high-tech area by
investigating Google Inc.'s settlement with authors and publishers
over its Book Search product.
Through a spasm of consolidation and organic growth, AT&T and Verizon
have become the two dominant players in telecommunications, with the
largest networks and major clout over equipment makers. Combined, they
control 90 million landline customers and 60% of the 270 million U.S.
wireless subscribers. They also operate large portions of the Internet
backbone, ferrying data across the country and overseas.
A Justice Department spokeswoman declined to comment.
Some antitrust experts said the government would have a tough time
opening a Sherman Act case against telecom providers if it chooses to
do so. To bring a case, the government must show that a company is
abusing its market power.
"It would be a very hard case to make," said Donald Russell, a
Washington attorney who reviewed a number of major telecom mergers as
a DOJ antitrust lawyer in the Clinton Administration. "You don't have
any firm that's in a dominant position. Usually, you need to show a
firm has real market power."
Write to Amol Sharma at amol....@wsj.com
> http://www.ft.com/cms/s/0/87c523a4-6b18-11de-861d-00144feabdc0.html
Copyright laws threaten our online freedom
By Christian Engström
Published: July 7 2009 18:10
If you search for Elvis Presley in Wikipedia, you will find a lot of
text and a few pictures that have been cleared for distribution. But
you will find no music and no film clips, due to copyright
restrictions. What we think of as our common cultural heritage is not
“ours” at all.
On MySpace and YouTube, creative people post audio and video remixes
for others to enjoy, until they are replaced by take-down notices
handed out by big film and record companies. Technology opens up
possibilities; copyright law shuts them down.
This was never the intent. Copyright was meant to encourage culture,
not restrict it. This is reason enough for reform. But the current
regime has even more damaging effects. In order to uphold copyright
laws, governments are beginning to restrict our right to communicate
with each other in private, without being monitored.
File-sharing occurs whenever one individual sends a file to another.
The only way to even try to limit this process is to monitor all
communication between ordinary people. Despite the crackdown on
Napster, Kazaa and other peer-to-peer services over the past decade,
the volume of file-sharing has grown exponentially. Even if the
authorities closed down all other possibilities, people could still
send copyrighted files as attachments to e-mails or through private
networks. If people start doing that, should we give the government
the right to monitor all mail and all encrypted networks? Whenever
there are ways of communicating in private, they will be used to share
copyrighted material. If you want to stop people doing this, you must
remove the right to communicate in private. There is no other option.
Society has to make a choice.
The world is at a crossroads. The internet and new information
technologies are so powerful that no matter what we do, society will
change. But the direction has not been decided.
The technology could be used to create a Big Brother society beyond
our nightmares, where governments and corporations monitor every
detail of our lives. In the former East Germany, the government needed
tens of thousands of employees to keep track of the citizens using
typewriters, pencils and index cards. Today a computer can do the same
thing a million times faster, at the push of a button. There are many
politicians who want to push that button.
The same technology could instead be used to create a society that
embraces spontaneity, collaboration and diversity. Where the citizens
are no longer passive consumers being fed information and culture
through one-way media, but are instead active participants
collaborating on a journey into the future.
The internet is still in its infancy, but already we see fantastic
things appearing as if by magic. Take Linux, the free computer
operating system, or Wikipedia, the free encyclopedia. Witness the
participatory culture of MySpace and YouTube, or the growth of the
Pirate Bay, which makes the world’s culture easily available to
anybody with an internet connection. But where technology opens up new
possibilities, our intellectual property laws do their best to
restrict them. Linux is held back by patents, the rest of the examples
by copyright.
The public increasingly recognises the need for reform. That was why
Piratpartiet – the Pirate party – won 7.1 per cent of the popular vote
in Sweden in the European Union elections. This gave us a seat in the
European parliament for the first time.
Our manifesto is to reform copyright laws and gradually abolish the
patent system. We oppose mass surveillance and censorship on the net,
as in the rest of society. We want to make the EU more democratic and
transparent. This is our entire platform.
We intend to devote all our time and energy to protecting the
fundamental civil liberties on the net and elsewhere. Seven per cent
of Swedish voters agreed with us that it makes sense to put other
political differences aside in order to ensure this.
Political decisions taken over the next five years are likely to set
the course we take into the information society, and will affect the
lives of millions for many years into the future. Will we let our
fears lead us towards a dystopian Big Brother state, or will we have
the courage and wisdom to choose an exciting future in a free and open
society?
The information revolution is happening here and now. It is up to us
to decide what future we want.
The writer is the Pirate party’s member of the European parliament.
(Others in other cities, get ready)
This is the rollout for the new Global Top Level Domains. It's the
supposed public input phase. But it's also about putting in place a
massive, global change in trademark policy.
Get this -- the group that put this plan together (the "Implementation
Recommendation Team" or IRT) has already closed shop before initiating
these meetings -- so what's the point?
Domain names don't match up with trademark law -- DNS is about giving
symbols one universal address. Language is not. You don't trademark
"Apple" -- you reserve the use of that trademark to market a
particular kind of goods or service. Thus we have Apple Computers and
The Beatles' Apple music company. Or Sun Oil in Canada, a completely
separate company from Sun Oil in America -- and certainly not the same
as the Sun computing company. There's also fair use -- and of course
free speech.
The MPAA and the International Trademark Association have had a hand
in ICANN from its
inception, when they required the Uniform Dispute Resolution Policy.
Now, along with rolling out new global Top Level Domains, trademark
owners are ramming through a new process that goes well beyond that.
They are pulling out the stops to get ICANN to implement what will in
practical terms amount to a huge revision in the nature of trademark,
backed by strong practical action. Along with a new "Uniform Rapid
Suspension System" to shut down sites quickly, they are putting ICANN
in the role of policing trademarks -- which by law is the trademark
holders' responsibility.
The thing to remember is that while domain names and trademarks might
be hard to get a handle on politically, this sets a huge precedent
that will change trademark beyond that area. So we call them on their
process.
(Among other things, this will mean no more Yes Men. :-) )
Kathy Kleiman of the ICANN "Noncommercial Users Constituency" will be
able to brief you more fully. She can also explain what went down in
the previous discussions, where they've essentially ignored all the
substantive points she presented. It's up to us to come in in numbers
and say we got their number.
See below blurb from Kathy.
Seth
ICANN Public Consultation: Should New Top Level Domains Include Broad
New Trademark Protections?
On Mon, July 13, the Internet Corporation for Assigned Names and
Numbers (ICANN) will hold a public consultation at the Hudson
Theatre, Millennium Hotel, 145 West 44th Street, to discuss the
"rules of the road" for new generic top level domains (gTLDs), future
competitors to .COM, .ORG and .NET.
A group of trademark attorneys, representing large brand owners, in
May wrote a report calling on ICANN to create broad new trademark
protections before opening up new gTLDs.
A. IP Clearinghouse: a massive database of registered and
   unregistered trademark rights created by ICANN (IRT
   Report, pp. 12-16)
B. Globally Protected Marks List: a list of global marks
   created and maintained by ICANN (IRT Report, pp. 16-22)
C. Uniform Rapid Suspension System (URS): an ultra-fast
   takedown service with little notice or time to respond
   by domain name registrants (IRT Report, pp. 25-37)
These proposals have been criticized as outside the mission and scope
of ICANN, a technical body, and outside the protections and limits
of trademark law. ICANN's Noncommercial Users Constituency writes "We
fear the impact of the IRT Proposals on free speech and fair use
online. Trademark owners don't own strings of letters, they have a
trademark for specific goods and services. Basic words like APPLE,
TIDE, SUN and TIME belong to all of us. Many important domain names
will be lost, or worse, blocked before they can be registered."
Approval of the IRT Report is being rushed through ICANN with minimal
opportunity to comment. It is vital that ICANN hear comment as soon
as possible, and Monday is an opportunity to speak.
ICANN's Noncommercial Users Constituency will be hosting a breakfast
at the Millennium Hotel on Monday morning. Please contact NCUC
Co-Founder Kathy Kleiman, ka...@kathykleiman.com
<mailto:ka...@kathykleiman.com>, for more details.
Registration to speak on 7/13 at this link (deadline 7/10):
http://www.registration123.com/ICANN/GTLD/
IRT Report:
http://www.icann.org/en/announcements/announcement-4-29may09-en.htm
IP Justice Comments:
http://forum.icann.org/lists/irt-final-report/msg00210.html
EFF Australia Comments:
http://forum.icann.org/lists/irt-final-report/msg00179.html
Noncommercial Users Constituency Website with comments:
http://icann-ncuc.ning.com/
> http://www.poclad.org/media/SenJudiciaryReSotomayor.pdf
Program on Corporations, Law & Democracy
Instigating democratic conversations and actions that contest the
authority of corporations to govern
Box 246 S. Yarmouth, MA 02664
www.poclad.org
people@...
508-398-1145
OPEN LETTER TO MEMBERS OF THE US SENATE JUDICIARY COMMITTEE
from the Program on Corporations, Law and Democracy (POCLAD)
July 14, 2009
Dear US Senate Judiciary Committee Members,
The Program on Corporations, Law and Democracy (POCLAD) calls on you
to continue your questioning of US Supreme Court nominee Sonia
Sotomayor. Judge Sotomayor's position on the larger issue of this
nation's democracy, trampled by the rights and powers of corporations
to govern, has so far been left untouched and unexplored in Senate
confirmation hearings.
The vast majority of non-criminal cases to be brought before the nine
robed ones of the Supreme Court in the next few years will relate to
matters of corporate "rights," protections, and dominance and their
impact on the rights of human beings in this so-called democracy. It
is appropriate, therefore, that questions be asked concerning the
doctrines of corporate autonomy and authority that insulate these
collections of capital and property from control by the people and
their legislatures - a control that existed at one time in this
nation.
Have the judiciary's efforts over the last 200 years to find
corporations within the US Constitution and bestow constitutional
"rights" upon them been so successful that current lawmakers fail even
to question this undemocratic and illegitimate reality? Indeed, for two
centuries Supreme Court justices, the closest institution we have to
Kings and Queens, have been at the center of affirming and expanding
corporate rule and placing corporations well beyond the authority of
the people. We hope you do not concur with this history and its
consequences.
We hope the questions on the following page are asked of nominee
Sotomayor during her Senate hearings. Only after she responds to these
concerns and her answers are promptly made available to the general
public and to all U.S. Senators should voting on her confirmation
occur. It
should be noted that these questions were the same that we requested
be put to Judge Samuel Alito during his January, 2006 confirmation
hearings. To our knowledge, none of them were asked.
The appointment for life of a person who will assume a position of
vast and seemingly ever growing power in our society demands an
exhaustive review of every issue area that he/she is likely to address
in the high court. Corporate constitutional rights and their impact on
our rights as self-governing human beings certainly qualify as one
such area of questioning. This decision is of the utmost importance to
the fate of the country.
Respectfully,
The Program on Corporations, Law and Democracy
Attachments:
Questions for Supreme Court Justice Nominee Sonia Sotomayor
Quotes from Previous Supreme Court Decisions and Justices on
Corporations
Questions for Supreme Court Nominee Sonia Sotomayor
First a bit of background. In a 1978 case, First National Bank of
Boston v. Bellotti, the Supreme Court decided, 5 to 4, that business
corporations -- just like flesh-and-blood people like you and me --
have a First Amendment right to spend their money to influence
elections.
Chief Justice William H. Rehnquist dissented. "It might reasonably be
concluded," he wrote, "that those properties, so beneficial in the
economic sphere, pose special dangers in the political sphere." The
late Chief Justice went on to write: "Furthermore, it might be argued
that liberties of political expression are not at all necessary to
effectuate the purposes for which States permit commercial
corporations to exist."
-- Do you believe that corporate money in our elections poses "special
dangers in the political sphere"?
-- Do you believe "that liberties of political expression" are
necessary "to effectuate the purposes for which States permit
commercial corporations to exist"?
-- Do you believe that money is speech? Or is it property?
In 1886, only eighteen years after the people ratified the Fourteenth
Amendment, the Supreme Court had before it Santa Clara County v.
Southern Pacific Railroad. The issue was whether the Amendment's
guarantee of equal protection barred California from taxing property
owned by a corporation differently from property owned by a human
being. Chief Justice Morrison Waite disposed of it with a bolt-from-
the-blue pronouncement: "The Court does not wish to hear argument on
the question whether the provision in the Fourteenth Amendment to the
Constitution, which forbids a state to deny any person the equal
protection of the laws, applies to these corporations. We are all of
the opinion that it does." The conferring of Fourteenth Amendment
rights on the corporate form appeared in a clerk's headnote to the
case.
-- How would you characterize the Court's refusal to hear argument in
a momentous case before deciding it?
-- Was the "person" whose basic rights the framers and the people
sought to protect through the 14th amendment to the Constitution the
newly freed slave?
-- Was the "person" a corporation?
-- Is a corporation a person "born or naturalized in the United
States"?
-- In proclaiming a paper entity to be a person, was the court
faithful to the intent of the framers of the Amendment and to the
intent of the people who ratified it?
-- Would you describe the court's action in Santa Clara as
conservative? As radical? As open-minded?
-- Would you characterize the Court's Santa Clara action as being an
example of judicial activism?
---
Quotes from Previous Supreme Court Decisions and Justices on
Corporations
Standard Oil of New Jersey v. United States, 221 U.S. 1 (1911):
All who recall the condition of the country in 1890 will remember that
there was everywhere, among the people generally, a deep feeling of
unrest. The nation had been rid of human slavery-- fortunately, as
all now feel--but the conviction was universal that the country was
in real danger from another kind of slavery sought to be fastened on
the American people: namely, the slavery that would result from
aggregations of capital in the hands of a few individuals and
corporations controlling, for their own profit and advantage
exclusively, the entire business of the country, including the
production and sale of the necessities of life.
Liggett Co. v. Lee 288 U.S. 517 (1933) (dissent by Justice Brandeis):
The prevalence of the corporation in America has led men of this
generation to act, at times, as if the privilege of doing business in
corporate form were inherent in the citizen; and has led them to
accept the evils attendant upon the free and unrestricted use of the
corporate mechanism as if these evils were the inescapable price of
civilized life and, hence, to be borne with resignation. Throughout
the greater part of our history, a different view prevailed. Although
the value of this instrumentality in commerce and industry was fully
recognized, incorporation for business was commonly denied long after
it had been freely granted for religious, educational and charitable
purposes. It was denied because of fear. Fear of encroachment upon
the liberties and opportunities of the individual. Fear of the
subjection of labor to capital. Fear of monopoly. Fear that the
absorption of capital by corporations, and their perpetual life, might
bring evils. . . There was a sense of some insidious menace inherent
in large aggregations of capital, particularly when held by
corporations.
Justice Brandeis warned ominously of the threat to democracy that
justifies sovereign control of corporations:
Able and discerning scholars have pictured for us the economic and
social results of thus removing all limitations upon the size and
activities of business corporations and of vesting in their managers
vast powers once exercised by stockholders--results not designed by
the states and long unsuspected. . . . Through size, corporations,
once merely an efficient tool employed by individuals in the conduct
of private business, have become an institution--an institution which
has brought such concentration of economic power that so-called
private corporations are sometimes able to dominate the state. The
typical business corporation of the last century, owned by a small
group of individuals, managed by their owners, and limited in size by
their personal wealth, is being supplanted by huge concerns in which
the lives of tens or hundreds of thousands of employees and the
property of tens or hundreds of thousands of investors are subjected,
through the corporate mechanism, to the control of a few men.
Ownership has been separated from control; and this separation has
removed many of the checks which formerly operated to curb the misuse
of wealth and power. And as ownership of the shares is becoming
continually more dispersed, the power which formerly accompanied
ownership is becoming increasingly concentrated in the hands of a few.
The changes thereby wrought in the lives of the workers, of the
owners and of the general public, are so fundamental and far-reaching
as to lead these scholars to compare the evolving "corporate system"
with the feudal system; and to lead other men of insight and
experience to assert that this "master institution of civilized life"
is committing it to the rule of a plutocracy. Liggett, pp. 564-565.
First National Bank of Boston v. Bellotti, 435 U.S. 765 (1978)
Dissents by Justices White, Brennan, Marshall
...the special status of corporations has placed them in a position to
control vast amounts of economic power which may, if not regulated,
dominate not only our economy but the very heart of our democracy, the
electoral process... The State need not allow its own creation to
consume it.
Opening statements in Sony v. Tenenbaum
BY MARC BOURGEOIS
posted by Marc W. Bourgeois @ 7/28/2009 06:17:00 PM
The second day of Sony v. Tenenbaum began as promised with the opening
statements of Plaintiffs followed by Defendant.
Mr. Reynolds for the Plaintiff began his opening by describing the
nature of the recording companies, stating that they are made up of
real people who work to record and distribute music for the public to
enjoy. He stated that his clients face a significant threat to their
livelihood from copyright infringement on the internet. He stated his
intention to show that Defendant had downloaded and distributed
thousands of song files, all on the internet for free. He stated that
while the infringement was massive his clients in the case were only
focusing on thirty of these songs. He stated that these songs were
distributed to millions of people without their permission with the
KaZaA file sharing application. He described in basic terms what file
sharing was, stating that it was sharing files with strangers that the
Defendant did not know and described how the KaZaA application was
downloaded and installed to a computer. He described the process of
searching for song files and finding those available for downloading.
Mr. Reynolds stated that his clients hired MediaSentry, and that on
August 10th, 2004, MediaSentry was searching for files as any other
user would do. They then discovered a user with the username of
sublimeguy14@KaZaA who had over 800 song files on his computer, and
that some of these files were distributed to MediaSentry. He stated
his clients listened to these files and verified that they were in
fact sound files of songs that his clients sell. He then stated that
evidence would not be presented of distribution other than to
MediaSentry, because the KaZaA application does not keep log files,
and is designed so that no one else can see what is happening when
these files are distributed on the internet. He stated however that
they know other distributions took place because that was the entire
purpose of the KaZaA application. He then described metadata in the
files from which MediaSentry was able to infer other transfers, for
two reasons: first, the metadata showed evidence that these files were
downloaded from the internet, and second, the data packets shown would
show an IP address that identifies a specific device on the internet.
He then described that while they knew this information, they still
did not know the identity of an anonymous sublimeguy14@KaZaA. He then
described the process of locating a subscriber, J. Tenenbaum, via a
subpoena on an Internet Service Provider, Cox Communications. Mr.
Reynolds then proceeded to describe other evidence that would be shown
by witnesses, such as the name sublimeguy14 being used by the
Defendant for other things, and that the Defendant would admit that
the KaZaA shared folder that was found was the one that he set up. He
then said that Defendant had attempted to blame others, including
other family members and friends, when contacted for settlement. He
then stated that they would show evidence of a computer investigated
by Plaintiffs that had over 2000 music files on it and other file
sharing software installed. Mr. Reynolds wrapped up the statement by
stating that the jury would hear that Defendant knew what he was doing
and knew that it was illegal, and would hear about the harm this type
of activity causes the music industry. He asked the jury to hold
Defendant responsible for his actions.
Professor Nesson then began his opening statement for the Defendant by
stating that this story began long before 2004; it began in 1999 when
Napster was created. Plaintiffs had great success in the years between
the advent of the Compact Disc and when file sharing came into
popularity with Napster. He then described that before the internet
the process of stealing music would likely involve stealing physical
goods from a physical retailer, but now it was something that could be
done in someone's own bedroom via the internet. Professor Nesson
described Joel's background as a high school student around the time
that Napster came into existence and described a summary of his life
thereafter, going to college and eventually enrolling in a PhD program
at Boston University. He told the jury that they would hear from
Joel's family. He described the Plaintiffs' business model as a cube
of styrofoam that was breaking up in the new world of bits on the
internet. He described the Plaintiffs as having a problem and needing
a new business model in light of the new technologies that had
developed on the internet.
Professor Nesson said that Mr. Reynolds was attempting to portray Joel
as someone who ducked away from his responsibility, and described the
process that the case had put Joel through, with multiple depositions
and other disruptions to his life. Professor Nesson held up a poster
of the Necker Cube and asked the jury to look at it: despite being a
two-dimensional object it is usually seen as a three-dimensional cube,
but many people can see it in two ways. If you see the cube in one
form for a while and stare at it, often the cube will appear in a
different perspective. He likened this to the situation Plaintiffs
were attempting to place Joel in, that his actions could be seen in
two different ways. He asked the jury to see the case from Joel's
point of view, and stated that Joel did not have the burden of proof.
He asked the jury to recognize the impediments Joel had gone through
to reach them and to allow them to see his point of view. Professor
Nesson stated that no profit was sought by Joel, and that Joel was not
part of any criminal syndicate. He then began describing the
litigation the recording industry engaged in, starting with their
suits against Napster and Grokster. At this point the Plaintiffs
objected to what Professor Nesson was attempting to describe, and
their objections were sustained.
He then said that the campaign got to the point where they couldn’t go
after the services any longer, and they needed to begin litigation
against individuals, and that this is where the lawsuit has its
origins. He described the case as about 30 songs in two categories:
the songs first learned of in August 2007, a list of seven that was
later reduced to five, and an additional 25. He asked the jury to
focus on the difference between the two categories and to find whether
Joel infringed on each one. He asked the jury, if they did get to the
point of determining damages, to award damages that are just. He asked
that if the jury finds a violation, they find it to be a minor
violation. He stated that if Joel did violate any laws, the violation
was part of the behavior of the generation of which Joel is a member.
At this point Professor Nesson is reminded he is running out of time
for his opening statement and concludes his statement by thanking the
jury for their time.
Witnesses begin with Wade Leak of Sony.
[Ed. note. I'm definitely going to be sick. -R.B.]
---
> http://recordingindustryvspeople.blogspot.com/2009/07/witnesses-in-day-two-of-sony-v.html
Witnesses in day two of Sony v. Tenenbaum
BY MARC BOURGEOIS
posted by Marc W. Bourgeois @ 7/28/2009 07:04:00 PM
Wade Leak
Wade Leak of Sony BMG Music Entertainment began by describing what the
record companies do. They find new music, work with artists to match
those artists with songwriters and producers, and he described the
basic process of working with an artist to produce an album. He stated
that the record companies' primary source of revenue is the sale of
record albums and online sales of the tracks that they produce.
He stated that he is familiar with the songs Sony and Arista are suing
over in the case. He identified three songs that MediaSentry
downloaded and four additional songs whose copyrights were owned by
his companies in this case. He stated that Sony registered all the
copyrights of these recordings and described the content of the
certified copy of the copyright registration from the copyright
office. He stated that Sony has the exclusive right to these songs and
that they were sold in albums and also sold digitally.
He then described that MediaSentry was hired to gather evidence of
online infringement and that MediaSentry found a user,
sublimeguy14@KaZaA distributing these songs. MediaSentry downloaded
all three of the songs he initially identified; he listened to these
tracks and determined that they were identical to the songs that are
sold by his companies. He described the process of using a John Doe
suit to obtain the subscriber information for the IP address
MediaSentry identified from Cox Communications and sent a letter to J.
Tenenbaum to put him on notice of a copyright infringement claim. He
then described the screenshots of sublimeguy14@KaZaA's shared folder
and identified many works that are owned by Sony that they are not
pursuing claims on in this case. When asked why they were not pursuing
claims on all these files he stated that they were pursuing claims on
a "reasonable" number of songs. He stated that he wanted fans to buy
his companies' music, and that copyright is instrumental in making
this happen.
He was asked why they were suing individuals in this manner. He
described their initial attempts to go after file sharing services, as
well as PR efforts that the recording industry attempted. Eventually
they decided to go after individuals engaging in file sharing because
they had no choice. He stated that he wanted people to love music, but
he also wanted them to pay for it. He likened the activity to
shoplifting, but in the digital universe. He stated that they do not
make money from these cases, because their expenses exceed any
settlements they get through them. He said that the reduced revenue
due to lost sales has led to numerous job losses at Sony. He stated
that many people feel file sharing is a victimless offense, but the
victims are those at Sony who have lost their jobs in recent years. He
stated that Sony is seeking statutory damages in this case and does
not have a number in mind of damages they would like to see awarded.
On cross examination Professor Nesson asked Mr. Leak about how they
hired MediaSentry and how they coordinated with the RIAA, and again
asked about the issue of money in these cases. Mr. Leak repeated that
their expenses exceed any settlements they receive and that the goal
of the campaign is education. He stated that most people settle these
claims before there is even a suit. Professor Nesson then asked why
Sony did not sue on all the files they saw in the shared folder, and
Mr. Leak again repeated the intent to pursue a reasonable number. He
did state that each was a willful infringement, and that they could
have sued on many more songs.
Professor Nesson then focused on the issue of revenue. He focused on
numbers from several albums showing, as a general trend across all of
the Sony revenue information Mr. Leak was asked to examine, a much
greater amount of gross revenue from album sales than from sales of
digital tracks. Mr. Leak was asked to describe the digital services
that were available in 2004.
He was also asked to describe the difference between the songs listed
on the first exhibit of five and the other twenty-five identified,
after which Professor Nesson brought up the issue of spoofed songs
being available on file sharing networks. He asked if the songs that
were not fully downloaded could be so-called "spoof" songs put out on
file-sharing services to discourage people from using them. Mr. Leak
stated that their spoofing efforts were focused only on front-line
releases, and that they would not have been directed at these tracks
because they were all catalog tracks. Mr. Leak stated that each of the
songs in the shared folder represents a displaced sale and that the
shared folder was evidence that these files were available free to
potentially millions of people. He then again described, as he had
attempted to under direct examination before objections cut him off,
the difficulty of competing in a marketplace where music is available
for free, likening it to being in the business of selling televisions
while a truck pulls up outside your store and begins giving away
televisions for free.
Mr. Nesson then attempted to proceed down a line of questioning
regarding Sony's ownership of Michael Jackson copyrights, which was
quickly shut down by Plaintiffs' sustained objections to the
questions. He then asked about the labels no longer initiating new
cases. Mr. Leak stated that he was not involved in the decision not to
pursue new cases, but stated that they were still continuing with
cases that had already begun. He also stated that they reserve the
right to start new cases at any time.
Professor Nesson's questioning then wrapped up by asking what damages
Mr. Leak thought were appropriate; the answer was simply that he
wanted an award relative to the Defendant's culpability, and that the
Defendant's activity showed a blatant disregard for copyrights.
On redirect Mr. Oppenheim returned to the issue of revenue. Mr. Leak
described the life cycle of a track and described different events
that could cause a boost in sales at various times, such as the track
being used in a movie or television show or a greatest hits album
being released, which would explain some of the variations in the
revenue numbers that were shown earlier.
He then described how the lower amounts shown for digital sales were
in part due to piracy, in part due to digital being a new technology,
and in part due to the figures covering only the specific tracks being
sold, not full albums.
Chris Connelly
Mr. Connelly identified himself as an employee of
MediaSentry/MediaDefender. He described his work as protecting the
copyrights of his clients: specifically, in cases such as these,
searching peer-to-peer networks as any other user would for their
clients' copyrighted works. He described the process of installing
KaZaA from KaZaA.com and the initial configuration process where the
user self-selects a username and sets up a shared folder. He described
the process of searching for files, selecting them, and downloading
them. He described their process as something any other user would do,
with the exception that they collect evidence of what is done, such as
the packets that are transferred between MediaSentry and KaZaA users
and the collection of screenshots produced by their process. He also
testified that their process had a 'zero-error rate', meaning they had
no examples of cases where the data they collected turned out to be
erroneous.
He then described the evidence that they found, such as the
screenshots of the sublimeguy14@KaZaA shared folder. He described the
user log that they created which showed the meta-data they were able
to transfer from over 800 files in this shared folder. He also
described the data log showing packets between a Cox Communications IP
address and MediaSentry. He was shown many pages of these logs showing
mp3 files, kpl files, and metadata collected about them. He testified
that most of these files most likely did not come from ripped CDs, due
to disparities in metadata format, varying bitrates, et cetera, which
indicated that they most likely came from different originating
sources throughout the internet. One part of the data log showed a
portion where the sublimeguy14@KaZaA computer did not respond to
several requests, which he described as "most likely because the
computer was busy", with the requested file then starting to download
from a different PC. He described this process as part of the way
KaZaA worked. He did testify that he had no evidence of other
transfers between sublimeguy14@KaZaA and any other party, because
peer-to-peer software does not show these activities taking place.
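As a rough illustration of the kind of metadata-disparity check Mr.
Connelly described, here is a minimal sketch. It assumes the
third-party mutagen library; the folder path and the heuristic itself
are illustrative, not MediaSentry's actual tooling.

    from pathlib import Path
    from mutagen.mp3 import MP3

    def survey(folder):
        # Collect each MP3's bitrate and its ID3 "encoding settings"
        # (TSSE) frame; wide variation is consistent with files gathered
        # from many sources rather than ripped from one CD collection.
        bitrates, encoders = set(), set()
        for path in Path(folder).glob("*.mp3"):
            audio = MP3(path)
            bitrates.add(audio.info.bitrate)
            tsse = audio.tags.get("TSSE") if audio.tags else None
            encoders.add(str(tsse) if tsse else "unknown")
        return bitrates, encoders

    bitrates, encoders = survey("shared_folder")  # hypothetical path
    print(len(bitrates), "distinct bitrates,", len(encoders), "distinct encoders")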
On cross-examination he again admitted he had no evidence of any other
transfers, and Professor Nesson focused on some tracks whose metadata
indicated they were ripped by someone named "havok". He asked if Mr.
Connelly had ever seen any songs indicating they came from
sublimeguy14 in any other case, to which he indicated he had not, but
that since none of the metadata from this shared folder had that name
in it, even if he had seen files that came from this shared folder in
any other case they would not contain that name.
The questioning then turned to the issue of impact. Professor Nesson
went back to the multi-source downloading testimony and asked whether,
if someone had attempted to download the songs and sublimeguy14's
computer failed to provide them, this would likely have been an
impediment to anyone else receiving the files. Mr. Connelly stated
that other users probably could have received the files from other
sources if sublimeguy14's computer did not provide them. Professor
Nesson then suggested that whether or not Joel shared didn't change
the picture much, given that so many users are online with KaZaA at
any given time.
Professor Nesson then went to the issue of distribution. He described
distribution as a word that has an active component, as in "a
distributor". He asked what Joel would actively have had to do to
distribute the files after they were downloaded to a shared folder.
Mr. Connelly stated that nothing needed to be done; when asked whether
it was someone else who had to actively request the files in the
shared folder after Joel "left them there", Mr. Connelly agreed.
Mark Matteo
Mr. Matteo works for Cox Communications and stated he had no relation
to Plaintiffs in the case. He stated that his group at Cox was
involved in the subpoena served on this case requesting subscriber
information for specific IP addresses at specific dates and times. He
described Cox's process for checking multiple systems to tie this
information together with subscriber data and that both their
technical and billing systems came back with the same information in
this case, that the subscriber indicated by the IP address and date
and time in question was a J. Tenenbaum of Providence, RI.
He stated that Cox sent a letter to the subscriber indicating that
someone had subpoenaed information about their service in a civil
case. When asked, he also pointed out specific sections of the Cox
Acceptable Use Policy regarding copyright. He stated that he
had no doubt that Cox identified the correct subscriber in this case.
On cross examination Mr. Matteo was asked about the letter he sent,
which had language saying that a lawsuit had already been filed, in
comparison to the initial letter sent by Plaintiffs indicating that
they would file a case if the issue was not resolved. Professor Nesson
asked Mr. Matteo about the case Fonovisa v. Does 1-76, in which the
subpoena was issued. He also asked Mr. Matteo about the subscriber
name of J. Tenenbaum, and introduced Joel's mother, Judith Tenenbaum.
James Chappel
Mr. Chappel is a high school friend of Joel. He was asked by
Plaintiffs about the PC in Joel's Providence home located in Joel's
bedroom. He was asked if he'd ever used it, to which he indicated that
he had used it to check e-mail on rare occasions while in high school.
He was asked if he'd ever used KaZaA on the PC or any
other in the Tenenbaum home, to which he said he had not. He testified
that he had never used the sublimeguy14 username, knew what
filesharing was, and had seen some 'blank' CDs in Joel's bedroom while
he was in high school. He also testified that he had heard Joel brag
about obtaining music free on the internet while he was in high
school.
On cross-examination Professor Nesson asked if he was mad at Joel for
implying he may have used KaZaA on the computer in his bedroom. At
first Mr. Chappel was hesitant to answer, but did indicate he was
annoyed by the fact. He testified that he had not been deposed in the
case, but did "sign something" for Plaintiffs. After a sidebar
conference, a letter written to Plaintiffs by Mr. Chappel was
introduced indicating that he had often heard Joel brag in high school
about always having the latest music and getting it from the internet.
He indicated that, in addition to a statement Plaintiffs sent him to
sign that was written "in their words", he wrote the letter because he
wanted to submit something in his own words.
Dr. Arthur Tenenbaum
Joel Tenenbaum's father was the last live witness of the day, being
asked by Plaintiffs mostly yes or no questions about artists his son
liked, and whether or not he had ever seen Joel use KaZaA. He
testified that he had seen Joel use KaZaA and had even been shown the
process of using it at one point, to try to obtain music that was more
to his father's taste. He also indicated that he had called Joel after
reading about lawsuits during Joel's freshman year of college to
caution Joel not to do it. He testified that Joel had told him that
you would only be sued if you "did it a lot".
Tova Tenenbaum and Abagail Nathan
Deposition testimony was read from Tova Tenenbaum and Abagail Nathan,
Joel's younger and older sister. Both testified about Joel's music
tastes and that they never saw anyone else use the computer for
downloading music and had never done so themselves. Tova indicated
that Joel had left several burned CDs in his car, which she now drove.
Student Must Pay $675,000 in Downloading Case
Published: July 31, 2009
Filed at 6:21 p.m. ET
BOSTON (AP) -- A Boston University student has been ordered to pay
$675,000 to four record labels for illegally downloading and sharing
music.
Joel Tenenbaum, of Providence, R.I., admitted he downloaded and
distributed 30 songs. The only issue for the jury to decide was how
much in damages to award the record labels.
Under federal law, the recording companies were entitled to $750 to
$30,000 per infringement. But the law allows as much as $150,000 per
track if the jury finds the infringements were willful. The maximum
jurors could have awarded in Tenenbaum's case was $4.5 million.
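The arithmetic behind those figures is straightforward; a minimal
worked sketch (the per-song figure for the actual award is derived
here, not reported in the article):

    SONGS = 30
    print(SONGS * 750)      # 22,500    -- statutory minimum for 30 songs
    print(SONGS * 30_000)   # 900,000   -- ordinary statutory maximum
    print(SONGS * 150_000)  # 4,500,000 -- willful maximum, the $4.5M cited
    print(675_000 / SONGS)  # 22,500.0  -- the actual award works out per song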
The case is only the nation's second music downloading case against an
individual to go to trial.
Last month, a federal jury in Minneapolis ruled a Minnesota woman must
pay nearly $2 million for copyright infringement.
Day four in Sony v. Tenenbaum
BY MARC BOURGEOIS
Friday, July 31, 2009
Testimony in day four of Sony v. Tenenbaum began with the continuing
cross-examination of Dr. Stanley Liebowitz.
Professor Nesson continued his questioning from a point on which he
offered that he and Dr. Liebowitz both agreed: that the recording
companies' revenues began declining after Napster made file sharing
ubiquitous, due to the weakening of the property rights of copyright
holders. Professor Nesson asked whether, given the new situation that
had emerged, he believed it was true that the companies that were
previously the leaders may not emerge as leaders when a new business
plan plays out. He asked the Doctor about an opinion he offered in his
2001 book that DRM would be a part of the future of the music
business. Dr. Liebowitz responded that he had been hopeful that DRM
would be successful in restricting the ability to copy music, so that
it would strengthen the property rights of the copyright holders, but
stated that DRM turned out to provide only limited protection because
it was relatively easy to defeat, such as by burning CDs. He then
asked when the industry first offered a product that was not
restricted and was comparable to the open MP3 file; Dr. Liebowitz
testified that he thought this happened in the 2007 time frame.
Professor Nesson asked Dr. Liebowitz to explain to the jury an example
in his report which used a jewelry store. He explained his analogy as
one where a jewelry store owner was continuously robbed, thus forcing
the owner into a different business model, such as selling for another
store. He generalized that this was a similar weakening of property
rights, detrimental to society because it would force someone into an
unanticipated occupation; however successful they might be at it, it
would be a loss to society because it prevented someone from being in
the occupation they desired. Professor Nesson read into this analogy,
comparing it to a store with no locks on the doors or other
protections against robbery, or to an alternative product to jewels.
Under this questioning Dr. Liebowitz maintained his position, but did
say that where people have a strong enough will to break the laws
underlying strong property rights, there may be no enforceable system
that gives people the strong property rights they once enjoyed.
He was asked if it was his position that a weakening of property
rights leads to a decline in production in general. He agreed, and
stated that this weakening of property rights likely led to a drop in
the production of sound recordings in general. He was asked if other
experts in his field believed that the dip in record sales was not due
to file sharing, and he offered Oberholzer-Gee's paper as an example
of an economist who disagreed with his position. Professor Nesson
questioned Dr. Liebowitz on an assertion in an Oberholzer-Gee paper
that the number of music albums released annually had doubled since
2000. Dr. Liebowitz said that he believed these numbers were not
necessarily reliable because they included only the number of releases
registered with Nielsen, not necessarily the number of
professional-quality albums released. The comparison could not
necessarily be made, since in previous times it would not have been
possible to come by numbers for the amount of amateur-quality music
released, and thus the current Nielsen numbers would be comparing
apples to oranges with previous numbers.
He then went into a sports analogy to explain his proposition about
professional-quality versus amateur-quality albums released. He
explained that if, due to some market change, professional sports
franchises could no longer sell tickets or make money from
broadcasting and the market for professional-quality sports went away,
it would not eliminate sports being played, since there are plenty of
amateur sports whose participants do not make money; but since money
is being paid for professional sports, the market overall prefers that
kind of sport. He explained that if the property rights of the
professional franchises were eliminated it would be a harm to society,
because the professional tier of sports would go away, which would
likely reduce the total production of sports for the marketplace.
He was then asked about the network effect, whereby the more people
have access to a technology, the more overall value the whole
technology has. He explained this with an analogy to the telephone,
but concluded that a network effect due to file sharing was not
likely.
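One common formalization of that idea is Metcalfe's law (an assumption
here; the witness did not cite a specific formula): with n users, the
number of possible pairwise connections, and so roughly the network's
value, grows quadratically.

    def potential_connections(n: int) -> int:
        # Each of n users can connect to each of the other n - 1 users;
        # counting each pair once gives n * (n - 1) / 2.
        return n * (n - 1) // 2

    for n in (2, 10, 1000):
        print(n, potential_connections(n))  # 1, 45, 499500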
Upon redirect Dr. Liebowitz was asked if he agreed with the opinions
provided in the Oberholzer-Gee paper. He responded that he did not. He
was asked if there was any reason to believe that the specific
plaintiffs in this case would be companies that would not survive in
the new marketplace that was emerging, to which he also replied that
he thought there was no reason these companies should fail.
Joel Tenenbaum
The main witness of day four was the Defendant, Joel Tenenbaum. Joel
was asked basic questions about where he currently lived, as well as
where he had previously lived, and what computers he had both at his
Providence home and in college. He said nothing surprising about his
computer at home or at college that hadn't been revealed in previous
testimony. He also testified that he had used the sublimeguy14
username, admitted that he had used KaZaA, and that the KaZaA shared
folder in the screenshots from MediaSentry was his. He also testified
that it was not uncommon for him to see other people uploading files
from him on the KaZaA traffic tab. Mr. Reynolds then asked the
Defendant about the case against him. He testified that he first found
out about the case from his mother. He was asked about his responses
to interrogatories asking who else may have used his computer or
KaZaA, and about requests for admissions about file sharing use. His
answers to each of these questionnaires, stating no knowledge, were
shown to the jury.
The questioning then turned to his deposition testimony where he
stated that there were many people who could have used his KaZaA
account, friends, other people who had stayed at his house, etc. He
also testified that he had never actually seen any of these people use
KaZaA. He was then asked about his musical tastes and asked if he
liked several artists that appeared in the KaZaA shared folder. He
testified that he had burned CDs of the music in his shared folder,
and testified that he had ripped CDs to his computer. He testified
that he had never filled in the "comments" metadata on any of the
files ripped to his PC. He testified that he may have changed the
metadata on some files to be consistent with others, to make them
easier to find in music programs, but did not do so for much of the
music that he had.
Joel was asked about a video he had recorded from a Deftones
performance on the David Letterman show. He testified that he had
recorded this video, put it into his KaZaA shared folder himself, and
made it known on a Deftones forum that he had done so for others to
download it from him.
Joel was then asked about his computer and music usage habits at
Goucher College, where he stated he and other students had used the
Network Neighborhood feature of Windows to share music with one
another. He was shown numerous items from Goucher College warning
about copyright infringement and peer to peer file sharing, all of
which he admitted to having received at some point as a Goucher
student. He was asked about other file sharing software such as
Napster, LimeWire, and iMesh and admitted that he had used them all.
He testified when asked that he did all of this to receive the most
music with the least effort.
Joel was asked about his letter to Plaintiffs after initially learning
that he may be sued for copyright infringement. The letter included a
line stating he was not near his computer in Providence at the time of
writing, but would return later and delete any copyrighted material if
it existed. He was also asked about the inspection of his computer and
its reinstallation, which he stated he took to Best Buy to have done
while the Plaintiffs' inspection was potentially pending, asking Best
Buy to preserve all of the music because of that inspection. He stated
that he had this done because the computer wouldn't boot up anymore.
He was asked if he had any reason to disbelieve anything in Dr.
Jacobson's report, stating that he didn't because Dr. Jacobson was "a
competent professional". He testified that he had listened to, talked
about, made mixes of, and made available for distribution all of the
music in his shared folder.
On cross-examination Professor Nesson asked Joel about his personal
and family history, the places he had lived, and when he became
interested in music, which he explained in great narrative detail. He
testified about his usage of music, including borrowing CDs from
friends, making mix tapes from the radio, and purchasing music CDs
from record stores. He was asked what he found attractive about
Napster, to which he said he'd previously used Yahoo! search to
attempt to find mp3 files, but it was much easier when Napster came
about. He testified that he was not the person who originally
installed Napster on his computer in Providence. He explained that
Napster was a giant library of songs in front of you and "the Google
of music". He stated he did not have a sense that it was illegal at
the time he was using it. He also stated that his friends also used
Napster, and that he was never interested in hurting the artists and
record companies.
Professor Nesson asked about Joel's high school life and how he used
music throughout that time period, which he described as driving
around with his friends listening to music in his car; he testified
that he and his father had installed a good deal of upgraded stereo
equipment in that car.
Joel testified that he also used KaZaA and found it to be similar to
Napster in function. He was asked about his letter to Plaintiffs and
why he didn't remove his music files as the letter said he would. He
stated that he intended to, but could not make himself do it after all
of the time he had put into assembling the music collection. He then
described what happened at college afterwards, with his college doing
more and more, year after year, to make file sharing applications not
work; he stated that around his junior year none of the applications
he had used worked properly anymore. He stated he continued buying CDs
during this time period, due to quality issues.
Professor Nesson then turned to issues of the present lawsuit and why
Joel lied on his written interrogatories. Joel said that his answers
seemed like the best response to give without a lawyer. Professor
Nesson also asked about some of his deposition testimony, in which
Joel stated he was less than fully forthcoming. He eventually was
asked if he was taking responsibility, to which he said, "I did it".
He stated that he stopped in 2007 or 2008 because of problems with
malware on his machine from file sharing, encountering spoof files,
and because he began using iTunes. He stated that this lawsuit was one
of the reasons he stopped using file sharing.
He stated that during the time Plaintiffs accused him of infringement,
August of 2004, he was not aware of iTunes. He stated he may have
heard of some other music services but that he wasn't in a position to
switch his music acquisition to any other method. He was asked if he
ever used file sharing for the purpose of selling or any other
commercial activity, which he said he did not; his use was entirely
personal.
The redirect was very short, asking about his bringing his computer to
Best Buy and whether his intention was to destroy evidence by doing
so; he stated it wasn't, he just did so because it wouldn't run. He
was asked about his testimony that he shared music with friends and
whether he was friends with everyone on KaZaA and Napster, which he
said he was not. He was asked if he was now admitting liability, to
which he said yes.
Ron Wilcox
Mr. Wilcox is with Warner Music Group and was formerly of Sony. He
testified as to the sale of music from the early 1980s through the
present time. He explained the advent of the CD, which was not built
with any encryption because copying was not seen as a major threat at
that time. He testified as to music industry efforts in the 1990s to
explore digital distribution methods, which he described generally in
terms of the amount of effort expended, but without specifics. He
testified that all the technologies they looked at during this time
included some sort of copy protection.
He testified that efforts to add encryption to CDs were never fully
explored because it would have left a lot of existing equipment
obsolete and they did not believe this would be something that the
marketplace would accept. He testified about early forms of DRM such
as FairPlay on iTunes.
His cross examination was short. He was asked about Warner's reaction
to Napster, which he said concerned the company because it was an
illegitimate free product. He was asked if Warner or Sony ever tried
to partner with peer to peer services, which he said they had, but the
partnerships never went very far because of animosity on the peer to
peer side; he stated they never seriously wanted to work with record
companies in the way the record companies wanted.
Silda Palerm
Ms. Palerm's testimony was to authenticate the Warner tracks at issue
in the case. The only other issue she testified to was that Warner had
had an over 50% reduction in force since the year 2000. On cross
examination Mr. Feinberg asked if the reduction in force was at all
attributable to the economy. Ms. Palerm stated her opinion that, since
the bulk of the reduction in force came before the economy was having
trouble, she believed it was due to file sharing.
After Ms. Palerm Plaintiffs ended their case.
After the conclusion of their case, Plaintiffs moved for a directed
verdict on the issues of copyright ownership, liability, and
willfulness. Defendant conceded ownership, but not either of the other
factors. Judge Gertner indicated she was inclined to direct on the
issue of liability based on testimony but still planned to go to the
jury with willfulness and the award. The Defendant indicated that they
would likely wrap up their case by mid-morning on Friday, after which
there will be closing arguments. Plaintiffs indicated they only needed
20-30 minutes for their closing.
posted by Marc W. Bourgeois @ 7/31/2009 01:51:00 AM
INTERNATIONAL FORUM ON FREE CULTURE & KNOWLEDGE
Barcelona, October 29 to November 1st, 2009
Website: http://www.fcforum.net/
Newsletter subscription:
http://openfsm.net/projects/freecultureforum/lists/freecultureforumbcn
An International Forum on free culture and knowledge is being
organized in Barcelona from October 29 to November 1st, 2009.
This is an opportunity to bring together under the same roof, from a
critical perspective, the main organizations and active voices in the
free culture and knowledge world; a meeting point to sit down, dance
and work together, setting common agendas and strategies and
addressing disagreements. In this regard, the Forum is
action-oriented. It is also a chance to gain the visibility to reach a
wider public and point out another perspective on knowledge, culture
and creativity, different from the one that the entertainment industry
and universities insist on showing us.
Why Barcelona
In January 2010, the Spanish State will take up the Presidency of the
European Union. The Spanish Government has already announced that one
of its flagship projects will be reinforcing control of the Internet
and criminalizing the culture of sharing in the digital environment.
The consequences of those decisions will be felt in the rest of the
world. Furthermore, within this context, Barcelona is closing
agreements with cultural institutions to establish a fairer sharing of
copyrights. These agreements will spread to other institutions in
Catalonia and the Spanish State.
On October 29th this year Barcelona will hold the Second Edition of
the Oxcars Festival, an international event honoring the defense of
culture and showing that other channels of creation exist, as good and
of as much quality as the traditional ones. The last edition was a
success, with more than 2000 participants. It drew the attention both
of an interested public and of the media. You can find more
information about last year's show at
http://exgae.net/exgae-multiply-and-share-forth/theoxcars.
In the Spanish State there are very active organizations, movements
and people related to free culture from different perspectives,
offering a very rich sharing space and a source of new proposals from
which to launch an international process. In this regard, voices from
around the world, such as at the last World Social Forum (Belem do
Para, Brazil, 2009), have recognized the need to create international
spaces for networking, coordination and the building of a global frame
for the free culture and knowledge issue, to analyse similarities and
develop common agendas; the Free Culture Forum of Barcelona aims to
create such a space.
What: Combining advocacy and the building of infrastructure
The Forum's main objectives are, on the one hand, building networks to
optimize the efforts of the different groups and to fix common demands
against the content industry's and governments' proposals in their
eagerness to control culture and information; and on the other hand,
reinforcing the self-organization of tools and infrastructure to
sustain free culture.
How
The schedule and methodology of the Forum are organized over four
days:
* October 29th: Celebration of the Oxcars Free Culture
  Awards Festival.
* October 30th: Panel presentations of key experiences
  from around the globe and discussion of the key issues.
* October 31st: Working groups around the key issues of
  the Forum, and open space for meetings of participants'
  initiatives.
* November 1st: Pooling the results and initiatives from
  the meetings and working groups in order to identify a
  common future agenda and manifesto.
Key issues
LEGAL PERSPECTIVES AND USER ACCESS: From a legal point of view, we
will try to identify holes and flexibilities in national regulations
and international agreements, looking for a strategy against the
abuses of knowledge and culture policies, both in private and
contractual relations and in international public policies. It is
necessary to consider users' rights to culture. The tendency of recent
years has broken the balance between the rights of users and consumers
and those of producers and creators, and this balance must be
reestablished.
EDUCATION AND KNOWLEDGE MANAGEMENT: Opposed to the corporate approach
to education, there is a new approach based on the idea of sharing and
of keeping up solidarity: new ways of managing the knowledge created
by publicly funded research, and innovative research methodologies
that take social movements into account as knowledge generators. It is
also about taking advantage of the new educational tools and the
dissemination of knowledge that the Internet and digital culture have
provided. Little by little, new initiatives for the use and creation
of materials without copyright, and proposals to extend limitations
and exceptions to those rights for educational aims, are emerging.
ECONOMIES, NEW P2P MODELS AND SUSTAINABLE DISTRIBUTION: Culture and
knowledge management are also basic to the economy. In the last few
years, more and more voices have questioned the costs to society and
its development of cultural and knowledge exploitation models based on
exclusive rights with too long a life span. Favored by the Internet,
the focus of the economy has moved from property to access. The free
culture philosophy, inherited from free software, is empirical proof
that a new ethic and new businesses based on the democratization of
knowledge are possible. Intermediaries disappear and the author
becomes the producer of her works.
FREE SOFTWARE AND KNOWLEDGE SHARING, HACKER PHILOSOPHY: Even though
the term "hacker" is usually used by the media with a negative
connotation, alternative actions have developed around this movement,
with a clear philosophy of defending users' rights from a perspective
of common conscience that promotes freedom of knowledge and social
justice. The hacker movement has also built key platforms and tools
for a free culture infrastructure.
ORGANIZATIONAL LOGIC AND POLITICAL IMPLICATIONS OF FREE CULTURE:
Critically reflecting on the emerging organizational and democratic
features of collective action related to free culture experiences
(such as remix culture, prosumerism, decentralised organizing, and
open and participative principles), looking at strains and weaknesses,
and discussing their political implications and the emerging
institutional logic. Furthermore, a critical analysis of the "dark"
side of technology and the risks linked to its uses, such as the
increase of surveillance and control and the concentration of the
benefits of collectively generated value.
The Forum infrastructure is provided by Exgae, Networked Politics,
the Free Knowledge Institute and the collaboration of Students For
Free Culture and Hangar.
Website: http://fcforum.net/
Newsletter for announcements:
http://openfsm.net/projects/freecultureforum/lists/freecultureforumbcn
Organization contact: in...@fcforum.net
INTERNATIONAL FORUM ON FREE CULTURE AND KNOWLEDGE: Action and
organization. Barcelona, October 29th to November 1st, 2009.
Website: http://fcforum.net/
Newsletter for news about the free culture Forum:
http://openfsm.net/projects/freecultureforum/lists/freecultureforumbcn
From October 29th to November 1st, 2009, the international forum on
free culture and knowledge will be held in Barcelona. The Forum is a
unique opportunity to bring together under one roof the main
organizations, initiatives and active voices in the world of free
culture and knowledge; a meeting point for working together, building
common agendas and strategies, and reflecting critically on the
diverse visions, dangers and internal contradictions of free culture.
The Forum is likewise an opportunity to give greater visibility to
alternative conceptions of knowledge, culture and creativity,
different from those that the entertainment industry and the
neoliberalized universities insist on imposing.
Why Barcelona
In January 2010, Spain will take over the rotating Presidency of the
European Union. The Spanish Government has announced that one of its
flagship initiatives will be to reinforce control of the Internet and
criminalize the culture of distribution in the digital environment.
These decisions will have consequences for the rest of the world.
Moreover, within this context, cultural institutions in Barcelona are
signing agreements on access to culture that will serve as models for
other institutions in Catalonia, Spain and Europe.
On October 29th of this year, Barcelona will host the second edition
of the Oxcars Festival, an international event in defense of culture
that demonstrates that there are other channels of creation of equal
or greater quality than the traditional ones. Last year's Oxcars were
a success, with more than 2000 directly interested participants and
media coverage (information about last year's Oxcars is available at:
http://exgae.net/exgae-multiply-and-share-forth/theoxcars).
Finally, and importantly, Spain has organizations, very active
movements and people involved in free culture from different
perspectives, offering a very rich space from which to launch an
international process. In this sense, from various places around the
world, as during the last World Social Forum in Belém do Pará
(Brazil, January 2009), the need has been recognized to create
international spaces for establishing a coordination network and for
creating a global framework for free culture and knowledge from which
to analyze similarities and differences between initiatives on
different continents and to develop common agendas; the Barcelona
free culture Forum aims to create that space.
What?
The main objectives of the Forum are, on the one hand, to build
networks to optimize the efforts of the various groups and to share
objectives in order to prevent industry and governments from imposing
their policy of control and restriction of culture and information;
and, on the other hand, to organize ourselves to build infrastructure
for free culture.
How?
The Forum will take place over three days combining diverse
methodologies:
     * October 29th: Celebration of the Oxcars free culture
       festival.
     * October 30th: Panel presentations of key, inspiring
       experiences around the 5 working axes.
     * October 31st: Working groups (barcamp) around the key
       issues.
     * November 1st: Pooling of the results of the working
       groups. Definition of a common agenda and approval of
       manifestos by topic.
Axes
LEGAL PERSPECTIVES AND USERS' ACCESS: From a legal point of view, the
aim is to identify the cracks and flexibilities in national
legislation and international agreements in order to seek a strategy
against the abuses of policies on the dissemination of knowledge and
culture, both at the level of private and contractual relations and
with respect to international public regulations and policies.
Likewise, it is necessary to consider users' rights.
EDUCATION AND KNOWLEDGE MANAGEMENT: As opposed to a corporate
approach to education, a perspective is proposed based on the idea of
sharing and of keeping solidarity alive: developing forms of managing
the knowledge generated through publicly and independently funded
research, and more participatory research methodologies, from a
perspective that recognizes social movements as generators of
knowledge. It is also about taking advantage of the new tools for
education and for the dissemination of knowledge that the Internet
and digital culture have provided. Little by little, initiatives are
appearing for the use and creation of materials over which no
copyright exists, and proposals to extend the limitations and
exceptions to those rights for educational purposes. Likewise, the
public domain is gaining ever greater importance as part of our
cultural and intellectual heritage. Open access to culture and a
broad public domain, as opposed to the extension of copyright terms,
are the basis of the development of countries and of societies in
general.
ECONOMIES, NEW P2P MODELS AND DISTRIBUTIVE SUSTAINABILITY: This
section will address two important lines of work. The first focuses
on the new models that arise from new forms of organization, and the
second on the new exploitation proposals that have emerged and that
call into question the need for intermediaries. For the economy, too,
the management of knowledge and culture is fundamental. More and more
voices are raising, from an economic point of view, the overall cost
to society and its development of models for exploiting culture and
knowledge based on exclusive rights of excessive duration. Driven by
the Internet, the focus of the economy is shifting from property to
access. The philosophy of free culture, inherited from free software,
is the empirical demonstration that a new ethic and new enterprises
based on the democratization of knowledge and of the means of
production are possible. Intermediaries tend to disappear, and the
author becomes the producer of her own works. P2P networks have also
brought about a revolution in the way we approach the creation and
exchange of information and knowledge. From their philosophy, new
collaborative forms of work and of knowledge dissemination have
emerged, not hierarchical but rhizomatic.
FREE SOFTWARE AND KNOWLEDGE SHARING, THE HACKER PHILOSOPHY: Even
though the term hacker is generally used by the mass media with a
negative connotation, the hacker movement is based on a philosophy of
defending users' rights from the perspective of a common
consciousness that promotes freedom of knowledge and social justice,
and it has built extremely important platforms and tools that form
the infrastructure of free culture. Promoting the construction of
infrastructure for free culture is key.
ORGANIZATIONAL LOGIC AND POLITICAL IMPLICATIONS OF FREE CULTURE: The
aim is to reflect critically on the organizational and democratic
logic of collective action related to free culture experiences (such
as remix culture, prosumerism, decentralized organizing and
participatory principles, among others), analyzing their advantages
and weaknesses as well as internal tensions, and likewise to discuss
their political implications.
The infrastructure for the Forum is being provided by Exgae,
Networked Politics, the Free Knowledge Foundation and the
collaboration of the Students for Free Culture network and Hangar.
Website: http://fcforum.net/
Newsletter for news about the free culture Forum:
http://openfsm.net/projects/freecultureforum/lists/freecultureforumbcn
Organization contact: in...@fcforum.net
Dave,
As I think you know, I am coordinating work here at the FCC on the
National Broadband Plan. We are just getting going but one of the
things we have to nail down is a smarter definition of broadband. We
just put out a public notice asking for comments on this issue. I
just wanted to make sure you (and the tech community I know you are
the central node in) see it, as we particularly want their help in
thinking this through.
The notice can be found at:
http://hraunfoss.fcc.gov/edocs_public/attachmatch/DA-09-1842A1.pdf
I am cc'ing Carlos Kirjner, who has joined us on this effort and a
person who I hope you have a chance to talk with on this question.
You both bring great expertise to the effort.
Many thanks and hope all is well.
Blair Levin
(Comments from me: There are critical advantages to the fact that
Joel’s case was pressed on the “pure principle” basis on which it was
presented. The principles remain alive in the appeal process. Since
Gertner ruled that the jury would decide liability, Nesson/Joel were
given the opportunity to press the case the way they wanted to,
directly to the jury, as in a criminal proceeding. Joel was questioned
under that expectation, then that expectation was removed by the
judge’s invalid ruling of liability [since that was not a question of
fact that Joel was competent to answer] just before Nesson's
summation. This fundamentally damaged the integrity of the proceeding
and Joel's ability to present a consistent defense. -- Seth)
> http://blogs.law.harvard.edu/nesson/2009/08/25/howard-and-paul-geller-respond/
howard responds, and i to him
Published August 25th, 2009
[From Howard Knopf:]
> Dear Charlie:
>
> Here’s my response.
>
> http://excesscopyright.blogspot.com/2009/08/my-response-to-prof-charles-nesson-re.html
>
> First of all, given the facts as they have come out both
> before and as reported in the various media during the trial
> (I obviously haven’t seen the transcript), I still tend to
> doubt that this was a particularly winnable case.
[Charles Nesson:]
so stop right there. you mean winnable at trial.
> BTW, in 2004 we “won” this battle in Canada before it ever
> really started by preventing the disclosure of the names
> behind IP addresses in the Canadian version of the RIAA’s
> attempt to sue individuals. And we have a similar statutory
> minimum damages regime here, inspired by the USA but with
> some differences such as a max of CDN $20,000 per work.
> Still quite dangerous. The Canadian record companies were
> unable or unwilling to provide sufficient admissible
> evidence to warrant this disclosure in light of the “risk
> that the information as to identity may be inaccurate”, the
> resulting exposure to serious civil liability and the
> invasion of privacy. We were helped by a pretty good federal
> privacy statute in Canada and at least two ISPs that
> seriously stood up for their customers at the time (Shaw and
> Telus). See here and here. I was involved on the winning
> side. It’s really too bad that these cases weren’t likewise
> stopped at the outset in the USA, but that battle appears to
> have been lost a long time ago in other cases.
and never fought, a tragedy in leadership for harvard to
stand idly by, unwilling to put its weight behind motion to
stop their subpoenas
> There’s really not much I can add to my original blog post
> from August 3, following the July 31 verdict and my other
> posts on this.
>
> I can point to Ray Beckerman’s “wish list”, which outlines
> several possible technical and practical arguments based
> upon such matters as dates of registration, lack of proof of
> actual “distribution” according to the language of and case
> law on § 106(3), etc. which might or might not have worked
> to get Joel off the hook. Ray also mentions our Canadian
> case in his point that “Plaintiffs should be required to
> prove that the downloaded song file copies were played and
> listened to, and their contents verified, by a person
> qualified to make such determination. See Deposition of
> President of MediaSentry in BMG v. Doe.”
all respect to ray, these defenses do not join the
fundamental issues. this trial was not an exercise in
getting joel off the hook.
> I don’t know which of these issues were addressed at trial
> or how much evidence on these issues there is on the record.
>
> Apart from a victory based on issues such as those on Ray’s
> “wish list”, the only other conceivably “winnable” issues
> might have been a very uphill fair use argument and a
> potentially more successful argument on the
> unconstitutionality of the statutory minimum damages
> provisions. I know you have tried to pursue both of these
> issues.
these are the issues, not whether joel "did it"
> • Fair Use. If there was a winnable argument here, which far
> greater experts than me have doubted according to your own
> blog,
stop right there. starting from scratch the fair use issue
now looms as a fundamental question in the allocation of
function between judge and jury as providing a limitation in
wisdom to the expansive power of copyright, so let them
doubt, then consider, then be convinced
> it would probably have involved a lot of analysis of the
> fourth factor ("the effect of the use upon the potential
> market for or value of the copyrighted work") and this would
> presumably have required a lot of economic evidence. This
> evidence might have come, for starters, from your Harvard
> colleague Oberholzer-Gee and/or Andersen/Frenz in the UK as
> expert(s) to show that there was evidence as to no overall
> harm and maybe even a "benign" or "positive" effect on "the
> potential market for or value of the copyrighted work". At
> least such evidence might have enabled Judge Gertner to deny
> summary judgment on this issue. It would have also enabled a
> great debate with the very able Stan Liebowitz, with whom
> one may disagree - but he is still a very accomplished and
> important economist in the IP area and an experienced expert
> witness.
as far as i can see liebowitz and oberholzer-gee essentially
agree, stan putting his value judgment on "professionally"
recorded music and felix on the growth in volume and quality
of music from the people. but see how this very question
mistakes the nature of the inquiry as a judgment for the
jury to make case by case, this being joel’s case and joel’s
right to trial by jury in which the fairness and justice of
the actions being taken against him in the name of the state
are open to address
> Maybe other evidence in addition from someone with knowledge
> about the economic insides of the record industry would have
> helped. I frankly doubt, as you have suggested in the
> Canadian media in your interview with Jesse Brown, that the
> lack of "fairness" on the part of the record industry either
> in the way it has marketed music to its customers or treats
> its customers in its litigation campaign is a winnable fair
> use argument under §107, even if you are right that the four
> factors are not "exclusive" and that Court can go beyond the
> four factors and even devise a new "fair use" affirmative
> defense. Whether or not there is the makings of a potential
> "abuse of process" or Posnerian "misuse" of copyright
> argument or something along these lines is hypothetically an
> interesting issue to speculate upon for another day, but
> doesn’t seem to be on the record here and would also
> presumably require a lot of solid evidence.
say more about Posnerian "misuse" of copyright. and note how
the whole bogus strategy of imposing statutory damages on
noncommercial direct infringers was put across on posner’s
aimster dicta raised to holding by easterbrook in a case
managed by jenner & block in which no challenge to the
imposition of statutory damages was made
> • Unconstitutional statutory minimum damages. This seems
> potentially much more winnable than fair use. But if there
> is a winnable argument here, it would probably also require
> lots of evidence to show that a statute that permits an
> award of up to $150,000 per work in these circumstances and
> $22,500 per work times 30 works as actually awarded for
> downloading and supposedly sharing 30 songs that sell for
> about $0.99 each retail goes so far beyond any possibly
> valid "deterrent" or "punitive" purpose that it is, on its
> face, unconstitutional.
:<)
> Unfortunately, the SCOTUS may not see this as self evident.
> Again, maybe Oberholzer-Gee or Andersen/Frenz could have
> helped here, and perhaps other experts on the economics of
> the music industry, how file sharing actually works, how
> many of the ocean of unauthorized downloads can be causally
> attributed to Joel, and the overall question of
> proportionality. Maybe some expert sociological or
> criminological evidence on "deterrence". But given the post-
> Eldred approach to deference to Congress on quantifiable
> copyright policy matters such as extending the term from
> life + 50 to life + 70, I would imagine that you would now
> need a great deal of solid evidence to show that this choice
> of a numerical range of a minimum of $750 and up to $150,000
> per work for willful infringement is not only beyond
> "arguably unwise" but also somehow clearly unconstitutional.
> For better or worse, "unwise" and "unfair" may not equate
> with "unconstitutional."
there are two questions: first, when, if ever (and i say
never) did congress decide that draconian damages against
noncommercial consumers was the appropriate response to
peer-to-peer file sharing? second, reached only if the
answer to the first requires it, would be whether the power
to impose this damage at the unconstrained behest of the
copyright industry imposed upon individuals by civil process
(thereby bypassing the protections afforded criminal
defendants) with no attendant compensatory component, no
proof of actual damage caused by the defendant, purely for
deterrence of conduct involving no trespass is
unconstitutional.
> BTW, there is an important article in the works by Pam
> Samuelson and Tara Wheatland, which I’m sure you know about,
> but for the benefit of other readers can be found here as a
> work in progress (recently revised).
[more to come]
> Best regards,
>
> Howard
David Sugar
August 26, 2009 11:51 pm
For the last few years I have been working on what is called the GNU
Telephony Secure Calling initiative
(http://www.gnutelephony.org/index.php/Secure_Call). The GNU Telephony
Secure Calling initiative was itself originally formed specifically to
make passive voice communication intercept a thing of the past using
free software and public standards, and came out of ideas from and
work of the New York City civil liberties community and New York Fair
Use in the early part of this decade.
While it is true that the technological means for mass communication
intercept have grown with incremental improvements in communication
technology, the means to apply encryption techniques to counter these
abuses and offer communication privacy on a large scale using free
software have also become possible. Given the nature of this project,
important work was done by volunteers and contributors in Europe,
such as Werner Dittmann, who created the ZRTP-compliant stack we use
over the summer of 2006, and Federico Pouzols, who rewrote the RTP
stack I originally authored for use in GNU Bayonne. Contributions
from non-US developers were specifically encouraged to avoid putting
additional people in the United States in potential danger for
working on cryptographic systems intended for worldwide public use to
defeat communication intercept.
One result of the initiative was the creation of the GNU ZRTP stack
(and our related GNU ZRTP4J, now used in SIP Communicator). The
project was first publicly introduced in October 2006 during the 4th
International Free Knowledge conference, where a complete ZRTP
enabled client (the Twinkle softphone) became immediately available
to anyone through Debian GNU/Linux for establishing simple secure
point-to-point VoIP calls over the public Internet. This offered a
basic means of establishing secure calls using Phil Zimmermann's ZRTP
protocol and a free software licensed implementation, but it did not
offer a means to truly integrate and manage secure calling, or to
make it a standard and easy-to-deploy Internet user service.
This latter goal became possible through the development of GNU SIP
Witch, which can be used to create and deploy network-scalable,
privacy-enabling secure VoIP solutions for individuals, private
organizations, and even national governments. My focus in this project
over the past year has been on this recently introduced GNU SIP Witch
package. While this package is still rather new, there is a basic
howto for system admins to use and deploy GNU SIP Witch with Ubuntu
GNU/Linux, and this can be found at
http://www.gnutelephony.org/index.php/Howto_Deploy_SIP_Witch_On_Ubuntu.
Ideally I would like to do far more to make it easier to deploy secure
calling networks without requiring system admin skills.
GNU SIP Witch is different from many other VoIP servers, such as
Asterisk, in that it never establishes media connections with or
through a server, and hence performs no protocol conversion or media
operations that would otherwise require decrypting a secure audio
session in a central location. Instead it relies on published open
standards and the SIP protocol to coordinate secure endpoints, which
can then form direct peer-to-peer media connections. This means these
media sessions are not decrypted by a central server, nor are
encryption keys shared with or managed by a central server.
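To make the pattern concrete, here is a minimal sketch in Python of
the signaling-only model described above. It is not GNU SIP Witch
code, and every name in it is hypothetical: a toy rendezvous server
records where each identity is reachable, hands that address back to
a caller, and then steps out of the path, so media (and any key
exchange, such as ZRTP) can only take place directly between the
endpoints.

    # Toy rendezvous server illustrating signaling-only coordination.
    # The server learns only who is reachable where; it never touches
    # media streams or session keys. Hypothetical protocol and names.
    import socketserver

    registry = {}  # identity -> "host:port" of the endpoint

    class SignalingHandler(socketserver.StreamRequestHandler):
        def handle(self):
            line = self.rfile.readline().decode().strip()
            verb, _, arg = line.partition(" ")
            if verb == "REGISTER":
                # e.g. "REGISTER alice@example.com 192.0.2.1:4000"
                identity, addr = arg.split()
                registry[identity] = addr
                self.wfile.write(b"200 OK\n")
            elif verb == "LOCATE":
                # e.g. "LOCATE alice@example.com"; hand back the peer's
                # address and step out of the media path entirely.
                addr = registry.get(arg)
                self.wfile.write((addr or "404 NOT FOUND").encode() + b"\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("127.0.0.1", 5060),
                                    SignalingHandler) as srv:
            srv.serve_forever()

The design point is simply that nothing in the server's code path
ever sees a packet of the call itself, which makes central decryption
impossible by construction.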
One use case for GNU SIP Witch is as a kind of distributed domain
service that handles inbound VoIP calls received directly over the
public Internet via the SIP protocol, much as sendmail does for SMTP.
In this role, one could create local, publicly reachable SIP
identities (URIs) that match email addresses and thereby offer a
consistent means of contact. This eliminates the need for the kind of
centralized "registry" of callable users that so many other schemes
and services rely upon, since we can make use of DNS and individually
run services. This suggests an alternative and much more distributed
model for enabling secure public voice, video, and instant messaging
contact than that of Skype, which requires a central user directory
and control point, and which uses secret protocols and methods that
cannot be independently validated.
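As a rough illustration of the DNS-based lookup this implies, the
sketch below resolves the SRV records that SIP defines for locating a
domain's server (RFC 3263), much as mail transfer agents consult MX
records for SMTP. It assumes the third-party dnspython package, and
example.com is a placeholder domain; none of this is SIP Witch code.

    # Locate SIP service for a user@domain identity via DNS SRV records.
    import dns.resolver  # pip install dnspython (resolve() needs >= 2.0)

    def locate_sip_service(uri):
        """Return (host, port) candidates for a sip:user@domain URI."""
        domain = uri.split("@", 1)[1]
        answers = dns.resolver.resolve("_sip._udp." + domain, "SRV")
        # Lower priority wins; higher weight is preferred within a
        # priority, just as MTAs order MX records.
        return [(r.target.to_text(), r.port)
                for r in sorted(answers,
                                key=lambda r: (r.priority, -r.weight))]

    print(locate_sip_service("sip:alice@example.com"))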
Another interesting use case is that of creating a secure calling
"domain" in conjunction with an already existing insecure VoIP
infrastructure, such as might be offered by Asterisk. Used this way,
SIP Witch will maintain both a secure and an "insecure" identity for
each ZRTP enabled node it is used to manage. The insecure identity
will be cross-registered with the insecure IP-PBX so that insecure
users can reach users in the secure domain. Similarly, all calls to
non-secure destinations dialed from a secure VoIP user agent are
automatically routed through the insecure IP-PBX. Dialing a secure
destination from a secure user agent will, however, bypass the
insecure IP-PBX entirely and establish a direct peer-to-peer media
session.
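The routing rule just described reduces to a simple predicate. The
toy function below (hypothetical identities, not SIP Witch
configuration syntax) captures it: the IP-PBX is bypassed only when
both endpoints hold secure identities.

    # Illustration of the dual-identity routing rule described above.
    SECURE_DOMAIN = {"alice@example.com", "bob@example.com"}

    def route_call(caller, callee):
        """Bypass the IP-PBX only when both ends are secure."""
        if caller in SECURE_DOMAIN and callee in SECURE_DOMAIN:
            return "direct peer-to-peer ZRTP media session"
        return "route via the insecure IP-PBX cross-registration"

    print(route_call("alice@example.com", "bob@example.com"))
    print(route_call("alice@example.com", "pstn@pbx.example.com"))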
A while back I was asked about speaking at LinuxCon 2009 about this
project, and now I am ready to do so. Given my topic, I am uncertain
as to whether LinuxCon is really ready for me. However, a preliminary
copy of my presentation is now available at
http://www.gnutelephony.org/data/linuxcon2009.odp and
http://www.gnutelephony.org/data/linuxcon2009.pdf for those curious
about my talk next month.
GN Docket Nos. 09-47, 09-51, and 09-137
Regarding the Definition of "Broadband"
By Seth Johnson
The National Broadband Plan must define "broadband"
according to a proper and full concept of what capabilities
constitute "advanced telecommunications service." Broadband
in this conception is constituted of two things:
1. a general purpose platform (in this document generally
associated with the term "Internet" and its consensus
protocols) which is optimized for maximum flexibility
and application innovation, and
2. certain other functions that may optimize particular
applications but that may compromise the flexibility
of the general purpose platform.
See RFC 4924, "Reflections on Internet Transparency"
(http://www.rfc-editor.org/rfc/rfc4924.txt):
A network that does not filter or transform the data
that it carries may be said to be "transparent" or
"oblivious" to the content of packets. Networks that
provide oblivious transport enable the deployment of new
services without requiring changes to the core. It is
this flexibility that is perhaps both the Internet's
most essential characteristic as well as one of the most
important contributors to its success.
"Architectural Principles of the Internet" [RFC1958],
Section 2 describes the core tenets of the Internet
architecture:
However, in very general terms, the community
believes that the goal is connectivity, the tool is
the Internet Protocol, and the intelligence is end
to end rather than hidden in the network.
The current exponential growth of the network seems
to show that connectivity is its own reward, and is
more valuable than any individual application such
as mail or the World-Wide Web. This connectivity
requires technical cooperation between service
providers, and flourishes in the increasingly
liberal and competitive commercial
telecommunications environment.
"The Rise of the Middle and the Future of End-to-End:
Reflections on the Evolution of the Internet
Architecture" [RFC3724], Section 4.1.1 describes some of
the desirable consequences of this approach:
One desirable consequence of the end-to-end
principle is protection of innovation. Requiring
modification in the network in order to deploy new
services is still typically more difficult than
modifying end nodes. The counterargument - that many
end nodes are now essentially closed boxes which are
not updatable and that most users don't want to
update them anyway - does not apply to all nodes and
all users. Many end nodes are still user
configurable and a sizable percentage of users are
"early adopters," who are willing to put up with a
certain amount of technological grief in order to
try out a new idea. And, even for the closed boxes
and uninvolved users, downloadable code that abides
by the end-to-end principle can provide fast service
innovation. Requiring someone with a new idea for a
service to convince a bunch of ISPs or corporate
network administrators to modify their networks is
much more difficult than simply putting up a Web
page with some downloadable software implementing
the service.
RFC 4924 proceeds to list developments that may affect the
advantages of the Internet's general purpose design based on
the end-to-end principle and the transmitting of packets
without regard for the application they are supporting,
including:
* Application Restrictions
* Quality of Service (QoS)
* Application Layer Gateways (ALGs)
* IPv6 Address Restrictions
* DNS Issues
* Load Balancing and Redirection
* Security considerations
The principle of transmitting Internet datagrams without
regard for the applications they support also provides for
"network neutrality" as an emergent phenomenon.
In addition, RFC 4084, "Terminology for Describing Internet
Connectivity" (http://www.rfc-editor.org/rfc/rfc4084.txt)
provides a useful description of what constitutes "full
Internet connectivity," considering this question with
regard to its design for flexibility, including stipulations
about functions that should be disclosed to the purchaser if
they are deployed. RFCs 1958, 2775, and 3724 more fully
describe these issues that arise as various functions are
proposed that may affect the Internet's design for greatest
flexibility.
The Dynamic Platform Standards Project's legislative
proposal for an "Internet Platform for Innovation Act"
(http://www.dpsproject.com/legislation.html) recognizes the
advantages of the design of the Internet Protocol. The DPS
proposal provides a technical characterization of the
general purpose platform provided by the Internet Protocol,
including its provision of uniform treatment of packet flow.
Recognizing and treating this general purpose platform as a
distinct category allows the particular advantages for which
it was designed to be acknowledged and provided for within
the regulatory scheme while other telecommunications
functions may be offered by network providers under the
general term of "broadband" (and may eventually become part
of consensus standards).
This document only seeks to present some initial comments
regarding the relevance of the general purpose platform to
the questions raised in this request for public input. Here
we refer chiefly to the design of the Internet according to
consensus standards. However, it is worth noting that a
general purpose platform can also be afforded by means of
the principle of common carriage. Indeed, some might hold
that the general scheme of digitizing communications into
packets delivered on a best efforts basis regardless of
application, in accordance with the Internet Protocol, is a
natural outcome and a self-evidently necessary means for
providing for interoperability and flexibility among the
autonomous routers that were originally administered by
thousands of competing Internet Service Providers on the
basis of a common carriage principle.
The general purpose platform must be a key component of the
plan for using broadband infrastructure and services in
advancing the full range of national purposes enumerated in
section 6001(k)(1) of the ARRA, and must be recognized as
a key consideration in what constitutes "broadband
capability." The status of deployment of "broadband" in your
reporting should present the deployment of a general purpose
platform as a distinct category from other types of advanced
telecommunications service which may also be deployed, using
the consensus definitions given in relevant RFCs as an
analytical aid. A flexible, general purpose platform also
contributes to the strategy for maximizing utilization since
a platform that optimizes flexibility to make possible a
proliferation of innovative applications incentivizes
participation in connectivity. The general purpose platform
should also be borne in mind in relation to the strategy for
affordability, which should be developed with consideration
of the issues of recourse and enforcement that arise in the
context of public expenditures when contractual expectations
related to such a platform are not met.
A clear distinction should be maintained, in your reporting and
pursuit of national goals, between this general purpose,
neutral platform and optimized telecommunications services
that may diverge from the principles that provide for
optimum flexibility and neutral transport. As part of the
dynamic process of adapting benchmarks over time, the FCC
should consult with experts and the public on
1. what constitutes the general purpose platform,
2. what innovations are recognized as not interfering
with general purpose,
3. which may interfere with general purpose but are of
value to some purchasers, and
4. in this last category, which functions should become a
basis for a category of "consumer connectivity" rather
than general purpose Internet connectivity.
In addition, the FCC should consult with experts and the
public on which functions or features should require
explicit notice and consent given privacy considerations (as
well as what form of consent is adequate for that purpose).
Some additional important considerations the FCC should be
mindful of are the implications of packet inspection, packet
discrimination, data collection and end-user privacy, as
well as the question of whether advertised services perform
as specified, perhaps taking input from other appropriate
agencies. Recourse and enforcement related to these concerns
may be appropriate considerations.
General comments on Benchmarks:
Benchmarks should exhibit and track the rapid evolution both
of the general purpose platform of the Internet and of
broadband as a general term that may include other types of
offerings. The widespread adoption of new Internet-based
applications will affect what "advanced" means to purchasers
of broadband, but this should not be construed as indicating
that special optimization features that some providers may
offer must equate with advanced telecommunications without
consideration of their impact on the general purpose
platform. "Dependability" and "experiential" metrics must be
considered carefully in relationship to the advantages of a
maximally flexible general purpose communications platform,
as some functions that may improve these aspects for
particular purposes may impair the general purpose character
of the platform.
In considering "the availability of advanced
telecommunications capability to all Americans", broadband
infrastructure data may be more objective than subscriber
data, but data should be collected regarding general purpose
connectivity as a distinct category, and the analysis should
present availability in those terms in addition to the ease
with which high speed can be deployed. Similar
considerations apply in the analysis of utilization.
"Broadband" and "advanced telecommunications capability" may
be defined by statute as independent of "any transmission
media or technology," but this does not mean that an
analysis of advanced telecommunications capability should
exclude describing the characteristic of a general purpose
platform as a key category.
Thank you.
Seth Johnson
(From
http://fjallfoss.fcc.gov/prod/ecfs/retrieve.cgi?native_or_pdf=pdf&id_document=7020037177
)
The Stallman Paradox
Until society can resolve what I will call, for the first time, the
“Stallman Paradox” (where learning- and access-enabling technologies,
such as digital books, conversely disable the freedom to read and
hence more than negate the actual benefits of said access), the rush
to embrace all-digital libraries and textbooks is a rush toward a new
dark age.
This is perhaps best exemplified by the case of Cushing Academy. In
this place of assumed learning, the administration chose to abandon a
library collection of some 10,000 books, which any student may freely
access and share, for the presumed benefit of freedom-disabling DRM
(Digital Restrictions Management) e-book solutions, including the
Amazon Kindle. While it is true that the amount of material
potentially available to students is far greater, in doing so this
institution has also decided to accept that the costs associated with
DRM solutions will mean each student will only be able to afford, and
have access to, a far smaller actual collection of material than they
had access to before.
Furthermore, beyond the question of turning universal education into
a monetary privilege that only a few will be able to afford,
freedom-disabling DRM solutions mean that the right to read, to
share, and to learn together is immeasurably harmed. This is perhaps
best exemplified in Stallman’s essay “The Right to Read”, and hence,
along with the questions of basic freedom of access to knowledge and
basic human rights it raises, is why I propose this problem be called
the “Stallman Paradox”.
The logical solution is one where the right to read and think, and to
share knowledge, is not made into a good that only a few will be able
to experience. In the European dark age, education was an exclusive
privilege enjoyed by only a very few. While most societies today
recognize that universal education is both a right and a need, the
use of mandated digitally restricted e-book solutions for education
could well return societies to a new dark age.
(34 seats remaining! -- Seth)
> http://www.isoc-ny.org/?p=815
New York City Community Fiber Meeting – Wed 9/9/09
Presentation, Discussion and Partnership Meeting
NYCCF invites you, elected representatives, city employees, and other
community-based organizations for a presentation of NYCCF’s research,
proposed business model and next steps going forward.
The evening will start with an overview of the history to date,
challenges and obstacles going forward, and discussion of the ways in
which interested parties can help build momentum.
Date: Wednesday September 9th, 2009
Time: 6:30-9:00 pm
Location: DCTV, Third Floor
Address: 87 Lafayette St, New York, NY 10013
Refreshments will be served; please RSVP so we can plan accordingly.
Space is limited to 60 people.
Registration: http://nyccf.eventbrite.com/
More info: http://communityfiberproject.net/
—
What is NYCCF?
New York City Community Fiber (NYCCF) is a grassroots non-profit
telecommunications initiative seeking to construct and maintain
community-owned optical networks in New York City.
It is the goal of NYCCF to construct optical networks that are faster
both in terms of service availability (time to deployment) and
network capacity (connectivity speeds), while at the same time
providing a significantly lower total cost of ownership (TCO) over a
10-20 year time frame.
In parallel with technical and financial modeling, NYCCF has been
actively reaching out to community-based academic and service-based
organizations. NYCCF has had discussions with CUNY, SUNY, NYSERnet,
the National Science Foundation, Manhattan College, NYIT, the New York
Public Library, Mount Hope Housing Company, Fordham University and
NYU, among many others, and invites all community-based organizations
to come out for an evening of business plan discussion and optical
network construction planning.
If this sounds of interest, we hope you’ll join us on September 9th.
Registration: http://nyccf.eventbrite.com/
Contact: Louis S. Klepner
NYC Community Fiber
l...@communityfiberproject.net
o: 212.796.0853
c: 914.456.7243
f: 425.955.8988
Appeal to ICANN for Fairer Treatment of Civil Society:
> http://ncdnhc.org/profiles/blogs/public-interest-groups-in
Top 10 Myths About Civil Society Participation in ICANN:
> http://ncdnhc.org/profiles/blogs/top-10-myths-about-civil
---
> http://ncdnhc.org/profiles/blogs/public-interest-groups-in
ICANN Public Interest Groups Call for Fairer Treatment:
NCUC Press Release
FOR IMMEDIATE RELEASE: 3 September 2009
Public Interest Groups in ICANN Appeal to New President For Fairer
Treatment For Civil Society
The organization that represents Non-Commercial Internet Users in the
Internet Corporation for Assigned Names and Numbers (ICANN) issued an
open letter to the Board this week, expressing concern about the
possible failure of ICANN's attempt to balance the representation of
commercial and noncommercial interests.
California (United States) – ICANN’s Non-Commercial Users Constituency
(NCUC), a group of 152 non-commercial organizations and individuals
from 52 countries who represent the noncommercial interests of
Internet users in ICANN policy development, recently appealed to
ICANN's Board of Directors and CEO to meet with them in Seoul to
resolve serious problems with its current plans to alter the
representation of noncommercial interests in its policy making
process.
Specifically, NCUC’s letter expressed concern over ICANN’s adoption of
a flawed charter for noncommercial users that disregarded the vast
majority of public comments and concerns expressed by noncommercial
Internet users. In late July 2009 ICANN’s Board decided to approve the
NCSG charter drafted by ICANN staff, rather than the charter drafted
by civil society in a 7-month long consensus process that included a
wide variety of noncommercial interests and was submitted to ICANN’s
Board by the NCUC.
ICANN’s staff did not provide its board with the competing charter
submitted by NCUC, which would have properly informed the board’s
decision.
The difference between staff’s charter and civil society’s charter is
stark. Staff’s charter ties council representation and resources to
arbitrary and more easily manipulated constituencies, while the NCUC
charter calls for stakeholder group wide elections of its
noncommercial representatives and other leaders. NCUC’s charter model
encourages consensus building among constituencies, while staff’s
charter model encourages divisiveness and favoritism among
noncommercial interests.
“ICANN’s decision has resulted in significant damage to ICANN’s
credibility within global civil society and has fueled further
distrust towards ICANN’s decision making process,” said NCUC Chair
Robin Gross. “Its treatment of noncommercial users in this instance
has significantly called into question ICANN’s legitimacy to govern
and its ability to protect the global public interest,” said Gross,
Executive Director of digital rights group IP Justice, a NCUC member
since 2004.
The board’s adoption of the stakeholder group charter is part of
ICANN’s ongoing effort to re-organize its Generic Names Supporting
Organization (GNSO), which currently consists of 5 commercial
constituencies and 1 non-commercial constituency, the NCUC. ICANN’s
GNSO is responsible for developing policy recommendations that relate
to Generic Top-Level Domains (GTLDs) or those domain names that end in
.com, .net, .edu, and .org. The GNSO plays an important role on
Internet-related policy issues since its recommendations affect all
who own or use GTLDs, including the way domain names can be
registered, used, transferred, and any applicable fees and associated
policies regarding the domain names. The process of changing the
GNSO’s structure from 6 constituencies to 4 stakeholder groups is
expected to be complete by the end of October 2009.
In its letter the NCUC states that “there is a misunderstanding over
non-commercial representation and participation in ICANN” and NCUC
calls on ICANN to acknowledge that there has been significant growth
among noncommercial participants at ICANN recently. NCUC’s membership
has grown by 240% since 2008 and now includes 75 noncommercial
organizations and 77 individuals. An independent study by the London
School of Economics verified that NCUC has the highest number of
different people on the GNSO Council of any ICANN constituency and
that NCUC has the most geographical diversity among its membership
with members now from 52 different countries.
“NCUC represents an extremely broad range of noncommercial Internet
users, including educational and academic institutions, human rights
organizations, libraries, consumer groups, religious organizations,
bloggers, open source software developers, development-oriented
groups, arts organizations, and other noncommercial interests,”
explained Dr. Milton Mueller, an Internet governance expert. Dr.
Mueller, now a professor at Syracuse University School of Information
Studies and Delft University of Technology in the Netherlands,
co-founded the constituency in 2002.
"Nonprofits and public interest advocacy groups have an irreplaceable
role to play in a self-regulatory scheme dominated by business
interests. Someone has to look out for the public interest. If we
handicap noncommercial voices and divide them into competing silos
they simply won't be able to participate effectively. ICANN's
legitimacy and the quality of its decisions will suffer," explained
Dr. Mueller.
In order to dispel pervasive myths about civil society’s role in
ICANN, the NCUC published a “Top 10 Myths about Civil Society
Participation in ICANN,” a document that explains why much of what
ICANN staff and other constituencies have claimed about noncommercial
participation is untrue.
For additional information on NCUC and noncommercial participation in
ICANN, please contact NCUC’s Chair Robin Gross or visit NCUC’s website
at http://ncdnhc.org.
Contact:
Robin Gross, NCUC Chair
Tel.: +1-415-553-6261
Email: robin – at – ipjustice.org

Milton Mueller, NCUC Co-Founder
Tel: +1-315-443-5616
Email: Mueller – at – syr.edu
More Info:
Non-Commercial Users Constituency (NCUC):
http://ncdnhc.org
NCUC’s Letter to ICANN Board of Directors and CEO:
http://ncdnhc.org/profiles/blogs/ncuc-letter-to-icann-board-of
NCUC’s “Top 10 Myths About Civil Society Participation in ICANN”:
http://ncdnhc.org/profiles/blogs/top-10-myths-about-civil
About the Noncommercial Users Constituency:
The NCUC is the home for civil society organizations and individuals
in the Internet Corporation for Assigned Names and Numbers (ICANN)
Generic Names Supporting Organization (GNSO). With real voting power
in ICANN policy-making and Board selection, it develops and supports
positions that favor non-commercial communication and activity on the
Internet. The NCUC is open to non-commercial organizations and
individuals involved in education, community networking, public policy
advocacy, development, promotion of the arts, children's welfare,
religion, consumer protection, scientific research, human rights and
many other areas. NCUC maintains a public website at
http://ncdnhc.org.
###
---
> http://ncdnhc.org/profiles/blogs/top-10-myths-about-civil
Top 10 Myths About Civil Society Participation in ICANN:
TOP 10 MYTHS ABOUT CIVIL SOCIETY PARTICIPATION IN ICANN
From The Non-Commercial Users Constituency (NCUC)
21 August 2009
________________________________________________________________
Myth 1
“Civil Society won’t participate in ICANN under NCUC’s charter
proposal.”
False. ICANN staffers and others claim that civil society is
discouraged from engaging at ICANN because NCUC’s charter proposal
does not guarantee GNSO Council seats to constituencies. The facts
show this claim could not be further from the truth. NCUC’s membership
includes 152 noncommercial organizations and individuals from 52
countries. Since 2008 NCUC’s membership has increased by more than 240% –
largely in direct response to civil society’s support for the NCUC
charter. Not a single noncommercial organization commented in the
public comment forum that hard-wiring council seats to constituencies
will induce their participation in ICANN. None of the noncommercial
organizations that commented on the NCSG Charter said they would
participate in ICANN only if NCSG's Charter secured the constituencies
a guaranteed seat on the GNSO.
Myth 2
“More civil society groups will get involved if the Board intervenes.”
A complete illusion. Board imposition of its own charter and its
refusal to listen to civil society groups will be interpreted as
rejection of the many groups that commented and as discrimination
against civil society participation. ICANN’s reputation among
noncommercial groups will be irreparably damaged unless this action is
reversed or a compromise is found. Even if we were to accept these
actions and try to work with them, the total impact of the staff/SIC
NCSG charter will be to handicap noncommercial groups and make them
less likely to participate. The appointment of representatives by the
Board disenfranchises noncommercial groups and individuals. The
constituency-based SIC structure requires too much organizational
overhead for most noncommercial organizations to sustain; it also pits
groups against each other in political competition for votes and
members. Most noncommercial organizations will not enter the ICANN
GNSO under those conditions.
Myth 3
"The outpouring of civil society opposition can be dismissed as the
product of a 'letter writing campaign.'"
An outrageous claim. Overwhelming civil society opposition to the SIC
charter emerged not once, but twice. In addition, there is the massive
growth in NCUC membership stimulated by the broader community’s
opposition to the staff and Board actions. Attempts to minimize the
degree to which civil society has been undermined by these
developments are simply not going to work, and reveal a shocking
degree of insularity and arrogance. ICANN is required to have public
comment periods because it is supposed to listen to and be responsive
to public opinion. Public opinion results from networks of
communication and public dialogue on controversial issues, including
organized calls to action. No policy or bylaw gives ICANN staff the
authority to decide that it can discount or ignore nearly all of the
groups who have taken an interest in the GNSO reforms, simply because
they have taken a position critical of the staff’s. ICANN's attempt to
discount critical comments by labeling them a "letter writing
campaign" undermines future participation and confidence in ICANN
public processes.
Myth 4
"Civil society is divided on the NCSG charter issue."
Wrong. There has never been such an overwhelmingly lopsided public
comment period in ICANN’s history. While ICANN’s staff is telling the
Board that civil society is divided, the clear, documented consensus
among civil society groups has been against the ICANN drafted NCSG
charter and in favor of the NCUC one. Board members who rely only on
staff-provided information may believe civil society is divided, but
Board members who have actually read the public comments can see the
solidarity of civil society against what ICANN is trying to impose on
it.
Myth 5
"Existing civil society groups are not representative or diverse
enough."
Untrue by any reasonable standard. The current civil society grouping,
the Noncommercial Users Constituency (NCUC), now has 152 members
including 75 noncommercial organizations and 77 individuals in 52
countries. This is an increase of more than 240% since the parity
principle was established. Noncommercial participation in ICANN is now
more diverse than any other constituency, so it is completely unfair
to level this charge at NCUC without applying it to others. Even back
in 2006, an independent report by the London School of Economics
showed that NCUC was the most diverse geographically, had the largest
number of different people serving on the GNSO Council over time, and
the highest turnover in Council representatives of any of the 6
constituencies. In contrast, the commercial users’ constituency has
recycled the same 5 people on the Council for a decade, and upon the
GNSO “reform”, the first 3 of the 6 GNSO Councilors from the
Commercial Stakeholder Group will represent the United States.
Myth 6
"ALAC prefers the ICANN staff drafted charter over the civil society
drafted charter."
False. An ALAC leader said that she prefers the staff drafted charter.
ICANN staff ran away with this comment and falsely told the ICANN
Board of Directors that ALAC prefers the staff drafted charter. In
fact, the formal statement actually approved by ALAC said that some
members of ALAC supported the NCUC proposal and that “the de-linking
of Council seats from Constituencies is a very good move in the right
direction.”
Myth 7
"The NCUC charter would give the same small group 6 votes instead of
3."
False. For the past 8 months, NCUC has stated that it will dissolve
when the NCSG is formed. It does not make sense to have a
"Noncommercial Users Constituency" and a "Noncommercial Stakeholders
Group,” as they are synonymous terms. Thus, NCUC leaders would not be
in control of a new NCSG – a completely new leadership would be
elected. Under the NCUC charter proposal, all noncommercial groups and
individuals would vote on Council seats, not just former NCUC members.
Strict geographic diversity requirements would mean that candidates
from throughout the world would have to be selected even if they could
not get a majority of total votes.
Myth 8
"NCUC will not share council seats with other noncommercial
constituencies."
Wrong. NCUC’s proposed charter was designed to allow dozens of new
noncommercial constituencies to form at will and to advance their own
candidates for Council seats. Given the diversity and breadth of
NCUC's membership, many different constituencies with competing
agendas are likely to form. The organic, bottom-up self-forming
approach to constituency formation is much better than the board/staff
approach – and more consistent with the BGC recommendations. The SIC
charter makes constituency formation very top-heavy and difficult, and
gives the staff and Board arbitrary power to decide how
“representative” or “significant” new participants are. Because it
ties constituencies to Council seats, every new constituency
instigates power struggles over the allocation of Council seats.
Myth 9
"The NCUC wants to take away the Board's right to approve
constituencies."
False. People who said this have obviously not read the NCUC-proposed
charter. NCUC’s proposal let the Board approve or disapprove new
constituencies formed under its proposed charter. Our proposal simply
offered to apply some simple, objective criteria (e.g., number of
applicants) to new constituency groupings and then make a
recommendation to the Board. The idea was to reduce the burden of
forming a new constituency for both the applicants and the Board.
NCUC’s proposal made it easy to form new constituencies, unlike the
SIC charter, which makes it difficult to form new constituencies.
Myth 10
“The purpose of a constituency is to have your very own GNSO Council
Seat.”
False. Some claim GNSO Council seats must be hard-wired to specific
constituencies because a constituency is meaningless without a
guaranteed GNSO Council representative. However, this interpretation
misunderstands the role of constituencies in the new GNSO, which
is to give a voice and a means of participation in the policy
development process -- not a guaranteed councilor who has little
incentive to reach beyond her constituency and find consensus with
other constituencies. Two of the other three stakeholder groups
(Registries and Registrars) adopted NCUC’s charter approach of
decoupling GNSO Council seats from constituencies, but NCUC has been
prevented from electing its councilors on an SG-wide basis.
_______________________________________________________________________________
JOIN NCUC
All noncommercial organizations and individuals are invited to join
NCUC and participate in policy development in ICANN’s GNSO. Bring your
experience and your perspective to Internet policy discussions and
help protect noncommercial users of the Internet by participating at
ICANN via the NCUC. Join today:
http://icann-ncuc.ning.com/main/authorization/signUp?
GLOSSARY OF ICANN ACRONYMS
ALAC - At-Large Advisory Committee
ICANN's At-Large Advisory Committee (ALAC) is responsible for
considering and providing advice on the activities of ICANN, as
they relate to the interests of individual Internet users (the
"At-Large" community).
gTLD - Generic Top Level Domain
Most TLDs with three or more characters are referred to as "generic"
TLDs, or "gTLDs". They can be subdivided into two types, "sponsored"
TLDs (sTLDs) and "unsponsored" TLDs (uTLDs), as described in more
detail below.
In the 1980s, seven gTLDs (.com, .edu, .gov, .int, .mil, .net, and
.org) were created. Domain names may be registered in three of these
(.com, .net, and .org) without restriction; the other four have
limited purposes. Over the next twelve years, various discussions
occurred concerning additional gTLDs, leading to the selection in
November 2000 of seven new TLDs for introduction. These were
introduced in 2001 and 2002. Four of the new TLDs (.biz, .info, .name,
and .pro) are unsponsored. The other three new TLDs (.aero, .coop, and
.museum) are sponsored.
GNSO - Generic Names Supporting Organization
The GNSO is responsible for developing policy recommendations to the
ICANN Board that relate to generic top-level domains (gTLDs).
The GNSO is currently composed of 6 constituencies: the Commercial
and Business constituency, the gTLD Registry constituency, the ISP
constituency, the Noncommercial constituency, the Registrars
constituency, and the Intellectual Property (IP) constituency.
However, the GNSO is in the process of restructuring away from a
framework of 6 constituencies to 4 stakeholder groups: Commercial,
Noncommercial, Registrar, Registry. The Noncommercial and Commercial
Stakeholder Groups together make up the “Non-contracting Parties
House” in the new bicameral GNSO, and the Registrar and Registry
Stakeholder Groups will together comprise the “Contracting Parties
House” in the new GNSO structure (beginning October 2009).
ICANN - The Internet Corporation for Assigned Names and Numbers
The Internet Corporation for Assigned Names and Numbers (ICANN) is an
internationally organized, non-profit corporation that has
responsibility for Internet Protocol (IP) address space allocation,
protocol identifier assignment, generic (gTLD) and country code
(ccTLD) Top-Level Domain name system management, and root server
system management functions.
NCUC - Noncommercial Users Constituency
The Noncommercial Users Constituency (NCUC) is the home for
noncommercial organizations and individuals in the Internet
Corporation for Assigned Names and Numbers (ICANN) Generic Names
Supporting Organization (GNSO). With real voting power in ICANN policy
making and Board selection, it develops and supports positions that
protect noncommercial communication and activity on the Internet. NCUC
works to promote the public interest in ICANN policy and is the only
noncommercial constituency in ICANN’s GNSO (there are 5 commercial
constituencies). The NCUC is open to noncommercial organizations and
individuals involved in education, community networking, public policy
advocacy, development, promotion of the arts, digital rights,
children's welfare, religion, consumer protection, scientific
research, human rights and many other areas. NCUC maintains a website
at http://ncdnhc.org.
NCSG - Noncommercial Stakeholders Group
The GNSO is in the process of being restructured from “6
constituencies” to “4 stakeholder groups”, including a Noncommercial
Stakeholders Group (NCSG) to which all noncommercial organizations
and individuals, including members of the Noncommercial Users
Constituency (NCUC), will belong for policy development purposes. The NCSG and
the Commercial Stakeholder Group (CSG) will together comprise the
“Non-contracting Parties House” in the new bicameral GNSO structure
beginning October 2009.
LINKS TO BACKGROUND INFORMATION:
NCUC Letter to ICANN Board and CEO on NCSG Charter Controversy:
http://bit.ly/BiOg8
Noncommercial Users Constituency (NCUC):
http://ncdnhc.org
NCUC-submitted NCSG charter proposal:
http://gnso.icann.org/en/improvements/ncsg-petition-charter.pdf
Robin Gross on “Is ICANN Accountable to the Public Interest?”:
http://ipjustice.org/ICANN/NCSG/NCUC-ICANN-Injustices.html
ICANN GNSO Chair Avri Doria on “Why I Joined the NCUC”:
http://tiny.cc/EPDtx
Internet Governance Project: “4 ICANN Board members dissent in vote on
NCSG charter”:
http://tiny.cc/S5CjP
2006 London School of Economics Independent Report on GNSO:
http://www.icann.org/en/announcements/announcement-15sep06.htm