Pretty much everyone agrees that films were bolder and more iconoclastic during the period commonly referred to as New Hollywood, between about 1965 and 1980. Especially at the beginning of this period, cinema was heavily influenced by the counterculture, the civil rights and antiwar movements, and the feeling that a new world was within our grasp. Unfortunately, certain material processes were already in motion that would undermine those cultural victories in due time. In 1971, Hunter S. Thompson described the mood this way:
Despite Close Encounters recouping its production budget in the second week of wide release (eventually making $300 million at the box office), Begelman was forced to resign from Columbia Pictures within a year. The culmination of a series of power struggles within Columbia that matched those in the industry at large, his resignation signaled the end of New Hollywood and the beginning of the Blockbuster era.
Investors regard each film they buy and sell as a commodity, no different from a microwave. Like any modern corporation, studios today are highly attentive to the redundancies and inefficiencies inherent in Hollywood and seek in every instance to excise them to cut costs. The result is higher profits but also more predictable entertainment.
In 1948, the Supreme Court ruled that the vertically integrated Big Five studios' control of distribution and exhibition violated antitrust law, breaking them up and forcing them to give up their theaters. Coupled with the suburbanization of America, which saw people going to theaters less and watching television at home more, this effectively killed the Hollywood studio system of the time.
On the one hand, breaking up the Big Five was good for competition among smaller studios. But on the other hand, when the studios owned their films from financing to the box office, there were fewer ways for investment banks or outside capital to influence them. Following the break-up, investment bankers seized the opportunity.
Columbia was able to stave them off for a few more decades only because it had acquired a television subsidiary that allowed it to offset its losses in film without having to cede power to investors. If a slate of films did poorly, Columbia could just license some films out to television, and it would maintain a profit for that year.
In 1973, Allen & Company took over Columbia, making Alan Hirschfield CEO and David Begelman head of the Motion Pictures Division. The bankers had taken control of Columbia, but they could not exert total control over film production, because they still needed the studio to be overseen by someone with good creative instincts, in this case Begelman.
Predicting whether a greenlit film will become a hit is an art, not a science. It requires more than a bit of luck and intuition. This instinct is anathema to functionaries used to reducing inefficiencies at major corporations. A Harvard business grad, Hirschfield had a more risk-averse approach to filmmaking, preferring to invest sensible amounts of money in films and in diverse future revenue streams that would guarantee returns.
Begelman had the opposite sensibility. A former talent agent for stars like Judy Garland and Barbra Streisand, he was used to spotting and cultivating the "it" factor, that indescribable but universal quality. In some sense, the clash between these two figures represents the identity crisis Hollywood has faced ever since.
New media streaming platforms are a culmination of techniques for risk management in the industry. Among other things, they represent a radical break with previous measures of success. If a Warner Brothers movie bombs, everyone knows it, whereas streamers care more about subscriber retention than box-office returns. Thus, the streaming platforms are awash in a constant churn of content, seeking to entice and retain viewers with novelty.
Netflix is well known for canceling television shows after two or three seasons, with observable effects. A show that might have taken five or six seasons on a network must accelerate its plot if it hopes to finish its arc, leading to a new writing style that is often rushed, condensed, and constrained.
From the 1980s to today, the film industry has had to contend with financial constraints and innovations that have had an undeniable effect on the quality of films produced. Through market research and data analytics, it has been able to more easily manage the inherent risk of the industry at the cost of the films themselves.
Serious concerns about AI risk are often framed as completely discontinuous with rogue AI as depicted in fiction and in the public imagination; I think this is totally false. Rogue AI makes for a plausible sci-fi story for the exact same high-level reasons as it is an actual concern:
These two statements are obviously at least plausible, which is why there are so many popular stories about rogue AI. They are also the reason AI might, in real life, bring about an existential catastrophe. If you are trying to communicate to people why AI risk is a concern, why start off by undermining their totally valid frame of reference for the issue, making them feel stupid, uncertain, and alienated?
This may seem like a trivial matter, but I think it is of some significance. Fiction can be a powerful tool for generating public interest in an issue, as Toby Ord describes in the case of asteroid preparedness as part of his appearance on the 80,000 Hours Podcast:
Eliezer Yudkowsky: I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish on this topic. (laughs) Nobody on the sane alignment-is-necessary side of this argument is postulating that the CPUs are disobeying the laws of physics to spontaneously acquire a terminal desire to do un-nice things to humans. Everything here is supposed to be cause and effect.
Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.
The plot of The Terminator is not mostly about the creation of Skynet, but about a time-traveling cyborg assassin. This is obviously not at all realistic, and is a key part of why the movie is scorned by serious people.
Superintelligence aside, The Terminator is a fairly faithful depiction of a Yudkowsky/Bostrom-style fast takeoff scenario, in which a single AI system quickly becomes competent enough to endanger humanity and is instrumentally motivated to do so. Other failure modes, however, are considered more likely by others working on AI risk.
Human reliance on these systems, combined with the systems failing, leads to a massive societal breakdown. And in the wake of the breakdown, there are still machines that are great at persuading and influencing people to do what they want, machines that got everyone into this catastrophe and yet are still giving advice that some of us will listen to.
Dylan seems to think that when Paul describes AIs seeking influence, Paul means persuasive influence over people. This is a misunderstanding. Paul is using "influence" to mean influence over resources in general, including military power. He explicitly states as much, replying to a comment that points out the mischaracterization in the Vox article:
Yes, I agree the Vox article made this mistake. Me saying "influence" probably gives people the wrong idea so I should change that---I'm including "controls the military" as a central example, but it's not what comes to mind when you hear "influence." I like "influence" more than "power" because it's more specific, captures what we actually care about, and less likely to lead to a debate about "what is power anyway."
In general I think the Vox article's discussion of Part II has some problems, and the discussion of Part I is closer to the mark. (Part I is also more in line with the narrative of the article, since Part II really is more like Terminator. I'm not sure which way the causality goes here though, i.e. whether they ended up with that narrative based on misunderstandings about Part II or whether they framed Part II in a way that made it more consistent with the narrative, maybe having been inspired to write the piece based on Part I.)
There are yet other views about what exactly AI catastrophe will look like, but I think it is fair to say that the combined views of Yudkowsky and Christiano provide a fairly good representation of the field as a whole.
It would be terrible if AI destroys humanity. It would also be very embarrassing. The Terminator came out nearly 40 years ago; we will not be able to claim we did not see the threat coming. How is it possible that one of the most famous threats to humanity in all of fiction is also among the most neglected problems of our time?
To resolve this tension, I think many people convince themselves that the rogue AI problem as it exists in fiction is totally different from the problem as it exists in reality. I strongly disagree. People write stories about future AI turning on humanity because, in the future, AI might turn on humanity.