<div>The basic structure is a familiar one for those who've found themselves drawn into the insular world of producer-driven hip-hop records by Madlib, MF Doom, and Prince Paul. It's a sequence of instrumentals, each averaging around a minute and a half in length, spliced together with lines of dialogue from records or films taken out of context as if to hint at an artistic or personal philosophy that, we presume, the artist shares. Or perhaps he just thinks they're funny. It depends on how seriously you intend to take him. If you don't take his art particularly seriously, though, it might be a difficult record to enjoy. Sequence is important; promos of the record were sent out as single-track mixes, rather than divided into songs. The message was clear: Much like a DJ mix, there is a method to his madness, a way the artist intends his listeners-- or at least, critics-- to experience his art. Unlike a dance mix, though, there isn't a dancefloor to please, nor is there an overarching pressure to push toward or against populist tastes; the only underlying logic is what Alchemist wants us to hear. As a result, the album takes a meandering path through a museum of different eras and influences and sonic timbres-- all of which, rapping aside, seem to originate in the 1970s or 80s.</div><div></div><div></div><div>A series of rappers does appear on the tape, but it bears little resemblance to the producer's previous two LPs. Alchemist does tend to work with rappers with distinctive rap styles, if not always personalities, which helps keep things buoyant. But too often the rapping on Russian Roulette feels like just another instrumental texture, lacking in purposefulness. More than a few of the rappers, from longtime Alchemist acquaintance Evidence to newcomer Chuuwee, have lyrical approaches that border on filler. Some artists stand apart: MidaZ's flow on "Don Seymour's Theme" seems to echo Big Pun's syllable density with an avalanche of spit-stained rapping. Danny Brown, per usual, explodes from the track with a visceral shout-flow. Fashawn might have an overly familiar vocal style, but he also has a well-constructed verse ("This is cloud rap off a loud pack/ Committing foul acts with a wild batch..."). Many of the rappers on the tape seem more interested in sloppily packing together disconnected images and ideas, like Ghostface without the imagination or execution; Boldy James' narcotic delivery stands apart simply for taking his time and completing his thoughts in a straightforward manner. Final kudos go to Big Twin, whose gravelly delivery is almost always a welcome texture on an Alchemist tape, and Mr. Muthafuckin' eXquire, whose play on a classic Biggie verse is surreal in a goofily entertaining way.</div><div></div><div></div><div>My conclusion will be that most of the items on Bostrom's laundry list are not 'convergent' instrumental means, even in this weak sense. 
If Sia's desires are randomly selected, we should not give better than even odds to her making choices which promote her own survival, her own cognitive enhancement, technological innovation, or resource acquisition.</div><div></div><div></div><div>Section 4 deals with sequential decisions, but for some reason mainly gets distracted by a Newcomb-like problem, which seems irrelevant to instrumental convergence. I don't see why you didn't just remove Newcomb-like situations from the model? Instrumental convergence will show up regardless of the exact decision theory used by the agent.</div><div></div><div></div><div>Thanks for the read and for the response.</div><div></div><div></div><div></div><div>>None of your models even include actions that are analogous to the convergent actions on that list.</div><div></div><div></div><div></div><div>I'm not entirely sure what you mean by "model", but from your use in the penultimate paragraph, I believe you're talking about a particular decision scenario Sia could find herself in. If so, then my goal wasn't to prove anything about a particular model, but rather to prove things about every model.</div><div></div><div></div><div></div><div>>The non-sequential theoretical model is irrelevant to instrumental convergence, because instrumental convergence is about putting yourself in a better position to pursue your goals later on.</div><div></div><div></div><div></div><div>Sure. I started with the easy cases to get the main ideas out. Section 4 then showed how those initial results extend to the case of sequential decision making.</div><div></div><div></div><div></div><div>>Section 4 deals with sequential decisions, but for some reason mainly gets distracted by a Newcomb-like problem, which seems irrelevant to instrumental convergence. 
I don't see why you didn't just remove Newcomb-like situations from the model?</div><div></div><div></div><div></div><div>I used the Newcomb problem to explain the distinction between sophisticated and resolute choice. I wasn't assuming that Sia was going to be facing a Newcomb problem. I just wanted to help the reader understand the distinction. The distinction is important, because it makes a difference to how Sia will choose. If she's a resolute chooser, then sequential decisions reduce to a single non-sequential decision. She just chooses a contingency plan at the start, and then sticks to that contingency plan. Whereas if she's a sophisticated chooser, then she'll make a series of non-sequential decisions. In both cases, it's important to understand how she'll choose in non-sequential decisions, which is why I started off thinking about that in section 3.</div><div></div><div></div><div></div><div>>It seems clear to me that for the vast majority of the random utility functions, it's very valuable to have more control over the future world state. So most sampled agents will take the instrumentally convergent actions early in the game and use the additional power later on.</div><div></div><div></div><div></div><div>I am not at all confident about what would happen with randomly sampled desires in this decision. But I am confident about what I've proven, namely: if she's a resolute chooser with randomly sampled desires, then for any two contingency plans, Sia is just as likely to prefer the first to the second as she is to prefer the second to the first.</div><div></div><div></div><div></div><div>When it comes to the 'power-seeking' contingency plans, there are two competing biases. On the one hand, Sia is somewhat biased towards them for the simple reason that there are more of them. If some early action affords more choices later on, then there are going to be more contingency plans which make that early choice. 
On the other hand, Sia is somewhat biased against them, since they are somewhat less predictable---they leave more up to chance. </div><div></div><div></div><div></div><div>I've no idea which of these biases will win out in your particular decision. It strikes me as a pretty difficult question.</div><div></div><div></div><div></div><div></div><div></div><div></div><div>Almost all optimal action-sequences could contain "improve-technology" at the beginning, while, for any two particular action sequences, each is equally likely to be preferred to the other on average across desires. These two facts don't contradict each other. The first fact is true in many environments (e.g. the one I described[2]) and this is what we mean by instrumental convergence. The second fact is unrelated to instrumental convergence.</div><div></div><div></div><div>Sorry this is pretty messy feedback. It's late and I didn't understand this paper very much. Insofar as I am somebody who you want to read + understand + update from your paper, that may be worth addressing. After some combination of skimming and reading, I have not changed my beliefs about the orthogonality thesis or instrumental convergence in response to your paper. 
Again, I think this is mostly because I didn't understand key parts of your argument.</div><div></div><div></div><div>This is actually interesting, because it implies that instrumental convergence is too weak, on its own, to be much of an argument for AI x-risk without other assumptions. That is notable, since I was arguing against the inevitability of instrumental convergence: enough space for essentially unbounded instrumental goals is essentially useless for capabilities, compared to a lack of instrumental convergence, or perhaps a very bounded form of it.</div><div></div><div></div><div>On the one hand, this makes my argument less important, since instrumental convergence mattered less than I believed it did, but on the other hand it means that a lot of LW reasoning is probably invalid, not just unsound, because it incorrectly assumes that instrumental convergence alone is sufficient to predict very bad outcomes.</div><div></div><div></div><div>And in particular, it implies that LWers, including Nick Bostrom, incorrectly applied instrumental convergence as if it were somehow a good predictor of future AI behavior, beyond very basic behavior.</div><div></div><div></div><div>It seems you found one terminal goal which doesn't give rise to the instrumental subgoal of self-preservation. Are there others, or does basically every terminal goal benefit from instrumental self-preservation except for suicide?</div><div></div><div></div><div>Sia is biased towards choices which allow for more choices---but this isn't the same thing as being biased towards choices which guarantee more choices. Consider a resolute Sia who is equally likely to choose any contingency plan, and consider the following sequential decision. At stage 1, Sia can either take a 'safe' option which will certainly keep her alive or she can play Russian roulette, which has a 1-in-6 probability of killing her. If she takes the 'safe' option, the game ends. 
If she plays Russian roulette and survives, then she'll once again be given a choice to either take a 'safe' option of definitely staying alive or else play Russian roulette. And so on. Whenever she survives a game of Russian roulette, she's again given the same choice. All else equal, if her desires are sampled normally, a resolute Sia will be much more likely to play Russian roulette at stage 1 than she will be to take the 'safe' option.</div><div></div><div></div><div>I wonder how one could test whether or not the models bind to reality? E.g. maybe there are case examples (of agents/people behaving in instrumentally rational ways) one could look at, and see if the models postdict the actual outcomes in those examples?</div><div></div><div></div><div>Instrumental Convergence? questions the argument that a rational agent, regardless of its terminal goal, will seek power and dominance. While there are instrumental incentives to seek power, it is not always instrumentally rational to seek it. There are incentives to become a billionaire, but it is not necessarily rational for everyone to try to become one. Moreover, in multi-agent settings, AIs that seek dominance over others would likely be counteracted by other AIs, making it often irrational to pursue dominance. Pursuing and maintaining power is costly, and simpler actions often are more rational. Lastly, agents can be trained to be power averse, as explored in the MACHIAVELLI paper.</div>
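<div>The symmetry claim discussed above (that a Sia with randomly sampled desires is just as likely to prefer either of two contingency plans) can be illustrated with a small Monte Carlo sketch. This is a toy model of my own construction, not the paper's formal setup: outcomes receive i.i.d. standard-normal utilities, and two fixed contingency plans, represented as lotteries over outcomes, are compared by expected utility.</div>

```python
import random

# Toy model (an illustrative assumption, not the paper's formalism):
# four outcomes, utilities sampled i.i.d. from a standard normal,
# two fixed contingency plans given as lotteries over the outcomes.
PLAN_A = [0.7, 0.1, 0.1, 0.1]  # say, a "survival-promoting" plan
PLAN_B = [0.1, 0.1, 0.1, 0.7]  # a rival plan

def expected_utility(plan, utilities):
    return sum(p * u for p, u in zip(plan, utilities))

def preference_rates(trials=100_000, seed=0):
    """Estimate how often each plan is strictly preferred under random desires."""
    rng = random.Random(seed)
    a_wins = b_wins = 0
    for _ in range(trials):
        utilities = [rng.gauss(0, 1) for _ in range(len(PLAN_A))]
        ua = expected_utility(PLAN_A, utilities)
        ub = expected_utility(PLAN_B, utilities)
        if ua > ub:
            a_wins += 1
        elif ub > ua:
            b_wins += 1
    return a_wins / trials, b_wins / trials

rate_a, rate_b = preference_rates()
print(rate_a, rate_b)  # both rates hover around 0.5
```

<div>Under these assumptions the difference in expected utilities is a mean-zero normal variable, so neither plan is preferred more often than the other. Whether this symmetry carries over to richer sequential settings is exactly what the exchange above is about.</div>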
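<div>The counting argument behind the Russian-roulette example above can be made concrete by enumerating contingency plans. The sketch below rests on one assumption of mine (not spelled out in the thread): a plan only specifies choices at stages Sia could actually reach alive, and the game is truncated after a fixed number of stages.</div>

```python
def enumerate_plans(stages):
    """All contingency plans for the truncated repeated Russian-roulette game.

    At each stage she reaches alive, a plan says either 'safe'
    (which ends the game, so no further choices arise) or 'play'.
    """
    if stages == 0:
        return [()]                      # no choices left to specify
    plans = [('safe',)]                  # take the safe option now; game over
    for rest in enumerate_plans(stages - 1):
        plans.append(('play',) + rest)   # play now, then follow `rest`
    return plans

plans = enumerate_plans(6)
play_first = [p for p in plans if p[0] == 'play']
print(len(plans), len(play_first))  # 7 plans in total, 6 of which begin with 'play'
```

<div>A resolute Sia who picks uniformly among these plans plays Russian roulette at stage 1 with probability 6/7: playing keeps later contingencies open, so more plans begin that way. That is the first of the two competing biases described above; the second bias, against less predictable plans, pulls the other way once her sampled desires are taken into account.</div>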