Posts

Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading 2017-03-25T19:29:42.499Z · score: 1 (2 votes)

Comments

Comment by redman on Cognitive Enhancers: Mechanisms And Tradeoffs · 2018-10-24T11:53:26.531Z · score: 3 (2 votes) · LW · GW

Here's a PSA kids: https://www.ecstasydata.org/view.php?id=6629&mobile=1

Ecstasydata is a service that promotes testing your stuff before you use it, and will run GC/MS on samples you send them. I'm not sure whether there's a nominal fee; I've never used the service.

This entry is a Berkeley, CA submission of an 'Adderall tablet' purchased on the internet, which turned out to be meth.

The popularity of gray- and black-market study drugs has apparently been recognized by dealers, who are using it to push methamphetamine into a market (intellectual overachievers) where it ordinarily would not exist.

Smart people are usually very good at rationalizing things, like, you know, addictions, so yeah, this is insidious; please signal-boost it if you're a nootropic user.

Comment by redman on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-27T04:26:43.268Z · score: 0 (2 votes) · LW · GW

Congrats, you rediscovered the rationale for the Atomic Energy Act of 1954, and postulated the logical problems stemming from it.

Way to go guys.

Comment by redman on Resolving the Dr Evil Problem · 2018-06-10T19:11:11.152Z · score: 10 (5 votes) · LW · GW

Push button. Tortured self will take comfort in the fact that free self is coming for the martians next.

Comment by redman on Challenges to Christiano’s capability amplification proposal · 2018-05-19T21:06:40.982Z · score: 3 (1 votes) · LW · GW

What standard is your baseline for a safe AGI?

If it is 'a randomly generated AGI meeting the safety standard is no more dangerous than a randomly selected human intelligence', this proposal looks intended to guarantee performance no worse than average-case human alignment.

Not sure it hits that target, but it looks like it's aiming for it. I understand your argument to be that the worst-case AI alignment in the scheme could be as bad as the worst human, amplified, and that you have no way of assessing the average or worst-case alignments prior to firing up the machine.

The HI (Human Intelligence) alignment/safety problem is presently unsolved, in that it is impossible to predict the future alignment of a specific human with absolute certainty. This is awkward for many industries that require high-reliability humans. I have long suspected that the AGI alignment problem will ultimately reduce to a case of the HI alignment problem (take an HI, give it infinite capability to both act on the world and hide its actions, along with instantaneous cognition, and now you have an AGI-equivalent).

The default solution, per a paper I've seen about the IQ-based communication barrier (no citation handy), is essentially 'humans reflexively mistrust other humans with 30 or more additional IQ points'.

The challenges created by the possibility of a misaligned HI can obviously be solved locally by the model implemented in Equatorial Guinea in the 1970s: https://en.m.wikipedia.org/wiki/Francisco_Macías_Nguema but this denies us the benefits of their potential outputs.

If you could snap your fingers and build an AGI tomorrow, knowing the state of alignment research, would you do it?

How risk tolerant are you relative to other humans who could enable emergence of AGI?

Comment by redman on Henry Kissinger: AI Could Mean the End of Human History · 2018-05-16T13:27:56.563Z · score: 8 (2 votes) · LW · GW

Based on the content of the initial op-ed, I am confident in my assertion.

Based on long familiarity with Kissinger's work, he knows that not even he is immune to the Dunning-Kruger effect and takes steps to mitigate it. I assess that this op-ed was written after an extremely credible effort to inform himself on the state of the field of AI. Unfortunately, based on my analysis of the content of the op-ed, that effort either failed to identify the AI safety community of practice, or determined that its current outputs were not worth detailed attention.

Kissinger's eminence is unquestionable, so the fact that up-to-date ideas about AI safety were not included is indicative of a problem with the AI safety / x-risk community of practice's ability to show relevance to people who can actually take meaningful action based on its conclusions.

If your primary concern in life is x-risk from technology, and the guy who literally once talked the military into 'waiting until the President sobered up' to launch the nuclear war the President ordered is either unaware of your work or doesn't view it as useful, then either you have not effectively marketed yourself, or your work is not useful.

Comment by redman on Henry Kissinger: AI Could Mean the End of Human History · 2018-05-16T07:27:59.595Z · score: 3 (1 votes) · LW · GW

Can you think of an AI catalyzed x risk where technologies that worsen risk are likely to succeed in the market, while technologies that reduce it are unlikely to succeed in the market due to coordination or capital problems?

If the answer is yes, you need either the government or philanthropy.

Comment by redman on Henry Kissinger: AI Could Mean the End of Human History · 2018-05-16T07:21:24.937Z · score: 3 (3 votes) · LW · GW

See below

Comment by redman on Terrorism, Tylenol, and dangerous information · 2018-05-16T02:41:18.809Z · score: 3 (1 votes) · LW · GW

How would you suggest disclosing a novel x risk, or a novel scheme for an AGI that the originator believes has a reasonable chance of succeeding?

Comment by redman on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-16T02:36:33.635Z · score: 5 (3 votes) · LW · GW

Questions about priors:

Are you assessing that the rate of false accusations from an individual is correlated positively, negatively, or not at all with the rate of actual rapes experienced by those individuals?

Do you model the probability of an individual experiencing abuse as random, negatively correlated with prior abuse, or positively correlated with prior abuse?

Does adjusting these priors affect the assessment you made above?

Relevant: http://slatestarcodex.com/2017/10/02/different-worlds/ section three paragraph 7 on serial abuse victims. There's probably more to say, but I think that some combinations of answers to the above questions lend support to the OP.

Comment by redman on Henry Kissinger: AI Could Mean the End of Human History · 2018-05-15T21:38:03.801Z · score: 1 (5 votes) · LW · GW

See below

Comment by redman on A Detailed Critique of One Section of Steven Pinker’s Chapter “Existential Threats” in Enlightenment Now (Part 2) · 2018-05-15T08:42:53.063Z · score: 2 (1 votes) · LW · GW

The space warre scenario in the linked paper is a logical consequence of extant technologies, with increases in scale rather than major technical leaps being the primary hurdle.

We can either get good at solving collective action problems like the space-warre scenario, or we can fail to make it to space.

Comment by redman on AI Alignment is Alchemy. · 2018-05-15T08:17:46.826Z · score: 0 (3 votes) · LW · GW

I am not an AI, but I am an I.

I dislike repetitive physical activities like jogging and weightlifting. I do the bare minimum to stay in shape and try to focus on less repetitive or more technical physical tasks for my fitness. Arguably, the act of high volume human to human data transfer is a very repetitive physical task, as the major muscles of my body are coordinating the same intense motion over and over, for bursts of time that frequently extend well beyond a typical 'workout'.

For whatever reason, possibly having to do with my 'alignment', that one specific activity does not feel boring or tedious. While I recognize that this is probably related to an evolutionary urge, and that some relatively simple mechanical or pharmacological interventions could probably destroy this bizarre and irrational urge, I prefer to keep it intact, possibly due to those same evolutionary urges mixed with cultural pressures and sentimentality.

Plus my dong is gorgeous, like seriously, it's art.

I would like to think that an aligned AI would make irrational decisions to further the interests of humans in the same way that I make irrational decisions to help pretty ladies achieve fulfillment and emotional well-being.

Comment by redman on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-15T07:54:52.798Z · score: 17 (5 votes) · LW · GW

Charities that work with victims of torture rarely put actual victims in front of cameras to try to drive donations, as they're rarely sympathetic, in large part due to visible mental and physical consequences of the abuse they suffered. Adults who were victims of severe and prolonged child abuse are a good example of this as well.

Additional datapoint, researchers studying 'ability to read emotions in faces' found that incarcerated serial rapists were on average the best at it, and victims of sexual assault were among the worst. If I remember correctly, the paper contained a categorical refusal to speculate further about a predator-prey dynamic.

Comment by redman on Personal relationships with goodness · 2018-05-15T06:47:13.681Z · score: 7 (2 votes) · LW · GW

Relevant and still funny: https://en.m.wikipedia.org/wiki/The_Goode_Family

Every study I'm aware of investigating the topic shows that 'doing good' mentally buys people rationalizations for bad behavior. So I would say, the difficulty of doing good increases as more good is done, and the slope of that rate of increase is so steep that most people end up pretty close to 'even' between doing good and doing not-good. Here's a recent popular press article referencing academic research: http://m.nautil.us/blog/why-doing-good-makes-it-easier-to-be-bad

Charity, particularly self-directed charity performed as penance, is a downright disgusting practice when viewed in this light.

I try to have good habits, like recycling, so I am 'good without thinking about it'.

Comment by redman on Terrorism, Tylenol, and dangerous information · 2018-05-14T09:53:17.613Z · score: 1 (1 votes) · LW · GW

"unless it can be reasonably established that the information is oddly asymmetric"

How often do you think these ideas come along? Define the danger of an idea by logs of deaths per incident (the upper bound of a typical event caused by 'a few' people), and the frequency of generation by an annual (decadal? monthly?) rate.

Comment by redman on Terrorism, Tylenol, and dangerous information · 2018-05-14T09:34:35.680Z · score: 8 (3 votes) · LW · GW

I've spent many years with this issue.

Luckily for the forces of law and order, we have a few things going for us. Number one, effective people with good judgement generally find ways of resolving their grievances that do not involve terrorism, and these types of people are required to scale any idea. Two, the world's security services are tending towards more, not less, effectiveness and generally prioritize shooting members, and would-be members, of terrorist groups' technical staffs. Three, terrorist groups tend to develop along predictable lines, with predictable organizational structures and predictable personalities in various roles; these predictable features are not conducive to technical achievement.

Over-valuing received wisdom and spending inordinate amounts of time reinforcing ideology distract from effective engineering development. Non-technical management staff, waterfall development, a fear of innovation emerging from outside the (non-technical) gurus who produce the ideological ideas, and the insularity driven by both ideological and security requirements really do hamper effective engineering.

The movie Four Lions is by far the best movie I've seen on terrorism. For whatever reason, people don't like to believe me when I tell them this. 'The Joker' or 'mad evil genius' archetypes, though common in movies, are extremely rare in practice.

Rather than a dollar scale of damage, I like to think of a technique's danger on a log scale, meaning: how many logs of human lives could this attack destroy at the high end?

Morons will always be able to hit 0; anyone at all with a little thought could hit 1. 2 is challenging, but unfortunately achievable by someone acting alone. 3 logs is 9/11.
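Put as a rough formula (just a restatement of the scale above, treating 'logs' as orders of magnitude of fatalities):

$$
\text{danger} \approx \left\lfloor \log_{10}(\text{plausible worst-case deaths}) \right\rfloor, \qquad
\log_{10}(1)=0,\quad \log_{10}(10)=1,\quad \log_{10}(100)=2,\quad \log_{10}(3{,}000)\approx 3.5 .
$$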

Truck attacks probably cap out at 1 log, so in an absolute sense they are probably better, from the perspective of the security services, than attacks involving explosives, which have hit two logs with alarming frequency.

When it comes to 'evil psychology', most people, when 'thinking like the baddie', start from a place of 'if I were exactly who I am today, and set the goal of [X], how would I go about doing it? Obviously I'd have to just turn off my emotions and pretend I'm a psychopath to start'.

This creative thinking often leads to anxiety (when interesting 1-2 log ideas are generated), and confusion that usually goes something like this: 'my goodness, it would be so easy to [do X], we should be terrified that someone could figure it out!'

This fails to take into account that, on the one hand, 'psychopathic' terrorists are rare because psychopaths are notoriously unable to execute plans that require discipline over a long period of time, and on the other, that a person who is pursuing a terrorist attack is often in a mental and emotional state that is very, very different from yours. The terrorist may be prioritizing things that, in your view, would seem counterproductive to the point of being downright stupid.

As an example, I recall a school shooting (don't make me look it up, there have been a whole mess of them) where one of the victim accounts described the result of an authority figure shouting something to the effect of 'I have a wife and kids' to the shooter through a door. A normal person, in a normal state of mind, would find that to be a sympathetic, though not necessarily persuasive, message. The shooter, whose motives were later ascribed to frustration at his own inability to achieve things like a girlfriend or family, responded to this plea with a hail of gunfire. Though we cannot know what he was thinking, I assess that the shooter may have mentally appended 'and YOU never will' to the victim's statement, and responded with rage and frustration to the perceived insult. A separate school shooter was apparently defused when a female teacher repeated 'I love you' over and over to him as he entered the building with a gun. Note: the suggestion that 'if we have a mass shooting, one of the cute but not intimidatingly hot girls should try mustering up as much sincerity as she can and repeat I love you to the shooter' probably will not go over well at your office, for cultural reasons.

Anyway, if you are nervous about any of your ideas that you're thinking about releasing on the internet, feel free to PM me and I'll be happy to help you work through the logic. If your proposed technique can do one log of harm, it's probably fun to talk about in public and unlikely to make the world worse; two and up might require some discretion when it comes to the technical details (everyone loves finding a dead terrorist splattered across his apartment), but I would err on the side of disclosure in general terms, particularly if the technique is novel or simple, as the people who are best positioned to spot an attack in its incipient stages probably are not security professionals. If you can reliably generate ideas that you're sure can hit three or more, for the sake of your health, I suggest avoiding participation in radical politics.

Comment by redman on The reverse job · 2018-05-14T07:54:33.751Z · score: 15 (6 votes) · LW · GW

Students of mafias (and taxation) find that 20-30% of income is the point where negative effects on a business's viability begin to show. I don't have citations handy.

Former Pakistani president Zardari was known as Mr. 10%. Christian churches have a tradition of 10% tithes, and some successful churches use 'status as one who tithes' as a necessary criterion for (unpaid) positions of trust and responsibility within the organization of the church.

According to the film 'American Pimp', pimps in general demand 100% from their workers, rationalized with (paraphrasing to remove colorful language) 'I post 100% of bail, and provide 100% of a roof, so they need to be out earning 100%'.

I react negatively to the 50% ask, and upon reflection, suspect that this emotional reaction may be due to a feeling that this obligation resembles the one inflicted by an ex-wife in the American system.

Good luck!

Comment by redman on Popular religions suggest extrapolated volition is non-existence and wireheading · 2018-02-26T19:34:53.312Z · score: 0 (0 votes) · LW · GW

The last paragraph, read without the context of the rest of the post, sounds like the monologue a movie villain would deliver to his girlfriend/accomplice immediately before savagely murdering a minor character to whom he had just shown some affection.

A rationalist theology could include elements of most others, I think.

Eternal life away from your body? Working on the software, come back in a few decades. Mind melding with other human souls? Hardware is mostly there, but see above. Eternal bliss? We have it, just try to feed yourself. Extinguishment? Sure, go for it, that's old tech. All seeing entity that manipulates the world around you? Sup google. Hell if you don't convert? Basilisks don't exist.

Comment by redman on Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering · 2018-01-08T02:10:42.380Z · score: 0 (0 votes) · LW · GW

An ethical injunction doesn't work for me in this context; killing can be justified with plenty of baser motives than 'preventing infinity suffering'.

So, instead of a blender, I could sell hats with tiny brain pulping shaped charges that will be remotely detonated when mind uploading is proven to be possible, or when the wearer dies of some other cause. As long as my marketing reaches some percentage of people who might plausibly be interested, then I've done my part.

I assess that the number is small, and that anyone seriously interested in such a device likely reads LessWrong, and may be capable of making some arrangement for brain destruction themselves. So, by making this post and encouraging a potential upload to pulp themselves prior to upload, I have some >0 probability of preventing infinity suffering.

I'm pretty effectively altruistic, dang. It's not even February.

I prefer your borg scenarios to individualized uploading. I feel like it's technically feasible using extant technology, but I'm not sure how much interest there really is in mechanical telepathy.

Comment by redman on Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering · 2018-01-05T13:32:59.969Z · score: 1 (1 votes) · LW · GW

Thank you for the detailed response!

I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: 'effort should be made to expend resources on preventing suffering, maximize the ratio of suffering avoided to cost expended'

I interpret your paper as rejecting the argument advanced by Prof. Hanson that if, among all future variants of you, the number enjoying 'heaven' vastly outnumbers the copies suffering 'hell', then on balance uploading is a good. Based on your paper's citation of Omelas, I assert that you would weight 'all future heaven copies' in aggregate, and all future hell copies individually.

So if the probability of one or more hell copies of an upload coming into existence for as long as any heaven copy exceeds the probability of a single heaven copy existing long enough to outlast all the hell copies, that person's future suffering will eventually exceed all suffering previously experienced by biological humans. Under the EA philosophy described above, this creates a moral imperative to prevent that scenario, possibly with a blender.

If uploading tech takes the form of common connection and uploading to an 'overmind', this can go away--if everyone is Borg, there's no way for a non-Borg to put Borg into a hell copy, only Borg can do that to itself, which is, at least from an EA standpoint, probably an acceptable risk.

At the end of the day, I was hoping to adjust my understanding of EA axioms, not be talked down from chasing my friends around with a blender, but that isn't how things went down.

SF is a tolerant place, and EAs are sincere about having consistent beliefs, but I don't think my talk title "You helped someone avoid starvation with EA and a large grant. I prevented infinity genocides with a blender" would be accepted at the next convention.

Comment by redman on Paper: Superintelligence as a Cause or Cure for Risks of Astronomical Suffering · 2018-01-05T00:15:25.415Z · score: 1 (1 votes) · LW · GW

Curious about your take on my question here: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/ Awesome paper.

Comment by redman on The map of "Levels of defence" in AI safety · 2018-01-04T19:17:21.489Z · score: 0 (0 votes) · LW · GW

I hadn't thought about it that way.

I do think that either compile-time flags for the AI system, or a second 'monitor' system chained to the AI system in order to enforce the named rules, would probably limit the damage.

The broader point is that probabilistic AI safety is probably a much more tractable problem than absolute AI safety, for a lot of reasons; to further the nuclear analogy, emergency shutdown is probably a viable safety measure for a lot of the plausible 'paperclip maximizer turns us into paperclips' scenarios.

"I need to disconnect the AI safety monitoring robot from my AI-enabled nanotoaster robot prototype because it keeps deactivating it" might still be the last words a human ever speaks, but hey, we tried.

Comment by redman on The map of "Levels of defence" in AI safety · 2018-01-04T02:00:03.061Z · score: 0 (0 votes) · LW · GW

Rules for an AI:

1. If an action it takes results in more than N logs of dollar damage to humans, or kills more than N logs of humans, transfer control of all systems it can provide control inputs to a designated backup (a human, a formally proven safe algorithmic system, etc.), then power down.

2. When choosing among actions that affect a system external to it, calculate the probable effect on human lives. If the probability of exceeding the N assigned in rule 1 is greater than some threshold Z, ignore that option; if no options are available, loop.

Most systems would be set to N = 1, Z = 1/10,000, giving us four 9s of certainty that the AI won't kill anyone. Some systems (weapons, climate management, emergency management dispatch systems) will need higher N scores and lower Z scores to maintain effectiveness.
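A minimal sketch of what such a monitor might look like (all names here are hypothetical and illustrative; estimating the probability of harm before acting is, of course, the genuinely hard part):

```python
import math


class HarmMonitor:
    """Illustrative sketch of the two rules above; not a real safety API."""

    def __init__(self, n_logs=1, z_threshold=1e-4, backup=None):
        self.n_logs = n_logs            # max tolerated order of magnitude of deaths/damage
        self.z_threshold = z_threshold  # max tolerated probability of exceeding n_logs
        self.backup = backup            # human or formally verified fallback controller

    @staticmethod
    def harm_logs(deaths):
        # Order of magnitude of realized harm; zero deaths means no logs at all.
        return math.log10(deaths) if deaths > 0 else float("-inf")

    def after_action(self, deaths_caused):
        # Rule 1: if realized harm exceeds N logs, hand off control and power down.
        if self.harm_logs(deaths_caused) > self.n_logs:
            self.transfer_control_and_power_down()

    def filter_options(self, options, prob_exceeds_n):
        # Rule 2: drop any option whose estimated probability of exceeding N logs
        # of harm is above Z; an empty list means "loop" (keep waiting/reassessing).
        return [o for o in options if prob_exceeds_n(o) <= self.z_threshold]

    def transfer_control_and_power_down(self):
        if self.backup is not None:
            self.backup.take_control()
        raise SystemExit("Monitor tripped: control transferred, powering down.")
```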

JFK had an N of like 9 and a Z score of 'something kind of high', and passed control to Lyndon B. Johnson, of 'I have a minibar and a shotgun in the car I keep on my farm so I can drive and shoot while intoxicated' fame. We survived that; we will be fine.

Are we done?

Comment by redman on How I accidentally discovered the pill to enlightenment but I wouldn’t recommend it. · 2018-01-04T01:45:11.947Z · score: 0 (0 votes) · LW · GW

That's great to hear, stay safe.

This sort of data was a contributor to my choice of sport for general well being: https://graphiq-stories.graphiq.com/stories/11438/sports-cause-injuries-high-school#Intro

There is truth to it: https://www.westside-barbell.com/blogs/2003-articles/extra-workouts-2

Really grateful for the info. I never could put my finger on what exactly I wasn't liking about CM when I wasn't pushing myself; the stuff is amazing for preventing exercise soreness, though.

Comment by redman on How I accidentally discovered the pill to enlightenment but I wouldn’t recommend it. · 2018-01-03T12:50:47.333Z · score: 0 (0 votes) · LW · GW

You back to trampolining yet?

Way to eat a broken bone and not seek medical attention for it. Someone I knew did about what you did and ended up having a doctor re-break and set the bone to fix things. Lots of 'newly fit' people, particularly teenagers, have your 'injury from stupidity' behavior pattern. This is one of the reasons professional athletes are banned from amateur sports by their contracts.

The great coach Louie Simmons is worth reading; he will expand your mind on your weakest-link theory.

My own conclusion on your magic enlightenment pill, based on my lived experience: super awesome when you're lifting, Fs you up a bit when you're not. Use it around intense exercise; otherwise, avoid.

Comment by redman on Heuristics for textbook selection · 2017-12-21T19:54:35.909Z · score: 0 (0 votes) · LW · GW

Correct. I have found that works written at the time when the relevant technical breakthroughs had just recently been made, by the people who made them, are often vastly superior to summary work written decades after the field's last major breakthrough.

If I remember correctly, Elon Musk cited some older texts on rocketry as his 'tree trunk' of knowledge about the subject.

This advice only applies to mature fields; in places where fundamental breakthroughs are happening regularly, it is downright awful.

Comment by redman on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-21T19:49:15.992Z · score: 0 (0 votes) · LW · GW

Assertion: Any fooming non-human AI incompatible with uplifting technology would be too alien to communicate with in any way. If you happen to see one of those, probably a good idea to just destroy it on sight.

Comment by redman on Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest” · 2017-12-05T02:34:04.054Z · score: 0 (0 votes) · LW · GW

If we can build uplifted human-ai hybrids, which eventually will be more engineered than evolved, why bother building any more baseline humans?

Comment by redman on Heuristics for textbook selection · 2017-09-07T06:54:10.193Z · score: 1 (1 votes) · LW · GW

When approaching a new field:

Google Scholar for recent papers -> select the ones that appear relevant to your query -> trace citations backwards until you find the seminal papers in the subfield -> pull the first authors' and last authors' CVs -> they will likely have written or contributed to a broad survey textbook, and may have written a specialist one on your chosen subtopic.

This can sometimes produce funny results with mature fields, where most of the major work was done decades ago. Reading high quality works by the giants of the 20th century and comparing it to more modern material can be a humbling experience for some--it certainly has been for me on more than one occasion.

Comment by redman on Thoughts on civilization collapse · 2017-05-22T23:29:05.683Z · score: 0 (0 votes) · LW · GW

Thank you for clarifying: in the long run there was stability, and we do not fully understand it. I believe my assertion still stands, though, that the transition was messy and involved the collapse of Bronze Age civilizations rather than their persistence.

My point is that new developments upended the old social order and cleared the way for the eventual rise of alternatives. Today, similar levels of destruction will be challenging to recover from, because infrastructure, once trashed, leads to things like the birth defect rate in Fallujah, not just empty space where new things can be built and battlefields that yield bumper crops.

Comment by redman on Thoughts on civilization collapse · 2017-05-22T23:23:41.282Z · score: 0 (0 votes) · LW · GW

They certainly swung. I'm not certain that they successfully imposed their will on the activities of the nation-states they attacked. Neither of them is comparable to Alaric; one is comparable to https://en.m.wikipedia.org/wiki/Bernard_Délicieux who, despite making a big scene, had no immediate or meaningful impact on the institution he rebelled against.

Do you have a better, easier example of what I've described, or do you disagree with the broad statement in addition to the specific example of Flint?

Comment by redman on Thoughts on civilization collapse · 2017-05-13T19:03:38.226Z · score: 0 (0 votes) · LW · GW

Assertion: Statement about heavy weapons in OP is incorrect.

In collapse scenarios any entity capable of bringing modern military technology with the attached organizational requirements to bear can and will dominate organizations which cannot.

In many collapse scenarios, political wrangling over who controls the institutions capable of managing that force becomes the dominant struggle. In the Venezuela of today, for example, the government is incapable of guaranteeing security or access to resources for the population at large, but is capable of staying in power. The standard scenario assumes that individuals can win against large, well-resourced militaries; this has been true at various times in the past, but is not true today.

The 'Bronze Age collapse' is instructive: when everyone learned to make iron, barbarians destroyed every hierarchy and the cities fell. Today, any technology that can have a similar effect requires specialist knowledge and access to the fruits of infrastructure (home-made explosives can be made from common industrial chemicals, but not really from things you can grow in your yard).

Destruction of social infrastructure will not create individual liberty, but it will scatter a bunch of toxic waste that will require even greater levels of development to clean up.

In Flint, MI, institutional collapse was followed by a loss of control of infrastructure, which lead (pun intended) to a collapse of control systems, and the resultant toxic pollution will destroy the population resident there without external intervention.

Bad news all around when entropy wins.

Comment by redman on The Ancient God Who Rules High School · 2017-04-08T14:28:37.786Z · score: 0 (0 votes) · LW · GW

The prestige the ivies have in the eyes of the families of Irvington is misplaced. Anything to promote that community's pride in itself, rather than investment in a declining institution, is probably a win.

Winning within the rules is obviously taking a toll, and the prize isn't really worth it, so exit is an option; in my opinion, it isn't a bad one.

Comment by redman on The Ancient God Who Rules High School · 2017-04-08T12:01:14.875Z · score: 0 (0 votes) · LW · GW

Thank you for the reply. I'll rephrase.

I assess that the following statements are true, please correct me if I am wrong:

-Based on your writing samples, you personally are probably capable of handling the academic workload at a high prestige college.

-You are typical in terms of ability in comparison to your peer group.

-Race and geographic location may be working against you and your peers in your admissions process.

-You and your peers will find yourselves scattered to the four winds attending less prestigious universities that you're not particularly happy with.

In light of the above, I suggest that you should look into founding (or taking over, I don't know what the community college landscape looks like where you are) a community college explicitly to serve the interests of members of your community affected by the above truths.

You have the most important ingredient for a successful college, which is to say, you have a cohort of motivated learners. From a business and legal standpoint, founding such an institution is an attainable objective. You are right next door to a lot of companies that need talented people, these companies could be persuaded to invest in infrastructure for churning out a future talent pool. You have enough money in Fremont (pass the hat, do a lottery, it's there) to rent property, hire instructors, pay for subscriptions to professional journals, and probably build a lab or two.

If you're not going to get the 'big name', stay local, work within your own community, and build something better.

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-08T03:01:32.923Z · score: 0 (0 votes) · LW · GW

Time-traveling super-jerks are not in my threat model. They would sure be terrible, but as you point out, there is no obvious solution, though fortunately time travel does not look to be nearly as close technologically as uploading does. The definition of temporal I am using is as follows:

"relating to worldly as opposed to spiritual affairs; secular." I believe the word is appropriate in context, as traditionally, eternity is a spiritual matter and does not require actual concrete planning. I assert that if uploading becomes available within a generation, the odds of some human or organization doing something utterly terrible to the uploaded are high not low. There are plenty of recent examples of bad behavior by instituions that are around today and likely to persist.

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-07T18:22:21.072Z · score: 0 (0 votes) · LW · GW

Is your position consistent with effective altruism?

The trap expressed in the OP is essentially a statement that approaching a particular problem involving uploaded consciousness, using the framework of effective altruism to drive decision-making, led to a perverse (brains in blenders!) incentive. The options at this point are: a) the perverse act is not perverse, b) effective altruism does not lead to that perverse act, or c) effective altruism is flawed, so try something else (like 'ideological kin' selection?).

You are unequivocal about your disinterest in being on the receiving end of this brand of altruism, have also asserted that you cooperate acausally with agents similar to you (based on degree of similarity?), and previously asserted that an agent who shares the sum total of your life experience, less the most recent year, can be cast aside and destroyed without thought or consequence. So...do I mark you down for option c?

Comment by redman on The Ancient God Who Rules High School · 2017-04-07T16:22:20.121Z · score: 0 (0 votes) · LW · GW

Exactly, you gotta differentiate. How hard is it, really, to build a fusor, like the Taylor Wilson kid the article references did as a teen?

Just have a hook, make the news, and you'll be golden. You can't just be a smarty pants, you have to be a smarty pants and an 'oh isn't that interesting'.

When you're in a terrible game with a perverse incentive structure...either play to win or don't play. If his blog took anonymous comments, I'd suggest starting an 'Irvington community college' with the kids who didn't want to go to low-prestige schools; passing the hat in that community could pay real dividends, and in a generation it might become one of those high-prestige schools...I mean, if that kid is average for his high school...

Comment by redman on The Ancient God Who Rules High School · 2017-04-07T15:20:06.488Z · score: 1 (1 votes) · LW · GW

Asian kid at Irvington, wants to get into a high competition school in the US, needs to differentiate.

Strongly suspect that legally changing his name to 'Yacouba Aboubacar', listing French as a language on his application, checking 'African American' instead of 'Asian', and writing an admissions essay about the challenges of having an African name in a high-pressure academic environment would, dollar for dollar (name change fees might be close to a single SAT prep class fee), be a better investment of resources than just about anything else he can do.

His friends would hate him for it, some would imitate, and maybe one or two would escalate by going for estrogen prescriptions in 11th grade and starting 'transitions' that they would abandon after submitting college applications.

I believe that the lawsuit mentioned here has merit, I don't know where it is now, and I look forward to seeing it wind its way through the courts: https://studentsforfairadmissions.org/updates/

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-07T14:54:45.081Z · score: 0 (0 votes) · LW · GW

Thank you for the thoughtful response! I'm not convinced that your assertion successfully breaks the link between effective altruism and the blender.

Is your argument consistent with making the following statement when discussing the impending Age of Em?

If your mind is uploaded, a future version of you will likely subjectively experience hell. Some other version of you may also subjectively experience heaven. Many people, copies of you split off at various points, will carry all the memories of your human life. If you feel like your brain is in a blender trying to conceive of this, you may want to put it into an actual blender before someone with temporal power and an uploading machine decides to define your eternity for you.

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-07T14:44:16.907Z · score: 0 (0 votes) · LW · GW

'People in whereveristan are suffering, but we have plenty of wine to go around, so it is our sacred duty to get wicked drunk and party like crazy to ensure that the average human experience is totally freaking sweet.'

Love it! This lovely scene from an anime is relevant, runs for about a minute: https://youtu.be/zhQqnR55nQE?t=21m20s

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-07T14:18:34.637Z · score: 0 (0 votes) · LW · GW

Addressing 2...this argument is compelling. I read it to be equivalent to the statement that 'human ethics do not apply to ems, or to human behavior regarding ems', so acting from the standpoint of 'ems are not human, therefore human ethics do not apply, and em suffering is not human suffering, so effective altruism does not apply to ems' is a way out of the trap.

Taking it to its conclusion, we can view Ems as vampires (they consume resources, produce no human children, and are not alive but also not dead), and like all such abominations they must be destroyed to preserve the lives and futures of humans! Van Helsing would be proud of this approach to AI safety.

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-07T13:53:44.826Z · score: 0 (0 votes) · LW · GW

I think I am addressing most of your position in this post here in response to HungryHobo: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/dqfi The 'overall probability space' was also mentioned by Robin Hanson, and I addressed that in a comment too: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/dq6x

Thank you for the thoughtful responses!

An effective altruist could probably very efficiently go about increasing the joy in the probability space for all humans by offering wireheading to a random human as resources permit, but that doesn't do much for people who are proximately experiencing suffering for other reasons. I instinctively think that this wireheading example is an incorrect application of effective altruism, but I do think it is analogous to the 'overall space is good' argument.

Do you support assisted suicide for individuals incarcerated in hell simulations, or with a high probability of being placed into one subsequent to upload? For example, if a government develops a practice of execution followed by torment-simulation, would you support delivering the gift of secure deletion to the condemned?

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-01T11:35:19.494Z · score: 0 (0 votes) · LW · GW

A suicide ban in a world of immortals is an extreme case of a policy of force-feeding hunger striking prisoners. The latter is normal in the modern United States, so it is safe to assume that if the Age of Em begins in the United States, secure deletion of an Em would likely be difficult, and abetting it, especially for prisoners, may be illegal.

I assert that the addition of potential immortality, and the abandonment of 'human-scale' timeframes for brains built to care about human timescales, creates a special case. Furthermore, a living human has, by virtue of the frailty of the human body, limits on the amount of suffering it can endure. An Em does not, so preventing an Em, or potential Em, from being trapped in a torture-sim and tossed into the event horizon of a black hole to wait out the heat death of the universe is preventing something that is simply a different class of harm than the privations humans endure today.

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-04-01T11:24:06.799Z · score: 0 (0 votes) · LW · GW

I read this as assuming that all copies deterministically demonstrate absolute allegiance to the collective self. I question that assertion, but have no clear way of proving the argument one way or another. If 're-merging' is possible, mergeable copies intending to merge should probably be treated as a unitary entity rather than individuals for the sake of this discussion.

Ultimately, I read your position as stating that suicide is a human right, but that secure deletion of an individual is not acceptable to prevent ultimate harm to that individual, but is acceptable to prevent harm caused by that individual to others.

This is far from a settled issue, and it has an analogy in the question 'should you terminate an uncomplicated pregnancy with terminal birth defects?' Anencephaly is a good example of this situation. The argument presented in the OP is consistent with a 'yes', and I read your line of argument as consistent with a clear 'no'.

Thanks again for the food for thought.

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-29T20:00:53.458Z · score: 0 (0 votes) · LW · GW

The suicide rate for incarcerated Americans is three times that of the general population; anecdotally, many death row inmates have expressed the desire to 'hurry up with it'. Werner Herzog's interviews of George Rivas and his co-conspirators are good examples of the sentiment. There's still debate about the effectiveness of the death penalty as a deterrent to crime.

I suspect that some of these people may prefer the uncertain probability of confinement to hell by the divine, to the certain continuation of their sentences at the hands of the state.

Furthermore, an altruist working to further the cause of secure deletion may be preventing literal centuries of human misery. Why is this any less important than feeding the hungry, who at most will suffer for a proportion of a single lifetime?

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-29T00:09:57.814Z · score: 0 (0 votes) · LW · GW

Under paragraph 2, destroying the last copy is especially heinous. Does that imply that you view replacing the death penalty in US states with 'death followed by uploading into an indefinite long-term simulation of confinement' as less heinous? The status quo is to destroy the only copy of the mind in question.

Would it be justifiable to simulate prisoners with sentences they are expected to die prior to completing, so that they can live out their entire punitive terms and rejoin society as Ems?

Thank you for the challenging responses!

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-29T00:05:21.560Z · score: 0 (0 votes) · LW · GW

Correct, that is different from the initial question, you made your position on that topic clear.

Would the copy on the satellite disagree about the primacy of the copy not in the torture sim? Would a copy have the right to disagree? Is it morally wrong for me to spin up a dozen copies of myself and force them to fight to the death for my amusement?

I'm guessing based on your responses that you would agree with the statement 'copies of the same root individual are property of the copy with the oldest timestamped date of creation, and may be created, destroyed, and abused at the whims of that first copy, and no one else'

If you copy yourself, and that copy commits a crime, are all copies held responsible, just the 'root' copy, or just the 'leaf' copy?

Thank you for the challenging responses!

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-28T23:59:29.370Z · score: 0 (0 votes) · LW · GW

Thank you for your reply to this thought experiment professor!

I accept your assertion that the ratio of aggregate suffering to aggregate felicity has been trending in the right direction, and that this trend is likely to continue, even into the Age of Em. That said, the core argument here is that as humans convert into Ems, all present-day humans who become Ems have a high probability of eventually subjectively experiencing hell. The fact that other versions of the self, or other Ems, are experiencing euphoria will be cold comfort to one so confined.

Under this argument, the suffering of people in the world today can be effectively counterbalanced by offering wireheading to Americans with a lot of disposable income--it doesn't matter if people are starving, because the number of wireheaded Americans is trending upwards!

An Age of Em is probably, on balance, a good thing. Even though I see the possibility of intense devaluation of human life and of some pretty horrific scenarios, I think that mitigating the latter is important, even if the proposed (controversial!) mechanism is inappropriate.

After all, if we didn't use cars, nobody would be harmed in car accidents.

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-28T23:46:56.709Z · score: 0 (0 votes) · LW · GW

The question asks if ensuring secure deletion is an example of effective altruism. If I have the power to dramatically alter someone's future risk profile (say, funding ads encouraging smoking cessation, even if the person is uninterested in smoking cessation at present), isn't it my duty as an effective altruist to attempt to do so?

Comment by redman on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-28T23:44:10.399Z · score: 0 (0 votes) · LW · GW

Negative utility needs a non-zero weight. I assert that it is possible to disagree with your scenarios (refugees, infant) and still be trapped by the OP, if negative utility is weighted to a low but non-zero level, such that avoiding the suffering of a human lifespan is never adequate to justify suicide. After all, everyone dies eventually, no need to speed up the process when there can be hope for improvement.

In this context, can death be viewed as a human right? Removing the certainty of death means that any non-zero weight on negative utility can result in an arbitrarily large aggregate negative utility over the (potentially unlimited) lifetime of an individual confined in a hell simulation.
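To make that last point concrete (a toy calculation, assuming for simplicity a constant per-period suffering s > 0 and any weight w > 0 on negative utility):

$$
\lim_{T \to \infty} \; w \cdot s \cdot T \;=\; \infty ,
$$

so once the lifespan T of a confined individual is unbounded, no finite amount of positive utility elsewhere can outweigh the aggregate.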