Posts

AlphaGo variant reaches superhuman play in multiple games 2017-12-26T16:19:35.804Z
DeepMind article: AI Safety Gridworlds 2017-11-30T16:13:42.603Z
FYI: Here is the RSS link 2017-11-11T17:21:26.617Z
Marginal Revolution Thoughts on Black Lives Matter Movement 2017-01-18T18:12:45.712Z
Mysterious Go Master Blitzes Competition, Rattles Game Community 2017-01-04T17:18:34.479Z
Barack Obama's opinions on near-future AI [Fixed] 2016-10-12T15:46:44.334Z
Seven Apocalypses 2016-09-20T02:59:20.173Z
Meme: Valuable Vulnerability 2016-06-27T23:54:15.107Z

Comments

Comment by scarcegreengrass on We should probably buy ADA? · 2021-05-25T15:49:08.385Z · LW · GW

I'm not sure either. Might only be needed for the operating fees.

Comment by scarcegreengrass on We should probably buy ADA? · 2021-05-25T15:45:46.007Z · LW · GW

Agreed. We might refer to them as 'leaderless orgs' or 'staffless networks'.

Comment by scarcegreengrass on Why do stocks go up? · 2021-01-17T19:51:22.650Z · LW · GW

Does this reduction come from seniority? Is the idea that older organizations are generally more reliable?

Comment by scarcegreengrass on Covid 12/24: We’re F***ed, It’s Over · 2020-12-27T14:46:00.758Z · LW · GW

Are you saying there would be a causal link from the poor person's vaccine:other ratio to the rich person's purchasing decision? How does that work?

Comment by scarcegreengrass on Nuclear war is unlikely to cause human extinction · 2020-11-21T19:26:23.449Z · LW · GW

Thanks! Useful info.

Comment by scarcegreengrass on Nuclear war is unlikely to cause human extinction · 2020-11-19T15:19:17.877Z · LW · GW

Can you clarify why the volcano-triggering scheme in point 3 would not be effective? It's not obvious. The scheme sounds rather lethal.

Comment by scarcegreengrass on Open & Welcome Thread – October 2020 · 2020-11-02T23:13:23.117Z · LW · GW

Welcome! Discovering the rationalsphere is very exciting, isn't it? I admire your passion for self-improvement.

I don't know if I have advice that isn't obvious. Read whoever has unfamiliar ideas. I learned a lot from reading Robin Hanson and Paul Christiano.

As needed, journal or otherwise speak to yourself.

Be wary of the false impression that your efforts have been ruined. Sometimes i encounter a disrespectful person or a shocking philosophical argument that makes me feel like giving up on a wide swathe of my life. I doubt giving up is appropriate in these disheartening circumstances.

Seek to develop friendships with people you can have great conversations with.

Speak to rationalists like you would speak to yourself, and speak tactfully to everyone else.

That's the advice i would give to a version of myself in your situation. Have fun!

Comment by scarcegreengrass on The Solomonoff Prior is Malign · 2020-10-22T16:28:21.126Z · LW · GW

Okay, deciding randomly to exploit one possible simulator makes sense.

As for choosing exactly what to set the output cells of the simulation to... I'm still wrapping my head around it. Is recursive simulation the only way to exploit these simulations from within?

Comment by scarcegreengrass on The Solomonoff Prior is Malign · 2020-10-22T06:15:22.041Z · LW · GW

Great post. I encountered many new ideas here.

One point confuses me. Maybe I'm missing something. Once the consequentialists in a simulation are contemplating the possibility of simulation, how would they arrive at any useful strategy? They can manipulate the locations that are likely to be the output/measurement of the simulation, but manipulate them to what values? They know basically nothing about how the input will be interpreted, what question the simulator is asking, or which universe is doing the simulation. Since their universe is very simple, presumably many simulators are running identical copies of them, with different manipulation strategies being appropriate for each. On my understanding, this sounds less like malice and more like blind mischief.

TLDR: How do the consequentialists guess which direction to bias the output towards?

Comment by scarcegreengrass on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-03-06T21:00:23.536Z · LW · GW

I indeed upvoted it for the update / generally valuable contribution to the discussion.

Comment by scarcegreengrass on A letter on optimism about human progress · 2019-12-06T02:46:53.419Z · LW · GW

a) Agreed, although I don't find this inappropriate in context.

b) I do agree that the fact that many successful past civilizations are now in ruins with their books lost is an important sign of danger. But surely the near-monotonic increase in population over the last few millennia places some onus of proof in the opposite direction?

c) These are certainly extremely important problems going forwards. I would particularly emphasize the nukes.

d) Agreed. But on the centuries scale, there is extreme potential in orbital solar power and fusion.

e) Agreed. But I think it's easy to underestimate the problems our ancestors faced. In my opinion, some of the huge ones from past centuries include: ice ages, supervolcanic eruptions, the difficulty of maintaining stable monarchies, the bubonic plague, Columbian smallpox, the ubiquitous oppression of women, harmful theocracies, majority illiteracy, the Malthusian dilemma, and the prevalence of total war as a dominant paradigm. Is there evidence that past problems were easier than 2019 ones?

It sounds like your perspective is that, before 2100, wars and upcoming increases in resource scarcity will cause an inescapable global economic decline that will bring most of the planet to an 1800s-esque standard of living, followed by a return to slow growth (standard of living, infrastructure, food, energy, productivity) for the next couple of centuries. Do I correctly understand your perspective?

Comment by scarcegreengrass on Arguments about fast takeoff · 2019-12-06T01:56:42.652Z · LW · GW

Epistemics: Yes, it is sound. Not because of its claims (they seem more like opinions to me), but because it is appropriately charitable to those who disagree with Paul, and tries hard to open up avenues of mutual understanding.

Valuable: Yes. It provides new third paradigms that bring clarity to people with different views. Very creative, good suggestions.

Should it be in the Best list?: No. It is from the middle of a conversation, and would be difficult to understand if you haven't read a lot about the 'Foom debate'.

Improved: The same concepts rewritten for a less-familiar audience would be valuable. Or at least with links to some of the background (definitions of AGI, detailed examples of what fast takeoff might look like and arguments for its plausibility).

Followup: More posts thoughtfully describing positions for and against, etc. Presumably these exist, but i personally have not read much of this discussion in the 2018-2019 era.

Comment by scarcegreengrass on Disentangling arguments for the importance of AI safety · 2019-01-22T21:22:34.589Z · LW · GW

This is a little nitpicky, but i feel compelled to point out that the brain in the 'human safety' example doesn't have to run for a billion years consecutively. If the goal is to provide consistent moral guidance, the brain can set things up so that it stores a canonical copy of itself in long-term storage, runs for 30 days, then hands off control to another version of itself, loaded from the canonical copy. Every 30 days, control is handed to an instance of the canonical version of this person. The same scheme is possible for a group of people.
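
To make the rotation concrete, here's a minimal sketch of the handoff loop (Python; every name here is hypothetical and this is only an illustration of the scheme as i described it, not anything from the post):

```python
from copy import deepcopy

TERM_LENGTH_DAYS = 30

def run_advisor(canonical_snapshot, total_days, get_question, record_guidance):
    """Minimal sketch: rotate fresh instances of a canonical advisor.

    Each instance is loaded from the same stored snapshot, answers questions
    for TERM_LENGTH_DAYS, and is then discarded, so no single instance ever
    accumulates more than 30 days of drift.
    """
    instance = None
    for day in range(total_days):
        if day % TERM_LENGTH_DAYS == 0:
            # Handoff: discard the old instance and load a pristine copy.
            instance = deepcopy(canonical_snapshot)
        # `get_question`, `record_guidance`, and `.answer` are placeholders.
        record_guidance(day, instance.answer(get_question(day)))
```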

But this is a nitpick, because i agree that there are probably weird situations in the universe where even the wisest human groups would choose bad outcomes given absolute power for a short time.

Comment by scarcegreengrass on Disentangling arguments for the importance of AI safety · 2019-01-22T21:12:08.096Z · LW · GW

I appreciate this disentangling of perspectives. I had been conflating them before, but i like this paradigm.

Comment by scarcegreengrass on Act of Charity · 2018-12-13T23:21:20.998Z · LW · GW

I found this uncomfortable and unpleasant to read, but i'm nevertheless glad i read it. Thanks for posting.

Comment by scarcegreengrass on LW Update 2018-11-22 – Abridged Comments · 2018-12-13T22:56:41.205Z · LW · GW

I think the abridgement sounds nice but don't anticipate it affecting me much either way.

I think the ability to turn this on/off in user preferences is a particularly good idea (as mentioned in Raemon's comment).

Comment by scarcegreengrass on Embedded World-Models · 2018-12-04T21:37:53.050Z · LW · GW

I can follow most of this, but i'm confused about one part of the premise.

What if the agent created a low-resolution simulation of its behavior, called it Approximate Self, and used that in its predictions? Is the idea that this is doable, but represents an unacceptably large loss of accuracy? Are we in a 'no approximation' context where any loss of accuracy is to be avoided?

My perspective: It seems to me that humans also suffer from the problem of embedded self-reference. I suspect that humans deal with this by thinking about a highly approximate representation of their own behavior. For example, when i try to predict how a future conversation will go, i imagine myself saying things that a 'reasonable person' might say. Could a machine use an analogous form of non-self-referential approximation?
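
Here's a rough sketch of what i have in mind (Python; all names are hypothetical and this is only a toy illustration of planning against an approximate self, not something proposed in the post):

```python
def plan_with_approximate_self(world_model, candidate_actions, horizon=3):
    """Toy sketch: plan using a low-resolution 'Approximate Self'.

    Rather than embedding a full, self-referential copy of the agent in its
    own predictions, future decisions are predicted with a crude stand-in
    policy -- roughly 'what would a reasonable agent do here?'.
    """
    def approximate_self(state):
        # Cheap stand-in for our own future behavior.
        return max(candidate_actions,
                   key=lambda a: world_model.quick_value(state, a))

    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        state = world_model.simulate(world_model.current_state(), action)
        # Roll the future forward with the approximate self, not a full self-model.
        for _ in range(horizon - 1):
            state = world_model.simulate(state, approximate_self(state))
        value = world_model.value(state)
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```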

Great piece, thanks for posting.

Comment by scarcegreengrass on The funnel of human experience · 2018-10-10T18:23:26.119Z · LW · GW

It's relevant to some forms of utilitarian ethics.

Comment by scarcegreengrass on Reframing misaligned AGI's: well-intentioned non-neurotypical assistants · 2018-04-10T18:19:31.013Z · LW · GW

I think this is a clever new way of phrasing the problem.

When you said 'friend that is more powerful than you', that also made me think of a parenting relationship. We can look at whether this well-intentioned personification of AGI would be a good parent to a human child. They might be able to give the child a lot of attention, an expensive education, and a lot of material resources, but they might take unorthodox actions in the course of pursuing human goals.

Comment by scarcegreengrass on Metaphilosophical competence can't be disentangled from alignment · 2018-04-10T17:10:40.322Z · LW · GW

(I'm not zhukeepa; i'm just bringing up my own thoughts.)

This isn't quite the same as an improvement, but one thing that is more appealing about normal-world metaphilosophical progress than empowered-person metaphilosophical progress is that the former has a track record of working*, while the latter is untried and might not work.

*Slowly and not without reversals.

Comment by scarcegreengrass on Against Occam's Razor · 2018-04-05T20:47:39.758Z · LW · GW

It implies that the Occamian prior should work well in any universe where the laws of probability hold. Is that really true?

Just to clarify, are you referring to the differences between classical probability and quantum amplitudes? Or do you mean something else?

Comment by scarcegreengrass on [deleted post] 2018-04-03T15:27:41.041Z

Why do you think so? It's a thought experiment about punitive acausal trade from before people realized that benevolent acausal trade was equally possible. I don't think it's the most interesting idea to come out of the Less Wrong community anymore.

Comment by scarcegreengrass on AlphaGo variant reaches superhuman play in multiple games · 2018-01-03T19:36:12.448Z · LW · GW

Noted!

Sorry, i couldn't find the previous link here when i searched for it.

Comment by scarcegreengrass on The Mad Scientist Decision Problem · 2017-11-30T16:33:42.636Z · LW · GW

Just to be clear, i'm imagining counterfactual cooperation to mean the FAI building vaults full of paperclips in every region where there is a surplus of aluminium (or a similar metal). In the other possibility branch, the paperclip maximizer (which thinks identically) reciprocates by preserving semi-autonomous cities of humans among the mountains of paperclips.

If my understanding above is correct, then yes, i think these two would cooperate IF this type of software agent shares my perspective on acausal game theory and branching timelines.

Comment by scarcegreengrass on Increasing day to day conversational rationality · 2017-11-16T22:08:48.470Z · LW · GW

In the last 48 hours i've felt the need for more than one of the abilities above. These would be very useful conversational tools.

I think some of these would be harder than others. This one sounds hard: 'Letting them know that what they said set off alarm bells somewhere in your head, but you aren’t sure why.' Maybe we could look for both scripts that work between two people who already trust each other, and scripts that work with semi-strangers. Or scripts that do and don't require both participants to have already read a specific blog post, etc.

Comment by scarcegreengrass on Call for Ideas: Industrial scale existential risk research · 2017-11-09T22:32:48.993Z · LW · GW

Something like a death risk calibration agency? Could be very interesting. Do any orgs like this exist? I guess the CDC (in the US govt) probably quantitatively compares risks within the context of disease.

One quote in your post seems more ambitious than the rest: 'helping retrain people if a thing that society was worried about seems to not be such a problem'. I think that tons of people evaluate risks based on how scary they seem, not based on numerical research.

Comment by scarcegreengrass on Cutting edge technology · 2017-10-31T19:38:27.324Z · LW · GW

Note on 3D printing: Yeah, that one might take a while. It's actually been around for decades, but still hasn't become cheap enough to make a big impact. I think it'll be one of those techs that takes 50+ years to go big.

Source: I used to work in the 3D printer industry.

Comment by scarcegreengrass on Just a photo · 2017-10-29T01:22:12.400Z · LW · GW

I first see the stems, then i see the leaves.

I think humans spend a lot of time looking at our models of the world (maps) and not that much time looking at our actual sensory input.

Comment by scarcegreengrass on Zero-Knowledge Cooperation · 2017-10-26T18:16:11.271Z · LW · GW

A similar algorithm appears in Age of Em by Robin Hanson ('spur safes' in Chapter 14). Basically, a trusted third party allows copies of A and B to analyze each other's source code in a sealed environment, then deletes almost everything that is learned.

A and B both copy their source code into a trusted computing environment ('safe'), such as an isolated server or some variety of encrypted VM. The trusted environment instantiates a copy of A (A_fork) and gives it B_source to inspect. Similarly, B_fork is instantiated and allowed to examine A_source. There can be other inputs, such as some contextual information and a contract to discuss. They examine the code for several hours or so, but this is not risky to A or B because all information inside the trusted environment will mandatorily be deleted afterwards. The only outputs from the trusted environment are a secure channel from A_fork to A and one from B_fork to B. These may only ever output an extremely low-resolution one-time report. This can be one of the following 3 values: 'Enter into the contract with the other', 'Do not enter into the contract with the other', or 'Maybe enter the contract'.
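
To make the flow concrete, here's a toy sketch (Python; the interfaces are hypothetical, and a real deployment would need an attested, isolated environment rather than an ordinary process):

```python
VERDICTS = ("enter", "do_not_enter", "maybe")

def sealed_negotiation(a_source, b_source, contract, instantiate):
    """Toy sketch of the 'spur safe' idea.

    `instantiate(source)` builds a running fork from source code inside the
    sealed environment. Each fork inspects the other's source plus the
    contract, and the only thing allowed out of the safe is one coarse
    verdict per principal; everything else is discarded on return.
    """
    a_fork = instantiate(a_source)
    b_fork = instantiate(b_source)

    verdict_for_a = a_fork.evaluate(other_source=b_source, contract=contract)
    verdict_for_b = b_fork.evaluate(other_source=a_source, contract=contract)

    assert verdict_for_a in VERDICTS and verdict_for_b in VERDICTS

    # In a real safe, the whole environment (forks, logs, scratch state)
    # would be verifiably wiped at this point.
    return verdict_for_a, verdict_for_b
```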

This does require a trusted execution environment, of course.

I don't know if this idea is original to Hanson.

Comment by scarcegreengrass on Poets are intelligence assets · 2017-10-26T16:02:15.196Z · LW · GW

Favorite highlight:

'Likewise, great literature is typically an integrated, multi-dimensional depiction. While there is a great deal of compression, the author is still trying to report how things might really have happened, to satisfy their own sense of artistic taste for plausibility or verisimilitude. Thus, we should expect that great literature is often an honest, highly informative account of everything except what the author meant to put into it.'

Comment by scarcegreengrass on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T18:25:12.631Z · LW · GW

The techniques you outline for incorporating narrow agents into more general systems have already been demoed, I'm pretty sure. A coordinator can apply multiple narrow algorithms to a task and select the most effective one, a la IBM Watson. And I've seen at least one paper that uses an RNN to cultivate a custom RNN with the appropriate parameters for a new situation.

Comment by scarcegreengrass on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T16:38:19.252Z · LW · GW

I'm updating because I think you outline a very useful concept here. Narrow algorithms can be made much more general given a good 'algorithm switcher'. A canny switcher/coordinator program can be given a task and decide which of several narrow programs to apply to it. This is analogous to the IBM Watson system that competed in Jeopardy and to the human you describe using a PC to switch between applications. I often forget about this technique during discussions about narrow machine learning software.
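
A minimal sketch of the switcher/coordinator idea (Python; the names are hypothetical, and real systems like Watson are of course far more elaborate): try each narrow solver on the task, score the candidate results, and return the best one.

```python
def switch_and_solve(task, narrow_solvers, score):
    """Minimal sketch of an 'algorithm switcher' coordinator.

    `narrow_solvers` is a list of callables, each a narrow algorithm;
    `score(task, result)` estimates how well a result handles the task.
    """
    best_result, best_score = None, float("-inf")
    for solver in narrow_solvers:
        try:
            result = solver(task)
        except Exception:
            continue  # This narrow algorithm simply doesn't apply to the task.
        result_score = score(task, result)
        if result_score > best_score:
            best_result, best_score = result, result_score
    return best_result
```

The point is that most of the capability can live in the narrow solvers; the coordinator only needs a decent scoring rule.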

Comment by scarcegreengrass on Postmodernism for rationalists · 2017-10-19T16:04:00.337Z · LW · GW

Yes, i think a big aspect of postmodernist culture is speaking in riddles because you want to be interacting with people who like riddles.

I don't think that the ability to understand a confusingly-presented concept is quite the same thing as intellectual quality, however. I think it's a more niche skill.

Comment by scarcegreengrass on Postmodernism for rationalists · 2017-10-19T15:56:27.654Z · LW · GW

Conflict vs the Author: The novel White Noise by Don DeLillo has a humanities professor as a protagonist who likes to talk about reducing the number of plotlines in his life. Whenever something interesting happens to him he avoids it, doesn't investigate, tries to keep his life bland. There's an interplay between the book trying to present a story about a character and that character taking actions to minimize how narratively interesting his life is.

Comment by scarcegreengrass on 10/19/2017: Development Update (new vote backend, revamped user pages and advanced editor) · 2017-10-19T15:50:50.734Z · LW · GW

Thanks a lot for the new blog-specific header! A requested and appreciated feature!

Comment by scarcegreengrass on Alpha Go Zero comments · 2017-10-19T15:46:05.657Z · LW · GW

Presumably finding profitable new technology is a sufficient motive.

Comment by scarcegreengrass on Alpha Go Zero comments · 2017-10-19T15:44:38.291Z · LW · GW

Yes, although it is of course possible that the protein folding search space has a low maximal speedup from software, and could turn out to be hardware-bottlenecked.

Comment by scarcegreengrass on Defense against discourse · 2017-10-17T18:08:31.158Z · LW · GW

Very depressing!

I agree that it seems reasonable to expect some people to be blinded by distrust. That's a good point.

Reading O'Neil's article, i like the quadrant model more than i expected. That seems like a useful increase in resolution. However i disagree about which demographics fall into which quadrant. Even if we limit our scope to the USA, i'm sure many women and people of color are worried about machines displacing humanity (Q2).

I think there is plenty of software in the world that encodes racist or otherwise unfair policies (as in the Q4 paragraphs), and the fact that this discrimination is sometimes concealed by the term 'AI' is a serious issue. But i think this problem deserves a more rigorous defense than this O'Neil article.

Comment by scarcegreengrass on Contra double crux · 2017-10-08T21:43:05.068Z · LW · GW

This is a good phrasing of my opinion also. I don't think this is an issue of resource scarcity.

Comment by scarcegreengrass on Why hive minds might work · 2017-10-07T02:15:07.186Z · LW · GW

I share your concern that users are not yet able to distinguish between blog posts and frontpage posts. I'm not sure how to tell either, aside from going to your blog and seeing if i can spot it there.

Comment by scarcegreengrass on The Anthropic Principle: Five Short Examples · 2017-10-05T22:09:41.050Z · LW · GW

Your 'if' statements made me update. I guess there is also a distinction between what conclusions one can draw from this type of anthropic reasoning.

One (maybe naive?) conclusion is that 'the anthropic principle is protecting us'. If you think the anthropic principle is relevant, then you continue to expect it to allow you to evade extinction.

The other conclusion is that 'the anthropic perspective is relevant to our past but not our future'. You consider anthropics to be a source of distortion on the historical record, but not a guide to what will happen next. Under this interpretation you would anticipate extinction of [humans / you / other reference class] to be more likely in the future than in the past.

I suspect this split depends on whether you weight your future timelines by how many observers are in them, etc.

Comment by scarcegreengrass on Beta - First Impressions · 2017-10-05T21:06:20.720Z · LW · GW

When i subscribe to a user, what happens? Does that affect the magical sorting algorithm or what?

Comment by scarcegreengrass on Infant Mortality and the Argument from Life History · 2017-10-05T20:50:19.857Z · LW · GW

Compelling arguments. I'm updating about how complex this topic is.

I also think that the 'zero line' we intuitively use to divide negative experience from positive experience is a little bit arbitrary. In planetary science, the sea level of a planet may vary over time, or the planet might have no seas. Because of this, scientists chose an arbitrary height ('datum') and consider that to be the geological zero altitude. I suspect that some disagreements about wild animal suffering might stem from people using different 'zero altitudes' for animal suffering. Some people think of animals as happy most of the time and some people think of animals as hungry and stressed most of the time.

Comment by scarcegreengrass on Notes From an Apocalypse · 2017-09-22T17:22:09.123Z · LW · GW

Great essay. I was totally unfamiliar with the idea that ~50% of modern animal phyla appeared at the same time.

Comment by scarcegreengrass on LW 2.0 Strategic Overview · 2017-09-19T16:05:08.527Z · LW · GW

This is a real dynamic that is worth attention. I particularly agree with removing HPMoR from the top of the front page.

Counterpoint: The serious/academic niche can also be filled by external sites, like https://agentfoundations.org/ and http://effective-altruism.com/.

Comment by scarcegreengrass on 2017 LessWrong Survey · 2017-09-19T15:59:44.391Z · LW · GW

I took the survey. It's probably my favorite survey of each year :) Thanks.

Comment by scarcegreengrass on New business opportunities due to self-driving cars · 2017-09-12T17:37:03.956Z · LW · GW

I agree although i do not dislike Lumifer's comments in general, just the overly negative ones.

Comment by scarcegreengrass on New business opportunities due to self-driving cars · 2017-09-12T17:29:07.111Z · LW · GW

Something like this sounds plausible ... or at least, it's very similar to existing pickup laundry companies.

Comment by scarcegreengrass on New business opportunities due to self-driving cars · 2017-09-12T17:26:30.589Z · LW · GW

Maybe it only works in places with very straight freeways, like deserts :P

Comment by scarcegreengrass on Nasas ambitious plan to save earth from a supervolcano · 2017-08-29T17:32:52.262Z · LW · GW

Just because the magnitude of the bad outcome is enormous. Caution seems prudent for such a slow, dangerous process.