Posts

[Link] Is the Orthogonality Thesis Defensible? (Qualia Computing) 2019-11-13T03:59:00.955Z · score: 6 (4 votes)
(a) 2019-10-13T17:39:52.883Z · score: 38 (16 votes)
[Link] Tools for thought (Matuschak & Nielsen) 2019-10-04T00:42:32.116Z · score: 21 (6 votes)
[Link] Truth-telling is aggression in zero-sum frames (Jessica Taylor) 2019-09-11T05:53:59.188Z · score: 10 (6 votes)
[Link] Book Review: Reframing Superintelligence (SSC) 2019-08-28T22:57:09.455Z · score: 47 (17 votes)
What supplements do you use? 2019-07-28T17:01:30.441Z · score: 18 (8 votes)
If physics is many-worlds, does ethics matter? 2019-07-10T15:32:56.085Z · score: 14 (9 votes)
What's state-of-the-art in AI understanding of theory of mind? 2019-07-03T23:11:34.426Z · score: 15 (7 votes)
Cash prizes for the best arguments against psychedelics being an EA cause area 2019-05-10T18:24:47.317Z · score: 21 (8 votes)
Complex value & situational awareness 2019-04-16T18:46:22.414Z · score: 8 (5 votes)
Microcosmographia excerpt 2019-03-29T18:29:14.239Z · score: 16 (4 votes)
[Link] OpenAI on why we need social scientists 2019-02-19T16:59:32.319Z · score: 14 (7 votes)
Aphorisms for meditation 2019-02-18T17:47:05.526Z · score: 8 (5 votes)
Who owns OpenAI's new language model? 2019-02-14T17:51:26.367Z · score: 18 (7 votes)
Individual profit-sharing? 2019-02-13T17:58:41.388Z · score: 11 (2 votes)
What we talk about when we talk about life satisfaction 2019-02-04T23:52:38.052Z · score: 9 (6 votes)
Is intellectual work better construed as exploration or performance? 2019-01-25T21:59:28.381Z · score: 14 (4 votes)
No standard metric for CFAR workshops? 2018-09-06T18:06:00.021Z · score: 12 (5 votes)
On memetic weapons 2018-09-01T03:25:36.489Z · score: 44 (25 votes)
Doing good while clueless 2018-02-15T16:02:27.060Z · score: 36 (9 votes)

Comments

Comment by ioannes_shade on Matthew Walker's "Why We Sleep" Is Riddled with Scientific and Factual Errors · 2019-11-17T14:52:27.292Z · score: 3 (2 votes) · LW · GW

Also cross-posted on the EA Forum: https://forum.effectivealtruism.org/posts/pv9MAFNHyZdJecbGn/link-against-why-we-sleep-guzey

Comment by ioannes_shade on The Zettelkasten Method · 2019-11-08T00:33:18.706Z · score: 5 (3 votes) · LW · GW

How'd it go?

Comment by ioannes_shade on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-06T01:11:02.224Z · score: 5 (3 votes) · LW · GW
In one comical case, AlphaStar had surrounded the units it was building with its own factories so that they couldn't get out to reach the rest of the map. Rather than lifting the buildings to let the units out, which is possible for Terran, it destroyed one building and then immediately began rebuilding it before it could move the units out!

I feel confused about how a system that can't figure out stuff like this is able to defeat strong players. (I don't know very much about StarCraft.)

Help build my intuition here?

Comment by ioannes_shade on Ways that China is surpassing the US · 2019-11-06T01:04:59.496Z · score: 2 (2 votes) · LW · GW
China’s success challenges our implicit ideology and deep-seated assumptions about governance.

+1

The Scholar's Stage is a good entry point for learning about this: http://scholars-stage.blogspot.com/2019/07/two-case-studies-in-communist-insecurity.html

Inside the Mind of Xi Jinping is good for showing how different the intellectual commitments of the Chinese leadership are from our own.

Comment by ioannes_shade on Introducing Foretold.io: A New Open-Source Prediction Registry · 2019-10-25T17:23:56.533Z · score: 7 (2 votes) · LW · GW

Recently published in Science – Predict science to improve science

The associated platform: https://socialscienceprediction.org

Comment by ioannes_shade on Make more land · 2019-10-24T19:39:20.446Z · score: 1 (1 votes) · LW · GW

cf. Inadequate Equilibria

Comment by ioannes_shade on Healthy Competition · 2019-10-21T16:57:27.719Z · score: 1 (1 votes) · LW · GW
Research orgs benefit from having a number of smart people bouncing ideas around.

Probably most of this benefit (maybe even more of it?) can also be unlocked by an ecosystem of multiple orgs in friendly competition that regularly talk to each other in ways that feel psychologically safe.

Comment by ioannes_shade on (a) · 2019-10-17T20:03:36.255Z · score: 1 (1 votes) · LW · GW

Not my only concern but definitely seems important. (Otherwise you're constrained by what you can personally maintain.)

A browser plugin seems like a good approach.

Comment by ioannes_shade on Make more land · 2019-10-17T03:29:05.042Z · score: 1 (1 votes) · LW · GW

Okay, but I don't really understand the incentives here. Why is bad policy attractive to anyone? Is it all NIMBY-ism or are you pointing to other drivers also?

Comment by ioannes_shade on (a) · 2019-10-17T03:24:30.469Z · score: 1 (1 votes) · LW · GW
For this issue, you could implement something like a 'first seen' timestamp in your link database and only create the final archive & substitution after a certain time period - I think a period like 3 months would capture 99% of the changes which are ever going to be made, while not risking exposing readers to too much linkrot.

This makes sense, but it takes a lot of activation energy. I don't think a practice like this will spread (like even I probably won't chunk out the time to learn how to implement it, and I care a bunch about this stuff).
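
(For concreteness, here's a minimal sketch of what the scheme could look like, assuming a simple SQLite link database; the table layout, column names, and function names are my own invention, not anyone's actual tooling:)

```python
import sqlite3
import time

# ~3 months, the window suggested above for capturing most post-publication edits
ARCHIVE_DELAY_SECONDS = 90 * 24 * 3600

def record_link(db: sqlite3.Connection, url: str) -> None:
    """Record a link's first-seen timestamp; ignore links we've already seen."""
    db.execute(
        "INSERT OR IGNORE INTO links (url, first_seen, archived) VALUES (?, ?, 0)",
        (url, int(time.time())),
    )
    db.commit()

def links_ready_to_archive(db: sqlite3.Connection) -> list:
    """Return un-archived links first seen more than ARCHIVE_DELAY_SECONDS ago."""
    cutoff = int(time.time()) - ARCHIVE_DELAY_SECONDS
    rows = db.execute(
        "SELECT url FROM links WHERE archived = 0 AND first_seen < ?",
        (cutoff,),
    )
    return [url for (url,) in rows]

db = sqlite3.connect("links.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS links "
    "(url TEXT PRIMARY KEY, first_seen INTEGER, archived INTEGER)"
)
record_link(db, "https://example.com/some-post")
print(links_ready_to_archive(db))  # empty until ~3 months have passed
```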

Plausibly "(a)" could spread in some circles – activation energy is low and it only adds 10-20 seconds of friction per archived link.

But even "(a)" probably won't spread far (10-20 seconds of friction per link is too much for almost everyone). Maybe there's room for a company doing this as a service...

Comment by ioannes_shade on Make more land · 2019-10-17T01:37:32.808Z · score: 1 (1 votes) · LW · GW

Do people vote in bad government because of NIMBY-ism, or because of something else?

Comment by ioannes_shade on (a) · 2019-10-17T00:56:23.278Z · score: 5 (3 votes) · LW · GW

Thanks, this is great. (And I didn't know about your Archiving URLs page!)

And the functionality is one that will be rarely exercised by users, who will click on only a few links and will click on the archived version for only a small subset of said links, unless link rot is a huge issue - in which case, why are you linking to the broken link at all instead of the working archived version?

I feel like I'm often publishing content with two audiences in mind – my present-tense audience and a future audience who may come across the post.

The original link feels important to include because it's more helpful to the present-tense audience. E.g., folks often update the content of a linked page in response to reactions elsewhere, and it's good to be able to quickly point to the latest version of the link.

The archived link is more aimed at the future audience. By the time they stumble across the post, the original link will likely be broken, and there's a better chance that the archived version will still be intact. (e.g. many of the links on Aaron Swartz's blog are now broken; whenever I read it I find myself wishing there were convenient archived versions of the links).
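
(For readers unfamiliar with the convention, a rough sketch of what generating an "(a)" link could look like; the Wayback Machine URL scheme used here is an assumption on my part, and the function is purely illustrative:)

```python
# Illustrative sketch only: the Wayback Machine redirect scheme assumed here
# (web.archive.org/web/<url> resolving to the latest snapshot) and the
# function name are my own assumptions, not anyone's real tooling.

def link_with_archive(text: str, url: str) -> str:
    """Render a markdown link followed by an '(a)' link to an archived copy."""
    archived = f"https://web.archive.org/web/{url}"
    return f"[{text}]({url}) ([a]({archived}))"

print(link_with_archive("Aaron Swartz's blog", "http://www.aaronsw.com/weblog/"))
```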

Comment by ioannes_shade on Make more land · 2019-10-16T23:51:59.362Z · score: 4 (3 votes) · LW · GW

Though note most of this (in Marin) is park land.

Comment by ioannes_shade on Make more land · 2019-10-16T23:07:16.765Z · score: 6 (5 votes) · LW · GW

These are all abandoned:

  • Oakland army base
  • Oakland outer harbor
  • Naval Air Station Alameda
  • Yerba Buena Island coast guard base
  • Most of Treasure Island
  • Parts of Hunters Point

Parts of South San Francisco are undeveloped, though I don't know how that interacts with San Bruno Mountain State Park.

Large swaths of the western side of the Peninsula are undeveloped.

Comment by ioannes_shade on Make more land · 2019-10-16T19:32:37.268Z · score: 4 (3 votes) · LW · GW

Not arguing against this proposal, but want to note that there's plenty of land in the Bay Area that's only developed to low density or hasn't been developed at all.

Changing housing policy such that it's easier to build is probably upstream of both making new land and making existing land higher density.

Comment by ioannes_shade on Introducing Foretold.io: A New Open-Source Prediction Registry · 2019-10-16T15:28:20.378Z · score: 16 (10 votes) · LW · GW

Nice!

What are the main ways in which Foretold.io is differentiated from Metaculus?

Comment by ioannes_shade on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2019-10-08T21:40:38.041Z · score: 1 (1 votes) · LW · GW

+1 to targeting finance-types, though probably many/most are savvy enough that they won't find EA compelling.

Comment by ioannes_shade on Why are the people who could be doing safety research, but aren’t, doing something else? · 2019-08-29T23:38:03.870Z · score: 1 (1 votes) · LW · GW

cf. https://www.lesswrong.com/posts/wkF5rHDFKEWyJJLj2/link-book-review-reframing-superintelligence-ssc

Plausibly a lot of them have something like Drexler's or Hanson's view, such that safety research doesn't seem super-urgent & isn't aligned with their comparative advantage.

Comment by ioannes_shade on A Personal Rationality Wishlist · 2019-08-28T22:51:03.366Z · score: 2 (2 votes) · LW · GW
But presumably it’s possible to be too relaxed, calm, and/or happy, and one should instead be anxious, angry, and/or sad. How can I tell when this is the case, and what should I do to increase my neuroticism in-the-moment?

cf. https://neuroticgradientdescent.blogspot.com/2019/07/core-transformation.html

Specifically:

This stuff is mutually reinforcing with ego's 'forever' identity based narratives. 'If I relax then I become the sort of person who is just relaxed all the time and never does anything AHHHH!' Whereas what actually happens is that given the ability to choose which stresses to take on, rather than it being an involuntary process, we choose a lot better in apportioning our efforts to the things we care about. One of the noticeable changes is that people take on fewer projects, but put more effort into those they do take on. Most of us, if we were taking a rigorous accounting, would be forced to admit that our project start:project finish ratio is quite bad by default. Core Transformation puts us directly in touch with these and potentially lots of other objections. The point isn't to sweep them under the rug but to identify the true content of these objections and figure out how we want to engage with that while letting the non-true parts drop away once all objecting parts are actually satisfied.

h/t @romeostevensit

Comment by ioannes_shade on Ought: why it matters and ways to help · 2019-08-09T00:28:24.205Z · score: 3 (2 votes) · LW · GW

Though we're still actively hiring for the senior web developer role: https://ought.org/careers/web-developer

Comment by ioannes_shade on Building up to an Internal Family Systems model · 2019-08-05T16:30:41.841Z · score: 4 (2 votes) · LW · GW

I'm finding it fruitful to consider the "exiles" discussion in this post alongside Hunting the Shadow.

Comment by ioannes_shade on Open problems in human rationality: guesses · 2019-08-02T21:23:50.182Z · score: 2 (2 votes) · LW · GW
Try harder to learn from tradition than you have been on the margin. Current = noisy.

What does "Current = noisy" mean here?

Comment by ioannes_shade on Open problems in human rationality: guesses · 2019-08-02T21:19:36.111Z · score: 1 (1 votes) · LW · GW
Funding people who are doing things differently from how you would do them is incredibly hard but necessary. EA should learn more Jessica Graham

What does "learn more Jessica Graham" mean?

Comment by ioannes_shade on What supplements do you use? · 2019-08-01T22:57:08.130Z · score: 1 (1 votes) · LW · GW
A majority of these choices are influenced by Bredesen's book The End of Alzheimer's, or by a prior source with similar advice.

Oh interesting. Do you know if anyone's done an epistemic spot-check of The End of Alzheimer's?

Comment by ioannes_shade on What supplements do you use? · 2019-07-30T17:04:35.702Z · score: 1 (1 votes) · LW · GW
... health risks of fish oil while linking to a page saying fish oil doesn't contain Mercury. Is that not the health risk you were thinking of?

No good reason. I stopped taking it over health-risk concerns like mercury (plus I wasn't noticing any effect).

I think I'm a bit paranoid about heavy metals from fish. Probably irrationally so.

Comment by ioannes_shade on What supplements do you use? · 2019-07-28T23:41:44.025Z · score: 3 (2 votes) · LW · GW

Thanks! This meta-analysis of metformin makes it seem promising.

Comment by ioannes_shade on Being the (Pareto) Best in the World · 2019-07-28T17:19:58.746Z · score: 1 (1 votes) · LW · GW

cf. Talent Stacks

Comment by ioannes_shade on Ought: why it matters and ways to help · 2019-07-25T23:55:26.032Z · score: 13 (5 votes) · LW · GW

(I'm helping Ought hire for the web dev role.)

Ought is based in SF (office in North Beach).

Ideally we'd find someone who could work out of the SF office, but we're open to considering remote arrangements. One of our full-time researchers is based remotely and periodically visits the SF office to co-work.

Comment by ioannes_shade on What's state-of-the-art in AI understanding of theory of mind? · 2019-07-11T04:49:03.328Z · score: 1 (1 votes) · LW · GW
And then it's trivial to find a means to dispose of the threat, humans are fragile and stupid and have created a lot of ready means of mass destruction.

If by "a lot of ready means of mass destruction" you're thinking of nukes, it doesn't seem trivial to design a way to use nukes to destroy / neutralize all humans without jeopardizing the AGI's own survival.

We don't have a way of reliably modeling the results of very many simultaneous nuclear blasts, and it seems like the AGI probably wouldn't have a way to reliably model this either unless it ran more empirical tests (which would be easy to notice).

It seems like an AGI wouldn't execute a "kill all humans" plan unless it was confident that executing the plan would in expectation result in a higher chance of its own survival than not executing the plan. I don't see how an AGI could become confident about high-variance "kill all humans" plans like using nukes without having much better predictive models than we do. (And it seems like more empirical data about what multiple simultaneous nuclear explosions do would be required to have better models for this case.)
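
Putting the implicit decision rule in symbols (my own paraphrase, not a formalism from any particular source):

$$\text{execute the plan} \iff \mathbb{E}[\text{own survival} \mid \text{execute}] > \mathbb{E}[\text{own survival} \mid \text{abstain}]$$

For a high-variance plan evaluated with poor predictive models, the left-hand expectation can't be estimated with confidence, so the inequality can't be established.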

Comment by ioannes_shade on What's state-of-the-art in AI understanding of theory of mind? · 2019-07-11T03:21:09.341Z · score: 1 (1 votes) · LW · GW

Wouldn't an AI following that procedure be really easy to spot? (Because it's not deceptive, and it just starts trying to destroy things it can't predict as it encounters them.)

Comment by ioannes_shade on On alien science · 2019-06-02T15:07:59.829Z · score: 6 (4 votes) · LW · GW
Firstly, on a historical basis, many of the greatest scientists were clearly aiming for explanation not prediction.

Could you expand a bit more on how you view explanation as distinct from prediction?

(As I think about the concepts, I'm finding it tricky to draw a crisp distinction between the two.)

Comment by ioannes_shade on Evidence for Connection Theory · 2019-05-28T18:10:05.439Z · score: 9 (3 votes) · LW · GW

Here's an archived version of the doc.

Comment by ioannes_shade on Habryka's Shortform Feed · 2019-05-19T15:58:48.860Z · score: 1 (1 votes) · LW · GW

See Sinclair: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

Comment by ioannes_shade on Literature Review: Distributed Teams · 2019-05-07T21:10:29.721Z · score: 0 (3 votes) · LW · GW

cf. https://en.wikipedia.org/wiki/Knuth_reward_check

Comment by ioannes_shade on Open Problems in Archipelago · 2019-04-17T17:57:01.825Z · score: 1 (3 votes) · LW · GW

lol

Comment by ioannes_shade on The Hard Work of Translation (Buddhism) · 2019-04-13T20:18:52.020Z · score: 4 (2 votes) · LW · GW

I agree with your basic point here, though have some nits to pick about your characterization of zen :-)

Comment by ioannes_shade on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-04-06T20:07:46.816Z · score: 1 (1 votes) · LW · GW

Seems somewhat related: Liberal Radicalism: A Flexible Design For Philanthropic Matching Funds

Especially interesting because the authors are rich enough & credible enough to stand up a big project here, if they decide to.

Comment by ioannes_shade on The Case for The EA Hotel · 2019-04-01T15:47:39.408Z · score: 11 (5 votes) · LW · GW

Also Vipul's donation report is interesting + helpful.

Comment by ioannes_shade on The Case for The EA Hotel · 2019-03-31T16:16:37.611Z · score: 7 (5 votes) · LW · GW

Note there's also some good discussion about this post over on the EA Forum.

Comment by ioannes_shade on List of Q&A Assumptions and Uncertainties [LW2.0 internal document] · 2019-03-30T17:03:28.314Z · score: 11 (4 votes) · LW · GW

Curious how LessWrong sees its Q&A function slotting in amongst Quora, Stack Exchange, Twitter, etc.

(There are already a lot of question-answering platforms; I'm not clear on the business case for another one.)

Comment by ioannes_shade on "Other people are wrong" vs "I am right" · 2019-02-24T15:50:58.947Z · score: 5 (3 votes) · LW · GW

This is awesome.

Reminds me of Ben Kuhn's recent question on the EA Forum – Has your EA worldview changed over time?

Comment by ioannes_shade on Aphorisms for meditation · 2019-02-23T18:59:56.766Z · score: 3 (2 votes) · LW · GW

Makes sense, thanks for laying out some of your reasoning!

I think there's a lot of inferential distance between Zen practices & the median LessWrong post. Bridging that distance would be a big project. Maybe one day I'll take a stab at it – in the meantime it makes sense for stuff like this to live in the personal blog section.

Comment by ioannes_shade on Some disjunctive reasons for urgency on AI risk · 2019-02-16T01:57:06.181Z · score: 1 (1 votes) · LW · GW

Related, on the EA Forum. (I am the post's author.)

Comment by ioannes_shade on Some disjunctive reasons for urgency on AI risk · 2019-02-15T23:05:04.747Z · score: 2 (2 votes) · LW · GW
... even if my property rights are technically secure, I don't know how I would secure my mind.

Training up one's concentration & present-moment awareness is probably helpful for this.

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T17:08:41.830Z · score: 1 (1 votes) · LW · GW

Haven't read it yet, but here's an academic review of "Federal Patent Takings," which seems relevant.

Comment by ioannes_shade on Why do you reject negative utilitarianism? · 2019-02-15T16:58:18.887Z · score: 2 (2 votes) · LW · GW

I like tranquilism as a view preserving some of the attractive aspects of negative utilitarianism whilst sidestepping some of its icky implications.

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T15:52:19.281Z · score: 1 (1 votes) · LW · GW

Thanks, this is helpful.

What I'm really puzzled by is the extreme counterfactuality of the question.

It doesn't feel too extreme to me (powerful new dual-use technology), but probably our priors are just different here :-)

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T06:57:29.071Z · score: 3 (2 votes) · LW · GW

Right, this is the area I'm curious about.

I imagine that if a private firm created a process for making nuclear suitcase bombs with a unit cost < $100k, the US government would show up and be like "nice nuclear-suitcase-bomb process – that's ours now. National security reasons, etc."

(Something like this is what I meant by "requisition.")

I wonder what moves the private firm could make, in that case. Could they be like "No, sorry, we invented this process, it's protected by our patents, and we will sue if you try to take it"?

Would they have any chance of preserving their control of the technology through the courts?

Would they be able to just pack up shop & move operations to Ireland, or wherever?

I also wonder how distant the current OpenAI case is from the hypothetical nuclear-suitcase-bomb case, in terms of property rights & moves available to the private firm.

Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T16:16:20.201Z · score: 3 (2 votes) · LW · GW

Interesting.

My intuition is that this would work better without a bureaucratic intermediary administering it.

Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T00:56:48.639Z · score: 1 (1 votes) · LW · GW

DM'd you