Posts

[Link] Truth-telling is aggression in zero-sum frames (Jessica Taylor) 2019-09-11T05:53:59.188Z · score: 10 (6 votes)
[Link] Book Review: Reframing Superintelligence (SSC) 2019-08-28T22:57:09.455Z · score: 47 (17 votes)
What supplements do you use? 2019-07-28T17:01:30.441Z · score: 18 (8 votes)
If physics is many-worlds, does ethics matter? 2019-07-10T15:32:56.085Z · score: 14 (9 votes)
What's state-of-the-art in AI understanding of theory of mind? 2019-07-03T23:11:34.426Z · score: 15 (7 votes)
Cash prizes for the best arguments against psychedelics being an EA cause area 2019-05-10T18:24:47.317Z · score: 21 (8 votes)
Complex value & situational awareness 2019-04-16T18:46:22.414Z · score: 8 (5 votes)
Microcosmographia excerpt 2019-03-29T18:29:14.239Z · score: 16 (4 votes)
[Link] OpenAI on why we need social scientists 2019-02-19T16:59:32.319Z · score: 14 (7 votes)
Aphorisms for meditation 2019-02-18T17:47:05.526Z · score: 8 (5 votes)
Who owns OpenAI's new language model? 2019-02-14T17:51:26.367Z · score: 18 (7 votes)
Individual profit-sharing? 2019-02-13T17:58:41.388Z · score: 11 (2 votes)
What we talk about when we talk about life satisfaction 2019-02-04T23:52:38.052Z · score: 9 (6 votes)
Is intellectual work better construed as exploration or performance? 2019-01-25T21:59:28.381Z · score: 14 (4 votes)
No standard metric for CFAR workshops? 2018-09-06T18:06:00.021Z · score: 12 (5 votes)
On memetic weapons 2018-09-01T03:25:36.489Z · score: 44 (25 votes)
Doing good while clueless 2018-02-15T16:02:27.060Z · score: 36 (9 votes)

Comments

Comment by ioannes_shade on Why are the people who should be doing safety research, but aren’t, doing something else? · 2019-08-29T23:38:03.870Z · score: 1 (1 votes) · LW · GW

cf. https://www.lesswrong.com/posts/wkF5rHDFKEWyJJLj2/link-book-review-reframing-superintelligence-ssc

Plausibly a lot of them have something like Drexler's or Hanson's view, such that safety research doesn't seem super-urgent to them & isn't aligned with their comparative advantage.

Comment by ioannes_shade on A Personal Rationality Wishlist · 2019-08-28T22:51:03.366Z · score: 2 (2 votes) · LW · GW
But presumably it’s possible to be too relaxed, calm, and/or happy, and one should instead be anxious, angry, and/or sad. How can I tell when this is the case, and what should I do to increase my neuroticism in-the-moment?

cf. https://neuroticgradientdescent.blogspot.com/2019/07/core-transformation.html

Specifically:

This stuff is mutually reinforcing with ego's 'forever' identity based narratives. 'If I relax then I become the sort of person who is just relaxed all the time and never does anything AHHHH!' Whereas what actually happens is that given the ability to choose which stresses to take on, rather than it being an involuntary process, we choose a lot better in apportioning our efforts to the things we care about. One of the noticeable changes is that people take on fewer projects, but put more effort into those they do take on. Most of us, if we were taking a rigorous accounting, would be forced to admit that our project start:project finish ratio is quite bad by default. Core Transformation puts us directly in touch with these and potentially lots of other objections. The point isn't to sweep them under the rug but to identify the true content of these objections and figure out how we want to engage with that while letting the non-true parts drop away once all objecting parts are actually satisfied.

h/t @romeostevensit

Comment by ioannes_shade on Ought: why it matters and ways to help · 2019-08-09T00:28:24.205Z · score: 3 (2 votes) · LW · GW

Though we're still actively hiring for the senior web developer role: https://ought.org/careers/web-developer

Comment by ioannes_shade on Building up to an Internal Family Systems model · 2019-08-05T16:30:41.841Z · score: 4 (2 votes) · LW · GW

I'm finding it fruitful to consider the "exiles" discussion in this post alongside Hunting the Shadow.

Comment by ioannes_shade on Open problems in human rationality: guesses · 2019-08-02T21:23:50.182Z · score: 2 (2 votes) · LW · GW
Try harder to learn from tradition than you have been on the margin. Current = noisy.

What does "Current = noisy" mean here?

Comment by ioannes_shade on Open problems in human rationality: guesses · 2019-08-02T21:19:36.111Z · score: 1 (1 votes) · LW · GW
Funding people who are doing things differently from how you would do them is incredibly hard but necessary. EA should learn more Jessica Graham

What does "learn more Jessica Graham" mean?

Comment by ioannes_shade on What supplements do you use? · 2019-08-01T22:57:08.130Z · score: 1 (1 votes) · LW · GW
A majority of these choices are influenced by Bredesen's book The End of Alzheimers, or by a prior source with similar advice.

Oh interesting. Do you know if anyone's done an epistemic spot-check of The End of Alzheimer's?

Comment by ioannes_shade on What supplements do you use? · 2019-07-30T17:04:35.702Z · score: 1 (1 votes) · LW · GW
... health risks of fish oil while linking to a page saying fish oil doesn't contain Mercury. Is that not the health risk you were thinking of?

No good reason. I stopped taking it over health-risk concerns like mercury (plus I wasn't noticing any effect).

I think I'm a bit paranoid about heavy metals from fish. Probably irrationally so.

Comment by ioannes_shade on What supplements do you use? · 2019-07-28T23:41:44.025Z · score: 3 (2 votes) · LW · GW

Thanks! This meta-analysis of metformin makes it seem promising.

Comment by ioannes_shade on Being the (Pareto) Best in the World · 2019-07-28T17:19:58.746Z · score: 1 (1 votes) · LW · GW

cf. Talent Stacks

Comment by ioannes_shade on Ought: why it matters and ways to help · 2019-07-25T23:55:26.032Z · score: 13 (5 votes) · LW · GW

(I'm helping Ought hire for the web dev role.)

Ought is based in SF (office in North Beach).

Ideally we'd find someone who could work out of the SF office, but we're open to considering remote arrangements. One of our full-time researchers is based remotely and periodically visits the SF office to co-work.

Comment by ioannes_shade on What's state-of-the-art in AI understanding of theory of mind? · 2019-07-11T04:49:03.328Z · score: 1 (1 votes) · LW · GW
And then it's trivial to find a means to dispose of the threat, humans are fragile and stupid and have created a lot of ready means of mass destruction.

If by "a lot of ready means of mass destruction" you're thinking of nukes, it doesn't seem trivial to design a way to use nukes to destroy / neutralize all humans without jeopardizing the AGI's own survival.

We don't have a way of reliably modeling the results of many simultaneous nuclear blasts, and it seems like the AGI probably wouldn't have a way to reliably model this either unless it ran more empirical tests (which would be easy to notice).

It seems like an AGI wouldn't execute a "kill all humans" plan unless it was confident that the plan would, in expectation, give it a higher chance of survival than not executing it. I don't see how an AGI could become confident about high-variance "kill all humans" plans like using nukes without much better predictive models than we have. (And it seems like building those models would require more empirical data about what multiple simultaneous nuclear explosions actually do.)

Comment by ioannes_shade on What's state-of-the-art in AI understanding of theory of mind? · 2019-07-11T03:21:09.341Z · score: 1 (1 votes) · LW · GW

Wouldn't an AI following that procedure be really easy to spot? (Because it's not deceptive, and it just starts trying to destroy things it can't predict as it encounters them.)

Comment by ioannes_shade on On alien science · 2019-06-02T15:07:59.829Z · score: 6 (4 votes) · LW · GW
Firstly, on a historical basis, many of the greatest scientists were clearly aiming for explanation not prediction.

Could you expand a bit more on how you view explanation as distinct from prediction?

(As I think about the concepts, I'm finding it tricky to draw a crisp distinction between the two.)

Comment by ioannes_shade on Evidence for Connection Theory · 2019-05-28T18:10:05.439Z · score: 9 (3 votes) · LW · GW

Here's an archived version of the doc.

Comment by ioannes_shade on Habryka's Shortform Feed · 2019-05-19T15:58:48.860Z · score: 1 (1 votes) · LW · GW

See Sinclair: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"

Comment by ioannes_shade on Literature Review: Distributed Teams · 2019-05-07T21:10:29.721Z · score: 0 (3 votes) · LW · GW

cf. https://en.wikipedia.org/wiki/Knuth_reward_check

Comment by ioannes_shade on Open Problems in Archipelago · 2019-04-17T17:57:01.825Z · score: 1 (3 votes) · LW · GW

lol

Comment by ioannes_shade on The Hard Work of Translation (Buddhism) · 2019-04-13T20:18:52.020Z · score: 4 (2 votes) · LW · GW

I agree with your basic point here, though I have some nits to pick about your characterization of Zen :-)

Comment by ioannes_shade on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-04-06T20:07:46.816Z · score: 1 (1 votes) · LW · GW

Seems somewhat related: Liberal Radicalism: A Flexible Design For Philanthropic Matching Funds

Especially interesting because the authors are rich enough & credible enough to stand up a big project here, if they decide to.

Comment by ioannes_shade on The Case for The EA Hotel · 2019-04-01T15:47:39.408Z · score: 11 (5 votes) · LW · GW

Also Vipul's donation report is interesting + helpful.

Comment by ioannes_shade on The Case for The EA Hotel · 2019-03-31T16:16:37.611Z · score: 7 (5 votes) · LW · GW

Note there's also some good discussion about this post over on the EA Forum.

Comment by ioannes_shade on List of Q&A Assumptions and Uncertainties [LW2.0 internal document] · 2019-03-30T17:03:28.314Z · score: 11 (4 votes) · LW · GW

Curious how LessWrong sees its Q&A function slotting in amongst Quora, Stack Exchange, Twitter, etc.

(There are already a lot of question-answering platforms; I'm not clear on the business case for another one.)

Comment by ioannes_shade on "Other people are wrong" vs "I am right" · 2019-02-24T15:50:58.947Z · score: 5 (3 votes) · LW · GW

This is awesome.

Reminds me of Ben Kuhn's recent question on the EA Forum – Has your EA worldview changed over time?

Comment by ioannes_shade on Aphorisms for meditation · 2019-02-23T18:59:56.766Z · score: 3 (2 votes) · LW · GW

Makes sense, thanks for laying out some of your reasoning!

I think there's a lot of inferential distance between Zen practices & the median LessWrong post. Bridging that distance would be a big project. Maybe one day I'll take a stab at it – in the meantime it makes sense for stuff like this to live in the personal blog section.

Comment by ioannes_shade on Some disjunctive reasons for urgency on AI risk · 2019-02-16T01:57:06.181Z · score: 1 (1 votes) · LW · GW

Related, on the EA Forum. (I am the post's author.)

Comment by ioannes_shade on Some disjunctive reasons for urgency on AI risk · 2019-02-15T23:05:04.747Z · score: 2 (2 votes) · LW · GW
... even if my property rights are technically secure, I don't know how I would secure my mind.

Training up one's concentration & present-moment awareness is probably helpful for this.

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T17:08:41.830Z · score: 1 (1 votes) · LW · GW

Haven't read it yet, but here's an academic review of "Federal Patent Takings," which seems relevant.

Comment by ioannes_shade on Why do you reject negative utilitarianism? · 2019-02-15T16:58:18.887Z · score: 2 (2 votes) · LW · GW

I like tranquilism as a view preserving some of the attractive aspects of negative utilitarianism whilst sidestepping some of its icky implications.

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T15:52:19.281Z · score: 1 (1 votes) · LW · GW

Thanks, this is helpful.

What I'm really puzzled by is the extreme counterfactuality of the question.

It doesn't feel too extreme to me (powerful new dual-use technology), but probably our priors are just different here :-)

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T06:57:29.071Z · score: 3 (2 votes) · LW · GW

Right, this is the area I'm curious about.

I imagine that if a private firm created a process for making nuclear suitcase bombs with a unit cost < $100k, the US government would show up and be like "nice nuclear-suitcase-bomb process – that's ours now. National security reasons, etc."

(Something like this is what I meant by "requisition.")

I wonder what moves the private firm could make, in that case. Could they be like "No, sorry, we invented this process, it's protected by our patents, and we will sue if you try to take it"?

Would they have any chance of preserving their control of the technology through the courts?

Would they be able to just pack up shop & move operations to Ireland, or wherever?

I also wonder how distant the current OpenAI case is from the hypothetical nuclear-suitcase-bomb case, in terms of property rights & moves available to the private firm.

Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T16:16:20.201Z · score: 3 (2 votes) · LW · GW

Interesting.

My intuition is that this would work better without a bureaucratic intermediary administering it.

Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T00:56:48.639Z · score: 1 (1 votes) · LW · GW

DM'd you

Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T00:20:15.251Z · score: 1 (1 votes) · LW · GW

Very interesting – professional poker didn't occur to me!

I wonder whether these agreements are enforced by legal contract. Sounds like it, from the tone of the article.

Comment by ioannes_shade on The RAIN Framework for Informational Effectiveness · 2019-02-13T18:11:08.876Z · score: 3 (3 votes) · LW · GW

This is great.

Could you link to some specific examples of content that hits the different sweet spots of the framework?

Comment by ioannes_shade on Attacking enlightenment · 2019-01-29T18:18:49.253Z · score: 3 (2 votes) · LW · GW

I'm reading Altered Traits & have been impressed with its epistemological care. Perhaps helpful for your project?

Comment by ioannes_shade on Building up to an Internal Family Systems model · 2019-01-26T17:55:12.940Z · score: 6 (3 votes) · LW · GW
So I finally read up on it, and have been successfully applying it ever since.

Could you give some examples of where you've been applying IFS and how it's been helpful in those situations?

Comment by ioannes_shade on What math do i need for data analysis? · 2019-01-19T15:30:32.494Z · score: 4 (3 votes) · LW · GW

Perhaps check out dataquest.io, which teaches the data scientist's basic skillset.

Comment by ioannes_shade on What makes people intellectually active? · 2019-01-19T04:28:14.324Z · score: 1 (1 votes) · LW · GW

"the mid-19605"

Should be "1960s", I think

Comment by ioannes_shade on Some Thoughts on My Psychiatry Practice · 2019-01-19T04:26:40.111Z · score: 7 (6 votes) · LW · GW

Just some quick thoughts:

  • Housemates and other group-living arrangements can make living in cities affordable
  • Pets are expensive (though probably worth it for lots of people)
  • Flights are expensive (though deals can be had)
  • Car ownership is expensive
  • The internet can provide almost all media at high quality, with low inconvenience and minimal risk, for free

Comment by ioannes_shade on New edition of "Rationality: From AI to Zombies" · 2018-12-16T17:48:47.126Z · score: 9 (6 votes) · LW · GW

Leather-bound!

Comment by ioannes_shade on Good Samaritans in experiments · 2018-12-08T16:25:07.109Z · score: 5 (3 votes) · LW · GW

The ToC feature is dope :-)

Comment by ioannes_shade on AI Safety Research Camp - Project Proposal · 2018-11-19T20:05:40.781Z · score: 2 (2 votes) · LW · GW

Are there plans to do another one of these in 2019?

Comment by ioannes_shade on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-11-19T20:02:20.771Z · score: 3 (2 votes) · LW · GW

Thank you for doing this, and for giving feedback to all submissions!

Comment by ioannes_shade on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-11-19T19:56:43.032Z · score: 2 (3 votes) · LW · GW

Future Perfect is sorta this, but probably too mainstream / high-powered for the use-case you have in mind.

... and it looks like they're hiring a writer!

Comment by ioannes_shade on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-11-19T19:54:19.281Z · score: 1 (1 votes) · LW · GW

"that we should expect DeepMind et all to have some projects..."

et all should be et al.

Comment by ioannes_shade on No Really, Why Aren't Rationalists Winning? · 2018-11-07T19:25:29.452Z · score: 6 (3 votes) · LW · GW

cf. Antigravity Investments (investment advisor service for EAs), which recommends a passive index fund approach.

Comment by ioannes_shade on Anders Sandberg: "How to Shape the Future Sensibly" · 2018-10-17T20:16:52.788Z · score: 2 (4 votes) · LW · GW

Saw a version of this talk recently & can recommend it as worthwhile.

Also it's a joy to watch Anders present :-)

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-10-01T18:21:09.503Z · score: 1 (1 votes) · LW · GW

Thanks for this comment; I found it really useful :-)

I’m curious why you’re especially interested in Raven’s Progressive Matrices.

I'm interested in part because it's a performance measure rather than a self-report.

Also, speaking from my own experience: my performance on tests like Raven's has been heavily mediated by factors that don't seem directly related to g, and that I'd imagine could be affected by CFAR's curriculum.

e.g. I perform better on tests like Raven's when I'm feeling low-anxiety & emotionally coherent. (Seems plausible that CFAR could lower anxiety & increase emotional coherence.)

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-13T15:23:10.260Z · score: -1 (3 votes) · LW · GW
...is exactly the pattern I would expect from someone who was somewhat interested in answering but was busy.

Agreed.

It's also the pattern I'd expect from someone who wasn't interested in engaging, but wanted to give the impression that they've got it covered / already thought about this / have good reasons for doing what they're doing.

I'm not sure which is closer to the truth here.