Cash prizes for the best arguments against psychedelics being an EA cause area

2019-05-10T18:24:47.317Z · score: 19 (7 votes)
Comment by ioannes_shade on Literature Review: Distributed Teams · 2019-05-07T21:10:29.721Z · score: 0 (3 votes) · LW · GW

cf. https://en.wikipedia.org/wiki/Knuth_reward_check

Comment by ioannes_shade on Open Problems in Archipelago · 2019-04-17T17:57:01.825Z · score: 1 (3 votes) · LW · GW

lol

Complex value & situational awareness

2019-04-16T18:46:22.414Z · score: 8 (5 votes)
Comment by ioannes_shade on The Hard Work of Translation (Buddhism) · 2019-04-13T20:18:52.020Z · score: 4 (2 votes) · LW · GW

I agree with your basic point here, though have some nits to pick about your characterization of zen :-)

Comment by ioannes_shade on If a "Kickstarter for Inadequate Equlibria" was built, do you have a concrete inadequate equilibrium to fix? · 2019-04-06T20:07:46.816Z · score: 1 (1 votes) · LW · GW

Seems somewhat related: Liberal Radicalism: A Flexible Design For Philanthropic Matching Funds

Especially interesting because the authors are rich enough & credible enough to stand up a big project here, if they decide to.

Comment by ioannes_shade on The Case for The EA Hotel · 2019-04-01T15:47:39.408Z · score: 11 (5 votes) · LW · GW

Also Vipul's donation report is interesting + helpful.

Comment by ioannes_shade on The Case for The EA Hotel · 2019-03-31T16:16:37.611Z · score: 7 (5 votes) · LW · GW

Note there's also some good discussion about this post over on the EA Forum.

Comment by ioannes_shade on List of Q&A Assumptions and Uncertainties [LW2.0 internal document] · 2019-03-30T17:03:28.314Z · score: 4 (3 votes) · LW · GW

Curious how LessWrong sees its Q&A function slotting in amongst Quora, Stack Exchange, Twitter, etc.

(There are a lot of question-answering platforms currently extant; I'm not clear on the business case for another one.)

Microcosmographia excerpt

2019-03-29T18:29:14.239Z · score: 16 (4 votes)
Comment by ioannes_shade on "Other people are wrong" vs "I am right" · 2019-02-24T15:50:58.947Z · score: 5 (3 votes) · LW · GW

This is awesome.

Reminds me of Ben Kuhn's recent question on the EA Forum – Has your EA worldview changed over time?

Comment by ioannes_shade on Aphorisms for meditation · 2019-02-23T18:59:56.766Z · score: 3 (2 votes) · LW · GW

Makes sense, thanks for laying out some of your reasoning!

I think there's a lot of inferential distance between Zen practices & the median LessWrong post. Bridging that distance would be a big project. Maybe one day I'll take a stab at it – in the meantime it makes sense for stuff like this to live in the personal blog section.

[Link] OpenAI on why we need social scientists

2019-02-19T16:59:32.319Z · score: 14 (7 votes)

Aphorisms for meditation

2019-02-18T17:47:05.526Z · score: 8 (5 votes)
Comment by ioannes_shade on Some disjunctive reasons for urgency on AI risk · 2019-02-16T01:57:06.181Z · score: 1 (1 votes) · LW · GW

Related, on the EA Forum. (I am the post's author.)

Comment by ioannes_shade on Some disjunctive reasons for urgency on AI risk · 2019-02-15T23:05:04.747Z · score: 2 (2 votes) · LW · GW
... even if my property rights are technically secure, I don't know how I would secure my mind.

Training up one's concentration & present-moment awareness is probably helpful for this.

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T17:08:41.830Z · score: 1 (1 votes) · LW · GW

Haven't read it yet, but here's an academic review of "Federal Patent Takings," which seems relevant.

Comment by ioannes_shade on Why do you reject negative utilitarianism? · 2019-02-15T16:58:18.887Z · score: 1 (1 votes) · LW · GW

I like tranquilism as a view preserving some of the attractive aspects of negative utilitarianism whilst sidestepping some of its icky implications.

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T15:52:19.281Z · score: 1 (1 votes) · LW · GW

Thanks, this is helpful.

What I'm really puzzled by is the extreme counterfactuality of the question.

It doesn't feel too extreme to me (powerful new dual-use technology), but probably our priors are just different here :-)

Comment by ioannes_shade on Who owns OpenAI's new language model? · 2019-02-15T06:57:29.071Z · score: 3 (2 votes) · LW · GW

Right, this is the area I'm curious about.

I imagine that if a private firm created a process for making nuclear suitcase bombs with a unit cost < $100k, the US government would show up and be like "nice nuclear-suitcase-bomb process – that's ours now. National security reasons, etc."

(Something like this is what I meant by "requisition.")

I wonder what moves the private firm could make, in that case. Could they be like "No, sorry, we invented this process, it's protected by our copyright, and we will sue if you try to take it?"

Would they have any chance of preserving their control of the technology through the courts?

Would they be able to just pack up shop & move operations to Ireland, or wherever?

I also wonder how distant the current OpenAI case is from the hypothetical nuclear-suitcase-bomb case, in terms of property rights & moves available to the private firm.

Who owns OpenAI's new language model?

2019-02-14T17:51:26.367Z · score: 18 (7 votes)
Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T16:16:20.201Z · score: 3 (2 votes) · LW · GW

Interesting.

My intuition is that this would work better without a bureaucratic intermediary administering it.

Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T00:56:48.639Z · score: 1 (1 votes) · LW · GW

DM'd you

Comment by ioannes_shade on Individual profit-sharing? · 2019-02-14T00:20:15.251Z · score: 1 (1 votes) · LW · GW

Very interesting – professional poker didn't occur to me!

I wonder if these agreements are enforced by legal contract? Sounds like it, from the tone of the article.

Comment by ioannes_shade on The RAIN Framework for Informational Effectiveness · 2019-02-13T18:11:08.876Z · score: 3 (3 votes) · LW · GW

This is great.

Could you link to some specific examples of content that hits the different sweet spots of the framework?

Individual profit-sharing?

2019-02-13T17:58:41.388Z · score: 11 (2 votes)

What we talk about when we talk about life satisfaction

2019-02-04T23:52:38.052Z · score: 9 (6 votes)
Comment by ioannes_shade on Attacking enlightenment · 2019-01-29T18:18:49.253Z · score: 3 (2 votes) · LW · GW

I'm reading Altered Traits & have been impressed with its epistemological care. Perhaps helpful for your project?

Comment by ioannes_shade on Building up to an Internal Family Systems model · 2019-01-26T17:55:12.940Z · score: 6 (3 votes) · LW · GW
So I finally read up on it, and have been successfully applying it ever since.

Could you give some examples of where you've been applying IFS and how it's been helpful in those situations?

Is intellectual work better construed as exploration or performance?

2019-01-25T21:59:28.381Z · score: 14 (4 votes)
Comment by ioannes_shade on What math do i need for data analysis? · 2019-01-19T15:30:32.494Z · score: 4 (3 votes) · LW · GW

Perhaps check out dataquest.io, which teaches the data scientist's basic skillset.

Comment by ioannes_shade on What makes people intellectually active? · 2019-01-19T04:28:14.324Z · score: 1 (1 votes) · LW · GW

"the mid-19605"

Should be "1960s", I think

Comment by ioannes_shade on Some Thoughts on My Psychiatry Practice · 2019-01-19T04:26:40.111Z · score: 7 (6 votes) · LW · GW

Just some quick thoughts:

  • Housemates and other group-living arrangements can make living in cities affordable
  • Pets are expensive (though probably worth it for lots of people)
  • Flights are expensive (though deals can be had)
  • Car ownership is expensive
  • The internet can provision almost all media at high quality, low inconvenience, minimal risk, for free

Comment by ioannes_shade on New edition of "Rationality: From AI to Zombies" · 2018-12-16T17:48:47.126Z · score: 9 (6 votes) · LW · GW

Leather-bound!

Comment by ioannes_shade on Good Samaritans in experiments · 2018-12-08T16:25:07.109Z · score: 5 (3 votes) · LW · GW

The ToC feature is dope :-)

Comment by ioannes_shade on AI Safety Research Camp - Project Proposal · 2018-11-19T20:05:40.781Z · score: 2 (2 votes) · LW · GW

Are there plans to do another one of these in 2019?

Comment by ioannes_shade on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-11-19T20:02:20.771Z · score: 3 (2 votes) · LW · GW

Thank you for doing this, and for giving feedback to all submissions!

Comment by ioannes_shade on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-11-19T19:56:43.032Z · score: 2 (3 votes) · LW · GW

Future Perfect is sorta this, but probably too mainstream / high-powered for the use-case you have in mind.

... and it looks like they're hiring a writer!

Comment by ioannes_shade on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-11-19T19:54:19.281Z · score: 1 (1 votes) · LW · GW

"that we should expect DeepMind et all to have some projects..."

"et all" should be "et al."

Comment by ioannes_shade on No Really, Why Aren't Rationalists Winning? · 2018-11-07T19:25:29.452Z · score: 6 (3 votes) · LW · GW

cf. Antigravity Investments (investment advisor service for EAs), which recommends a passive index fund approach.

Comment by ioannes_shade on Anders Sandberg: "How to Shape the Future Sensibly" · 2018-10-17T20:16:52.788Z · score: 2 (4 votes) · LW · GW

Saw a version of this talk recently & can recommend it as worthwhile.

Also it's a joy to watch Anders present :-)

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-10-01T18:21:09.503Z · score: 1 (1 votes) · LW · GW

Thanks for this comment; I found it really useful :-)

I’m curious why you’re especially interested in Raven’s Progressive Matrices.

In part because it's a performance measure rather than a self-report measure.

Also, speaking from my own experience, my performance on tests like Raven's has been heavily mediated by factors that don't seem directly related to g, and that I'd imagine could be affected by CFAR's curriculum.

e.g. I perform better on tests like Raven's when I'm feeling low-anxiety & emotionally cohered. (Seems plausible that CFAR could lower anxiety & increase emotional coherence.)

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-13T15:23:10.260Z · score: -1 (3 votes) · LW · GW
...is exactly the pattern I would expect from someone who was somewhat interested in answering but was busy.

Agreed.

It's also the pattern I'd expect from someone who wasn't interested in engaging, but wanted to give the impression that they've got it covered / already thought about this / have good reasons for doing what they're doing.

I'm not sure which is closer to the truth here.

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-13T15:18:41.490Z · score: 1 (1 votes) · LW · GW

Do you mean "AI safety focused"?

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-13T03:27:45.705Z · score: 1 (1 votes) · LW · GW

Makes sense – there's really no commitment mechanism at play here.

I still find it disappointing though.

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-13T02:38:54.374Z · score: 1 (2 votes) · LW · GW
...good communication takes a lot of time.

Eh, I think good communication takes time, but not an inordinate amount of time.

For a consumer-facing organization like CFAR, being able to clearly articulate why you're doing what you're doing is a core competency, so I'd expect them to bring that to bear in a place like LessWrong, where a lot of CFAR stakeholders hang out.

LW isn’t a project any of the CFAR team work on so they wouldn’t naturally be checking LW or something or be trying to use the platform to talk about their research...

Sure, but they replied within a half hour of me posting.

The fast response + lack of follow-up feels more defensive than if they hadn't replied at all.

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-13T02:14:44.987Z · score: 1 (4 votes) · LW · GW

Just want to note that I'm sorta disappointed by CFAR's response here.

Dan responded very quickly to my initial post (very quick = within 30 minutes), pointing to CFAR's 2015 impact assessment. (I didn't know about the 2015 assessment and was grateful to be pointed to it.)

But as far as I can tell, no one affiliated with CFAR engaged with the discussion that followed. A bunch of follow-up questions fell out of that discussion; I'm sad that no one from CFAR fielded them.

I'm parsing the very quick initial response + lack of follow-on engagement as CFAR acting defensively, rather than engaging in open discourse. This is disappointing.

Also this is the internet – I'm open to the possibility that I'm misinterpreting things here.

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-07T03:12:41.476Z · score: 3 (1 votes) · LW · GW

Huh. I'm surprised that after finding significant changes on well-validated psychological instruments in the 2015 study, CFAR didn't incorporate these instruments into their pre- / post-workshop assessments.

Also surprised that they dropped them from the 2017 impact analysis.

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-07T01:03:58.662Z · score: 3 (2 votes) · LW · GW
... pre-paradigmatic research is the very sort where you’re trying to come up with a metric to validate, not the type of research where you start with one.

This makes sense, though even when doing the pre-paradigmatic thing, it seems useful & low-cost to benchmark your performance on existing metrics.

In this specific case, I bet workshop participants would actually find it fun + worthwhile to take before/after Big Five & Raven's surveys, so it could be a value-add in addition to a benchmarking metric.

Comment by ioannes_shade on No standard metric for CFAR workshops? · 2018-09-06T19:51:27.036Z · score: 2 (5 votes) · LW · GW

Thanks, I didn't know about the 2015 study :-)

  • Any plans to track the 2015 study measures on a rolling basis? (i.e. for every workshop cohort?) Seems useful to measure performance over time.
  • Why did the 2017 impact report move away from measuring Big 5 traits? (and the other measures looked at in 2015?)
  • Any thoughts re: using Raven's Matrices?

No standard metric for CFAR workshops?

2018-09-06T18:06:00.021Z · score: 12 (5 votes)
Comment by ioannes_shade on On memetic weapons · 2018-09-06T16:08:02.665Z · score: 0 (2 votes) · LW · GW

Perhaps the crux here is how bad we think clickbait-y titles are.

I think clickbait-y titles are bad if the article's content is low quality. I don't think they're bad otherwise.

Calling it memetic weapon sounds like someone wants to put it on a blacklist.

There are other forms of censorship which are worrisome, though softer than a blacklist.

Comment by ioannes_shade on Great Founder Theory · 2018-09-06T16:01:18.588Z · score: 10 (3 votes) · LW · GW
The creation of functional institutions is the means by which people are hugely impactful.

Curious what you think of hugely impactful people who didn't create institutions:

  • Gautama Buddha
  • Socrates
  • Jesus
  • ...
  • Gauss
  • Wittgenstein
  • Gödel

It seems like the catalyst for impactful change often comes from a person who is at best indifferent to institution-building.

Maybe you're arguing that the bulk of the impact should be attributed to the institution-builders who followed? (the Sangha, Plato, Paul...)

Comment by ioannes_shade on Great Founder Theory · 2018-09-06T15:54:10.105Z · score: 1 (1 votes) · LW · GW
The term institution is not synonymous with the concept of empire (https://medium.com/@samo.burja/empire-theory-part-1-competitive-landscape-b0b1b3bbce9e), though they can overlap in some cases.

FYI broken link

Comment by ioannes_shade on On memetic weapons · 2018-09-06T02:14:28.733Z · score: 1 (1 votes) · LW · GW

I don't think anyone is motivated to explicitly censor talking about the memetic weapon (by putting it on a blacklist or something).

But I do think there's a good chance that memetic weaponry gets deployed against discussion about the memetic weapon.

Comment by ioannes_shade on On memetic weapons · 2018-09-05T23:19:46.702Z · score: 1 (1 votes) · LW · GW

Looks like the next piece in that sam[]zdat sequence is even more on point.

Comment by ioannes_shade on Psychology Replication Quiz · 2018-09-02T19:08:01.757Z · score: 1 (1 votes) · LW · GW

34 points – 17 out of 21 studies. I mostly looked at effect sizes & p-values, and also thought about whether the proposed causal mechanism seemed plausible.
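
For illustration, here's a minimal Python sketch of that informal heuristic. The cutoff values and the function itself are my own assumptions for the sake of the example, not anything specified by the quiz:

```python
# Rough sketch of the informal heuristic described above: guess that a study
# replicates when its evidence looks strong and its mechanism seems plausible.
# The cutoffs below are illustrative assumptions, not calibrated values.

def guess_replicates(effect_size: float, p_value: float, mechanism_plausible: bool) -> bool:
    """Return True if we'd bet the study replicates."""
    strong_effect = effect_size >= 0.4   # assumed cutoff for a decent-sized effect
    convincing_p = p_value <= 0.01       # p-values hovering just under 0.05 are a red flag
    return mechanism_plausible and strong_effect and convincing_p

# Example: a small effect that barely cleared p < .05 gets flagged as shaky.
print(guess_replicates(effect_size=0.15, p_value=0.04, mechanism_plausible=True))  # False
```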

Comment by ioannes_shade on On memetic weapons · 2018-09-02T18:37:34.749Z · score: 6 (2 votes) · LW · GW
Again, who are we talking about? Damore-like? Peterson and the whole of the IDW?

As stated in the post, I'm mostly worried about people who start self-policing their speech instead of speaking openly about what they believe. I think there are probably a lot of people like this.

Who is the Left you are talking about?

I have in mind a pretty broad swath, including:

  • Most university administrations
  • Most of the Bay Area tech industry
  • Most of the LA entertainment industry
  • About half of the D.C. lobbying & think tank industry
  • Not sure about New York... maybe 30-40% of Wall Street?

The problem seems to be with the discourse norms of those communities – what is okay to talk about & what isn't in those places. I don't yet have a good model of who sets & maintains those norms.

Is the DSA guilty of what you see happening?

I don't know very much about the DSA, but from a quick scan of their twitter, I'd guess they are within the discourse-sphere I'm worried about.

Comment by ioannes_shade on On memetic weapons · 2018-09-02T18:26:24.681Z · score: 16 (6 votes) · LW · GW
(updated after serious down-votes)

For what it's worth, I'm sad that you're getting down-voted so much.

I'm reading you as engaging in good faith from a different starting viewpoint, and I'd like to see more of that kind of thing :-)

Comment by ioannes_shade on On memetic weapons · 2018-09-01T18:52:24.207Z · score: 9 (4 votes) · LW · GW
do you know of any evidence that people's minds were changed significantly or mostly due to debate/discussion?

I think for both gay rights & cannabis advocacy, the model that best explains what happened goes something like:

  • Activists do a bunch of public education & direct action to push on the issue
  • The work of the activists moves the Overton window such that more people feel comfortable coming out (as gay, as cannabis users)
  • More & more people come into personal contact with the members of the group in question (gay people, cannabis users)
  • People update their views about the issue via personal interactions with someone they know, who turns out to be a member of the group in question

Not evidence, just a model that might explain how a lot of opinion change happens.

... shouting down the milos of the world is bad is evidence that talk is what had changed the world.

I think shouting down Milos is important & should keep happening to some extent (though I tend to lean towards reasoned discussion & indoor voices).

I also think that too many people are getting pattern-matched as Milos, and shouting down people who have been mis-typed as Milos has negative consequences (via the mechanism I sketched out in the post).

On memetic weapons

2018-09-01T03:25:36.489Z · score: 44 (25 votes)

Doing good while clueless

2018-02-15T16:02:27.060Z · score: 36 (9 votes)