Posts

Why a Theory of Change is better than a Theory of Action for achieving goals 2017-01-09T13:46:19.439Z · score: 3 (4 votes)
Crony Beliefs 2016-11-03T20:54:07.716Z · score: 12 (13 votes)
[LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR 2016-09-07T02:21:32.442Z · score: 6 (7 votes)
[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions 2015-01-28T15:29:07.226Z · score: 4 (7 votes)
The Useful Definition of "I" 2014-05-28T11:44:23.789Z · score: 4 (11 votes)

Comments

Comment by ete on I Want To Live In A Baugruppe · 2017-03-17T04:09:02.498Z · score: 4 (4 votes) · LW · GW

This probably won't make sense in the early stages when there's just a small team setting things up, but in the mid-term the accelerator project (whirlwind tour) hopes to seed a local rationalist community in a lower-cost location than the Bay (the current top candidate is the Canary Islands). I imagine most would prefer to stay in more traditional places, but perhaps this would appeal to some rationalist parents?

Comment by ete on [deleted post] 2017-01-25T21:37:42.200Z

Weird Sun Twitter also has a blog, which you may want to include.

Comment by ete on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-21T21:27:42.898Z · score: 1 (1 votes) · LW · GW

Yeah, I'm worried about this. Switching before it's better than current LW is bad; switching once it's better than current LW is okay but might waste the "reopening!" PR event; switching once it's at the full feature set is great but possibly late.

Perhaps switch once it's as good, but don't make a big deal of it? Then make a big deal at some semi-arbitrary point in the future with the release of full 2.0.

Comment by ete on Why a Theory of Change is better than a Theory of Action for achieving goals · 2017-01-09T16:06:42.591Z · score: 1 (1 votes) · LW · GW

Many people I talk with profess a strong desire to change the world for the better. This often manifests in their decision processes as something like "out of the life paths and next steps I have categorized as 'things I might do', which one pattern matches to helping best?".

This has for a long time felt like a strategic error of some kind. Reading Aaron Swartz's explanation of Theory of Change vs Theory of Action helped crystallize why.

Comment by ete on The Adventure: a new Utopia story · 2016-12-26T17:19:11.018Z · score: 1 (1 votes) · LW · GW

I also know a good number of people from subcultures where nature is the foundational moral value, a few from ones where family structures are core (who'd likely be horrified by altering them at will), and some from the psychonaut space where mindstates and exploring them are the primary focus. I'd also guess that people for whom religions are central would find that forked selves committing what they consider sins breaks this utopia for them. These groups seem to have genuine value differences, and would not be okay with a future which does not carve out a space for them. "Adventure" and a bunch of specifics here point at a very wide region of funspace, but one centered around our culture to some extent.

There's some rich territory in the direction of people who want reality to be different in reasonable ways coming together to work out what to do. The suffering-reduction vs nature-preservation bundle seems the largest, but there's also complex value vs raw qualia maximization. Actually, this kinda fits into a 2x2?

Moral plurality is a moral necessity, and signalling it clearly seems crucial, since we'd be taking everyone else along for the ride.

Edit: This is touched on by characters exchanging values, and that seems very good.

Comment by ete on The Adventure: a new Utopia story · 2016-12-26T14:05:30.285Z · score: 1 (1 votes) · LW · GW

Thank you, I enjoyed this.

I'd be interested in seeing moral trade feature explicitly in things like this. For example, there are many people who claim to have "create a hedonium shockwave" or "minimize suffering" as their goal rather than the complex human value thing, and demonstrating that it's possible (and why it's good) to share the future seems important.

Comment by ete on Superintelligence: The Idea That Eats Smart People · 2016-12-24T23:27:34.900Z · score: 3 (3 votes) · LW · GW

"The discussion at HN seems mostly critical of it, so it's not clear to me how much else needs to be added."

The memes got spread far and wide. A lot of AI safety people will run into arguments with this general form, and they mostly won't have read enough comments to form a good reply (also, most criticism does not target the heart of the argument, because the other parts are so much weaker, so it will be unconvincing where it's needed most). Some can come up with a reply to the heart on the fly, but it seems fairly valuable to have this on LW to spread the antibody memes.

"Sure, but... what can you do to convince someone who doesn't evaluate arguments? You can't use the inside view to convince someone else that they should abandon the outside view, because the outside view specifically ignores inside view arguments."

Show them outside-view-style arguments? People are bounded agents, and there are a bunch of things in the direction of epistemic learned helplessness which make them not want to load arbitrary complex arguments into their brain. But this should not lead them to reject reference-class comparisons as evidence that the topic is worth a closer look, or as grounds for dropping an extreme prior against it (though maybe in actual humans this mostly fails anyway).

Admittedly, this does not have an awesome hit rate for me, maybe 1/4? I'm interested in ideas for better replies.

Comment by ete on Superintelligence: The Idea That Eats Smart People · 2016-12-24T01:14:10.042Z · score: 0 (0 votes) · LW · GW

My post could be taken as a reply.

Comment by ete on Superintelligence: The Idea That Eats Smart People · 2016-12-24T00:58:44.799Z · score: 5 (5 votes) · LW · GW

This got over 800 points on HN. Having a good reply seems important, even if a large portion of it is... scattergun, admittedly and intentionally trying to push a point, and not good reasoning.

The core argument is correct: the several reference classes that Superintelligence and AI safety ideas fall into (promise of potential immortality, impending apocalypse, etc.) are full of risks of triggering biases; other sets of ideas in this area don't hold up to scrutiny; and it has other properties that should make you wary. It is entirely reasonable to take this as Bayesian evidence against the ideas. I have run into this as the core reason for rejecting this cluster of beliefs several times, from people with otherwise good reasoning skills.

Given limited time to evaluate claims, I can see how relying on this kind of reference class heuristic seems like a pretty good strategy, especially if you don't think black swans are something you should try hard to look for.

My reply is that:

  1. This only provides some evidence. In particular, there is a one-time update available from being in a suspect reference class, not an endless stream from an experiment you can repeat to gain increasing confidence (the odds form sketched after this list makes that precise). Make it clear you have made this update (and actually make it).
  2. There are enough outside-view things which indicate that it's different from the other members of the suspect reference classes that strongly rejecting it seems unreasonable. Support from a large number of visibly intellectually impressive people is the core thing to point to here (not as an attempt to prove it or argue from authority, just to show it's different from e.g. the 2012 stuff).
  3. (only applicable if you personally have a reasonably strong model of AI safety) I let people zoom in on my map of the space, and attempt to break the ideas with nitpicks. If you don't personally have a clear model, that's fine, but be honest about where your confidence comes from.
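
To make the one-time update in (1) concrete, in odds form (my sketch, not from the original exchange; take H = "the ideas are wrong" and E = "the ideas fall into a suspect reference class"):

    P(H|E) / P(¬H|E) = [P(E|H) / P(E|¬H)] × [P(H) / P(¬H)]

The likelihood ratio gets applied exactly once: conditioning on E a second time changes nothing, since P(H|E,E) = P(H|E). The evidence only counts once, however often the pattern match is re-noticed.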

To summarize: Yes, it pattern matches to some sketchy things. It also has characteristics they don't, like being unusually appealing to smart thoughtful people who seem to be trying to seek truth and abandon wrong beliefs. Having a moderately strong prior against it based on this is reasonable, as is having a prior for it, depending on how strongly you weight overtly impressive people publicly supporting it. If you don't want to look into it based on that, fair enough, but I have and doing so (including looking for criticism) caused me to arrive at my current credence.

Comment by ete on Open thread, Dec. 19 - Dec. 25, 2016 · 2016-12-23T23:52:56.429Z · score: 2 (2 votes) · LW · GW

RSS feed: Not yet. Will add a +1 to that bug.

Arbital Slack is probably the best place currently.

Comment by ete on Kidney Trade: A Dialectic · 2016-11-28T23:04:26.448Z · score: 1 (1 votes) · LW · GW

Mostly horror, though it's a decent point in favor of setting up better legal options for organ transplants, in order to reduce the incentives towards that kind of system.

Comment by ete on Kidney Trade: A Dialectic · 2016-11-22T20:01:16.280Z · score: 2 (2 votes) · LW · GW

Agree with the overall thrust, but

"letting people sell their organs after they're dead doesn't seem like it would increase the supply that much"

seems very suspect. If you could sell the rights to your organs, there would be an incentive to set up a "pay people to be signed up for organ donation" business. This is also not harmful to the donor, unlike selling kidneys.

Also, for added horror, a link to this may be worth including somehow.

Comment by ete on [LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR · 2016-09-07T13:28:24.017Z · score: 1 (1 votes) · LW · GW

The books have different focuses, and will have different blurbs. The first book will have a Science! focused blurb since that's what it contains, and the later ones will have blurbs more appropriate to their content.

Edit: Added

The blurbs should fit the volume. A non-exhaustive list of possible things to emphasize:

  • First: science, rationality, agentyness, seeing the world through fresh eyes and strategic thinking.
  • Second: seeing the darkness in the world, heroism, caring, psychology, and rationality.
  • Third: maturing, realizing the stakes, making difficult moral choices, realizing you're not perfect but still trying, and rationality.

Comment by ete on [LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR · 2016-09-07T13:26:56.742Z · score: 2 (2 votes) · LW · GW

I'll try a printing company, and look into other options if it does not work.

A modified version of one of the fan PDFs, yes.

Comment by ete on [LINK] Collaborate on HPMOR blurbs; earn chance to win three-volume physical HPMOR · 2016-09-07T02:24:22.535Z · score: 3 (3 votes) · LW · GW

Possible quotes:

"It's a terrific series, subtle and dramatic and stimulating. Smart guy, good writer. Poses hugely terrific questions that I, too, had thought of... and a number that I hadn't. I wish all Potter fans would go here, and try on a bigger, bolder and more challenging tale."

- David Brin

"This is a book whose title still makes me laugh and yet it may just turn out to be one of the greatest books ever written. The writing is shockingly good, the plotting is some of the best in all of literature, and the stories are simply pure genius. I fear this book may never get the accolades it deserves, because it's too hard to look past the silly name and publishing model, but I hope you, dear reader, are wiser than that! A must-read."

- Aaron Swartz 

"Oh Thoth Trismegistus, oh Ma'at, oh Ganesha, oh sweet lady Eris... I have not laughed so hard in years! Read it and laugh. Read it and learn. Eliezer re-invents Harry Potter as a skeptic genius who sets himself the task of figuring out just how all this 'magic' stuff works. Strongly recommended. And if you manage to learn about sources of cognitive sias like the Planning Fallacy and the Bystander Effect (among others) while your sides are hurting with laughter, so much the better."

- Eric S. Raymond

"Harry Potter and the Methods of Rationality is the sort of thing that would technically be called a fanfic, but is more appropriately named a work of sheer genius. It takes the basic Harry Potter story and asks 'what if, instead of a boy locked in a closet, he was a child genius raised by a loving pair of adoptive parents who brought science, reason, and modern thinking to the wizarding world?' LOVE. IT. Read it, seriously. It will change your way of looking at the world."

- Rachel Aaron

Comment by ete on Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas · 2016-07-03T01:40:31.281Z · score: 0 (0 votes) · LW · GW

Among other forms, yes.

Comment by ete on Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas · 2016-06-12T19:25:45.205Z · score: 6 (6 votes) · LW · GW

Excellent post. Agree with all major points.

"I think Less Wrong experienced the reverse of the evaporative cooling EY feared, where people gradually left the arena as the proportional number of critics in the stands grew ever larger."

I'd think it was primarily not the proportional number of critics, but the lower quality of criticism and great users getting tired of replying to/downvoting it. Most of the old crowd of LessWrongers welcomed well-thought-out criticism, but when people on the other side of an inferential distance gap try to imitate those high-criticism norms it is annoying to deal with, so the good users end up leaving. Especially if the lower-quality users are loud and more willing to use downvotes as punishment for things they don't understand.

Comment by ete on Lesswrong 2016 Survey · 2016-03-26T17:07:55.646Z · score: 41 (41 votes) · LW · GW

I have taken the survey.

Comment by ete on Updating towards the simulation hypothesis because you think about AI · 2016-03-12T13:31:57.885Z · score: 0 (0 votes) · LW · GW

Fun next question: Assuming this line of reasoning holds, what does it mean for EA?

Comment by ete on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T12:01:07.651Z · score: 0 (2 votes) · LW · GW

It's easily seen in this context because of the material covered and the fact that they don't try very hard to be subtle about it. In other contexts the same set of tricks may slip past, unless you have an example to pattern match to (not a whole new habit). It's immunization using a weak form of a memetic attack you're primed to defend against.

Comment by ete on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T11:36:06.988Z · score: 0 (2 votes) · LW · GW

I figured anti-polyamory propaganda did not need annotations on LessWrong. It's telling that all but one reply took it as something which needs to be suppressed/counterargued, despite me calling it propaganda and saying it was interesting as an example of the psychological tricks people pull. No one here is going to be taken in by this. I would not have posted it on Facebook or another more general-audience site.

I did not feel like writing it up in any detail would be a good use of my time; the examples to use for future pattern matching are pretty obvious in the video and don't need spelling out. I just wanted to drop the link here because I'd found it mildly enlightening, and figured others might have a similar experience. I take the negative feedback as meaning that content people are politically sensitive to is not welcome, even if it's rationality-relevant (resistance to manipulation tricks) and explicitly non-endorsed. That's unfortunate, but okay.

Comment by ete on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-23T00:49:59.819Z · score: 0 (2 votes) · LW · GW

Good point about Google. I've asked a question on Stack Exchange about how to avoid promoting a thing I've linked to. I'll switch it over as soon as I know how.

And, for the reasons in my reply to gjm: I think it's both interesting and useful rationality training to expose yourself to, and analyze, the psychological tools used in something you can easily pick out as propaganda. Here your brain will raise nice big red warning flags when it hits a trick, and you'll be more able to notice similar things which may have been used to reinforce false beliefs by your own side's propaganda. It's also a good idea to have accurate models of why people come to the views they do, and what reinforces their norms.

(I don't think this is super important at all, but I noticed a few tricks which I had not specifically thought about before, and figured other people may get something similar out of it.)

Comment by ete on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T21:07:42.974Z · score: 0 (2 votes) · LW · GW

Of course; they're very clearly trying to push a right-wing traditional-morals agenda, with a bit of dressing up to make it appear balanced to the unobservant. Their other major video is even more overtly propaganda.

I just find it fascinating to watch this kind of attempt at manipulating people's views, especially when a bunch of smart people have clearly tried to work out how to get their message across as effectively as possible. Being aware of those tricks seems likely to offer some protection against their being used to push me in ways to which I may be more susceptible, and knowing the details of what has been used to shape certain opinions means I am better prepared if I get into a debate with people who have been persuaded by them.

Comment by ete on The Talos Principle · 2016-02-22T18:54:36.034Z · score: 1 (1 votes) · LW · GW

Seconding this recommendation. The questions you are starting to ask are ones which have been considered here, and we mostly feel we have sound answers to them (or dissolutions of the questions). Chapters likely to be relevant to your current thoughts:

  • N — A Human's Guide to Words
  • O — Lawful Truth
  • P — Reductionism 101
  • R — Physicalism 201

Comment by ete on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T18:21:08.335Z · score: -1 (3 votes) · LW · GW

Anti-polyamory propaganda which clearly had some thought put into constructing a persuasive argument while doing lots of subtle or not-so-subtle manipulations. It's always interesting to observe which emotional/psychological threads this kind of thing tries to pull on.

Comment by ete on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T17:56:19.258Z · score: 1 (1 votes) · LW · GW

Fixed typo, thanks!

Comment by ete on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T17:19:43.624Z · score: 1 (1 votes) · LW · GW

You can put aaronsw into the LessWrong karma tool to see Aaron Swartz's post history, and read his most highly rated comments. I bet some of them would be good to spread more widely.

Comment by ete on Less Wrong Karma Chart Website · 2016-02-22T14:49:50.036Z · score: 2 (2 votes) · LW · GW

Aaron Swartz's highest ranked LW posts can be found with this. I bet a lot of people would love to be able to find his highest rated posts, and share some more widely.

Comment by ete on LessWrong 2.0 · 2015-12-08T23:08:13.003Z · score: 4 (4 votes) · LW · GW

So compared to when most things were either posted or crossposted to LW, it seems like we currently pay too little attention to aggregating and unifying content spread across many different places. If most of the action is happening offsite, and all that needs to be done is link to it, Reddit seems like the clear low-cost winner. Or perhaps it makes sense to try to do something like an online magazine, with an actual editor. (See Viliam's discussion of the censor role in an online community.) I note that FLI is hiring a news website editor (but they're likely more x-risk focused than I'm imagining).

It should not be extraordinarily hard to re-enable the ability to submit links which this site's software came with (i.e. make https://www.reddit.com/submit?selftext=false and https://www.reddit.com/submit?selftext=true both work), and run a bot which scrapes a list of blogs/tumblrs/etc. and auto-submits those links (the list could be drawn from the XML export of a wiki page, protected so only wiki admins can edit it, with a request thread requiring x currently recognized people to vouch for you before you're added, or a designated person/small team handling those requests).
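
As a sketch of the bot half (the feed list, submit endpoint, and token below are placeholder assumptions, not the site's actual API):

    # Poll a fixed list of RSS/Atom feeds and submit any links not seen before.
    # SUBMIT_URL and API_TOKEN are hypothetical placeholders; a real bot would
    # use whatever submission endpoint and auth the site exposes, and would
    # persist `seen` to disk between runs.
    import feedparser
    import requests

    FEEDS = [
        "https://example-rationalist-blog.com/rss",    # placeholder entries;
        "https://example-rationalist.tumblr.com/rss",  # draw these from the wiki page
    ]
    SUBMIT_URL = "https://lesswrong.example/api/submit"  # hypothetical
    API_TOKEN = "replace-me"

    seen = set()

    def poll_once():
        for feed_url in FEEDS:
            for entry in feedparser.parse(feed_url).entries:
                if entry.link in seen:
                    continue
                seen.add(entry.link)
                requests.post(SUBMIT_URL, data={
                    "title": entry.title,
                    "url": entry.link,
                    "token": API_TOKEN,
                })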

Then put top-rated posts on the front page, with reasonable turnover and the ability to see past ones, and LW again becomes the best place to rapidly check for new content across the wider rationality community.

Comment by ete on Open thread 7th september - 13th september · 2015-09-12T12:58:55.053Z · score: 4 (4 votes) · LW · GW

Does anyone know where I could find a steelmanned version of the pro-death arguments which people often bring up in discussions (around stagnation, inequality, etc.), written by someone who has thought about a post-singularity world?

Comment by ete on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-09T21:14:12.120Z · score: 0 (0 votes) · LW · GW

Messaging every time someone downvotes would probably be too much, but maybe the first time? Or, if we restrict downvotes to users with some amount of karma, then when they hit that level of karma?

Comment by ete on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-09T16:04:07.195Z · score: 1 (1 votes) · LW · GW

Another approach would be to not open downvoting to all users. On the Stack Exchange network, for example, you need a certain amount of reputation to downvote someone. I'd bet that a very large majority of the discouraging/unnecessary/harmful downvotes come from users who don't have above, say, 5-15 karma in the last month. Perhaps official downvote policies, messaged to a user the first time they pass that threshold, would help too.
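
As a sketch, the gate itself would be a trivial check (the threshold and field name are invented for illustration):

    DOWNVOTE_KARMA_THRESHOLD = 10  # somewhere in the 5-15 range suggested above

    def can_downvote(user):
        # Everyone can still upvote and reply; only users with enough
        # recent karma get the downvote button at all.
        return user.karma_last_month >= DOWNVOTE_KARMA_THRESHOLD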

This way involved users can still downvote bad posts, and the bulk of the problem is solved.

But it requires technical work, which may be an issue.

Comment by ete on Roadmap: Plan of Action to Prevent Human Extinction Risks · 2015-06-03T00:33:26.471Z · score: 2 (2 votes) · LW · GW

Once the explanation text exists, linking to the appropriate section of it (which would in turn link out to primary sources) would probably be better than linking to primary sources directly.

Compressing to "rationality" is reasonable, though most readers would not understand it at a glance. If you're trying to keep it very streamlined, just having this as a lot of pointers makes sense, though perhaps alongside rationality it'd be good to have a pointer that's more clearly directed at "make wanting to fix the future a thing which is widely accepted", rather than rationality's usual meaning of being effective. I'd also think it more appropriate for the A3 stream than A2, for what I have in mind at least.

I'd think creating world saviors from scratch would not be a viable option on some AI timelines. But getting good at picking up promising people in/leaving uni who have the right ethical streak, and putting them in a network full of the memes of EA/x-risk reduction, could plausibly give a turnaround from "person who is smart and will probably get some good job in some mildly evil corporation" to "person dedicated to trying to fix major problems/person in an earning-to-give career to fund interventions/person working towards top jobs to gain leverage to fix things from the inside" on the order of months, with an acceptable rate of success (even a few % changing life trajectory would be more than enough to pay back the investment of running that network, in terms of x-risk reduction).

Perhaps classifying things in terms of what should be the focus right now versus things that need more steps before they become viable projects would be more useful than attempting to give dates in general? Vague dates are better, but thinking about it more, I'm not sure even giving wide ranges really solves the problem; our ability to forecast several very important things is highly limited. I'm not sure about a good set of labels for this, but perhaps something like:

  • Immediate (aka: things which we could/are just working on right now)
  • Near future (single digit years? things which need some foundations, but are within sight)
  • Mid-term (unsure when we'll get there, may vary significantly from topic to topic, can get a rough idea of what will likely need doing but we can't get into the details until previous layers of tech/organization are ready)
  • Distant (getting much harder to forecast, major goals and projects which need large unpredictable tech advances and/or significant social changes before they're accessible)
  • Outcomes (ways things could end up, when one or more of the previous projects goes through)

Again, I'm not sure about these words, but using labels which point more to the number of steps and difficulty of forecasting seems like a thing to explore.

And thank you. I tend to take downvotes as very strong negative reinforcement; it helps that you find my post somewhat useful.

Comment by ete on Roadmap: Plan of Action to Prevent Human Extinction Risks · 2015-06-02T13:04:58.251Z · score: 0 (0 votes) · LW · GW

I'm curious why this was downvoted.

Comment by ete on Roadmap: Plan of Action to Prevent Human Extinction Risks · 2015-06-01T23:49:51.603Z · score: 0 (6 votes) · LW · GW

Comprehensive; I think it has the makings of a good resource, though it needs some polish. I'd imagine this would be much more useful to someone new to the ideas presented if it linked out to a bunch of papers/pages for expansion from most bulletpoints.

One thing I'd like to see added is spreading the memes of reason/evidence-based consequentialist decision making (with the large scale and the future included) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's kind of a prerequisite for getting much momentum behind the other, more direct, goals you've laid out.

  • Make it less and less acceptable to be partisan/tribal in a moloch-fueling way in the public sphere (starting with our corner of it, spreading opportunistically).
  • Grow EA, so there's funding for high-impact causes like some of the projects listed, and caring about solving problems is normalized.
  • Pick up potentially high-impact people with training and give them a support network of people who have an explicit goal to fix the world, like CFAR does, to create the kind of people to staff the projects.

In a few places, particularly in A1, you drift into general "things that would be good/cool" rather than staying focused on things applicable to countering an extinction risk. Maybe there is a link that I'm missing, but other than bringing more resources, I'm not sure what risk "planetary mining", for example, helps counter.

I'd advise against giving dates. AI timelines in particular could plausibly be much quicker or much slower than your suggestions, and it'd have massive knock-on effects. False confidence on specifics is not a good impression to give; maybe generalize them a bit?

"Negotiation with the simulators or prey for help"

pray?

Comment by ete on A Challenge: Maps We Take For Granted · 2015-05-31T00:03:45.005Z · score: 1 (1 votes) · LW · GW

Things that spring to mind (this assumes a few specific bits I'm not sure are taught at high school, because I was home-educated and skipped most of school):

  • Cowpox vaccine - a safe, readily available protection from a major threat. It may be hard to convince people to use it, but if you spread the idea and take credit it'd catapult you into fame (particularly medical).
  • Penicillin (you'd need to test a load of different moulds to find the right one), germ theory, sterilization, hygiene, other medical basics - once you've got a name for yourself, start spreading other modern medical standards.
  • Depending on the exact time period, I may be able to preempt gunpowder. This may set me up nicely with whichever king is in charge.
  • Attempt to build the basics of electricity. If you can find a magnet (magnetite?), making a basic dynamo is not too hard, and it would be an awesome way to show off your knowledge (though it may get you burned as a witch).
  • My everyday clothes would be really strange to someone from that era. Hide them until I find people who seem trustworthy enough/of the right character; build a secret society/cult with them as sacred objects, with top members knowing some of my actual story (avoid conflict with church/royals; perhaps claim I'd been sent back by God?). Find students, teach them everything I can remember (perhaps bar a few things like nuclear weapons), and spend most of my life organization-building and laying plans for the future. Give them a mission: to take the knowledge around the world, and avoid some of the largest disasters coming.

Organization goals:

  • Reach the Americas with the cowpox vaccine before smallpox (even if it takes hundreds of years before ships go out, make sure my cult is leading them).
  • Intercept every super famous person I can remember as they come into adulthood, attempt to bring them into the group, give them a list of topics I remember them having made major progress on, give them all I remember of their results.
  • Be hyper-secretive about the core books on the future (especially historical events and non-medical technology), and the fact that this is what's happening, unless very large gains can be made.
  • Spread certain moral ideas, when possible. Have locked books which are only to be opened when specific criteria are met (e.g. to trigger dropping internal religious elements when it's safe). Maybe a very simple cipher which splits information across several books, so locked books can be copied without being read.

Set in motion plans for dealing with current issues:

  • Preempt global warming by introducing photovoltaics early (even if I don't know the details, I can point at roughly the things needed and someone will figure it out); it may not be perfect or win out right away, but with the right talent hopefully sooner than the normal timeline.
  • Attempt to shut down nuclear weapons, even if this requires major deception. Backup plan: gain a monopoly on them.
  • Set FAI work in motion much earlier, with the power of a ~700 year old conspiracy that's had foreknowledge of every major technological advance and captured a notable fraction of world-changing geniuses.

All a bit optimistic, but it seems like the vast majority of the good I could do would happen after my lifetime, so I'd need to take some shot at making a lasting organization. If it seems plausible, and I'm in a position by that time to direct people and live where I want, try to die somewhere near a place I could get natural cryo.

Comment by ete on Brainstorming new senses · 2015-05-21T19:28:58.340Z · score: 4 (4 votes) · LW · GW

  • High-speed direct information/language port (combined with a camera and text-recognition software, or a phone with wifi). Eyes are not optimized for reading at the maximum speed the brain can handle, and as http://www.spritzinc.com/ shows, even fairly basic hacks can give huge gains (a toy version is sketched after this list). I bet we could push it much further. Especially good for the blind.
  • Glasses which convert various interesting non-visible wavelengths of light into a specific visible one (possibly camera + projection onto Google Glass, possibly using the hearing thing?), gradually cycling through different wavelengths in a predictable way. There's a lot of detail we miss out on by only sensing three colors.
  • Whiskers for air current sensing. Probably needs a wide channel though.
  • Not exactly a new sense, but total area awareness via a swarm of microdrones with cameras/microphones (and split projection onto glasses) would be pretty awesome. Like a poor man's Skitter.
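
A toy version of the spritz-style reader from the first bullet (a terminal sketch; a real tool would also center each word on its optimal recognition point):

    # Flash one word at a time at a fixed rate, so the eyes never have to move.
    import sys
    import time

    def rsvp(text, wpm=400):
        delay = 60.0 / wpm
        for word in text.split():
            sys.stdout.write("\r" + word.ljust(24))
            sys.stdout.flush()
            time.sleep(delay)
        print()

    rsvp("even fairly basic hacks can give huge gains")
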
Comment by ete on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-10T15:54:19.734Z · score: 7 (7 votes) · LW · GW

Cooperating by default when you don't have a good read on whether the other person is cooperative is included in my suggestion; is that not enough for it to feel non-aggressive to you? Tit-for-tat, cooperating on the first turn. If they come in at you sufficiently rudely, without giving a chance for a reasonable request, you can take that as them playing defect preemptively.
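
In iterated-game terms, the strategy I mean is just (a minimal sketch):

    def tit_for_tat(opponent_moves):
        # Cooperate on the first round; afterwards mirror the opponent's
        # previous move. A sufficiently rude opening counts as a defection.
        if not opponent_moves:
            return "cooperate"
        return opponent_moves[-1]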

And it's not specifically expected reciprocation I'd look for, but whether the person would be as helpful to a person in general if they were on the other side of the situation.

Comment by ete on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-10T13:05:35.725Z · score: 14 (14 votes) · LW · GW

A reasonable process seems to be: determine whether the neighbor would do a similar thing for you (or for a stranger in a similar situation) out of niceness, using your highly advanced social modeling software and past experience with them, and mimic their expected answer. Cooperate by default if you don't have enough information to simulate them accurately.

The rude/not rude thing is useful as a hint about whether they would agree to do something beyond basic requirements if they were on the other side.

This incentivises people to do nice things for each other. If they fake a lot of special needs, they'd better go out of their way to prove they'll help other people with theirs.

Comment by ete on Sapiens · 2015-04-08T14:08:01.680Z · score: 5 (5 votes) · LW · GW

Excellent review, thank you.

Comment by ete on Natural Selection of Government Systems · 2015-02-10T20:00:56.085Z · score: 0 (2 votes) · LW · GW

I'm curious why this was downvoted. China is definitely at the industrial level, and there appears to be strong evidence for forced labor driving parts of the economy, which seems a relevant counter to the suggestion that there's no economic incentive to enslave groups in the industrialized world.

Perhaps extracting forced labor from an internal group, rather than from captured neighbors, is a more important distinction to some people?

Comment by ete on We live in an unbreakable simulation: a mathematical proof. · 2015-02-10T03:58:25.870Z · score: 3 (3 votes) · LW · GW

I also think that is the cause of almost all the downvotes; the numbers match well. The ideas presented are about interesting and relevant topics, so they are potentially worthwhile even if the specific proof does not hold up. If not for the appearance of certainty (esp. "mathematical proof") where you should have been offering it up as something you'd like thoughts on (esp. if you had specific questions, pointed out parts you were less sure about, etc.), I would likely have upvoted; as it is, I only didn't downvote because it seems you've lost more than enough karma for this already and it would feel like kicking a dead horse. I'm sure by now anyone who sees this will realize clickbait titles are not a great way to get points on LessWrong.

Comment by ete on Natural Selection of Government Systems · 2015-02-09T16:38:30.295Z · score: 1 (5 votes) · LW · GW

"In the modern economies, mechanization generally made forced labor economically inefficient."

Although this may become true as automation becomes able to do more things more cheaply, China seems to be a very strong counterexample at our current/recent technological level: prison slaves, forced student labour being central to the Chinese economic miracle, Laogai or "reform through labor" programmes, etc. America's prison labor system is also quite scary.

Comment by ete on [Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions · 2015-01-29T03:26:43.684Z · score: 0 (0 votes) · LW · GW

Glad it's of interest to you. I found it while checking the sources of this motherboard article. There's another document linked from there which you may or may not have seen, but it lacks mention of strong AI, instead focusing on automating war in general.

After reading that I feel like in the near future it'll be much easier to justify concrete AI takeover mechanisms to the public.

Comment by ete on CFAR fundraiser far from filled; 4 days remaining · 2015-01-29T03:06:53.353Z · score: 27 (27 votes) · LW · GW

Donated £100.

Comment by ete on [Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions · 2015-01-28T17:02:04.403Z · score: 2 (2 votes) · LW · GW

Yes, that does seem like the primary focus. However, they cite this article about Stephen Wolfram when saying "It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings." which suggests that the researchers are at least aware of the wider risks around creating AGI, even if they don't choose to focus on them.

Comment by ete on What are the thoughts of Less Wrong on property dualism? · 2015-01-03T15:05:20.374Z · score: 2 (2 votes) · LW · GW

For convenience, it's here.

Comment by ete on Thoughts on Death · 2014-12-03T02:01:38.467Z · score: 0 (0 votes) · LW · GW

What succeeds at being stable and producing more of itself most efficiently is not necessarily what we consider to be most good or contain the most moral value. The market is a very powerful optimizer, but not an inherently friendly one.

For example, contrast two countries: one an idyllic paradise with happy citizens who pay very low taxes, and one with very high taxes and unhappy workers, but a much more powerful military. If there's a conflict between them, the country with unhappy workers and high taxes has a large advantage and will win, despite this being not in the interests of the average worker. A possible real-life example is the Comanche Indians, and more development of this idea (multipolar traps) can be found in Slate Star Codex's Meditations on Moloch.

Comment by ete on You have a set amount of "weirdness points". Spend them wisely. · 2014-11-28T13:30:38.059Z · score: 5 (5 votes) · LW · GW

True; perhaps I should have been clearer in my treatment of the two, and explained how I think they can blur into each other unintentionally. I do think being selective with signals can be instrumentally effective, but it's important to be consciously aware when you're doing that, and not allow your current mask to bleed over and influence your true beliefs unduly.

Essentially I'd like this post to come with a "Do this sometimes, but be careful and mindful of the possible changes to your beliefs caused by signaling as if you have different beliefs." warning.

Comment by ete on You have a set amount of "weirdness points". Spend them wisely. · 2014-11-28T00:58:22.553Z · score: 11 (11 votes) · LW · GW

I believe the effect you describe exists, but I think there are two effects which make it unclear that implementing your suggestions is an overall benefit to the average reader. Firstly, to summarize your position:

Each extra weird belief you have detracts from your ability to spread other, perhaps more important, weird memes. Therefore normal beliefs should be preferred to some extent, even when you expect them to be less correct or less locally useful on an issue, in order to improve your overall effectiveness at spreading your most highly valued memes.

  1. If you have a cluster of beliefs which seem odd in general, then you are more likely to share a "bridge" belief with someone. When you meet someone who shares at least one strange belief with you, you are much more likely to seriously consider their other beliefs, because you share some common ground and are aware of their ability to find truth against social pressure. For example, an EA vegan may be vastly more able to introduce the other EA memes to a non-EA vegan than an EA non-vegan would be. Since almost all people have at least some weird beliefs, and those who have weird beliefs with literally no overlap with yours are likely not good targets for you to spread positive memes to, increasing your collection of useful and justifiable weird memes may well give you more opportunities to usefully spread the memes you consider most important.

  2. Losing the absolute focus on forming an accurate map by making concessions to popularity/not standing out in too many ways seems epistemologically risky and borderline dark arts. I do agree that in some situations not advertising all your weirdness at once may be a useful strategic choice, but I am very wary of the effect that putting too much focus on this could have on your actual beliefs. You don't want to strengthen your own absurdity heuristic by accident and miss out on more weird but correct and important things.

While I can imagine situations where the advice given is correct (especially for interacting with domain-limited policymakers, or people whose likely reactions to extra weirdness you have a good read on), recommending it in general seems insufficiently justified, and I believe it would have significant drawbacks.