Comment by elityre on Eli's shortform feed · 2019-07-17T17:05:48.547Z · score: 1 (1 votes) · LW · GW

New post: my personal wellbeing support pillars

Comment by elityre on Inconvenience Is Qualitatively Bad · 2019-07-17T06:55:29.636Z · score: 5 (3 votes) · LW · GW

Wow. I really appreciate the curious spirit of this comment.

Comment by elityre on Eli's shortform feed · 2019-07-14T21:52:48.749Z · score: 5 (3 votes) · LW · GW

New (unedited) post: The bootstrapping attitude

Comment by elityre on Eli's shortform feed · 2019-07-14T21:51:54.416Z · score: 3 (2 votes) · LW · GW

New (unedited) post: Exercise and nap, then mope, if I still want to

Comment by elityre on 3 Levels of Rationality Verification · 2019-07-13T05:01:56.213Z · score: 4 (3 votes) · LW · GW

Let's see...

  • Prediction contests are an obvious one.
  • Also, perhaps, having people compete at newly designed games, so that everyone has the same amount of time to learn the rules and how to win, given the rules.
  • Perhaps we could design puzzles that intentionally include places where one is likely to make a mistake or wrong choice, with such errors visible (to an observer who knows the puzzle) when made.
Comment by elityre on An Alien God · 2019-07-12T02:07:17.406Z · score: 24 (4 votes) · LW · GW
When I design a toaster oven, I don't design one part that tries to get electricity to the coils and a second part that tries to prevent electricity from getting to the coils. It would be a waste of effort. Who designed the ecosystem, with its predators and prey, viruses and bacteria? Even the cactus plant, which you might think well-designed to provide water fruit to desert animals, is covered with inconvenient spines.

Well, to be fair, if I want to design an image classifier, I might very well make one part that tries hard to categorize photos and another part that tries hard to miscategorize them.
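
For concreteness, here is a minimal sketch of that two-part setup, assuming a PyTorch-style adversarial-training loop; the model, names, shapes, and hyperparameters are my own illustration, not anything from the original post:

```python
# Toy illustration: a classifier that tries to label images correctly,
# plus an adversary that perturbs inputs to make it fail (FGSM-style).
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(classifier.parameters(), lr=0.01)

def adversarial_perturb(x, y, eps=0.1):
    """The 'mis-categorizing' part: nudge x in the direction that raises the loss."""
    x = x.clone().requires_grad_(True)
    loss_fn(classifier(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# One training step: the classifier learns on both clean and perturbed inputs.
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
x_adv = adversarial_perturb(x, y)
opt.zero_grad()
loss = loss_fn(classifier(x), y) + loss_fn(classifier(x_adv), y)
loss.backward()
opt.step()
```

Here the adversarial part exists precisely to make the classifying part more robust, which is the sense in which a designed system can sensibly contain opposed components.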

Comment by elityre on Schism Begets Schism · 2019-07-12T01:57:09.566Z · score: 2 (2 votes) · LW · GW
If the other group or community is, as you say, much worse than it could be, helping to improve it from the inside makes things better for the people already involved, while going and starting your own group might leave them in the lurch.

Sure. When everyone (or at least a majority) in the initial group is on board with your reform efforts, you should often try to reform the group. But very often there will be a conflict of visions or a conflict of interests.

In general I think you should probably at least initially try to reform things, though if it doesn't work well there's a point where you might have to say "sorry, the time has come, we're making our own group now".

I certainly agree with this, though it seems plausible that we have different views of the point at which you should switch to the "found a splinter group" strategy.

Comment by elityre on The AI Timelines Scam · 2019-07-12T01:32:52.279Z · score: 15 (5 votes) · LW · GW

Tangent:

...if you think both an urgent concern and a distant concern are possible, almost all of your effort goes into the urgent concern instead of the distant concern (as sensible critical-path project management would suggest).

This isn't obvious to me. And I would be interested in a post laying out the argument, in general or in relation to AI.

Comment by elityre on Schism Begets Schism · 2019-07-10T20:51:46.148Z · score: 12 (4 votes) · LW · GW
In point of fact, doing important things often requires coordination, teamwork, and agreeing to compromises. If you insist on everything being exactly your way, you'll have a harder time finding collaborators, and in many cases that will be fatal to a project -- I do not say all, but many.

This is true and important, and the same or a very similar point to the one made in Your Price for Joining.

But that post has a different standard than the one given by the OP:

If the issue isn't worth your personally fixing by however much effort it takes, and it doesn't arise from outright bad faith, it's not worth refusing to contribute your efforts to a cause you deem worthwhile. [emphasis mine]

Sometimes things are bad (or much worse than they could be) in some group or community. When that's the case, one can 1) try to change the community from the inside, or 2) get a group of friends together to do [thing] the way they think it should be done, or 3) give up and accept the current situation.

When you're willing to put in the work to make 2 happen, it sometimes results in a new, healthier group. If (some) onlookers can distinguish between better and worse on the relevant axis, the new group will attract new members.

It seems to me that taking option 2, instead of option 1, is cooperative. You leave the other group to do things their way, in peace, and you also create something good in the world.

Granted, I think the situation may be importantly different in online communities, specifically because the activation energy for setting up a new online group is comparatively small. In that case, it may be too easy to found a new group, and accordingly groups splinter too regularly for any single one to become good.

Are there easy, low cost, ways to freeze personal cell samples for future therapies? And is this a good idea?

2019-07-09T21:57:28.537Z · score: 21 (9 votes)
Comment by elityre on Raised in Technophilia · 2019-07-08T00:06:28.555Z · score: 3 (2 votes) · LW · GW

Anyone have a citation for Drexler's motivations?

Comment by elityre on Being the (Pareto) Best in the World · 2019-06-25T03:34:12.808Z · score: 1 (5 votes) · LW · GW

This was great. Thank you!

Comment by elityre on Eli's shortform feed · 2019-06-24T04:34:05.117Z · score: 9 (2 votes) · LW · GW

new (boring) post on controlled actions.

Comment by elityre on No Really, Why Aren't Rationalists Winning? · 2019-06-22T18:50:12.128Z · score: 1 (1 votes) · LW · GW

This is relevant to my interests. Do you have a particular source that describes their "pitch"?

Comment by elityre on No Really, Why Aren't Rationalists Winning? · 2019-06-22T18:45:36.154Z · score: 3 (2 votes) · LW · GW
The every day world is roughly inexploitable, and very data rich. The regions you would expect rationality to do well in are the ones where there isn't a pile of data so large even a scientist can't ignore it. Fermi Paradox, AGI design, Interpretations of Quantum mechanics, Philosophical Zombies, ect.

I think I would add to this, "domains where there is lots of confusing or conflicting data, where you have to filter the signal from the noise". I'm thinking of fields with many competing academic positions, like macroeconomics, nutrition, or (of highest practical relevance) medicine.

Many of Scott Alexander's posts, for instance, involve wading into a confusing morass of academic papers and then using principles of good reasoning to figure out, as best he (or we) can, what's actually going on.

Comment by elityre on No Really, Why Aren't Rationalists Winning? · 2019-06-22T18:40:02.513Z · score: 3 (2 votes) · LW · GW

This is a very important point, and I think it is worthy of being its own, titled, top-level post.

Comment by elityre on No Really, Why Aren't Rationalists Winning? · 2019-06-22T18:37:43.319Z · score: 1 (1 votes) · LW · GW

Val started (didn't finish) a sequence once, but it looks like he removed the sequence-index from his blog:

In any case, I (who am not Val) would endorse that description.

Comment by elityre on Mandatory Secret Identities · 2019-06-22T16:19:42.822Z · score: 5 (1 votes) · LW · GW

Oh. I thought that the use of min( ) here was immediately readable and transparent to me. The meaning of "the lesser of the two quantities" is less obvious, and the phrase is longer to say.

Comment by elityre on Does scientific productivity correlate with IQ? · 2019-06-20T15:36:20.595Z · score: 5 (3 votes) · LW · GW

I remember seeing a thread on Less Wrong that started with someone hearing that Feynman had an IQ of 115, and being surprised, and then asking what's up with that.

I can't find the thread now, but I remember mostly people saying that that number was false, and offering various explanations for why one might think that was Feynman's IQ, including that the test in question was from his teenage years, and that IQ often stabilizes later in life.

In any case, Feynman was named a Putnam Fellow (top five scorer) in 1939, which gives some context on his general mathematical ability (aside from being a ground-breaking, Nobel Prize-winning theoretical physicist).

Comment by elityre on Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? · 2019-06-17T17:21:26.140Z · score: 3 (2 votes) · LW · GW

Finally, I'm curious what people make of the last paper psychs listed ("Testing Sleep Consolidation in Skill Learning: A Field Study Using an Online Game"). They didn't find any evidence for a sleep consolidation effect over and above non-sleep breaks.

This is a very surprising result, and I'm not sure what to make of it.

They give some possible reasons for that result in the discussion section, but none of them reduce my surprise much.

Comment by elityre on Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? · 2019-06-17T16:44:39.028Z · score: 3 (2 votes) · LW · GW

The link for the Robertson paper is broken for me. Can you post the full title?

Comment by elityre on Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? · 2019-06-17T16:43:01.705Z · score: 3 (2 votes) · LW · GW
...which found interference between a motor skill task and a verbal task is minimized...

A verbal task and a motor task can interfere with each other? I thought that interference only occurs between similar tasks.

Comment by elityre on Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation? · 2019-06-17T16:42:44.032Z · score: 9 (3 votes) · LW · GW

Thank you! The Walker study is exactly what I was looking for.

For those following along at home, this is the relevant graph.

It looks like a sleep session produces a comparatively huge boost in skill performance, regardless of the timing of the practice.

Now I want to know if this has replicated for larger (or just other) samples.

Comment by elityre on Does scientific productivity correlate with IQ? · 2019-06-17T15:26:03.009Z · score: 3 (2 votes) · LW · GW

I don't know?

He gives three examples in the next paragraph: Richard Feynman (IQ: 126), James Watson (IQ: 124), and William Shockley (IQ: 125), all of whom are 20th-century scientists. (All IQ figures are from Ericsson.)

Comment by elityre on Does scientific productivity correlate with IQ? · 2019-06-17T15:20:38.747Z · score: 7 (4 votes) · LW · GW

There are a number of ways we could measure scientific success: number of citations, number of importance-weighted citations, or Nobel prizes won.

Do Nobel prize winning scientists tend to have higher IQs than scientists in general?

Does scientific productivity correlate with IQ?

2019-06-16T19:42:29.980Z · score: 28 (9 votes)

Does the _timing_ of practice, relative to sleep, make a difference for skill consolidation?

2019-06-16T19:12:48.358Z · score: 29 (10 votes)
Comment by elityre on Eli's shortform feed · 2019-06-04T16:05:44.620Z · score: 1 (1 votes) · LW · GW

They should both be fixed now.

Thanks!

Comment by elityre on Eli's shortform feed · 2019-06-04T16:04:22.197Z · score: 1 (1 votes) · LW · GW

New post: The seed of a theory of triggeredness

Comment by elityre on Eli's shortform feed · 2019-06-04T08:33:53.379Z · score: 9 (2 votes) · LW · GW

New post: Why does outlining my day in advance help so much?

Comment by elityre on Eli's shortform feed · 2019-06-04T08:16:19.621Z · score: 3 (2 votes) · LW · GW

New post: _Why_ do we fear the twinge of starting?


Comment by elityre on Eli's shortform feed · 2019-06-04T02:41:34.522Z · score: 5 (3 votes) · LW · GW

Cool! I'd be glad to hear more. I don't have much of a sense of which things I write are useful, or how.

Comment by elityre on Eli's shortform feed · 2019-06-02T22:16:00.239Z · score: 10 (5 votes) · LW · GW

New post: some musings on deliberate practice

Comment by elityre on Habryka's Shortform Feed · 2019-06-02T09:45:41.665Z · score: 1 (1 votes) · LW · GW

Seconded.

Comment by elityre on Habryka's Shortform Feed · 2019-06-02T09:45:25.048Z · score: 10 (3 votes) · LW · GW

This was a great post that might have changed my worldview some.

Some highlights:

1.

People's rationality is much more defined by their ability to maneuver themselves into environments in which their external incentives align with their goals, than by their ability to have correct opinions while being subject to incentives they don't endorse. This is a tractable intervention and so the best people will be able to have vastly more accurate beliefs than the average person, but it means that "having accurate beliefs in one domain" doesn't straightforwardly generalize to "will have accurate beliefs in other domains".

I've heard people say things like this in the past, but haven't really taken it seriously as an important component of my rationality practice. Somehow what you say here is compelling to me (maybe because I recently noticed a place where my thinking was majorly constrained by my social ties and social standing), and it prodded me to think about how to build "mech suits" that not only increase my power but also incentivize my rationality. I now have a todo item to "think about principles for incentivizing true beliefs, in team design."

2.

I think a generally better setup is to choose a much smaller group of people that you trust to evaluate your actions very closely,

Similarly, thinking explicitly about which groups I want to be accountable to sounds like a really good idea.

I had been going through the world keeping this Paul Graham quote in mind...

I think the best test is one Gino Lee taught me: to try to do things that would make your friends say wow. But it probably wouldn't start to work properly till about age 22, because most people haven't had a big enough sample to pick friends from before then.

...choosing good friends, and doing things that would impress them.

But what you're pointing at here seems like a slightly different thing: which people do I want to make myself transparent to, so that they can judge whether I'm living up to my values?

This also gave me an idea for a CFAR-style program: a "reassess your life" workshop, in which a small number of people come together for a period of 3 days or so and reevaluate cached decisions. We start by making lines of retreat (with mentor assistance), and then look at high-impact questions in our lives: given new info, does your current job / community / relationship / lifestyle choice / other still make sense?

Thanks for writing.


Eli's shortform feed

2019-06-02T09:21:32.245Z · score: 30 (5 votes)
Comment by elityre on How to determine if my sympathetic or my parasympathetic nervous system is currently dominant? · 2019-06-02T04:49:48.194Z · score: 3 (2 votes) · LW · GW

Quick answer: You can apparently use a galvanic skin response meter for this, though I have only experimented with one for about 10 minutes, and can't give you first-person verification.

Comment by elityre on No Really, Why Aren't Rationalists Winning? · 2019-05-22T00:39:52.283Z · score: 1 (1 votes) · LW · GW
Peter Thiel who actually meet him in person considers him to be expectional at understanding how individual people with whom he deals tick.

How do you know this?

Comment by elityre on No Really, Why Aren't Rationalists Winning? · 2019-05-20T17:30:10.608Z · score: 7 (2 votes) · LW · GW
I'm not sure you can short-circuit the "spend two thousand years flailing around and being terrible" step.

It sure seems like you should be able to do better than spending literally two thousand years. There are much better existing methodologies now than there were then.

Comment by elityre on S-Curves for Trend Forecasting · 2019-04-25T15:05:36.757Z · score: 6 (3 votes) · LW · GW

Intriguingly, coming back to this comment after only 3 months, my feeling is something like "That seems pretty obvious. It's weird that that seemed like a new insight to me."

So I guess you actually taught me something in a way that stuck.

Thanks again.

Comment by elityre on Best reasons for pessimism about impact of impact measures? · 2019-04-25T14:53:02.326Z · score: 5 (3 votes) · LW · GW

I believe the strong form is generating a counterargument for any proposition, and then concluding that all propositions are equally likely and therefore that knowledge is impossible.

From wikipedia:

The main principle of Pyrrho's thought is expressed by the word acatalepsia, which connotes the ability to withhold assent from doctrines regarding the truth of things in their own nature; against every statement its contradiction may be advanced with equal justification.

I don't recommend the strong form.

Comment by elityre on The noncentral fallacy - the worst argument in the world? · 2019-04-24T23:28:26.598Z · score: 3 (2 votes) · LW · GW

The link seems broken? : (

Comment by elityre on Best reasons for pessimism about impact of impact measures? · 2019-04-15T16:27:49.641Z · score: 14 (4 votes) · LW · GW

On the process level: I would offer a bit of unsolicited advice about the method you used to generate reasons for pessimism. You (and others) might try it in the future.

First of all, I strongly applaud the step of taking out a physical clock/ timer and making a solid attempt at answering the question for yourself. Virtue points (and karma) awarded!

However, when I read your list, it's blatantly one-sided. You're only trying to generate reasons for pessimism, not reasons for optimism. This is not as bad as writing the bottom line, but generating arguments for only one side of a question biases your search.

Given this, one thing that I might do is first, spend 5 minutes generating the best arguments for (or concrete scenarios which inspire) pessimism about impact measures, then shift my mental orientation and spend 5 minutes generating arguments for (or concrete scenarios in which) impact measures seem promising.

But I wouldn't stop there. I would then spend 5 minutes (or as long as I need) looking over the first list and trying to generate counterarguments: reasons why the world probably isn't that way. Once I had done that, I would look over my new list of counterarguments and try to generate counter-counterarguments, iterating until I either get stuck or reach a sort of equilibrium where the arguments I've made are as strong as I know how to make them.

Then I would go back to my second original list (the one with reasons for optimism) and do the same back and forth, generating counterarguments and counter-counterarguments, until I get stuck or reach equilibrium on that side.

At that point, I should have two lists of the strongest reasons I can muster, arguments in favor of pessimism and arguments in favor of optimism, both of which have been stress-tested by my own skepticism. I'd then compare both lists, and if any of the arguments invalidates or weakens another, I'd adjust them accordingly (there might be a few more rounds of back and forth).

At this point, I've really thoroughly considered the issue. Obviously this doesn't mean that I've gotten the right answer, or that I've thought of everything. But it does mean that, for all practical purposes, I've exhausted the low-hanging fruit of everything I can think of.

To recap...

Steps:

0. Take a binary question.
1. Make the best case I can for one answer, giving whatever arguments, or ways the world would have to be, that support that outcome.
2. Similarly make the best case I can for the other answer.
3. Take the reasoning for my first answer and generate counterarguments. Generate responses to those counterarguments. Iterate until you reach equilibrium.
4. Do the same with the reasoning for your second answer.
5. Compare your final arguments on both sides of the question, adjusting as necessary.

(This procedure is inspired by a technique that I originally learned from Leverage Research / Paradigm Academy. In their terminology, this procedure is called (the weak form of) Pyrrhonian skepticism, after the Greek philosopher Pyrrho (who insisted that knowledge was impossible, because there were always arguments on both sides of a question). I've also heard it referred to, more generally, as "alternate stories".)

Of course, this takes more time to do, and that time cost may or may not be worth it to you. Furthermore, there are certainly pieces of your context or thinking process that I'm missing. Maybe you, in fact, did part of this process. But this is an extended method to consider.

Comment by elityre on Announcing the Center for Applied Postrationality · 2019-04-09T19:57:21.406Z · score: 7 (6 votes) · LW · GW

This is surprisingly near to a cogent response.

Comment by elityre on Personalized Medicine For Real · 2019-03-05T22:29:11.320Z · score: 9 (5 votes) · LW · GW

I'm very glad to read disambiguations like this one.

(It has tentatively prompted me to write up a similar one for all the different things that "rationality" can mean when one is doing "rationality development". We'll see if I get around to actually writing it anytime soon, though.)

Comment by elityre on S-Curves for Trend Forecasting · 2019-01-24T18:40:19.680Z · score: 11 (6 votes) · LW · GW

I'm glad to have read this. In particular:

Sometimes, people get confused and call S-curves exponential growth. This isn't necessarily wrong but it can confuse their thinking. They forget that constraints exist and think that there will be exponential growth forever. When slowdowns happen, they think that it's the end of the growth - instead of considering that it may simply be another constraint and the start of another S-Curve.

This is obvious in hindsight, but I hadn't put my finger on it.

Thank you!

Comment by elityre on In Favor of Niceness, Community, and Civilization · 2019-01-20T05:35:06.457Z · score: 7 (4 votes) · LW · GW

I want to offer salutations to this post.

I intend to link to it whenever I have the opportunity to declare my allegiance. I am on the side of civilization.

Comment by elityre on What is a reasonable outside view for the fate of social movements? · 2019-01-16T05:41:41.955Z · score: 1 (1 votes) · LW · GW

Looking at this list I kind of want to see these movements mapped on a timeline. When did they start? How fast did they grow?

Comment by elityre on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-08T07:24:21.648Z · score: 10 (6 votes) · LW · GW

As a note, I believe that FHI is planning to publish a(n edited?) version of this document as an actual book, à la Superintelligence: Paths, Dangers, Strategies.

Comment by elityre on Open question: are minimal circuits daemon-free? · 2019-01-07T06:20:42.407Z · score: 2 (2 votes) · LW · GW

(Eli's personal "trying to have thoughts" before reading the other comments. Probably incoherent. Possibly not even on topic. Respond iff you'd like.)

(Also, my thinking here is influenced by having read this report recently.)

On the one hand, I can see the intuition that if a daemon is solving a problem, there is some part of the system that is solving the problem, and there is another part that is working to (potentially) optimize against you. In theory, we could "cut out" the part that is the problematic agency, preserving the part that solves the problem. And that circuit would be smaller.

________________________________________________________________________

Does that argument apply in the evolution/human case?

Could I "cut away" everything that isn't solving the problem of inclusive genetic fitness and end up with a smaller "inclusive genetic fitness maximizer"?

On the one hand, this seems like a kind of confusing frame. If some humans do well on the metric of inclusive genetic fitness (in the ancestral environment), this isn't because there's a part of the human that's optimizing for that and then another part that's patiently waiting and watching for a context shift in order to pull a treacherous turn on evolution. The human is just pursuing its goals, and as a side effect, does well at the IGF metric.

But it also seems like you could, in principle, build an Inclusive Genetic Fitness Maximizer out of human neuro-machinery: a mammal-like brain that does optimize for spreading its genes.

Would such an entity be computationally smaller than a human?

Maybe? I don't have a strong intuition either way. It really doesn't seem like much of the "size" of the system is due to the encoding of the goals. Approximately 0 of the difference in size is due to the goals?

A much better mind design might be much smaller, but that wouldn't make it any less daemonic.

And if, in fact, the computationally smallest way to solve the IGF problem is as a side effect of some process optimizing for some other goal, then the minimum circuit is not daemon-free.

Though I don't know of any good reason why it should be the case that not optimizing directly for the metric works better than optimizing directly for it. True, evolution "chose" to design humans as adaptation-executors, but this seems due to evolution's constraints in searching the space, not due to indirectness having any virtue over directness. Right?

Comment by elityre on Corrigible but misaligned: a superintelligent messiah · 2019-01-05T08:59:33.917Z · score: 5 (3 votes) · LW · GW

Side note:

but a ragtag team of hippie-philosopher-AI-researchers

I love this phrase. I think I'm going to use it in my online dating profile.

Comment by elityre on What makes people intellectually active? · 2018-12-31T07:21:39.430Z · score: 3 (2 votes) · LW · GW
Building up an intellectual edifice (of whatever quality) around some topic of interest: fairly rare

I definitely do this. I have half-formed books that I might write one day on topics that interest me, and sprawling yEd graphs in which I'm trying to make sense of confusions and conflicting evidence.

One thing of note is that I was introduced to explicit model building and theorizing a couple of years ago. Because of this, I had the mental handle of "building a model" as a thing that one could do, along with a few role models of people doing it.

I was doing model building of some kind before then (I remember drawing out a graph of body language signals when I was about 21), but I think having the explicit handle helped a lot.

Comment by elityre on What makes people intellectually active? · 2018-12-31T02:02:21.094Z · score: 7 (5 votes) · LW · GW

I think this is worth being one of the answers.

Comment by elityre on How did academia ensure papers were correct in the early 20th Century? · 2018-12-30T18:53:00.748Z · score: 25 (11 votes) · LW · GW

I upvoted this post as strongly as I could with my Karma, and I'm putting this comment here to reinforce: this is a great question, and I learned some things about the 19th century from it.

I would love to see more things on Less Wrong on the topics of:

  • Intellectual progress, and the necessary and sufficient conditions for its occurrence.
  • Whether past eras were more intellectually productive, either overall or per capita.

Historical mathematicians exhibit a birth order effect too

2018-08-21T01:52:33.807Z · score: 109 (34 votes)