Posts

Against Belief-Labels 2017-03-09T20:01:11.277Z

Comments

Comment by I_D_Sparse on The Wonder of Evolution · 2017-03-21T21:57:37.024Z · LW · GW

First comes some gene A which is simple, but at least a little useful on its own, so that A increases to universality in the gene pool. Now along comes gene B, which is only useful in the presence of A, but A is reliably present in the gene pool, so there's a reliable selection pressure in favor of B. Now a modified version of A* arises, which depends on B, but doesn't break B's dependency on A/A*. Then along comes C, which depends on A* and B, and B*, which depends on A* and C.

Can anybody point me to some specific examples of this type of evolution? I'm a complete layman when it comes to biology, and this fascinates me. I'm having a bit of a hard time imagining such a process, though.

Comment by I_D_Sparse on Double Crux — A Strategy for Mutual Understanding · 2017-03-18T20:56:42.900Z · LW · GW

Unfortunately, yes.

Comment by I_D_Sparse on Double Crux — A Strategy for Mutual Understanding · 2017-03-18T00:50:32.765Z · LW · GW

If someone uses different rules than you to decide what to believe, then things that you can prove using your rules won't necessarily be provable using their rules.

Comment by I_D_Sparse on Double Crux — A Strategy for Mutual Understanding · 2017-03-17T19:31:58.604Z · LW · GW

Yes, but the idea is that a proof within one axiomatic system does not constitute a proof within another.
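A concrete instance, as a minimal Lean 4 sketch (illustrative, not from the original discussion): the law of excluded middle is a one-line theorem once the classical axioms are admitted, but the same statement has no proof in the purely constructive core, so the "proof" only counts as one relative to the classical system.

```lean
-- Minimal sketch: excluded middle follows immediately from the classical
-- axioms (via `Classical.em`), but is not derivable constructively, so this
-- proof does not transfer to the intuitionistic fragment of the logic.
theorem excluded_middle (p : Prop) : p ∨ ¬p :=
  Classical.em p
```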

Comment by I_D_Sparse on Double Crux — A Strategy for Mutual Understanding · 2017-03-13T20:12:10.807Z · LW · GW

Not particularly, no. In fact, there probably is no such method - either the parties must agree to disagree (which they could honestly do if they're not all Bayesians), or they must persuade each other using rhetoric as opposed to honest, rational inquiry. I find this unfortunate.

Comment by I_D_Sparse on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-11T21:19:29.446Z · LW · GW

Fixed the formatting.

Comment by I_D_Sparse on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-11T09:17:08.064Z · LW · GW

Regarding instrumental rationality: I've been wondering for a while now if "world domination" (or "world optimization", as HJPEV prefers) is feasible. I haven't entirely figured out my values yet, but whatever they turn out to be, WD/WO sure would be handy for achieving them. But even if WD/WO is a ridiculously far-fetched dream, it would still be a very good idea to know one's approximate chances of success with various possible paths to achieving one's values. I have therefore come up with the "feasibility problem." Basically, a solution to the problem consists of an estimate of how much one can actually hope to influence the world, and of the extent to which one can actually fulfill one's values. I think it would be very wise to solve the feasibility problem before attempting to take over the world, or become the President, or lead a social revolution, or improve the rationality of the general populace, etc.

Solving the FP would seem to require a deep understanding of how the world operates (anthropomorphically speaking, if you get my drift; I'm talking about the hoomun world, not physics and chemistry).

I've even constructed a GPOATCBUBAAAA (general plan of action that can be used by any and all agents): first, define your utility function, and also learn how the world works (easier said than done). Once you've completed that, you can apply your knowledge to solve the FP, and then you can construct a plan to fulfill your utility function, and then put it into action.
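Purely as a hypothetical sketch, the GPOATCBUBAAAA written out as a pipeline; every name here is a placeholder that is enormously easier to write than to carry out:

```python
# Hypothetical sketch of the GPOATCBUBAAAA as a pipeline; each step is a
# placeholder that hides nearly all of the actual difficulty.
def general_plan(define_utility_function, learn_how_the_world_works,
                 solve_feasibility_problem, construct_plan, execute):
    utility_fn = define_utility_function()                        # figure out what you value
    world_model = learn_how_the_world_works()                     # easier said than done
    feasibility = solve_feasibility_problem(utility_fn, world_model)
    plan = construct_plan(utility_fn, world_model, feasibility)   # only now plan in detail
    return execute(plan)
```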

This is probably a bit longer than 100 words, but I'm posting it here and not in the open thread because I have no idea if it's of any value whatsoever.

Comment by I_D_Sparse on Double Crux — A Strategy for Mutual Understanding · 2017-03-10T19:57:56.574Z · LW · GW

What if the disagreeing parties have radical epistemological differences? Double crux seems like a good strategy for resolving disagreements between parties that have an epistemological system in common (and access to the same relevant data), because getting to the core of the matter should expose that one or both of them is making a mistake. However, between two or more parties that use entirely different epistemological systems - e.g. rationalism and empiricism, or skepticism and "faith" - double crux should, if used correctly, eventually lead all disagreements back to epistemology, at which point... what, exactly? Use double-crux again? What if the parties don't have a meta-epistemological system in common, or indeed, any nth-order epistemological system in common? Double crux sounds really useful, and this is a great post, but a system for resolving epistemological disputes would be extremely helpful as well (especially for those of us who regularly converse with "faith"-ists about philosophy).

Comment by I_D_Sparse on [stub] 100-Word Unpolished Insights Thread (3/10-???) · 2017-03-10T19:43:43.965Z · LW · GW

This is an interesting idea, although I'm not sure what you mean by:

"It can work without people understanding why it works"

Shouldn't the people learning it understand it? It doesn't really seem much like learning otherwise.

Comment by I_D_Sparse on Against Belief-Labels · 2017-03-10T01:02:44.881Z · LW · GW

Moved it to the top.

Comment by I_D_Sparse on Against Belief-Labels · 2017-03-10T01:01:20.211Z · LW · GW

Good point - "aspiring rationalist", perhaps?

Comment by I_D_Sparse on Against Belief-Labels · 2017-03-09T21:24:23.254Z · LW · GW

That's a valid point - I suppose there's no harm as long as one is careful. Allowing any part of your map to gain too much autonomy, however - internalizing a belief-label - is something to avoid. That's not to say that identity is bad - there's nothing wrong with being proud that you're a fan of Lost, or of your sexual orientation, etc. There is, I believe, something wrong with being proud that you're an atheist/socialist/republican/absurdist/singularitarian (etc.).

Comment by I_D_Sparse on Against Belief-Labels · 2017-03-09T20:08:21.712Z · LW · GW

Sorry about the text at the top, it's the wrong size for some reason. Does anybody know how to fix that?

Comment by I_D_Sparse on Open Thread, March. 6 - March 12, 2017 · 2017-03-09T05:48:59.187Z · LW · GW

Yes.

Comment by I_D_Sparse on Open Thread, March. 6 - March 12, 2017 · 2017-03-09T05:46:56.134Z · LW · GW

I must admit to some amount of silliness – the first thought I had upon stumbling onto LessWrong, some time ago, was: “wait, if probability does not exist in the territory, and we want to optimize the map to fit the territory, then shouldn’t we construct non-probabilistic maps?” Indeed, if we actually wanted our map to fit the territory, then we would not allow it to contain uncertainty – better some small chance of having the right map than no chance, right? Of course, in actuality, we don’t believe that (p with x probability) with probability 1.

We do not distribute our probability-mass over actual states of reality, but rather, over models of reality; over maps, if you will! I find it helpful to visualize two levels of belief: on the first level, we have an infinite number of non-probabilistic maps, one of which is entirely correct and approximates the territory as well as a map possibly can. On the second level, we have a meta-map, which is the one we update; it consists of probability distributions over the level-one maps.

What are we actually optimizing the level-two map for, though? I find it misleading to talk of “fitting the territory”; after all, our goal is to keep a meta-map that best reflects the state of the data we have access to. We alter our beliefs based (hopefully!) on evidence, knowing full well that this will not lead us to a perfect picture of reality, and that a probabilistic map can never reflect the territory.
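A minimal sketch of what I mean (the coin-flip scenario is just an illustration I'm adding here): each level-one map is a fully non-probabilistic hypothesis about the territory, and all of the uncertainty lives in the level-two meta-map, which is what Bayes' rule updates.

```python
# Each level-one "map" is a non-probabilistic hypothesis about the territory
# (here, the exact outcome of two coin flips). The level-two "meta-map" is a
# probability distribution over those maps, updated by Bayes' rule; no
# individual map contains any probability.

def bayes_update(meta_map, likelihood):
    """meta_map: {map: prior}; likelihood: map -> P(observed evidence | map)."""
    unnormalized = {m: p * likelihood(m) for m, p in meta_map.items()}
    total = sum(unnormalized.values())
    return {m: p / total for m, p in unnormalized.items()}

# Four candidate maps: every possible exact sequence of two flips.
maps = ["HH", "HT", "TH", "TT"]
meta_map = {m: 1 / len(maps) for m in maps}

# Evidence: the first flip came up heads. Maps starting with T are ruled out;
# the rest gain probability mass. The maps themselves never change.
meta_map = bayes_update(meta_map, lambda m: 1.0 if m[0] == "H" else 0.0)
print(meta_map)  # {'HH': 0.5, 'HT': 0.5, 'TH': 0.0, 'TT': 0.0}
```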

Comment by I_D_Sparse on Open Thread, March. 6 - March 12, 2017 · 2017-03-09T05:23:21.362Z · LW · GW

I don't see it... do you need a certain amount of karma to vote?

Comment by I_D_Sparse on Open Thread, March. 6 - March 12, 2017 · 2017-03-09T02:16:57.729Z · LW · GW

Ah, thanks. Uh, this may be a stupid question, but how do I upvote?

Comment by I_D_Sparse on Open Thread, March. 6 - March 12, 2017 · 2017-03-09T01:40:44.695Z · LW · GW

I wrote an article, but was unable to submit it to discussion, despite trying several times. It only shows up in my drafts. Why is this, and how do I post it publicly? Sorry, I'm new here, at least so far as having an account goes - I've been a lurker for quite some time and have read the sequences.