Comment by decius on The Intelligent Social Web · 2020-01-09T12:19:12.956Z · score: 2 (1 votes) · LW · GW

The subtext is only clear in retrospect.

Comment by decius on The LessWrong 2018 Review · 2019-11-21T04:54:46.506Z · score: 2 (1 votes) · LW · GW

Is the intent in the review phase to display the number of nominations received (which will impact which posts get reviewed) or not (which fails to display information that I am likely to find useful in using the list of posts that have been nominated by enough people to form a reading list)?

Comment by decius on Robust Agency for People and Organizations · 2019-07-22T15:41:11.922Z · score: 2 (1 votes) · LW · GW

Is the thought behind "Wait as long as possible before hiring people" that you will be better able to spread values to people when you are busier, or that you can hire a bunch of people at once and gain economy of scale when indoctrinating them?

Because the naive view would be to hire slowly and well in advance, and either sync up with the new hires or terminate them if they can't get into the organizational paradigm you're trying to construct, and that requires more slack.

Comment by decius on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-09T00:20:22.955Z · score: 7 (3 votes) · LW · GW

One flaw of the stag/rabbit hunt framing is that it makes explicit that this is a coordination problem, not a values problem. To frame it differently, the stag and rabbit hunts should not produce different utility numbers directly, but instead yield different resources or different certainties of resources. If a rabbit hunt yields 3d2 rabbits per hunter, but the stag hunt yields 1d2-1 stags if all hunters work together and 0 if they don't, then even with a higher expected yield of meat and of hide from the stag hunt, for some people the rabbit hunt might yield higher expected utility, since the certainty of not starving is worth much more utility than an increase in the number of hides.
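The dice arithmetic can be checked directly. A minimal sketch (the stag's meat value, the number of hunters, and the utility function are my own illustrative assumptions, not from the post): even when the stag hunt has higher expected meat, a utility function that treats starving as catastrophic prefers the rabbits.

```python
import itertools
import statistics

def rabbit_outcomes():
    # 3d2 rabbits per hunter: the sum of three dice that each show 1 or 2.
    return [sum(dice) for dice in itertools.product((1, 2), repeat=3)]

# Stag hunt: 1d2-1 stags (0 or 1, equally likely) if everyone cooperates.
# Assume (illustratively) a stag is worth 50 "meat units" split 5 ways.
HUNTERS, STAG_MEAT = 5, 50
stag_per_hunter = [0, STAG_MEAT / HUNTERS]

def utility(meat):
    # Starving (0 meat) is catastrophically bad; otherwise diminishing returns.
    return -100 if meat == 0 else meat ** 0.5

ev_meat_rabbit = statistics.mean(rabbit_outcomes())                  # 4.5
ev_meat_stag = statistics.mean(stag_per_hunter)                      # 5.0
eu_rabbit = statistics.mean(utility(m) for m in rabbit_outcomes())
eu_stag = statistics.mean(utility(m) for m in stag_per_hunter)
```

With these numbers the stag hunt wins on expected meat (5.0 vs 4.5 per hunter) but loses badly on expected utility, because half the time it yields nothing.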

In order to confidently assert that a Schelling point exists, one should view the situation from everyone's point of view and apply their actual goals - NOT look at everyone's point of view and apply your own goals, or the average goals, or the goals they think they have.

Comment by decius on Interpersonal Entanglement · 2019-06-03T19:36:14.857Z · score: 2 (1 votes) · LW · GW

An answer of "There is probably one but I can't figure out what it is." is equivalent to an answer of "I can't find one."

I'm not making a mathematical conjecture that is probably true but might not have a proof; I'm asking what is wrong with engineering fully sentient catgirls who want to serve people in a volcano fortress that isn't also wrong with allowing existing people to follow their dreams of changing themselves into sentient catgirls and serving people in a volcano fortress.

Comment by decius on Interpersonal Entanglement · 2019-06-02T23:22:22.167Z · score: 2 (1 votes) · LW · GW

Is there any significant difference between finding sentient beings who self-modify into becoming sentient catgirls for the purpose of serving you in your volcano fortress and engineering de novo sentient catgirls who desire to serve you in your volcano fortress?

Comment by decius on Overconfident talking down, humble or hostile talking up · 2018-12-06T13:36:50.228Z · score: 4 (3 votes) · LW · GW

I don't think it's inherently difficult to tell the difference between someone who is speaking N levels above you and someone who is speaking N+1 levels above you. The one speaking at a higher level is going to expand on all of the things they describe as errors, giving *more complex* explanations.

The difficulty is that it's impossible to tell if someone who is higher level than you is wrong, or telling a sophisticated lie, or correct, or some other option. The only way to understand how they reached their conclusion is to level up to their level and understand it the hard way.

There's a related problem, where it's nigh impossible to tell if someone who is actually at level N but speaking at level N+X is making shit up completely unless you are above the level they are (and can spot errors in their reasoning).

Take a very simple case: A smart kid explaining kitchen appliances to a less smart kid. First he talks about the blender, and how there's an electric motor inside the base that makes the gear thingy go spinny, and that goes through the pitcher and makes the blades go spinny and chop stuff up. Then he talks about the toaster, and talks about the hot wires making the toast go, and the dial controls the timer that pops the toast out.

Then he goes +x over his actual knowledge level, and says that the microwave beams heat radiation into the food, created by the electronics, and that the refrigerator uses an 'electric cooler' (the opposite of an electric heater) to make cold that it pumps into the inside, and the insulated sides keep it from making the entire house cold.

Half of those are true explanations, and half are bluffs, but someone who barely has the understanding needed to verify the first two won't have the understanding needed to refute the last two. If someone else corrects the wrong descriptions, that unsophisticated observer has to use things other than the explanation itself to determine credibility (in the toy cases given, a good explanation could level up the observer enough to see the bluff, but in the case of +5 macroeconomics that is impractical). If the bluffing actor tries to refute the higher-level true explanation, they merely need to bluff more: people of high enough level to see the bluff /already weren't fooled/, and people of lower level see the argument settle into an equilibrium or cycle isomorphic to all parties saying "That's not how this works, that's not how anything works; this is how it works", and can only distinguish between them by things other than the content of what they say (bias, charisma, credentials, tribal affiliation, or verified track records are all within the Overton Window for how to select whom to believe).

Comment by decius on How to Build a Lumenator · 2018-08-12T08:55:57.718Z · score: 2 (1 votes) · LW · GW

How useful would it be to have someone produce lumenators that were pegboards with lights mounted via zip ties or something equally aesthetically bad? If the labor of collecting and assembling the components can efficiently be outsourced into buying a nonstandard light fixture, it might be more accessible.

Comment by decius on How to Build a Lumenator · 2018-08-12T08:51:09.635Z · score: 2 (1 votes) · LW · GW

Are you suggesting blacklightboxes?

Comment by decius on How to Build a Lumenator · 2018-08-12T08:50:15.657Z · score: 5 (3 votes) · LW · GW

Has anyone who has gotten relief by using lumenators done rigorous A/B testing with different temperatures/colors or intensity or duration or other possibly important variables?

Not just gold standard clinical trials, something like “I tried color a for a week and logged 3 episodes, but color b for a week resulted in 8” could be informative for people deciding which type of bulb to get.

Comment by decius on Cash transfers are not necessarily wealth transfers · 2017-12-03T21:17:53.183Z · score: 12 (4 votes) · LW · GW

If 20 percent of children in third grade could read at at least the first grade level, what percentage of children that age who didn't attend school could do so?

Comment by decius on The Darwin Game · 2017-11-26T07:09:25.287Z · score: 3 (2 votes) · LW · GW

The mockingbird: Find whatever method the current leader(s) is/are using to enable self-cooperation, and find a way to mimic them with a small advantage (e.g. if they use a string of 0s, 1s, 4s, and 5s to self-identify, spam 4 until they identify you as themselves, then figure out how to get onto the side of the mutual cooperation that is sometimes up a point).

Tit-for-tat with correction: Start with an even distribution, then play what they played last round, except: if the total last round was above five and they played higher, reduce the value you played by the amount that exceeded five; if the total last round was below five and they played lower, increase the value you played by the shortfall. (If the values played were the same, adjust by half the difference, randomly selecting between the two adjacent values if a .5 change is indicated.) (Loses at most 5 points to fivebot, loses about half a point per round to threebot, leaves some on the table with twobot, but self-cooperates on round two with 80% probability.)
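A minimal sketch of that strategy (assuming the usual Darwin Game payoff, where both bots pick an integer 0-5 each round and each scores its own number iff the two picks sum to at most 5, otherwise both score 0; the correction logic is my reading of the description above):

```python
import math
import random

def tft_with_correction(my_history, their_history):
    # Round 1: "start with an even distribution" over the legal plays 0-5.
    if not their_history:
        return random.randint(0, 5)
    mine, theirs = my_history[-1], their_history[-1]
    total = mine + theirs
    if total > 5 and theirs > mine:
        # They overbid: reduce the value I played by the amount over five.
        play = mine - (total - 5)
    elif total < 5 and theirs < mine:
        # They underbid: increase the value I played by the shortfall.
        play = mine + (5 - total)
    elif total != 5 and mine == theirs:
        # Equal plays: split the correction; coin-flip on a .5 adjustment.
        half = mine + (5 - total) / 2
        play = random.choice((math.floor(half), math.ceil(half)))
    else:
        play = theirs  # plain tit-for-tat: copy their last move
    return max(0, min(5, play))
```

Against fivebot this drops to 0 after one round (losing at most the first round's total), which matches the claimed behavior; the other claimed properties depend on the random opening.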

Comment by decius on Living in an Inadequate World · 2017-11-11T17:45:29.577Z · score: 5 (2 votes) · LW · GW

Nominal GDP also increases by 1000 times, and everyone's currency savings increases 1000-fold, but the things which are explicitly in nominal currency rather than in notes will keep the same number. The effect would be to destroy people who plan on using payments from debtors to cover future expenses, in the same way as if their debtors had defaulted and paid only one part in a thousand of the debt, but without any default occurring.
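A toy illustration of that asymmetry (all numbers invented): an overnight 1,000-fold price-level jump leaves revalued savings whole but guts anyone relying on fixed nominal payments.

```python
INFLATION = 1000                # price level jumps 1,000-fold overnight

savings_nominal = 100_000       # currency savings, revalued 1,000-fold per the scenario
note_owed_nominal = 100_000     # a fixed-dollar note owed *to* you, not revalued

# Revalued savings keep their purchasing power (measured in old dollars):
savings_real = (savings_nominal * INFLATION) / INFLATION   # 100000.0
# The fixed nominal note now buys 1/1000th of what it did:
note_real = note_owed_nominal / INFLATION                  # 100.0
```

The creditor is paid in full in nominal terms, yet receives the purchasing power of a 99.9% default.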

Comment by decius on Moloch's Toolbox (1/2) · 2017-11-07T01:18:33.411Z · score: 3 (2 votes) · LW · GW

My prediction is that having a sincerely held belief to 'defy Moloch whenever possible' would result in suffering the harm caused by being the first actors to switch away from the worse Nash equilibrium.

Let's talk about how timed-collective-action-threshold-conditional-commitments work.

Comment by decius on In defence of epistemic modesty · 2017-10-31T11:07:14.567Z · score: 5 (2 votes) · LW · GW

The most important thing about holding the all-things-considered view is not multiply propagating the consensus belief, especially when the information flow is one-way: if you report your credence after updating toward a consensus that you didn't agree with, but without causing the consensus to update at least a tiny bit towards your belief, then someone who updates their inside view both on the view you hold after updating on others, and on the view that others hold without updating on you, will develop and propagate errors even if everyone involved is doing the math diligently and accurately.

Comment by decius on Intellectual Hipsters and Meta-Contrarianism · 2017-06-29T04:04:08.811Z · score: 0 (0 votes) · LW · GW

There will always be tasks at which better (Meta-)*Cognition is superior to the available amounts of computing power and tuning search protocols.

It becomes irrelevant if either humans aren't better than easily created AIs at that level of meta, or AIs go enough levels up to be a failure mode.

Comment by Decius on [deleted post] 2017-06-29T03:59:59.491Z

No individual cares about anything other than the procedures. Thus, the organization as a whole cares only about the procedures. The behavior is similar /with the procedures that exist/ to caring about fitness, but there is also a procedure to change procedure.

If the organization cared about fitness, the procedure to change the height/weight standards would be based on fitness. As it is, it is more based on politics. Therefore I conclude that the Army cares more about politics and procedures than fitness, and any behavior that looks like caring about fitness is incidental to their actual values.

Comment by decius on Filter on the way in, Filter on the way out... · 2017-05-30T23:41:08.616Z · score: 1 (1 votes) · LW · GW

The listener's filter needs as an input the nature of the speaker's filter, or information is irretrievably lost.

The speaker's filter needs as an input the nature of the listener's filter, or information is irretrievably lost.

Having two codependent filters like that has a lot of stable non-lossy outcomes. One easy one to describe is the one where both filters are empty.

Unless you can convince me of a specific pair of filters such that many more people that I want to talk to use those two filters than use empty filters (increasing the number of people with whom I can communicate losslessly) or that provide some benefit superior to empty filters, I'll continue to use empty filters as much as possible, even if I have to aggressively enforce that choice on others.

Signalling higher status by applying 'tact' when I don't want to be insulting is not a benefit to me. Giving others more deference than myself regarding what filters to apply is not a benefit to me. If I want to insult someone, I can do that as effectively by insulting them as a tact culture communicator could by speaking without tact.

Comment by Decius on [deleted post] 2017-05-30T23:25:10.634Z

.... will banish you from the tribe.

The only person I heard of go to the brig was one who broke into barracks and stole personal property. Falsifying official records or running off to run a side job as a real estate broker was more of a '30 days restriction, 30 days extra duty, reduction in rate to the next inferior rate, forfeiture of 1/2 month's base pay for 2 months' thing.

Comment by Decius on [deleted post] 2017-05-30T02:42:00.826Z

The Army works just fine, and has goals that aren't ours. Why not steal much of their model /which works and has been proven to work/?

Especially if the problematic aspects of Army culture can be avoided by seeing the skulls on the ground.

Comment by Decius on [deleted post] 2017-05-30T02:35:51.097Z

Part of the program is separating people who don't lose weight. That doesn't mean they care about the height/weight, only that the next box is 'process for separation'.

There's not a lot other than adherence to procedure that most of the military actually does care about.

Comment by Decius on [deleted post] 2017-05-28T21:06:48.597Z

I read that "this is causing substantial harm" would be insufficient to cancel a norm, but expect that "this is creating a physical hazard" would be enough to reject the norm mid-cycle. The problem is that every edge has edge cases, and if there's a false negative in a midterm evaluation of danger...

Maybe I'm concluding that the paramilitary aesthetic will be more /thing/ than others are. In my observation authoritarian paramilitary styled groups are much more /thing/ than other people expect them to be. (My own expectations, OTOH, are expected to be accurate because subjectivity.)

Comment by Decius on [deleted post] 2017-05-28T08:14:32.506Z

"Last fortnight, we canceled [Idea which appeared to be horrible seconds after implementing it], which we continued for an entire fortnight because of our policy. Today we look at all available evidence and must decide if the meta-experiment generates benefits greater than the costs."

If you have no norm for evaluating that rule explicitly, it doesn't mean that you won't evaluate it. Maybe evaluating it every time it applies is excessive, but pretending that you won't quickly learn to put exit clauses in experiments that are likely to need them 'notwithstanding any other provision' is failing to accurately predict.

Comment by Decius on [deleted post] 2017-05-28T08:02:00.757Z

That's not because he didn't do the exercise. Bootcamp doesn't care if you lose weight; they only care whether you execute the weight loss program. If you don't meet any of the body proportion standards, you just have to perform extra exercise.

Comment by Decius on [deleted post] 2017-05-28T07:58:18.654Z

A defection would be any case in which a member did not arrive on time or participate fully. Period.

I'm suggesting that there be a formal process by which a member arrives late, performs ten pushups, and joins the event in progress. At the conclusion of the event, he says "My Uber driver was involved in a minor collision on my way here and that delayed me for too long to arrive on time." and (by secret ballot?) the Army votes and some adequate margin of them excuse the failure.

The other aspect I suggested is that a Dragon might say "[event] is next week and I would like to attend but it conflicts with exercise. May I be excused from exercise for [event]?". Again, the Army would vote and decide if the absence is excused.

I'm at a loss as to what to do to sanction a member who is not excused. The military has a long list of 'corrective actions' and 'punishments' that they can apply only because they don't constitute 'kidnapping' or other crimes. I guess you could possibly make those '[task] or removal from the Army', but that runs straight into the eviction problem. I think that it's absolutely critical that there's a credible threat underlying the discipline, precisely so that it is less likely to be needed, and the only one I find plausible is ejection, which becomes complicated because of Housing law and morality.

Comment by Decius on [deleted post] 2017-05-28T07:43:43.294Z

I'm managing/leading an internet gaming community, and the only tools I've ever had to use are selection and conversation.

I've had one person leave because their goal in joining was to acquire enough information and power to cause harm and they were so unsubtle about it that I was able to identify that and stop them. One additional person left because our norms of 'don't cheat' and 'be nice to our friends' were given to him gently by everyone in voice chat every time they were violated.

Oddly enough, both of those people ended up joining a specific competing group that held neither of the norms 'don't cheat' nor 'don't make public rape threats towards people who call out your cheating'.

And my selection method? Be public and pushy about what kind of norms you have, and push away people who don't already have and want to follow those norms.

Comment by Decius on [deleted post] 2017-05-28T07:27:08.425Z

That's only useful if the outside advisor has some level of veto power. I'd suggest something like allowing them to trigger a discussion meeting /outside of Dragon Army Territory/ with the advised, optionally including the Commander and/or other members, and also at the option of the advisor including legal counsel or a medical practitioner.

Not because I expect anyone to need the safeguards involved, but because making those explicitly part of the Expectations makes it harder to coerce somebody into not getting help. Making coercion of the type "You're fine, no need to waste time and leaving your ingroup to try to explain to some /outsider/ what's going on, they won't understand anyway" ring red alarm bell flags is a feature.

Comment by Decius on [deleted post] 2017-05-28T07:16:36.379Z

I see a possible failure mode where a member of a participant's family not into any rationalist community sees the Dragon Army rules and pattern-matches the rules and behavior into 'cult' (not arguing whether that pattern match is correct here, just saying that it might happen).

A family member concerned that their loved one might be involved in a dangerous cult might take extraordinary measures to remove that person from the situation, which might get very ugly.

I'm not sure that a nonparticipating buddy is sufficient to mitigate the risk of 'rescue'.

Comment by Decius on [deleted post] 2017-05-28T07:07:50.785Z

Evaluating whether to change a thing at the moment when it is maximally annoying (as would be the case in ad-hoc votes) will have different results from evaluating it at a predetermined time.

I'd suggest evaluating the policy of 'demand that an approved norm stay in place until the scheduled vote' at the first scheduled vote following each scheduled vote in which a norm was dropped that people had wanted dropped mid-cycle but couldn't drop because of the policy.

Comment by Decius on [deleted post] 2017-05-28T07:02:33.252Z

It's hard to go from being the boss of someone to being their subordinate, and vice versa. I think it's more plausible to shift into an advisory, strategic, consultant, or executive role rather than swap.

Comment by Decius on [deleted post] 2017-05-28T06:59:02.878Z

The only way there would be nothing useful to learn is if there was a complete failure due to circumstances outside of the influence of anyone involved, such as an earthquake that halted the plan. Even then a quick note to that effect would be of use.

Comment by Decius on [deleted post] 2017-05-28T06:46:50.377Z

For someone who thinks that they are immune to being shunned, you sure do use a pseudonym.

Comment by Decius on [deleted post] 2017-05-27T01:22:14.877Z

Having a well-calibrated belief in your own reliability is better than being overconfident in yourself.

Making yourself more reliable is also an improvement. Whether that improvement is worth the cost is beyond my ability to guess.

Comment by Decius on [deleted post] 2017-05-27T01:19:54.361Z

If you don't commit to publishing negative results, I commit to refusing to trust any positive results you publish.

Comment by Decius on [deleted post] 2017-05-27T01:16:00.002Z

21 hours most weeks is 3 hours per day, or 2 hours during each weekday and ~10 for the weekend. Just making sure that your daily and weekly estimates don't contain math errors, not saying anything about the sufficiency of those numbers.

Comment by Decius on [deleted post] 2017-05-27T01:05:01.555Z

Losing one's job to avoid missing a house meeting (needed to work late) is the kind of bad priority that should be addressed.

Perhaps some kind of explicit measure where housemates judge and excuse or not each case on a case-by-case basis, including a measure to request leave in advance as well as in arrears?

Comment by Decius on [deleted post] 2017-05-27T00:57:04.800Z

Part right.

Most of the arguments you set forth are more fallacious and less relevant than not liking all the author's fiction.

But that's because most of the arguments you set forth were of the type "Bay Area rationalists have had a lot of problems and therefore this specific plan will have similar problems."

Comment by Decius on [deleted post] 2017-05-26T00:04:50.923Z

I would also add the rules that cover the edge cases:

A Dragon does not skirt the letter or intent of the rule, or attempt to comply minimally with either.

Comment by Decius on [deleted post] 2017-05-26T00:00:34.033Z

"roughly 90 hours a month (~1.5hr/day plus occasional weekend activities)" My math says that those weekend activities, on top of the 1.5 hours every day already has, must total about 10 additional hours every weekend.
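Checking that arithmetic (assuming a 30-day month):

```python
DAYS_PER_MONTH = 30
WEEKENDS_PER_MONTH = DAYS_PER_MONTH / 7         # about 4.3 weekends

claimed_total = 90                              # "roughly 90 hours a month"
daily_hours = 1.5 * DAYS_PER_MONTH              # 45 hours from the daily commitment
weekend_gap = claimed_total - daily_hours       # 45 hours still unaccounted for
per_weekend = weekend_gap / WEEKENDS_PER_MONTH  # ~10.5 extra hours per weekend
```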

"Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected. After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement." "

It seems counterproductive to have people who have left the experiment living in the same house until they are replaced. Exit terms such as 'two months notice, or less if a suitable replacement can be found or otherwise agreed' are less coercive.

Comment by decius on Allegory On AI Risk, Game Theory, and Mithril · 2017-02-17T16:23:42.495Z · score: 3 (3 votes) · LW · GW

You assume that balrogs can only be stopped by unmined bedrock. Since the chance of a given balrog being stopped by bedrock but not by the combined efforts of the dwarves is minuscule compared to the chance of a weak one that can be stopped by mithril-clad soldiers or a strong one that can dig through mere stone, the best defense against balrogs is to mine, and to guard the mines well.

Comment by Decius on [deleted post] 2017-01-09T16:39:03.556Z

Is it the post-truth world where true facts are lies because of reasons?

The false statement is "… therefore to be fair we should multiply every woman's wage by 10/7," instead of something like "… so to promote equality we should stop discouraging fourth-grade girls from studying math."

Those look like not-even-false claims because they almost are.

Comment by Decius on [deleted post] 2017-01-09T16:33:09.562Z

I would only agree that every major political party uses post-truth rhetorical methods and it is sad that each of them does. If you want to propose a unit of measurement for truthiness I'd consider comparing them.

Comment by Decius on [deleted post] 2017-01-09T16:27:24.505Z

A political group composed only of people who prioritize the good of the country over their own subtribe or self will lack the support needed to flourish.

It's not that people disagree or don't know about the object level facts. It's that people are actively fighting to gain relative advantage over others. And that is a cultural problem, not a political one.

Comment by Decius on [deleted post] 2017-01-09T15:11:08.591Z

The reason why culturally homogenous groups are higher trust is racism. The discussion from both sides needs to be about bad things, and racism is not infinitely bad or even any more inherently bad than inequality is.

Comment by Decius on [deleted post] 2017-01-09T15:07:35.737Z

It seems like you are trying to create a new partisan political party. To skip unrelated drama I'll refer to it as Evidence Based Politics Proponents, or EBPP, because that summarizes what I think you want and taboos the things that I think people find objectionable about the name.

The current two-party system has evolved over a hundred or so federal elections to approach at least a local maximum in their strategy. Their strategy is likely significantly better than the typical one; in particular the winning strategy is expected to be significantly better than intuitive strategies that are not incorporated into the winning strategy. I think that the values and methods you are proposing for the EBPP are intuitive and have been tried repeatedly, and have failed to ever take hold.

Why do you think that trying for honesty and rational decision making will be significantly more effective at winning elections or accomplishing goals in 2018 than it has been from 1791-present?

Do you think it hasn't been tried before, or do you think that you have a better plan than The Coalition for Evidence-Based Policy, the group that currently considers "top tier" interventions that result in up to 14% of students reporting that they had never smoked (the largest effect size among various studies; typical results were that 5-10% less of the experimental group reported recent drug use or heavy intoxication than the control group, after a long period)? With their strongest recommendations being for effect sizes that large, I can't imagine how they would tackle fiscal policy recommendations and other policies that have a large expected value but are currently managed by ideological and tribal forces.

The current facts of political reality are that once a domain of science gets close to suggesting political policy, political control of the science becomes certain. For example, regardless of what the facts are about climate trends, the conclusions drawn by "independent" groups have an implied political policy which correlates strongly with the desired policy of their funding agency. The actual facts are unavailable for public perusal, partly because they are arcane and partly because they are obfuscated. The rational-politics strategy would be to determine how desirable each climate is, how desirable each level of CO2 production is, and how CO2 production maps to climate, in order to find the optimum balance between the two; that optimum strategy cannot happen when one camp is focusing on ideological goals of zero net emissions for reasons unrelated to climate and another camp is demanding zero restrictions based on ideology.

Comment by decius on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T04:39:10.829Z · score: 0 (0 votes) · LW · GW

"Lives saved" is finite within a given light cone.

Comment by decius on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T04:37:20.641Z · score: -2 (2 votes) · LW · GW

The perfectly rational agent considers all possible different world-states, determines the utility of each of them, and states "X", where X is the utility of the perfect world.

For the number "X+epsilon" to have been a legal response, the agent would have to have been mistaken about their utility function or about what the possible worlds were.

Therefore X is the largest real number.

Note that this is a constructive proof, and any attempt at a counterexample should address the specific X discovered by a perfectly rational omniscient abstract agent with a genie. If the general solution is true, it will be trivially true for one number.

Comment by decius on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T04:30:03.215Z · score: 0 (0 votes) · LW · GW

For the Unlimited Swap game, are you implicitly assuming that the time spent swapping back and forth has some small negative utility?

Comment by decius on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T00:48:53.071Z · score: 0 (0 votes) · LW · GW

There is an optimal strategy for negotiation. It requires estimating the negotiation zone of the other party and the utility of various outcomes (including failure of negotiation).

Then it's just a strategy that maximizes the sum of the probability of each outcome times the utility thereto.

The hard part isn't the P(X1)U(X1) sums; it's getting the P(X1) and U(X1) values in the first place.
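The P(X)·U(X) bookkeeping is mechanical once the estimates exist; a minimal sketch (the strategy names, outcomes, probabilities, and utilities are all invented for illustration):

```python
# Utility of each possible negotiation outcome (invented numbers).
outcomes = {"deal_high": 10.0, "deal_low": 4.0, "no_deal": 0.0}

# Estimated outcome probabilities under each candidate strategy.
strategies = {
    "aggressive":  {"deal_high": 0.3, "deal_low": 0.1, "no_deal": 0.6},
    "cooperative": {"deal_high": 0.1, "deal_low": 0.7, "no_deal": 0.2},
}

def expected_utility(probs):
    # The easy part: the sum of P(X) * U(X) over outcomes.
    return sum(p * outcomes[o] for o, p in probs.items())

best = max(strategies, key=lambda s: expected_utility(strategies[s]))
```

The max() call is trivial; everything interesting went into estimating the two dictionaries.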

Comment by decius on The Number Choosing Game: Against the existence of perfect theoretical rationality · 2016-01-06T00:41:38.563Z · score: 1 (1 votes) · LW · GW

Suppose instead that the game is "gain n utility". No need to speak the number, wait n turns, or even to wait for a meat brain to make a decision or comprehend the number.

I posit that a perfectly rational, disembodied agent would decide to select an n such that there exists no n higher. If there is a possible outcome that such an agent prefers over all other possible outcomes, then by the definition of utility such an n exists.