The National Security Commission on Artificial Intelligence Wants You (to submit essays and articles on the future of government AI policy) 2019-07-18T17:21:56.522Z · score: 32 (8 votes)
Slack Club 2019-04-16T06:43:22.442Z · score: 57 (20 votes)
Trust Me I'm Lying: A Summary and Review 2018-08-13T02:55:16.044Z · score: 102 (39 votes)
On Authority 2018-07-05T02:37:28.793Z · score: 14 (4 votes)
Curriculum suggestions for someone looking to teach themselves contemporary philosophy 2013-05-31T04:20:58.811Z · score: 11 (11 votes)
Ruthless Extrapolation 2012-07-13T20:51:23.463Z · score: 0 (7 votes)
Bertrand Russell's Ten Commandments 2012-05-06T19:52:22.012Z · score: 7 (26 votes)
[LINK] Signalling and irrationality in Software Development 2011-11-21T16:24:33.744Z · score: 9 (14 votes)
How did you come to find LessWrong? 2011-11-21T15:32:34.377Z · score: 5 (8 votes)


Comment by quanticle on The Real Rules Have No Exceptions · 2019-07-24T02:42:01.179Z · score: 6 (3 votes) · LW · GW

That's something you see in movies, yes, but as I understand what Paul Scharre is saying, it's not something that's actually true. According to him, the laws of war "care about what you do, not who you are." If you are behaving in a soldierly fashion, you are a soldier, whether you are a young man, old man, woman, or child.

Comment by quanticle on The Real Rules Have No Exceptions · 2019-07-23T14:18:23.656Z · score: 28 (7 votes) · LW · GW

Paul Scharre, in his excellent book about the application of AI to military technology, Army of None, has an anecdote which I think is relevant. In the book, he talks about leading a patrol up an Afghan hillside. As he and the troops under his command ascend the hillside, they're spotted by a farmer. Realizing that they've been spotted, the patrol hunkers down and awaits the inevitable attack by Afghan insurgent forces. However, before the attackers arrive, something unexpected happens. A little girl, about 5 or 6 years of age, comes up to the position, with some goats and a radio. She reports the details of the Americans' deployment to the attacking insurgents and departs. Shortly thereafter, the insurgent attack begins in earnest, and results in the Americans being driven off the hillside.

After the failed patrol, Scharre's troop held an after-action briefing where they discussed what they might have done differently. Among the things they discussed was potentially detaining the little girl, or at least relieving her of her radio so as to limit the information being passed back to the attackers. However, at no point did anyone suggest the alternative of shooting the girl, even though they would have been perfectly justified, under the laws of war and rules of engagement, in doing so. Under the laws of war, anyone who acts like a soldier is a soldier, and this includes 5-year-old girls conducting reconnaissance for insurgents. However, everyone understood, on a visceral level, that there was a difference between permissible and correct, and that the choice of shooting the girl, while permissible, was morally abhorrent to the point where it was discarded at an unconscious level.

That said, no one in the troop said, "Okay, well, we need to amend our rules of engagement to say, 'Shooting at people conducting reconnaissance is permissible... except when the person is a cute little 5-year-old girl.'" Everyone recognized, again at an unconscious level, that there was value to having a legible rule ("Shooting at people behaving in a soldierly manner is acceptable") with illegible exceptions ("Except when that person is a 5-year-old girl leading goats"). The drafters of rules cannot anticipate every circumstance in which the rule might be applied, and thus having some leeway about the specific obligations (while making the intent of the rule clear) is valuable insofar as it allows people to take actions without being paralyzed by doubt. This applies as much to rules governing an organization as it does to rules that you make for yourself.

The application to AI is, I hope, obvious. (Unfriendly) AIs don't make a distinction between permissible and correct. Anything that is permissible is an option that can be taken, if it furthers the AI's objective. Given that, I would summarize your point about having illegible exceptions as, "You are not an unfriendly AI. Don't act like one."

Comment by quanticle on What questions about the future would influence people’s actions today if they were informed by a prediction market? · 2019-07-21T06:40:35.818Z · score: 1 (2 votes) · LW · GW


  • What is the probability that the earth's average temperature will rise less than 2°C from 1990s averages?
  • What is the probability of increased droughts/storms/wildfire/etc. in my location?
  • What is the probability of food shortages in my country over the next 10 years?

The answers to these questions would inform whether one takes greater or lesser preparations to deal with a changing climate. If you know climate change will affect the area in which you live, and have a decent prediction for the nature and magnitude of said changes, then you can prepare. As it is, the messaging around climate change is largely, "We know things are going to change! But we don't really know how or when! Be afraid!"


  • What is the probability of a nuclear exchange within the next 10 years?
  • What is the probability that the US will activate selective service for an armed conflict in the next 10 years?
  • What is the probability of <regulation X> affecting <industry Y> in the next 10 years?

Even a limited nuclear exchange would have widespread economic and social consequences. If there's a high probability of such in the near future, it might be a sign to move one's assets into more liquid forms. Selective service is a US-specific question, but, depending on how fiercely one opposes forced military service, it might be useful to know if there was a decent chance of getting drafted to fight in an armed conflict.


  • What is the probability of a human level AI developing within the next 10 years?
  • What is the probability that a human-level AI will be able to recursively self-improve without outside assistance?
  • What is the probability of a whole-brain emulation within the next 10 years?
  • What is the probability that, if a recursively improving human-level AI is developed in the next 10 years, the AI is friendly?

The AI questions would inform one's views on giving to AI-friendliness organizations, as well as one's estimates of the probability of, and time until, a singularity.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-20T00:01:32.540Z · score: 14 (5 votes) · LW · GW

I don't know if this counts as evidence, per se, but De Long, Shleifer, Summers and Waldmann had a fairly seminal paper on this in 1987: The Economic Consequences of Noise Traders. In it, they explain how the addition of "noise traders" (i.e. traders who trade randomly) can make financial markets less efficient. Conventional economic theory, at the time, held that the presence of noise traders didn't reduce the efficiency of the market, because rational investors would be able to profit off the noise traders and prices would still converge to their true values.

In the paper, De Long et al. demonstrate that it's possible for noise traders to earn higher returns than rational investors and, in the process, significantly affect asset prices. Key to their insight is that, in the real world, investors have limited amounts of capital, and the presence of noise traders significantly raises the amount of risk that rational investors have to take on in order to invest in markets with large numbers of noise traders. This risk causes potentially wide, though not permanent, divergences between asset prices and fundamental values, which can serve to drive rational investors from the market.

I don't see any reason to believe that prediction markets would behave differently from the stock markets that De Long et al.'s paper targeted. My hypothesis is that prediction markets have shown increasing accuracy with increasing participation so far, but that this relationship will break down once the relatively limited pool of people who are willing to think before they trade is exhausted and further increases in participation draw from a pool of noise traders.
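The capital-constraint mechanism can be illustrated with a deliberately crude toy model (mine, not the paper's): noise traders add random demand shocks each period, and an arbitrageur with a fixed risk budget can only absorb so much mispricing. As the noise-trader pool grows, residual deviations from fundamental value grow with it.

```python
import random

def simulate_mispricing(n_noise, n_periods=1000, seed=0):
    """Toy model: each noise trader adds a random demand shock; a
    capital-constrained arbitrageur can only offset a fixed amount.
    Returns the average absolute deviation from fundamental value."""
    rng = random.Random(seed)
    arb_capacity = 5.0  # max mispricing the arbitrageur can absorb per period
    deviations = []
    for _ in range(n_periods):
        shock = sum(rng.gauss(0, 1) for _ in range(n_noise))
        # arbitrageur trades against the shock, up to capacity
        correction = max(-arb_capacity, min(arb_capacity, shock))
        deviations.append(abs(shock - correction))
    return sum(deviations) / n_periods

few = simulate_mispricing(n_noise=5)
many = simulate_mispricing(n_noise=50)
assert many > few  # more noise traders -> larger average mispricing
```

This is only a sketch of the qualitative claim, not the paper's actual model, which works through risk-averse overlapping generations of traders rather than a hard capacity cap.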

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T23:53:31.132Z · score: 2 (1 votes) · LW · GW

I endorse clone of saturn's reply elsewhere in the thread. I didn't often go into the discussion section, so I thought that there were fewer active users, when in reality it could very well have been fewer active users posting in the Main section.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T14:04:00.616Z · score: 11 (2 votes) · LW · GW

It's not so much that the process had a massive amount of risk as that it implemented a Taleb-style antifragile strategy. It lost money in dribs and drabs every year when times were good, but when times turned bad, it made a massive amount of money. According to The Big Short, Burry was paying out premiums on CDO insurance every year while times were good, and got the insurance payout when the market turned and things went bad. So, for three or four years, he was invested in these really weird securities, securities that his investors hadn't signed up for, securities that were losing money, while they waited for a payout.

As far as why they wouldn't be interested in risking a smaller fraction of their money, the strategy only works if you have enough buffer to wait out the good years and capitalize on the inevitable downturn when it happens. We've seen this with Taleb himself. While he did well in the dotcom crash and the global financial crisis, he's had basically negative returns since.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T13:09:27.811Z · score: 13 (3 votes) · LW · GW

Many companies have their culture decline as they hire more, and have to spend an incredible amount of resources simply to prevent this (which is far from getting better as more people join). (E.g. big tech companies can probably have >=5 candidates spend >=10 hours in interviews for a single position. And that’s not counting the probably >=50 candidates for that position spending >=1h.)

Is the super-elaborate hiring game really necessary, though? I've worked at Amazon and Microsoft. I've also worked at other firms which had much looser hiring practices. In my experience, the elaborate hiring game that these tech companies play is more about signalling to candidates, "We are a Very Serious Technology Company who use Only The Latest Modern Hiring Practices™." It seems quite possible to me that these hiring practices could be considerably streamlined without actually affecting the quality of the candidates that got through. But, if they did that, then the hiring process would lose some of its signalling value, and the company wouldn't be seen as a Super Prestigious Institution™ which accepts Only The Best™.

tl;dr: In my view, the FAAMNG hiring process works in the same way as the Harvard application process. It's as much about advertising and signalling to candidates that the company is an elite institution as it is about actually hiring elite candidates.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T13:00:44.332Z · score: 11 (2 votes) · LW · GW

The problem was how he made those billions of dollars. Burry's initial investment thesis was stocks. When he pitched his fund to investors, it was a stock fund. Then, later, as Burry found that there was no way in the stock market to short the housing market, he branched out into the sorts of exotic collateralized debt obligations that would make him his profits.

From the perspective of his investors (a perspective I personally agree with), Burry was a loose cannon. The only reason he made a bunch of money, instead of going down with every penny his investors had entrusted to him, is that he managed to get lucky. Ask yourself: what would have happened to Burry's fund if the housing market hadn't cratered in 2007-2008? What if the housing market rally had gone on for another five or six years?

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T05:40:04.950Z · score: 9 (4 votes) · LW · GW

Online forums usually decline with growing user numbers (this happened to Reddit, HackerNews, as well as LessWrong 1.0)

Reddit and HackerNews, sure, but was the decline of LessWrong really due to growing user numbers? From what I've seen and read of LessWrong history, the decline was due to reductions in post volume, rather than post quality, which suggests that it was a symptom of stagnating or shrinking active user numbers. Simply put, fewer people posting → fewer reasons to check the site → fewer comments → stagnation and death.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T05:30:47.053Z · score: 14 (4 votes) · LW · GW

But it’s still the case that a system in a bad equilibrium with deeply immoral consequences rewarded the outcasts who pointed out those consequences with billions of dollars.

That's not exactly true. There were outcasts who correctly pointed out that the housing market was deeply troubled in e.g. 2004 and 2005. Did the market reward them? No. They went bust, as the market proved capable of staying irrational longer than they were capable of remaining solvent. Even in The Big Short, Michael Burry very nearly went bust, and had to resort to exercising fail-safe clauses in his investment contracts to stay solvent. The exercise of those clauses, and the resulting rancor with his investors, meant that even though he "won" and got a fair chunk of a billion-dollar payout, he was basically frozen out of investing afterwards.

Comment by quanticle on The AI Timelines Scam · 2019-07-18T19:25:53.769Z · score: 8 (3 votes) · LW · GW

I don’t actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious)

Bernie Madoff pleaded guilty to running a pyramid scheme. As part of his guilty plea, he admitted that he had stopped trading in the 1990s and had been paying returns out of capital since then.

I think this is an important point to make, since the implicit lesson I'm reading here is that there's no difference between giving false information intentionally ("lying") and giving false information unintentionally ("being wrong"). I would caution that that is a dangerous road to go down, as it just leads to people being silent. I would much rather receive optimistic estimates from AI advocates than receive no estimates at all. I can correct for systematic biases in data. I cannot correct for the absence of data.

Comment by quanticle on What is your Personal Knowledge Management system? · 2019-07-18T16:50:08.807Z · score: 4 (2 votes) · LW · GW

I'll repeat the endorsements of org-mode, and add some links to specific org-mode features that I use.

  • I use org-capture to capture to-do items. I've found nothing quite as streamlined as being able to hit C-c c t and just type what I need to do.
  • I use org-agenda to create daily to-do lists for myself. When I org-capture a new to-do item, I will add a deadline (C-c C-d) to indicate when I want to do the task.
  • I use org-publish to write my website. This is nice because it's very immediate for me to write, and I can hit C-c C-e P x to publish a new version of my website.
  • For other org-mode tips, you can look at my emacs tips page, which has my emacs configuration documented (in a somewhat haphazard and idiosyncratic fashion).

I've tried other PIM tools, like vimwiki, workflowy, DynaList and Dropbox Paper, and I've found that, of all of them, org-mode offers the right mix of customizability and immediacy for me. That said, I'll be the first to admit that org-mode doesn't have the easiest learning curve, and its support for mobile devices is pretty much trash. (People have recommended orgzly, but, honestly, orgzly's UI is pretty terrible.) What I do when I'm out and about is capture notes in Google Keep and then copy those notes over into org-mode when I get back to my computer.

Comment by quanticle on Self-consciousness wants to make everything about itself · 2019-07-04T01:04:21.620Z · score: 4 (2 votes) · LW · GW

I'm not sure there's a difference. Either you're asking up front ("Hey, do you mind if I set this timer to auto-publish in a week?") or you're asking later ("Hey, we just discussed something that I think would be of interest, do you mind if I publish it?").

In fact, I think asking after the fact might be easier, because you can point to specific things that were discussed and say, "I'm going to excerpt <x>, <y>, and <z>. Is that okay?"

Comment by quanticle on Self-consciousness wants to make everything about itself · 2019-07-03T21:47:43.624Z · score: 4 (2 votes) · LW · GW

This line of thought caused me to think that it might be quite valuable to have some kind of "conversation escrow" that allows people to have [a] conversation in private that still reliably gets published. As an example, you could imagine a feature on LessWrong to invite someone to a private comment-thread. The whole thread happens in private, and is scheduled to be published a week after the last comment in the thread was made, unless any of the participants presses a veto button...

I'm not sure I understand either the problem or the proposed solution. If there's a veto button, it's not reliable publishing, is it? How is this any better or different than having a private exchange via e-mail, Slack, Discord, etc., and then asking the other person, "Do you mind if I publish an excerpt?"

More generally, I'm not sure what kind of problem this tool would solve. Can you name some kinds of conversations that this tool would be used for?

Comment by quanticle on Circle Games · 2019-06-09T18:36:58.805Z · score: 16 (5 votes) · LW · GW

The baby’s fascination with circle games completely belies the popular notion that drill is an intrinsically unpleasant way to learn. Repetition isn’t boring to babies who are in the process of mastering a skill. They beg for repetition.

I disagree with the implication there that drill is repetition. Drill, to me, is repetition with predictable results. If I'm doing the same thing over and over again, and I'm getting exactly what I expect each time, that's a drill. The sort of entertaining repetition you're pointing at here is something where I don't necessarily know what to expect every time I take an action.

A good contrast is painting a deck versus playing a slot machine. They're both extremely repetitive actions. Heck, even the physical movements in each are similar (if anything, deck painting involves less repetitive movement than playing a slot machine). Yet, we see people getting addicted to playing slot machines. I've never heard of anyone getting addicted to deck painting. The difference is that deck painting is pretty predictable. Dip the brush in paint, apply paint to the deck, and there's paint on the deck, exactly as you'd expect. A slot machine, on the other hand, is geared toward unpredictability. You pull the lever, and you don't know what's going to happen when the reels stop. Will you get the jackpot? A lesser prize? Nothing at all? The sorts of circle games that babies enjoy are closer (from the perspective of the baby) to a slot machine than to deck painting.

For example, let's look at the Jack in the Box. It's predictable and boring to an adult. An adult (or even an older child) will pretty quickly catch on to the pattern that the box pops open after a number of turns or on a particular musical note ("Pop goes the weasel," etc.). However, to a child, especially one that's still grappling with the concept of cause and effect, a Jack in the Box is endlessly fascinating. Here's a mechanism, and when I manipulate the mechanism, something seemingly entirely unrelated happens?! How? Why?

Peek-a-boo is similar. Yes, the child might know that you're still there. But I'm willing to bet that you don't make exactly the same expression each time you open your hands and reveal your face. It's the variety of facial expressions, and the effort required to predict them, that provides the unpredictability that transforms peek-a-boo from a drill into a game.

Comment by quanticle on Is AI safety doomed in the long term? · 2019-05-26T03:20:30.160Z · score: 3 (2 votes) · LW · GW

On the basis that humans determine the fate of other species on the planet

Do they? There are many species that we would like to control or eliminate, but have been unable to. Yes, we can eliminate certain highly charismatic species (or bring them back from the brink of extinction, as need be), but I wouldn't generalize that to humans being able to control species in general. If we had that level of control, the problem of "invasive" species would be trivially solved.

Comment by quanticle on Which scientific discovery was most ahead of its time? · 2019-05-17T05:44:47.787Z · score: 6 (3 votes) · LW · GW

What about Mendelian Inheritance? It was initially discovered by Gregor Mendel in 1865, but it was seen as being a very narrow special case of genetics until about 1900, when de Vries, Correns and von Tschermak "rediscovered" his work. So that's about 35 years during which the statistical laws of inheritance were published, but weren't being used or built upon.

Comment by quanticle on Probability interpretations: Examples · 2019-05-14T03:25:08.877Z · score: 2 (1 votes) · LW · GW

You seem to be saying that "external shared reality" is an approximation in the same way that Newtonian mechanics is an approximation for Einsteinian relativity. That's fine. So what is "external shared reality" an approximation of? Just what exactly is out there generating inputs to my senses, and by what mechanism does it remain in sync with everyone else (approximately)?

Comment by quanticle on Probability interpretations: Examples · 2019-05-13T14:19:54.254Z · score: 2 (1 votes) · LW · GW

The examples you use reinforce my point. We argue about extremely fine details. When supporters of opposing teams argue over whether a point was or was not scored, they're disputing whether the ball was here or there by a few millimeters. You won't find very many people arguing that actually, the ball was clear on the other side of the field and in reality, the disputed point is one that would have been scored by the other team.

Similarly, we might argue about whether the British, Americans or Russians were primarily responsible for the United Nations' victory in World War 2, but I don't think you'll find very many people arguing that actually it was the Italians who won World War 2.

The fact that our perceptions of reality match each other 99.999% of the time, to me, indicates that there's something out there that exists regardless of whether I perceive it or not. I call that "reality".

Comment by quanticle on Probability interpretations: Examples · 2019-05-12T22:53:49.447Z · score: 3 (2 votes) · LW · GW

Human and animal brains do complicated calculations all the time in real time to get through life, like solving what amounts to non-linear partial differential equations to even get a bite of food into your mouth. Just because it is subconscious, it is no less of a math than proving theorems.

I agree. So if there is no "objective" reality, apart from that which we experience, then why is it that we all seem to experience the same reality? When I shoot a basketball, or hit a tennis ball, both I and the referee see the same trajectory and are in approximate agreement about where the ball lands. When I lift a piece of food to my mouth and eat it, it would surprise me if someone across the table said that they saw it spill from my fork and stain my shirt.

In the absence of an external reality, why is it that everyone's model of the world appears to be in such concordance with everyone else's?

Comment by quanticle on Type-safeness in Shell · 2019-05-12T19:06:21.121Z · score: 14 (6 votes) · LW · GW

PowerShell does a lot of this, doesn't it? PowerShell abandons the concept of programs transferring data as text, and instead has them transferring serialized .NET objects (with type annotations) back and forth. It doesn't extend to the filesystem, but it's entirely possible to write functions that enforce type guarantees on their input (e.g. requiring numbers, strings, or even more complicated data types, like JSON).

A good example is searching with regexps. In Unix, grep returns a bunch of strings (namely the lines which match the specified regex). In PowerShell, Select-String returns match objects, which have fields containing the file that matched, the line number that matched, the matching line itself, capture groups, etc. It's a much richer way of passing data around than delimited text.
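For readers who don't use PowerShell, here's a rough Python sketch of the same idea (the `MatchRecord` type and `search_lines` helper are hypothetical, invented for illustration): a search that hands back structured objects carrying the filename, line number, and capture groups, rather than bare lines of text.

```python
import re
from dataclasses import dataclass

@dataclass
class MatchRecord:
    """Structured search hit, loosely analogous to Select-String's output."""
    filename: str
    line_number: int
    line: str
    groups: tuple

def search_lines(filename, lines, pattern):
    """grep-like search over lines that returns objects, not strings."""
    regex = re.compile(pattern)
    hits = []
    for i, line in enumerate(lines, start=1):
        m = regex.search(line)
        if m:
            hits.append(MatchRecord(filename, i, line, m.groups()))
    return hits

hits = search_lines("app.log",
                    ["ok", "error: disk full", "error: timeout"],
                    r"error: (\w+)")
# each hit carries file, line number, and capture groups downstream,
# so later pipeline stages never have to re-parse the text
```

The design point is that downstream consumers get fields to work with instead of having to re-split delimited text and guess at its structure.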

Comment by quanticle on Probability interpretations: Examples · 2019-05-12T17:58:55.036Z · score: 6 (3 votes) · LW · GW

I still don't think you've answered Said's question. The question is whether two people can observe different values of pi. Or, to put it differently, why is it that, whenever anyone computes a value of pi, it seems to come out to the same value (3.14159...). Doesn't that indicate that there is some kind of objective reality, to which our mathematics corresponds?

One of the questions that Wigner brings up in The Unreasonable Effectiveness of Mathematics in the Natural Sciences is why does our math work so well at predicting the future? I would put the same question to you, but in a more general form. If there is no such thing as non-experienced mathematical truths, then why does everyone's experience of mathematical truths seem to be the same?
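For instance, two procedures that share no machinery at all converge on the same constant, which is the sort of agreement the question is pointing at:

```python
import random

# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(200_000))

# Monte Carlo: fraction of random points landing in the unit quarter-circle
rng = random.Random(0)
n = 200_000
inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1 for _ in range(n))
monte_carlo = 4 * inside / n

# an infinite series and a physical-style sampling experiment agree
assert abs(leibniz - monte_carlo) < 0.05
```

One method is pure analysis, the other is essentially a simulated dart-throwing experiment, yet anyone who runs either one recovers the same 3.14159...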

Comment by quanticle on Why books don't work · 2019-05-12T04:20:20.063Z · score: 4 (2 votes) · LW · GW

If wishes were horses, all men would ride.

More seriously, I would love for there to be a better way to learn than books, but in practice, books inhabit a sweet spot at the intersection of information density, ease of searching, and portability that's hard for other forms of media to match.

Comment by quanticle on Why books don't work · 2019-05-11T21:39:37.325Z · score: 8 (4 votes) · LW · GW

The author seems to spend almost no time engaging with or thinking critically about the books that he's read, and then claims that "books don't work". Has the author tried writing an outline? Or writing a review?

Simply reading a book, and letting its contents wash over you won't magically make you retain the contents of that book. There is no royal road to knowledge. One has to engage with a book in order to retain not just the conclusions of the book, but also the reasoning that led to the conclusions.

Comment by quanticle on [deleted post] 2019-05-11T20:26:43.527Z

From the post:

A lot of people like to use the prisoner's dilemma to justify being shitty to other humans they personally know.

Can you post a specific example of someone using the prisoner's dilemma to justify being shitty to someone they personally know? One of the preconditions of the prisoner's dilemma is that the prisoners don't know one another very well (otherwise, they'd have come up with some kind of prearranged strategy). You see this with real prisoners and real gangs: gangs often form along family and social lines, precisely because you can rely on your brother or a childhood friend when you've both been arrested, in a way that you can't with a relatively unknown stranger.

Comment by quanticle on [deleted post] 2019-05-11T20:21:47.712Z

Here's the post text. I was able to copy/paste it from Facebook, through some combination of running Linux, running Firefox, having AdBlock and not being logged in to Facebook:

A lot of people like to use the prisoner's dilemma to justify being shitty to other humans they personally know. I think this is a dumb analogy, at least for First World, relatively affluent adult life, because the original PD doesn't let you say "Hey, wait a minute, my game partner is an asshole who keeps defecting, I'm out of here".

That's why creating avenues of escape for people who don't have that luxury is so important to me. Entrapment is one of the most insidious parts of abuse, because of course you're going to come up with a counter-asshole strategy if you have to keep playing with an asshole. But most of those strategies have a cost of emotional guardedness and alienation that can make it tough to get along with genuinely good and nice people.

A better lesson from the PD? The winning strategy is always, always, always dependent on the strategies of your other players. It's really hard to juggle many wildly different strategies at once. Keep it simple. Pick a strategy that synchronizes well with itself, go and find people running the same strategy, and make that your people.

Comment by quanticle on Nash equilibriums can be arbitrarily bad · 2019-05-02T03:51:11.161Z · score: 4 (2 votes) · LW · GW

I see. And from there it follows the same pattern as a dollar auction, until the "winning" bet goes to zero.
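That undercutting dynamic can be sketched as a toy best-response iteration, assuming a traveler's-dilemma-style payoff (both players receive the lower claim, with a small bonus transferred from the higher claimant to the lower; the exact rules in the original post may differ):

```python
def payoff(mine, other, bonus=2):
    """Assumed payoff rule, for illustration only: equal claims pay
    face value; otherwise both get the lower claim, and the lower
    claimant gains `bonus` at the higher claimant's expense."""
    if mine == other:
        return mine
    lower = min(mine, other)
    return lower + bonus if mine < other else lower - bonus

# If the other player claims c, undercutting by 1 beats matching:
c = 1_000_000
assert payoff(c - 1, c) > payoff(c, c)  # 1_000_001 > 1_000_000

# Iterating best responses drives the claims all the way down:
claim = 1_000_000
while claim > 0 and payoff(claim - 1, claim) > payoff(claim, claim):
    claim -= 1
print(claim)  # 0: the equilibrium lands at the worst outcome
```

Any bonus greater than 1 is enough to make undercutting the strict best response at every level, which is what makes the equilibrium "arbitrarily bad" relative to the cooperative outcome.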

Comment by quanticle on Nash equilibriums can be arbitrarily bad · 2019-05-01T16:21:31.585Z · score: 7 (4 votes) · LW · GW

Doesn't the existence of the rule that says that no money changes hands if there's a tie alter the incentives? If we both state that we want 1,000,000 pounds, then we both get it and we both walk away happy. What incentive is there for either of the two agents to name a value that is lower than 1,000,000?

Comment by quanticle on Pecking Order and Flight Leadership · 2019-04-29T21:37:37.027Z · score: 1 (2 votes) · LW · GW

Do people hate prophets, or hate being prophets?

The former. Being a prophet is great! You've achieved enlightenment! All you're doing is trying to spread the good word of your revelations with the rest of humanity. Here are all of these people, living lives of immense suffering, and you have the solution. You can bring them peace. You can ease the torment of their souls! Even a cold-blooded utilitarian can see that a 5% reduction in suffering multiplied by several hundred million people represents a substantial gain in overall utility. And if a bit of force needs to be applied in order to get people to see the Good Word, then that is justified, is it not?

Comment by quanticle on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-07T03:17:32.223Z · score: 3 (2 votes) · LW · GW

And “knowing thyself” is especially important.

Why? If you took a test, and it came back telling you that you had an IQ of 140, what about your day-to-day life would change? Likewise, what would you do differently if you took a test and it came back telling you that you had an IQ of 90?

Comment by quanticle on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-07T03:15:30.509Z · score: 1 (3 votes) · LW · GW

What would you gain from knowing your own IQ?

As far as I can tell, knowing my own IQ is a no-win scenario. Either my IQ is higher than I expected, in which case I'd feel like a disappointment, or it's lower than I expected, in which case I'd feel like a fraud. I wouldn't gain any actionable data from it, so why bother?

Comment by quanticle on Will superintelligent AI be immortal? · 2019-03-31T00:38:32.541Z · score: 3 (2 votes) · LW · GW

If the system is float in the vacuum heat wont go out.

The heat won't escape by conduction, nor will it escape by convection. However, it will escape via radiation.
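The radiative loss can be sketched with the Stefan-Boltzmann law. The surface area, temperature, and emissivity below are purely illustrative assumptions, not figures from the discussion:

```python
# Radiative heat loss from a body in vacuum, via the Stefan-Boltzmann law.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_power(area_m2: float, temp_k: float, emissivity: float = 1.0) -> float:
    """Power radiated into empty space by a surface at temp_k, in watts."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# A hypothetical 1 m^2 black-body surface at 300 K radiates ~459 W.
print(radiated_power(1.0, 300.0))
```

Because the loss scales with the fourth power of temperature, any system that keeps generating heat in vacuum settles at whatever temperature makes radiated power equal generated power.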

Comment by quanticle on General v. Specific Planning · 2019-03-28T18:59:09.504Z · score: 3 (2 votes) · LW · GW

On the other hand, AlphaZero seems to play to obtain a specific, although gradually accumulated, positional advantage that ultimately results in a resounding victory. It is happy to sacrifice “generally useful” material to get this.

AlphaZero plays chess in a manner that is completely unlike how humans, or even human-designed chess programs play chess. A human grandmaster does play much like you describe yourself playing: accumulating piece advantages, and only making limited sacrifices to gain position, when it's clear that the positional advantage outweighs the piece disadvantage.

AlphaZero, on the other hand, plays much more positionally. In its games against Stockfish, it would make sacrifices that Stockfish thought were crazy, as Stockfish was evaluating the board based on pieces and AlphaZero was evaluating the board based on position.

Comment by quanticle on Do you like bullet points? · 2019-03-27T02:15:48.044Z · score: 19 (5 votes) · LW · GW

I find that bullet points lose out on the ability to include story type data that my system 1 responds to.

That's an advantage, in my opinion. I have a habit of turning articles into bullet point summaries, and I've found that the more difficult something is to turn into a bullet-point summary, the less actual content there is in the article. Ease of transformation into bullet points is a quick, yet remarkably effective heuristic to distinguish insight from insight porn.

Comment by quanticle on The tech left behind · 2019-03-16T04:12:40.838Z · score: 3 (2 votes) · LW · GW

In the same vein as OpenDoc, XMPP and RSS both come to mind. While they "saw the light of day", they never seemed to reach the threshold of popularity necessary for long-term survival, and they're not well supported any more. I would argue that they're both good examples of "left-behind" tech.

Comment by quanticle on The tech left behind · 2019-03-16T04:05:14.523Z · score: 5 (2 votes) · LW · GW

I would argue that spaced repetition is one such technology. We've known about forgetting curves and spaced repetition as a way of efficiently memorizing data since at least the '60s, if not before. Yet, even today, it's hardly used and if you talk to the average person about spaced repetition, they won't have a clue as to what you're referring to.

Here we have a really cool technology, which could significantly improve how we learn new information, and it's being used maybe 5% as often as it should be.
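The core mechanism is simple enough to sketch. This is a toy model, not any particular scheduler: retention is assumed to decay exponentially, and each successful review is assumed (arbitrarily) to double the memory's stability:

```python
import math

# Toy forgetting-curve model: retention decays exponentially with time,
# and each successful review raises the memory's "stability".
# The decay and growth constants are made-up illustrative values.

def retention(days_since_review: float, stability_days: float) -> float:
    """Predicted probability of recall after `days_since_review`."""
    return math.exp(-days_since_review / stability_days)

def next_interval(stability_days: float, target_retention: float = 0.9) -> float:
    """Days until retention is predicted to drop to the target level."""
    return -stability_days * math.log(target_retention)

# Reviewing each time retention hits 90% spaces the reviews further and
# further apart as stability grows -- the signature of spaced repetition.
stability = 1.0
for review in range(5):
    print(review, round(next_interval(stability), 2))
    stability *= 2.0
```

The exponentially widening intervals are why the method is so cheap: most of the maintenance cost of a memorized fact is paid in its first few days.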

Comment by quanticle on The tech left behind · 2019-03-16T03:55:40.411Z · score: 2 (1 votes) · LW · GW

Could you clarify how Damascus Steel qualifies? As I understand it, the question is asking about technologies which demonstrated promise, but never reached widespread use, and thus languished in obscurity. Damascus Steel was famous and highly prized in medieval Europe. While it was rare and expensive, I'm not sure that it manages to meet the obscurity criterion.

Comment by quanticle on To understand, study edge cases · 2019-03-03T00:06:18.010Z · score: 24 (11 votes) · LW · GW

There are fields where studying edge cases leads to confusion and actually hinders progress. From gwern's excellent essay on Bakewell and the origins of genetics:

But surviving theoretical scientific discussions of heredity are baffling. People lurch between ‘only fathers matter’ & ‘only mothers matter’, endlessly elaborating on wildly speculative (and wildly wrong) mechanistic explanations of how exactly sperm & eggs & embryos connected and formed, and in an example of “hard cases make bad law”, the focus on ‘monsters’ and other extreme cases among humans or animals badly misguided their premature attempts to elucidate universal principles comparable to that of astronomy or physics

The lesson is that when attempting to study statistical effects that aggregate across populations (as with genetics), studying the edge cases will lead one away from the truth rather than towards it. Bakewell, Mendel, and Darwin didn't develop their theories of heredity by studying plants and animals deformed by mutation. They studied populations of "normal" plants and animals, and kept very careful records of the statistical rate at which characteristics were transmitted from parent generations to child generations.
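The population-statistics approach can be sketched in a few lines. This simulates a single-locus cross between two heterozygotes (Aa x Aa) and counts phenotypes; the famous 3:1 dominant-to-recessive ratio only shows up in the aggregate counts, never in any individual "hard case":

```python
import random

# Simulate offspring of an Aa x Aa cross and count dominant phenotypes.
# The 3:1 Mendelian ratio emerges only from population-level tallies.

def offspring_is_dominant(rng: random.Random) -> bool:
    # Each parent passes one allele at random; the recessive phenotype
    # requires inheriting 'a' from both parents.
    allele1 = rng.choice("Aa")
    allele2 = rng.choice("Aa")
    return "A" in (allele1, allele2)

rng = random.Random(0)
n = 100_000
dominant = sum(offspring_is_dominant(rng) for _ in range(n))
print(dominant / n)  # close to 0.75
```

No amount of close study of a single unusual specimen reveals that ratio; it is a property of the distribution, which is exactly why careful breeding records beat "monsters" as evidence.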

Comment by quanticle on What makes a good culture? · 2019-02-11T04:51:56.811Z · score: 3 (2 votes) · LW · GW

And to go back to your point about cohesion not necessarily being an unqualified good, South Korean culture (especially its emphasis on one-shot high-stakes exams as a way of determining future life prospects) results in one of the highest suicide rates in the world.

Comment by quanticle on One Website To Rule Them All? · 2019-01-13T09:33:32.940Z · score: 3 (2 votes) · LW · GW

(Is minimum wage a good thing? Should I adopt a paleo or keto or vegan or Shangri-la diet? What do we really know and not know about the historical Jesus?)

I would point out that the three examples you've listed are of three different categories. The first, "Is minimum wage a good thing?" has a significant value component. Do you value whether people have money? How much inefficiency are you willing to trade off in the economy in order to ensure that people have a certain amount of minimum spending power from work? Without knowing your specific values, I cannot answer whether a minimum wage is or is not a good thing.

Your second question, "What kind of diet should I adopt?" has significant dependencies on your physiology. Are you gluten-allergic? Do you have allergies to nuts? Do you have diabetes? Kidney issues? All of these things impact which of the listed diets (if any) is going to be best for you. And this is just from a strictly physiological perspective -- it leaves aside issues of preferences (i.e. maybe veganism isn't really right for you if you really like bacon).

The third question, "What do we really know and not know about the historical Jesus?" is answered, to a first approximation, by Wikipedia.

I think you're really asking for three sites. For the first question, there should be a site where people can debate moral values. Ideally, this site would taboo "good" and "bad" altogether, and force people to frame value judgments in the context of the value systems that result in those judgments, allowing others to see the criteria by which those judgments are made.

For the second question, a site that provides guidelines rather than recommendations would be helpful. Ideally this site would present a way for you to submit details about what your medical situation is and what your dietary preferences are and then it would output a decision tree that you could use to arrive at a diet that would work best for you.

Finally, for the third site, it'd be something like Wikipedia (only perhaps with better filtering tools to help weed out unsourced data).

I'm not sure that it's possible to put together one site to rule them all, because the three sites would be doing such different things. We're going from "there might not even be a 'right' answer" to "there is a right answer but it might be different for every person" to "there is a single, externally verifiable objective truth". How do you handle that range of epistemologies with a single site?

Comment by quanticle on Littlewood's Law and the Global Media · 2019-01-13T09:03:01.410Z · score: 5 (2 votes) · LW · GW

I was able to use this post when discussing the news with a family member of mine. The example of a one-in-a-million event occurring 8 times a month (plus increasing global interconnection ensuring that we hear about these events every time they do occur) was especially helpful in debiasing someone who had read too much of the news.

Comment by quanticle on What are the open problems in Human Rationality? · 2019-01-13T07:59:35.217Z · score: 9 (16 votes) · LW · GW

How about: "What is rationality?" and "Will rationality actually help you if you're not trying to design an AI?"

Don't get me wrong. I really like LessWrong. I've been fairly involved in the Seattle rationality community. Yet, all the same, I can't help but think that actual rationality hasn't really helped me all that much in my everyday life. I can point to very few things where I've used a Rationality Technique to make a decision, and none of those decisions were especially high-impact.

In my life, rationality has been a hobby. If I weren't reading the sequences, I'd be arguing about geopolitics, or playing board games. So, to me, the most open question in rationality is, "Why should one bother? What special claim does rationality have over my time and attention that, say, Starcraft does not?"

Comment by quanticle on Spaghetti Towers · 2018-12-23T18:30:12.365Z · score: 3 (2 votes) · LW · GW

Meta note: the actual link URL ( results in an error when I click on it.

Comment by quanticle on Why Don't Creators Switch to their Own Platforms? · 2018-12-23T17:35:34.183Z · score: 14 (6 votes) · LW · GW

I don't know enough about Wistia to say. However, from a cursory examination of their website, I would be skeptical. Wistia is designed for hosting product videos for businesses. These videos don't go viral in the same way that PewDiePie's content does. If Wistia did host PewDiePie's content, my prediction would be that they'd have a deal with PewDiePie where he pays significantly more than he paid YouTube to host his content and, eventually, they'd incur enough controversy and protest to kick him off their platform.

Wistia's primary business is hosting boring promotional videos for businesses. Why should they put that boring-but-profitable business model at risk to host someone as troublesome as PewDiePie? Moreover, why should PewDiePie move his videos to Wistia? Despite the controversy, we must remember that the cost that PewDiePie pays to YouTube is negative. YouTube pays PewDiePie (unless he's been demonetized, in which case the cost to PewDiePie is zero).

I would be willing to bet that if Slate Star Codex got controversial enough to get kicked off Wordpress, then Scott Alexander would have a heck of a time building out his own site. Even if he were a programmer, and even if he knew enough about PHP and Wordpress to build out his own hosting, he'd have to deal with people protesting his new hosting provider. He'd have to deal with people complaining to Patreon and PayPal about his content. He'd have to deal with people launching hacking and DDOS attacks against his site, constantly.

Comment by quanticle on Why Don't Creators Switch to their Own Platforms? · 2018-12-23T06:43:54.813Z · score: 37 (11 votes) · LW · GW

The technology that YouTube provides was hard to build when YouTube started a decade and a half ago, but surely today it’s not a huge challenge.

PDP has 20 billion total views. He doesn’t need traffic from the algorithm suggesting his videos, everyone else is trying to game the algorithm to get redirected by PDP!

The problem is that building a platform to enable those 20 billion views carries enormous fixed costs that only make sense when they are amortized across a truly massive number of users, both uploaders and viewers. Video delivery at scale is one of the most difficult engineering problems out there. The only companies that have mastered it (YouTube, Vimeo, PornHub, Netflix, Amazon) are all billion-dollar enterprises.

Sure, PewDiePie could pay to build out his own video service. But would it be as good as YouTube? It's very doubtful that it would have the level of polish that YouTube offers. YouTube is far more than just tossing up a bunch of .mp4 files on a web server.

Finally, I think you're underestimating the power of YouTube's algorithms. When Logan Paul (another YouTube celebrity) got delisted from YouTube, he suffered a massive revenue hit, even though his videos were still on the platform (but not showing up in search results). So I do think that PewDiePie is beholden to the algorithm. I would be willing to bet that if PewDiePie got delisted from YouTube, he would rapidly be forgotten, and would be replaced by the next YouTube celebrity willing to walk the fine line between "outrageous enough to be entertaining" and "so outrageous as to cause offense".

Edit: Scott Alexander has addressed the part of your question regarding hosting other comedians in his excellent post, Freedom on the Centralized Web. He correctly points out that the initial group of switchers are all going to be people whom YouTube has deemed undesirable. However, YouTube deeming people undesirable is an effect. The cause is that these people have offended some powerful group (copyright holders, activists, etc.). If all of these people abandon YouTube and start their own platform, the same forces that kicked them off YouTube will ensure that their new platform is starved of funding and respectability. For a good example of this, look at what happened to Gab. I don't support Gab, but the saga of Gab shows how difficult it really is to set up an entirely independent platform that supports content society doesn't approve of.

Comment by quanticle on Playing Politics · 2018-12-06T16:51:54.946Z · score: 15 (5 votes) · LW · GW

I know this is how things are frequently done, but it bothers me. When an issue is officially the jurisdiction of a committee, everyone on the committee is equally entitled to be part of the discussion, and entitled to know what’s going on; having secret side conversations creates a hierarchy between those “in the know” and those who aren’t.

I disagree quite strongly with this. Being part of a discussion is a tax. It's overhead. It makes perfect sense, in my head, for a committee to split into subcommittees that have responsibility for specialized tasks, but which report back to and are accountable to the primary committee. In fact, I don't see how else one would accomplish any kind of complex task that requires specialized domain knowledge. And for tasks that don't require specialized domain knowledge, having everything presented before the committee usually results in needless bikeshedding, as everyone on the committee has to demonstrate their status and worth by proposing a change or critique, in order to show that they've considered the proposal and are more than a mere rubber stamp.

Even disregarding things like social signalling, group dynamics, and all the other things that geeks categorize as "social drama", making everything that is under the jurisdiction of the committee the responsibility of the entire committee is incredibly inefficient, just from a communications perspective. It requires, in networking terms, a "fully connected mesh", where every node has to be communicating with every other node. It's much more efficient, even from a communication and information theory perspective, for a committee to break into smaller groups, each of which has responsibility for a specific task or specialization. These groups can then report back to the overall committee, and the overall committee can choose to adopt or reject their ideas without having to go through the expensive process of having the entire committee deliberate on every proposal for every subtask.
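The arithmetic behind the "fully connected mesh" point can be made concrete. The subcommittee structure below (fully connected internally, one representative reporting to a single coordinating chair) is one illustrative assumption among many possible reporting topologies:

```python
# Communication links for a fully connected committee of n members,
# versus splitting into k subcommittees that report to one chair.

def full_mesh_links(n: int) -> int:
    """Every member talks to every other member: n choose 2."""
    return n * (n - 1) // 2

def subcommittee_links(n: int, k: int) -> int:
    """k equal subcommittees, fully connected internally, each with one
    representative who talks to a single coordinating chair."""
    size = n // k  # assumes k divides n, for simplicity
    internal = k * full_mesh_links(size)
    reporting = k  # one link from each subcommittee rep to the chair
    return internal + reporting

# A 20-person committee: 190 pairwise links as a full mesh,
# versus 44 links when split into 4 subcommittees of 5.
print(full_mesh_links(20), subcommittee_links(20, 4))
```

Because the full mesh grows quadratically while the subcommittee structure grows roughly linearly in committee size, the gap widens fast: doubling the committee roughly quadruples mesh links but only doubles subcommittee links.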

Comment by quanticle on Anyone use the "read time" on Post Items? · 2018-12-02T03:42:20.189Z · score: 26 (9 votes) · LW · GW

I'm on the other side. I prefer word count to read time, because I know approximately how many words per minute I read. The read time calculation that LessWrong uses is approximately 300 words per minute. If you read faster or slower than that, the read times will be off for you.

This is more impactful for people who are slow readers; being told that something is a five minute read and finishing it in three minutes isn't a big deal. Being told that something is a five minute read and actually taking seven or eight minutes to finish is considerably worse. For this reason I would prefer word count to be the default.
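The conversion above is trivial to sketch. The 300 wpm figure comes from the comment; any individual reader's true speed is of course an assumption:

```python
# Converting word count to estimated read time at a given reading speed.
# 300 wpm is the site's assumed default; real speeds vary widely.

def read_time_minutes(word_count: int, wpm: int = 300) -> float:
    return word_count / wpm

# A "five minute read" at 300 wpm is ~1500 words: a 200 wpm reader
# actually needs 7.5 minutes, while a 500 wpm reader needs only 3.
words = 1500
print(read_time_minutes(words, 300))  # 5.0
print(read_time_minutes(words, 200))  # 7.5
print(read_time_minutes(words, 500))  # 3.0
```

This is the whole case for showing word count instead: the raw number lets each reader apply their own speed, while a single precomputed time bakes in a speed that is wrong for most people.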

Also, if you use the GreaterWrong viewer, you get the option to choose. You can click on the read time to switch it to word count. Clicking again switches it back.

Comment by quanticle on October links · 2018-11-01T16:50:25.105Z · score: 5 (3 votes) · LW · GW

But why would that be an advantage exclusive to MLP? One could say the same about the Star Wars universe, for example (and indeed, there is a lot of Star Wars fanfiction out there).

Comment by quanticle on October links · 2018-11-01T03:08:50.607Z · score: 11 (6 votes) · LW · GW

This is worth reading for the excellent review of My Little Pony: Friendship Is Magic.

Comment by quanticle on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-10-28T04:55:06.726Z · score: 4 (2 votes) · LW · GW

Note that I said discussion, not engagement. Would your conclusion be the same if a post got relatively few replies, but was upvoted to +100?