Posts

The National Security Commission on Artificial Intelligence Wants You (to submit essays and articles on the future of government AI policy) 2019-07-18T17:21:56.522Z · score: 32 (8 votes)
Slack Club 2019-04-16T06:43:22.442Z · score: 57 (20 votes)
Trust Me I'm Lying: A Summary and Review 2018-08-13T02:55:16.044Z · score: 102 (39 votes)
On Authority 2018-07-05T02:37:28.793Z · score: 14 (4 votes)
Curriculum suggestions for someone looking to teach themselves contemporary philosophy 2013-05-31T04:20:58.811Z · score: 11 (11 votes)
Ruthless Extrapolation 2012-07-13T20:51:23.463Z · score: 0 (7 votes)
Bertrand Russell's Ten Commandments 2012-05-06T19:52:22.012Z · score: 7 (26 votes)
[LINK] Signalling and irrationality in Software Development 2011-11-21T16:24:33.744Z · score: 9 (14 votes)
How did you come to find LessWrong? 2011-11-21T15:32:34.377Z · score: 5 (8 votes)

Comments

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-10-03T16:16:26.550Z · score: 2 (1 votes) · LW · GW

I've replied below with a similar question, but do you have a source on "satellite radar operators"? The published accounts of the incident imply that Petrov was the satellite radar operator. He followed up with the operators of the ground-based radar later, but at the time he made the decision to stay silent, he had no data that contradicted what the satellite sensors were saying.

As far as the Bayesian justification goes, I think this is bottom-line reasoning. We're starting with "Petrov made a good decision" and working backwards to find reasons why his decision was reasonable and justifiable.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-10-03T16:04:17.108Z · score: 2 (1 votes) · LW · GW

Do you have a source on Petrov consulting the radar operators? The Wikipedia article on the 1983 incident seems to imply that he did not.

Shortly after midnight, the bunker's computers reported that one intercontinental ballistic missile was heading toward the Soviet Union from the United States. Petrov considered the detection a computer error, since a first-strike nuclear attack by the United States was likely to involve hundreds of simultaneous missile launches in order to disable any Soviet means of a counterattack. Furthermore, the satellite system's reliability had been questioned in the past. Petrov dismissed the warning as a false alarm, though accounts of the event differ as to whether he notified his superiors or not after he concluded that the computer detections were false and that no missile had been launched. Petrov's suspicion that the warning system was malfunctioning was confirmed when no missile in fact arrived. Later, the computers identified four additional missiles in the air, all directed towards the Soviet Union. Petrov suspected that the computer system was malfunctioning again, despite having no direct means to confirm this. The Soviet Union's land radar was incapable of detecting missiles beyond the horizon.

From the passage above, it seems like, at the time of the decision, Petrov had no way of confirming whether the missile launches were real or not. He decided that the missile launch warnings were the result of equipment malfunction, and then followed up with land-based radar operators later to confirm that his decision had been correct.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-10-03T13:58:19.855Z · score: 2 (1 votes) · LW · GW

No, that's not what happens in Dr. Strangelove at all. In Dr. Strangelove, a rogue general exploits a contingency plan to issue a properly authenticated launch order, the bombers take off, and then, while they're on their way to their targets, the order is rescinded. However, one bomber, its radio damaged by a missile attack, fails to receive the recall. The President, realizing that this bomber did not receive the order to turn back, authorizes the Soviets to shoot down the plane. The Soviets, however, are unable to do so, as the bomber has diverted from its primary target and is heading towards a nearer secondary target. The bomber crew, following their orders to the letter, undertake heroic efforts to get their bomb operational and drop it, even though that means sacrificing their commander.

In a sense, Dr. Strangelove is the very opposite of what Stanislav Petrov did. Rather than save humanity by disobeying orders, the crew dooms humanity by following its orders.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-10-03T13:46:42.327Z · score: 0 (2 votes) · LW · GW

I also think it's weird that The Sequences, Thinking, Fast and Slow, and other rationalist works such as Good and Real all emphasize gathering data and trusting data over intuition, because human intuition is fallible, subject to bias, taken in by narratives, and so on. And then we're celebrating someone who did the opposite of all that and got away with it.

The steelman interpretation is that Petrov made a Bayesian assessment: he started with a prior under which a nuclear attack (and especially a nuclear attack with five missiles) was extremely unlikely, appropriately discounted the evidence from the satellite detection system because that system was new and therefore prone to false alarms, and found that the posterior probability of attack did not justify passing the warning on. However, this seems to me like a post-hoc justification of a decision that was made on intuition.
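
To make the steelman concrete, here's a toy Bayes calculation. Every number below is an assumption invented for illustration, not a historical estimate; the point is only the shape of the arithmetic: with a low enough prior and a non-trivial false-alarm rate, even a confident alarm leaves the posterior probability of attack tiny.

```python
# Toy Bayes calculation for the steelman above.
# All numbers are illustrative assumptions, not historical estimates.
prior_attack = 1e-5             # assumed prior probability of a US first strike that night
p_alarm_given_attack = 0.9      # assumed chance the satellites alarm during a real attack
p_alarm_given_no_attack = 0.01  # assumed false-alarm rate for a new, unproven system

# Bayes' rule: P(attack | alarm)
posterior = (p_alarm_given_attack * prior_attack) / (
    p_alarm_given_attack * prior_attack
    + p_alarm_given_no_attack * (1 - prior_attack)
)
print(f"P(attack | alarm) = {posterior:.4%}")  # ~0.09%: still overwhelmingly likely a false alarm
```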

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-10-03T13:33:28.153Z · score: 4 (2 votes) · LW · GW

Well, that's one of the questions I'm raising. I'm not sure we want to encourage more "heroic responsibility" with AI technologies. Do we want someone like Stanislav Petrov to decide, "No, the warnings are false, and the AI is safe after all," and release a potentially unfriendly general AI? I would much rather not have AI at all than have it in the hands of someone who decides without consultation that their instruments are lying to them and that they know the correct thing to do based upon their judgment and intuition alone.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-10-03T13:24:28.525Z · score: 2 (1 votes) · LW · GW

From https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident:

Shortly after midnight, the bunker's computers reported that one intercontinental ballistic missile was heading toward the Soviet Union from the United States. Petrov considered the detection a computer error, since a first-strike nuclear attack by the United States was likely to involve hundreds of simultaneous missile launches in order to disable any Soviet means of a counterattack. Furthermore, the satellite system's reliability had been questioned in the past.[12] Petrov dismissed the warning as a false alarm, though accounts of the event differ as to whether he notified his superiors[11] or not[8] after he concluded that the computer detections were false and that no missile had been launched. Petrov's suspicion that the warning system was malfunctioning was confirmed when no missile in fact arrived. Later, the computers identified four additional missiles in the air, all directed towards the Soviet Union. Petrov suspected that the computer system was malfunctioning again, despite having no direct means to confirm this.[13] The Soviet Union's land radar was incapable of detecting missiles beyond the horizon.[12]

The initial detection was one missile. Petrov dismissed this as a false alarm. Later, four more missiles were detected, and Petrov dismissed these as a false alarm as well. Other accounts combine the two sub-incidents and say that five missiles were detected.

I choose to focus on the first detection because that's when Petrov made the critical decision, in my mind, to not trust the satellite early warning network. The second detection of four missiles isn't as important to me, because at that point Petrov has already chosen to disregard warnings from the satellite network.

Comment by quanticle on Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists · 2019-09-29T03:50:08.394Z · score: 8 (4 votes) · LW · GW

We already know from history that that regimes may become so... self-serving and detached from reality, as one could put it... that they'll feel the need to actively select against smart, sincere idealists or any permutation thereof.

Coincidentally, I was reading this excellent article about the mindset behind Leninism, and I felt like this passage was particularly insightful:

In his history of Marxism, Kołakowski explains some puzzling aspects of Bolshevik practice in these terms. Everyone understands why Bolsheviks shot liberals, socialist revolutionaries, Mensheviks, and Trotskyites. But what, he asks, was the point of turning the same fury on the Party itself, especially on its most loyal Stalinists, who accepted Leninist-Stalinist ideology without question? Kołakowski observes that it is precisely the loyalty to the ideology that was the problem. Anyone who believed in the ideology might question the leader’s conformity to it. He might recognize that the Marxist-Leninist Party was acting against Marxism-Leninism as the Party itself defined it; or he might compare Stalin’s statements today with Stalin’s statements yesterday. 'The citizen belongs to the state and must have no other loyalty, not even to the state ideology,' Kołakowski observes.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-28T07:24:02.798Z · score: 33 (9 votes) · LW · GW

If you will not honor literally saving the world, what will you honor?

I find it extremely troubling that we're honoring someone defecting against their side in a matter as serious as global nuclear war, merely because in this case, the outcome happened to be good.

(but deterrence would have helped no one if he had launched)

That is exactly the crux of my disagreement. We act as if there were a direct lever between Petrov and the keys and buttons that launch a retaliatory counterstrike. But there wasn't. There were other people in the chain of command. There were other sensors. Is it really so difficult to believe that the Soviets would have attempted to verify Petrov's claim before retaliating? That there would have been practiced procedures to carry out this verification? From what I've read of the Soviet Union, their systems of positive control were far ahead of the United States', as a result of the much lower level of trust the Soviet Politburo had in their military. I find it exceedingly unlikely that the Soviets would have launched without conducting at least some kind of verification with a secondary system. They knew the consequences of nuclear attack just as well as we did.

In that context, Petrov's actions are far less justifiable. He threw away all of the procedures and training that he had... for a hunch. While everything did turn out okay in this instance, it's certainly not a mode of behavior I'd want to see established as a precedent. As I said above: Petrov's actions were just as unilateralist as the people releasing the GPT-2 models, and I find it discomfiting that a holiday opposing that sort of unilateral action is named after someone who, arguably, was maximally unilateralist in his thinking.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-28T02:32:20.129Z · score: 3 (2 votes) · LW · GW

Indeed, Eliezer has written extensively about this very phenomenon. No argument is universally compelling -- there is no sequence of propositions so self-evident that it will cause our opponents to either agree or spontaneously combust.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T05:24:25.385Z · score: 9 (3 votes) · LW · GW

In 1983, Moscow was protected by the A-35 anti-ballistic missile system. This system was (in theory) capable of stopping either a single ICBM or six Pershing II IRBMs from West Germany. The threat that Petrov's computers reported was a single ICBM, coming from the United States. If the threat had been real, Petrov's actions would have prevented the timely activation of the ABM system, preventing the Soviets from even trying to shoot down the incoming nuke.

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T05:16:31.095Z · score: 24 (12 votes) · LW · GW

Petrov's choice was obviously the correct one in hindsight. What I'm questioning is whether Petrov's choice was obviously correct in foresight. The rationality community takes as a given Petrov's assertion that it was obviously silly for the United States to attack the Soviet Union with a single ICBM. Was that actually as silly as Petrov suggested? There were scenarios in which small numbers of ICBMs might be launched in a surprise attack against an unsuspecting adversary in order to kill leadership and disrupt command and control systems. How confident was Petrov that this was not one of those scenarios?

Another assumption that the community makes is that Petrov choosing to report the detection would have immediately resulted in a nuclear "counterattack" by the Soviet Union. But Petrov was not a launch authority. The decision to launch or not was not up to him, it was up to the Politburo of the Soviet Union. We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn't know for certain that it was one, Petrov was defecting against the system. He was deliberately feeding false data to his superiors, betting that his model of the world was more accurate than his commanders'. Is that the sort of behavior we really want to lionize?

Comment by quanticle on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T03:22:44.543Z · score: 44 (22 votes) · LW · GW

In a world where dangerous technology is widely available, the greatest risk is unilateralist action.

What Stanislav Petrov did was just as unilateralist as any of the examples linked in the OP. We must remember that when he chose to disregard the missile alert (based on his own intuition regarding the geopolitics of the world), he was violating direct orders. Yes, in this case everything turned out great, but let's think about the counterfactual scenario where the missile attack had been real. Stanislav Petrov would potentially have been on the hook for more deaths than Hitler caused, as well as the utter destruction of his nation.

A unilateral choice not to act is as much of a unilateral choice as a unilateral choice to act.

Comment by quanticle on If you had to pick one thing you've read that changed the course of your life, what would it be? · 2019-09-16T06:50:02.981Z · score: 3 (2 votes) · LW · GW

It's tempting to reply with a book of philosophy or mathematics or fiction here. However, if I look at actual impact on my life, the book that did the most to change my future was The C Programming Language, by Brian Kernighan and Dennis Ritchie. If it hadn't been for that book, it's entirely possible that I'd never have gotten into computers or programming (I'd probably have become a doctor instead).

Comment by quanticle on The Real Rules Have No Exceptions · 2019-07-24T02:42:01.179Z · score: 6 (3 votes) · LW · GW

That's something you see in movies, yes, but as I understand what Paul Scharre is saying, it's not something that's actually true. According to him, the laws of war "care about what you do, not who you are." If you are behaving in a soldierly fashion, you are a soldier, whether you are a young man, old man, woman, or child.

Comment by quanticle on The Real Rules Have No Exceptions · 2019-07-23T14:18:23.656Z · score: 30 (9 votes) · LW · GW

Paul Scharre, in his excellent book about the application of AI to military technology, Army of None, has an anecdote which I think is relevant. In the book, he talks about leading a patrol up an Afghan hillside. As he and the troops under his command ascend the hillside, they're spotted by a farmer. Realizing that they've been spotted, the patrol hunkers down and awaits the inevitable attack by Afghan insurgent forces. However, before the attackers arrive, something unexpected happens. A little girl, about 5 or 6 years of age, comes up to the position, with some goats and a radio. She reports the details of the Americans' deployment to the attacking insurgents and departs. Shortly thereafter, the insurgent attack begins in earnest, and results in the Americans being driven off the hillside.

After the failed patrol, Scharre's troop held an after-action briefing where they discussed what they might have done differently. Among the things they discussed was potentially detaining the little girl, or at least relieving her of her radio so as to limit the information being passed back to the attackers. However, at no point did anyone suggest the alternative of shooting the girl, even though they would have been perfectly justified, under the laws of war and rules of engagement, in doing so. Under the laws of war, anyone who acts like a soldier is a soldier, and this includes 5-year-old girls conducting reconnaissance for insurgents. However, everyone understood, on a visceral level, that there is a difference between permissible and correct, and the option of shooting the girl, while permissible, was morally abhorrent to the point where it was discarded at an unconscious level.

That said, no one in the troop said, "Okay, well, we need to amend our rules of engagement to say, 'Shooting at people conducting reconnaissance is permissible... except when the person is a cute little 5-year-old girl.'" Everyone recognized, again at an unconscious level, that there was value to having a legible rule ("Shooting at people behaving in a soldierly manner is acceptable") with illegible exceptions ("Except when that person is a 5-year-old girl leading goats"). The drafters of rules cannot anticipate every circumstance in which a rule might be applied, and thus having some leeway about the specific obligations (while making the intent of the rule clear) is valuable insofar as it allows people to take actions without being paralyzed by doubt. This applies as much to rules governing an organization as it does to rules that you make for yourself.

The application to AI is, I hope, obvious. (Unfriendly) AIs don't make a distinction between permissible and correct. Anything that is permissible is an option that can be taken, if it furthers the AI's objective. Given that, I would summarize your point about having illegible exceptions as, "You are not an unfriendly AI. Don't act like one."

Comment by quanticle on What questions about the future would influence people’s actions today if they were informed by a prediction market? · 2019-07-21T06:40:35.818Z · score: 1 (2 votes) · LW · GW

Climate

  • What is the probability that the earth's average temperature will rise less than 2°C from 1990s averages?
  • What is the probability of increased droughts/storms/wildfire/etc. in my location?
  • What is the probability of food shortages in my country over the next 10 years?

The answers to these questions would inform whether one takes greater or lesser preparations to deal with a changing climate. If you know climate change will affect the area in which you live, and have a decent prediction for the nature and magnitude of said changes, then you can prepare. As it is, the messaging around climate change is largely, "We know things are going to change! But we don't really know how or when! Be afraid!"

Politics

  • What is the probability of a nuclear exchange within the next 10 years?
  • What is the probability that the US will activate selective service for an armed conflict in the next 10 years?
  • What is the probability of <regulation X> affecting <industry Y> in the next 10 years?

Even a limited nuclear exchange would have widespread economic and social consequences. If there's a high probability of such in the near future, it might be a sign to move one's assets into more liquid forms. Selective service is a US-specific question, but, depending on how fiercely one opposes forced military service, it might be useful to know if there was a decent chance of getting drafted to fight in an armed conflict.

AI

  • What is the probability of a human level AI developing within the next 10 years?
  • What is the probability that a human-level AI will be able to recursively self-improve without outside assistance?
  • What is the probability of a whole-brain emulation within the next 10 years?
  • What is the probability that, if a recursively improving human-level AI is developed in the next 10 years, the AI is friendly?

The AI questions will inform one's views on giving to AI friendliness organizations, as well as one's views on the probability of, and time until, a singularity.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-20T00:01:32.540Z · score: 14 (5 votes) · LW · GW

I don't know if this counts as evidence, per se, but DeLong, Shleifer, Summers and Waldmann had a fairly seminal paper on this in 1987: The Economic Consequences of Noise Traders. In it, they explain how the addition of "noise traders" (i.e. traders who trade randomly) can make financial markets less efficient. Conventional economic theory, at the time, held that the presence of noise traders didn't reduce the efficiency of the market, because rational investors would be able to profit off the noise traders and prices would still converge to their true value.

In the paper, DeLong et al. demonstrate that it's possible for noise traders to earn higher returns than rational investors and, in the process, significantly affect asset prices. Key to their insight is that, in the real world, investors have limited amounts of capital, and the presence of noise traders significantly raises the amount of risk that rational investors have to take on in order to invest in markets with large numbers of noise traders. This risk causes potentially wide, but not permanent, divergences between asset prices and fundamental values, which can serve to drive rational investors from the market.

I don't see any reason to believe that prediction markets would behave differently from the stock markets that DeLong et al.'s paper targeted. My hypothesis would be that prediction markets have shown increasing accuracy with increasing participation so far, but that the relationship will break down once the relatively limited pool of people who are willing to think before they trade is exhausted and further increases in participation draw from a pool of noise traders.
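
To make the mechanism vivid, here's a deliberately crude toy simulation (a sketch of the intuition, not the DeLong et al. model): price gets pushed around by random noise-trader demand and only weakly pulled back toward a fixed fundamental value. The weights and horizon are arbitrary assumptions.

```python
# A minimal sketch (not the DeLong et al. model itself) of how random
# noise-trader demand can push prices away from a fixed fundamental value.
import random

random.seed(0)
fundamental = 100.0
price = fundamental
noise_weight = 0.8  # assumed fraction of demand coming from noise traders

prices = []
for day in range(250):
    noise_demand = random.gauss(0, 1)              # noise traders buy/sell at random
    rational_demand = (fundamental - price) * 0.1  # rational traders push price toward value
    price += noise_weight * noise_demand + (1 - noise_weight) * rational_demand
    prices.append(price)

max_gap = max(abs(p - fundamental) for p in prices)
print(f"max divergence from fundamental value: {max_gap:.2f}")
# Wide (if temporary) gaps like this are exactly the risk that can force a
# leveraged rational investor out of the market before prices mean-revert.
```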

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T23:53:31.132Z · score: 2 (1 votes) · LW · GW

I endorse clone of saturn's reply elsewhere in the thread. I didn't often go into the discussion section, so I thought that there were fewer active users, when in reality it could very well have been fewer active users posting in the Main section.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T14:04:00.616Z · score: 11 (2 votes) · LW · GW

It's not so much that the process had a massive amount of risk as that it implemented a Taleb-style anti-fragile strategy. It lost money in dribs and drabs every year while times were good, but when times turned bad, it made a massive amount of money. According to The Big Short, Burry was paying out premiums on CDO insurance every year while times were good, and got the insurance payout when the market turned and things went bad. So, for three or four years, he was invested in these really weird securities, securities that his investors hadn't signed up for, securities that were losing money, while they waited for a payout.

As for why they wouldn't be interested in risking a smaller fraction of their money: the strategy only works if you have enough buffer to wait out the good years and capitalize on the inevitable downturn when it happens. We've seen this with Taleb himself. While he did well in the dotcom crash and the global financial crisis, he's had basically negative returns since.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T13:09:27.811Z · score: 13 (3 votes) · LW · GW

Many companies have their culture decline as they hire more, and have to spend an incredible amount of resources simply to prevent this (which is far from getting better as more people join). (E.g. big tech companies can probably have >=5 candidates spend >=10 hours in interviews for a single position. And that’s not counting the probably >=50 candidates for that position spending >=1h.)

Is the super-elaborate hiring game really necessary, though? I've worked at Amazon and Microsoft. I've also worked at other firms with much looser hiring practices. In my experience, the elaborate hiring games that these tech companies play are more about signalling to candidates: "We are a Very Serious Technology Company who use Only The Latest Modern Hiring Practices™." It seems quite possible to me that these hiring practices could be considerably streamlined without actually affecting the quality of the candidates that got through. But, if they did that, then the hiring process would lose some of its signalling value, and the company wouldn't be seen as a Super Prestigious Institution™ which accepts Only The Best™.

tl;dr: In my view, the FAAMNG hiring process works the same way as the Harvard application process. It's as much about advertising and signalling to candidates that the company is an elite institution as it is about actually hiring elite candidates.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T13:00:44.332Z · score: 11 (2 votes) · LW · GW

The problem was how he made those billions of dollars. Burry's initial investment thesis was stocks. When he pitched his fund to investors, it was a stock fund. Then, when Burry found that there was no way to short the housing market through the stock market, he branched out into the sorts of exotic collateralized debt obligations that would make him his profits.

From the perspective of his investors (a perspective I personally agree with), Burry was a loose cannon. The only reason he made a bunch of money, instead of going down with every penny that his investors entrusted him with, is that he managed to get lucky. Ask yourself: what would have happened to Burry's fund if the housing market hadn't cratered in 2007-2008? What if the housing market rally had gone on for another five or six years?

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T05:40:04.950Z · score: 9 (4 votes) · LW · GW

Online forums usually decline with growing user numbers (this happened to Reddit, HackerNews, as well as LessWrong 1.0)

Reddit and HackerNews, sure, but was the decline of LessWrong really due to growing user numbers? From what I've seen and read of LessWrong history, the decline was due to a reduction in post volume, rather than post quality, which seems to me to have been a symptom of stagnating or shrinking active user numbers. Simply put, fewer people posting → fewer reasons to check the site → fewer comments → stagnation and death.

Comment by quanticle on When does adding more people reliably make a system better? · 2019-07-19T05:30:47.053Z · score: 14 (4 votes) · LW · GW

But it’s still the case that a system in a bad equilibrium with deeply immoral consequences rewarded the outcasts who pointed out those consequences with billions of dollars.

That's not exactly true. There were outcasts who correctly pointed out that the housing market was deeply troubled in e.g. 2004 and 2005. Did the market reward them? No. They went bust, as the market proved to be capable of staying irrational longer than they were capable of remaining solvent. Even in The Big Short, Michael Burry very nearly went bust, and had to resort to exercising fail-safe clauses in his investment contracts to stay afloat. The exercise of those clauses, and the resulting rancor with his investors, meant that even though he "won" and got a fair chunk of a billion-dollar payout, he was basically frozen out of investing afterwards.

Comment by quanticle on The AI Timelines Scam · 2019-07-18T19:25:53.769Z · score: 8 (3 votes) · LW · GW

I don’t actually know the extent to which Bernie Madoff actually was conscious that he was lying to people. What I do know is that he ran a pyramid scheme. The dynamics happen regardless of how conscious they are. (In fact, they often work through keeping things unconscious)

Bernie Madoff pleaded guilty to running a Ponzi scheme. As part of his guilty plea, he admitted that he had stopped trading in the 1990s and had been paying returns out of capital since then.

I think this is an important point to make, since the implicit lesson I'm reading here is that there's no difference between giving false information intentionally ("lying") and giving false information unintentionally ("being wrong"). I would caution that that is a dangerous road to go down, as it just leads to people being silent. I would much rather receive optimistic estimates from AI advocates than receive no estimates at all. I can correct for systematic biases in data. I cannot correct for the absence of data.

Comment by quanticle on What is your Personal Knowledge Management system? · 2019-07-18T16:50:08.807Z · score: 4 (2 votes) · LW · GW

I'll repeat the endorsements of org-mode, and add some links to specific org-mode features that I use.

  • I use org-capture to capture to-do items. I've found nothing quite as streamlined as being able to hit C-c c t and just type what I need to do.
  • I use org-agenda to create daily to-do lists for myself. When I org-capture a new to-do item, I will add a deadline (C-c C-d) to indicate when I want to do the task.
  • I use org-publish to write my website. This is nice because it's very immediate for me to write, and I can hit C-c C-e P x to publish a new version of my website.
  • For other org-mode tips, you can look at my emacs tips page, which has my emacs configuration documented (in a somewhat haphazard and idiosyncratic fashion).

I've tried other PIM tools, like vimwiki, workflowy, DynaList and Dropbox Paper, and I've found that, of all of them, org-mode offers the right mix of customizability and immediacy for me. That said, I'll be the first to admit that org-mode doesn't have the easiest learning curve, and its support for mobile devices is pretty much trash. (People have recommended orgzly, but, honestly, orgzly's UI is pretty terrible.) What I do when I'm out and about is capture notes in Google Keep and then copy those notes over into org-mode when I get back to my computer.

Comment by quanticle on Self-consciousness wants to make everything about itself · 2019-07-04T01:04:21.620Z · score: 4 (2 votes) · LW · GW

I'm not sure there's a difference. Either you're asking up front ("Hey, do you mind if I set this timer to auto-publish in a week?") or you're asking later ("Hey, we just discussed something that I think would be of interest, do you mind if I publish it?").

In fact, I think asking after the fact might be easier, because you can point to specific things that were discussed and say, "I'm going to excerpt <x>, <y>, and <z>. Is that okay?"

Comment by quanticle on Self-consciousness wants to make everything about itself · 2019-07-03T21:47:43.624Z · score: 4 (2 votes) · LW · GW

This line of thought caused me to think that it might be quite valuable to have some kind of "conversation escrow" that allows people to have [a] conversation in private that still reliably gets published. As an example, you could imagine a feature on LessWrong to invite someone to a private comment-thread. The whole thread happens in private, and is scheduled to be published a week after the last comment in the thread was made, unless any of the participants presses a veto button...

I'm not sure I understand either the problem or the proposed solution. If there's a veto button, it's not reliable publishing, is it? How is this any better or different than having a private exchange via e-mail, Slack, Discord, etc, and then asking the other person, "Do you mind if I publish an excerpt?"

More generally, I'm not sure what kind of problem this tool would solve. Can you name some kinds of conversations that this tool would be used for?

Comment by quanticle on Circle Games · 2019-06-09T18:36:58.805Z · score: 16 (5 votes) · LW · GW

The baby’s fascination with circle games completely belies the popular notion that drill is an intrinsically unpleasant way to learn. Repetition isn’t boring to babies who are in the process of mastering a skill. They beg for repetition.

I disagree with the implication there that drill is repetition. Drill, to me, is repetition with predictable results. If I'm doing the same thing over and over again, and I'm getting exactly what I expect each time, that's a drill. The sort of entertaining repetition you're pointing at here is something where I don't necessarily know what to expect every time I take an action.

A good contrast is painting a deck versus playing a slot machine. They're both extremely repetitive actions. Heck, even the physical movements in each are similar (if anything, deck painting involves less repetitive movement than playing a slot machine). Yet, we see people getting addicted to playing slot machines. I've never heard of anyone getting addicted to deck painting. The difference is that deck painting is pretty predictable. Dip the brush in paint, apply the paint to the deck, and there's paint on the deck, exactly as you'd expect. A slot machine, on the other hand, is geared toward unpredictability. You pull the lever, and you don't know what's going to happen when the reels stop. Will you get the jackpot? A lesser prize? Nothing at all? The sorts of circle games that babies enjoy are closer (from the perspective of the baby) to a slot machine than to deck painting.

For example, let's look at the Jack in the Box. It's predictable and boring to an adult. An adult (or even an older child) will pretty quickly catch on to the pattern that the box pops open after a certain number of turns, or on a particular musical note ("Pop goes the weasel," etc.). However, to a child, especially a child that's still grappling with the concept of cause and effect, a Jack in the Box is endlessly fascinating. Here's a mechanism, and when I manipulate the mechanism, something seemingly entirely unrelated happens?! How? Why?

Peek-a-boo is similar. Yes, the child might know that you're still there. But I'm willing to bet that you don't make exactly the same expression when you open your hands and reveal your face each and every time. It's the variety of facial expressions, and the effort required to predict them, that provides the unpredictability that transforms peek-a-boo from a drill into a game.

Comment by quanticle on Is AI safety doomed in the long term? · 2019-05-26T03:20:30.160Z · score: 3 (2 votes) · LW · GW

On the basis that humans determine the fate of other species on the planet

Do they? There are many species that we would like to control or eliminate, but have not been able to. Yes, we can eliminate certain highly charismatic species (or bring them back from the brink of extinction, as needs be), but I wouldn't generalize that to humans being able to control species in general. If we had that level of control, the problem of "invasive" species would be trivially solved.

Comment by quanticle on Which scientific discovery was most ahead of its time? · 2019-05-17T05:44:47.787Z · score: 6 (3 votes) · LW · GW

What about Mendelian Inheritance? It was initially discovered by Gregor Mendel in 1865, but it was seen as being a very narrow special case of genetics until about 1900, when de Vries, Correns and von Tschermak "rediscovered" his work. So that's about 35 years during which the statistical laws of inheritance were published, but weren't being used or built upon.

Comment by quanticle on Probability interpretations: Examples · 2019-05-14T03:25:08.877Z · score: 2 (1 votes) · LW · GW

You seem to be saying that "external shared reality" is an approximation in the same way that Newtonian mechanics is an approximation for Einsteinian relativity. That's fine. So what is "external shared reality" an approximation of? Just what exactly is out there generating inputs to my senses, and by what mechanism does it remain in sync with everyone else (approximately)?

Comment by quanticle on Probability interpretations: Examples · 2019-05-13T14:19:54.254Z · score: 2 (1 votes) · LW · GW

The examples you use reinforce my point. We argue about extremely fine details. When supporters of opposing teams argue over whether a point was or was not scored, they're disputing whether the ball was here or there by a few millimeters. You won't find very many people arguing that, actually, the ball was all the way over on the other side of the field, and that in reality the disputed point is one that would have been scored by the other team.

Similarly, we might argue about whether the British, Americans or Russians were primarily responsible for the United Nations' victory in World War 2, but I don't think you'll find very many people arguing that actually it was the Italians who won World War 2.

The fact that our perceptions of reality match each other 99.999% of the time, to me, indicates that there's something out there that exists regardless of whether I perceive it or not. I call that "reality".

Comment by quanticle on Probability interpretations: Examples · 2019-05-12T22:53:49.447Z · score: 3 (2 votes) · LW · GW

Human and animal brains do complicated calculations all the time in real time to get through life, like solving what amounts to non-linear partial differential equations to even get a bite of food into your mouth. Just because it is subconscious, it is no less of a math than proving theorems.

I agree. So if there is no "objective" reality, apart from that which we experience, then why is it that we all seem to experience the same reality? When I shoot a basketball, or hit a tennis ball, both I and the referee see the same trajectory and are in approximate agreement about where the ball lands. When I lift a piece of food to my mouth and eat it, it would surprise me if someone across the table said that they saw it spill from my fork and stain my shirt.

In the absence of an external reality, why is it that everyone's model of the world appears to be in such concordance with everyone else's?

Comment by quanticle on Type-safeness in Shell · 2019-05-12T19:06:21.121Z · score: 14 (6 votes) · LW · GW

PowerShell does a lot of this, doesn't it? PowerShell abandons the concept of programs transferring data as text, and instead has them transferring serialized .NET objects (with type annotations) back and forth. It doesn't extend to the filesystem, but it's entirely possible to write functions that enforce type guarantees on their input (i.e. requiring numbers, strings, or even more complicated data types, like JSON).

A good example is searching with regexps. In Unix, grep returns a bunch of strings (namely the lines which match the specified regex). In PowerShell, Select-String returns match objects, which have fields containing the file that matched, the line number that matched, the matching line itself, capture groups, etc. It's a much richer way of passing data around than delimited text.
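
For anyone who hasn't used PowerShell, here's a rough Python analogy of the difference. (The MatchInfo class below is my own simplified stand-in, loosely modeled on what Select-String returns; it is not PowerShell's actual type.)

```python
# grep hands back flat strings; a structured search hands back objects
# that downstream code can query by field instead of re-parsing text.
import re
from dataclasses import dataclass

@dataclass
class MatchInfo:  # simplified stand-in for PowerShell's MatchInfo
    path: str
    line_number: int
    line: str
    groups: tuple

def select_string(path: str, pattern: str):
    """Yield structured match objects instead of bare matching lines."""
    regex = re.compile(pattern)
    with open(path) as f:
        for i, line in enumerate(f, start=1):
            m = regex.search(line)
            if m:
                yield MatchInfo(path, i, line.rstrip("\n"), m.groups())

# Downstream code filters on fields rather than re-parsing text, e.g.:
# for hit in select_string("app.log", r"ERROR (\d+)"):
#     print(hit.line_number, hit.groups[0])
```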

Comment by quanticle on Probability interpretations: Examples · 2019-05-12T17:58:55.036Z · score: 6 (3 votes) · LW · GW

I still don't think you've answered Said's question. The question is whether two people can observe different values of pi. Or, to put it differently, why is it that, whenever anyone computes a value of pi, it seems to come out to the same value (3.14159...). Doesn't that indicate that there is some kind of objective reality, to which our mathematics corresponds?

One of the questions that Wigner brings up in The Unreasonable Effectiveness of Mathematics in the Natural Sciences is why does our math work so well at predicting the future? I would put the same question to you, but in a more general form. If there is no such thing as non-experienced mathematical truths, then why does everyone's experience of mathematical truths seem to be the same?

Comment by quanticle on Why books don't work · 2019-05-12T04:20:20.063Z · score: 4 (2 votes) · LW · GW

If wishes were horses, all men would ride.

More seriously, I would love for there to be a better way to learn than books, but in practice, books inhabit a sweet spot at the intersection of information density, ease of searching, and portability that's hard for other forms of media to match.

Comment by quanticle on Why books don't work · 2019-05-11T21:39:37.325Z · score: 8 (4 votes) · LW · GW

The author seems to spend almost no time engaging with or thinking critically about the books that he's read, and then claims that "books don't work". Has the author tried writing an outline? Or writing a review?

Simply reading a book and letting its contents wash over you won't magically make you retain the contents of that book. There is no royal road to knowledge. One has to engage with a book in order to retain not just the conclusions of the book, but also the reasoning that led to those conclusions.

Comment by quanticle on [deleted post] 2019-05-11T20:26:43.527Z

From the post:

A lot of people like to use the prisoner's dilemma to justify being shitty to other humans they personally know.

Can you post a specific example of someone using the prisoner's dilemma to justify being shitty to someone they personally know? One of the preconditions of the prisoner's dilemma is that the prisoners don't know one another very well (otherwise, they'd have come up with some kind of prearranged strategy). You see this with real prisoners and real gangs: gangs often form along family and social lines, precisely because you can rely on your brother or a childhood friend when you've both been arrested, in a way that you can't with a relative stranger.

Comment by quanticle on [deleted post] 2019-05-11T20:21:47.712Z

Here's the post text. I was able to copy/paste it from Facebook, through some combination of running Linux, running Firefox, having AdBlock and not being logged in to Facebook:

A lot of people like to use the prisoner's dilemma to justify being shitty to other humans they personally know. I think this is a dumb analogy, at least for First World, relatively affluent adult life, because the original PD doesn't let you say "Hey, wait a minute, my game partner is an asshole who keeps defecting, I'm out of here".

That's why creating avenues of escape for people who don't have that luxury is so important to me. Entrapment is one of the most insidious parts of abuse, because of course you're going to come up with a counter-asshole strategy if you have to keep playing with an asshole. But most of those strategies have a cost of emotional guardedness and alienation that can make it tough to get along with genuinely good and nice people.

A better lesson from the PD? The winning strategy is always, always, always dependent on the strategies of your other players. It's really hard to juggle many wildly different strategies at once. Keep it simple. Pick a strategy that synchronizes well with itself, go and find people running the same strategy, and make that your people.

Comment by quanticle on Nash equilibriums can be arbitrarily bad · 2019-05-02T03:51:11.161Z · score: 4 (2 votes) · LW · GW

I see. And from then it follows the same pattern as a dollar auction, until the "winning" bet goes to zero.

Comment by quanticle on Nash equilibriums can be arbitrarily bad · 2019-05-01T16:21:31.585Z · score: 7 (4 votes) · LW · GW

Doesn't the existence of the rule that says that no money changes hands if there's a tie alter the incentives? If we both state that we want 1,000,000 pounds, then we both get it and we both walk away happy. What incentive is there for either of the two agents to name a value that is lower than 1,000,000?

Comment by quanticle on Pecking Order and Flight Leadership · 2019-04-29T21:37:37.027Z · score: 1 (2 votes) · LW · GW

Do people hate prophets, or hate being prophets?

The former. Being a prophet is great! You've achieved enlightenment! All you're doing is trying to share the good word of your revelations with the rest of humanity. Here are all of these people, living lives of immense suffering, and you have the solution. You can bring them peace. You can ease the torment of their souls! Even a cold-blooded utilitarian can see that a 5% reduction in suffering, multiplied across several hundred million people, represents a substantial gain in overall utility. And if a bit of force needs to be applied in order to get people to see the Good Word, then that is justified, is it not?

Comment by quanticle on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-07T03:17:32.223Z · score: 3 (2 votes) · LW · GW

And “knowing thyself” is especially important.

Why? If you took a test, and it came back telling you that you had an IQ of 140, what about your day-to-day life would change? Likewise, what would you do different if you took a test and it came back telling you that you had an IQ of 90?

Comment by quanticle on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-07T03:15:30.509Z · score: 1 (3 votes) · LW · GW

What would you gain from knowing your own IQ?

As far as I can tell, knowing my own IQ is a no-win scenario. Either my IQ is higher than I expect it to be, in which case I'd feel like a disappointment, or it's lower than I expect it to be, in which case I'd feel like a fraud. I wouldn't gain any actionable data from it, so why bother?

Comment by quanticle on Will superintelligent AI be immortal? · 2019-03-31T00:38:32.541Z · score: 3 (2 votes) · LW · GW

If the system is float in the vacuum heat wont go out.

The heat won't escape by conduction, nor will it escape by convection. However, it will escape via radiation.

Comment by quanticle on General v. Specific Planning · 2019-03-28T18:59:09.504Z · score: 3 (2 votes) · LW · GW

On the other hand, AlphaZero seems to play to obtain a specific, although gradually accumulated, positional advantage that ultimately results in a resounding victory. It is happy to sacrifice “generally useful” material to get this.

AlphaZero plays chess in a manner that is completely unlike how humans, or even human-designed chess programs, play chess. A human grandmaster does play much like you describe yourself playing: accumulating material advantages, and only making limited sacrifices to gain position when it's clear that the positional advantage outweighs the material disadvantage.

AlphaZero, on the other hand, plays much more positionally. In its games against Stockfish, it would make sacrifices that Stockfish thought were crazy, because Stockfish was evaluating the board based on material while AlphaZero was evaluating the board based on position.

Comment by quanticle on Do you like bullet points? · 2019-03-27T02:15:48.044Z · score: 19 (5 votes) · LW · GW

I find that bullet points lose out on the ability to include story type data that my system 1 responds to.

That's an advantage, in my opinion. I have a habit of turning articles into bullet point summaries, and I've found that the more difficult something is to turn into a bullet-point summary, the less actual content there is in the article. Ease of transformation into bullet points is a quick, yet remarkably effective heuristic to distinguish insight from insight porn.

Comment by quanticle on The tech left behind · 2019-03-16T04:12:40.838Z · score: 3 (2 votes) · LW · GW

In the same vein as OpenDoc, XMPP and RSS both come to mind. While they "saw the light of day", they never seemed to reach the threshold of popularity necessary for long-term survival, and they're not well supported any more. I would argue that they're both good examples of "left-behind" tech.

Comment by quanticle on The tech left behind · 2019-03-16T04:05:14.523Z · score: 5 (2 votes) · LW · GW

I would argue that spaced repetition is one such technology. We've known about forgetting curves and spaced repetition as a way of efficiently memorizing information since at least the 1960s, if not before. Yet, even today, it's hardly used, and if you talk to the average person about spaced repetition, they won't have a clue what you're referring to.

Here we have a really cool technology, which could significantly improve how we learn new information, and it's being used maybe 5% as often as it should be.
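
The core scheduling idea is simple enough to sketch in a few lines of Python. This is a simplified, hypothetical scheduler (real algorithms such as SM-2 also track a per-item "ease" factor), but it shows the expanding-interval mechanic that makes spaced repetition efficient:

```python
# Minimal spaced-repetition sketch: review intervals grow when you
# recall an item and reset when you forget it.
from datetime import date, timedelta

def next_interval(previous_days: int, recalled: bool) -> int:
    if not recalled:
        return 1  # forgot: start over and review again tomorrow
    return max(1, round(previous_days * 2.5))  # recalled: expand the gap

interval = 1
review_day = date.today()
for recalled in [True, True, True, False, True]:
    review_day += timedelta(days=interval)
    interval = next_interval(interval, recalled)
    print(review_day, "-> next review in", interval, "days")
```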

Comment by quanticle on The tech left behind · 2019-03-16T03:55:40.411Z · score: 2 (1 votes) · LW · GW

Could you clarify how Damascus Steel qualifies? As I understand it, the question is asking about technologies which demonstrated promise, but never reached widespread use, and thus languished in obscurity. Damascus Steel was famous and highly prized in medieval Europe. While it was rare and expensive, I'm not sure that it manages to meet the obscurity criterion.