Posts

Economic policy for artificial intelligence 2018-08-08T07:17:41.327Z
"Artificial Intelligence" (new entry at Stanford Encyclopedia of Philosophy) 2018-07-19T09:48:59.843Z
AI and the paperclip problem (or: Economist solves control problem with one weird trick!) 2018-06-11T02:19:15.886Z
Paper: "A simple function of one parameter (θ) can fit any collection of ordered pairs {Xi,Yi} to arbitrary precision"--implications for Occam? 2018-05-31T03:23:53.718Z
LINK: Where Humans Meet Machines: Intuition, Expertise and Learning (Brynjolfsson talks to Kahneman) 2018-05-21T02:50:11.564Z
[LINK] How to write a dominant assurance contract on the Ethereum blockchain 2018-05-04T03:44:20.458Z
Artificial intelligence and the stability of markets 2017-11-15T02:17:12.401Z
Economics of AI conference from NBER 2017-09-27T01:45:49.392Z
Stanislav Petrov has died (2017-05-19) 2017-09-18T03:13:16.284Z
The new spring of artificial intelligence: A few early economics 2017-08-21T02:06:07.040Z
China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems 2017-08-10T01:54:36.808Z
Lloyd's of London and less-than-catastrophic risk 2017-06-14T02:49:58.254Z
From Jonathan Haidt: A site that lets you "experience different viewpoints" 2017-05-03T16:01:08.318Z
Boundedly Rational Patients? Health and Patient Mistakes in a Behavioral Framework 2017-04-28T01:01:38.210Z
Review of "The Undoing Project: A Friendship That Changed Our Minds," Michael Lewis' book about Kahneman & Tversky 2017-04-19T05:13:22.656Z
[Link] Review of "Doing Good Better" 2015-09-26T07:58:17.440Z
[LINK] Amanda Knox exonerated 2015-03-28T06:15:23.789Z
[Link] How to see into the future (Financial Times) 2014-09-07T06:04:04.826Z
LINK: "This novel epigenetic clock can be used to address a host of questions in developmental biology, cancer and aging research." 2013-10-22T07:59:30.855Z
[link] Betting on bad futures 2012-09-22T15:50:22.934Z
[link] One-question survey from Robin Hanson 2012-09-07T23:35:14.646Z
Robot ethics [link] 2012-06-01T15:43:29.295Z
[link] New Scientist, on the distant future 2012-03-07T10:15:44.297Z
[link] Admitting errors (in meteorology) 2011-12-16T17:21:27.081Z
Russ Roberts and Gary Taubes on confirmation bias [podcast] 2011-12-04T05:51:19.123Z
Another "Oops" moment [link] 2011-10-04T10:59:51.107Z
Religion, happiness, and Bayes 2011-10-04T10:21:25.771Z
Akrasia as a collective action problem 2010-12-07T15:44:56.626Z
Does cognitive therapy encourage bias? 2010-11-22T11:31:53.303Z

Comments

Comment by fortyeridania on Sam Harris and the Is–Ought Gap · 2018-11-16T03:31:22.011Z · LW · GW

Two points:

First, can you clarify what you mean by rational persuasion, if you are distinguishing it from logical proof? Do you mean that we can skip arguing for some premises because we can rely on our intuition to identify them as already shared? Or do you mean that we need not aim for deductive certainty--a lower confidence level is acceptable? Or something else?

Second, I appreciate this post because what Harris's disagreements with others so often need is exactly dissolution. And you've accurately described Harris's project: He is trying to persuade an ideal listener of moral claims (e.g., it's good to help people live happy and fulfilling lives), rather than trying to prove the truth of these claims from non-moral axioms.

Some elaboration on what Harris is doing, in my view:

  • Construct a hellish state of affairs (e.g., everyone suffering for all eternity to no redeeming purpose).
  • Construct a second state of affairs that is not so hellish (e.g., everyone happy and virtuous).
  • Call on the interlocutor to admit that the first situation is bad, and that the second situation is better.
  • Conclude that the interlocutor has admitted the truth of moral claims, even though Harris himself never explicitly said anything moral.

But by adding notions like "to no redeeming purpose" and "virtuous," Harris is smuggling oughts into the universes he describes. (He has to do this in order to block the interlocutor from saying "I don't admit the first situation is bad, because the suffering could be for a good reason; and the second situation might not be good, because maybe everyone is happy only in a trivial sense, having just wireheaded.")

In other words, Harris has not bridged the gap because he has begun on the "ought" side.

Rhetorically, Harris might omit the bits about purpose or virtue, and the interlocutor might still admit that the first state is bad and the second better, because the interlocutor has cooperatively embedded these additional moral premises.

In this case, to bridge the gap Harris counts on the listener supplying the first "ought."

Comment by fortyeridania on Economic policy for artificial intelligence · 2018-08-08T07:18:23.459Z · LW · GW

Summary:

Regardless of whether one adopts a pessimistic or optimistic view of artificial intelligence, policy will shape how it affects society. This column looks at both the policies that will influence the diffusion of AI and policies that will address its consequences. One of the most significant long-run policy issues relates to the potential for artificial intelligence to increase inequality.

Comment by fortyeridania on "Artificial Intelligence" (new entry at Stanford Encyclopedia of Philosophy) · 2018-07-19T09:52:34.186Z · LW · GW

The author is Selmer Bringsjord.

Academic: https://homepages.rpi.edu/~brings/

Wikipedia: https://en.wikipedia.org/wiki/Selmer_Bringsjord

Comment by fortyeridania on AI and the paperclip problem (or: Economist solves control problem with one weird trick!) · 2018-06-11T02:24:24.533Z · LW · GW

Author:

  • Website: https://www.joshuagans.com
  • Wikipedia: https://en.wikipedia.org/wiki/Joshua_Gans

Summary:

Philosophers have speculated that an AI tasked with a simple goal, such as creating paperclips, might cause an apocalypse by learning to divert ever-increasing resources to the task, and then learning how to resist our attempts to turn it off. But this column argues that, to do this, the paperclip-making AI would need to create another AI that could acquire power both over humans and over itself, and so it would self-regulate to prevent this outcome. Humans who create AIs with the goal of acquiring power may be a greater existential threat.

Key paragraph:

The insight from economics is that while it may be hard, or even impossible, for a human to control a super-intelligent AI, it is equally hard for a super-intelligent AI to control another AI. Our modest super-intelligent paperclip maximiser, by switching on an AI devoted to obtaining power, unleashes a beast that will have power over it. Our control problem is the AI's control problem too. If the AI is seeking power to protect itself from humans, doing this by creating a super-intelligent AI with more power than its parent would surely seem too risky.

Link to actual paper: https://arxiv.org/abs/1711.04309

Abstract:

Here we examine the paperclip apocalypse concern for artificial general intelligence (or AGI) whereby a superintelligent AI with a simple goal (i.e., producing paperclips) accumulates power so that all resources are devoted towards that simple goal and are unavailable for any other use. We provide conditions under which a paperclip apocalypse can arise, but also show that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing power capabilities to be developed. The reason is that such developments pose the same control problem for the AI as they do for humans (over AIs) and hence threaten to deprive it of resources for its primary goal.

Comment by fortyeridania on Paper: "A simple function of one parameter (θ) can fit any collection of ordered pairs {Xi,Yi} to arbitrary precision"--implications for Occam? · 2018-06-01T01:56:48.117Z · LW · GW

Well put.

Comment by fortyeridania on Paper: "A simple function of one parameter (θ) can fit any collection of ordered pairs {Xi,Yi} to arbitrary precision"--implications for Occam? · 2018-05-31T04:35:08.235Z · LW · GW

Thanks to Alex Tabarrok at Marginal Revolution: https://marginalrevolution.com/marginalrevolution/2018/05/one-parameter-equation-can-exactly-fit-scatter-plot.html

Title: "One parameter is always enough"

Author: Steven T. Piantadosi (University of Rochester)

Abstract:

We construct an elementary equation with a single real-valued parameter that is capable of fitting any “scatter plot” on any number of points to within a fixed precision. Specifically, given a fixed ε > 0, we may construct f_θ so that for any collection of ordered pairs {(x_j, y_j)}, j = 0, …, n, with n, x_j ∈ ℕ and y_j ∈ (0, 1), there exists a θ ∈ [0, 1] giving |f_θ(x_j) − y_j| < ε for all j simultaneously. To achieve this, we apply prior results about the logistic map, an iterated map in dynamical systems theory that can be solved exactly. The existence of an equation f_θ with this property highlights that “parameter counting” fails as a measure of model complexity when the class of models under consideration is only slightly broad.

After highlighting the two examples in the paper, Tabarrok provocatively writes:

Aside from the wonderment at the result, the paper also tells us that Occam’s Razor is wrong. Overfitting is possible with just one parameter and so models with fewer parameters are not necessarily preferable even if they fit the data as well or better than models with more parameters.

Occam's Razor in its narrow form--the insight that simplicity renders a claim more probable--is a consequence of the interaction between Kolmogorov complexity and Bayes' theorem. I don't see how this result affects that idea per se. But perhaps it shows the flaws of conceptualizing complexity as "number of parameters."
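
For the curious, here is a minimal sketch of the construction in Python. The model family f_θ(x) = sin²(2^(xτ)·arcsin √θ) is the paper's; the encoding step (in particular the parity/complement handling that the angle-doubling forces) is my reconstruction from the paper's logistic-map argument, not the author's code:

    from mpmath import mp, mpf, asin, sin, sqrt, pi

    def fit_theta(ys, tau=16):
        # Pack the points (0, ys[0]), (1, ys[1]), ... into a single theta.
        mp.prec = tau * (len(ys) + 1) + 64       # working precision must cover every block
        bits = []
        for y in ys:                             # each y must lie in (0, 1)
            z = 2 / pi * asin(sqrt(mpf(y)))      # angle in (0, 1), in units of pi/2
            if bits and bits[-1] == 1:           # angle-doubling flips the next block when
                z = 1 - z                        # the preceding bit is 1, so pre-complement
            for _ in range(tau):                 # append tau bits of z
                z *= 2
                b = int(z)
                bits.append(b)
                z -= b
        frac = sum(mpf(b) / 2 ** (i + 1) for i, b in enumerate(bits))
        return sin(pi / 2 * frac) ** 2           # note f_theta(0) = theta ~ ys[0]

    def f(theta, x, tau=16):
        # The paper's one-parameter family, evaluated at integer x.
        return sin(mpf(2) ** (x * tau) * asin(sqrt(theta))) ** 2

    ys = [0.12, 0.87, 0.45, 0.66]
    theta = fit_theta(ys)
    print([float(f(theta, x)) for x in range(len(ys))])  # ~ ys; error shrinks as tau grows

Every extra data point costs τ more bits of precision in θ, which is the point: the "information" normally spread across many parameters is hiding in ever-deeper digits of a single one.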

Comment by fortyeridania on LINK: Where Humans Meet Machines: Intuition, Expertise and Learning (Brynjolfsson talks to Kahneman) · 2018-05-21T03:06:19.936Z · LW · GW

HT to Tyler Cowen: https://marginalrevolution.com/marginalrevolution/2018/05/erik-brynjolfsson-interviews-daniel-kahneman.html

Comment by fortyeridania on Gaining Approval: Insights From "How To Prove It" · 2018-05-14T09:12:05.166Z · LW · GW

Ah, of course. Thanks.

Comment by fortyeridania on Affordance Widths · 2018-05-14T05:44:45.611Z · LW · GW

The term "affordance width" makes sense, but perhaps there's no need to coin a new term when "tolerance" exists already.

Comment by fortyeridania on Gaining Approval: Insights From "How To Prove It" · 2018-05-14T03:40:55.917Z · LW · GW

A ∨ B ⟷ (¬A ⟶ B)

But this is not right as stated: with what you've written you can get from the left side to the right side, but you can't get from the right side back to the left.

What you need is: "Either Alice did it or Bob did it. If it wasn't Alice, then it was Bob; and if it wasn't Bob, then it was Alice."

Thus: A ∨ B ⟷ ((¬A ⟶ B) ∧ (¬B ⟶ A))

Comment by fortyeridania on Terrorism, Tylenol, and dangerous information · 2018-05-14T03:27:40.301Z · LW · GW

Interesting post, and I'm sure "not having thought of it" helps explain the recency of vehicular attacks (though see the comment from /u/CronoDAS questioning the premise that they are as recent as they may seem).

Another factor: Other attractive methods, previously easy, are now harder--lowering the opportunity cost of a vehicular attack. For example, increased surveillance has made carefully coordinated attacks harder. And perhaps stricter regulations have made it harder to obtain bomb-making materials or disease agents.

This also helps to explain the apparent geographical distribution of vehicle attacks: they are more common in Europe and Canada than in the United States, especially per capita. Alternative ways to kill many people, like with a gun, are much easier to come by in the US.

Yet another explanation: Perhaps the reason terrorist behavior doesn't appear to maximize damage or terror is that much terrorism is not intended to do so. My favorite piece arguing this is from Gwern:

Comment by fortyeridania on Eight political demands that I hope we can agree on · 2018-05-04T03:37:59.050Z · LW · GW

How much support is there for promotion of prediction markets? I see three levels:

1. Legalization of real-money markets (they are legal in some places, but their illegality or legal dubiousness in the US--combined with the centrality of US companies in global finance--makes it hard to run a big one without US permission)

2. Subsidies for real-money markets in policy-relevant issues, as advocated by Robin Hanson

3. Use of prediction markets to determine policy (futarchy), as envisioned by Robin Hanson

Comment by fortyeridania on Eight political demands that I hope we can agree on · 2018-05-04T03:33:25.942Z · LW · GW

1. We want public policy that's backed up by empirical evidence. We want a government that runs controlled trials to find out what policies work.

This seems either empty (because no policy has zero empirical backing), throttling (because you can't possibly have an adequate controlled trial on every proposal), or pointless (because most political disputes are not good-faith disagreements over empirical support).

Second, as this list seems specific to one country, I wonder how rationalists who don't follow its politics can inform this consensus.

Third, did you choose eight demands only to mimic the Fabians? Does that mean you omitted some other plausible demands, or that you stretched a few that perhaps should not have made the cut?

Comment by fortyeridania on Eight political demands that I hope we can agree on · 2018-05-04T03:23:51.593Z · LW · GW

Upvoted for the suggestion to reword the euthanasia point.

Comment by fortyeridania on [deleted post] 2018-05-04T03:19:04.181Z

Useful distinction: "rationalist" vs. "rational person." By the former I mean someone who deliberately strives to be the latter. By the latter I mean someone who wins systematically in their life.

It's possible that rationalists tend to be geeks, especially if the most heavily promoted methods for deliberately improving rationality are mathy things like explicit Bayesian reasoning, or if most of the material advocating rationality is heavily dependent on tech metaphors.

Rational people need not fit the stereotypes you've listed. Most people I know who seem to be good at living have excellent social skills and are physically fit. Some well-known rationalists, or fellow travelers, also do not fit. An example is Tim Ferriss.

Comment by fortyeridania on Bayesian Reasoning - Explained Like You're Five · 2018-01-19T04:13:29.478Z · LW · GW

Hey, I just saw this post. I like it. The coin example is a good way to lead in, and the non-quant teacher example is helpful too. But here's a quibble:

If we follow Bayes’ Theorem, then nothing is just true. Things are instead only probable because they are backed up by evidence.

The map is not the territory; things are still true or false. Bayes' theorem doesn't say anything about the nature of truth itself; whatever your theory of truth, that should not be affected by the acknowledgement of Bayes' theorem. Rather, it's our beliefs (or at least the beliefs of an ideal Bayesian agent) that are on a spectrum of confidence.
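
To make the "spectrum of confidence" concrete, here is a toy update in Python, reusing the post's coin example with a made-up 1% prior (my numbers, purely illustrative). The coin's true state never changes; only the posterior does:

    def posterior_double_headed(n_heads, prior=0.01):
        # P(double-headed | n heads in a row), straight from Bayes' theorem
        like_dh = 1.0                 # a double-headed coin always shows heads
        like_fair = 0.5 ** n_heads    # a fair coin shows n straight heads with prob 2^-n
        joint_dh = prior * like_dh
        return joint_dh / (joint_dh + (1 - prior) * like_fair)

    for n in (0, 2, 4, 6, 8, 10):
        print(n, round(posterior_double_headed(n), 4))  # confidence climbs toward 1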

Comment by fortyeridania on Artificial intelligence and the stability of markets · 2017-11-15T02:18:41.389Z · LW · GW

Introduction:

Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI machine in the day-to-day charge of such a system, automatically self-correcting and learning from mistakes and meeting the objectives of its human masters.

This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.

However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).

Conclusion:

Artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions. We get more coherent rules and automatic compliance, all with much lower costs than current arrangements. The main obstacle is political and social, not technological.

From the point of view of financial stability, the opposite conclusion holds.

We may miss out on the most dangerous type of risk-taking. Even worse, AI can make it easier to game the system. There may be no solutions to this, whatever the future trajectory of technology. The computational problem facing an AI engine will always be much higher than that of those who seek to undermine it, not the least because of endogenous complexity.

Meanwhile, the very formality and efficiency of the risk management/supervisory machine also increases homogeneity in belief and response, further amplifying pro-cyclicality and systemic risk.

The end result of the use of AI for managing financial risk and supervision is likely to be lower volatility but fatter tails; that is, lower day-to-day risk but more systemic risk.

Comment by fortyeridania on Pitting national health care systems against one another · 2017-10-25T02:16:34.732Z · LW · GW

I agree with most of what you've said, but here's a quibble:

If you are an evil pharma-corp, vaccines are a terrible way to be evil.

Unless you're one of the sellers of vaccines, right?

Comment by fortyeridania on Open thread, October 16 - October 22, 2017 · 2017-10-18T03:41:48.605Z · LW · GW

That's too bad; it probably doesn't have to be that way. If you can articulate what infrastructural features of 1.0 are missing from 2.0, perhaps the folks at 2.0 can accommodate them in some way.

Comment by fortyeridania on 2017 LessWrong Survey · 2017-09-18T05:37:45.891Z · LW · GW

Done.

Comment by fortyeridania on Stanislav Petrov has died (2017-05-19) · 2017-09-18T03:15:14.368Z · LW · GW

More (German): http://karl-schumacher-privat.de

Original Eliezer post: http://lesswrong.com/lw/jq/926_is_petrov_day/

Other LW discussions:

The anniversary of the relevant event will be next Tuesday.

Comment by fortyeridania on Stupid Questions September 2017 · 2017-09-18T02:22:38.868Z · LW · GW

I don't remember if the Sequences cover it. But if you haven't already, you might check out SEP's section on Replies to the Chinese Room Argument.

Comment by fortyeridania on Is Feedback Suffering? · 2017-09-15T07:23:27.960Z · LW · GW
  • Scholarly article
  • Title: Do scholars follow Betteridge’s Law?
  • Answer is no

Nice.

Comment by fortyeridania on Is Feedback Suffering? · 2017-09-11T01:46:59.820Z · LW · GW

I know this is Betteridge's law of headlines, but do you happen to know if it's accurate?

Comment by fortyeridania on New business opportunities due to self-driving cars · 2017-09-11T01:40:37.161Z · LW · GW

This was also explored by Benedict Evans in this blog post and this EconTalk interview, mentioned in the most recent feed thread.

Comment by fortyeridania on Heuristics for textbook selection · 2017-09-08T08:29:33.310Z · LW · GW

True. I think Frum did this in law school, which he finished in 1987.

Comment by fortyeridania on Heuristics for textbook selection · 2017-09-06T06:53:52.204Z · LW · GW

In addition to what you've cited, here are some methods I've used and liked:

  1. Email professors to ask for recommendations. Be polite, concise, and specific (e.g., why exactly do you want to learn more about x?).

  2. David Frum says he used to pick a random book on his chosen topic, check which books kept showing up in the footnotes, then repeat with those books. A couple rounds yielded a good picture of who the recognized authorities were. (I pointed this out in a Rationality Quotes thread in 2015. Link: http://lesswrong.com/lw/lzn/rationality_quotes_thread_april_2015/c7qp.) Cons: This is time-consuming, sometimes requires physical access to many books you don't yet own, and tends to omit recent books.
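
In code, Frum's procedure is just an iterated citation tally. A minimal sketch: the citations() helper is hypothetical (you would have to supply it from a catalog or by hand; no such API is assumed to exist), so this only pins down the loop:

    from collections import Counter

    def find_authorities(seed_books, citations, rounds=3, top_k=5):
        # citations(book) -> the works cited in that book (hypothetical helper)
        current = list(seed_books)
        for _ in range(rounds):
            tally = Counter(work for book in current for work in citations(book))
            current = [work for work, _ in tally.most_common(top_k)]
        return current  # the names that keep recurring are the recognized authorities

    # Toy demo with a hand-made citation graph:
    graph = {"A": ["C", "D"], "B": ["C"], "C": ["D"], "D": []}
    print(find_authorities(["A", "B"], citations=graph.get, rounds=2, top_k=2))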

Comment by fortyeridania on Open thread, September 4 - September 10, 2017 · 2017-09-05T02:49:17.667Z · LW · GW

but I don't feel them becoming habitual as I would like

Have you noticed any improvement? For example, an increase in the amount of time you feel able to be friendly? If so, then be not discouraged! If not, try changing the reward structure.

For example, you can explicitly reward yourself for exceeding thresholds (an hour of non-stop small talk --> extra dark chocolate) or meeting challenges (a friendly conversation with that guy --> watch a light documentary). Start small and easy. Or: Some forms of friendly interaction might be more rewarding than others; persist in those to acclimate yourself to longer periods of socialising.

There's a lot of literature on self-management out there. If you're into economics, you might appreciate the approach called picoeconomics:

Caution: In my own experience, building new habits is less about reading theories and more about doing the thing you want to get better at, but it's disappointingly easy to convince myself that a deep dive into the literature is somehow just as good; your experience may be similar (or it may not).

Comment by fortyeridania on [deleted post] 2017-09-04T04:48:01.534Z

Is one's answer to the dilemma supposed to illuminate something about the title question? Presumably a large part of the worth-livingness of life consists in the NPV of future experiences, not just in past experiences.

  • Title question: Yes. Proof by revealed preference:

(1) Life is a good with free disposal.

(2) I am alive.

(3) Therefore, life is worth living.

  • Dilemma: Choose the second, on the odds that God changes its mind and lets you keep living, can't find you again the second time around, is itself annihilated in the interim, etc.

Quibble: Annihilationism is an eschatological doctrine about the final fate of all souls, not the simple event of annihilation.

Comment by fortyeridania on China’s Plan to ‘Lead’ in AI: Purpose, Prospects, and Problems · 2017-08-10T01:55:13.654Z · LW · GW

I found this article linked here: https://www.cfr.org/blog/beijings-ai-strategy-old-school-central-planning-futuristic-twist

Comment by fortyeridania on Sam Harris and Scott Adams debate Trump: a model rationalist disagreement · 2017-07-26T02:57:29.400Z · LW · GW

Maybe I should write a book!

I hope you do, so I can capitalize on my knowledge of your longstanding plan to capitalize on your knowledge of Adams' longstanding plan to capitalize on his knowledge that Trump would win with a book with a book with a book.

Comment by fortyeridania on Lloyd's of London and less-than-catastrophic risk · 2017-06-30T01:07:02.115Z · LW · GW

And even if you have one, the further that real-life market is away from the abstract free market, the less prices converge to cost + usual profit.

True.

I suspect that there is no market for unique, poorly-estimable risks.

That's probably true for most such risks, but it's worth noting that there are markets for some forms of oddball events. One example is prize indemnity insurance (contest insurance).

Comment by fortyeridania on Lloyd's of London and less-than-catastrophic risk · 2017-06-15T07:21:24.310Z · LW · GW

The formatting is broken

Fixed, thanks.

For unique, poorly-estimable risks the insurance industry had strong incentive to overprice them

Plausible, and one should certainly beware of biases like this. On the other hand, given conventional assumptions regarding the competitiveness of markets, shouldn't prices converge toward a rate that is "fair" in the sense that it reflects available knowledge?
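
A toy version of the convergence argument, with made-up numbers (my illustration, not from this thread): under competition, the lowest quote consistent with some insurer's estimate wins the business.

    def quote(p_loss, loss, profit_margin=0.05):
        # sustainable premium: expected loss plus a normal profit margin
        return p_loss * loss * (1 + profit_margin)

    estimates = [0.008, 0.012, 0.020]           # rival insurers' estimates of p_loss
    quotes = [quote(p, 1_000_000) for p in estimates]
    print(min(quotes))                          # the most optimistic estimate sets the price

If estimates are dispersed, the most optimistic insurer tends to win, which pushes realized premiums below the consensus estimate rather than above it--another reason systematic overpricing is not obvious.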

Comment by fortyeridania on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-25T03:12:24.104Z · LW · GW

Betteridge's law of headlines

Comment by fortyeridania on From Jonathan Haidt: A site that lets you "experience different viewpoints" · 2017-05-03T16:01:39.826Z · LW · GW

From Marginal Revolution: http://marginalrevolution.com/marginalrevolution/2017/05/viewpoint-diversity-experience.html

Comment by fortyeridania on Scenario analysis: a parody · 2017-04-28T01:30:05.897Z · LW · GW

I know this is meant to be parody, but how closely does it resemble scenario analysis in the corporate world? From what I've read about the actual use of scenario analysis (e.g., at Shell), the process takes much longer (many sessions over a period of weeks).

Second, and more importantly: suits are typically not quants, and have a tendency to misinterpret (or ignore) explicit probabilities. And they can easily place far too much confidence in the output of a specific model (model risk). In this context, switching from full-on quant models to narrative models (as scenario analysis entails) can increase accuracy, or at least improve calibration. This is a classic "roughly right vs. precisely wrong" situation.

Comment by fortyeridania on Boundedly Rational Patients? Health and Patient Mistakes in a Behavioral Framework · 2017-04-28T01:04:51.433Z · LW · GW

I found this article through Marginal Revolution: http://marginalrevolution.com/marginalrevolution/2017/04/thursday-assorted-links-106.html

Comment by fortyeridania on Boundedly Rational Patients? Health and Patient Mistakes in a Behavioral Framework · 2017-04-28T01:04:11.310Z · LW · GW

Authors: Ada C. Stefanescu Schmidt, Ami Bhatt, Cass R. Sunstein

Abstract:

During medical visits, the stakes are high for many patients, who are put in a position to make, or to begin to make, important health-related decisions. But in such visits, patients often make cognitive errors. Traditionally, those errors are thought to result from poor communication with physicians; complicated subject matter; and patient anxiety. To date, measures to improve patient understanding and recall have had only modest effects. This paper argues that an understanding of those cognitive errors can be improved by reference to a behavioral science framework, which distinguishes between a “System 1” mindset, in which patients are reliant on intuition and vulnerable to biases and imperfectly reliable heuristics, and a “System 2” mindset, which is reflective, slow, deliberative, and detail-oriented. To support that argument, we present the results of a randomized-assignment experiment that shows that patients perform very poorly on the Cognitive Reflection Test and thus are overwhelmingly in a System 1 state prior to a physician visit. Assigning patients the task of completing patient-reported outcomes measures immediately prior to the visit produced a small numerical, but not statistically significant, shift towards a reflective frame of mind. We describe hypotheses to explain poor performance by patients, which may be due to anxiety, a bandwidth tax, or a scarcity effect, and outline further directions for study. Understanding the behavioral sources of errors on the part of patients in their interactions with physicians and in their decision-making is necessary to implement measures to improve shared decision-making, patient experience, and (perhaps above all) clinical outcomes.

Comment by fortyeridania on Holy Ghost in the Cloud (review article about christian transhumanism) · 2017-04-20T04:59:50.257Z · LW · GW

For comparison, here are Robin Hanson's thoughts on some Mormon transhumanists: http://www.overcomingbias.com/2017/04/mormon-transhumanists.html

Comment by fortyeridania on April '17 I Care About Thread · 2017-04-20T04:34:31.981Z · LW · GW

Good point. You don't have to go to the gym. I used to do jumping jacks in sets of 100, several sets throughout the day. Gradually increase the number of daily sets.

Comment by fortyeridania on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-20T04:31:20.283Z · LW · GW

What would that look like?

Concretely? I'm not sure. One way is for a pathogen to jump from animals (or a lab) to humans, and then manage to infect and kill billions of people.

Humanity existed for the great majority of its history without antibiotics.

True. But it's much easier for a disease to spread long distances and among populations than in the past.

Note: I just realized there might be some terminological confusion, so I checked Bostrom's terminology. My "billions of deaths" scenario would not be "existential," in Bostrom's sense, because it isn't terminal: Many people would survive, and civilization would eventually recover. But if a pandemic reduced today's civilization to the state in which humanity existed for the majority of its history, that would be much worse than most nuclear scenarios, right?

Comment by fortyeridania on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-20T04:13:49.812Z · LW · GW

It's true that the probability of an existential-level AMR event is very low. But the probability of any existential-level threat event is very low; it's the extreme severity, not the high probability, that makes such risks worth considering.

What, in your view, gets the top spot?

Comment by fortyeridania on April '17 I Care About Thread · 2017-04-19T08:33:12.031Z · LW · GW

Many people have been through similar periods and overcome them, so asking around will yield plenty of anecdotal advice. And I assume you've read the old /u/lukeprog piece How to Beat Procrastination.

For me, regular exercise has helped with general motivation, energy levels, and willpower--the opposite of akrasia generally. (How to bootstrap the motivation to exercise? I made a promise to a friend and she agreed to hold me accountable to exercising. It was also easier because there was someone I wanted to impress.)

Good luck. When you've got a handle on it, do share what helped most/least.

Comment by fortyeridania on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-19T08:19:57.648Z · LW · GW

Yes, I have. Nuclear war lost its top spot to antimicrobial resistance.

Given recent events on the Korean peninsula it may seem strange to downgrade the risk of nuclear war. Explanation:

  • While the probability of conflict is at a local high, the potential severity of the conflict is lower than I'd thought. This is because I've downgraded my estimate of how many nukes DPRK is likely to successfully deploy. (Any shooting war would still be a terrible event, especially for Seoul, which is only about 60 km from the border--firmly within conventional artillery range.)

  • An actual conflict with DPRK may deter other aspiring nuclear states, while a perpetual lack of conflict may have the opposite effect. As the number of nuclear states rises, both the probability and severity of a nuclear war rise, so the expected damage--their product--rises roughly as the square. The chance of accidental or terrorist use of nukes rises too.

  • Rising tensions with DPRK, even without a war, can result in a larger global push for stronger anti-proliferation measures.

  • Perhaps paradoxically, because (a) DPRK's capabilities are improving over time and (b) a conflict now ends the potential for a future conflict, a higher chance of a sooner (and smaller) conflict means a lower chance of a later (and larger) conflict.

You say:

I ended up to believe that now nuclear war > runaway biotech > UFAI

What was your ranking before, and on what information did you update?

Comment by fortyeridania on April '17 I Care About Thread · 2017-04-19T04:59:23.624Z · LW · GW

Nothing particularly exciting comes to my mind

Property prices would fall. Sounds like a job for real-estate entrepreneurs.

Comment by fortyeridania on "Risk" means surprise · 2015-05-22T07:31:54.844Z · LW · GW

  • I think they can only mean either "variance" or "badness of worst case"

In the context of financial markets, risk means dispersion around the mean (often measured by the standard deviation). My finance professor emphasized that although in everyday speech "risk" refers only to bad things, in finance we talk of both downside and upside risk.
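
A quick illustration with made-up returns (my sketch, not from the thread): ordinary volatility counts deviations in both directions, while a downside-only measure is closer to the everyday sense of "risk".

    import statistics

    returns = [0.04, -0.02, 0.01, 0.06, -0.03, 0.02]  # illustrative monthly returns
    mean = statistics.fmean(returns)
    vol = statistics.stdev(returns)                   # finance "risk": both directions count
    downside = (sum(min(r - mean, 0) ** 2 for r in returns)
                / (len(returns) - 1)) ** 0.5          # penalizes only below-mean outcomes
    print(round(vol, 4), round(downside, 4))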

Comment by fortyeridania on Gettier walks into a bar, um, barrista · 2015-05-01T21:09:42.513Z · LW · GW

Gettier walks into a bar and is immediately greeted with the assertion that all barroom furniture is soft, unless it's a table. So he produces a counterexample.

Comment by fortyeridania on Experience of typical mind fallacy. · 2015-04-27T20:31:25.405Z · LW · GW

I think atypically, just like everyone else.

Comment by fortyeridania on Rationality Quotes Thread April 2015 · 2015-04-02T06:10:47.614Z · LW · GW

When I was in law school, I devised my own idiosyncratic solution to the problem of studying a topic I knew nothing about. I'd wander into the library stacks, head to the relevant section, and pluck a book at random. I'd flip to the footnotes, and write down the books that seemed to occur most often. Then I'd pull them off the shelves, read their footnotes, and look at those books. It usually took only 2 or 3 rounds of this exercise before I had a pretty fair idea of who were the leading authorities in the field. After reading 3 or 4 of those books, I usually had at least enough orientation in the subject to understand what the main questions at issue were - and to seek my own answers, always provisional, always subject to new understanding, always requiring new reading and new thinking.

--David Frum

The oldest (non-dead) source I could find was this 2008 post by someone else quoting Frum.

Related to: Update Yourself Incrementally and For progress to be by accumulation and not by random walk, read great books

Comment by fortyeridania on What you know that ain't so · 2015-03-24T17:36:33.976Z · LW · GW

This is related to the ideological Turing Test, as well as the LW post Are Your Enemies Innately Evil?