Comment by korin43 on Could waste heat become an environment problem in the future (centuries)? · 2019-04-04T15:36:18.336Z · score: 1 (1 votes) · LW · GW

I'd like to question both assumptions (2) and (3).

For (2), it's not clear to me that "more energy" is the thing people want. Some things we do require a lot of energy (transportation, manufacturing) but some things that are really important use surprisingly little power (the internet).

For (3), it seems like we've been moving in the direction of more-efficiency for a long time (better engines and turbines to convert more of the fuel into useful energy, fewer losses to friction and transmission, etc.).

Overall, I think we're seeing an upward trend in power usage because more people are getting the full benefit of modern technology, not because modern technology is using more power per person. I wish I could find a long-term graph, but per-capita power consumption in the United States has been going down for a few years, and total power consumption in the United States has been flat since around 1995: https://en.wikipedia.org/wiki/Energy_in_the_United_States#Consumption Another way of putting this is that it's not that cars are using more gas, it's that a lot more people have cars. This means that in the short term we can expect energy usage to continue going up for a while, but it could plausibly peak if/when the global population peaks and we finish the project of ending worldwide poverty.

For (1), I could nitpick, since fission could also power our civilization for a long time, although I don't think it really affects the question you're asking.

Comment by korin43 on Why didn't Agoric Computing become popular? · 2019-02-19T15:52:33.602Z · score: 1 (1 votes) · LW · GW

When dealing with resources on the internet, you're running into the "trading off something cheap for something expensive" issue again. I could *right now* spend several days or weeks writing a program that dynamically looks up how expensive it is to run some algorithm on arbitrary cloud providers and runs it on the cheapest one (or waits if the price is too high), but it would be much faster for me to just do a quick Google search and hard-code the cheapest provider right now. They might not always be the cheapest, but it's probably not worth thousands of dollars of my time to optimize this more than that.
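
To make the tradeoff concrete, here's a minimal sketch of the "dynamic" version, with every provider's price stubbed out as a made-up dict (real code would have to call each provider's actual pricing API, which is where the days or weeks go; the names and numbers here are purely illustrative):

```python
# Sketch: pick the cheapest provider from a price table.
# fetch_prices() is a stand-in for calling real pricing APIs.

def fetch_prices():
    """Hypothetical: return $/CPU-hour for each provider (made-up numbers)."""
    return {"provider_a": 0.048, "provider_b": 0.041, "provider_c": 0.052}

def cheapest_provider(prices, max_price=None):
    """Return the cheapest provider, or None if everything is too expensive."""
    name, price = min(prices.items(), key=lambda kv: kv[1])
    if max_price is not None and price > max_price:
        return None  # wait for prices to drop
    return name

print(cheapest_provider(fetch_prices()))  # provider_b
```

The hard-coded alternative is one line (`provider = "provider_b"`), which is the whole point: the market machinery only pays for itself when prices move enough to matter.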

Regarding writing a program to dynamically look up more complicated resources like algorithms and data: I don't know how you would do this without a general-purpose, programmer-equivalent AI. I think maybe your view of programming seriously underestimates how hard this is. Probably 95% of data science is finding good sources of data, getting them into a somewhat-machine-readable form, cleaning them up, and doing various validations that the data makes any sense. If it were trivial for programs to use arbitrary data on the internet, there would be much bigger advancements than agoric computing.

Comment by korin43 on Why didn't Agoric Computing become popular? · 2019-02-16T17:06:03.112Z · score: 9 (5 votes) · LW · GW

I think the problem with this is that markets are a complicated and fairly inefficient tool whose purpose is coordinating resource consumption among competing individuals without needing an all-knowing resource allocator. That's extremely useful when you actually need to coordinate competing individuals, but the functions in your program aren't really competing in the same way (there's a limited pool of resources, but for the most part each function needs a precise amount of memory, disk space, CPU time, etc., no more and no less).

There is also a close-enough-to-all-knowing resource allocator (the programmer or system administrator). The market model actually sounds like a plausibly workable way to do profiling, but it would be less overhead to just instrument every function to report what resources it uses and then central-plan your resource economy.
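
A minimal sketch of what I mean by instrumenting instead of marketizing: a decorator that records each function's actual time and peak memory, so the "central planner" can just read the ledger afterward (the function and numbers are illustrative, not a real profiler):

```python
# Sketch: record what each function actually uses, no bidding required.
import time
import tracemalloc
from collections import defaultdict

usage = defaultdict(lambda: {"calls": 0, "seconds": 0.0, "peak_bytes": 0})

def instrumented(fn):
    """Wrap fn to log call count, wall time, and peak memory into `usage`."""
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - t0
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            stats = usage[fn.__name__]
            stats["calls"] += 1
            stats["seconds"] += elapsed
            stats["peak_bytes"] = max(stats["peak_bytes"], peak)
    return wrapper

@instrumented
def build_table(n):
    return [i * i for i in range(n)]

build_table(100_000)
print(usage["build_table"]["calls"])  # 1
```

Once you have this ledger, "allocation" is just the administrator looking at the numbers and deciding, which is exactly the central-planning shortcut markets exist to avoid.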

In short, if everyone is a mindless automaton who takes only what they need and performs exactly what others require of them, and if the central planner can easily know exactly what resources exist and who wants them, then central planning works fine and markets are overkill (at least in the sense of being a useful tool; capitalism-as-a-moral-system is out-of-scope when talking about computer programs).

Note that even in cases like Amazon Web Services, the resource tracking and currency is just there to charge the end-user. Very few programs take these costs into account while they're executing (the exception is EC2 instance spot-pricing, but I think it's a stretch to even call that agoric computing).

Also, one other thing to consider is that agoric computing trades off something really, really cheap (computing resources) for something really, really expensive (programmer time). Most people don't even bother profiling because programmer time is dramatically more valuable than computer parts.

Comment by korin43 on Minimize Use of Standard Internet Food Delivery · 2019-02-11T18:07:08.716Z · score: 4 (3 votes) · LW · GW

The thing I don't understand is how the market got (and stays) this way. Slice successfully created a new (much lower margin) service for this. Why is everyone else putting up with 30% fees on something that's trivial to replace? For example, why aren't all of the businesses using ChowNow?

Presumably part of this is that some ordering systems get top billing in places like Google Maps, but given that Google Maps seems to show every order system under the sun, it can't be *that* hard to get a new one in there.

Also that article seems to equivocate between services like UberEats that provide their own delivery drivers and are plausibly worth paying a large fee to and services like GrubHub that are just online order systems and could presumably be trivially replaced.

Comment by korin43 on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-24T21:34:55.912Z · score: 1 (1 votes) · LW · GW

Looks like you can watch the game vs TLO here: https://www.youtube.com/watch?v=DpRPfidTjDA

I can't find the later games vs MaNa yet.

Comment by korin43 on Clothing For Men · 2019-01-20T19:17:31.005Z · score: 1 (1 votes) · LW · GW

Haha, writing my comments was way easier since you already covered the hard parts in the article, so I could just make short comments about the few places where I disagree.

Comment by korin43 on Clothing For Men · 2019-01-17T21:12:31.392Z · score: 2 (2 votes) · LW · GW

I feel like this article is more optimized for European / conservative US fashion. In most of the places I've lived in the US, you could follow basically the same rules but go significantly more casual. For example, you still want to get basically the same colors, material, logos, etc. but get jeans, t-shirts, and (maybe) nice-looking hoodies instead of button-up shirts, chinos and sweaters.

Comment by korin43 on Clothing For Men · 2019-01-17T21:05:35.618Z · score: 4 (3 votes) · LW · GW

I think shirts like this could help your status within small subcultures. I think the article is more about how to dress to maximize status in the overarching culture. Depending on your goals it could plausibly be worth it to optimize for a subculture instead, although I think such cases are probably uncommon (since most subcultures are fine with normal fashion too).

Comment by korin43 on Clothing For Men · 2019-01-17T20:56:49.609Z · score: 5 (4 votes) · LW · GW

I upvoted this article because the general advice is very good, although I disagree with most of the specific advice (the brands, and which pieces of clothing are most important). Fancier companies are generally nice in ways that have nothing to do with fashion (nicer materials, more comfortable). Pretty much any brand works fine if you can find the right fit and colors. You may need to explore multiple brands to find clothes that fit you, but that doesn't mean you have to go straight to expensive clothes. I can't find anything that fits me at Walmart but everything at Target does, and they're at very similar prices.

I started wearing relatively expensive clothing in the last few years, but it's entirely for reasons that aren't obvious visually (jeans with a very slight stretch around the waist are a lot more comfortable; wool shirts dry quickly and don't smell bad after physical activity).

Comment by korin43 on Double-Dipping in Dunning--Kruger · 2018-11-28T15:50:54.641Z · score: 2 (4 votes) · LW · GW

Please link more of your posts here. I looked through the history on your blog and there are quite a few that I think would be relevant and useful for people here. In particular, I think people would get a lot out of the posts about how to make friends. Some of the other posts have titles that look interesting too but I haven't had time to read them yet.

Comment by korin43 on Real-time hiring with prediction markets · 2018-11-10T02:08:34.356Z · score: 3 (3 votes) · LW · GW

I wonder if it's just the field I'm in, but this doesn't match what I've seen as a software engineer. Companies frequently retroactively create openings if someone good enough applies (I've seen this happen at every company I've worked at, and it's the official policy at my current company).

I also don't think the people in charge of hiring care that much about salary (they don't want to pay more than they need to, but realistically, how good someone is and how long they'll stay at a company matter a lot more). Part of it is that the pool of qualified applicants is much smaller than most people think so the situation of deciding between two (good enough) candidates for one opening is rare (it has never happened to me).

Comment by korin43 on Design 3: Intentionality · 2018-09-22T13:52:39.967Z · score: 2 (2 votes) · LW · GW

I've been using Standard Notes. It's basically just a networked text editor which can display structured text nicely.

Comment by korin43 on Design 3: Intentionality · 2018-03-29T19:35:25.871Z · score: 14 (4 votes) · LW · GW

I know it's kind of a weird thing for this post to do, but this one finally gave me the push I needed to set up decent journaling software, so I can plan better and also have something to reference in daily stand-up meetings instead of trying to come up with a summary of the previous day on the spot.

Comment by korin43 on [deleted post] 2018-02-07T15:24:53.354Z

Not sure why this got voted down so badly, but I can't get the link to work. Maybe you missed something when posting it?

Comment by korin43 on Security services relationship to social movements · 2017-12-17T16:39:49.107Z · score: 3 (1 votes) · LW · GW

This seems to be conflating two completely different phrases that use the word security. Security mindset has nothing at all to do with working for a government agency or being a spy. It's a similar concept to "antifragility" except that you're assuming that bad things don't just happen by chance.

Comment by korin43 on Fixing science via a basic income · 2017-12-09T18:48:23.745Z · score: 4 (2 votes) · LW · GW

Wouldn't this just push the problem back, so everyone would fight over PhD programs so they can get a guaranteed income? I imagine this would select for people who are good at impressing schools over people who are good at research.

Comment by korin43 on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:37:45.634Z · score: 0 (0 votes) · LW · GW

> The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI and also how this speaks to things like the Friendly AI Problem.

Unfortunately that makes this post not very useful. It's definitely interesting, but you're just making a bunch of assertions with very little evidence (mostly just that smart people like Ayn Rand and a quantum physicist agree with you).

Comment by korin43 on An Educational Curriculum · 2017-11-22T17:54:37.645Z · score: 10 (3 votes) · LW · GW

I don't know much about the specific goal you're working on, but my experience with CS has been that the best way to learn is to work on real problems with people who know what they're doing. I've learned significantly more from my internship and jobs than I did in school, and that seems to be pretty common. Rather than trying to design a curriculum, I'd advise trying to find someone doing what you're interested in and get a job/internship/apprenticeship working with them. After you've done that for a few years, I suspect you'll know what you're not getting out of the current deal and can either go off on your own or find a different set of teachers.

Comment by korin43 on The Copernican Revolution from the Inside · 2017-11-02T06:36:04.604Z · score: 31 (12 votes) · LW · GW

I feel like you may have gone too far in the other direction then, since what I got out of this was definitely "there wasn't any evidence for heliocentrism and people just liked it better for philosophical reasons". As far as I know, the standard science-education explanation for heliocentrism involves Newtonian physics, observations that people weren't able to make at the time (like you said, Tycho tried), and hindsight.

Can you expand on what the evidence that should have convinced people was? I feel like this article is a puzzle that's missing key information.

Comment by korin43 on Satoshi Nakamoto? · 2017-11-02T05:55:45.121Z · score: 10 (4 votes) · LW · GW

Why would a human withdraw from the account but an AI wouldn't? It seems like you're assuming either:

  1. The correct decision is to not withdraw the money. No human could be smart enough to figure this out, but an AI would. Are you an AI?
  2. The correct decision is to withdraw the money, and the AI is stupidly not doing it. Why is the AI stupider than a human?

I suppose Bitcoin's wastefulness would be a good cover for an AI wanting to use a bunch of computers without making people suspicious. I doubt it's the fastest way a super intelligent AI could make money though.

Comment by korin43 on Research Syndicates · 2017-11-02T05:40:23.277Z · score: 6 (2 votes) · LW · GW

You explain why new researchers would want to join, but why would top researchers want to? It seems like they lose money and time in exchange for that warm feeling you get when helping people. Would that be enough?

In terms of legality, worker-owned corporations exist, but I suspect it would be hard convincing people to give unrestricted funding to the corporation (I think most government grants are fairly specific about what you can spend the money on?).

My (outsider) perspective of the field is that private funding for academic-style research is uncommon, and generally involves the funder directly hiring the researchers, which seems to have some things in common with what you're saying (although since the researchers typically don't own any portion of the organization, they presumably have fewer incentives to mentor other people).

If non-academic research counts (researching something so you can build a product), then I think something similar to what you're proposing happens in some parts of the startup scene. For example, a group of people get together with an idea for a new product, start a company, and research how to create/improve the product. Once the company transitions from solving scientific/technical problems to solving organizational problems, the founders leave and join or found new startups. The main difference here is that it's a short-term cycle instead of a long-term commitment, but that doesn't seem to stop people from providing mentoring.

Comment by korin43 on Questions about AGI's Importance · 2017-10-31T22:15:25.048Z · score: 1 (1 votes) · LW · GW

I suspect this has been answered on here before in a lot more detail, but:

  • Evolution isn't necessarily trying to make us smart; it's just trying to make us survive and reproduce
  • Evolution tends to find local optima (see: obviously stupid designs like how the optic nerve works)
  • We seem to be pretty good at making things that are better than what evolution comes up with (see: no birds on the moon, no predators with natural machine guns, etc.)

Also, specifically in AI, there is some precedent for there being only a few years between "researchers get AI to do something at all" and "this AI is better at its task than any human who has ever lived". Chess did it a while ago. It just happened with Go. I suspect we're crossing that point with image recognition now.

Comment by korin43 on LW 2.0 Open Beta Live · 2017-10-31T22:03:35.925Z · score: 0 (0 votes) · LW · GW

Woo!

Also if anyone else gets a "schema validation error" when changing this setting, remove the "Website" from your profile: https://github.com/Discordius/Lesswrong2/issues/225

Comment by korin43 on Feedback on LW 2.0 · 2017-10-05T15:10:40.866Z · score: 3 (3 votes) · LW · GW

One feature makes it worth it on its own: people are posting stuff without worrying as much about whether it's "good enough" or fits the theme as well (suddenly we're getting posts from everyone again).

The site itself is kind of annoying though, so my main interactions are reading the article in my RSS feed, opening the page, voting, then closing the page.

I like the bigger font size, but agree that might be excessive (16px seems about right to me).

Commenting needs some serious UX attention. The comment box doesn't look like a comment box and the text is massive.

The front page doesn't make any sense. I understand showing top content for people who aren't logged in, but once someone is logged in, the front page should be new articles.

The site seems overly fancy, with heavy JavaScript and whatnot. Loading a page with nothing but text should be instantaneous but somehow manages to take several seconds.

Comment by korin43 on Feedback on LW 2.0 · 2017-10-05T15:04:49.445Z · score: 2 (2 votes) · LW · GW

This is huge for me: The site is actually usable on a phone. It's annoyingly slow, but LessWrong v1 is unusable on anything with a small screen (note: This could have been fixed in v1 also, but the two pull requests to fix it have been ignored for around a year).

Comment by korin43 on [deleted post] 2017-10-01T01:30:59.741Z

I just wanted to comment that I'm a huge fan of this series, so please don't stop just because the articles are getting shorter. I mean, honestly, short posts are easier to get through anyway.

Comment by korin43 on Marketing Failure · 2017-09-22T14:55:12.621Z · score: 4 (2 votes) · LW · GW

I'm concerned that this is one of those "I don't see the value in this thing so it must be useless" situations. I'm not a fan of the advertising industry and use ad blockers on principle, but it's a big stretch to assume that the entire sales department is engaged in a useless zero-sum game. If I can be snarky for a minute:

> If we can make a system such that the consumers would choose their products strictly on the basis of how much does the product fit to their needs, and make it easier for consumers to find these products, we can change the incentive system in such a way that the manufacturers would focus more on making better products instead of competing in the non-productive marketing dimension.

I propose that our company hire people whose job is to find customers whose needs are met by our product (or tell the engineering department how our product could better meet their needs), and then inform them of how our products meet their needs. I will call this department "marketing and sales".

I do like your idea of a prediction market for how much I'll like a product though. Having some way to get good things without having to do my own research would be nice.

Comment by korin43 on Beta - First Impressions · 2017-09-22T13:23:23.329Z · score: 1 (1 votes) · LW · GW

There's some sort of top-level RSS feed: https://www.lesserwrong.com/feed.xml

I don't know if there's any way to subscribe to individual people/sections.

Comment by korin43 on LW 2.0 Open Beta Live · 2017-09-21T13:37:43.613Z · score: 3 (3 votes) · LW · GW

Agghhh I can't leave this tab open because it does this:

https://media.giphy.com/media/VXND9U858tCH6/giphy.gif

Comment by korin43 on LW 2.0 Open Beta Live · 2017-09-21T13:32:26.211Z · score: 1 (1 votes) · LW · GW

For anyone else who finds intercom the most annoying feature in existence, you can add an Adblock / UBlock rule to block: ###intercom-container

Although it will still screw with the page title.

Comment by korin43 on Machine Learning Group · 2017-07-17T17:19:07.412Z · score: 3 (3 votes) · LW · GW

> As a matter of short term practicality currently we don't have the hardware for GPU acceleration. This limits the things we can do, but at this stage of learning most of the time spent is on understanding and implementing the basic concepts anyway.

For what you're doing, GPU stuff probably doesn't make that big of a difference. Convolutional networks will train and run faster, but a digit recognition network should be tiny and fast anyway.
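
A quick back-of-the-envelope on why it should be tiny: a fully-connected MNIST-sized net (784 inputs, one hidden layer of 128, 10 outputs; layer sizes chosen just for illustration) has only ~100k parameters, a few hundred kilobytes as float32, which is trivial on a CPU:

```python
# Count weights + biases for a fully-connected network.
def mlp_params(layers):
    """Each adjacent pair (a, b) contributes a*b weights plus b biases."""
    return sum(a * b + b for a, b in zip(layers, layers[1:]))

n = mlp_params([784, 128, 10])
print(n)  # prints 101770
```

Training that is a handful of small matrix multiplies per example; GPUs only start to matter once you get into much deeper convolutional networks.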

Comment by korin43 on Becoming stronger together · 2017-07-11T23:16:41.427Z · score: 1 (1 votes) · LW · GW

In 2016, the "Less Wrong Diaspora" was 86% non-hispanic "white": http://lesswrong.com/lw/nmk/2016_lesswrong_diaspora_survey_analysis_part_one/

Comment by korin43 on a different perspecive on physics · 2017-06-28T03:13:45.988Z · score: 0 (0 votes) · LW · GW

You might like the book "The End of Time" by Julian Barbour. It's about an alternative view of physics where you rearrange all of the equations to not include time. The book describes the result sort of similarly to what you're suggesting, where the system is defined as the relationship between things and the evolution of those relationships and not precise locations and times.

Comment by korin43 on Priors Are Useless · 2017-06-22T16:59:46.426Z · score: 5 (5 votes) · LW · GW

I think you lost me at the point where you assume it's trivial to gather an infinite amount of evidence for every hypothesis.

Comment by korin43 on A new, better way to read the Sequences · 2017-06-04T15:18:57.423Z · score: 3 (3 votes) · LW · GW

This seems like a good place to ask: How do people read long web based books like this without losing their place? I usually look for ebooks just because my ebook reader will remember what page I was on. I used to use bookmarks for this, but I use 4 different computers on a regular basis (two laptops, a tablet, and a phone). Instapaper / pocket work ok, but then if I add a bunch of links I'll forget about the older ones. Solutions?

Comment by korin43 on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-23T20:07:10.833Z · score: 0 (0 votes) · LW · GW

Does it use anything non-local? The experiments in the article use macroscopic fluids, which presumably don't have non-local effects.

Comment by korin43 on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-23T16:44:46.323Z · score: 0 (0 votes) · LW · GW

Note that the theory seems to have been around since the 1930s, but these experiments are new (2016).

Comment by korin43 on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-23T16:42:51.794Z · score: 1 (1 votes) · LW · GW

> The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.
>
> Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features.

Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?

2017-05-23T16:38:35.338Z · score: 4 (3 votes)

Comment by korin43 on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-13T14:43:23.439Z · score: 3 (3 votes) · LW · GW

From the perspective of the God of Evolution, we are the unfriendly AI:

  • We were supposed to be compelled to reproduce, but we figured out that we can get the reward by disabling our reproductive functions and continuing to go through the motions.
  • We were supposed to seek out nutritious food and eat it, but we figured out that we could concentrate the parts that trigger our reward centers and just eat that.

And of course, we're unfriendly to everything else too:

  • Humans fight each other over farmland (= land that can be turned into food which can be turned into humans) all the time
  • We're trying to tile the universe with human colonies and probes. It's true that we're not strictly trying to tile the universe with our DNA, but we are trying to turn it all into human things, and it's not uncommon for people to be sad about the parts of the universe we can never reach and turn into humantronium.
  • We do not love or hate the cow/chicken/pig, but they are made of meat which can be turned into reward center triggers.

As to why we're not exactly like a paperclip maximizer, I suspect one big piece is:

  • We're not able to make direct copies of ourselves or extend our personal power to the extent that we expect AI to be able to, so "being nice" is adaptive because there are a lot of things we can't do alone. We expect that an AI could just make itself bigger or make exact copies that won't have divergent goals, so it won't need this.

Comment by korin43 on What conservatives and environmentalists agree on · 2017-04-24T22:48:45.501Z · score: 0 (0 votes) · LW · GW

This makes me wonder how much of the liberal/conservative divide with how seriously we take minor acts of terrorism has to do with direct experience with big cities. If you don't live in a city, hearing about a terrorist attack in a city is probably really scary, but if you've actually lived in a big city, a few people dying every few years is incredibly uneventful (for comparison, 318 people were murdered in my city last year).

Comment by korin43 on April '17 I Care About Thread · 2017-04-20T01:24:38.641Z · score: 0 (0 votes) · LW · GW

I sometimes wonder if there is more low hanging fruit in lives that could be saved if car safety was improved. Self driving cars are obviously one way to do that, but I worry that we're ignoring easier solutions because self driving cars will solve the problem eventually (not that I know what those easier solutions are).

Comment by korin43 on What's up with Arbital? · 2017-03-29T19:38:22.181Z · score: 10 (8 votes) · LW · GW

As a software engineer, it seems strange to me that Arbital is trying to be an encyclopedia, debate system, and blogging site at the same time. What made you decide to put those features together in one piece of software?

Comment by korin43 on Building Safe A.I. - A Tutorial for Encrypted Deep Learning · 2017-03-23T20:20:28.734Z · score: 0 (0 votes) · LW · GW

I think being encrypted may not actually help much with the control problem, since the problem isn't that we expect an AI to fully understand what we want and then be evil; it's that we're worried an AI won't be optimizing for what we want. Not knowing what the outputs actually do doesn't seem like it would help at all (except that the AI would only have the inputs we want it to have).

Comment by korin43 on Building Safe A.I. - A Tutorial for Encrypted Deep Learning · 2017-03-21T15:18:12.853Z · score: 0 (0 votes) · LW · GW

> In this blogpost, we're going to train a neural network that is fully encrypted during training (trained on unencrypted data). The result will be a neural network with two beneficial properties. First, the neural network's intelligence is protected from those who might want to steal it, allowing valuable AIs to be trained in insecure environments without risking theft of their intelligence. Secondly, the network can only make encrypted predictions (which presumably have no impact on the outside world because the outside world cannot understand the predictions without a secret key). This creates a valuable power imbalance between a user and a superintelligence. If the AI is homomorphically encrypted, then from it's perspective, the entire outside world is also homomorphically encrypted. A human controls the secret key and has the option to either unlock the AI itself (releasing it on the world) or just individual predictions the AI makes (seems safer).

Building Safe A.I. - A Tutorial for Encrypted Deep Learning

2017-03-21T15:17:54.971Z · score: 2 (3 votes)

Comment by korin43 on LessWrong Discord · 2017-03-13T13:33:14.126Z · score: 1 (1 votes) · LW · GW

Are you aware of the LessWrong Slack? Why Discord over that?

Comment by korin43 on Ferocious Truth (New Blog, Map/Territory Error Categories) · 2017-03-11T16:00:06.043Z · score: 0 (0 votes) · LW · GW

I chose actions that will increase your lifespan in general, since that's strictly better than only increasing the chance that, if you live long enough for it to matter, you'll live past your natural lifespan.

Evaluating the expected value of cryonics is hard because it runs into the same problem as Pascal's Wager, with a huge payoff in a low-probability case. I'm not really sure how to handle that.

The reasons I don't think it's likely to work right now are:

  • Current processes may not preserve human-sized brains well at all, even in ideal conditions (successful cryonics experiments seem to involve animals much smaller than our brains)
  • Alcor may not do the preservation perfectly
  • The technology to reconstruct our brains from frozen ones may not be possible, or might be so far off that the brain is damaged before it becomes possible
  • Alternatively, you could use whole-body preservation, but then the problems in my first point are significantly worse.
  • In non-ideal conditions, your brain is dead, breaking down, and losing information permanently. A sufficiently powerful AI might be able to make reasonable guesses, but it's not clear how much the person they create would really be you after extensive damage.
  • The leading causes of death for people aged 15-34 are injury, suicide, and homicide. All of those have a high chance of involving trauma to the head, which makes things much worse. For example, someone who dies in a car crash is probably not going to get much value from cryonics. https://www.cdc.gov/injury/images/lc-charts/leading_causes_of_death_age_group_2014_1050w760h.gif

And this last one brings up my first point again: if I want to not die, it's much more effective to drive safely (or not drive), get adequate medical care, exercise, etc. than to focus on the small chance of surviving after my body is already dying.

Comment by korin43 on Inbox zero - A guide - v2 (Instrumental behaviour) · 2017-03-11T15:20:13.782Z · score: 0 (0 votes) · LW · GW

I just started doing this at my new job and found it extremely useful. I used to lose important mail in the backlog all the time, but now everything in my inbox is either unread or a reminder of a task I need to finish. I tend to leave my huge tasks in the inbox too, but I might change that if I start having a lot of them.

Comment by korin43 on Ferocious Truth (New Blog, Map/Territory Error Categories) · 2017-03-05T16:43:53.812Z · score: 1 (1 votes) · LW · GW

The first part was good. The ending seems to be making way too many assumptions about other people's motivations.

Consider that in a 2016 survey of Less Wrong users, only 48 of 1,660 or 2.9% of respondents answering the question said that they were “signed up or just finishing up paperwork” for cryonics. [Argument from authority here]. While this is certainly a much higher portion than the essentially 0% of Americans who are signed up for cryonics based on published membership numbers, it is still a tiny percentage when considering that cryonics is the most direct action one can take to increase the probability of living past one’s natural lifespan.

First off, this last sentence is probably wrong. The most direct actions you can take to increase your expected lifespan (beyond obvious things like eating) are to exercise regularly, avoid cars and extreme sports, and possibly make changes to your diet.

This objection is consistent with the fact that 515 or 31% of respondents to the question answered that they “would like to sign up,” but haven’t for various reasons. Beyond that, when asked “Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?”, 71% of respondents answered yes or maybe.

I had to look through the survey data, but given that the median respondent said existing cryonics techniques have a 10% chance of working, it's not surprising that a majority haven't signed up for it. It's also very misleading how you group the "would like to" responses. 20% said they would like to but can't because it's either not offered where they live or they can't afford it. The relevant number for your argument is the 11% who said they would like to but haven't got around to it.

If a reliable and trustworthy source said that for the entire day, a major company or government was giving out $100,000 checks to everyone who showed up at a nearby location, what would be the rational course of action?

This example is exactly backwards for understanding why people don't agree with you about cryonics. Cryonics is very expensive and unlikely to work (right now), even in ideal scenarios (and I'm pretty sure that 10% median is for "will Alcor's process work at all", not, "how likely are you to survive cryonics if you die in a car crash thousands of miles away from their facility").

Any course of action not involving going down and collecting the $100,000 would likely not be rational.

This ignores opportunity cost and motivation. If someone wants $100,000 more than whatever else they could be doing with that time, then yes. But as we see above, not everyone agrees that a tiny, tiny chance of living longer is worth (the opportunity cost of) hundreds of thousands of dollars.


And I should point out, I personally think cryonics is very promising and should be getting a lot more research funding than it does (not to mention not being so legally difficult), but I think the probability of it working in common cases, i.e. anything other than dying inside Alcor's facility right now, is very low.

Comment by korin43 on Humble Charlie · 2017-02-27T20:17:21.675Z · score: 2 (2 votes) · LW · GW

In my series on GiveWell, I mentioned that my mother's friend Charlie, who runs a soup kitchen, gives away surplus donations to other charities, mostly ones he knows well. I used this as an example of the kind of behavior you might hope to see in a cooperative situation where people have convergent goals.

I recently had a chance to speak with Charlie, and he mentioned something else I found surprising: his soup kitchen made a decision not to accept donations online. They only took paper checks. Because they get enough money that way, they don't want to accumulate more money than they know how to use.

When I asked why, Charlie told me that it would be bad for the donors to support a charity if they haven't shown up in person to have a sense of what it does.

At first I was confused. This didn't seem like very consequentialist thinking. I briefly considered the possibility that Charlie was being naïve, or irrationally traditionalist, or thinking about what resembles his idea of a good charity. But after thinking about it for a moment, I realized that Charlie was getting something deeply right that almost everyone gets wrong, at least where money was involved. He was trying to maximize benefits rather than costs, in a case where the costs are much easier to measure.

Comment by korin43 on The price you pay for arriving to class on time · 2017-02-25T14:11:23.944Z · score: 1 (1 votes) · LW · GW

And if you're early, you can either talk to friends or read. I always try to show up at least ten minutes early to things and then use the extra time to do the reading I would have done at home later.

Headlines, meet sparklines: news in context

2017-02-18T16:00:46.212Z · score: 4 (3 votes)