Posts

Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? 2017-05-23T16:38:35.338Z · score: 4 (3 votes)
Building Safe A.I. - A Tutorial for Encrypted Deep Learning 2017-03-21T15:17:54.971Z · score: 2 (3 votes)
Headlines, meet sparklines: news in context 2017-02-18T16:00:46.212Z · score: 4 (3 votes)

Comments

Comment by korin43 on Could someone please start a bright home lighting company? · 2019-11-26T21:00:49.272Z · score: 1 (1 votes) · LW · GW

Huh, that company you link to at the end also has this section: https://store.yujiintl.com/collections/high-cri-led-emitters

Maybe this is just a matter of buying this emitter https://store.yujiintl.com/collections/high-cri-led-emitters/products/bc-series-high-cri-high-power-cob-led-5600k-bc270h-unit-1pcs?variant=25100635143 and hooking it up to a sufficiently quiet heatsink + packaging it in a more usable form factor.

I wonder if there would be issues with heat dissipation / potential fires, though.

Comment by korin43 on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T14:49:25.053Z · score: 12 (6 votes) · LW · GW

You may be right, but I read this more as an argument that we'll keep personal cars, not that we'll rent out half of them. If we're sticking with personal wagons, isn't it easier to just own your own engine too? Electric motors are cheap and battery prices are going down.

Comment by korin43 on Reflections on Premium Poker Tools: Part 1 - My journey · 2019-10-09T17:24:59.297Z · score: 3 (2 votes) · LW · GW

Thanks for writing this. I've been considering trying a software startup at some point, and I think your lessons from this will help me a lot. I've been thinking about doing something with RSS readers, since they're a tool I use and I have experience working on them with other people, but I definitely need to (1) do some market research to make sure it's even plausible that I could make money, (2) look into how to market it and whether I need a cofounder or could approach people who would help spread the word, and (3) if I do it, make sure I can quickly come up with something that people could immediately use. I was also leaning toward "do the same thing as the market leader but better", and I think you've convinced me that that's a scary place to go. I might plausibly be able to be better than the market leader _at one thing_ and then add more features, though.

(I'm brendanlong on Slack by the way, so we've already talked about this but I still found the articles really interesting)

Comment by korin43 on What supplements do you use? · 2019-07-28T19:42:31.688Z · score: 5 (2 votes) · LW · GW

You might be interested in Testa's omega-3 supplements. They contain both DHA and EPA but come from farmed algae, so they don't have the mercury issues that fish oil does.

https://www.amazon.com/Testa-middle-fish-Healthier-GMO-free-capsules/dp/B01H405UBM/

I take two per day based on some advice from someone on the LessWrong Slack.

Comment by korin43 on Explaining "The Crackpot Bet" · 2019-06-29T15:57:16.997Z · score: 1 (1 votes) · LW · GW

If he wins the bet, he gets a million dollars. If he loses the bet, he gets a 21-year interest-free million-dollar loan. Taking investment returns into account, the other party gives him several million dollars either way, and it doesn't really matter whether he wins or loses the bet.
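
To put very rough numbers on that (the 7% return is my own assumption, not something from the post): $1,000,000 invested at 7% per year for 21 years grows to about $1,000,000 × 1.07^21 ≈ $4,140,000, so even the "losing" side of the bet hands over roughly $3 million in forgone investment returns.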

Comment by korin43 on Explaining "The Crackpot Bet" · 2019-06-29T15:51:41.675Z · score: 2 (2 votes) · LW · GW

The bet is extremely one-sided. At the outset, you get a 21-year, million-dollar, zero-interest loan, and if you win you don't have to pay it back. There's no upside for the other person at all. Even if you "lose", the "winner" is giving you several million dollars in forgone interest.

There are a few reasons that offering this bet doesn't make you look smart:

1. The problem with the bet is extremely obvious and doesn't win you any cleverness points.

2. In context, you appear to be using this bet to flout rationalist conversational norms.

3. You may also be violating the norms of the mailing list you're using (sending jokes, sending the same email to multiple lists).

Specifically for the second point, rationalist argument norms generally expect people to do some combination of providing evidence, making a (real) bet, and acknowledging the lack of evidence (which is fine! Not everything is legible, and sometimes you need time to acquire evidence).

In this situation, it seems you made an argument that at least one other person found unconvincing. They responded in a way that (from your account) sounds pretty rude. At this point you have two options: respond to the unnecessarily personal attack, or respond to their argument.

For example, it would be completely reasonable to say something like "I realize you're not convinced by my argument, but I'd ask that you respond to the argument itself, and not generalizations about me (calling me a crackpot)".

You decided to respond to them with a counterargument (that you are in fact a genius), at which point the conversational norms above come up. "Bob" seems to have picked "make a bet", and you decided that "winning a Nobel Prize" is an unreasonable standard. I think you're completely justified in turning down an impossible bet, and there are several productive responses available to you:

  • Turn down the bet, and choose a different avenue to make your argument ("I'm not sure if we can come up with a reasonable bet for this, but I'm working on something exciting right now. Let's table this for now and we can see what you think when my paper/project/whatever is published.")
  • Come up with a new, more reasonable bet (some options: a paper of yours is published in a sufficiently prestigious form; you're invited to give a talk somewhere sufficiently prestigious; a neutral expert is chosen to adjudicate the bet in one year -- but knowingly, not just by the accident of saying the word "genius").

Instead, you countered with an even more unreasonable bet, sent it to multiple mailing lists, and doubled down when people asked you to stop (although they were also rude, from your account).

I hope this overly detailed response is helpful. To be clear, I've never been on any of these mailing lists, so I'm entirely relying on your account. My advice to you is:

1. Find a friend who participates on these mailing lists and get their opinion on whether you should apologize to the list or whether just ending this thread is enough (I suspect a short "Sorry for the annoying messages / fake bet" would be helpful, but in some contexts people may just want the thread to end and would prefer not to get any more messages about it). I don't know the full context, but if this is everything, I suspect people will get over it fairly quickly if you stop making it worse.

2. In the future, if something like this comes up, don't argue about vague things. You're perfectly within your rights to ask people to be nicer, but in a situation like this I think it would be far more productive to go with the "Please don't generalize about me; is there something you don't like about the argument?" response.

3. When you are arguing a point, be aware that sarcasm is dangerous, and trying to play it straight is even more dangerous. In particular, the bet you made and the arguments around it are highly suspect in rationalist circles. No one wants to argue with someone who is being intentionally misleading. This sort of thing *might* be ok with friends and in person, but it's almost never the right thing to do on a mailing list. If someone is arguing a point you disagree with, either give your evidence or defer the argument until you can collect more evidence.

Comment by korin43 on Explaining "The Crackpot Bet" · 2019-06-24T16:43:06.408Z · score: 10 (5 votes) · LW · GW

So if I'm understanding the timeline correctly, you said things people found so unconvincing that even your friends warned you that you sound like a crackpot, you doubled down by trolling an email list with an obviously one-sided bet, you got upset that your trolling made people angry, and now you're bragging about how smart you are? I don't know you, but if I were in control of this list I would have already banned you. Group cohesion is a hard enough problem without trolls trying to mess it up "for fun".

Comment by korin43 on Could waste heat become an environment problem in the future (centuries)? · 2019-04-04T15:36:18.336Z · score: 1 (1 votes) · LW · GW

I'd like to question both assumptions (2) and (3).

For (2), it's not clear to me that "more energy" is the thing people want. Some things we do require a lot of energy (transportation, manufacturing), but some things that are really important use surprisingly little (the internet).

For (3), it seems like we've been moving in the direction of more efficiency for a long time (better engines and turbines that convert more of the fuel into useful energy, fewer losses to friction and transmission, etc.).

Overall, I think we're seeing an upward trend in energy usage because more people are getting the full benefit of modern technology, not because modern technology is using more energy per person. I wish I could find a long-term graph, but per-capita energy consumption in the United States has been going down for a few years, and total energy consumption in the United States has been flat since around 1995: https://en.wikipedia.org/wiki/Energy_in_the_United_States#Consumption Another way of putting this is that it's not that cars are using more gas; it's that a lot more people have cars. This means that in the short term we can expect energy usage to continue going up for a while, but it could plausibly peak if/when the global population peaks and we finish the project of ending worldwide poverty.

For (1), I could nitpick, since fission could also power our civilization for a long time, although I don't think it really affects the question you're asking.

Comment by korin43 on Why didn't Agoric Computing become popular? · 2019-02-19T15:52:33.602Z · score: 1 (1 votes) · LW · GW

When dealing with resources on the internet, you're running into the "trading off something cheap for something expensive" issue again. I could *right now* spend several days or weeks writing a program (roughly like the sketch below) that dynamically looks up how expensive it is to run some algorithm on arbitrary cloud providers and runs it on the cheapest one (or waits if the price is too high), but it would be much faster for me to just do a quick Google search and hard-code the cheapest provider right now. They might not always be the cheapest, but it's probably not worth thousands of dollars of my time to optimize this any further.
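
To make that concrete, here's a rough sketch of the kind of program I mean. The provider names, prices, and lookup functions are all made up; real pricing APIs differ per provider and change constantly, which is a big part of why this usually isn't worth the engineering time:

```python
from typing import Callable, Dict, Optional

def cheapest_provider(price_lookups: Dict[str, Callable[[], float]],
                      max_price_per_hour: float) -> Optional[str]:
    """Return the cheapest provider under the price ceiling, or None to wait."""
    prices = {name: lookup() for name, lookup in price_lookups.items()}
    name, price = min(prices.items(), key=lambda kv: kv[1])
    return name if price <= max_price_per_hour else None

# Placeholder lambdas standing in for real pricing API calls.
providers = {
    "provider_a": lambda: 0.34,  # $/hour for the instance type we need
    "provider_b": lambda: 0.29,
    "provider_c": lambda: 0.41,
}

choice = cheapest_provider(providers, max_price_per_hour=0.35)
print(choice or "too expensive right now; wait and retry")
```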

Regarding writing a program to dynamically look up more complicated resources like algorithms and data: I don't know how you would do this without a general-purpose, programmer-equivalent AI. I think your view of programming may seriously underestimate how hard this is. Probably 95% of data science is finding good sources of data, getting them into a somewhat machine-readable form, cleaning them up, and doing various validations that the data makes any sense. If it were trivial for programs to use arbitrary data on the internet, there would be much bigger advancements than agoric computing.

Comment by korin43 on Why didn't Agoric Computing become popular? · 2019-02-16T17:06:03.112Z · score: 9 (5 votes) · LW · GW

I think the problem with this is that markets are a complicated and highly inefficient tool for coordinating resource consumption among competing individuals without needing an all-knowing resource allocator. That's extremely useful when you need to coordinate resource consumption among competing individuals, but in the case of programming, the functions in your program aren't really competing in the same way (there's a limited pool of resources, but for the most part each function needs a precise amount of memory, disk space, CPU time, etc., no more and no less).

There is also a close-enough-to-all-knowing resource allocator (the programmer or system administrator). The market model actually sounds like a plausibly workable way to do profiling, but it would be less overhead to just instrument every function to report what resources it uses and then centrally plan your resource economy (see the sketch below).
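
Here's a minimal sketch of what I mean by instrumenting functions, assuming Python and only the standard library (the instrumented function and the numbers are purely illustrative):

```python
import time
import tracemalloc
from collections import defaultdict
from functools import wraps

# Central record of what each function actually used -- the "planner's" view.
usage = defaultdict(lambda: {"calls": 0, "seconds": 0.0, "peak_bytes": 0})

def instrument(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            stats = usage[fn.__name__]
            stats["calls"] += 1
            stats["seconds"] += elapsed
            stats["peak_bytes"] = max(stats["peak_bytes"], peak)
    return wrapper

@instrument
def build_table(n):
    return [i * i for i in range(n)]

build_table(100_000)
print(dict(usage))  # the central planner's view of who used what
```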

In short, if everyone is a mindless automaton who takes only what they need and performs exactly what others require of them, and if the central planner can easily know exactly what resources exist and who wants them, then central planning works fine and markets are overkill (at least in the sense of being a useful tool; capitalism-as-a-moral-system is out-of-scope when talking about computer programs).

Note that even in cases like Amazon Web Services, the resource tracking and currency is just there to charge the end-user. Very few programs take these costs into account while they're executing (the exception is EC2 instance spot-pricing, but I think it's a stretch to even call that agoric computing).

Also, one other thing to consider is that agoric computing trades off something really, really cheap (computing resources) for something really, really expensive (programmer time). Most people don't even bother profiling because programmer time is dramatically more valuable than computer parts.

Comment by korin43 on Minimize Use of Standard Internet Food Delivery · 2019-02-11T18:07:08.716Z · score: 4 (3 votes) · LW · GW

The thing I don't understand is how the market got (and stays) this way. Slice successfully created a new (much lower margin) service for this. Why is everyone else putting up with 30% fees on something that's trivial to replace? For example, why aren't all of the businesses using ChowNow?

Presumably part of this is that some ordering systems get top billing in places like Google Maps, but given that Google Maps seems to show every order system under the sun, it can't be *that* hard to get a new one in there.

Also, that article seems to conflate services like UberEats, which provide their own delivery drivers and are plausibly worth paying a large fee to, with services like GrubHub, which are just online ordering systems and could presumably be replaced trivially.

Comment by korin43 on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-24T21:34:55.912Z · score: 1 (1 votes) · LW · GW

Looks like you can watch the game vs TLO here: https://www.youtube.com/watch?v=DpRPfidTjDA

I can't find the later games vs MaNa yet.

Comment by korin43 on Clothing For Men · 2019-01-20T19:17:31.005Z · score: 1 (1 votes) · LW · GW

Haha, writing my comments was way easier since you already covered the hard parts in the article, so I could just make short comments about the few places where I disagree.

Comment by korin43 on Clothing For Men · 2019-01-17T21:12:31.392Z · score: 2 (2 votes) · LW · GW

I feel like this article is more optimized for European / conservative US fashion. In most of the places I've lived in the US, you could follow basically the same rules but go significantly more casual. For example, you still want to get basically the same colors, materials, logos, etc., but get jeans, t-shirts, and (maybe) nice-looking hoodies instead of button-up shirts, chinos, and sweaters.

Comment by korin43 on Clothing For Men · 2019-01-17T21:05:35.618Z · score: 4 (3 votes) · LW · GW

I think shirts like this could help your status within small subcultures. I think the article is more about how to dress to maximize status in the overarching culture. Depending on your goals, it could plausibly be worth it to optimize for a subculture instead, although I think such cases are probably uncommon (since most subcultures are fine with normal fashion too).

Comment by korin43 on Clothing For Men · 2019-01-17T20:56:49.609Z · score: 6 (5 votes) · LW · GW

I upvoted this article because the general advice is very good, although I disagree with most of the specific advice (the brands, and which pieces of clothing are most important). Fancier brands are generally nicer in ways that have nothing to do with fashion (nicer materials, more comfortable). Pretty much any brand works fine if you can find the right fit and colors. You may need to explore multiple brands to find clothes that fit you, but that doesn't mean you have to go straight to expensive clothes. I can't find anything that fits me at Walmart but everything at Target does, and their prices are very similar.

I started wearing relatively expensive clothing in the last few years, but it's entirely for reasons that aren't obvious visually (jeans with a very slight stretch around the waist are a lot more comfortable, and wool shirts dry quickly and don't smell bad after physical activity).

Comment by korin43 on Double-Dipping in Dunning--Kruger · 2018-11-28T15:50:54.641Z · score: 2 (4 votes) · LW · GW

Please link more of your posts here. I looked through the history on your blog and there are quite a few that I think would be relevant and useful for people here. In particular, I think people would get a lot out of the posts about how to make friends. Some of the other posts have titles that look interesting too, but I haven't had time to read them yet.

Comment by korin43 on Real-time hiring with prediction markets · 2018-11-10T02:08:34.356Z · score: 3 (3 votes) · LW · GW

I wonder if it's just the field I'm in, but this doesn't match what I've seen as a software engineer. Companies frequently retroactively create openings if someone good enough applies (I've seen this happen at every company I've worked at, and it's the official policy at my current company).

I also don't think the people in charge of hiring care that much about salary (they don't want to pay more than they need to, but realistically, how good someone is and how long they'll stay at a company matter a lot more). Part of it is that the pool of qualified applicants is much smaller than most people think, so the situation of deciding between two (good enough) candidates for one opening is rare (it has never happened to me).

Comment by korin43 on Design 3: Intentionality · 2018-09-22T13:52:39.967Z · score: 2 (2 votes) · LW · GW

I've been using Standard Notes. It's basically just a networked text editor which can display structured text nicely.

Comment by korin43 on Design 3: Intentionality · 2018-03-29T19:35:25.871Z · score: 14 (4 votes) · LW · GW

I know it's kind of a weird thing for this post to do, but this one finally gave me the push I needed to set up decent journaling software, so I can do better planning and also have something to reference in daily stand-up meetings instead of trying to come up with a summary of the previous day on the spot.

Comment by korin43 on [deleted post] 2018-02-07T15:24:53.354Z

Not sure why this got voted down so badly, but I can't get the link to work. Maybe you missed something when posting it?

Comment by korin43 on Security services relationship to social movements · 2017-12-17T16:39:49.107Z · score: 3 (1 votes) · LW · GW

This seems to be conflating two completely different phrases that use the word security. Security mindset has nothing at all to do with working for a government agency or being a spy. It's a similar concept to "antifragility" except that you're assuming that bad things don't just happen by chance.

Comment by korin43 on Fixing science via a basic income · 2017-12-09T18:48:23.745Z · score: 4 (2 votes) · LW · GW

Wouldn't this just push the problem back, so everyone would fight over PhD programs so they can get a guaranteed income? I imagine this would select for people who are good at impressing schools over people who are good at research.

Comment by korin43 on The Critical Rationalist View on Artificial Intelligence · 2017-12-06T18:37:45.634Z · score: 0 (0 votes) · LW · GW

> The purpose of this post is not to argue these claims in depth but to summarize the Critical Rationalist view on AI and also how this speaks to things like the Friendly AI Problem.

Unfortunately that makes this post not very useful. It's definitely interesting, but you're just making a bunch of assertions with very little evidence (mostly just that smart people like Ayn Rand and a quantum physicist agree with you).

Comment by korin43 on An Educational Curriculum · 2017-11-22T17:54:37.645Z · score: 10 (3 votes) · LW · GW

I don't know much about the specific goal you're working on, but my experience with CS has been that the best way to learn is to work on real problems with people who know what they're doing. I've learned significantly more from my internship and jobs than I did in school, and that seems to be pretty common. Rather than trying to design a curriculum, I'd advise trying to find someone doing what you're interested in and get a job/internship/apprenticeship working with them. After you've done that for a few years, I suspect you'll know what you're not getting out of the current deal and can either go off on your own or find a different set of teachers.

Comment by korin43 on The Copernican Revolution from the Inside · 2017-11-02T06:36:04.604Z · score: 31 (12 votes) · LW · GW

I feel like you may have gone too far in the other direction then, since what I got out of this was definitely "there wasn't any evidence for heliocentrism and people just liked it better for philosophical reasons". As far as I know, the standard science-education explanation for heliocentrism involves Newtonian physics, observations that people weren't able to make at the time (like you said, Tycho tried), and hindsight.

Can you expand on what the evidence that should have convinced people was? I feel like this article is a puzzle that's missing key information.

Comment by korin43 on Satoshi Nakamoto? · 2017-11-02T05:55:45.121Z · score: 10 (4 votes) · LW · GW

Why would a human withdraw from the account but an AI wouldn't? It seems like you're assuming either:

  1. The correct decision is to not withdraw the money. No human could be smart enough to figure this out, but an AI would. Are you an AI?
  2. The correct decision is to withdraw the money, and the AI is stupidly not doing it. Why is the AI stupider than a human?

I suppose Bitcoin's wastefulness would be a good cover for an AI wanting to use a bunch of computers without making people suspicious. I doubt it's the fastest way a super intelligent AI could make money though.

Comment by korin43 on Research Syndicates · 2017-11-02T05:40:23.277Z · score: 6 (2 votes) · LW · GW

You explain why new researchers would want to join, but why would top researchers want to? It seems like they lose money and time in exchange for that warm feeling you get when helping people. Would that be enough?

In terms of legality, worker-owned corporations exist, but I suspect it would be hard to convince people to give unrestricted funding to the corporation (I think most government grants are fairly specific about what you can spend the money on?).

My (outsider) perspective of the field is that private funding for academic-style research is uncommon and generally involves the funder directly hiring the researchers, which seems to have some things in common with what you're saying (although since the researchers typically don't own any portion of the organization, they presumably have fewer incentives to mentor other people).

If non-academic research counts (researching something so you can build a product), then I think something similar to what you're proposing happens in some parts of the startup scene. For example, a group of people get together with an idea for a new product, start a company, and research how to create/improve the product. Once the company transitions from solving scientific/technical problems to solving organizational problems, the founders leave and join or found new startups. The main difference here is that it's a short-term cycle instead of a long-term commitment, but that doesn't seem to stop people from providing mentoring.

Comment by korin43 on Questions about AGI's Importance · 2017-10-31T22:15:25.048Z · score: 1 (1 votes) · LW · GW

I suspect this has been answered on here before in a lot more detail, but:

  • Evolution isn't necessarily trying to make us smart; it's just trying to make us survive and reproduce
  • Evolution tends to find local optima (see: obviously stupid designs like how the optic nerve works)
  • We seem to be pretty good at making things that are better than what evolution comes up with (see: no birds on the moon, no predators with natural machine guns, etc.)

Also, specifically in AI, there is some precedent for only a few years passing between "researchers get AI to do something at all" and "this AI is better at its task than any human who has ever lived". Chess did it a while ago. It just happened with Go. I suspect we're crossing that point with image recognition now.

Comment by korin43 on LW 2.0 Open Beta Live · 2017-10-31T22:03:35.925Z · score: 0 (0 votes) · LW · GW

Woo!

Also if anyone else gets a "schema validation error" when changing this setting, remove the "Website" from your profile: https://github.com/Discordius/Lesswrong2/issues/225

Comment by korin43 on Feedback on LW 2.0 · 2017-10-05T15:10:40.866Z · score: 3 (3 votes) · LW · GW

One feature makes it worth it on its own: people are posting stuff without worrying as much about whether it's "good enough" or fits the theme (suddenly we're getting posts from everyone again).

The site itself is kind of annoying though, so my main interactions are reading the article in my RSS feed, opening the page, voting, then closing the page.

I like the bigger font size, but agree that it might be excessive (16px seems about right to me).

Commenting needs some serious UX attention. The comment box doesn't look like a comment box and the text is massive.

The front page doesn't make any sense. I understand showing top content for people who aren't logged in, but once someone is logged in, the front page should be new articles.

The site seems overly fancy with all the JavaScript loading and whatnot. Loading a page with nothing but text should be instantaneous but somehow manages to take several seconds.

Comment by korin43 on Feedback on LW 2.0 · 2017-10-05T15:04:49.445Z · score: 2 (2 votes) · LW · GW

This is huge for me: the site is actually usable on a phone. It's annoyingly slow, but LessWrong v1 is unusable on anything with a small screen (note: this could have been fixed in v1 also, but the two pull requests to fix it have been ignored for around a year).

Comment by korin43 on [deleted post] 2017-10-01T01:30:59.741Z

I just wanted to comment that I'm a huge fan of this series, so please don't stop just because the articles are getting shorter. I mean, honestly, short posts are easier to get through anyway.

Comment by korin43 on Marketing Failure · 2017-09-22T14:55:12.621Z · score: 5 (3 votes) · LW · GW

I'm concerned that this is one of those "I don't see the value in this thing so it must be useless" situations. I'm not a fan of the advertising industry and use ad blockers on principle, but it's a big stretch to assume that the entire sales department is engaged in a useless zero-sum game. If I can be snarky for a minute:

> If we can make a system such that the consumers would choose their products strictly on the basis of how much does the product fit to their needs, and make it easier for consumers to find these products, we can change the incentive system in such a way that the manufacturers would focus more on making better products instead of competing in the non-productive marketing dimension.

I propose that our company hire people whose job is to find customers whose needs are met by our product (or tell the engineering department how our product could better meet their needs), and then inform them of how our products meet their needs. I will call this department "marketing and sales".

I do like your idea of a prediction market for how much I'll like a product though. Having some way to get good things without having to do my own research would be nice.

Comment by korin43 on Beta - First Impressions · 2017-09-22T13:23:23.329Z · score: 1 (1 votes) · LW · GW

There's some sort of top-level RSS feed: https://www.lesserwrong.com/feed.xml

I don't know if there's any way to subscribe to individual people/sections.

Comment by korin43 on LW 2.0 Open Beta Live · 2017-09-21T13:37:43.613Z · score: 3 (3 votes) · LW · GW

Agghhh I can't leave this tab open because it does this:

https://media.giphy.com/media/VXND9U858tCH6/giphy.gif

Comment by korin43 on LW 2.0 Open Beta Live · 2017-09-21T13:32:26.211Z · score: 1 (1 votes) · LW · GW

For anyone else who finds Intercom the most annoying feature in existence, you can add an AdBlock / uBlock rule to block it: ###intercom-container

Although it will still screw with the page title.

Comment by korin43 on Machine Learning Group · 2017-07-17T17:19:07.412Z · score: 3 (3 votes) · LW · GW

> As a matter of short term practicality currently we don't have the hardware for GPU acceleration. This limits the things we can do, but at this stage of learning most of the time spent is on understanding and implementing the basic concepts anyway.

For what you're doing, GPU stuff probably doesn't make that big of a difference. Convolutional networks will train and run faster, but a digit recognition network should be tiny and fast anyway.
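
For scale, here's a hedged sketch of the kind of network I mean, assuming Keras/TensorFlow (which your group may not be using): a two-layer fully-connected MNIST classifier with roughly 100k parameters, which trains in seconds per epoch on a CPU:

```python
import tensorflow as tf

# Tiny fully-connected digit classifier: Flatten -> Dense(128) -> Dense(10).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # roughly 101k trainable parameters
```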

Comment by korin43 on Becoming stronger together · 2017-07-11T23:16:41.427Z · score: 1 (1 votes) · LW · GW

In 2016, the "Less Wrong Diaspora" was 86% non-hispanic "white": http://lesswrong.com/lw/nmk/2016_lesswrong_diaspora_survey_analysis_part_one/

Comment by korin43 on a different perspecive on physics · 2017-06-28T03:13:45.988Z · score: 0 (0 votes) · LW · GW

You might like the book "The End of Time" by Julian Barbour. It's about an alternative view of physics where you rearrange all of the equations to not include time. The book describes the result sort of similarly to what you're suggesting, where the system is defined as the relationship between things and the evolution of those relationships and not precise locations and times.

Comment by korin43 on Priors Are Useless · 2017-06-22T16:59:46.426Z · score: 5 (5 votes) · LW · GW

I think you lost me at the point where you assume it's trivial to gather an infinite amount of evidence for every hypothesis.

Comment by korin43 on A new, better way to read the Sequences · 2017-06-04T15:18:57.423Z · score: 3 (3 votes) · LW · GW

This seems like a good place to ask: How do people read long web-based books like this without losing their place? I usually look for ebooks just because my ebook reader will remember what page I was on. I used to use bookmarks for this, but I use 4 different computers on a regular basis (two laptops, a tablet, and a phone). Instapaper / Pocket work OK, but then if I add a bunch of links I'll forget about the older ones. Solutions?

Comment by korin43 on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-23T20:07:10.833Z · score: 0 (0 votes) · LW · GW

Does it use anything non-local? The experiments in the article use macroscopic fluids, which presumably don't have non-local effects.

Comment by korin43 on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-23T16:44:46.323Z · score: 0 (0 votes) · LW · GW

Note that the theory seems to have been around since the 1930s, but these experiments are new (2016).

Comment by korin43 on Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? · 2017-05-23T16:42:51.794Z · score: 1 (1 votes) · LW · GW

"The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.

Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features."

Comment by korin43 on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-13T14:43:23.439Z · score: 3 (3 votes) · LW · GW

From the perspective of the God of Evolution, we are the unfriendly AI:

  • We were supposed to be compelled to reproduce, but we figured out that we could get the reward by disabling our reproductive functions and continuing to go through the motions.
  • We were supposed to seek out nutritious food and eat it, but we figured out that we could concentrate the parts that trigger our reward centers and just eat that.

And of course, we're unfriendly to everything else too:

  • Humans fight each other over farmland (= land that can be turned into food which can be turned into humans) all the time
  • We're trying to tile the universe with human colonies and probes. It's true that we're not strictly trying to tile the universe with our DNA, but we are trying to turn it all into human things, and it's not uncommon for people to be sad about the parts of the universe we can never reach and turn into humantronium.
  • We do not love or hate the cow/chicken/pig, but they are made of meat which can be turned into reward center triggers.

As to why we're not exactly like a paperclip maximizer, I suspect one big piece is:

  • We're not able to make direct copies of ourselves or extend our personal power to the extent that we expect AI to be able to, so "being nice" is adaptive because there are a lot of things we can't do alone. We expect that an AI could just make itself bigger or make exact copies that won't have divergent goals, so it won't need this.

Comment by korin43 on What conservatives and environmentalists agree on · 2017-04-24T22:48:45.501Z · score: 0 (0 votes) · LW · GW

This makes me wonder how much of the liberal/conservative divide over how seriously we take minor acts of terrorism has to do with direct experience of big cities. If you don't live in a city, hearing about a terrorist attack in a city is probably really scary, but if you've actually lived in a big city, a few people dying every few years is incredibly uneventful (for comparison, 318 people were murdered in my city last year).

Comment by korin43 on April '17 I Care About Thread · 2017-04-20T01:24:38.641Z · score: 0 (0 votes) · LW · GW

I sometimes wonder if there's more low-hanging fruit in lives that could be saved if car safety were improved. Self-driving cars are obviously one way to do that, but I worry that we're ignoring easier solutions because self-driving cars will solve the problem eventually (not that I know what those easier solutions are).

Comment by korin43 on What's up with Arbital? · 2017-03-29T19:38:22.181Z · score: 10 (8 votes) · LW · GW

As a software engineer, I find it strange that Arbital is trying to be an encyclopedia, debate system, and blogging site at the same time. What made you decide to put those features together in one piece of software?

Comment by korin43 on Building Safe A.I. - A Tutorial for Encrypted Deep Learning · 2017-03-23T20:20:28.734Z · score: 0 (0 votes) · LW · GW

I think being encrypted may not actually help much with the control problem, since the problem isn't that we expect an AI to fully understand what we want and then be evil; it's that we're worried an AI won't be optimizing for what we want. Not knowing what the outputs actually do doesn't seem like it would help at all (except that the AI would only have the inputs we want it to have).