Posts

The Neuralink Monkey Demo 2021-05-03T02:29:05.288Z

Comments

Comment by scroogemcduck1 on Biological risk from the mirror world · 2024-12-21T19:51:34.905Z · LW · GW

I basically agree. The following is speculation/playing with an idea, not something I think is likely true.

Imagine it's the future. It becomes clear that a lab could easily create mirror bacteria if they wanted to, or even deliberately create mirror pathogens. It may even get to the point where countries explicitly threaten to do this.

At that point, it might be a good idea to develop mirror life for the purposes of developing countermeasures.

I'm not that familiar with how modern vaccines and drugs are made. Can a vaccine be made without involving a living cell? What about an antibiotic?

Comment by scroogemcduck1 on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2024-11-28T05:04:58.455Z · LW · GW

There's The Bayesian Conspiracy's discord server. No need to listen to the podcast or to related podcasts to participate in discussion.

Comment by scroogemcduck1 on Bets and updating · 2024-11-06T05:39:28.394Z · LW · GW

They don't need to solve the whole Halting Problem, for the same reason you don't need to contradict Rice's theorem given some proof (which I take as an axiom for the sake of the hypothetical) that the predictor is in fact perfect and utility-maximizing. We could also just say there is a high probability that they will do this. Furthermore, you can imagine restricted subsets of Turing machines for which the halting problem is computable. But also, the only computers that exist in reality are really finite state machines.
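To illustrate that last point: for a deterministic finite state machine, halting is decidable, because a run that ever revisits a state must loop forever. A minimal sketch (the machine encoding here is my own, just for illustration):

```python
def halts(transitions, start, halt_state):
    """Decide halting for a deterministic finite state machine.

    `transitions` maps each state to its successor. Since there are
    finitely many states, a run that revisits any state is in an
    infinite loop, so we can answer after at most |states| steps.
    """
    seen = set()
    state = start
    while state != halt_state:
        if state in seen:
            return False  # revisited a state: infinite loop
        seen.add(state)
        state = transitions[state]
    return True

# A machine that halts: a -> b -> halt
print(halts({"a": "b", "b": "halt"}, "a", "halt"))  # True
# A machine that loops: a -> b -> a
print(halts({"a": "b", "b": "a"}, "a", "halt"))  # False
```

The same trick doesn't scale to real computers in practice, of course, since their state space is astronomically large, but it shows why the Halting Problem's undecidability doesn't apply in principle.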

Comment by scroogemcduck1 on Bets and updating · 2024-11-06T05:35:31.357Z · LW · GW

Well, the perplexing situation doesn't actually happen if the predictors are good enough, because they'll predict you both won't update and won't take the bet. Thus you'll never have been approached in the first place.

Comment by scroogemcduck1 on Nuclear war is unlikely to cause human extinction · 2024-08-26T23:22:50.874Z · LW · GW

Earth's land area is 148.94 million km^2, not the ~500 million km^2 you claim (which is roughly the entire surface area of the Earth).
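Quick check of the arithmetic, using the mean radius r ≈ 6371 km and the sphere surface formula 4πr²:

```python
import math

earth_radius_km = 6371  # mean radius of the Earth
total_surface = 4 * math.pi * earth_radius_km ** 2  # km^2
print(round(total_surface / 1e6, 1))  # ~510.1 million km^2, matching the ~500 million figure

land_area = 148.94e6  # km^2
print(round(land_area / total_surface, 2))  # land is ~0.29 of the total surface
```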

Comment by scroogemcduck1 on Logical Share Splitting · 2023-09-15T13:47:30.892Z · LW · GW

I assume your proposal requires trades be public, so that someone exploiting a proof to get free money ends up revealing the proof to others.

Until computerized theorem proving vastly improves, this system will only prove statements after the first proof is accepted.

Comment by scroogemcduck1 on Reflections on a year of college · 2023-07-08T20:42:06.209Z · LW · GW

This is a very good collection and distillation of rational college advice. However, there is very little advice from you, about your own year, which is the advice the title led me to expect.

Comment by scroogemcduck1 on Gears in understanding · 2022-05-27T17:55:49.716Z · LW · GW

I mention this because sometimes in rationalist contexts, I've felt a pressure to not talk about models that are missing Gears. I don't like that. I think that Gears-ness is a really super important thing to track, and I think there's something epistemically dangerous about failing to notice a lack of Gears. Clearly noting, at least in your own mind, where there are and aren't Gears seems really good to me. But I think there are other capacities that are also important when we're trying to get epistemology right.


A good way to notice the lack of gears is to explicitly label the non-gearsy steps. 

  • My high school calculus teacher would draw a big cloud with the word "POOF!" while saying "Woogie Woogie Boogie!" whenever there was an unproven but vital statement (since high school calculus doesn't rigorously prove many of its notions). Ever since, whenever I explain math to someone, I always make very clear which statements I don't feel like going through the proofs of or will prove later ("Magic"), and whose proofs I don't know ("Dark Magic"), as opposed to those I'll happily explain.
  • Similarly, emergent phenomena should be called "Magic" (though this only works after internalizing that mysterious answers aren't answers; it's just "Gears work in mysterious ways," but in an absurd enough way to make it clear that the problem is with your understanding).

Comment by scroogemcduck1 on Rationalists Should Learn Lock Picking · 2022-03-19T23:06:43.721Z · LW · GW

They aren't. brook is saying that picking locks might damage them, and damaging locks not in use at worst means you have to throw away a padlock, whereas damaging locks in use might mean you can't open your front door.

Comment by scroogemcduck1 on The Neuralink Monkey Demo · 2021-05-04T21:52:34.278Z · LW · GW

Woah, just on a watch-like device! How far along is this technology?

Comment by scroogemcduck1 on The Neuralink Monkey Demo · 2021-05-03T14:01:11.534Z · LW · GW

If this has been a thing for 30 years, why is the hardware best-in-class? Also, is there a presentation that is more impressive/innovative but perhaps less theatrical?

Comment by scroogemcduck1 on Prediction and Calibration - Part 1 · 2021-05-02T16:18:12.778Z · LW · GW

I think it is best to try to edit it anyway. If you have already seen the post, it does not take long to check that no trollish line has been added. Also, you should do it for the sake of mathematical accuracy.

Comment by scroogemcduck1 on Where do LessWrong rationalists debate? · 2021-04-30T04:24:11.931Z · LW · GW

Hey! There are at least 3 channels where TBC-and-related-podcast-content is discussed!

(though, if you are only talking about the TBC podcast and not other podcasts hosted by the same people and that are plugged in the same places, then yes, there is only one channel).

Comment by scroogemcduck1 on Suffering · 2021-01-27T00:45:00.252Z · LW · GW

I cannot speak for Scott, but I can speculate. I am quite sure a rock doesn't have qualia, because it doesn't have any processing center, gives no sign of having any utility to maximize, and has no reaction to stimuli. It most probably doesn't have a mind.