Posts

I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn't kill us all. What are my best options, where can I make the most difference per dollar? 2022-08-02T12:08:46.674Z
What are these "outside of the Overton window" approaches to preventing AI apocalypse that Eliezer was talking about in his post? 2022-06-14T21:18:45.970Z
How would you explain Bayesian thinking to a ten year old? 2022-01-05T17:25:46.771Z
How do you write original rationalist essays? 2021-12-01T08:08:03.330Z

Comments

Comment by lumenwrites on What are these "outside of the Overton window" approaches to preventing AI apocalypse that Eliezer was talking about in his post? · 2022-06-14T21:18:46.207Z · LW · GW
Comment by lumenwrites on Book Launch: The Engines of Cognition · 2021-12-29T14:02:13.300Z · LW · GW

Looks amazing!

I'd love to buy an ebook version though. Or even better - an audiobook.

Comment by lumenwrites on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2021-12-01T17:52:43.734Z · LW · GW

I don't understand: its factors are 101 and 109, and both are greater than 100.

Comment by lumenwrites on How do you write original rationalist essays? · 2021-12-01T09:36:43.850Z · LW · GW

While I'd rather not test this empirically, I think I'm feeling pretty motivated to do this, and yet I can't. I'd really like to solve this issue without resorting to hiring a professional assassin on myself.

Comment by lumenwrites on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2021-12-01T07:52:16.312Z · LW · GW

That poem was amazing.

How does a person factorize 11,009 in their head?
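One plausible mental route (my assumption, not necessarily how the poster did it) is the difference-of-squares identity: notice that 11,009 sits just below 105² = 11,025, and the gap is 16 = 4², so n = a² − b² = (a − b)(a + b). A quick check:

```python
# Sketch of factoring 11,009 mentally via difference of squares:
# n = a^2 - b^2 = (a - b)(a + b), with a = 105 and b = 4.

n = 11_009
a, b = 105, 4
assert a * a - b * b == n   # 11,025 - 16 == 11,009
factors = (a - b, a + b)
print(factors)              # (101, 109)
```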

Comment by lumenwrites on Predictive Coding has been Unified with Backpropagation · 2021-04-04T06:23:13.643Z · LW · GW

You guys will probably find this Slate Star Codex post interesting:

https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/

Scott summarizes the Predictive Processing theory, explains it in a very accessible way (no math required), and uses it to explain a whole bunch of mental phenomena (attention, imagination, motor behavior, autism, schizophrenia, etc.).

Can someone ELI5/TLDR this paper for me, and explain it in a way that's more accessible to a non-technical person?

- How does backprop work if the information can't flow backwards?
- In Scott's post, he says that when lower-level sense data contradicts high-level predictions, the high-level layers can override the lower-level predictions without you noticing it. But if the low-level sense data has high confidence/precision, the higher levels notice it and you experience "surprise". Which of those is the equivalent of the backprop error? Is it the low-level predictions being overridden, the high-level layers noticing the surprise, or something else, like changing the connections between neurons to train the network and learn from the error somehow?
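The precision-weighting mechanism the question describes can be sketched numerically. This is an illustrative toy (the function name and numbers are my own, not from the paper): the prediction error is scaled by the precision assigned to the sense data, and that scaled error is what determines whether the higher levels "notice".

```python
# Toy sketch of a precision-weighted prediction error, the quantity
# predictive-processing accounts say decides whether low-level sense
# data gets to override a high-level prediction.

def weighted_error(prediction, observation, precision):
    """Prediction error scaled by the confidence (precision) of the data."""
    return precision * (observation - prediction)

# Low precision: the error is damped, so the high-level prediction
# effectively wins and the mismatch goes unnoticed.
print(weighted_error(prediction=1.0, observation=2.0, precision=0.1))

# High precision: the error stays large, propagates upward, and is
# experienced as "surprise".
print(weighted_error(prediction=1.0, observation=2.0, precision=0.9))
```

Under this toy picture, the question amounts to asking which of these signals plays the role of the backprop error term.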