Sodium's Shortform

post by Sodium · 2024-09-21T04:45:27.353Z · LW · GW · 10 comments

10 comments

Comments sorted by top scores.

comment by Sodium · 2024-11-19T07:42:31.551Z · LW(p) · GW(p)

I think[1] people[2] probably trust individual tweets way more than they should. 

Like, just because someone sounds very official and serious, and it's a piece of information that's in line with your worldview, doesn't mean it's actually true. Or maybe it is true, but missing important context. Or it's saying A causes B when it's more like A and C and D all cause B together, and actually most of the effect is from C, but now you're laser-focused on A.
 

Also, you should keep in mind that the tweets you're seeing are optimized for piquing the interest of people like you, not for truth.

I'm definitely not the first person to say this, but it feels worth saying again.

  1. ^

    75% confident, maybe?

  2. ^

    including some rationalists on here

comment by Sodium · 2024-10-25T21:30:32.586Z · LW(p) · GW(p)

Wait a minute, "agentic" isn't a real word? It's not on dictionary.com, in Merriam-Webster, or in the Oxford English Dictionary.

Replies from: Richard_Kennaway, cubefox, niplav
comment by Richard_Kennaway · 2024-10-26T06:59:29.947Z · LW(p) · GW(p)

A word has to be real already to get into a dictionary.

comment by niplav · 2024-10-26T00:50:06.054Z · LW(p) · GW(p)

I think normally "agile" would fulfill the same function (per its etymology), but it's very entangled with agile software engineering.

comment by Sodium · 2024-09-21T04:45:27.506Z · LW(p) · GW(p)

Pre-registering a71c97bb02e7082ca62503d8e3ac78dc9f554f524a72ad6a1392cf2d34f398d7

Replies from: niplav
comment by niplav · 2024-09-21T11:36:00.222Z · LW(p) · GW(p)

When will this be revealed?

Replies from: Sodium, Sodium
comment by Sodium · 2024-09-21T16:56:01.823Z · LW(p) · GW(p)

Wait, my bad, I didn't expect so many people to actually see this.

This is kind of silly, but I had an idea for a post that I thought someone else might write up before I had it written out. So I figured I'd post a hash of the thesis here.

It's not just about, idk, getting more street cred for coming up with an idea. This is also what I'm planning to write for my MATS application to Lee Sharkey's stream. So in case someone else did write it up before me, I would have some proof that I didn't just copy the idea from a post.

(It's also a bit silly because my guess is that the thesis isn't even that original)

Edit: to answer the original question, I will post something on this before October 6th, if all goes to plan.

comment by Sodium · 2024-10-03T21:28:41.528Z · LW(p) · GW(p)

That was the SHA-256 hash for:

What if a bag of heuristics is all there is and a bag of heuristics is all we need? That is, (1) we can decompose each forward pass in current models into a set of heuristics chained together and (2) heauristics chained together is all we need for agi

Here's my full post on the subject [LW · GW]
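For anyone who wants to check the commitment themselves, here's a minimal sketch in Python. The exact byte string that was hashed (capitalization, punctuation, typos, whether there's a trailing newline) is an assumption reconstructed from the text above, so the digest only matches if the string is reproduced exactly as it was when hashed.

```python
import hashlib

# Hash pre-registered in the earlier comment (2024-09-21).
PREREGISTERED = "a71c97bb02e7082ca62503d8e3ac78dc9f554f524a72ad6a1392cf2d34f398d7"

# Claimed preimage, reproduced from the comment above.
# Typos are preserved deliberately: they were part of the hashed text,
# and any change in spacing or spelling yields a different digest.
claimed = (
    "What if a bag of heuristics is all there is and a bag of heuristics "
    "is all we need? That is, (1) we can decompose each forward pass in "
    "current models into a set of heuristics chained together and (2) "
    "heauristics chained together is all we need for agi"
)

digest = hashlib.sha256(claimed.encode("utf-8")).hexdigest()
print(digest)
print("Matches pre-registered hash:", digest == PREREGISTERED)
```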

comment by Sodium · 2024-11-24T05:59:10.361Z · LW(p) · GW(p)

I think people see it and think "oh boy I get to be the fat people in Wall-E"

(My friend on what happens if the general public feels the AGI)