Malentropic Gizmo's Shortform

post by Malentropic Gizmo (malentropicgizmo) · 2024-01-23T12:11:11.837Z · LW · GW · 11 comments


Comments sorted by top scores.

comment by Malentropic Gizmo (malentropicgizmo) · 2024-01-23T12:11:14.411Z · LW(p) · GW(p)

A coin has two sides. One side commonly has a person on it and is called Heads; the other usually has a number or some other picture on it and is called Tails. What I don't understand is why the creator (I'm unsure whether we should blame Adam Elga, Robert Stalnaker, or Arnold Zuboff) of the Sleeping Beauty Problem specified the problem so that the branch with the extra person corresponds to the Tails side of the coin. This almost annoys me more than not calling Superpermutations supermutations, or Poisson equations Laplace equations, or Laplace equations Harmonic equations.

comment by Malentropic Gizmo (malentropicgizmo) · 2024-04-09T13:49:59.703Z · LW(p) · GW(p)

Random Musing on Autoregressive Transformers resulting from Taelin's A::B Challenge

Let's model an autoregressive transformer as a Boolean circuit or, for simpler presentation, an n-ary circuit with m inputs and 1 output.

Model the entire system the following way (a minimal Python sketch of this loop follows the list): given some particular length-m starting input,

  1. the circuit calculates the output token (/integer) from the input,
  2. appends the calculated output token to the end of the input word,
  3. deletes the first token of the input,
  4. goes to step 1.
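
A minimal sketch of this loop, with `circuit` as an illustrative stand-in for the fixed n-ary circuit (any function from an m-token window to one token); the names here are mine, not part of the original setup:

```python
def run(circuit, window, steps):
    """Iterate the sliding-window system: `window` is a list of m tokens
    (ints in range(n)); yields each newly computed token."""
    window = list(window)
    for _ in range(steps):
        out = circuit(window)  # 1. circuit computes the output token
        window.append(out)     # 2. append it to the end of the input word
        window.pop(0)          # 3. delete the first token of the input
        yield out              # 4. go to step 1

# Example: n = 10, m = 3, with a toy "circuit" that sums the window mod 10.
print(list(run(lambda w: sum(w) % 10, [1, 2, 3], 5)))  # -> [6, 1, 0, 7, 8]
```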

It's easy to see that, strictly speaking, this system is not very powerful computationally: we have a finite number of possible tokens (n) and a finite-length context window (m), so there are only finitely many possible states (n^m possible windows; e.g. with n = 2 and m = 10, just 2^10 = 1024). Therefore our model is only as powerful as a finite state machine (it's pretty much equivalent in its behaviour to a regular grammar containing only rules of the form A → aB).

However, real-life computers also have finite memory, yet we never let that bother us!

How should we manually design our circuit so that, with an appropriate selection of the initial input, we can solve as many types of problems as possible?

I think one very straightforward solution is to simply emulate a computer with random-access memory in the following way:

  • Select some fixed instruction set with k instructions, and from our n tokens choose k to correspond to these k instructions.
  • Select another k tokens from the remaining ones to denote that a given instruction is under execution.
  • Design the circuit such that if the emulated computer's memory is M_t (an m-element vector, with M_{t,i} its i-th token) after the execution of the t-th instruction, then our circuit should compute the following tokens (including the starting input): M_{0,1}, M_{0,2}, ..., M_{0,m}, M_{1,1}, M_{1,2}, ..., M_{1,m}, M_{2,1}, ...
This can be done efficiently with relatively few circuit nodes and relatively low depth, but I don't want to write down the construction.
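
To make the target behaviour concrete without writing down the circuit itself, here is a toy sketch of the token stream such a circuit would have to produce; `step` is a hypothetical stand-in for executing one instruction on an m-token memory snapshot (everything here is illustrative, not the actual construction):

```python
def snapshot_stream(step, memory, t_max):
    """Yield the concatenation M_0, M_1, ..., M_{t_max}, token by token.
    A circuit that reproduces this stream with its sliding window
    emulates the underlying machine."""
    for _ in range(t_max + 1):
        yield from memory      # emit the current snapshot M_t
        memory = step(memory)  # execute the next instruction

# Example: a toy machine whose single instruction increments cell 0 mod n.
n = 10
step = lambda mem: [(mem[0] + 1) % n] + mem[1:]
print(list(snapshot_stream(step, [0, 7, 7, 7], 2)))
# -> [0, 7, 7, 7, 1, 7, 7, 7, 2, 7, 7, 7]
```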

It's interesting to see that actual large autoregressive transformers on human language seem to fit this model more and more closely:

  1. With GPT-3 (possibly GPT-2), it was shown that after an instruction is given in the initial prompt, the transformer can execute that instruction in its continuation (e.g. "Translate this French sentence to English. French: Je mange une pomme. English:"). This corresponds to having a fixed instruction set in the above model (except the instruction set is plain English rather than single tokens).
  2. With ChatGPT-3.5, and even more with newer models, it was shown that chain-of-thought prompting works well for solving more complex problems than asking for a solution immediately. I think the newest models often don't even require an explicit instruction to break their reasoning down into steps; they often do so anyway. I expect this behaviour to become more and more common as newer models get smarter and also encounter more and more transformer/human interactions in their training sets. This corresponds to iteratively calculating M_1, M_2, ... according to the given instructions. However, at this point the instructions and subsequent "memory snapshots" are all in the transformer's context window.
  3. Might we expect this to change? Will future models be able to notice when the initial prompt or some still relevant previous data is about to exit the context window and autonomously re-generate them and subsequently pick up the calculation where they left off? I expect they will! What do you think?
comment by Malentropic Gizmo (malentropicgizmo) · 2024-03-24T14:37:40.213Z · LW(p) · GW(p)

I will read the first fiction book that is recommended to me (provided I haven't already read it)! Time is of the essence! I will read anything, but if you want to recommend something I am more likely to enjoy, here are a few things about me: I like sci-fi, fantasy, metaethics, computers, games, computer science theory, artificial intelligence, fitness, D&D, and edgy/shock humor.

Replies from: benjamincosman, niplav, metachirality
comment by benjamincosman · 2024-03-24T15:25:28.662Z · LW(p) · GW(p)

Too Like the Lightning by Ada Palmer :)

Replies from: Seth Herd, malentropicgizmo
comment by Seth Herd · 2024-03-24T16:07:36.889Z · LW(p) · GW(p)

I second this recommendation. This book was amazing. It's quite unlike other scifi, and that's a good thing.

comment by Malentropic Gizmo (malentropicgizmo) · 2024-03-24T17:35:56.771Z · LW(p) · GW(p)

Thank you, I will read this one!

comment by niplav · 2024-03-24T18:36:30.406Z · LW(p) · GW(p)

The titular short story from Luminous (Greg Egan, 1995).

Replies from: malentropicgizmo
comment by Malentropic Gizmo (malentropicgizmo) · 2024-03-25T17:51:09.628Z · LW(p) · GW(p)

I love Egan! I will read Luminous next! Thanks!

comment by metachirality · 2024-03-24T17:09:35.973Z · LW(p) · GW(p)

I would be surprised if you haven't read Unsong already.

Replies from: malentropicgizmo
comment by Malentropic Gizmo (malentropicgizmo) · 2024-03-24T17:36:24.637Z · LW(p) · GW(p)

Yes, but good recommendation otherwise, thank you!

comment by Malentropic Gizmo (malentropicgizmo) · 2024-03-08T18:09:46.373Z · LW(p) · GW(p)

So I've reached a point in my amateur bodybuilding journey where I am satisfied with my arms. Of course, I regularly see and talk with guys who have better physiques, but it doesn't bother me: when I look in the mirror, I'm still happy.

This, apparently, is not the typical experience. In the bodybuilding noosphere, there are many memes born from the opposite experience: "The day you start lifting is the day you're never big enough.", "You will never be as big as your pump.", etc.

My question is about a meme I saw recently which DOES mirror my own experience, but which, unfortunately, I can't find. IIRC the meme had an art style similar to Cyanide and Happiness (or maybe SMBC). It depicted a person who, after making some initial progress in bodybuilding, talked with a much more advanced bodybuilder who tried to advise him on how to continue building even more muscle. The new guy's answer was something along the lines of "thanks, but I'm okay with my current results", and the last panel depicted the buff guy wistfully watching the sunset alone.

Can someone maybe help me find the meme? Any help is appreciated!