Posts

Malentropic Gizmo's Shortform 2024-01-23T12:11:11.837Z

Comments

Comment by Malentropic Gizmo (malentropicgizmo) on The Closed Eyes Argument For Thirding · 2024-04-10T12:20:58.819Z · LW · GW

Yes, I basically agree: My above comment is only an argument against the most popular halfer model. 

However, in the interest of sparing readers' time, I have to mention that your model doesn't have a probability for 'today is Monday' or for 'today is Tuesday'. If they want to see your reasoning for this choice, they should start with the post you linked second instead of the post you linked first.

Comment by Malentropic Gizmo (malentropicgizmo) on D&D.Sci: The Mad Tyrant's Pet Turtles [Evaluation and Ruleset] · 2024-04-09T16:22:38.446Z · LW · GW

I had to use the Keras backend's switch function for the automatic differentiation to work, but basically yes.
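For anyone curious, the pattern looks roughly like this (a minimal sketch; the penalty weights and the tiny model are made up for illustration, not the challenge's actual payoff schedule):

```python
# Minimal sketch of a piecewise "payoff-shaped" loss built with the Keras
# backend's switch, so gradients flow through both branches. The 3x penalty
# for underestimates is illustrative only.
import tensorflow as tf
from tensorflow.keras import backend as K

def asymmetric_loss(y_true, y_pred):
    err = y_pred - y_true
    # penalize underestimates (err < 0) more heavily than overestimates
    return K.mean(K.switch(err < 0, 3.0 * K.abs(err), err))

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=asymmetric_loss)
```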

Comment by Malentropic Gizmo (malentropicgizmo) on D&D.Sci: The Mad Tyrant's Pet Turtles [Evaluation and Ruleset] · 2024-04-09T14:45:45.307Z · LW · GW

I enjoyed the exercise, thanks! 

My solution for the common turtles was setting up the digital cradle such that the mind forged inside was compelled to serve my interests (I wrote a custom loss function for the NN). I used 0.5*segments+x for the vampire one (where x was the value that gave the best average gp result on the example vampire population). Annoyingly, I don't remember what I changed between my previous and my current solution, but the previous one was much better 🥲

Looking forward to the next challenge!

Comment by Malentropic Gizmo (malentropicgizmo) on Malentropic Gizmo's Shortform · 2024-04-09T13:49:59.703Z · LW · GW

Random Musing on Autoregressive Transformers resulting from Taelin's A::B Challenge

Let's model an autoregressive transformer as a Boolean circuit or, for simpler presentation, an n-ary circuit with m inputs and 1 output.

Model the entire system the following way: given some particular m-length starting input (a minimal sketch of this loop follows the list):

  1. the circuit calculates the output token (/integer) from the input
  2. appends the calculated output token to the end of the input word
  3. deletes the first token of the input
  4. go to 1
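A minimal sketch of steps 1-4 (the placeholder circuit function is arbitrary, just here to make it runnable):

```python
# Minimal sketch of the loop above; 'circuit' is an arbitrary placeholder.
n, m = 10, 5   # token alphabet 0..n-1, context window of length m

def circuit(window):
    return sum(window) % n           # any fixed function of the m input tokens

def run(start, steps):
    window = list(start)             # the m-length starting input
    outputs = []
    for _ in range(steps):
        out = circuit(window)        # 1. compute the output token
        window.append(out)           # 2. append it to the input word
        window.pop(0)                # 3. delete the first token
        outputs.append(out)          # 4. go to 1
    return outputs

print(run([1, 2, 3, 4, 5], steps=8))
```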

It's easy to see that, strictly speaking, this system is not very powerful computationally: we have a finite number of possible tokens (n) and a finite-length context window (m), so we only have finitely many possible states (n^m), therefore our model is at most as powerful as a finite state machine (it's pretty much equivalent in its behaviour to a regular grammar containing only A → aB rules).

However, real life computers also have finite memory yet we never let that bother us!

How should we manually design our circuit so that, with an appropriate selection of the initial input, we can solve the widest possible range of problems?

I think one very straightforward solution is to simply emulate a computer with random-access memory the following way (a toy sketch follows the list):

  • Select some fixed instruction set with k instructions and from our n tokens choose k to correspond to these k instructions.
  • Select another k tokens from the remaining to denote that the given instruction is under execution.
  • Design the circuit such that if the emulated computer's memory is M_t (an m-element vector, M_{t,i} being the i-th token) after the execution of the t-th instruction, then our circuit should produce the following token stream (including the starting input): M_{0,0}, M_{0,1}, M_{0,2}, ..., M_{0,m-1}, M_{1,0}, M_{1,1}, ..., M_{1,m-1}, M_{2,0}, ...
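To make this concrete without writing down the general construction, here is a deliberately tiny toy version: the instruction set has a single implicit instruction ("increment every cell mod n"), so no instruction tokens are needed, and the key observation is just that when the circuit must emit M_{t+1,i}, the corresponding old cell M_{t,i} sits at position 0 of its window:

```python
# Toy version of the construction: one implicit instruction ("increment every
# cell mod n"), window length equal to the snapshot length m. When the circuit
# emits M[t+1][i], the old cell M[t][i] is the first token of its window.
n, m = 10, 6

def circuit(window):
    return (window[0] + 1) % n        # M[t+1][i] = (M[t][i] + 1) mod n

snapshot0 = [3, 1, 4, 1, 5, 9]        # M_0, also the starting input
window = list(snapshot0)
stream = list(snapshot0)              # M_00, M_01, ..., M_10, M_11, ...
for _ in range(2 * m):                # generate snapshots M_1 and M_2
    out = circuit(window)
    window = window[1:] + [out]
    stream.append(out)

print(stream[m:2 * m])                # == [(x + 1) % n for x in snapshot0]
print(stream[2 * m:])                 # == [(x + 2) % n for x in snapshot0]
```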

This can be done efficiently with relatively few circuit nodes and relatively low depth, but I don't want to write down the general construction.

It's interesting to see that actual large autoregressive transformers on human language seem to be fitting this model more and more closely:

  1. With GPT-3 (possibly GPT-2), it was shown that after an instruction is given in the initial prompt, the transformer can execute that instruction in its continuation (e.g. "Translate this French sentence to English. French: Je mange une pomme. English: "). This corresponds to having a fixed instruction set in the above model (where the instruction set is written in plain English instead of single tokens).
  2. With ChatGPT-3.5, and even more with newer models, it was shown that chain-of-thought prompting works well for solving more complex problems than asking for a solution immediately. I think the newest models often don't even require an explicit instruction to break their reasoning down into steps; they often do so anyway. I expect this behaviour to become more and more common as newer models get smarter and also encounter more and more transformer/human interactions in their training sets. This corresponds to iteratively calculating M_1, M_2, ... according to the given instructions. However, at this point, the instructions and subsequent "memory snapshots" are all in the transformer's context window.
  3. Might we expect this to change? Will future models be able to notice when the initial prompt or some still-relevant previous data is about to exit the context window, autonomously re-generate it, and subsequently pick up the calculation where they left off? I expect they will! What do you think?
Comment by Malentropic Gizmo (malentropicgizmo) on The Solution to Sleeping Beauty · 2024-04-08T10:48:36.233Z · LW · GW

> No she does not. And it's easy to see if you actually try to formally specify what is meant here by "today" and what is meant by "today" in regular scenarios. Consider me calling your bluff about being ready to translate to first order logic at any moment.

I said that I can translate the math of probability spaces to first order logic, and I explicitly said that our conversation can NOT be translated to first order logic, as proof that it is not about math but rather about philosophy. Please reread that part of my previous comment.

> And frankly, it baffles me that you think that you need to explain that it's possible to talk about math using natural language, to a person who has been doing it for multiple posts in a row.

That is not what I explained and I suggest you reread that part. Here it is again:

> This whole conversation isn't about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in common English. I can assure you that every mathematician in that conversation is able to translate their comments into the simple language of the given formal system they are working in, they are just simply so much of an expert that they can transmit and receive the given information more efficiently by speaking on a higher level of abstraction.

> It is not possible to translate the conversation that we're having to a simple formal system as it's about how we should/can model some aspect of reality (which is famously dirty and complicated) with some specific mathematical object.

The structure of my argument here is the following: 

  1. Math is about concepts in formal systems, therefore an argument about math can be expressed in some simple, formal language
  2. We are having an argument which can't be translated to a formal system.
  3. Therefore, we are not arguing about math.

> The more I post about anthropics the clearer it becomes that I should've started with posting about probability theory 101. My naive hopes that average LessWrong reader is well familiar with the basics and just confused about more complicated cases are crushed beyond salvation.

Ah yes, clearly, the problem is that I don't understand basic probability theory. (I'm a bit sad that this conversation happened to take place with my pseudonymous account.) In my previous comment, I explicitly tried to preempt your confusion about seeing the English word 'experiment' with my paragraph (the part of it that you, for some reason, did not quote), specifically by linking a wiki which only contains the mathematical part of 'probability' and not the philosophical interpretations that are commonly paired with it, but alas, it didn't matter.

> In particular, Beauty, when awoken, has a certain credence in the statement "Today is Monday."

> No she does not. And it's easy to see if you actually try to formally specify what is meant here by "today" and what is meant by "today" in regular scenarios. Consider me calling your bluff about being ready to translate to first order logic at any moment.

If you are not ready to accept that people have various levels of belief in the statement "Today is Monday" at all times, then I don't think this conversation can go anywhere, to be honest. This is an extremely basic fact about reality.

EDIT: gears, in the first part you selected I'm answering an accusation of bluffing in a matter-of-fact way; how is that too combative? Also, feel free to chime in at any point; it is an open forum, after all.

Comment by Malentropic Gizmo (malentropicgizmo) on The Solution to Sleeping Beauty · 2024-04-07T23:46:40.145Z · LW · GW

> Now, that's not how math works. If you come up with some new concept, be so kind to prove that they are coherent mathematical entities and what are their properties.

This whole conversation isn't about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in common English. I can assure you that every mathematician in that conversation is able to translate their comments into the simple language of the given formal system they are working in, they are just simply so much of an expert that they can transmit and receive the given information more efficiently by speaking on a higher level of abstraction.

It is not possible to translate the conversation that we're having to a simple formal system as it's about how we should/can model some aspect of reality (which is famously dirty and complicated) with some specific mathematical object. 

To be more concrete: I want to show you that we can model (and later that we should indeed) a person's beliefs at some given point in time with probability spaces.

This is inherently a philosophical and not a mathematical problem. I don't see how you don't understand this concept, and I would appreciate it if you could elaborate on this point as much as possible.

You keep insisting that 

> By definition of a sample space it can be constructed only from elementary outcomes which has to be mutually exclusive. Tails&Monday and Tails&Tuesday are not mutually exclusive - they happen to the same person in the same iteration of probability experiment during the same outcome of the coin toss. "Centredness" framework attempts to treat them as elementary outcomes, regardless. Therefore, it contradicts the definition of a sample space.

If we are being maximally precise, then NO: the math of probability spaces prescribes a few formal statements which (this is very important), in some cases, can be used to model experiments and events happening or not happening in reality, but the mathematical objects themselves have no concept of 'experiment' or 'time' or anything like those. I won't copy it here, but you can look these up on the net yourself, if you want: here is one such source. Don't be confused by the wiki sometimes using English words; rest assured, any mathematician could translate it to any sufficiently expressive, simple formal system using variable names like a1, x3564789, etc. (If you really think it would help you and you don't believe what I'm saying otherwise, I can translate it to first order logic for you.)

Now that we hopefully cleared up that we are not arguing about math, it's time for more interesting parts:

Can a probability space model a person's beliefs at a certain point in time?

Yes, it can!

First, I would like to show you that your solution does NOT model a person's belief at a certain time:

  1. People have certain credences in the statement "Today is Monday."
    1. Do note that the above statement is fully about reality and not about math in any way and so it leans on our knowledge about humans and their minds.
    2. You can test it in various ways: e.g. asking people "hey, sorry to bother you, is today Monday?", or setting up an icecream stand which is only open on Monday in one direction from the lab and another in the opposite direction which is only open on Tuesday, making this fact known to the subjects of an experiment who are then asked to get you an icecream, and observing where they go, etc.
  2. In particular, Beauty, when awoken, has a certain credence in the statement "Today is Monday."
    1. This follows from 1.
  3. Your model does not model Beauty's credences in the statement "Today is Monday".
    1. You can see this in various ways, and your model is pretty weird, but because I believe you will agree with this, I won't elaborate here unless asked later.
  4. Therefore, your solution does NOT model a person's belief at a certain time.
    1. This follows from 2 and 3.

Before I go further, I think I will ask you whether everything is clear and whether you agree with everything I wrote so far.

Comment by Malentropic Gizmo (malentropicgizmo) on The Solution to Sleeping Beauty · 2024-04-06T14:37:33.555Z · LW · GW

Metapoint: You write a lot of things in your comments with which I usually disagree; however, I think faster replies are more useful in these kinds of conversations than complete replies, so at first I'm only going to reply to the points I consider the most important at the time. If you disagree and believe writing complete replies is more useful, do say so (however, my experience in that case is that after a while, instead of writing a comment replying to the whole list of points the other party brought up, I simply drop out of the conversation, and I can't guarantee that this won't happen here).

My whole previous comment was meant to address the part of your comment I quoted. Here it is again:

> If everything actually worked then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem, based on the framework of centred possible worlds fail one way or another.

With my previous comment I meant to show you that if you don't start out with "centered worlds don't work", you CAN make it work (very important: here, I haven't yet said that this is how it works or how it ought to work, merely that it CAN work without some axiom of probability getting hurt).

Still, I struggle to see what your objection is, apart from your intuition that "NO! It can't work!"

> When the Beauty doesn't know the actual setting of the experiment she has a different model, fitting her uninformed state of knowledge, when she is told what is actually going on she discards it and starts using the correct model from this post.

Again, I understand that in the theory you built up this is how it would work; that's not what I want to argue (yet). I want to argue how it CAN work in another way with credences/centeredness/bayesianism. To counterargue, you would have to show that NO, it can't work that way. You would have to show that, because of some axiom of probability or something like it, we can't model Beauty's credences with probability the moment she learns the relevant info after waking up.

> In probability theory, one outcome of a sample space is realized per an iteration of experiment.

Discard the concept of experiment, as it might confuse you. If you want to understand how centered-world/credence/bayesian epistemology works (to then see that it DOES work), 'experiment' isn't a good word, because it might lock you into a third-person view, where, of course, centeredness does not work (of course, after you've understood that bayesianism CAN work, we can reintroduce the word with some nuance).
 

Your statistical analysis of course also assumes the third-person/non-centered view, so of course it won't help you; but again, we should first talk about whether centeredness CAN work or not. Assuming that it can't and deriving stuff from that does not prove that it can't work.

> So no, I do not do this mistake in the text. This is the correct way to talk about Sleeping Beauty. Event "The Beauty is awaken in this experement" is properly defined. Event "The Beauty is awake on this particular day" is not, unless you find some new clever way to do it - feel free to try.

The clever way isn't that clever to be honest. It's literally just: don't assume that it does not work and try it.

Comment by Malentropic Gizmo (malentropicgizmo) on The Closed Eyes Argument For Thirding · 2024-04-05T09:48:41.478Z · LW · GW

> B follows from B

Typo

Comment by Malentropic Gizmo (malentropicgizmo) on The Solution to Sleeping Beauty · 2024-04-04T17:46:07.562Z · LW · GW

> If everything actually worked then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem, based on the framework of centred possible worlds fail one way or another.

I've read the relevant part of your previous post and I have an idea that might help.

Consider the following problem ("Forgetful Brandon"): Adam flips a coin and does NOT show it to Brandon, but shouts "YAY!" with 50% probability if the coin is HEADS (he does not shout if the coin is TAILS; Brandon knows Adam's behaviour). However, Brandon is forgetful: if Adam doesn't shout, he doesn't do any Bayesian calculation and goes off to have an icecream instead.

Adam doesn't shout. What should Brandon's credence of HEADS be after this?

I hope you agree that Brandon not actually doing the Bayesian calculation is irrelevant to the question. We should still do the Bayesian calculation if we are curious about the correct probability. Anytime Brandon updates, he predictably updates in the direction of HEADS, but again: do we care about this? Should we point out a failure of conservation of expected evidence? Again, I say NO: what evidence is actually updated on in the thought experiment isn't relevant to the correct theoretical Bayesian calculation. We could also imagine a thought experiment with a person who does Bayesian calculations wrong every time, but that would still be irrelevant to the correct credence. If you agree, I don't see why you object to Sleeping Beauty not doing the calculation in case she is not awakened. (Which is the only objection you wrote under the "Frequency Argument" model.)
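For reference, the skipped calculation itself is short (a quick sketch, with the numbers from the setup above):

```python
# Brandon's skipped Bayesian update, using the numbers from the setup above.
p_heads = 0.5
p_silence_given_heads = 0.5   # Adam shouts with 50% probability on HEADS
p_silence_given_tails = 1.0   # Adam never shouts on TAILS

p_silence = p_silence_given_heads * p_heads + p_silence_given_tails * (1 - p_heads)
print(p_silence_given_heads * p_heads / p_silence)   # P(HEADS | no shout) = 1/3
```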

EDIT: I see later that you refer back to another post supposedly addressing a related argument; however, as that would be the fifth step of my recursion, I will postpone inspecting it until tomorrow. Obviously, though, you can't give the same response to Forgetful Brandon, as in this case Brandon does observe the non-shout, he just doesn't update on it. You also declare P(Awake|Heads) not to be 1/2, giving "Beauty is awakened on Heads no matter what" as the reason. You often make this mistake in the text, but here it's too important not to mention: "Awake" does not mean that "Beauty is awakened", it means that "Beauty is awake" (don't forget that centeredness!), and, of course, Beauty is not awake if it is Tuesday and the coin is heads.

EDIT2: I'm also curious what you would say about the problem with the following modification ("Uninformed Sleeping Beauty"): Initially the full rules of the experiment are NOT explained to Beauty, only that she will have to sleep in the lab, that she will get a drug on Monday night which will make her forget her day, and that she may or may not be awakened on Monday/Tuesday.

However, when she awakens, the full rules are explained to her, i.e. that she will not be awakened on Tuesday if the coin is HEADS.

Note that in this case you can't object that the prior distribution gives non-zero probability to Tuesday&Heads as Beauty unquestionably has 1/4 credence in that before they explain the full rules to her.

EDIT3: I missed that Beauty might think it's Wednesday too in the previous case before being told the full rules, so let's consider instead the following ("Misinformed Sleeping Beauty"): Initially the full rules of the experiment are NOT explained to Beauty, only that she will have to sleep in the lab and that she will get a drug on Monday night which will make her forget her day. Furthermore, she is told the falsehood that she will be awakened on Monday AND Tuesday whatever happens!

However, when she awakens, the full rules are explained to her, i.e. that she won't be/wouldn't have been awakened on Tuesday if the coin is HEADS.

Note that in this case you can't object that the prior distribution gives non-zero probability to Tuesday&Heads as Beauty unquestionably has 1/4 credence in that before they explain the actual rules to her.

Comment by Malentropic Gizmo (malentropicgizmo) on Should you refuse this bet in Technicolor Sleeping Beauty? · 2024-04-04T16:47:57.882Z · LW · GW

I wasn't sure either, but looked at the previous post to check which one is intended.

Comment by Malentropic Gizmo (malentropicgizmo) on The Solution to Sleeping Beauty · 2024-04-04T16:25:28.306Z · LW · GW

Consider that in the real world Tuesday always happens after Monday. Do you agree or disagree: It is incorrect to model a real world agent's knowledge about today being Monday with probability?

Comment by Malentropic Gizmo (malentropicgizmo) on Should you refuse this bet in Technicolor Sleeping Beauty? · 2024-04-04T16:14:44.511Z · LW · GW

What is the probability of tails given it's Monday for your observer instances?

Comment by Malentropic Gizmo (malentropicgizmo) on Should you refuse this bet in Technicolor Sleeping Beauty? · 2024-04-04T15:08:12.392Z · LW · GW

> You may bet that the coin is Tails at 2:3 odds. That is: if you bet 200$ and the coin is indeed Tails you win 300$. The bet will be resolved on Wednesday, after the experiment has ended.

I think the second sentence should be: "That is: if you bet 300$ and the coin is indeed Tails you win 200$."

Comment by Malentropic Gizmo (malentropicgizmo) on The Solution to Sleeping Beauty · 2024-04-04T14:09:25.172Z · LW · GW

I've started at your latest post and recursively tried to find where you made a mistake (this took a long time!). Finally, I got here and I think I've found the philosophical decision that led you astray. 

Am I understanding you correctly that you reject P(today is Monday) as a valid probability in general (not just in Sleeping Beauty)? And you do this purely because you dislike the 1/3 result you'd get for Sleeping Beauty?

Philosophers answer "Why not?" to the question of centered worlds because nothing breaks and we want to consider the questions of 'when are we now?' and 'where are we now?'. I understand that you disagree that nothing breaks by axiomatically deciding that 1/3 is the wrong probability for sleeping beauty, however, if everything else seems to work, is it not much simpler to accept that 1/3 is the correct answer and then you don't have to give up considering whether today is Monday? (to me, this seems like a pretty good trade!)

Comment by Malentropic Gizmo (malentropicgizmo) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-03T05:58:22.347Z · LW · GW

Those are exactly my favourites!!

It's probably not intended, but I always imagine that in "We do not wish to advance", first the singer whispers sweet nothings to the alignment community, then the shareholder meeting starts, and so: glorious-vibed music: "OPUS!!!" haha

Nihil Supernum was weird because the text was always pretty somber for me. I understood it to express the hardship of those living in a world without any safety nets trying to do good, i.e. us, yet the music, as you point out, is pretty empowering. This combination is (to my knowledge) kinda uncommon and so interesting to me. As it happens, my favourite power metal band also has music with this combination, e.g. The Things We Believe In.

Yes, combining the Litany of Tarski with a pirate vibe works surprisingly well. I guess it might not be that surprising if we consider that the job of a pirate usually requires a mind accurate enough to track truth well and resilient enough to adapt to hard circumstances.

Comment by Malentropic Gizmo (malentropicgizmo) on The Closed Eyes Argument For Thirding · 2024-04-02T20:52:20.126Z · LW · GW

> I'm not sure where the error is in your calculations (I suspect in double-counting tuesday, or forgetting that tuesday happens even if not woken up, so it still gets it's "matches Monday bet" payout), but I love that you've shown how thirders are the true halfers!

To be precise, I've shown that in a given betting structure (which is commonly used as an argument for the halfer side, even if you didn't use it that way now) using thirder probabilities leads to correct behaviour. In fact, my belief is that in ANY kind of setup using thirder probabilities leads to correct behaviour, while using the halfer probabilities leads to worse or equivalent results. I wouldn't characterize this as 'thirders are the true halfers!'. I disagree that there is a mistake; is the only reason you think there is one that the result of the calculation disagrees with your prior belief?

> I don't mean to "support the halfer side", I mean that having a side, without specifying precisely what future experience(s) are being predicted, is incorrect.

But if every reasonable way to specify precisely what future experiences are being predicted gives the same set of probabilities, couldn't we say that one side is correct?

Comment by Malentropic Gizmo (malentropicgizmo) on The Closed Eyes Argument For Thirding · 2024-04-02T19:37:54.188Z · LW · GW

But do you also agree that there isn't any kind of bet with any terms or resolution mechanism which supports the halfer probabilities? While you did not say it explicitly, your comment's structure seems to imply that one of the bet structures you gave (the one I've quoted) supports the halfer side. My comment is an analysis showing that that's not true (which was a priori pretty surprising to me).

Comment by Malentropic Gizmo (malentropicgizmo) on The Closed Eyes Argument For Thirding · 2024-04-02T18:37:13.685Z · LW · GW

If it's "on wednesday, you'll be paid $1 if your predicion(s) were correct, and lose $1 if they were incorrect (and voided if somehow there are two wakenings and you make different predictions)", you should be indifferent to heads or tails as your prediction.

I recommend setting aside around an hour and studying this comment closely.

In particular, you will see that just because the text I quoted from you is true, that is not an argument for believing that the probability of heads is 1/2. Halfers are actually those who are NOT indifferent between heads and tails when they are awakened in this setup; they will change their mind about their randomized strategy!

Consider randomized strategies: before the experiment you decide that you will bet Heads with probability q and Tails with probability 1-q. (Before the experiment, both halfers and thirders agree that all values of q are equally good.)

Thirder wakes up: 

Expected value of betting heads: P(Heads)*1$ + P(Tails&Monday)*P(you will bet heads on Tuesday)*(-1$) +  P(Tails&Tuesday)*P(you bet heads on Monday)*(-1$) = 1/3*1$ + 1/3*q*(-1$) +1/3*q*(-1$) = 1/3 - 2/3*q

Expected value of betting tails: P(Heads)*(-1$) + P(Tails&Monday)*P(you will bet tails on Tuesday)*1$ +  P(Tails&Tuesday)*P(you bet tails on Monday)*1$ = 1/3*(-1$) + 1/3*(1-q)*1$ +1/3*(1-q)*1$ = 1/3 - 2/3*q

Exactly equal for all q!!!!

Halfer wakes up: 

Expected value of betting heads: P(Heads)*1$ + P(Tails&Monday)*P(you will bet heads on Tuesday)*(-1$) +  P(Tails&Tuesday)*P(you bet heads on Monday)*(-1$) = 1/2*1$ + 1/4*q*(-1$) +1/4*q*(-1$) = 1/2 - 1/2*q

Expected value of betting tails: P(Heads)*(-1$) + P(Tails&Monday)*P(you will bet tails on Tuesday)*1$ +  P(Tails&Tuesday)*P(you bet tails on Monday)*1$ = 1/2*(-1$) + 1/4*(1-q)*1$ +1/4*(1-q)*1$ = -1/2*q

For all q, halfers believe betting heads has a higher expected value, and so they are not indifferent between the two. (Because of your example's payoffs you can't get positive expected value even with randomized strategies, and so halfers won't fare worse by departing from their random strategy than thirders do staying with their randomized one, but that's just a coincidence. See the linked comment for an example where halfers' false beliefs DO lead them to make worse decisions! That example has the same structure as yours but with different numbers.)
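A quick numeric check of the four expected values above (a sketch; same payoff structure, with $1 stakes):

```python
# Numeric check of the expected values above: win $1, lose $1, or $0 if the
# two awakenings' predictions differ.
def evs(p_heads, p_t_mon, p_t_tue, q):
    ev_heads = p_heads * 1 + (p_t_mon + p_t_tue) * q * (-1)
    ev_tails = p_heads * (-1) + (p_t_mon + p_t_tue) * (1 - q) * 1
    return ev_heads, ev_tails

for q in (0.0, 0.25, 0.5, 1.0):
    print("thirder", q, evs(1/3, 1/3, 1/3, q))   # the two values always coincide
    print("halfer ", q, evs(1/2, 1/4, 1/4, q))   # betting heads always looks better
```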

Comment by Malentropic Gizmo (malentropicgizmo) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T16:43:31.388Z · LW · GW

My three favourites are:

  • The Litany of Tarrrrrski
  • AGI and the EMH
  • Nihil Supernum
Comment by Malentropic Gizmo (malentropicgizmo) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-01T16:13:02.263Z · LW · GW

I liked Circle, Grow and Grow

Comment by Malentropic Gizmo (malentropicgizmo) on D&D.Sci: The Mad Tyrant's Pet Turtles · 2024-03-30T02:37:55.509Z · LW · GW

Two things I saw:

  1. The 'Fangs for some reason' column is not needed, because every gray turtle has a fang and no other color has any fang.
  2. There are a lot of turtles (around 5404 more than expected) with the following characteristics: (20.4lb weight, no wrinkles, 6 shell segments, green, normal nostril size, no miscellaneous abnormalities)

My Solution (this might change before the end):

[23.14, 19.24, 25.98, 21.52, 18.17, 7.40, 31.15, 20.40, 24.0, 20.52]

Previous solution:

22.652468, 18.932825, 25.491783, 20.964714, 18.029692, 7.4, 30.246178, 20.4, 24.039215, 20.40147

Comment by Malentropic Gizmo (malentropicgizmo) on Malentropic Gizmo's Shortform · 2024-03-25T17:51:09.628Z · LW · GW

I love Egan! I will read Luminous next! Thanks!

Comment by Malentropic Gizmo (malentropicgizmo) on Malentropic Gizmo's Shortform · 2024-03-24T17:36:24.637Z · LW · GW

Yes, but good recommendation otherwise, thank you!

Comment by Malentropic Gizmo (malentropicgizmo) on Malentropic Gizmo's Shortform · 2024-03-24T17:35:56.771Z · LW · GW

Thank you, I will read this one!

Comment by Malentropic Gizmo (malentropicgizmo) on Malentropic Gizmo's Shortform · 2024-03-24T14:37:40.213Z · LW · GW

I will read the fiction book that is recommended to me first (and that I haven't already read)! Time is of the essence! I will read anything, but if you want to recommend me something I am more likely to enjoy, here are a few things about me: I like Sci-fi, Fantasy, metaethics, computers, games, Computer Science theory, Artificial Intelligence, fitness, D&D, edgy/shock humor.

Comment by Malentropic Gizmo (malentropicgizmo) on On green · 2024-03-22T20:31:40.860Z · LW · GW

I enjoyed this, and at times I felt close to grasping green, but now, after reading it, I wouldn't be able to convey to someone else what the part of green is that isn't just green-according-to-some-other-color. Multiple times in the post you build something up just to demolish it a few paragraphs later, which makes the bottom line hard for me to remember, so a 'green for dummies' version would be nice.

> Example of solarpunk aesthetic (to be clear: I think the best futures are way more future-y than this)

I like the picture. Obviously, the pictured scene would be simulated on some big server cluster, but nice aesthetics, I wouldn't require a more future-y one.

Comment by Malentropic Gizmo (malentropicgizmo) on Increasing IQ by 10 Points is Possible · 2024-03-20T19:53:40.493Z · LW · GW

> I'm surprised people are taking you seriously.

If you're reading comments under the post, that obviously selects for people who take him seriously, similarly to how if you clicked through a banner advertising to increase one's penis by X inches, you would mostly find people who took the ad more seriously than you'd expect.

Comment by Malentropic Gizmo (malentropicgizmo) on On Devin · 2024-03-18T18:24:22.097Z · LW · GW

I put ~5% on the part I selected, but there is no 5% emoji, so I thought I would mention this using a short comment.

Comment by Malentropic Gizmo (malentropicgizmo) on kave's Shortform · 2024-03-16T00:18:16.498Z · LW · GW

Because when you lose weight you lose a mix of fat and muscle, but when you gain weight you gain mostly fat if you don't exercise (and people usually don't, because they think it's optional), resulting in a greater body fat percentage (which is actually the relevant metric for health, not weight).

Comment by Malentropic Gizmo (malentropicgizmo) on "How could I have thought that faster?" · 2024-03-13T18:06:09.077Z · LW · GW

I also thought that it was very common. I would say it's necessary for competition math.

Comment by Malentropic Gizmo (malentropicgizmo) on Notes from a Prompt Factory · 2024-03-12T18:06:34.795Z · LW · GW

Ah, I see. Yes, that is possible, though it makes the main character much less relatable.

Comment by Malentropic Gizmo (malentropicgizmo) on Notes from a Prompt Factory · 2024-03-12T17:40:34.920Z · LW · GW

I think the main character's desire to punish the AIs stemmed from his self-hatred instead. How would you explain this part otherwise?

> And if sometimes in their weary, resentful faces I recognize a mirror of my own expression—well, what of it?

Comment by Malentropic Gizmo (malentropicgizmo) on Malentropic Gizmo's Shortform · 2024-03-08T18:09:46.373Z · LW · GW

So I've reached a point in my amateur bodybuilding process where I am satisfied with my arms. I, of course, regularly see and talk with guys who have better physiques, but it doesn't bother me: when I look in the mirror, I'm still happy.

This, apparently, is not the typical experience. In the bodybuilding noosphere, there are many memes born from the opposite experience: "The day you start lifting is the day you're never big enough.", "You will never be as big as your pump.", etc.. 

My question is about a meme I've seen recently which DOES mirror my own experience, but unfortunately I can't find it. IIRC the meme had a similar art style to Cyanide and Happiness (or maybe SMBC). It depicted a person who, after making some initial progress in bodybuilding, talked with a much more advanced bodybuilder who tried to advise him on how to continue building even more muscle. The new guy's answer was something along the lines of "thanks, but I'm okay with my current results", and the last panel depicted the buff guy wistfully watching the sunset alone.

Can someone maybe help me find the meme? Any help is appreciated!

Comment by Malentropic Gizmo (malentropicgizmo) on Daniel Kokotajlo's Shortform · 2024-03-06T18:31:36.356Z · LW · GW

I love how it admits it has no idea how come it gets better if it retains no memories

Comment by Malentropic Gizmo (malentropicgizmo) on Exercise: Planmaking, Surprise Anticipation, and "Baba is You" · 2024-02-25T17:04:53.614Z · LW · GW

What about Outer Wilds? It's not strictly a puzzle game, but I think it might go well with this exercise. Also, what games would you recommend for this to someone who has already played every available level in Baba Is You?

Comment by Malentropic Gizmo (malentropicgizmo) on Exercise: Planmaking, Surprise Anticipation, and "Baba is You" · 2024-02-25T02:35:45.230Z · LW · GW

Or Understand for 4 EUR, which has a highly upvoted LessWrong post recommending it.

Comment by Malentropic Gizmo (malentropicgizmo) on Less Wrong automated systems are inadvertently Censoring me · 2024-02-21T16:20:18.953Z · LW · GW

It's a pity we don't know the karma scores of their comments before this post was published. For what it's worth, I only see two of his comments with negative karma: this and this. The first one among these two is the one recent comment of Roko's I strong-downvoted (though also strong agree-voted), but I might not have done that if I had known that a few comments with slightly negative karma are enough to silence someone.

Comment by Malentropic Gizmo (malentropicgizmo) on Has anyone actually changed their mind regarding Sleeping Beauty problem? · 2024-02-19T15:43:17.795Z · LW · GW

Please do so in a post; I subscribed to those.

Comment by Malentropic Gizmo (malentropicgizmo) on Has anyone actually changed their mind regarding Sleeping Beauty problem? · 2024-02-18T17:11:30.116Z · LW · GW

Initially, I had a strong feeling/intuition that the answer was 1/3, but felt that because you can also construct a betting situation for 1/2, the question was not decided. In general, I've always found betting arguments the strongest forms of arguments: I don't much care how philosophers feel about what the right way to assign probabilities is; I want to make good decisions in uncertain situations, for which betting arguments are a good abstraction. "Rationality is systematized winning" and all that.

Then, I've read this comment, which showed me that I made a mistake by accepting the halfer betting situation as an argument for 1/2. In retrospect, I could have avoided this by actually doing the math, but it's an understandable mistake; people have finite time. In particular, this sentence on the Sleeping Beauty Paradox tag page also makes the mistake: "If Beauty's bets about the coin get paid out once per experiment, she will do best by acting as if the probability is one half." No, as the linked comment shows, it is advantageous to bet 1:1 in some interpretations, but that's exactly because the actual probability is 1/3. Note: there is no rule/axiom that a bet's odds should always correspond with the event's probability; that is something that can be derived in non-anthropic situations assuming rational expected-money-maximizing agents. It's more accurate to call what the above situation points to a scoring rule, and you can make up situations with other scoring rules too: "Sleeping Beauty, but Omega will kick you in the nuts/vulva if you don't say your probability is 7/93." In this case it is also advantageous "to behave as if" the probability is 7/93 in some respect, but the probability in your mind should still be the correct one.

Comment by Malentropic Gizmo (malentropicgizmo) on I played the AI box game as the Gatekeeper — and lost · 2024-02-12T20:55:00.370Z · LW · GW

Two things don't have to be completely identical to each other for one to give us useful information about the other. Even though the game is not completely identical to the risky scenario (as you pointed out: you don't play against a malign superintelligence), it serves as useful evidence to those who believe that they can't possibly lose the game against a regular human.

Comment by Malentropic Gizmo (malentropicgizmo) on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-06T15:08:02.867Z · LW · GW

I see, I didn't consider that. Sorry.

Comment by Malentropic Gizmo (malentropicgizmo) on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-05T15:04:07.777Z · LW · GW

The post titled "Most experts believe COVID-19 was probably not a lab leak"  is on the frontpage yet this post while being newer and having more karma is not. Looking into it, it's because this post does not have the frontpage tag: it is a personal blogpost. 

Personal Blogposts are posts that don't fit LessWrong's Frontpage Guidelines. They get less visibility by default. The frontpage guidelines are:

  • Timelessness. Will people still care about this in 5 years?
  • Avoid political topics. They're important to discuss sometimes, but we try to avoid it on LessWrong.
  • General Appeal. Is this a niche post that only a small fraction of users will care about?

I don't see how this post doesn't meet those three while the other post does, given they are about the exact same topic. So either Roko disallowed this post from appearing on the frontpage, or we should expect the frontpage to be curated to conform more to mainstream viewpoints (and in that case, imo, that should also be explicit in the frontpage guidelines).

Comment by Malentropic Gizmo (malentropicgizmo) on Most experts believe COVID-19 was probably not a lab leak · 2024-02-05T00:38:10.012Z · LW · GW

Wikipedia says there is another BSL-4 lab in Harbin, Heilongjiang province. (Source is an archived Chinese news site) Is that incorrect?

Comment by Malentropic Gizmo (malentropicgizmo) on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-03T22:08:24.460Z · LW · GW

Ah, very nice, thank you!

Comment by Malentropic Gizmo (malentropicgizmo) on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-03T21:56:41.646Z · LW · GW

Thank you for answering, I'm sure this will convince a big fraction of the audience!

Maybe, as a European, I'm missing some crucial context, but I'm most interested in the pieces of metadata proving the authenticity of the document. I can also make various official-seeming pdfs. (Also, I'm kinda leery of opening pdfs.) Do you have, for example, some tweet by Daszak trying to explain the proposal (which would imply that even he accepts its existence)? Or a conspicuous refusal to answer questions about it, or at least a Sharon Lerner tweet confirming that she did upload this pdf?

Comment by Malentropic Gizmo (malentropicgizmo) on Brute Force Manufactured Consensus is Hiding the Crime of the Century · 2024-02-03T21:42:05.493Z · LW · GW

How do we know that this DEFUSE proposal really exists? I've seen some articles about it from (to me) reputable news sources, but they are pay-walled so I couldn't read them fully. The beginning of one says the documents were released by some DRASTIC group I've never heard of. I would appreciate it if you could provide some more direct evidence.

Comment by Malentropic Gizmo (malentropicgizmo) on Malentropic Gizmo's Shortform · 2024-01-23T12:11:14.411Z · LW · GW

A coin has two sides. One side commonly has a person on it (this side is called Heads), and the other usually has a number or some other picture on it (this side is called Tails). What I don't understand is why the creator of the Sleeping Beauty Problem (I'm unsure whether we should blame Adam Elga, Robert Stalnaker or Arnold Zuboff) would specify the problem so that the branch with the extra person corresponds to the Tails side of the coin. This almost annoys me more than not calling Superpermutations supermutations, or Poisson equations Laplace equations, or Laplace equations Harmonic equations.

Comment by Malentropic Gizmo (malentropicgizmo) on $300 for the best sci-fi prompt: the results · 2024-01-06T00:16:14.524Z · LW · GW

> 'time jumps' are actually just retreating into some abuse-triggered fugue state

Wait, I thought this was the intended meaning of the original, the twist of the whole story. The Hemingway prompt explicitly asks GPT to include mental illness and at the end of the story:

> He closed his eyes again. A minute passed, or perhaps a lifetime.

 he explicitly just loses track of time in these moments.

Comment by Malentropic Gizmo (malentropicgizmo) on Based Beff Jezos and the Accelerationists · 2023-12-06T18:46:50.494Z · LW · GW

Assume that our universe is set up the way you believe it is, i.e.: the orthogonality thesis is false, and sufficiently intelligent agents automatically value the welfare of sentient minds.

In spite of our assumption we can create a system behaving exactly like a misaligned AI would in the following way: 

The superintelligent AI is created and placed in a simulation without its knowledge. This superintelligent AI by assumption is aligned with human values. 

The user outside the simulation gives a goal (which is not necessarily aligned to human values, eg: 'make paperclips') to the system the following way:

Every timestep the aligned AI in the simulation is asked to predict the behavior of a (to its knowledge) hypothetical AI with the user's goal and situation corresponding to the situation of the system outside the simulation.

Then the system behaves as given by the simulated superintelligent aligned AI and the simulated AI's memory is reset.

This setup requires a few non-trivial components apart from the simulated SAI: 

  • a component simulating the world of the SAI and setting that up to give the aligned AI an incentive to answer the 'hypothetical' questions without letting it know that it's in a simulation
  • a component translating the SAI's answers to the real world

If you don't think any of these components is theoretically impossible, then how is it possible for you to believe that a misaligned superintelligent system is impossible?

If you believe that a misaligned superintelligent system is indeed possible in theory, then what is the reason you believe that gradient descent/RLHF or some other way we will use to create AIs will result in ones considerate of the welfare of sentient minds?

Comment by Malentropic Gizmo (malentropicgizmo) on 2023 Unofficial LessWrong Census/Survey · 2023-12-03T16:51:05.004Z · LW · GW

I had the same problem with last year's survey, but I don't remember whether I asked, and if I did, what the answer was: does 'supernatural entity' include a being outside the simulation? And, similarly, does 'magic' include events that can't be explained by the physical laws of the simulation (but can be explained by the laws of the simulator's world)? I can see arguments for either.

Also, some questions ask about LessWrong accounts but assume one only has one.