Comments

Comment by Amarko on Vipassana Meditation and Active Inference: A Framework for Understanding Suffering and its Cessation · 2024-03-27T08:00:11.885Z · LW · GW

On the other hand, the more you get accustomed to a pleasurable stimulus, the less pleasure you receive from it over time (hedonic adaptation). Since this happens to both positive and negative emotions, it seems to me that there is a kind of symmetry here. To me this suggests that decreasing prediction error results in more neutral emotional states rather than pleasant states.

Comment by Amarko on Vipassana Meditation and Active Inference: A Framework for Understanding Suffering and its Cessation · 2024-03-25T00:38:39.069Z · LW · GW

I disagree that all prediction error equates to suffering. When you step into a warm shower you experience prediction error just as much as if you step into a cold shower, but I don't think the initial experience of a warm shower contains any discomfort for most people, whereas I expect the cold shower usually does.

Furthermore, far more prediction error is experienced in life than suffering. Simply going for a walk leads to a continuous stream of prediction error, most of which people feel pretty neutral about.

Comment by Amarko on Translations Should Invert · 2023-10-05T21:46:27.986Z · LW · GW

This reminds me of a lot of discussions I've had with people where we seem to be talking past each other, but can't quite pin down what the disagreement is.

Usually we just end up talking about something else instead that we both seem to derive value from.

Comment by Amarko on Why you can't treat decidability and complexity as a constant (Post #1) · 2023-07-27T02:54:34.497Z · LW · GW

It seems to me that the constraints of reality are implicit. I don't think "it can be done by a human" is satisfied by a method requiring time travel with a very specific form of paradox resolution. It sounds like you're arguing that the Church-Turing thesis is simply worded ambiguously.

Comment by Amarko on Why you can't treat decidability and complexity as a constant (Post #1) · 2023-07-27T02:49:21.211Z · LW · GW

It looks like Deutschian CTCs are similar to a computer that can produce all possible outputs in different realities, then selectively destroy the realities that don't solve the problem. It's not surprising that you could solve the halting problem in such a framework.
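
As a rough classical stand-in for that "keep only the branches that work" picture, here's a toy sketch (mine, not anything from the post; the problem and names are made up for illustration). Every bitmask plays the role of one branch, and branches that fail the consistency check are discarded; the CTC framing is, loosely, that you get handed a surviving branch without paying for the enumeration.

```python
from itertools import product

def surviving_branches(nums, target):
    """Brute-force stand-in for 'spawn every branch, keep the consistent ones'.

    Every bitmask over nums is one 'branch'; a branch survives only if its
    chosen subset sums to the target.
    """
    survivors = []
    for mask in product([0, 1], repeat=len(nums)):
        subset = [n for n, keep in zip(nums, mask) if keep]
        if sum(subset) == target:      # the consistency check
            survivors.append(subset)   # this branch is not 'destroyed'
    return survivors

# Hypothetical example: which subsets of [3, 5, 8, 13] sum to 16?
print(surviving_branches([3, 5, 8, 13], 16))  # [[3, 13], [3, 5, 8]]
```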

Comment by Amarko on mental number lines · 2023-07-20T09:31:23.139Z · LW · GW

Our symbolic conception of numbers is already logarithmic, as order of magnitude corresponds to the number of digits. I think an estimate of a product based on an imaginary slide rule would be roughly equivalent to estimating based on the number of digits and the first digit.
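
To make that concrete, here's a quick sketch (mine, not from the post) of estimating a product from just the digit count and the first significant figure of each factor, which is roughly what the imaginary slide rule buys you:

```python
def estimate_product(a, b):
    """Estimate a*b from each factor's digit count and first significant figure.

    Positive integers only; this is roughly the 'mental slide rule' estimate.
    """
    def one_sig_fig(n):
        magnitude = len(str(n)) - 1            # power of ten, i.e. digits - 1
        return round(n / 10 ** magnitude), magnitude
    sa, ma = one_sig_fig(a)
    sb, mb = one_sig_fig(b)
    return sa * sb * 10 ** (ma + mb)

print(estimate_product(3217, 47), 3217 * 47)  # 150000 vs 151199, within ~1%
```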

Comment by Amarko on Micro Habits that Improve One’s Day · 2023-07-02T14:55:35.468Z · LW · GW

Similar to point 2: I find that reading a book in the morning helps my mood. Particularly a physical fiction book.

Comment by Amarko on Micro Habits that Improve One’s Day · 2023-07-02T14:52:00.732Z · LW · GW

I've definitely noticed the pattern of habits seeming to improve my life without them feeling like they are improving my life. On a similar note, a lot of habits seem easy to maintain while I'm doing them and obviously beneficial, but when I stop I have no motivation to continue. I don't know why that is, but my hope is that if I notice this hard enough it will become easier for me to recognize that I should do the thing anyway.

Comment by Amarko on The bullseye framework: My case against AI doom · 2023-05-31T14:20:10.434Z · LW · GW

I read some of the post and skimmed the rest, but this seems to broadly agree with my current thoughts about AI doom, and I am happy to see someone fleshing out this argument in detail.

[I decided to dump my personal intuition about AI risk below. I don't have any specific facts to back it up.]

It seems to me that there is a much larger possibility space of AIs that can or will get created than the ideal superintelligent "goal-maximiser" AI put forward in arguments for AI doom.

The tools that we end up with depend more on the specific details of the underlying mechanics, and on how we can wrangle them to do what we want, than on our prior beliefs about how we would expect the tools to behave. I imagine that if you lived before aircraft and imagined a future in which humans could fly, you might think that humans would be flapping giant wings or be pedal-powered or something. While it would be great for that to exist, the limitations of the physics we know how to use require a different kind of mechanism, with different strengths and weaknesses from what we would think of in advance.

There's no particular reason to think that the practical technologies available will lead to an AI capable of power-seeking, just because power-seeking is a side effect of the "ideal" AI that some people want to create. The existing AI tools, as far as I can tell, don't provide much evidence in that direction. Even if a power-seeking AI is eventually practical to create, it may be far from the default, and by then we may have sufficiently intelligent non-power-seeking AI.

Comment by Amarko on Open Thread With Experimental Feature: Reactions · 2023-05-24T19:32:27.111Z · LW · GW

Perhaps they could be next to the "Reply" button, and fully contained in the comment's container?

Comment by Amarko on The Benevolent Billionaire (a plagiarized problem) · 2023-05-19T02:12:19.900Z · LW · GW

The answer is pretty clear with Bayes' Theorem. The world in which the coin lands heads and you get the card has probability 0.0000000005, and the world in which the coin lands tails has probability 0.5. Thus you live in a world with a prior probability of 0.5000000005, so the probability of the coin being heads is 0.0000000005/0.5000000005, or a little under 1 in a billion.
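
Spelling out that arithmetic (a minimal sketch, assuming, as I read the setup, a fair coin where the card shows up with probability one in a billion given heads and with certainty given tails):

```python
# Posterior P(heads | you got the card), under the assumptions above.
p_heads = p_tails = 0.5
p_card_given_heads = 1e-9
p_card_given_tails = 1.0

p_heads_and_card = p_heads * p_card_given_heads   # 0.0000000005
p_tails_and_card = p_tails * p_card_given_tails   # 0.5

posterior_heads = p_heads_and_card / (p_heads_and_card + p_tails_and_card)
print(posterior_heads)  # ~1e-9, a little under 1 in a billion
```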

Given that the worst-case scenario of losing the bet is saying you can't pay it and losing credibility, you and Adam should take the bet. If you want to (or have to) actually commit to paying, then you have to decide whether you would completely screw over one alternate self so that a billion selves can have a bit more money. Given that $100 would not really make a difference to my life in the long run, I think I would not take the bet in this scenario.

Comment by Amarko on ask me about my battery · 2023-04-22T22:59:48.118Z · LW · GW

Personally I would be interested in a longer post about whatever you have to say about the battery and battery design. You could make a sequence, so that it can be split into multiple posts.

Comment by Amarko on baturinsky's Shortform · 2023-04-03T10:08:22.216Z · LW · GW

I assume work is output/time. If a machine is doing 100% of the work, then the human's work is undefined, since the human's time spent is 0 and you'd be dividing by zero.

Comment by Amarko on Framing Practicum: Semistable Equilibrium · 2021-10-16T20:17:29.564Z · LW · GW

Some properties that I notice about semistable equilibria:

  • It is non-differentiable, so any semistable equilibrium that occurs in reality is only approximate.
  • Since the zone of attraction and the zone of repulsion meet at the same state, random noise will inevitably cause the state to hop over to the repulsive side. So what a 'perfect' semistable equilibrium will look like is a system where the state tends towards some point, hangs around for a while, and then suddenly flies off to the next equilibrium (see the sketch after this list). This makes me think of the Gömböc.
  • A more approximate semistable equilibrium that has an actual stable point in reality will be one that has a stable equilibrium at one point and an unstable equilibrium soon after. I think an example of this is a neutron star. A neutron star is stable because gravity pulls the matter inward while the nuclear forces push outward. With enough compression, however, gravity overcomes these forces and a black hole forms, after which the entire star will collapse.
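
Here's a quick simulation sketch of that "hangs around, then suddenly flies off" behaviour (my own toy model, not from the post): dx/dt = -x^2 has a semistable equilibrium at 0, attracting from above and repelling from below, so with a bit of noise the state lingers near 0 for a while before departing.

```python
import random

def time_near_zero(x0=1.0, dt=0.01, noise=0.05, max_steps=1_000_000):
    """Euler simulation of dx/dt = -x^2 plus small noise (a toy model of my own).

    x = 0 is semistable: states above 0 decay towards it, but noise eventually
    kicks the state just below 0, after which the dynamics push it away for good.
    """
    x = x0
    for step in range(max_steps):
        x += -x * x * dt + noise * random.gauss(0.0, dt ** 0.5)
        if x < -1.0:   # clearly on the repulsive side and accelerating away
            return step * dt
    return None        # didn't escape within the simulated window

# Typically prints a long hang-around time: the state lingers near 0 and then
# departs suddenly (None in the rare run where it never escapes the window).
print(time_near_zero())
```
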
Comment by Amarko on The Point of Trade · 2021-06-25T10:05:42.341Z · LW · GW

There's still the problem that two people can't occupy the same space at the same time, so we need people to be able to swap places instantly. This then requires some coordination, which is mentioned below.

Some commenters have mentioned economies of scale: it can be more efficient to pool resources and make a bunch of one thing at a time. For example, people want paperclips, but they could get them much faster by operating a massive paperclip-making machine rather than everyone making their own individually. I think this is already covered, though: if everyone has the same innate traits and preferences, then they can all just operate the machine for one microsecond and collect their paperclip. This also solves the coordination problem, since if everyone knows what they want and how much labor is required per person, then they all just switch between each task accordingly. The only coordination needed is about who goes where and when.

So in the end, we have people indistinguishable by location, preferences, abilities, knowledge, and behaviour under any given local conditions. It seems that there is some sense in which the people in this world are 'indistinguishable' as agents (barring potentially unnecessary differences like eye colour). I think this hints at trade being unnecessary if and only if the agents are 'indistinguishable' in some sense.

Edit: We may also need equivalent local conditions. If everyone wants a house within 3 miles of anyone else, and there is a single 3-mile-diameter circle that is the best spot in the whole world, then only one person could occupy that spot. If one person happens to have that spot, they could rent it out for a time in exchange for things they want. Boom, trade.

Or for a more obvious example, one person has all the military power. This person does no work and gets stuff in exchange for not killing anyone. It's extreme, but probably technically an edge-case form of trade.

Comment by Amarko on Reinforcing Habits · 2021-06-23T11:30:09.768Z · LW · GW

I wonder if there's a different potential takeaway here than "find what feels rewarding". Duhig’s story makes me think of a perspective I've learned from TEAM-CBT: bad habits (and behavioural patterns in general) are there for a reason, as a solution to some other problem. An important first step to changing your behaviour is to understand the reasons not to change, and then really consider what is worth changing. It sounds to me like Duhig figured out what problem eating cookies was trying to solve.

At least, that's the theory as I understand it. I haven't put it into practice, so I'm just spitballing here. But my personal experience is that when I have tried to create habits by setting up an arbitrary reward (e.g. eating a block of chocolate), it has not been effective at all for me.

Comment by Amarko on DALL-E by OpenAI · 2021-01-07T19:35:10.400Z · LW · GW

I've learned to be resilient against AI distortions, but 'octagonal red stop sign' really got me. Which is ironic, since you'd think that prompt would be particularly easy for the AI to handle. The other colours and shapes didn't have a strong effect, so I guess the level of familiarity makes a difference.

I think the level of nausea is a function of the amount of meaning being distorted: distorted words, faces, or food have a much stronger effect than warped clock faces or tables, for example. (I would also argue there is more meaning to the shape of a golf club than a clock face.)