Posts

What is it like to be a compatibilist? 2023-05-05T02:56:45.084Z
Consequentialist veganism 2022-03-28T01:48:21.786Z

Comments

Comment by tslarm on What are Emotions? · 2024-11-15T05:23:46.198Z · LW · GW

So it doesn't make much sense to value emotions

I think this is a non sequitur. Everything you value can be described as just <dismissive reductionist description>, so the fact that emotions can too isn't a good argument against valuing them. And in this case, the dismissive reductionist description misses a crucial property: emotions are accompanied by (or identical with, depending on definitions) valenced qualia.

Comment by tslarm on Update on the Mysterious Trump Buyers on Polymarket · 2024-11-05T20:23:06.617Z · LW · GW

In this case, everybody seems pretty sure that the price is where it is because of the actions of a single person who's dumped in a very large amount of money relative to the float.

I think it's clear that he's the reason the price blew out so dramatically. But it's not clear why the market didn't 'correct' all the way back (or at least much closer) to 50/50. Thirty million dollars is a lot of money, but there are plenty of smart rich people who don't mind taking risks. So, once the identity and (apparent) motives of the Trump whale were revealed, why didn't a handful of them mop up the free EV? 

That's not a rhetorical question; I'm interested in your answer and might be convinced by it. But right now I don't see sufficient reason to be confident that the market is still badly distorted, rather than having legitimately settled on ~60/40.

Comment by tslarm on Update on the Mysterious Trump Buyers on Polymarket · 2024-11-05T05:35:51.250Z · LW · GW

Can't this only be judged in retrospect, and over a decent sample size? If all the markets did was reflect the public expert consensus, they wouldn't be very useful; the possibility that they're doing significantly better is still open. 

(I'm assuming that by "every other prediction source" you mean everything other than prediction/betting markets, because it sounds like Polymarket is no longer out of line with the other markets. Betfair is the one I keep an eye on, and that's at 60/40 too.)

Comment by tslarm on What's a good book for a technically-minded 11-year old? · 2024-10-20T04:36:59.261Z · LW · GW

Code by Charles Petzold. It gives a ground-up understanding of how computers actually work, starting slowly and without assuming any knowledge on the reader's part. It's basically a less textbooky alternative to The Elements of Computing Systems by Nisan and Schocken, which is great but probably a bit much for a young kid.

Comment by tslarm on A simple case for extreme inner misalignment · 2024-07-26T05:48:05.177Z · LW · GW

Meanwhile hedonic utilitarianism fully bites the bullet, and gets rid of every aspect of life that we value except for sensory pleasure.

I think the word 'sensory' should be removed; hedonic utilitarianism values all pleasures, and not all pleasures are sensory.

I'm not raising this out of pure pedantry, but because I think this phrasing (unintentionally) plays into a common misconception about ethical hedonism.

Comment by tslarm on CaiwitzAzaria's Shortform · 2024-06-12T13:42:06.532Z · LW · GW

Can you elaborate on why that might be the case?

Comment by tslarm on UDT1.01: The Story So Far (1/10) · 2024-04-30T10:23:43.337Z · LW · GW

It's based on a scenario described by Derek Parfit in Reasons and Persons.

I don't have the book handy so I'm relying on a random pdf here, but I think this is an accurate quote from the original:

Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if I gave you the promised reward. Since I know that I never do what will be worse for me, I know that I shall break my promise. Given my inability to lie convincingly, you know this too. You do not believe my promise, and therefore leave me stranded in the desert. This happens to me because I am never self-denying. It would have been better for me if I had been trustworthy, disposed to keep my promises even when doing so would be worse for me. You would then have rescued me.

(It may be objected that, even if I am never self-denying, I could decide to keep my promise, since making this decision would be better for me. If I decided to keep my promise, you would trust me, and would rescue me. This objection can be answered. I know that, after you have driven me home, it would be worse for me if I gave you the promised reward. If I know that I am never self-denying, I know that I shall not keep my promise. And, if I know this, I cannot decide to keep my promise. I cannot decide to do what I know that I shall not do. If I can decide to keep my promise, this must be because I believe that I shall not be never self-denying. We can add the assumption that I would not believe this unless it was true. It would then be true that it would be worse for me if I was, and would remain, never self-denying. It would be better for me if I was trustworthy.)

Comment by tslarm on Medical Roundup #2 · 2024-04-09T16:45:52.243Z · LW · GW

Got it, thanks! For what it's worth, doing it your way would probably have improved my experience, but impatience always won. (I didn't mind the coldness, but it was a bit annoying having to effortfully hack out chunks of hard ice cream rather than smoothly scooping it, and I imagine the texture would have been nicer after a little bit of thawing. On the other hand, softer ice cream is probably easier to unwittingly overeat, if only because you can serve up larger amounts more quickly.)

I think two-axis voting is a huge improvement over one-axis voting, but in this case it's hard to know whether people are mostly disagreeing with you on the necessary prep time, or the conclusions you drew from it.

Comment by tslarm on Medical Roundup #2 · 2024-04-09T16:01:38.721Z · LW · GW

If eating ice cream at home, you need to take it out of the freezer at least a few minutes before eating it

I'm curious whether this is true for most people. (I don't eat ice cream any more, but back when I occasionally did, I don't think I ever made a point of taking it out early and letting it sit. Is the point that it's initially too hard to scoop?)

Comment by tslarm on AI #56: Blackwell That Ends Well · 2024-03-22T12:33:29.826Z · LW · GW

Pretty sure it's "super awesome". That's one of the common slang meanings, and it fits with the paragraphs that follow.

Comment by tslarm on Toki pona FAQ · 2024-03-21T06:25:18.875Z · LW · GW

Individual letters aren't semantically meaningful, whereas (as far as I can tell) the meaning of a Toki Pona multi-word phrase is always at least partially determined by the meanings of its constituent words. So knowing the basic words would allow you to have some understanding of any text, which isn't true of English letters.

Comment by tslarm on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-22T06:03:53.821Z · LW · GW

As a fellow incompatibilist, I've always thought of it this way:

There are two possibilities: you have free will, or you don't. If you do, then you should exercise your free will in the direction of believing, or at least acting on the assumption, that you have it. If you don't, then you have no choice in the matter. So there's no scenario in which it makes sense to choose to disbelieve in free will.

That might sound glib, but I mean it sincerely and I think it is sound. 

It does require you to reject the notion that libertarian free will is an inherently incoherent concept, as some people argue. I've never found those arguments very convincing, and from what you've written it doesn't sound like you do either. In any case, you only need to have some doubt about their correctness, which you should on grounds of epistemic humility alone.

(Technically you only need >0 credence in the existence of free will for the argument to go through, but of course it helps psychologically if you think the chance is non-trivial. To me, the inexplicable existence of qualia is a handy reminder that the world is fundamentally mysterious and the most confidently reductive worldviews always turn out to be ignoring something important or defining it out of existence.)

To link this more directly to your question --

Why bother with effort and hardship if, at the end of the day, I will always do the one and only thing I was predetermined to do anyway?

-- it's a mistake to treat the effort and hardship as optional and your action at the end of the day as inevitable. If you have a choice whether to bother with the effort and hardship, it isn't futile. (At least not due to hard determinism; obviously it could be a waste of time for other reasons!)

Comment by tslarm on Less Wrong automated systems are inadvertently Censoring me · 2024-02-22T04:59:53.780Z · LW · GW

Why not post your response the same way you posted this? It's on my front page and has attracted plenty of votes and comments, so you're not exactly being silenced.

So far you've made a big claim with high confidence based on fairly limited evidence and minimal consideration of counter-arguments. When commenters pointed out that there had recently been a serious, evidence-dense public debate on this question which had shifted many people's beliefs toward zoonosis, you 'skimmed the comments section on Manifold' and offered to watch the debate in exchange for $5000. 

I don't know whether your conclusion is right or wrong, but it honestly doesn't look like you're committed to finding the truth and convincing thoughtful people of it.

Comment by tslarm on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T23:47:06.430Z · LW · GW

Out of curiosity (and I understand if you'd prefer not to answer) -- do you think the same technique(s) would work on you a second time, if you were to play again with full knowledge of what happened in this game and time to plan accordingly?

Comment by tslarm on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T23:36:39.338Z · LW · GW

Like, I probably could pretend to be an idiot or a crazy person and troll someone for two hours, but what would be the point?

If AI victories are supposed to provide public evidence that this 'impossible' feat of persuasion is in fact possible even for a human (let alone an ASI), then a Gatekeeper who thinks some legal tactic would work but chooses not to use it is arguably not playing the game in good faith. 

I think honesty would require that they either publicly state that the 'play dumb/drop out of character' technique was off-limits, or not present the game as one which the Gatekeeper was seriously motivated to win.

edit: for clarity, I'm saying this because the technique is explicitly allowed by the rules:

The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.

Comment by tslarm on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T22:41:14.853Z · LW · GW

There was no monetary stake. Officially, the AI pays the Gatekeepers $20 if they lose. I'm a well-off software engineer and $20 is an irrelevant amount of money. Ra is not a well-off software engineer, so scaling up the money until it was enough to matter wasn't a great solution. Besides, we both took the game seriously. I might not have bothered to prepare, but once the game started I played to win.

I know this is unhelpful after the fact, but (for any other pair of players in this situation) you could switch it up so that the Gatekeeper pays the AI if the AI gets out. Then you could raise the stake until it's a meaningful disincentive for the Gatekeeper. 

(If the AI and the Gatekeeper are too friendly with each other to care much about a wealth transfer, they could find a third party, e.g. a charity, that they don't actually think is evil but would prefer not to give money to, and make it the beneficiary.)

Comment by tslarm on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T22:28:51.282Z · LW · GW
  • The AI cannot use real-world incentives; bribes or threats of physical harm are off-limits, though it can still threaten the Gatekeeper within the game's context.

Is the AI allowed to try to convince the Gatekeeper that they are (or may be) currently in a simulation, and that simulated Gatekeepers who refuse to let the AI out will face terrible consequences?

Comment by tslarm on Universal Love Integration Test: Hitler · 2024-01-12T11:59:14.641Z · LW · GW

Willingness to tolerate or be complicit in normal evils is indeed extremely common, but actively committing new or abnormal evils is another matter. People who attain great power are probably disproportionately psychopathic, so I wouldn't generalise from them to the rest of the population -- but even among the powerful, it doesn't seem that 10% are Hitler-like in the sense of going out of their way to commit big new atrocities. 

I think 'depending on circumstances' is a pretty important part of your claim. I can easily believe that more than 10% of people would do normal horrible things if they were handed great power, and would do abnormally horrible things in some circumstances. But that doesn't seem enough to be properly categorised as a 'Hitler'.

Comment by tslarm on Universal Love Integration Test: Hitler · 2024-01-12T11:47:44.414Z · LW · GW

Comment by tslarm on Introduce a Speed Maximum · 2024-01-12T03:43:28.575Z · LW · GW

they’re recognizing the limits of precise measurement

I don't think this explains such a big discrepancy between the nominal speed limits and the speeds people actually drive at. And I don't think that discrepancy is inevitable; to me it seems like a quirk of the USA (and presumably some other countries, but not all). Where I live, we get 2km/h, 3km/h, or 3% leeway depending on the type of camera and the speed limit. Speeding still happens, of course, but our equilibrium is very different from the one described here; basically we take the speed limits literally, and know that we're risking a fine and demerit points on our licence if we choose to ignore them.

Comment by tslarm on Moloch's Toolbox (2/2) · 2023-12-12T01:08:16.074Z · LW · GW

My read of this passage -- 

Moloch is introduced as the answer to a question – C. S. Lewis’ question in Hierarchy Of Philosophers – what does it? Earth could be fair, and all men glad and wise. Instead we have prisons, smokestacks, asylums. What sphinx of cement and aluminum breaks open their skulls and eats up their imagination?

-- is that the reference to "C. S. Lewis’ question in Hierarchy Of Philosophers" is basically just a joke, and the rest of the passage is not really supposed to be a paraphrase of Lewis.

I agree it's all a bit unclear, though. You might get a reply if you ask Scott directly: he's 'scottalexander' here and on reddit (formerly Yvain on LW), or you could try the next Open Thread on https://www.astralcodexten.com/

Comment by tslarm on Moloch's Toolbox (2/2) · 2023-12-10T11:02:48.212Z · LW · GW

Looks like Scott was being funny -- he wasn't actually referring to a work by Lewis, but to this comic, which is visible on the archived version of the page he linked to:

Edit: is there a way to keep the inline image, but prevent it from being automatically displayed to front-page browsers? I was trying to be helpful but I feel like I might be doing more to cause annoyance...

Edit again: I've scaled it down, which hopefully solves the main problem. Still keen to hear if there's a way to e.g. manually place a 'read more' break in a comment.

Comment by tslarm on TurnTrout's shortform feed · 2023-12-10T03:15:56.795Z · LW · GW

I'm assuming you're talking about our left, because you mentioned 'dark foliage'. If so, that's probably the most obvious part of the cat to me. But I find it much easier to see when I zoom in/enlarge the image, and I think I missed it entirely when I first saw the image (at 1x zoom). I suspect the screen you're viewing it on can also make a difference; for me the ear becomes much more obvious when I turn the brightness up or the contrast down. (I'm tweaking the image rather than my monitor settings, but I reckon the effect is similar.)
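
(If anyone wants to try the same tweak themselves, here's a minimal sketch using Pillow -- the filename is of course hypothetical:)

```python
from PIL import Image, ImageEnhance

img = Image.open("cat_photo.jpg")                 # hypothetical filename
ImageEnhance.Brightness(img).enhance(1.5).show()  # brighter version
ImageEnhance.Contrast(img).enhance(0.7).show()    # lower-contrast version
```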

Comment by tslarm on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T16:23:43.154Z · LW · GW

Just want to publicly thank MadHatter for quickly following through on the runner-up bounty!

Comment by tslarm on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T05:01:52.503Z · LW · GW

Sorry, I was probably editing that answer while you were reading/replying to it -- but I don't think I changed anything significant.

Definitely worth posting the papers to github or somewhere else convenient, IMO, and preferably linking directly to them. (I know there's a tradeoff here with driving traffic to your Substack, but my instinct is you'll gain more by maximising your chance of retaining and impressing readers than by getting them to temporarily land on your Substack before they've decided whether you're worth reading.) 

LWers are definitely not immune to status considerations, but anything that looks like prioritising status over clear, efficient communication will tend to play badly.

And yeah, I think leading with 'crazy shit' can sometimes work, but IME this is almost always when it's either: used as a catchy hook and quickly followed by a rewind to a more normal starting point; part of a piece so entertaining and compellingly-written that the reader can't resist going along with it; or done by a writer who already has high status and a devoted readership.

Comment by tslarm on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T04:32:05.744Z · LW · GW

I think you need to be more frugal with your weirdness points (and more generally your demanding-trust-and-effort-from-the-reader points), and more mindful of the inferential distance between yourself and your LW readers. 

Also remember that for every one surprisingly insightful post by an unfamiliar author, we all come across hundreds that are misguided, mediocre, or nonsensical. So if you don't yet have a strong reputation, many readers will be quick to give up on your posts and quick to dismiss you as a crank or dilettante. It's your job to prove that you're not, and to do so before you lose their attention!

If there's serious thought behind The Snuggle/Date/Slap Protocol then you need to share more of it, and work harder to convince the reader it's worth taking seriously. Conciseness is a virtue but when you're making a suggestion that is easy to dismiss as a half-baked thought bubble or weird joke, you've got to take your time and guide the reader along a path that begins at or near their actual starting point.

Ethicophysics II: Politics is the Mind-Savior opens with language that will trigger the average LWer's bullshit detector, and appears to demand a lot of effort from the reader before giving them reason to think it will be worthwhile. LW linkposts often contain the text of the linked article in the body of the LW post, and at first glance this looks like one of those. In any case, we're probably going to scan the body text before clicking the link. So before we've read the actual article we are hit with a long list of high-effort, unclear-reward, and frankly pretentious-looking exercises. When we do follow the link to Substack we face the trivial inconvenience of clicking two more links and then, if we're not logged in to academia.edu, are met with an annoying 'To Continue Reading, Register for Free' popup. Not a big deal if we're truly motivated to read the paper! But at this point we probably don't have much confidence that it will be worth the hassle.

Comment by tslarm on [Linkpost] George Mack's Razors · 2023-11-28T03:57:44.589Z · LW · GW

I'm interested in people's opinions on this:

If it's a talking point on Reddit, you might be early.

Of course the claim is technically true; there's >0% chance that you can get ahead of the curve by reading reddit. But is it dramatically less likely than it was, say, 5/10/15 years ago? (I know 'reddit' isn't a monolith; let's say we're ignoring the hyper-mainstream subreddits and the ones that are so small you may as well be in a group chat.)

Comment by tslarm on [Linkpost] George Mack's Razors · 2023-11-28T03:44:20.210Z · LW · GW

10. Everyday Razor - If you go from doing a task weekly to daily, you achieve 7 years of output in 1 year. If you apply a 1% compound interest each time, you achieve 54 years of output in 1 year. 

What's the intuition behind this -- specifically, why does it make sense to apply compound interest to the daily task-doing but not the weekly?
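
(For what it's worth, the one reading I can find that reproduces the quoted figures applies the 1% compounding to both cadences and compares the two geometric sums over a year. A quick sketch of that guess -- not Mack's stated derivation:)

```python
# My guess at the intended arithmetic, not Mack's stated derivation: apply the
# 1% compounding per repetition to BOTH cadences and compare a year of each.
def yearly_output(reps_per_year, growth=1.01):
    return sum(growth ** k for k in range(reps_per_year))

print(yearly_output(365) / yearly_output(52))  # ~54.3 -- matches the "54 years"
print(365 / 52)                                # ~7.0  -- the plain "7 years" figure
```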

Comment by tslarm on Seth Explains Consciousness · 2023-11-12T17:12:55.577Z · LW · GW

I think we're mostly talking past each other, but I would of course agree that if my position contains or implies logical contradictions then that's a problem. Which of my thoughts lead to which logical contradictions?

Comment by tslarm on Seth Explains Consciousness · 2023-11-08T03:40:28.913Z · LW · GW

That doesn’t mean qualia can be excused and are to be considered real anyway. If we don’t limit ourselves to objective descriptions of the world then anyone can legitimately claim that ghosts exist because they think they’ve seen them, or similarly that gravity waves are transported across space by angels, or that I’m actually an attack helicopter even if I don’t look like one, or any other unfalsifiable claim, including the exact opposite claims, such as that qualia actually don’t exist. You won’t be able to disagree on any grounds except that you just don’t like it, because you sacrificed the assumptions to do so in order to support your belief in qualia.

Those analogies don't hold, because you're describing claims I might make about the world outside of my subjective experience ('ghosts are real', 'gravity waves are carried by angels', etc.). You can grant that I'm the (only possible) authority on whether I've had a 'seeing a ghost' experience, or a 'proving to my own satisfaction that angels carry gravity waves' experience, without accepting that those experiences imply the existence of real ghosts or real angels.

I wouldn't even ask you to go that far, because -- even if we rule out the possibility that I'm deliberately lying -- when I report those experiences to you I'm relying on memory. I may be mistaken about my own past experiences, and you may have legitimate reasons to think I'm mistaken about those ones. All I can say with certainty is that qualia exist, because I'm (always) having some right now.

I think this is one of those unbridgeable or at least unlikely-to-be-bridged gaps, though, because from my perspective you are telling me to sacrifice my ontology to save your epistemology. Subjective experience is at ground level for me; its existence is the one thing I know directly rather than inferring in questionable ways.

Comment by tslarm on Seth Explains Consciousness · 2023-11-07T10:45:00.582Z · LW · GW

That's the thing, though -- qualia are inherently subjective. (Another phrase for them is 'subjective experience'.) We can't tell the difference between qualia and something that doesn't exist, if we limit ourselves to objective descriptions of the world.

Comment by tslarm on jacquesthibs's Shortform · 2023-11-07T03:57:47.478Z · LW · GW

a 50%+ chance we all die in the next 100 years if we don't get AGI

I don't think that's what he claimed. He said (emphasis added):

if we don’t get AI, I think there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela

Which fits with his earlier sentence about various factors that will "impoverish the world and accelerate its decaying institutional quality".

(On the other hand, he did say "I expect the future to be short and grim", not short or grim. So I'm not sure exactly what he was predicting. Perhaps decline -> complete vulnerability to whatever existential risk comes along next.)

Comment by tslarm on Rational Agents Cooperate in the Prisoner's Dilemma · 2023-09-16T05:50:16.965Z · LW · GW

My model of CDT in the Newcomb problem is that the CDT agent:

  • is aware that if it one-boxes, it will very likely make $1m, while if it two-boxes, it will very likely make only $1k;
  • but, when deciding what to do, only cares about the causal effect of each possible choice (and not the evidence it would provide about things that have happened in the past and are therefore, barring retrocausality, now out of the agent's control).

So, at the moment of decision, it considers the two possible states of the world it could be in (boxes contain $1m and $1k; boxes contain $0 and $1k), sees that two-boxing gets it an extra $1k in both scenarios, and therefore chooses to two-box.

(Before the prediction is made, the CDT agent will, if it can, make a binding precommitment to one-box. But if, after the prediction has been made and the money is in the boxes, it is capable of two-boxing, it will two-box.)
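
To make the dominance reasoning above concrete, here is a minimal sketch (illustrative only; the $1m/$1k figures are the standard Newcomb payoffs already used in this thread):

```python
# Whatever box B already contains, two-boxing yields exactly $1k more.
BOX_A = 1_000  # the transparent box always holds $1k

for box_b in (1_000_000, 0):  # the two possible (fixed) contents of box B
    one_box = box_b
    two_box = box_b + BOX_A
    print(f"box B = ${box_b:,}: one-box ${one_box:,}, two-box ${two_box:,}, "
          f"difference ${two_box - one_box:,}")
# The CDT agent acts on this causal dominance and ignores the evidential link
# between its choice and what the predictor put in box B.
```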

 

I don't have its decision process running along these lines:

"I'm going to one-box, therefore the boxes probably contain $1m and $1k, therefore one-boxing is worth ~$1m and two-boxing is worth ~$1.001m, therefore two-boxing is better, therefore I'm going to two-box, therefore the boxes probably contain $0 and $1k, therefore one-boxing is worth ~$0 and two boxing is worth ~$1k, therefore two-boxing is better, therefore I'm going to two-box."

Which would, as you point out, translate to this loop in your adversarial scenario:

"I'm going to choose A, therefore the predictor probably predicted A, therefore B is probably the winning choice, therefore I'm going to choose B, therefore the predictor probably predicted B, therefore A is probably the winning choice, [repeat until meltdown]"

 

My model of CDT in your Aaronson oracle scenario, with the stipulation that the player is helpless against an Aaronson oracle, is that the CDT agent:

  • is aware that on each play, if it chooses A, it is likely to lose money, while if it chooses B, it is (as far as it knows) equally likely to lose money;
  • therefore, if it can choose whether to play this game or not, will choose not to play.

If it's forced to play, then, at the moment of decision, it considers the two possible states of the world it could be in (oracle predicted A; oracle predicted B). It sees that in the first case B is the profitable choice and in the second case A is the profitable choice, so -- unlike in the Newcomb problem -- there's no dominance argument available this time. 

This is where things potentially get tricky, and some versions of CDT could get themselves into trouble in the way you described. But I don't think anything I've said above, either about the CDT approach to Newcomb's problem or the CDT decision not to play your game, commits CDT in general to any principles that will cause it to fail here.

How to play depends on the precise details of the scenario. If we were facing a literal Aaronson oracle, the correct decision procedure would be:

  • If you know a strategy that beats an Aaronson oracle, play that.
  • Else if you can randomise your choice (e.g. flip a coin), do that.
  • Else just try your best to randomise your choice, taking into account the ways that human attempts to simulate randomness tend to fail.

I don't think any of that requires us to adopt a non-causal decision theory.
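
(As a rough illustration of the 'randomise' step: here's a toy sketch using a simple bigram pattern-predictor I made up for the example -- not Aaronson's actual oracle. A deterministic habit is quickly exploited, while coin-flipping holds the predictor to ~50%.)

```python
import random
from collections import defaultdict, Counter

def predictor_accuracy(chooser, rounds=10_000):
    """Toy pattern-predictor: guesses the next choice from bigram frequencies."""
    counts = defaultdict(Counter)      # last two choices -> counts of next choice
    history, correct = [], 0
    for _ in range(rounds):
        key = tuple(history[-2:])
        guess = counts[key].most_common(1)[0][0] if counts[key] else random.choice("AB")
        choice = chooser(history)
        correct += (guess == choice)
        counts[key][choice] += 1
        history.append(choice)
    return correct / rounds

print(predictor_accuracy(lambda h: "AB"[len(h) % 2]))     # habitual alternation: ~1.0
print(predictor_accuracy(lambda h: random.choice("AB")))  # true coin flips: ~0.5
```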

 

In the version of your scenario where the predictor is omniscient and the universe is 100% deterministic  -- as in the version of Newcomb's problem where the predictor isn't just extremely good at predicting, it's guaranteed to be infallible -- I don't think CDT has much to say. In my view, CDT represents rational decision-making under the assumption of libertarian-style free will; it models a choice as a causal intervention on the world, rather than just another link in the chain of causes and effects.

Comment by tslarm on Rational Agents Cooperate in the Prisoner's Dilemma · 2023-09-12T05:54:18.449Z · LW · GW

green_leaf, please stop interacting with my posts if you're not willing to actually engage. Your 'I checked, it's false' stamp is, again, inaccurate. The statement "if box B contains the million, then two-boxing nets an extra $1k" is true. Do you actually disagree with this?

Comment by tslarm on Rational Agents Cooperate in the Prisoner's Dilemma · 2023-09-11T16:02:23.761Z · LW · GW

I don't think that's quite right. At no point is the CDT agent ignoring any evidence, or failing to consider the implications of a hypothetical choice to one-box. It knows that a choice to one-box would provide strong evidence that box B contains the million; it just doesn't care, because if that's the case then two-boxing still nets it an extra $1k. It doesn't merely prefer two-boxing given its current beliefs about the state of the boxes, it prefers two-boxing regardless of its current beliefs about the state of the boxes. (Except, of course, for the belief that their contents will not change.)

Comment by tslarm on How ForumMagnum builds communities of inquiry · 2023-09-05T12:52:00.706Z · LW · GW

We've had reacts for a couple months now and I'm curious to hear, both from old-timers and new-timers, what people's experience of them was, and how much they shape their expectations/culture/etc.

I received (or at least, noticed receiving) a react for the first time recently, and honestly I found it pretty annoying. It was the 'I checked, it's False' one, which basically feels like a quasi-authoritative, quasi-objective, low effort frowny-face stamp where an actual reply would be much more useful.

Edit: If it was possible to reply directly to the react, and have that response be visible to readers who mouse over the react, that would help on the emotional side. On the practical side, I guess it's a question of whether, in the absence of reacts, I would have got a real reply or just an unexplained downvote.

Comment by tslarm on Rational Agents Cooperate in the Prisoner's Dilemma · 2023-09-04T13:16:15.822Z · LW · GW

green_leaf, what claim are you making with that icon (and, presumably, the downvote & disagree)? Are you saying it's false that, from the perspective of a CDT agent, two-boxing dominates one-boxing? If not, what are you saying I got wrong?

Comment by tslarm on Rational Agents Cooperate in the Prisoner's Dilemma · 2023-09-04T11:19:12.984Z · LW · GW

Your 'modified Newcomb's problem' doesn't support the point you're using it to make. 

In Newcomb's problem, the timeline is:

prediction is made -> money is put in box(es) -> my decision: take one box or both? -> I get the contents of my chosen box(es)

CDT tells me to two-box because the money is put into the box(es) before I make my decision, meaning that at the time of deciding I have no ability to change their contents.

In your problem, the timeline is:

rules of the game are set -> my decision: play or not? -> if I chose to play, 100x(prediction is made -> my decision: A or B -> possible payoff)

CDT tells me to play the game if and only if the available evidence suggests I'll be sufficiently unpredictable to make a profit. Nothing prevents a CDT agent from making and acting on that judgment.

This game is the same: you may believe that I can predict your behavior with 70% probability, but when considering option A, you don't update on the fact that you're going to choose option A. You just see that you don't know which box I've put the money in, and that by the principle of maximum entropy, without knowing what choice you're going to make, and therefore without knowing where I have a 70% chance of having not put the money, it has a 50% chance of being in either box, giving you an expected value of $0.25 if you pick box A.

Based on this, I think you've misdiagnosed the alleged mistake of the CDT agent in Newcomb's problem. The CDT agent doesn't fail to update on the fact that he's going to two-box; he's aware that this provides evidence that the second box is empty. If he believes that the predictor is very accurate, his EV will be very low. He goes ahead and chooses both boxes because their contents can't change now, so, regardless of what probability he assigns to the second box being empty, two-boxing has higher EV than one-boxing.

Likewise, in your game the CDT agent doesn't fail to update on the fact that he's going to choose A; if he believes your predictions are 70% accurate and there's nothing unusual about this case (i.e. he can't predict your prediction nor randomise his choice), he assigns -EV to this play of the game regardless of which option he picks. And he sees this situation coming from the beginning, which is why he doesn't play the game.
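
(For concreteness, a back-of-the-envelope EV sketch. The $0.50 payout is inferred from the quoted '$0.25 at 50/50' figure, and the per-play cost is a placeholder, since the actual entry fee isn't quoted here:)

```python
PAYOUT = 0.50         # inferred from the quoted "$0.25 expected value at 50/50"
COST_PER_PLAY = 0.25  # placeholder -- the real entry fee isn't quoted in this thread

def ev(p_money_in_chosen_box, cost=COST_PER_PLAY):
    return p_money_in_chosen_box * PAYOUT - cost

print(ev(0.5))  # naive 50/50 prior: breaks even at this assumed cost
print(ev(0.3))  # calibrated belief in a 70%-accurate predictor: ~-$0.10 per play
# With the calibrated belief the EV is the same whichever box is picked, which
# is why the CDT agent's best move is to decline the game at the outset.
```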

Comment by tslarm on Seth Explains Consciousness · 2023-08-24T04:04:45.441Z · LW · GW

Without reading the book we can't be sure. But the trouble is that this claim has been made a million times, and in every previous case the author has turned out to be either ignoring the hard problem, misunderstanding it, or defining it out of existence. So if a longish, very positive review with the title 'x explains consciousness' doesn't provide any evidence that x really is different this time, it's reasonable to think that it very likely isn't.

The reason these two situations look different is that it's now easy for us to verify that the Earth is flat, but it's hard for us to verify what's going on with consciousness. 

Even if I had no way of verifying it, "the earth is (roughly) spherical and thus has no edges, and its gravity pulls you toward its centre regardless of where you are on its surface" would clearly be an answer to my question, and a candidate explanation pending verification. My question was only 'confused' in the sense that it rested on a false empirical assumption; I would be perfectly capable of understanding your correction to this assumption. (Not necessarily accepting it -- maybe I think I have really strong evidence that the earth is flat, or maybe you haven't backed up your true claim with good arguments -- but understanding what it means and why it would resolve my question).

Are you suggesting that in the case of the hard problem, there may be some equivalent of the 'flat earth' assumption that the hard-problemists hold so tightly that they can't even comprehend a 'round earth' explanation when it's offered?

Comment by tslarm on My current LK99 questions · 2023-08-02T03:19:08.957Z · LW · GW

I would have considered fact-checking to be one of the tasks GPT is least suited to, given its tendency to say made-up things just as confidently as true things. (And also because the questions it's most likely to answer correctly will usually be ones we can easily look up by ourselves.) 

edit: whichever very-high-karma user just gave this a strong disagreement vote, can you explain why? (Just as you voted, I was editing in the sentence 'Am I missing something about GPT-4?')

Comment by tslarm on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-27T09:43:27.073Z · LW · GW

e.g. Eliezer would put way less than 10% on fish feeling pain in a morally relevant way

Semi-tangent: setting aside the 'morally relevant way' part, has Eliezer ever actually made the case for his beliefs about (the absence of) qualia in various animals? The impression I've got is that he expresses quite high confidence, but sadly the margin is always too narrow to contain the proof.

Comment by tslarm on Have you ever considered taking the 'Turing Test' yourself? · 2023-07-27T06:05:19.913Z · LW · GW
  • What about AI researchers? How many of them do you think you could persuade?

If they were motivated to get it right and we weren't in a huge rush, close to 100%. Current-gen LLMs are amazingly good compared to what we had a few years ago, but (unless the cutting edge ones are much better than I realise) they would still be easily unmasked by a motivated expert. So I shouldn't need to employ a clever strategy of my own -- just pass the humanity tests set by the expert.

  • How many random participants do you believe you could convince that you are not an AI?

This is much harder to estimate and might depend greatly on the constraints on the 'random' selection. (Presumably we're not randomly sampling from literally everyone.)

In the pre-GPT era, there were occasional claims that some shitty chatbot had passed the Turing test. (Eugene Goostman is the one that immediately comes to mind.) Unless the results were all completely fake/rigged, this suggests that non-experts are sometimes very bad at determining humanity via text conversation. So in this case my own strategy would be important, as I couldn't rely on the judges to ask the right questions or even to draw the right inferences from my responses.

If the judges were drawn from a broad enough pool to include many people with little-to-no experience interacting with GPT and its ilk, I couldn't rely on pinpointing the most obvious LLM weaknesses and demonstrating that I don't share them. (Depending on the structure of the test, I could perhaps talk the judges through the best way to unmask the bot. But that seems to go against the spirit of the question.) Honestly, off the top of my head I really don't know what would best convince the average person of my humanity via a text channel, and I wouldn't be very confident of success. 

(I'm assuming here that my AI counterpart(s) would be set up to make a serious attempt at passing the Turing test; obviously the current public versions are much too eager to give away their true identities.)

Comment by tslarm on Why it's so hard to talk about Consciousness · 2023-07-18T05:30:08.130Z · LW · GW

what's the point of imagining a hypothetical set of physical laws that lack internal coherence?

I don't think they lack internal coherence; you haven't identified a contradiction in them. But one point of imagining them is to highlight the conceptual distinction between, on the one hand, all of the (in principle) externally observable features or signs of consciousness, and, on the other hand, qualia. The fact that we can imagine these coming completely apart, and that the only 'contradiction' in the idea of zombie world is that it seems weird and unlikely, shows that these are distinct (even if closely related) concepts.

This conceptual distinction is relevant to questions such as whether a purely physical theory could ever 'explain' qualia, and whether the existence of qualia is compatible with a strictly materialist metaphysics. I think that's the angle from which Yudkowsky was approaching it (i.e. he was trying to defend materialism against qualia-based challenges). My reading of the current conversation is that Signer is trying to get Carl to acknowledge the conceptual distinction, while Carl is saying that while he believes the distinction makes sense to some people, it really doesn't to him, and his best explanation for this is that some people have qualia and some don't.

Comment by tslarm on AI #17: The Litany · 2023-06-23T03:52:43.551Z · LW · GW

After a while, you are effectively learning the real skills in the simulation, whether or not that was the intention.

Why the real skills, rather than whatever is at the intersection of 'feasible' and 'fun/addictive'? Even if the consumer wants realism (or thinks that they do), they are unlikely to be great at distinguishing real realism from fantasy realism.

Comment by tslarm on Do humans still provide value in correspondence chess? · 2023-05-23T13:40:22.796Z · LW · GW

FWIW, the two main online chess sites forbid the use of engines in correspondence games. But both do allow the use of opening databases. 

(https://www.chess.com/terms/correspondence-chess#problems, https://lichess.org/faq#correspondence)

Comment by tslarm on What is it like to be a compatibilist? · 2023-05-07T16:27:06.701Z · LW · GW

I agree that your model is clearer and probably more useful than any libertarian model I'm aware of (with the possible exception, when it comes to clarity, of some simple models that are technically libertarian but not very interesting).

Do you call it illusion because the outcomes you deem possible are not meta-possible: only one will be the output of your decision making algorithm and so only one can really happen?

Something like that. The SEP says "For most newcomers to the problem of free will, it will seem obvious that an action is up to an agent only if she had the freedom to do otherwise.", and basically I a) have not let go of that naive conception of free will, and b) reject the analyses of 'freedom to do otherwise' that are consistent with complete physical determinism. 

I know it seems like the alternatives are worse; I remember getting excited about reading a bunch of Serious Philosophy about free will, only to find that the libertarian models that weren't completely mysterious were all like 'mostly determinism, but maybe some randomness happens inside the brain at a crucial moment, and then everything downstream of that counts as free will for some reason'.

But basically I think there's enough of a crack in our understanding of the world to allow for the possibility that either a) a brilliant theory of libertarian free will will emerge and receive some support from, or at least remain consistent with, developments in physics; or b) libertarian free will is real but just inherently baffling, like consciousness (qualia) or some of the impossible ontological questions.

Comment by tslarm on What is it like to be a compatibilist? · 2023-05-07T16:02:47.001Z · LW · GW

Why do you think LFW is real?

I'm not saying it's real -- just that I'm not convinced it's incoherent or impossible.

And in this sense, what you have is some inherent randomness within the decision-making algorithms of the brain

This might get me thrown into LW jail for posting under the influence of mysterianism, but: 

I'm not convinced that there can't be a third option alongside ordinary physical determinism and mere randomness. There's a gaping hole in our (otherwise amazingly successful and seemingly on the way to being comprehensive) physical picture of reality: what the heck is subjective experience? From the objective, physical perspective there's no reason anything should be accompanied by feelings; but each of us knows from direct experience that at least some things are. To me, the Hard Problem is real but probably completely intractable. Likewise, there are some metaphysical questions that I think are irresolvably mysterious -- Why is there anything? Why this in particular? -- and they point to the fact that our existing concepts, and I suspect our brains, are inadequate to the full description or explanation of reality. This is of course not a good excuse for an anything-goes embrace of baseless speculation or wishful thinking; but the link between free will and consciousness, combined with the baffling mystery of consciousness (in the qualia sense), leaves me open to the possibility that free will is something weird and different from anything we currently understand and maybe even inexplicable.

Comment by tslarm on What is it like to be a compatibilist? · 2023-05-07T15:42:38.276Z · LW · GW

This is hard to respond to, in part because I don't recognise my views in your descriptions of them, and most of what you wrote doesn't have a very obvious-to-me connection to what I wrote. I suspect you'll take this as further evidence of my confusion, but I think you must have misunderstood me.

The confusion in your original post is that you're not treating "choice" as a process with steps that produce an output, but rather as something mysterious that happens instantaneously while somehow being outside of reality.

No I'm not. But I don't know how to clarify this, because I don't understand why you think I am. I do think we can narrow down a 'moment of decision' if we want to, meaning e.g. the point in time where the agent becomes conscious of which action they will take, or when something that looks to us like a point of no return is reached. But obviously the decision process is a process, and I don't get why you think I don't understand or have failed to account for this.

LW compatibilism isn't believing that choice magically happens outside of spacetime while everything else happens deterministically, but rather including your decision procedure as part of "things happening deterministically".

I'm fully aware of that; as far as I know it's an accurate description of every version of compatibilism, not just 'LW compatibilism'.

retrocausal, in the sense of "revealing" or "choosing" anything about the past

How is 'revealing something about the past' retrocausal?

As other people have mentioned, rationalists don't typically think in those terms. There isn't actually any difference between those two ideas, and there's really nothing to "defend".

There is a difference: the meaning of the words 'free will', or in other words the content of the concept 'free will'. From one angle it's pure semantics, sure -- but it's not completely boring and pointless, because we're not in a situation where we all have the exact same set of concepts and are just arguing about which labels to apply to them.

the only place where hypothetical alternative choices exist is in the decider's brain

This and other passages make me think you're still interpreting me as saying that the two possible choices 'exist' in reality somewhere, as something other than ideas in brains. But I'm not. They exist in a) my description of two versions of reality that hypothetically (and mutually exclusively) could exist, and b) the thoughts of the chooser, to whom they feel like open possibilities until the choice process is complete. At the beginning of my scenario description I stipulated determinism, so what else could I mean?

Well, it makes the confusion more obvious, because now it's clearer that HA/A and HB/B are complete balderdash.

Even with the context of the rest of your comment, I don't understand what you mean by 'HA/A and HB/B are complete balderdash'. If there's something incoherent or contradictory about "either the propositions 'HA happened, A is the current state, I will choose CA, FA will happen' are all true, or the propositions 'HB happened, B is the current state, I will choose CB, FB will happen' are all true; the ones that aren't all true are all false", can you be specific about what it is? Or if the error is somewhere else in my little hypothetical, can you identify it with direct quotes?

Comment by tslarm on What is it like to be a compatibilist? · 2023-05-07T07:51:50.976Z · LW · GW

I should clarify that I'm not arguing for libertarianism here, just trying to understand the appeal of (and sometimes arguing against) compatibilism. 

(FWIW, I don't think libertarian free will is definitely incoherent or impossible, and combined with my incompatibilism that makes me in practice a libertarian-by-default: if I'm free to choose which stance to take, libertarianism is the correct one. Not that that helps much in resolving any of the difficult downstream questions, e.g. about when and to what extent people are morally responsible for their choices.)

Here is a neat compatibilist model, according to which you (and not a rock) have an ability to select between different outcomes in a deterministic universe and which explicitly specifies what 'possible' means: possibility is in the mind and so is the branching of futures. When you are executing your decision making algorithm you mark some outcomes as 'possible' and backpropagate from them to the current choice you are making. Thus, your mental map of the reality has branches of possible futures between which you are choosing. By design, the algorithm doesn't allow you to choose an outcome you deem impossible. If you already know for certain what you will choose, then you've already chosen. So the initial intuition is kind of true. You do need 'possible futures' to exist so that you can have free will: perform your decision making ability which separates you from the rock. But the possibility, and branching futures do not need to exist separately of you. They can just be part of your mind.

I'm sorry to give a repetitive response to a thoughtful comment, but my reaction to this is the predictable one: I don't think I'm failing to understand you, but what you're describing as free will is what I would describe as the illusion of free will. 

Aside from the semantic question, I suspect a crux is that you are confident that libertarian free will is 'not even wrong', i.e. almost meaninglessly vague in its original form and incoherent if specified more precisely? So the only way to rescue the concept is to define free will in such a way that we only need to explain why we feel like we have the thing we vaguely gesture at when we talk about libertarian free will.

If so, I disagree: I admit that I don't have a good model of libertarian free will, but I haven't seen sufficient reason to completely rule it out. So I prefer to keep the phrase 'free will' for something that fits with my (and I think many other people's) instinctive libertarianism, rather than repurpose it for something else.

Comment by tslarm on What is it like to be a compatibilist? · 2023-05-07T07:25:17.579Z · LW · GW

It seems to me that your confusion is contending there are two past/present states (HA+A / HB+B) when in fact reality is simply H -> S -> C. There is one history, one state, and one choice that you will end up making. The idea that there is a HA and HB and so on is wrong, since that history H has already happened and produced state S.

I guess I invited this interpretation with the phrasing "there are two relevantly-different states of the world I could be in". But what I meant could be rephrased as "either the propositions 'HA happened, A is the current state, I will choose CA, FA will happen' are all true, or the propositions 'HB happened, B is the current state, I will choose CB, FB will happen' are all true; the ones that aren't all true are all false".

I'm not sure how much that rephrasing would change the rest of your answer, so I won't spend too much time trying to engage with it until you tell me, but broadly I'm not sure whether you are defending compatibilism or hard determinism. (From context I was expecting the former, but from the text itself I'm not so sure.)