Posts

JBlack's Shortform 2021-08-28T07:42:42.667Z

Comments

Comment by JBlack on Oracle predictions don't apply to non-existent worlds · 2021-09-16T07:48:17.557Z · LW · GW

The Oracle is predicting the combined results because that's what makes the thought experiment interesting.

Comment by JBlack on Oracle predictions don't apply to non-existent worlds · 2021-09-16T05:05:20.755Z · LW · GW

While the Oracle's prediction only applies to the world it was delivered in, you don't know which of the as-yet hypothetical worlds that will be. Whatever your decision ends up being, the Oracle's prediction will be correct for that decision.

If you hear that you will win, bet on the Red Sox, and then lose, your decision process was still correct but your knowledge about the world was incorrect. You believed that what you heard came from an Oracle, but it didn't.

This also applies to Newcomb's problem: if at any point you reason about taking one box and there's nothing in it, or about taking two boxes and there's a million in one, then you are implicitly exploring the possibility that Omega is not a perfect predictor. That is, that the problem description is incorrect.

Comment by JBlack on Chantiel's Shortform · 2021-09-16T04:09:07.852Z · LW · GW

Ah okay, so we're talking about a bug in the hardware implementation of an AI. Yes, that can certainly happen and will contribute some probability mass to alignment failure, though probably very little by comparison with all the other failure modes.

Comment by JBlack on Chantiel's Shortform · 2021-09-15T01:48:30.375Z · LW · GW

I'm confused. In the original comments you're talking about a super-intelligent AI noticing an exploitable hardware flaw in itself and deliberately using that flaw to hack its utility function with something like a rowhammer exploit.

Then you say that the utility function already had an error in it from the start and the AI isn't using its intelligence to do anything except note that it has this flaw. Then you introduce an analogy in which I have a brain flaw that under some bizarre circumstances will turn me into a paperclip maximizer, and I am aware that I have it.

In this analogy, I'm doing what? Deliberately taking drugs and using guided meditation to rowhammer my brain into becoming a paperclip maximizer?

Comment by JBlack on Chantiel's Shortform · 2021-09-14T14:32:43.704Z · LW · GW

Think of something you currently value, the more highly valued the better. You don't need to say what it is, but it does need to be something that seriously matters to you. Not just something you enjoy, but something that you believe is truly worthwhile.

I could try to give examples, but the thought exercise only works if it's about what you value, not me.

Now imagine that you could press a button so that you no longer care about it at all, or even actively despise it. Would you press that button? Why, or why not?

Comment by JBlack on Does blockchain technology offer potential solutions to some AI alignment problems? · 2021-09-11T03:42:25.258Z · LW · GW

Do you mean some sort of layer inversion where the only way to send any sort of data packet to some other machine is to ... use a blockchain, which relies on the ability to send packets to other machines? I don't get how this works.

Comment by JBlack on Does blockchain technology offer potential solutions to some AI alignment problems? · 2021-09-10T13:14:54.367Z · LW · GW

It could convince you to connect it to the Internet.

Though this is already a false dichotomy. The negation of "on the blockchain" is not "disconnected from the internet". Almost all traditional hardware is connected to the internet.

Comment by JBlack on Covid 9/9: Passing the Peak · 2021-09-10T13:00:54.734Z · LW · GW

I'm also Australian, though not in New South Wales. Prior to the current NSW outbreak, localised and usually short lockdowns (generally one city, or at worst one state at a time) had been overwhelmingly effective at keeping the rest of the nation both COVID-free and free of restrictions.

While I do disagree with a great deal of what Australia's various governments have been doing, that has not been one of those things.

The current outbreak has come as a shock for two reasons. The first is that the NSW state government was slower than every other state to act, breaking the implicit deal of fast temporary sacrifices to eliminate transmission and protect everyone else. The second is that this is the Delta variant, known to be more transmissible.

Both of these meant that the NSW outbreak rapidly grew to a size that outpaced testing, tracing, and isolation, so eliminating it with lockdown measures would take much longer and require more stringent restrictions than any previous outbreak did. The NSW premier made the decision to abandon that approach altogether. The new strategy is to rush mass vaccinations and then drop most restrictions.

Projections show various Sydney hospitals being overwhelmed within weeks if restrictions are dropped now. So it's now a three-way balance over the next 6-8 weeks between vaccine supply, already stretched hospital capacity, and the ability of people and businesses to endure whatever restrictions are needed to keep the hospitals functioning and reduce avoidable deaths.

Comment by JBlack on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-10T08:10:39.299Z · LW · GW

Thanks, making use of the relatively low propagation speed hadn't occurred to me.

That would indeed reduce the scaling of data bandwidth significantly. It would still exist, just not quite as severely. Area versus volume scaling still means that bandwidth dominates compute as speeds increase (with the volume emulated per node decreasing), just not quite as rapidly.

I didn't mean "tick" as a literal physical thing that happens in brains, just a term for whatever time scale governs the emulation updates.

Comment by JBlack on The Duplicator: Instant Cloning Would Make the World Economy Explode · 2021-09-09T08:05:07.079Z · LW · GW

You seem to be talking about a compute-dominated process, with almost perfect data locality. I suspect that brain emulation may be almost entirely communication-dominated with poor locality and (comparatively) very little compute. Most neurons in the brain have a great many synapses, and the graph of connections has relatively small diameter.

So emulating any substantial part of a human brain may well need data from most of the brain every "tick". Suppose emulating a brain in real time takes 10 units per second of compute, and 1 unit per second of data bandwidth (in convenient units where a compute node has 10 units per second of each). So a single node is bottlenecked on compute and can only run at real time.

To achieve 2x speed you can run on two nodes to get the 20 units per second of compute capability, but your data bandwidth requirement is now 4 units/second: both nodes need full access to the data, and they need to get it in half the time. After about 3x speed-up, there is no more benefit to adding nodes. They all hit their I/O capacity, and adding more will just slow them all down because they all need to access every node's data every tick.

This is even making the generous assumption that links between nodes have the same capacity and no more latency or coordination issues than a single node accessing its own local data.

I've obviously just made up numbers to demonstrate scaling problems in an easy way here. The real numbers will depend upon things we still don't know about brain architecture, and on future technology. The principle remains the same, though: different resource requirements scale in different ways, which yields a "most efficient" speed for given resource constraints, and it likely won't be at all cost-effective to vary from that by an order of magnitude in either direction.
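
To make the arithmetic concrete, here is a toy version of those made-up numbers. It adds one assumption that isn't stated above: the cluster's shared interconnect has about the same total capacity (10 units/second) as a single node's local data path. Under that reading, the achievable speedup peaks at roughly 3x and then declines.

```python
# Toy model of the scaling argument, using the made-up numbers from above.
# Extra assumption: every node must see the full brain state each tick, and the
# shared interconnect totals about 10 units/sec (same as one node's local path).

COMPUTE_PER_REALTIME = 10    # compute units/sec needed to emulate in real time
BANDWIDTH_PER_REALTIME = 1   # data units/sec one consumer needs at real-time speed
NODE_COMPUTE = 10            # compute units/sec available per node
INTERCONNECT = 10            # assumed total shared bandwidth, units/sec

def achievable_speedup(n_nodes: int) -> float:
    """Speedup is capped by total compute and by the shared interconnect."""
    compute_limit = n_nodes * NODE_COMPUTE / COMPUTE_PER_REALTIME
    # every node pulls the full state each tick, so traffic grows as n * speedup
    bandwidth_limit = INTERCONNECT / (n_nodes * BANDWIDTH_PER_REALTIME)
    return min(compute_limit, bandwidth_limit)

for n in range(1, 7):
    print(n, round(achievable_speedup(n), 2))
# 1 1.0, 2 2.0, 3 3.0, 4 2.5, 5 2.0, 6 1.67 -- past about 3 nodes, extra
# hardware just adds interconnect traffic and the whole cluster slows down
```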

Comment by JBlack on [deleted post] 2021-09-08T13:25:19.583Z

Yes, I can see some benefits to responding to straw-manning as if it were misphrased enquiry. I do think that on at least 90% of the occasions when straw-manning happens, that isn't what is actually going on.

Most of the times I see it happen are in scenarios where curious enquiry about the difference is not a plausible motivation. In my experience it has nearly always happened where both sides are trying to win some sort of debate, usually with an audience.

That aside, the proposed mechanism for straw-manning was that it is a particular kind of mistake, so I would expect to see at least some significant fraction of cases where the same kind of enquiry was intended, and the mistake was not made. I haven't observed any significant fraction of such cases in the situations where I have seen straw-man arguments used.

I agree that the fictional example I wrote does have a tone that implies that there is no difference between my caricature and your position. That matches the majority of cases where I see straw-man arguments being used. We could discuss the special case of straw-manning where such implication isn't present, but I think that would reduce the practical scope to near zero.

Comment by JBlack on Why the technological singularity by AGI may never happen · 2021-09-08T12:38:43.015Z · LW · GW

Initialising a starting state of 700 MB at 10^-21 J per bit operation costs about 6 picojoules.
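
Spelled out, this is just the arithmetic from that sentence:

```python
# 700 MB of starting state at 10^-21 J per bit operation
bits = 700e6 * 8                 # 700 MB expressed in bits
energy_joules = bits * 1e-21
print(energy_joules)             # 5.6e-12 J, i.e. about 6 picojoules
```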

Obtaining that starting state through evolution probably cost many exajoules, but that's irrelevant to the central thesis of the post: fundamental physical limits based on the cost of energy required for the existence of various levels of intelligence.

If you really intended this post to hypothesize that the only way for AI to achieve high intelligence would be to emulate all of evolution in Earth's history, then maybe you can put that in another post and invite discussion on it. My comment was in relation to what you actually wrote.

Comment by JBlack on [deleted post] 2021-09-07T12:23:47.681Z

[Comment somehow ended up in triplicate after some parse error displayed]

Comment by JBlack on [deleted post] 2021-09-07T12:23:06.881Z

[My comment ended up triplicated, and no delete option]

Comment by JBlack on [deleted post] 2021-09-07T12:22:49.904Z

Please don't take the following as seriously as it appears:

Just wanna make sure: what's the difference between what you're thinking, and naïve idiocy that sincerely holds that every time people engage in straw-man behaviour, they're really trying honestly to discern what the other person is thinking and have just inadvertently misrepresented their position in the worst plausible way and forgotten to put a question mark on it?

Obviously this is a straw-man presentation of your thesis, yet it follows the form of the "nothing wrong with wanting to check" alternative. Is it actually any better with the question than without? My suspicion is that there isn't a lot of difference.

I guess the true test will be to wait and see whether this comment thread devolves into acrimonious and hostility-filled polarisation.

Comment by JBlack on Is LessWrong dead without Cox’s theorem? · 2021-09-07T11:39:17.742Z · LW · GW

Did you even read the last line of my comment?

I down-voted you for poisoning the well.

Comment by JBlack on Is LessWrong dead without Cox’s theorem? · 2021-09-07T11:04:20.862Z · LW · GW

Yes, it does. In a charitable way.

Comment by JBlack on Conditional on the first AGI being aligned correctly, is a good outcome even still likely? · 2021-09-07T02:48:23.442Z · LW · GW

It depends upon how long it takes, or how likely it is, for AGI to bootstrap into super-intelligence for aligned versus non-aligned AGIs. If the orthogonality thesis holds, then aligned AGI is no less or more intrinsically capable of super-intelligence than non-aligned.

I would suspect that aligned super-intelligent AGI would be far more capable than we are at detecting and preventing development of unaligned super-intelligence. The fact that in this hypothetical scenario we successfully produced aligned AGI before non-aligned AGI would be moderately strong evidence that we were not terrible at it. So it would be reasonable to suppose that the chances of future development of unaligned super-intelligence after a friendly super-intelligence would be markedly lowered.

So I suspect that the bulk of probability is in the case of developing aligned AGI that does not develop super-intelligence, or at least not fast enough to prevent an unaligned one from overtaking it.

This isn't an implausible scenario, certainly. While I can't think of any reason why this would be more likely than not, I certainly don't assign it negligible probability.

Comment by JBlack on Is LessWrong dead without Cox’s theorem? · 2021-09-06T23:26:38.269Z · LW · GW

My main criterion for up-voting comments and posts is whether I think others would be likely to benefit from reading them. This topic has come up a few times already with much better analysis, so I did not up-vote.

My main criterion for down-voting is whether I think it is actively detrimental to thought or discussion about a topic. Your post doesn't meet that criterion either (despite the inflammatory title), so I did not down-vote it.

Your comment in this thread does meet that criterion, and I've down-voted it. It is irrelevant to the topic of the post, does not introduce any interesting argument, applies a single judgement without evidence to a diverse group of people, and is adversarial, casting disagreement with or mere lack of interest in your original point in terms of deliberate suppression of a point of view.

So no, you have not been down-voted for "pointing it out". You have (at least in my case) been down-voted for poisoning the well.

Comment by JBlack on How does your data backup solution setup work? · 2021-09-06T15:15:22.186Z · LW · GW

I just back up to an external drive once per week, rotating between two drives stored in different parts of the house (one in a box within a bug-out bag). Once every couple of months I rotate one of the drives off site. I've tested and documented the restore procedures, and the automated process tests the integrity of the backups and reports on the health statistics of the drives.
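
As a minimal sketch of that weekly routine (the paths, mount point, and device name here are placeholders, and it assumes rsync and smartctl are installed; adjust to your own setup):

```python
# Weekly backup sketch: mirror the data, re-verify by checksum, report drive health.
import datetime
import subprocess

SOURCE = "/home/"                 # placeholder source directory
DEST = "/mnt/backup_drive/home/"  # placeholder mount point of this week's drive

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

run(["rsync", "-a", "--delete", SOURCE, DEST])                 # mirror the data
run(["rsync", "-a", "--checksum", "--dry-run", SOURCE, DEST])  # re-check: lists any checksum mismatches
run(["smartctl", "-H", "/dev/sdb"])                            # drive health summary (device is a placeholder)
print("backup completed", datetime.date.today())
```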

I can afford to lose up to a week's worth of updates at any given time, and expect to at some point. For especially valuable things I sometimes do a midweek backup. The worst-case scenario would be having my house completely destroyed without any chance to even grab the bug-out bag, during the same period as the off-site drive failing without notice. This combination is possible, but seems unlikely.

You can keep copies of passwords in the same locations as you keep cash, important documents, and other valuables. Don't have just one copy, don't keep them in the same place. You can obscure passwords in many different ways: just look at how many documents, receipts, cards, and similar have meaningless identifiers on them.

Total cost: About $600 over the past fifteen years for drives, some hours to refine the backup configuration, about 5 minutes per week of attention, and occasional updates when I rearrange my home network significantly.

Comment by JBlack on [Link post] When pooling forecasts, use the geometric mean of odds · 2021-09-06T14:24:17.134Z · LW · GW

When aggregating data, selection of aggregation method always depends upon the answer to the question "for what purpose?"

If you include one extreme outlier prediction, it can radically shift the geometric mean of a bunch of moderate ones. Is this a desirable property for your purposes?

For example: if three people all predict that Ms Green will win something versus Dr Blue with 1:1 odds, and I predict that Dr Blue has one in a million chance, then the arithmetic mean of probabilities says that between us, we think that Dr Blue has about 38% chance. Geometric mean of odds says that we think Dr Blue has 3% chance. Is either of these more useful to you for some purpose than the other?
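
For concreteness, here is that calculation with the same numbers as the example:

```python
import math

# Four forecasts of P(Dr Blue wins): three at 50%, one at one-in-a-million.
probs = [0.5, 0.5, 0.5, 1e-6]

# Arithmetic mean of probabilities
arith = sum(probs) / len(probs)

# Geometric mean of odds, converted back to a probability
odds = [p / (1 - p) for p in probs]
geo_odds = math.prod(odds) ** (1 / len(odds))
geo = geo_odds / (1 + geo_odds)

print(f"{arith:.0%}")  # ~38%
print(f"{geo:.0%}")    # ~3%
```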

Comment by JBlack on Alex Ray's Shortform · 2021-09-06T14:10:07.953Z · LW · GW

Moore appears to go from "it's safe to drink a cup of glyphosate" to (being offered the chance to do that) "of course not / I'm not stupid".

There seem to be two different concepts being conflated here. One is "it will be extremely unlikely to cause permanent injury", while the other is "it will be extremely unlikely to have any unpleasant effects whatsoever". I have quite a few personal experiences with things that are the first but absolutely not the second, and would fairly strenuously avoid going through them again without extremely good reasons.

I'm sure you can think of quite a few yourself.

Comment by JBlack on Hope and False Hope · 2021-09-06T13:52:51.679Z · LW · GW

If you prefer, mentally insert an "otherwise, ..." after the first paragraph.

Comment by JBlack on Acausal Trade and the Ultimatum Game · 2021-09-06T11:35:18.333Z · LW · GW

If the first player knows the second player's distribution, then their optimum strategy is always a single point, the one where 

  (1-offer) * P(offer accepted)

is maximized. You can do this by setting P(50%) = 1 and P(x) < 1 / (2(1-x)) for all x < 50%. Choosing a distribution only just under these limits maximizes player 2's payoff against irrational player 1s, while providing an incentive for smarter player 1s to always choose 50%.
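
A small sketch of that acceptance schedule (the epsilon margin and the 1% offer grid are just illustrative choices):

```python
# Player 2 accepts a 50% offer for sure, and accepts a lower offer x with
# probability just under 1/(2(1-x)), so player 1's expected share
# (1 - x) * P(accept) is always maximized at x = 0.5.
EPS = 1e-6  # how far below the bound player 2 sits (arbitrary illustrative margin)

def p_accept(offer: float) -> float:
    if offer >= 0.5:
        return 1.0
    return 1 / (2 * (1 - offer)) - EPS

def player1_expected_share(offer: float) -> float:
    return (1 - offer) * p_accept(offer)

offers = [i / 100 for i in range(1, 100)]
best = max(offers, key=player1_expected_share)
print(best, player1_expected_share(best))  # 0.5 0.5 -- the fair offer wins
```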

In general, it never makes sense for the acceptance probability to decrease for larger amounts offered, so the acceptance probability is a cumulative distribution function for a threshold value. Hence any viable strategy is equivalent to drawing a threshold value from some distribution. So in principle, both players are precommitting to a single number in each round, drawn from some distribution.

Nonetheless, the game does not become symmetric. Even when both players enter the game with a precommitted split drawn from a distribution, the first player has the disadvantage that they cannot win more than the amount they commit to, while the second player will receive a larger payout for any proposal above their committed level. So for any distribution other than "always 50%", the first player should propose unfair splits slightly more often than the second player rejects them.

However, in settings where the players are known to choose precommitted splits from a distribution, one player or the other can always do better by moving their cumulative distribution closer to "always 50%". This is the only stable equilibrium. (Edit: I messed up the assumptions in the maths, and this is completely wrong)

As seen above, a population of player 2s with a known precommitment strategy can induce player 1 to offer 50% all the time. But this still isn't a stable equilibrium! Player 2s can likewise choose a rejection function that incentivizes any offer short of 100%. This can be seen as a slightly skewed version of the Prisoner's Dilemma, where either side choosing a distribution that incentivizes a greater than 50% pay-off to themselves is defecting, and one that incentivizes 50% is cooperating.

Comment by JBlack on Hope and False Hope · 2021-09-06T08:54:40.385Z · LW · GW

What you are describing is covered by the condition at the start of my post: "very much (at least order-of-magnitude) greater confidence that the process will work".

My calculation is based on the minimum probability that it will work for it to be worthwhile for me, which is around 5% chance of success.

Comment by JBlack on Why the technological singularity by AGI may never happen · 2021-09-06T08:44:00.064Z · LW · GW

Because 20-year-old people with 200 IQ exist, and their brains consume approximately 3 MW-hr by age 20. Therefore there are no fundamental physical limitations preventing this.
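
The figure follows from the commonly cited ~20 W power draw of a human brain (the 20 W value is an assumption here, not something from the post):

```python
# Rough arithmetic behind the 3 MW-hr figure
watts = 20                             # assumed average brain power draw
hours = 20 * 365.25 * 24               # twenty years in hours
print(round(watts * hours / 1e6, 1))   # ~3.5 MW-hr, i.e. about 3 MW-hr
```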

Comment by JBlack on Hope and False Hope · 2021-09-05T12:37:46.835Z · LW · GW

I'd be substantially more confident in cryonics if it were actually supported by society with stable funding, regulations, transparency, priority in case of natural disasters, ongoing well-supported research, guarantees about future coverage of revival and treatment costs, and so on.

Even then I have strong doubts about uninterrupted maintenance of clients for anything like a hundred years. Even with the best intentions, more than 99.9999% uptime for any single organization (including through natural disasters and changes in society) is hard. And yet, that's the easier part of the problem.

Comment by JBlack on Hope and False Hope · 2021-09-05T12:21:25.882Z · LW · GW

I'm aware that this is a thing that people do. I expect that people doing it have very much (at least order-of-magnitude) greater confidence that the process will work, since the probability thresholds that make cryonics-funded-by-insurance worthwhile are substantially greater than for cryonics-funded-by-investments, unless capacity for investment is negligible and the insurance is very cheap.

That is, it's really only for people in their 20s who don't have much income and yet want to pay ten thousand dollars or so to reduce the probability of dying permanently in the next decade by something like 0.0005. Every decade in which they don't build up enough to pay for it outright, they're on a losing treadmill because the premiums typically more than double per decade of age, and on top of that they have been forgoing investment growth with that money the whole time.

Comment by JBlack on Hope and False Hope · 2021-09-05T08:34:51.948Z · LW · GW

I have in fact looked into cryonics as a possible life-extension mechanism, looked at a bunch of the possible failure modes, and many of these cannot be reliably averted by a group of well-meaning people. If you're actually trying to model "people who are not currently investing in cryonic preservation", then it does little good to post hypotheses such as "they are too scared of false hope". Maybe some are, but certainly not all.

Also yes, my threshold around 5% is where I have calculated that it would be "worth it to me", and my upper bound (not estimate) of 2% is based on some weeks of investigation of cryonic technology, the social context in which it occurs, and expectations for the next 50 years. If there have been any exciting revolutions in the past ten years that significantly alter these numbers, I haven't seen them mentioned anywhere (including company websites and sites like this).

As far as bets go, I am literally staking my life on this estimate, am I not?

Comment by JBlack on Why the technological singularity by AGI may never happen · 2021-09-05T06:16:47.669Z · LW · GW

The first two points in (1) are plausible from what we know so far. I'd hardly put it at more than 90% that there's no way around them, but let's go with that for now. How do you get 1 EUR per 10^22 FLOPs as a fundamental physical limit? The link has conservative assumptions based on current human technology. My prior on those holding is negligible, somewhere below 1%.

But that aside, even the most pessimistic assumptions in this post don't imply that a singularity won't happen.

We know that it is possible to train a 200 IQ equivalent intelligence for at most 3 MW-hr, energy that costs at most $30. We also know that once trained, it is possible for it to do at least the equivalent of a decade of thought by the most expert human that has ever lived for a similar cost. We have very good reason to expect that it could be capable of thinking very much faster than the equivalent human.

Those alone are sufficient for a superintelligence take-off that renders human economic activity (and possibly the human species) irrelevant.

Comment by JBlack on MikkW's Shortform · 2021-09-05T04:55:15.019Z · LW · GW

It might be worth noting here that Australia does generally use proportional voting for state legislative houses, which control most of the day-to-day laws that people live under. I'm not sure whether this comes under what you meant by "at least one house of their legislatures" or not.

At the national level, one house does use a proportional voting system (in that multiple representatives are elected by the people per electorate in a manner proportional to their support), but the electorates are divided between states and not proportional to population. In the other house, electorates are proportional to population but each elects only one member.

Comment by JBlack on Hope and False Hope · 2021-09-05T04:29:34.318Z · LW · GW

If I could be confident of 5%, it would be attractive right now. The problem isn't really any single point of failure, the problem is that there are way too many points of failure that all have pretty good chances of happening, any single one of which dooms all or most of the clients. Even so, if I had substantially more assets then it would be attractive even at 0-2%.

Comment by JBlack on Hope and False Hope · 2021-09-05T04:05:56.746Z · LW · GW

Life insurance is insurance: a way of paying extra to deal with expensive events that have a low probability of occurring, to give you a high probability of (financially) surviving them. Paying extra for something that is nearly guaranteed to happen, in exchange for a small chance of getting past it, seems the exact opposite of the case where insurance makes sense.

Comment by JBlack on Hope and False Hope · 2021-09-04T14:06:35.269Z · LW · GW

While it does seem worthwhile from a purely selfish point of view, $150k+ for a small chance of revival (my estimate: no more than 2%) seems expensive from the point of view of things that money can buy to promote the future welfare of people I care about.

Comment by JBlack on Rafael Harth's Shortform · 2021-09-04T12:44:45.945Z · LW · GW

Is it really that simple? I've seen a lot of ways in which people strongly express beliefs different from those expressed by a large majority of smart people. Most of the apparent reasons do not seem to boil down to overconfidence of any sort, but are related to the fact that expressions of belief are social acts with many consequences. Personally I have a reputation as a "fence-sitter" (apparently this is socially undesirable) since I often present evidence for and against various positions instead of showing "courage of convictions".

I wouldn't quite profess that beliefs being expressed are nothing but tokens in a social game and don't actually matter to how people actually think and act, but I'm pretty sure that they matter a lot less than the form and strength of expression indicates. People do seem to really believe what they say in the moment, but then continue with life without examining the consequences of that belief to their life.

I am not excluding myself from this assessment, but I would expect anyone reading or posting on this site to want to examine consequences of their expressed and unexpressed beliefs substantially more than most.

Comment by JBlack on Rafael Harth's Shortform · 2021-09-04T11:44:44.115Z · LW · GW

To me, 0.02 is a comparatively tiny difference between likelihood of a proposition and its negation.

If P(A) = 0.51 and P(~A) = 0.49 then almost every decision I make based on A will give almost equal weight to whether it is true or false, and the cognitive process of working through the implications on either side is essentially identical to the case P(A) = 0.49 and P(~A) = 0.51. The outcome of the decision will also very frequently be the same, since outcomes are usually unbalanced.

It takes quite a bit of contriving to arrange a situation where there is any meaningful difference between P(A) = 0.51 and P(A) = 0.49 for some real-world proposition A.
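
A toy illustration of that point (the payoffs are made up purely for illustration):

```python
# With lopsided payoffs, the same action wins whether P(A) is 0.51 or 0.49.
def expected_value(p_a: float, payoff_if_a: float, payoff_if_not_a: float) -> float:
    return p_a * payoff_if_a + (1 - p_a) * payoff_if_not_a

for p in (0.51, 0.49):
    act = expected_value(p, payoff_if_a=5.0, payoff_if_not_a=-1.0)
    do_nothing = 0.0
    print(p, "act" if act > do_nothing else "don't act")
# both print "act": a 0.02 shift in credence only changes the decision when
# the stakes on either side are already almost perfectly balanced
```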

Comment by JBlack on [deleted post] 2021-09-04T11:27:12.035Z

What sort of discussion are you looking for? Negation is fairly straightforward in classical propositional logic, predicate logic, and probability (the bases for Bayesian reasoning).

If questions about personality types are implicitly tied to some particular model, then the proposition "A has personality type X" really means "A has personality type X in model M", which in turn usually boils down to "A will have (or had) particular ranges of scores in M's associated personality test under the prescribed conditions for administering it". 

How does negation come into such a discussion? Maybe you want to talk about the differences between negating various parts of that proposition versus negating the whole thing? I'm not sure.

Comment by JBlack on Chantiel's Shortform · 2021-09-04T00:38:55.158Z · LW · GW

Well, as I said before, the AI could still consider the possibility that the world is composed entirely of hats (minus the AI simulation).

At this point I'm not sure there's much point in discussing further. You're using words in ways that seem self-contradictory to me.

You said "the AI could still consider the possibility that the world is composed of [...]". Considering a possibility is creating a model. Models can be constructed about all sorts of things: mathematical statements, future sensory inputs, hypothetical AIs in simulated worlds, and so on. In this case, the AI's model is about "the world", that is to say, reality.

So it is using a concept of model, and a concept of reality. It is only considering the model as a possibility, so it knows that not everything true in the model is automatically true in reality and vice versa. Therefore it is distinguishing between them. But you posited that it can't do that.

To me, this is a blatant contradiction. My model of you is that you are unlikely to post blatant contradictions, so I am left with the likelihood that what you mean by your statements is wholly unlike the meaning I assign to the same statements. This does not bode well for effective communication.

Comment by JBlack on Is there a name for the theory that "There will be fast takeoff in real-world capabilities because almost everything is AGI-complete"? · 2021-09-03T13:35:37.893Z · LW · GW

I've certainly seen this argument before, and even advocated for it somewhat. I haven't seen a specific name for it though.

I do have some doubts about (1). There does seem to be quite a lot of scope for human-guided AI that performs substantially better than either human or AI alone. Even a 10-20% improvement in any of a wide range of tasks would be a lot, and I think we're going to see a lot more effort going into coordinating this sort of thing. It may not even look like AI. It may just look like better software tools, despite using ever more generalizable models with specific customization behind the scenes.

Comment by JBlack on Addendum to "Amyloid Plaques: Medical Goodhart, Chemical Streetlight" · 2021-09-03T13:23:58.242Z · LW · GW

Just to comment on the last example: I totally agree with your assessment of this.

In particular anything that involves Löb's theorem or considerations about how an agent should reason when considering an identical copy of themselves is almost certainly impractical mathematical cloud-castle building. I don't have anything against that type of activity as a pursuit in itself and engage in it quite a lot, but don't have any illusions that it will solve any real problems in my lifetime.

Any actual AI will have extremely bounded rationality by those standards. Quite a few of the decision processes discussed in those articles are literally uncomputable, let alone able to be implemented in any hardware that can exist in the known universe. However, considering the much more relevant but thornier problems of resource-constrained decision making is not nearly so elegant and fun.

Comment by JBlack on Rationality Is Expertise With Reality · 2021-09-03T10:02:51.760Z · LW · GW

Efficiency demands that you actually get your point across, otherwise your efficiency is zero points-got-across per thousand words.

Comment by JBlack on Chantiel's Shortform · 2021-09-03T09:36:07.786Z · LW · GW

Even if the AI was in a simulation, the physical implementation is part of reality. And the AI could learn about it.

The only means would be errors in the simulation.

Any underlying reality that supports Turing machines or any of the many equivalents can simulate every computable process. Even in the case of computers with bounded resources, there are corresponding theorems that show that the process being computed does not depend upon the underlying computing model.

So the only thing that can be discerned is that the underlying reality supports computation, which says essentially nothing about the form that it takes.

An AI, even without a distinction between base-level reality and abstractions, [...] would be able to conceive of the idea of percepts misleading about reality

How can it conceive of the idea of percepts misleading about reality if it literally can't conceive of any distinction between models (which are a special case of abstractions) and reality?

Comment by JBlack on Chantiel's Shortform · 2021-09-02T08:09:20.854Z · LW · GW

The concept of a mirage is different from the concept of non-base-level reality.

Different in a narrow sense, yes. "Refraction through heated air that can mislead a viewer into thinking it is reflection from water" is indeed different from "lifetime sensory perceptions that mislead about the true nature and behaviour of reality". However, my opinion is that any intelligence that can conceive of the first without being able to conceive of the second is crippled by comparison with the range of human thought.

Comment by JBlack on Rationality Is Expertise With Reality · 2021-09-01T03:46:44.552Z · LW · GW

It doesn't read as persuasion-coded to me. In fact it reads as stream-of-consciousness musing that defeats its own opening point.

What if everyone actually is a perfectly rational actor?

[...]

Rationality is expertise with the universe we live in.

You're wondering what if everyone has perfect expertise with the universe we live in? Furthermore this is somehow linked to fake praise, your strong distrust for authority figures who tell you things without explaining their reasoning, and the idea that muscle-tension works as a variably-obvious signalling mechanism to yourself as well as to others?

Well maybe this makes internal sense to you, but it looks incoherent to me.

Comment by JBlack on Beware of small world puzzles · 2021-08-31T08:17:14.943Z · LW · GW

This came up frequently in my time as a mathematical educator. Far too many "word problems"(*) are written in ways that require some words and phrases in the problem to be interpreted in their everyday sense, and others to be strictly interpreted mathematically even when this directly contradicts their usual meanings. Learning which are in each category often turns out to be equivalent to "guessing the password", often without even the benefit of instructional material or consistency between problems.

In my experience, problems in probability or statistics are by far the worst of this type.


(*) in the pedagogical sense, not the one that means testing identity of semigroup elements.

Comment by JBlack on Chantiel's Shortform · 2021-08-31T08:00:09.033Z · LW · GW

This feels like a bait-and-switch since you're now talking about this in terms of an "ontologically fundamental" qualifier where previously you were only talking about "ontologically different".

To you, does the phrase "ontologically fundamental" mean exactly the same thing as "ontologically different"? It certainly doesn't to me!

Comment by JBlack on Chantiel's Shortform · 2021-08-29T12:23:30.613Z · LW · GW

When asking, "Should I treat base-level reality and abstractions as fundamentally distinct?", I think I good way to approximate this is by asking "Would I want an AI to reason as if its abstractions and base-level reality were fundamentally distinct?"

Do you want an AI to be able to conceive of anything along the lines of "how correct is my model", to distinguish hypothetical from actual, or illusion from substance?

If you do, then you want something that fits in the conceptual space pointed at by "base-level reality", even if it doesn't use that phrase or even have the capability to express it.

I suppose it might be possible to have a functioning AI that is capable of reasoning and forming models without being able to make any such distinctions, but I can't see a way to do it that won't be fundamentally crippled compared with human capability.

Comment by JBlack on JBlack's Shortform · 2021-08-29T12:01:13.217Z · LW · GW

The weird thing is that the person doing the anti-update isn't subjected to any "fiction". It's only a possibility that might have happened and didn't.

Comment by JBlack on JBlack's Shortform · 2021-08-28T14:04:41.156Z · LW · GW

I deliberately wrote it so that there is no memory trickery or any other mental modification happening at all in the case where Sleeping Beauty updates from "definitely heads" to "hmm, could be tails".

The bizarreness here is that all that is required is being aware of some probability that someone else, even in a counterfactual universe that she knows hasn't happened, could come to share her future mental state without having her current mental state.

Yes, in the tails case memories that Sleeping Beauty might have between sleep and full wakefulness are removed, if the process allows her to have formed any. I was just implicitly assuming that ordinary memory formation would be suppressed during the (short) memory creation process.

Comment by JBlack on JBlack's Shortform · 2021-08-28T07:42:42.951Z · LW · GW

While pondering Bayesian updates in the Sleeping Beauty Paradox, I came across a bizarre variant that features something like an anti-update.

In this variant, as in the original, Sleeping Beauty is awakened on Monday regardless of the coin flip. On heads, she will be gently awakened and asked for her credence that the coin flipped heads. On tails, she will be instantly awakened with a mind-ray that also implants a false memory of having been awakened gently, asked for her credence that the coin flipped heads, and having answered. In both cases the interviewer then asks "are you sure?" She is aware of all these rules.

On heads, Sleeping Beauty awakens with certain knowledge that heads was flipped, because if tails was flipped then (according to the rules) she would have a memory that she doesn't have. So she should answer "definitely heads".

Immediately after she answers, though, her experience is completely consistent with tails having been flipped, and when asked whether she is sure, she should now answer that she is no longer sure.

This seems deeply weird. Normally new experiences reduce the space of possibilities, and Bayesian updates rely on this. A possibility that previously had zero credence cannot gain non-zero credence through any Bayesian update. I am led to suspect that Bayesian updates cannot be an appropriate model for changing credence in situations where observers can converge in relevant observable state with other possible observers.
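
One way to make the bookkeeping explicit is to enumerate the mental states Beauty actually experiences in each (equally likely) world and count; the state labels below are my own shorthand:

```python
# Each world produces the list of mental states Beauty actually experiences.
# Her credence in heads, given a state, is the heads share of the equally
# weighted worlds containing that state (fair coin assumed).
from collections import defaultdict

worlds = {
    # heads: genuinely awakened gently, answers, then is asked "are you sure?"
    "heads": ["asked_credence", "asked_are_you_sure"],
    # tails: wakes with a false memory of having been asked and having answered,
    # so the first state she actually experiences already looks post-answer
    "tails": ["asked_are_you_sure"],
}

state_counts = defaultdict(lambda: {"heads": 0, "tails": 0})
for world, states in worlds.items():
    for s in states:
        state_counts[s][world] += 1

for s, c in state_counts.items():
    credence_heads = c["heads"] / (c["heads"] + c["tails"])
    print(s, credence_heads)
# asked_credence     -> 1.0  ("definitely heads")
# asked_are_you_sure -> 0.5  (her state now matches the tails world too)
```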