Posts

Max Kaye's Shortform 2020-09-04T12:07:37.159Z

Comments

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-19T15:41:04.911Z · LW · GW

Yeah, almost everyone who we ban who has any real content on the site is warned. It didn't feel necessary for curi, because he has already received so much feedback about his activity on the site over the years (from many users as well as mods), and I saw very little probability of things changing because of a warning.

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

curi evidently wanted to change some things about his behaviour, otherwise he wouldn't have updated his commenting policy. How do you know he wouldn't have updated it more if you'd warned him? That's exactly the type of criticism we (CR/FI) think is useful.

That sort of update is exactly the type of thing that would be reasonable to expect next time he came back (considering that he was away for 2 weeks when the ban was announced). He didn't want to be banned, and he didn't want to have shitty discussions, either. (I don't know those things for certain, but I have high confidence.)

What probability would you assign to him continuing just as before if you'd said something like "If you keep doing what you're doing, I will ban you, for these reasons"? Ideally, you could add "Here they are in the rules/faq/whatever".

Practically, the chance of him changing is lower now because there isn't any point if he's never given any chances. So in some ways you were exactly right to think there's low probability of him changing, it's just that it was due to your actions. Actions which don't need to be permanent, might I add.

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-19T14:08:56.386Z · LW · GW

I do not think the core disagreement between you and me comes from a failure of me to explain my thoughts clearly enough.

I don't either.

The same goes for your position. The many words you have already written have failed to move me. I do not expect even more words to change this pattern.

Sure, we can stop.

Curi is being banned for wasting time with long, unproductive conversations.

I don't know anywhere I could go to find out that this is a bannable offense. If it is not in a body of rules somewhere, then it should be added. If the mods are unwilling to add it to the rules, he should be unbanned, simple as that.

Maybe that idea is worth discussing? I think it's reasonable. If something is an offense it should be publicly stated as such and new and continuing users should be able to point to it and say "that's why". It shouldn't feel like it was made up on the fly as a special case -- it's a problem when new rules are invented ad-hoc and not canonicalized (I don't have a problem with JIT rulebooks, it's practical).

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-19T07:11:37.887Z · LW · GW

The traditional guidance for up/downvotes has been "upvote what you would like want to see more of, downvote what you would like to see less of". If this is how votes are interpreted, then heavy downvotes imply "the forum's users would on average prefer to see less content of this kind".

You're using quotes but I am not sure what you're quoting; do you just mean to emphasize/offset those clauses?

but people also have the right to choose who they want to spend their time with,

Sure, that might be part of the reason curi hadn't been active on LW for 13 days at the time of the ban.

(continued)

even if someone who they preferred not to spend time with viewed that as being punished.

I don't know if curi thinks it's punishment. I think it's punishment, and I think most ppl would agree that 'A ban' would be an answer to the question (in online forum contexts, generally) 'What is an appropriate punishment?' That would mean a ban is a punishment.

LW mods can do what they want; in essence it's their site. I'm arguing:

  1. it's unnecessary
  2. it was done improperly
  3. it reflects badly on LW and creates a culture hostile to opposing ideas
  4. (3) is antithetical to the opening lines of the LessWrong FAQ (which I quote below). Note: I'm introducing this argument in this post, I didn't mention it originally.
  5. significant parts of habryka's post were factually incorrect. It was noted, btw, in FI that a) habryka's comments were libel, and b) that curi's reaction--quoted below--is mild and undercuts habryka's claim.

curi wrote (in his post on the LW ban, quoting habryka's notice and then responding):

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

I didn’t threaten anyone. I’m guessing it was a careless wording. I think habryka should retract or clarify it. Above habryka used “attack[]” as a synonym for criticize. I don’t like that but it’s pretty standard language. But I don’t think using “threat[en]” as a synonym for criticize is reasonable.

“threaten” has meanings like “state one's intention to take hostile action against someone in retribution for something done or not done” and “express one's intention to harm or kill“ (New Oxford Dictionary). This is the one thing in the post that I strongly object to.

from the FI discussion:

JustinCEO: i think curi's response to this libel is written in a super mild way

JustinCEO: which notably contrasts with being the sort of person who would have "a history of threats against people who engage with him" in the first place

LessWrong FAQ (original emphasis)

LessWrong is a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we want to develop and practice the art of human rationality.

To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.

I don't think the things people have described (in this thread) as seemingly important parts of LW are at all reflected by this quote; rather, they contradict it.

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-19T06:57:05.271Z · LW · GW

This is the definition that I had in mind when I wrote the notice above, sorry for any confusion it might have caused.

This definition doesn't describe anything curi has done (see my sibling reply linked below), at least that I've seen. I'd appreciate any quotes you can provide.

https://www.lesswrong.com/posts/PkpuvsFYr6yuYnppy/open-and-welcome-thread-september-2020?commentId=H2tyDgoRFov8Xs8HS

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-19T06:55:23.718Z · LW · GW

define:threat

I prefer this definition, "a declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course; menace".

This definition seems okay to me.

undue justice

I don't know how justice can be undue; do you mean undue or excessive prosecution? Or persecution, perhaps? Though I don't think either prosecution or persecution describes anything curi's done on LW. If you have counterexamples I would appreciate it if you could quote them.

We have substantial disagreements about what constitutes a threat,

Evidently yes, as do dictionaries.

I don't think the dictionary definitions disagree much. It's not a substantial disagreement. thesaurus.com seems to agree; it lists them as ~strong synonyms. The crux is retribution vs retaliation, and retaliation is more general. The mafia can threaten shopkeepers with violence if they don't pay protection. I think retaliation is a better fitting word.

However, this still does not apply to anything curi has done!

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-18T12:53:45.966Z · LW · GW

lsusr said:

(1) Curi was warned at least once.

I'm reasonably sure the slack comments refer to events from 3 years ago, not anything in the last few months. I'll check, though.

There are some other comments about recent discussion in that thread, like this: https://www.lesswrong.com/posts/iAnXcZ5aGZzNc2J8L/the-law-of-least-effort-contributes-to-the-conjunction?commentId=38FzXA6g54ZKs3HQY

gjm said:

I had not looked, at that point; I took "mirrored" to mean taking copies of whole discussions, which would imply copying other people's writing en masse. I have looked, now. I agree that what you've put there so far is probably OK both legally and morally.

My apologies for being a bit twitchy on this point; I should maybe explain for the benefit of other readers that the last time curi came to LW, he did take a whole pile of discussion from the LW slack and copy it en masse to the publicly-visible internet, which is one reason why I thought it plausible he might have done the same this time.

I don't think there is a case for (1). Unless gjm is a mod and there are things I don't know?

lsusr said:

(2) Curi is being banned for wasting time with long, unproductive conversations. An appeals process would produce another long, unproductive conversation.

habryka explicitly mentions curi changing his LW commenting policy to be 'less demanding'. I can see the motivation for expedition, but the mods don't have to speedrun it. I think it's bad there wasn't any communication beforehand.

lsusr said:

(3) Specific quotes are unnecessary. It blindingly obvious from a glance through curi's profile and even curi's response you linked to that curi is damaging to productive dialogue on Less Wrong.

I don't think that's the case. His net karma has increased, and judging him for content on his blog - not his content on LW - does not establish whether he was 'damaging to productive dialogue on Less Wrong'.

His posts on LessWrong have been contributions. For example, www.lesswrong.com/posts/tKcdTsMFkYjnFEQJo/can-social-dynamics-explain-conjunction-fallacy-experimental is a direct response to one of EY's posts and it was net-upvoted. He followed that up with two more net-upvoted posts.

This is not the track record of someone wanting to waste time. I know there are disagreements between LW and curi / FI. If that's the main point of contention, and that's why he's being banned, then so be it. But he doesn't deserve to be mistreated and have baseless accusations thrown at him.

lsusr said:

The strongest claim against curi is "a history of threats against people who engage with him [curi]". I was able to confirm this via a quickly glance through curi's past behavior on this site. In this comment threatens to escalate a dialogue by mirroring it off of this website. By the standards of collaborative online dialogue, this constitutes a threat against someone who engaged with him.

We have substantial disagreements about what constitutes a threat, in that case. I think a threat needs to involve something like danger, or violence, or something like that. It's not a 'threat' to copy public discussion under fair use for criticism and commentary.

I googled the definition, and these are the two results (for define:threat):

  • a statement of an intention to inflict pain, injury, damage, or other hostile action on someone in retribution for something done or not done.
  • a person or thing likely to cause damage or danger.

Neither of these apply.

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-18T11:21:15.267Z · LW · GW

The above post explicitely says that the ban isn't a personal judgement of curi. It's rather a question of whether it's good or not to have curi around on LessWrong and that's where LW standards matter.

Isn't it even worse, then, b/c no action was necessary?

But more to the point, isn't the determination that X person is not good to have around a personal judgement? It doesn't apply to everyone else.

I think what habryka meant was that he wasn't making a personal judgement.

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-18T09:16:15.829Z · LW · GW

I'm not sure about other cases, but in this case curi wasn't warned. If you're interested, he and I discuss the ban in the first 30 mins of this stream

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-18T08:57:38.055Z · LW · GW

FYI and FWIW curi has updated the post to remove emails and reword the opening paragraph.

http://curi.us/2215-fallible-ideas-post-mortems and http://curi.us/2215-fallible-ideas-post-mortems#18059

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-18T08:50:36.624Z · LW · GW

Arguably, if there is something truly wrong with the list, I should have an issue with it.

This is non-obvious. It seems like you are extrapolating from yourself to everyone else. In my model, how much you would mind being on such a list is largely determent by how much social anxiety you generally feel. I would very much mind being on that list, even if I felt like it was justified.

I think this is fair, and additionally I maybe shouldn't have used the word "truly"; it's a very laden word. I do think that, on the balance of probabilities, my case does reduce the likelihood of something being foundationally wrong with it, though. (Note: I've said this in, what I think, is a LW friendly way. I'd say it differently on FI.)

One thing I do think, though, is that people's social anxiety does not make things in general right or wrong, but can be decisive wrt thinking about a single action.

Another thing to point out is that anonymous participation in FI is okay; it's reasonably easy to start with an anonymous/pseudonymous email. curi's blog/forum hybrid also allows for anonymous posting. FI is very pro-free-speech.

Knowing the existence of the list (again, even if it were justified) would also make me uneasy to talk to curi.

I think that's okay, curi isn't trying to attract everyone as an audience, and FI isn't designed to be a forum which makes people feel comfortable, as such. It has different goals from e.g. LW or a philosophy subreddit.

I think we'd agree that norms at FI aren't typical and aren't for everyone. It's a place where anyone can post, but that doesn't mean that everyone should, sorta thing.

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-18T08:12:30.370Z · LW · GW

Today we have banned two users, curi and Periergo from LessWrong for two years each.

I wanted to reply to this because I don't think it's right to judge curi the way you have. Periergo I don't have an issue w/. (it's a sockpuppet acct anyway)

I think your decision should not go unquestioned/uncriticized, which is why I'm posting. I also think you should reconsider curi's ban under a sort of appeals process.

Also, the LW moderation process is evidently transparent enough for me to make this criticism, and that is notable and good. I am grateful for that.

On his blog, he and others maintain a long list of people who engaged with him and others in the Critical Rationalist community, but then stopped, in a way that is very hard to read as anything but a public attack.

You are judging curi and FI (Fallible Ideas) via your standards (LW standards), not FI's standards. I think this is problematic.

I'd like to note I am on that list. (like 1/2 way down) I am also a public figure in Australia, having founded a federal political party based on epistemic principles with nearly 9k members. I am okay with being on that list. Arguably, if there is something truly wrong with the list, I should have an issue with it. I knew about being on that list earlier this year, before I returned to FI. Being on the list was not a factor in my decision.

There is nothing immoral or malicious about curi.us/2215. I can understand why you would find it distasteful, but that's not a decisive reason to ban someone or condemn their actions.

A few hours ago, curi and I discussed elements about the ban and curi.us/2215 on his stream. I recommend watching a few minutes starting at 5:50 and at 19:00, for transparency you might also be interested in 23:40 -> 24:00. (you can watch on 2x speed, should be fine)

Particularly, I discuss my presence on curi.us/2215 at 5:50

You say:

a long list of people who engaged with him and others in the Critical Rationalist community

There are 33 by my count (including me). The list spans a decade, and is there for a particular purpose, and it is not to publicly shame people into returning, or to be mean for the sake of it. I'd like to point out some quotes from the first paragraph of curi.us/2215:

This is a list of ppl who had discussion contact with FI and then quit/evaded/lied/etc. It would be good to find patterns about what goes wrong. People who left are welcome to come back and try again.

Notably, you don't end up on the list if you are active. Also, although it's not explicitly mentioned in the top paragraph, a crucial thing is that those on the list have left and avoided discussion about it. Discussion is much more important in FI than on most philosophy forums - it's how we learn from each other, make sure we understand, offer criticism and assist with error correction. You're not under any obligation to discuss something, but if you have criticisms and refuse to share them, you're preventing error correction; and if you leave to evade criticism then you're not living by your values and philosophy.

The people listed on curi.us/2215 have participated in a public philosophy forum for which there are established norms that are not typical and are different from LW. FI views the act of truth-seeking differently. While our (LW/FI) schools of thought disagree on epistemology, both schools have norms that are related to their epistemic ideas. Ours look different.

It is unfair to punish someone for an act done outside of your jurisdiction under different established norms. If curi were putting LW people on his list, or publishing off-topic stuff at LW, sure, take moderation action. None of those things happened. In fact, the main reason you've provided for even knowing about that list is via the sockpuppet you banned.

Sockpuppet accounts are not used to make the lives of their victims easier. By banning curi along with Periergo you have facilitated a (minor) victory for Periergo. This is not right.

a history of threats against people who engage with him

THIS IS A SERIOUS ALLEGATION! PLEASE PROVIDE QUOTES

curi prefers to discuss in public so they should be easy to find and verify. I have never known curi to threaten people. He may criticise them, but he does not threaten them.

Notably, curi has consistently and loudly opposed violence and the initiation of force. If people ask him to leave them alone (provided they haven't e.g. committed a crime against him), he respects that.

being the historically most downvoted account in LessWrong history

This is not a reason to ban him, or anyone. Being disliked is not a reason for punishment.

Those three things in combination, a propensity for long unproductive discussions, a history of threats against people who engage with him, and being the historically most downvoted account in LessWrong history, make me overall think it's better for curi to find other places as potential discussion venues.

"a history of threats against people who engage with him" has not been established or substantiated.

he seems well-intentioned

I believe he is. As far as I can tell he's gone to great personal expense and trouble to keep FI alive for no other reason than that his sense of morality demands it. (That might be over simplifying things, but I think the essence is the same. I think he believes it is the right thing to do, and it is a necessary thing to do)

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from)

He has gained karma since returning to LW briefly. I think you should retract the part about him having negative karma b/c it misrepresents the situation. He could have made a new account and he would have positive karma now. That means your judgement is based on past behaviour that was already punished. This is double jeopardy. (Edit: after some discussion on FI it looks like this isn't double jeopardy, just double punishment. Double jeopardy specifically refers to being on trial for the same offense twice, not being punished twice.)

Moreover, curi is being punished for being honest and transparent. If he had registered a new account and hidden his identity, would you have banned him only based on his actions this past 1-2 months? If you can say yes, then fine, but I don't think your argument holds in this case; the only part that is verifiable is based on your disapproval of his discussion methods. Disagreeing with him is fine. I think a proportionate response would be a warning.

As it stands no warning was given, and no attempt to learn his plans was made. I think doing that would be proportionate and appropriate. A ban is not.

It is significant that curi is not able to discuss this ban himself. I am voluntarily doing this, of my own accord. He was not able to defend himself or provide explanation.

This is especially problematic as you specifically say you think he was improving compared with his conduct several years ago.

I do also think his most recent series of posts and comments is overall much less bad than the posts and comments he posted a few years ago (where most of his negative karma comes from), but they still don't strike me as great contributions to the LessWrong canon

This alone is not enough. A warning is proportionate.

are all low-karma

Unpopularity is no reason for a ban

and I assign too high of a probability that old patterns will repeat themselves.

How is this different to pre-crime?

I think, given he had deliberately changed his modus operandi weeks ago and has not posted in 13 days, this is unfair and overly judgmental.

You go on to say:

and I do think that was the right move, but I don't think it's enough to tip the scales on this issue.

What could curi have done differently which would have tipped the scales? If there is no acceptable thing he could have done, why was action not taken weeks ago when he was active?

I believe it is fundamentally unjust to delay action in this fashion without talking with him first. curi has an incredibly long track record of discussion, he is very open to it. He is not someone who avoids taking responsibility for things; quite the opposite. If you had engaged him, I am confident he would have discussed things with you.

and to generally err on the side of curating our userbase pretty heavily and maintaining high standards.

It makes sense that you want to cultivate the best rational forums you can. I think that is a good goal. However, again, there were other, less extreme and more proportionate actions that could have been taken first, especially seeing as curi had changed his LW discussion policy and was inactive at the time of the ban.

We presumably disagree on the meaning of 'high standards', but I don't think that's particularly relevant here.

This means making difficult moderation decision long before it is proven "beyond a reasonable doubt" that someone is not a net-positive contributor to the site.

There were many alternative actions you could have taken. For example, a 1-month ban. Restricting curi to only posting on his own shortform. Warning him of the circumstances and consequences under conditions, etc.

In this case, I think it is definitely not proven beyond a reasonable doubt that curi is overall net-negative for the site

I'm glad you've mentioned this, but LW is not a court of law and you are not bound to those standards (and no punishment here is comparable to the punishment a court might distribute). I think there are other good reasons for reconsidering curi's ban.

banning him might well be a mistake, but I think the probabilities weigh heavily enough in favor of the net-negative, and the worst-case outcomes are bad-enough, that on-net I think this is the right choice.

I think there is a critical point to be made here: you could have taken no action at this time and put a mod-notification for activity on his account. If he were to return and do something you deemed unacceptable, you could swiftly warn him. If he did it again, then a short-term ban. Instead, this is a sledge-sized banhammer used when other options were available. It is a decision that is now publicly on LW and indicates that LW is possibly intolerant of things other than irrationality. I don't think this is reflective of LW, and I think it reflects poorly on the moderation policies here. I don't think it needs to be that way, though.

I think a conditional unbanning (i.e. 1 warning, with the next action being a swift short ban) is an appropriate action for the moderation team to make, and I implore you to reconsider your decision.

If you think this is not appropriate, then I request you explain why 2 years is an appropriate length of time, and why Periergo and curi should have identical ban lengths.

The alternative to passivity does not need to be so heavy-handed.

I’d also like to note that curi has published a post on his blog regarding this ban; I read it after drafting this reply: http://curi.us/2381-less-wrong-banned-me

Comment by Max Kaye (max-kaye) on Open & Welcome Thread - September 2020 · 2020-09-18T02:20:58.370Z · LW · GW

FYI I am on that list and fine with it - curi and I discussed this post a bit here: https://www.youtube.com/watch?v=MxVzxS8uMto

I think you're wrong on multiple counts. Will reply more in a few hours.

Comment by Max Kaye (max-kaye) on Max Kaye's Shortform · 2020-09-15T12:24:23.438Z · LW · GW

\usepackage{cleveref}

Cool, thanks. I think I was missing \usepackage{cleveref}. I actually wrote the post in LaTeX (the post for which I asked this question), but the LessWrong docs on using LaTeX are lacking. For example, they don't tell you they support importing stuff and don't list what is supported.

\Cref{eq:1} is an amazing new discovery; before Max Kaye, no one grasped the perfect and utter truth of \cref{eq:1}.

I use crefs in the .tex file linked above. I suppose I should have been more specific and asked "does anyone know how to label equations and reference them on lesswrong?" instead.
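For reference, here's a minimal standalone example of the standard LaTeX way to do it (the equation and label are just placeholders; whether LessWrong's renderer supports these packages is exactly the part the docs don't tell you):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{cleveref} % cleveref wants to be loaded last (after amsmath/hyperref)

\begin{document}

\begin{equation}
  \label{eq:1}
  e^{i\pi} + 1 = 0
\end{equation}

Mid-sentence: see \cref{eq:1}. At the start of a sentence: \Cref{eq:1}.

\end{document}
```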

Comment by Max Kaye (max-kaye) on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-07T00:55:59.814Z · LW · GW

I think that the argument about emulating one Turing machine with another is the best you're going to get in full generality.

In that case I especially don't think that argument answers the question in OP.

I've left some details in another reply about why I think the constant overhead argument is flawed.

So while SI and humans might have very different notions of simplicity at first, they will eventually come to have the same notion, after they see enough data from the world.

I don't think this is true. I do agree some conclusions would be converged on by both systems (SI and humans), but I don't think simplicity needs to be one of them.

If an emulation of a human takes X bits to specify, it means a human can beat SI at binary predictions at most X times(roughly) on a given task before SI wises up.

Uhh, I don't follow this. Could you explain or link to an explanation please?

The quantity that matters is how many bits it takes to specify the mind, not store it(storage is free for SI just like computation time).

I don't think that applies here. I think that data is part of the program.

For the human brain this shouldn't be too much more than the length of the human genome, about 3.3 GB.

You would have to raise the program like a human child in that case^1. Can you really make the case you're predicting something or creating new knowledge via SI if you have to spend (the equiv. of) 20 human years to get it to a useful state?

How would you ask multiple questions? Practically, you'd save the state and load that state in a new SI machine (or whatever). This means the data is part of the program.

Moreover, if you did have to raise the program like any other newborn, you have to use some non-SI process to create all the knowledge in that system (because people don't use SI, or if they do use SI, they have other system(s) too).

1: at least in terms of knowledge; though if you used the complete human genome arguably you'd need to simulate a mother and other ppl too, but they have to be good simulations after the first few years, which is a regressive problem. So it's probably easier to instantiate it in a body and raise it like a person b/c human people are already suitable. You also need to worry about it becoming mistaken (intuitively one disagrees with most people on most things we'd use an SI program for).

Comment by Max Kaye (max-kaye) on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-07T00:44:36.727Z · LW · GW

The solution to the "large overhead" problem is to amortize the cost of the human simulation over a large number of English sentences and predictions.

That seems a fair approach in general, like how can we use the program efficiently/profitably, but I don't think it answers the question in OP. I think it actually implies the opposite effect: as you go through more layers of abstraction, things get more and more complex (i.e. simplicity doesn't hold across layers of abstraction). That's why the strategy you mention needs to be over ever larger and larger problem spaces to make sense.

So this would still mean most of our reasoning about Occam's Razor wouldn't apply to SI.

A short English sentence then adds only a small amount of marginal complexity to the program - i.e. adding one more sentence (and corresponding predictions) only adds a short string to the program.

I'm not sure we (humanity) know enough to claim only a short string needs to be added. I think GPT-3 hints at a counter-example b/c GPT has been growing geometrically.

Moreover, I don't think we have any programs or ideas for programs that are anywhere near sophisticated enough to answer meaningful Qs - unless they just regurgitate an answer. So we don't have a good reason to claim to know what we'll need to add to extend your solution to handle more and more cases (especially increasingly technical/sophisticated cases).

Intuitively I think there is (physically) a way to do something like what you describe efficiently because humans are an example of this -- we have no known limit for understanding new ideas. However, it's not okay to use this as a hypothetical SI program b/c such a program does other stuff we don't know how to do with SI programs (like taking into account itself, other actors, and the universe broadly).

If the hypothetical program does stuff we don't understand and we also don't understand its data encoding methods, then I don't think we can make claims about how much data we'd need to add.

I think it's reasonable there would be no upper limit on the amount of data we'd need to add to such a program as we input increasingly sophisticated questions. I also think it's intuitive there's no upper limit on this data requirement (for both people and the hypothetical programs you mention).

Comment by Max Kaye (max-kaye) on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-05T20:54:31.387Z · LW · GW

for any simple English hypothesis, we can convert it to code by running a simulation of a human and giving them the hypothesis as input, then asking them to predict what will happen next. Therefore the English and code-complexity can differ by at most a constant.

Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.

Solomonoff induction is fine with inputs taking unboundedly long to run. There might be cases where the human doesn't converge to a stable answer even after an indefinite amount of time. But if a "simple" hypothesis can have people debating indefinitely about what it actually predicts, I'm okay with saying that it's not actually simple(or that it's too vague to count as a hypothesis), so it's okay if SI doesn't return an answer in those cases.

Yeah okay, I think that's fair.

My issue generally (which is in my reply to johnswentworth) is the overhead is non-negligible if you're going to invoke a human. In that case we can't conclude that simplicity would carry over from the english representation to the code representation. So this argument doesn't answer the question.

You do say it's a loose bound, but I don't think it's useful. One big reason is that the overhead would dwarf any program we'd ever run, and pretty much every program would look identical b/c of the overhead. For simplicity to carry over we need relatively small overhead (even the entire Python runtime is only ~20 MB extra via py2exe -- much smaller than a mind, and even that is definitely not simple).

Maybe it's worth mentioning the question in OP. I read it as: "why would the simplicity an idea has in one form (code) necessarily correspond to simplicity when it is in another form (English)?", or more generally: "why would the complexity of an idea stay roughly the same when the idea is expressed through different abstraction layers?" After that there's implications for Occam's Razor. Particularly, it's relevant b/c Occam's Razor would give different answers when comparing ideas at different levels of abstraction, and if that's the case we can't be sure that ideas which are simple in English will be simple in code, and we don't have a reason for Occam's Razor applying to SI.

Does that line up with what you think OP is about? If not we might be talking cross-purposes.

I mean you can pass functions as arguments to other functions and perform operations on them.

Ahh okay; first class functions.

Re "perform operations on [functions]": you can make new functions and partially or fully apply functions, but that's about it. (that does mean you can partially apply functions and pass them on, though, which is super useful)

So if you had one programming language with dictionaries built-in and other without, the one with dictionaries gets at most a constant advantage in code-length.

I agree with you that the theoretical upper bound on the minimum overhead is the size of a compiler/interpreter.

I think we might disagree on this, though: the compiler/interpreter includes data such as initial conditions (e.g. binary extensions, dynamic libraries, etc). I think this is an issue b/c there's no upper bound on that. If you invoke a whole person then it's an issue b/c for that person to solve more and more complex problems (or a wider and wider array) those initial conditions are going to grow correspondingly. Our estimates for the data requirements to store a mind are enormous numbers of bits. I'd expect the minimum required data to drop as problems got "simpler", but my intuition is that pattern is not the same pattern as what Occam's Razor gives us (e.g. minds taking less data can still think about what Thor would do).

Comment by Max Kaye (max-kaye) on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-05T16:37:53.348Z · LW · GW

the M1-simulator may be long, but its length is completely independent what we're predicting - thus, the M2-Kolmogorov-complexity of a string is at most the M1-Kolmogorov-complexity plus a constant (where the constant is the length of the M1-simulator program).

I agree with this, but I don't think it answers the question. (i.e. it's not a relevant argument [1])

Given the English sentence, the simulated human should then be able to predict anything a physical human could predict given the same English sentence.

There's a large edge case where the overhead constant is ~greater than the program. In those cases it's not the case that simplicity transitions across layers of abstraction.

That edge case means this doesn't follow:

Thus, if something has a short English description, then there exists a short (up to a constant) code description

[1]: Edit: it could be relevant but not the whole story; but in that case it's missing a sizable chunk.

Comment by Max Kaye (max-kaye) on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-05T15:48:40.269Z · LW · GW

On the literature that addresses your question: here is a classic LW post on this sort of question.

The linked post doesn't seem to answer it, e.g. in the 4th paragraph EY says:

Why, exactly, is the length of an English sentence a poor measure of complexity? Because when you speak a sentence aloud, you are using labels for concepts that the listener shares—the receiver has already stored the complexity in them.

I also don't think it fully addresses the question - or even partially in a useful way, e.g. EY says:

It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s equations, compared to a computer program that simulates an intelligent emotional mind like Thor.

The formalism of Solomonoff induction measures the “complexity of a description” by the length of the shortest computer program which produces that description as an output.

But this bakes in knowledge about measuring stuff. Maxwell's equations are - in part - easier to code because we have a way to describe measurements that's easy to compute. That representation is via an abstraction layer! It uses labels for concepts too.

Comment by Max Kaye (max-kaye) on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-05T15:17:16.507Z · LW · GW

One somewhat silly reason: for any simple English hypothesis, we can convert it to code by running a simulation of a human and giving them the hypothesis as input, then asking them to predict what will happen next.

Some things are quick for people to do and some things are hard. Some ideas have had multiple people continuously arguing for centuries. I think this either means you can't apply a simulation of a person like this, or some inputs have unbounded overhead.

because coding languages were designed to be understandable by humans and have syntax similar to human languages.

You should include all levels of abstraction in your reasoning, like raw bytecode. It's both low level and can be written by humans. It's not necessarily fun but it's possible. What about things people design at a transistor level?

e.g. functional programming languages allow you to treat functions as objects.

I use Haskell and have no idea what you're talking about.

I think abstraction in programming is different to what you mean; e.g. a dictionary might be a simple data structure to use, but a list of tuples is harder to use, even though you can implement a dictionary using a list of tuples. The abstraction layer (the implementation) is what turns complex operations on a list of tuples into simple operations on a dictionary.
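A rough sketch of the point (the example data and names are mine): with the dictionary abstraction the lookup is one simple call; with the bare list of tuples you end up writing the machinery yourself -- the implementation is the abstraction layer doing the work.

```haskell
import qualified Data.Map as Map

-- With the abstraction: lookup against a dictionary (Map) is one simple operation.
ageViaMap :: String -> Maybe Int
ageViaMap name = Map.lookup name (Map.fromList [("alice", 30), ("bob", 25)])

-- Without it: the same data as a list of tuples, with the operation written out
-- by hand (this is roughly what the dictionary implementation hides from you).
ageViaList :: String -> Maybe Int
ageViaList name = go [("alice", 30), ("bob", 25)]
  where
    go []             = Nothing
    go ((k, v) : kvs)
      | k == name     = Just v
      | otherwise     = go kvs
```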

Comment by Max Kaye (max-kaye) on Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond? · 2020-09-05T15:06:55.452Z · LW · GW

Brevity of code and english can correspond via abstraction.

I don't know why brevity in low and high abstraction programs/explanations/ideas would correspond (I suspect they wouldn't). If brevity in low/high abstraction stuff corresponded, isn't that like contradictory? If a simple explanation in high abstraction is also simple in low abstraction then abstraction feels broken; typically ideas only become simple after abstraction. Put another way: the reason to use abstraction is to make ideas/things that are highly complex into things that are less complex.

I think Occam's Razor makes sense only if you take into account abstractions (note: O.R. itself is still a rule of thumb regardless). Occam's Razor doesn't make sense if you think about all the extra stuff an explanation invokes - partially because that body of knowledge grows as we learn more, and good ideas become more consistent with the population of other ideas over time.

When people think of short code they think of doing complex stuff with a few lines of code. e.g. cat asdf.log | cut -d ',' -f 3 | sort | uniq. When people think of (good) short ideas they think of ideas which are made of a few well-established concepts that are widely accessible and easy to talk about. e.g. we have seasons because energy from sunlight fluctuates ~sinusoidally through our annual orbit.

One of the ways SI can use abstraction is via the abstraction being encoded in the program, the program inputs, and the observation data.

(I think) SI uses an arbitrary alphabet of instructions (for both programs and data), so you can design particular abstractions into your SI instruction/data language. Of course the program would be a bit useless for any other problem than the one you designed it for, in this case.

Is there literature arguing that code and English brevity usually or always correspond to each other?

I don't know of any.

If not, then most of our reasons for accepting Occam’s Razor wouldn’t apply to SI.

I think some of the reasoning makes sense in a pointless sort of way. e.g. the hypothesis 1100 corresponds to the program "output 1 and stop". The input data is from an experiment, and the experiment was "does the observation match our theory?", and the result was 1. The program 1100 gets fed into SI pretty early, and it matches the predicted output. The reason this works is that SI found a program which has info about 'the observation matching the theory' already encoded, and we fed in observation data with that encoding. Similarly, the question "does the observation match our theory?" is short and elegant like the program. The whole thing works out because all the real work is done elsewhere (in the abstraction layer).

Comment by Max Kaye (max-kaye) on Mathematical Inconsistency in Solomonoff Induction? · 2020-09-04T14:06:28.210Z · LW · GW

I went through the maths in OP and it seems to check out. I think the core inconsistency is that Solomonoff Induction implies $P(X \lor Y) < P(X)$, which is obviously wrong. I'm going to redo the maths below (breaking it down step-by-step more). curi reaches the same inconsistency via a substitution for the length of the combined program; I'm not sure we can make that substitution, but I also don't think we need to.

Let $X$ and $Y$ be independent hypotheses for Solomonoff induction, with shortest programs of lengths $L_X$ and $L_Y$.

According to the prior, the non-normalized probability of $X$ (and similarly for $Y$) is: $P(X) = 2^{-L_X}$ (1)

What is the probability of $X \lor Y$? Since $X$ and $Y$ are independent: $P(X \lor Y) = P(X) + P(Y) - P(X)P(Y)$ (2)

However, by Equation (1) we have: $P(X \lor Y) = 2^{-L_{X \lor Y}}$ (3)

thus $2^{-L_{X \lor Y}} = 2^{-L_X} + 2^{-L_Y} - 2^{-L_X - L_Y}$ (4)

This must hold for any and all $X$ and $Y$.

curi considers the case where $X$ and $Y$ are the same length ($L_X = L_Y = L$); starting with Equation (4), we get: $2^{-L_{X \lor Y}} = 2^{-L+1} - 2^{-2L}$ (5)

but $2^{-L+1} - 2^{-2L} > 2^{-L}$ (since $2^{-L} > 2^{-2L}$), i.e. $P(X \lor Y) > P(X)$ (6)

and the shortest program outputting $X \lor Y$ has to encode both $X$ and $Y$, so $L_{X \lor Y} > L$ and therefore $P(X \lor Y) = 2^{-L_{X \lor Y}} < 2^{-L} = P(X)$ (7)

so: $P(X) < P(X \lor Y) < P(X)$, a contradiction (8)

curi has slightly different logic and argues directly from the length of the combined program, which I think is reasonable; his argument means we get the same contradiction. I don't think those steps are necessary but they are worth mentioning as a difference. I think Equation (8) is enough.

I was curious about what happens when $L_X \neq L_Y$. Let's assume the following: $L_X < L_Y \leq L_{X \lor Y}$ (9)

so, from Equation (2): $P(X \lor Y) > P(X) = 2^{-L_X}$ (10)

by Equation (3) and Equation (10): $2^{-L_{X \lor Y}} > 2^{-L_X}$, i.e. $L_{X \lor Y} < L_X$ (11)

but Equation (9) says $L_{X \lor Y} \geq L_Y > L_X$ --- this contradicts Equation (11).

So there's an inconsistency regardless of whether $L_X = L_Y$ or not.
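A quick numerical sanity check of the equal-length case (a sketch assuming only the $2^{-L}$ prior and the independence sum rule above; the code and names are mine):

```haskell
-- For equal-length hypotheses X and Y (length L), compare:
--   * P(X or Y) from the sum rule (Eq. 2), which is always > P(X), with
--   * the 2^(-length) prior of any program longer than L, which is always < P(X).
-- The two requirements pull in opposite directions for every L.
main :: IO ()
main = mapM_ check [1 .. 10 :: Integer]
  where
    check l = do
      let pX      = 2 ** negate (fromIntegral l) :: Double -- P(X) = 2^-L, Eq. (1)
          pXorY   = 2 * pX - pX * pX                        -- Eq. (2) with P(Y) = P(X)
          pLonger = 2 ** negate (fromIntegral (l + 1))      -- prior of a longer program
      putStrLn $ "L=" ++ show l
              ++ "  P(X)=" ++ show pX
              ++ "  sum rule P(X or Y)=" ++ show pXorY
              ++ "  longer-program prior=" ++ show pLonger
              ++ "  conflict? " ++ show (pXorY > pX && pLonger < pX)
```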

Comment by Max Kaye (max-kaye) on Max Kaye's Shortform · 2020-09-04T12:07:58.393Z · LW · GW

testing \latex \LaTeX

does anyone know how to label equations and reference them?

@max-kaye u/max-kaye https://www.lesswrong.com/users/max-kaye

Comment by Max Kaye (max-kaye) on TAG's Shortform · 2020-08-31T04:35:23.708Z · LW · GW

If it's possible to use decision theory in a deterministic universe, then MWI doesnt make things worse except by removing refraining. However, the role of decision theory in a deterministic universe is pretty unclear, since you can't freely decide to use it to make a better decision than the one you would have made anyway.

[...]

Deterministic physics excluded free choice. Physics doesn't.

MWI is deterministic over the multiverse, not per-universe.

Comment by Max Kaye (max-kaye) on Mathematical Inconsistency in Solomonoff Induction? · 2020-08-27T13:11:37.258Z · LW · GW
A combination where both are fine or equally predicted fails to be a hypothesis.

Why? If I have two independent actions - flipping a coin and rolling a 6-sided die (d6) - am I not able to combine "the coin lands heads 50% of the time" and "the die lands even (i.e. 2, 4, or 6) 50% of the time"?

If you have partial predictions of X1XX0X and XX11XX you can "or" them into X1110X.

This is (very close to) a binary "or", I roughly agree with you.

But if you try to combine 01000 and 00010 the result will not be 01010 but something like 0X0X0.

This is sort of like a binary "and". Have the rules changed? And what are they now?
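Here's a sketch (my guess at the rule, not something the parent stated) that reproduces both quoted examples, treating X as "unspecified" and turning conflicting bits into X -- which is why it behaves like "or" in the first case and loses information in the second:

```haskell
-- Merge two partial predictions character by character:
--   * if either side is 'X' (unspecified), keep the specified side;
--   * if both sides agree, keep the value;
--   * if they conflict, the result is 'X'.
mergeBit :: Char -> Char -> Char
mergeBit 'X' b = b
mergeBit a 'X' = a
mergeBit a b
  | a == b    = a
  | otherwise = 'X'

merge :: String -> String -> String
merge = zipWith mergeBit

main :: IO ()
main = do
  putStrLn (merge "X1XX0X" "XX11XX")  -- "X1110X", matching the first example
  putStrLn (merge "01000"  "00010")   -- "0X0X0",  matching the second example
```

Under that single rule both quoted results come out; whether that's actually the rule intended is the question.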

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-26T01:20:42.871Z · LW · GW
So there is a relationship between the Miller and Popper papers conclusions, and it's assumptions. Of course there is. That is what I am saying. But you were citing it as an example of a criticism that doesn't depend on assumptions.
> Your argument proposes a criticism of Popper’s argument
No, it proposes a criticism of your argument ... the criticsm that there is a contradiction between your claim that the paper makes no assumptions, and the fact that it evidently does.

I didn't claim that paper made no assumptions. I claimed that refuting that argument^[1] would not refute CR, and vice versa. Please review the thread, I think there's been some significant miscommunications. If something's unclear to you, you can quote it to point it out.

[1]: for clarity, the argument in Q: A proof of the impossibility of inductive probability.


> Reality does not contradict itself
Firstly, epistemology goes first. You don't know anything about reality without having the means to acquire knowledge.

Inductivism is not compatible with this - it has no way to bootstrap except by some other, more foundational epistemic factors.

Also, you didn't really respond to my point or the chain of discussion-logic before that. I said an internal contradiction would be a way to refute an idea (as a second example when you asked for examples). You said contradictions being bad is an assumption. I said no, it's a conclusion, and offered an explanation (which you've ignored). In fact, through this discussion you haven't - as far as I can see - actually been interested in figuring out a) what anyone else thinks or b) where and what you might be wrong about.

Secondly, I didn't say it was the PNC was actually false.

I don't think there's any point talking about this, then. We haven't had any meaningful discussion about it and I don't see why we would.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-26T01:00:07.491Z · LW · GW
> Bertrand Russell had arguments against that kind of induction...
Looks like the simple organisms and algorithms didn't listen to him!

I don't think you're taking this seriously.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-25T12:53:23.313Z · LW · GW
CR doesn't have good arguments against the other kind of induction, the kind that just predicts future observations on the basis of past ones , the kind that simple organisms and algorithms can do.

This is the old kind of induction; Bertrand Russell had arguments against that kind of induction...

The refutations of that kind of induction are way beyond the bounds of CR.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-25T12:47:05.827Z · LW · GW
The assumption that contradictions are bad is a widespread assumption, but it is still an assumption.

Reality does not contradict itself; ever. An epistemology is a theory about how knowledge works. If a theory (epistemic or otherwise) contains an internal contradiction, it cannot accurately portray reality. This is not an assumption, it's an explanatory conclusion.

I'm not convinced we can get anywhere productive continuing this discussion. If you don't think contradictions are bad, it feels like there's going to be a lot of work finding common ground.

But I don't count it as an example, since I don't regard it as correct [...]

This is irrational. Examples of relationships do not depend on whether the example is real or not. All that's required is that the relationship is clear, whether each of us judges the idea itself as true or not doesn't matter in this case. We don't need to argue this point anyway, since you provided an example:

In particular , it is based on bivalent logic where, 1 and 0 are the only possible values, but the loud and proud inductivists here base their arguments on probabilistic logic, where propositions have a probability between but not including 1 and 0. So "induction must be based on bivalent logic" is an assumption.

Cool, so do you see how the argument you made is separate from whether inductivism is right or not?

Your argument proposes a criticism of Popper's argument. The criticism is your conjecture that Popper made a mistake. Your criticism doesn't rely on whether inductivism is right or not, just whether it's consistent or not (and consistent according to some principles you hint at). Similarly, if Popper did make a mistake with that argument, it doesn't mean that CR is wrong, or that Inductivism is wrong; it just means Popper's criticism was wrong.


Curiously, you say:

But I don't count it as an example, since I don't regard it as correct,

Do you count yourself a Bayesian or Inductivist? What probability did you assign to it being correct? And what probability do you generally assign to a false-positive result when you evaluate the correctness of examples like this?

Comment by Max Kaye (max-kaye) on MakoYass's Shortform · 2020-08-25T00:54:58.616Z · LW · GW
The concept of indexical uncertainty we're interested in is... I think... uncertainty about which kind of body or position in the universe your seat of consciousness is in, given that there could be more than one.

I'm not sure I understand yet, but does the following line up with how you're using the word?

Indexical uncertainty is uncertainty around the exact matter (or temporal location of such matter) that is directly facilitating, and required by, a mind. (this could be your mind or another person's mind)

Notes:

  • "exact" might be too strong a word
  • I added "or temporal location of such matter" to cover the sleeping beauty case (which, btw, I'm apparently a halfer or double halfer according to wikipedia's classifications, but haven't thought much about it)

Edit/PS: I think my counter-example with Alice, Alex, and Bob still works with this definition.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-25T00:36:05.097Z · LW · GW
> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
The other options need to be acceptable to both parties!

Sure, or the parties need a rational method of resolving a disagreement on acceptability. I'm not sure why that's particularly relevant, though.

> I can’t think of an example of an infinite regress except cases where there are other options which stop the regress.
I don't see how that is an example, principally because it seems wrong to me.

You didn't quote an example - I'm unsure if you meant to quote a different part?

In any case, what you've quoted isn't an example, and you don't explain why it seems wrong or what about it is an issue. Do you mean that cases exist where there is an infinite regress and it's not soluble with other methods?

I'm also not sure why this is particularly relevant.

Are we still talking about the below?

> We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
Do you have a concrete example?

I did give you an example (one of Popper's arguments against inductivism).

A generalised abstract case is where someone of a particular epistemology criticises another school of epistemology on the grounds of an internal contradiction. A criticism of that person's criticism does not necessarily relate to that person's epistemology, and vice versa.

Comment by Max Kaye (max-kaye) on Postel’s Principle as Moral Aphorism · 2020-08-24T07:37:22.479Z · LW · GW

Here are some (mostly critical) notes I made reading the post. Hope it helps you figure things out.

> * If you can’t say something nice, don’t say anything at all.

This is such bad advice (I realise eapache is giving it as an example of common moral sayings; though I note it's neither an aphorism nor a phrase). Like maybe it applies when talking to a widow at her husband's funeral?

"You're going to fast", "you're hurting me", "your habit of overreaching hurts your ability to learn", etc. These are good things to say in the right context, and not saying them allows bad things to keep happening.

> The manosphere would happily write me off as a “beta male”, and I’m sure Jordan Peterson would have something weird to say about lobsters and serotonin.

I don't know why this is in here, particularly the second clause -- I'm not sure it helps with anything. It's also mean.

> This combination of personality traits makes ...

The last thing you talk about is what Peterson might say, not your own personality. Sounds like you're talking about the personality trait(s) of "[having] something weird to say about lobsters and serotonin".

> This combination of personality traits makes Postel’s Principle a natural fit for defining my own behaviour.

I presume you mean "guiding" more than "defining". It could define standards you hold for your own behaviour.

> *[People who know me IRL will point out that in fact I am pretty judgemental a lot of the time. But I try and restrict my judginess ... to matters of objective efficiency, where empirical reality will back me up, and avoid any kind of value-based judgement. E.g. I will judge you for being an ineffective, inconsistent feminist, but never for holding or not holding feminist values.]*

This is problematic, e.g. 'I will judge you for being an ineffective, inconsistent nazi, but never for holding or not holding nazi values'. Making moral judgements is important. That said, judging things poorly is (possibly very) harmful. (Examples: treating all moral inconsistencies as equally bad, or treating some racism as acceptable b/c of the target race)

> annoyingly large number of people

I think it's annoyingly few. A greater population is generally good.

> There is clearly no set of behaviours I could perform that will satisfy all of them, so I focus on applying Postel’s Principle to the much smaller set of people who are in my “social bubble”

How do you know you aren't just friends with people who approve of this?

What do you do WRT everyone else? (e.g. shop-keeps, the mailman, taxi drivers)

> If I’m not likely to interact with you soon, or on a regular basis, then I’m relatively free to ignore your opinion.

Are you using Postel's Principle *solely* for approval? (You say "The more people who like me, the more secure my situation" earlier, but is there another reason?)

> Talking about the “set” of people on whom to apply Postel’s Principle provides a nice segue into the formal definitions that are implicit in the English aphorism.

How can formal definitions be implicit?

Which aphorism? You provided 5 things you called aphorisms, but you haven't called Postel's Principle that.

> ... [within the context of your own behaviour] something is only morally permissible for me if it is permissible for *all* of the people I am likely to interact with regularly.

What about people you are friends with for particular purposes? Example: a friend you play tennis with but wouldn't introduce to your parents.

What if one of those people decides that Postel's Principle is not morally permissible?

> ... [within the context of other ppl's behaviour] it is morally permissible if it is permissible for any of the people I am likely to interact with regularly.

You're basing your idea on which things are generally morally permissible on what other people think. (Note: you do acknowledge this later which is good)

This cannot deal with contradictions between people's moral views (a case where neither of those people necessarily has contradictions, but you do).

It also isn't an idea that works in isolation. Other people might have moral views b/c they have principles from which they derive those views. They could be mistaken about the principles or their application. In such a case would you - even if you realised they were mistaken - still hold their views as permissible? How is that rational?

> Since the set of actions that are considered morally permissible for me are defined effectively by my social circle, it becomes of some importance to intentionally manage my social circle.

This is a moral choice, by what moral knowledge can you make such a choice? I presume you see how using Postel's Principle here might lead you into a recursive trap (like an echo-chamber), and how it limits your ability to error correct if something goes wrong. Ultimately you're not in control of what your social circle becomes (or who's in and who's out).

> It would be untenable to make such different friends and colleagues that the intersection of their acceptable actions shrinks to nothing.

What? Why?

Your use of 'untenable' is unclear; is it just impractical but something you'd do if it were practical, or is it unthinkable to do so, or is it just so difficult it would never happen? (Note: I think option 3 is not true, btw)

> (since inaction is of course its own kind of action)

It's good you realise this.

> In that situation I would be forced to make a choice (since inaction is of course its own kind of action) and jettison one group of friends in order to open up behavioural manoeuvring space again.

I can see the logic of why you'd *want* to do this, but I can't see *how* you'd do it. Also, I don't see why you'd care to if it wasn't causing problems. I have friends and associates I value which I'd have to cut loose if I were to follow Postel's P. That would harm me, so how could it be moral to do so?

It would harm you too, unless the friends are a) collectively and individually not very meaningful (but then why be friends at all?) or b) not providing value to your life anyway (so why be friends at all?). Maybe there are other options?

> Unfortunately, it sometimes happens that people change their moral stances, ...

Why is this a bad thing!??!? It's **good** to learn you were wrong and improve your values to reflect that.

You expand the above with "especially when under pressure from other people who I may not be interacting with directly" -- I'd argue that's not *necessarily* a change in their preferences; the person may just be behaving that way to please someone else. It's hard to see why that would matter unless it happened within the friend group itself, or impacted the person so much that they couldn't spend time with you (the second case is something that happens alongside moral pressure in e.g. domestic abuse, so might be something to seriously consider).

> tomorrow one of my friends could decide they’re suddenly a radical Islamist and force me with a choice.

You bring up a decent problem with your philosophy, but then say:

> While in some sense “difficult”, many of these choices end up being rather easy; I have no interest in radical Islam, and so ultimately how close I was to this friend relative to the rest of my social circle matters only in the very extreme case where they were literally my only acquaintance worth speaking of.

First, "many" is not "all", so you still have undefined behaviour (like what to do in these situations). Secondly, who cares if you have an interest in radical Islam? A friend of yours suddenly began adhering to a pro-violence anti-reason philosophy. I don't think you need Postel's P. to know you don't want to casually hang with them again.

So I think this is a bad example for two reasons:
1. You dismiss the problem because "many of these choices end up being rather easy", but that's a bad reason to dismiss it, and I really hope many of those choices are not because a friend has recently decided terrorism might be a good hobby.
2. If you reject them just b/c you don't have an interest, that doesn't cover all cases; more importantly, to do so for that reason is to reject deeper moral explanations. How do you know you're "on the right side of history" if you can't judge it and won't use the moral knowledge we do have?

> Again unfortunately, it sometimes happens that large groups of people change their moral stances all at once. ... This sort of situation also forces me with a choice, and often a much more difficult one. ... If I expect a given moral meme to become dominant over the next decade, it seems prudent to be “on the right side of history” regardless of the present impact on my social circle.

I agree that you shouldn't take your friends' moral conclusions into account when thinking about big societal stuff. But the thing about the "right side of history" is that you can't predict it. Take the US civil war: with your Postel's P. inspired morality, your judgements would depend on which state you were in. Leading up to the war you'd probably have judged the dominant local view to be the one that would endure. If you didn't judge the situation like that, it means you would have used some other moral knowledge that isn't part of Postel's P.

> However what may be worse than any clean break is the moment just before, trying to walk the knife edge of barely-overlapping morals in the desperate hope that the centre can hold.

I agree, that sounds like a very uncomfortable situation.

> Even people who claim to derive their morality from first principles often end up with something surprisingly close to their local social consensus.

Why is this not by design? I think it's natural for ppl to mostly agree with their friend group on particular moral judgements (moral explanations can be a whole different ball game). I don't think Postel’s P. need be involved.

Additionally: social dynamics are such that a group can be very *restrictive* in regards to what's acceptable, and often treat harshly those members who are too liberal in what they accept. (Think Catholics in like the 1600s or w/e)

----

I think the programmingisterrible post is good.

> If some data means two different things to different parts of your program or network, it can be exploited—Interoperability is achieved at the expense of security.

Is something like *moral security* important to you? Maybe it's moot because you don't have anyone trying to maliciously manipulate you, but worth thinking about if you hold the keys to any accounts, servers, etc.
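
To make tef's exploit point concrete, here's a minimal Python sketch (my own illustration, not from the post); the toy `role=...;ttl=...` record format and both parser functions are hypothetical:

```python
# Two components parse the same toy "key=value;key=value" record, but disagree
# about duplicate keys: the validator keeps the FIRST value, the executor keeps
# the LAST. The exploit lives entirely in that gap.

def parse_first_wins(record: str) -> dict:
    """Lenient parser used by the (hypothetical) access-control check."""
    result = {}
    for field in record.split(";"):
        key, _, value = field.partition("=")
        result.setdefault(key.strip(), value.strip())  # first value wins
    return result

def parse_last_wins(record: str) -> dict:
    """Lenient parser used by the (hypothetical) component acting on the request."""
    result = {}
    for field in record.split(";"):
        key, _, value = field.partition("=")
        result[key.strip()] = value.strip()  # last value wins
    return result

request = "role=guest;role=admin"
print(parse_first_wins(request))  # {'role': 'guest'} -> validator says "fine"
print(parse_last_wins(request))   # {'role': 'admin'} -> executor grants admin
```

Both parsers are "liberal in what they accept"; the data means two different things to two different parts of the system, which is exactly the interoperability-vs-security trade tef describes.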

> The paper, and other work and talks from the LANGSEC group, outlines a manifesto for language based security—be precise and consistent in the face of ambiguity

Here tef (the author) points out that preciseness and consistency (e.g. having and adhering to well formed specs) are a way to avoid the bad things about Postel’s P. Do you agree with this? Are your own moral views "precise and consistent"?

> Instead of just specifying a grammar, you should specify a parsing algorithm, including the error correction behaviour (if any).

This is good, and I think applies to morality: you should be able to handle any moral situation, know the "why" behind any decision you make, and know how you avoid errors in moral judgements/reasoning.

Note: "any moral situation" is fine for me to say here b/c "don't make judgements on extreme or wacky moral hypotheticals" can be part of your moral knowledge.
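
Back on the parsing side of that point: here's a hedged sketch of what "specify the parsing algorithm, including the error correction behaviour" could look like for the same toy format as above (again my own code, not tef's): errors are rejected explicitly rather than silently corrected, so every component agrees on what a record means.

```python
# A strict parser for the same toy format: the error behaviour is part of the
# spec. Malformed fields and duplicate keys are rejected rather than guessed
# at, so every component that uses this parser agrees on what a record means.

class ParseError(ValueError):
    pass

def parse_strict(record: str) -> dict:
    result = {}
    for field in record.split(";"):
        key, sep, value = field.partition("=")
        key = key.strip()
        if not sep or not key:
            raise ParseError(f"malformed field: {field!r}")
        if key in result:
            raise ParseError(f"duplicate key: {key!r}")
        result[key] = value.strip()
    return result

print(parse_strict("role=guest;ttl=60"))  # {'role': 'guest', 'ttl': '60'}
# parse_strict("role=guest;role=admin")   # would raise ParseError: duplicate key
```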

Comment by Max Kaye (max-kaye) on MakoYass's Shortform · 2020-08-24T06:02:21.705Z · LW · GW
> Hmm. It appears to me that Qualia are whatever observations affect indexical claims, and anything that affects indexical claims is a qualia

I don't think so, here is a counter-example:

Alice and Bob start talking in a room. Alice has an identical twin, Alex. Bob doesn't know about the twin and thinks he's talking to Alex. Bob asks: "How are you today?". Before Alice responds, Alex walks in.

Bob's observation of Alex will surprise him, and he'll quickly figure out that something's going on. But more importantly: Bob's observation of Alex alters the indexical 'you' in "How are you today?" (at least compared to Bob's intent, and it might change for Alice if she realises Bob was mistaken, too).

I don't think this is anything close to describing qualia. The experience of surprise can be a quale, the feeling of discovering something can be a quale (eureka moments), the experience of the colour blue is a quale, but the observation of Alex is not.

Do you agree with this? (It's from https://plato.stanford.edu/entries/indexicals/)

> An indexical is, roughly speaking, a linguistic expression whose reference can shift from context to context. For example, the indexical ‘you’ may refer to one person in one context and to another person in another context.

Btw, 'qualia' is the plural form of 'quale'

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-22T21:55:27.319Z · LW · GW
> It's a bad thing if ideas can't be criticised at all, but it's also a bad thing if the relationship of mutual criticism is cyclic, if it doesn't have an obvious foundation or crux.

Do you have an example? I can't think of an example of an infinite regress except cases where there are other options which stop the regress. (I have examples of these, but they're contrived)

> > We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong
> Do you have a concrete example?

I think some of the criticisms of inductivism Popper offered are like this. Even if Popper was wrong about big chunks of critical rationalism, it wouldn't necessarily invalidate the criticisms. Example: A proof of the impossibility of inductive probability.

(Note: I don't think Popper was wrong but I'm also not sure it's necessary to discuss that now if we disagree; just wanted to mention)

> > And it’s always possible both are wrong, anyway
> Kind of, but "everything is wrong" is vulgar scepticism.

I'm not suggesting anything I said was a reason to think both theories are wrong; I listed it because it was a possibility I didn't mention in the other paragraphs, and it's a bit of a trivial case for this stuff (i.e. if we come up with a reason both are wrong, and we can't answer that criticism, then we don't have to worry about either of them anymore).

Comment by Max Kaye (max-kaye) on Universality Unwrapped · 2020-08-22T20:45:08.696Z · LW · GW

FYI this usage of the term *universality* overloads a sorta-similar concept David Deutsch (DD) uses in *The Fabric of Reality* (1997) and in *The Beginning of Infinity* (2011). In BoI it's the subject of Chapter 6 (titled "The Jump to Universality"). I don't know what history the idea has prior to that. Some extracts are below to give you a bit of an idea of how DD uses the word 'universality'.

Part of the reason I mention this is the reference to Popper; DD is one of the greatest living Popperians and has made significant contributions to critical rationalism. I'd consider it somewhat odd if the author(s) (Paul Christiano?) weren't familiar with DD's work given the overlap. DD devotes a fair proportion of BoI to discussing universality and AGI directly -- both separately and together -- and another fair proportion of the book to foundations for that discussion.

---

General comment: Your/this use of 'universality' (particularly in AN #81) feels a lot like the idea of *totality* (as in a total function, etc.) -- more like omniscience than an important epistemic property.

---

Some extracts from BoI on universality (with a particular but not exclusive focus on computation):

Here is an even more speculative possibility. The largest benefits of any universality, beyond whatever parochial problem it is intended to solve, come from its being useful for further innovation. And innovation is unpredictable. So, to appreciate universality at the time of its discovery, one must either value abstract knowledge for its own sake or expect it to yield unforeseeable benefits. In a society that rarely experienced change, both those attitudes would be quite unnatural. But that was reversed with the Enlightenment, whose quintessential idea is, as I have said, that progress is both desirable and attainable. And so, therefore, is universality.

---

Babbage originally had no conception of computational universality. Nevertheless, the Difference Engine already comes remarkably close to it – not in its repertoire of computations, but in its physical constitution. To program it to print out a given table, one initializes certain cogs. Babbage eventually realized that this programming phase could itself be automated: the settings could be prepared on punched cards like Jacquard’s, and transferred mechanically into the cogs. This would not only remove the main remaining source of error, but also increase the machine’s repertoire. Babbage then realized that if the machine could also punch new cards for its own later use, and could control which punched card it would read next (say, by choosing from a stack of them, depending on the position of its cogs), then something qualitatively new would happen: the jump to universality. Babbage called this improved machine the Analytical Engine. He and his colleague the mathematician Ada, Countess of Lovelace, knew that it would be capable of computing anything that human ‘computers’ could, and that this included more than just arithmetic: it could do algebra, play chess, compose music, process images and so on. It would be what is today called a universal classical computer. (I shall explain the significance of the proviso ‘classical’ in Chapter 11, when I discuss quantum computers, which operate at a still higher level of universality.)

---

The mathematician and computer pioneer Alan Turing later called this mistake ‘Lady Lovelace’s objection’. It was not computational universality that Lovelace failed to appreciate, but the universality of the laws of physics. Science at the time had almost no knowledge of the physics of the brain. Also, Darwin’s theory of evolution had not yet been published, and supernatural accounts of the nature of human beings were still prevalent. Today there is less mitigation for the minority of scientists and philosophers who still believe that AI is unattainable. For instance, the philosopher John Searle has placed the AI project in the following historical perspective: for centuries, some people have tried to explain the mind in mechanical terms, using similes and metaphors based on the most complex machines of the day. First the brain was supposed to be like an immensely complicated set of gears and levers. Then it was hydraulic pipes, then steam engines, then telephone exchanges – and, now that computers are our most impressive technology, brains are said to be computers. But this is still no more than a metaphor, says Searle, and there is no more reason to expect the brain to be a computer than a steam engine.
But there is. A steam engine is not a universal simulator. But a computer is, so expecting it to be able to do whatever neurons can is not a metaphor: it is a known and proven property of the laws of physics as best we know them. (And, as it happens, hydraulic pipes could also be made into a universal classical computer, and so could gears and levers, as Babbage showed.)

---

Because of the necessity for error-correction, all jumps to universality occur in digital systems. It is why spoken languages build words out of a finite set of elementary sounds: speech would not be intelligible if it were analogue. It would not be possible to repeat, nor even to remember, what anyone had said. Nor, therefore, does it matter that universal writing systems cannot perfectly represent analogue information such as tones of voice. Nothing can represent those perfectly. For the same reason, the sounds themselves can represent only a finite number of possible meanings. For example, humans can distinguish between only about seven different sound volumes. This is roughly reflected in standard musical notation, which has approximately seven different symbols for loudness (such as p, mf, f, and so on). And, for the same reason, speakers can only intend a finite number of possible meanings with each utterance. Another striking connection between all those diverse jumps to universality is that they all happened on Earth. In fact all known jumps to universality happened under the auspices of human beings – except one, which I have not mentioned yet, and from which all the others, historically, emerged. It happened during the early evolution of life.
Comment by Max Kaye (max-kaye) on The Mirror Chamber: A short story exploring the anthropic measure function and why it can matter · 2020-08-22T19:36:19.096Z · LW · GW

This is commentary I started making as I was reading the first quote. I think some bits of the post are a bit vague or confusing but I think I get what you mean by anthropic measure, so it's okay in service to that. I don't think equating anthropic measure to mass makes sense, though; counterexamples seem trivial.

> The two instances can make the decision together on equal footing, taking on exactly the same amount of risk, each- having memories of being on the right side of the mirror many times before, and no memories of being on the wrong- tacitly feeling that they will go on to live a long and happy life.

This feels a bit like quantum suicide.

note: having no memories of being on the wrong side does not make this any more pleasant an experience to go through, nor does it provide any reassurance against being the replica (presuming that's the one which is killed).

> As is custom, the loser speaks first.

Naming the characters Paper and Scissors is a neat idea.

> Paper wonders, what does it feel like to be... more? If there were two of you, rather than just one, wouldn't that mean something? What if there were another, but it were different?... so that-
>
> [...]
>
> Scissors: "What would it feel like? To be... More?... What if there were two of you, and one of me? Would you know?"

isn't paper thinking in 2nd person but then scissors in 1st? so paper is thinking about 2 ppl but scissors about 3 ppl?

> It was true. The build that plays host to the replica (provisionally named "Wisp-Complete"), unlike the original's own build, is effectively three brains interleaved

Wait, does this now mean there's 4 ppl? 3 in the replica and 1 in the non-replica?

> Each instance has now realised that the replica- its brain being physically more massive- has a higher expected anthropic measure than the original.

Um okay, wouldn't they have maybe thought about this after 15 years of training and decades of practice in the field?

> It is no longer rational for a selfish agent in the position of either Paper nor Scissors to consent to the execution of the replica, because it is more likely than not, from either agent's perspective, that they are the replica.

I'm not sure this follows in our universe (presuming it is rational when it's like 1:1 instead of 3:1 or whatever). like I think it might take different rules of rationality or epistemology or something.

> Our consenters have had many many decades to come to terms with these sorts of situations.

Why are Paper and Scissors so hesitant then?

> That gives any randomly selected agent that has observed that it is in the mirror chamber a 3/4 majority probability of being the replica, rather than being the original.

I don't think we've established sufficiently that the 3 minds 1 brain thing are actually 3 minds. I don't think they qualify for that, yet.

> But aren't our consenters perfectly willing to take on a hefty risk death in service of progress? No. Most Consenters aren't. Selling one's mind and right to life in exchange for capital would be illegal.

Why would it be a hefty risk? Isn't it a 0% chance of death? (the replica is always the one killed)

> In a normal mirror chamber setup, when the original enters the mirror chamber, they are confident that it is the original who will walk out again. They are taking on no personal risk. None is expected, and none is required.

Okay we might be getting some answers soon.

> The obvious ways of defecting from protocol- an abdication of the responsibility of the consenter, a refusal to self-murder, an attempt to replicate without a replication license- are taken as nothing less than Carcony.

Holy shit this society is dystopic.

> It would be punished with the deaths of both copies and any ancestors of less than 10 years of divergence or equivalent.

O.O

> But if, somehow, the original were killed? What if neither instance of the Consenter signed for their replica's execution, and the replica were left alive. That would not be considered Carcony. It would not even be considered theft- because a brain always belongs to its mind.

I'm definitely unclear on the process for deciding; wouldn't like only one guillotine be set up and both parties affixed in place? (Moreover, why wouldn't the replica just be a brain and not in a body, so no guillotine, and just fed visual inputs along with the mirror-simulation in the actual room -- sounds feasible)

> What if neither instance of the Consenter signed for their replica's execution

Wouldn't this be an abdication of responsibility as mentioned in the prev paragraph?

> So, do you see now? Do you see how Consenter Nai Paper-Chell-Glass-Stratton was motivated by a simple alignment of the payoff matrices?

Presumably to run away with other-nai-x3-in-a-jar-stratton?

> Paper: "You wouldn't do that to me. Look... if you're the original... And I do myself, and I'm the replica. I wont actually be dead, because if you destroy a representation of your brain case in a simulation that doesn't actually destroy you in real life. I might not even desync. Whatever happens, I'll still be alive after the penetration so I'll know I'm the replica, but you might not know. It might look like I'm really dead. And you'll have no incentive to follow through and do yourself at that point."

> Scissors: "I still don't see it."

So both parties sign for the destruction of the replica, but only the legit Nai's signing will actually trigger the death of the replica. The replica Nai's signing will only SIMULATE the death of a simulated replica Nai (the "real" Nai being untouched) - though if this happened wouldn't they 'desync' - like not be able to communicate? (presuming I understand your meaning of desync)

> Paper: "... If you're the replica, it doesn't matter whether you do yourself, you'll still get saved either way, but you're incented not to do yourself because having a simulated spike stuck through your simulated head will probably be pretty uncomfortable. But also, if you're the original, you're sort of doomed either way, you're just incented to run off and attempt Carcony, but there's no way the replica would survive you doing that, and you probably wouldn't either, you wouldn't do that to me. Would you?"

I don't follow the "original" reasoning; if you're the original and you do yourself the spike goes through the replica's head, no? So how do you do Carcony at that point?

> The test build is an order of magnitude hardier than Nai's older Cloud-Sheet. As such, the testing armature is equipped to apply enough pressure to pierce the Cloud-Sheet's shielding, and so it was made possible for the instances to conspire to commit to the legal murder of Consenter Nai Scissors Bridger Glass Stratton.

So piercing the shielding of the old brain (cloud-sheet) is important b/c the various Nais (ambiguous: all 4 or just 3 of them) are conspiring to murder normal-Nai and they need to pierce the cloud-sheet for that. But aren't most new brains they test hardier than the one Nai is using? So isn't it normal that the testing-spike could pierce her old brain?

> A few things happened in the wake of Consenter Paper Stratton's act of praxis.

omit "act of", sorta redundant.

> but most consenter-adjacent philosophers took the position that it was ridiculous to expect this to change the equations, that a cell with thrice the mass should be estimated to have about thrice the anthropic measure, no different.

This does not seem consistent with the universe. If that was the case then it would have been an issue going smaller and smaller to begin with, right?

Also, 3x lattices makes sense for error correction (like EC RAM), but not 3x mass.

> The consenter union banned the use of mirror chambers in any case where the reasonable scoring of the anthropic measure of the test build was higher than the reasonable scoring of a consenter's existing build.

this presents a problem for testing better brains; curious if it's going to be addressed.

I just noticed "Consenter Nai Paper-Chell-Glass-Stratton" - the 'Paper' refers to the rock-paper-scissors earlier (confirmed with a later Nai reference). She's only done this 4 times now? (this being replication or the mirror chamber)

Earlier, "The rational decision for a selfish agent instead becomes..." is implying the rational decision is to execute the original -- presumably this is an option the consenter always has? Like they get to choose which one is killed? Why would that be an option? Why not just have a single button such that when they both press it, the replica dies; no choice in the matter.

> Scissors: "I still don't see it."

Scissors is slower so scissors dies?

> Paper wonders, what does it feel like to be... more? If there were two of you, rather than just one, wouldn't that mean something? What if there were another, but it were different?... so that-

I thought this was Paper thinking not wondering aloud. In that light

> Scissors: "What would it feel like? To be... More?... What if there were two of you, and one of me? Would you know?"

looks like partial mind reading or something, like super mental powers (which shouldn't be a property of running a brain 3x over but I'm trying to find out why they concluded Scissors was the original)

> Each instance has now realised that the replica- its brain being physically more massive- has a higher expected anthropic measure than the original.

At this point in the story isn't the idea that it has a higher anthropic measure b/c it's 3 brains interleaved, not 1? while the parenthetical bit ("its brain ... massive") isn't a reason? (Also, the mass thing that comes in later; what if they made 3 brains interleaved with the total mass of one older brain?)

Anyway, I suspect answering these issues won't be necessary to get an idea of anthropic measure.

(continuing on)

> Anthropic measure really was the thing that caused consenter originals to kill themselves.

I don't think this is rational FYI

> And if that wasn't true of our garden, we would look out along the multiverse hierarchy and we would know how we were reflected infinitely, in all variations.
> [...]
> It became about relative quantities.

You can't get relative quantities of infinite sets (or subsets of them) just by counting elements; the naive ratios are undefined. You can have *measures*, though. David Deutsch's Beginning of Infinity goes into some detail about this -- both generally and wrt many worlds and the multiverse.
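
To illustrate the kind of thing I mean, here's a standard measure-like notion (my example, not something specific from BoI): the natural density on the positive integers, which assigns the evens a relative quantity of 1/2 even though the evens and the naturals are both infinite.

```latex
% Natural (asymptotic) density of a set A of positive integers:
\[
  d(A) \;=\; \lim_{n \to \infty} \frac{\lvert A \cap \{1, \dots, n\} \rvert}{n}
\]
% For the even numbers E this gives
\[
  d(E) \;=\; \lim_{n \to \infty} \frac{\lfloor n/2 \rfloor}{n} \;=\; \frac{1}{2},
\]
% so "half the naturals are even" is meaningful via a density/measure,
% even though counting elements alone can't produce that ratio.
```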

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-22T18:37:02.134Z · LW · GW
> I would like to ask him if he maintains a distinction between values and preferences, morality and (well formed) desire.

I think he'd say 'yes' to a distinction between morality and desire, at least in the way I'm reading this sentence. My comment: Moral statements are part of epistemology and not dependent on humans or local stuff. However, as one learns more about morality and considers their own actions, their preferences progressively change to be increasingly compatible with their morality.

Being a fallibilist I think he'd add something like or roughly agree with: the desire to be moral doesn't mean all our actions become moral, we're fallible and make mistakes, so sometimes we think we're doing something moral that turns out not to be (at which point we have some criticism for our behaviour and ways to improve it).

(I'm hedging my statements here b/c I don't want to put words in DD's mouth; these are my guesses)

> I prefer schools that don't.

Wouldn't that just be like hedonism or something like that? I'm not sure what would be better about a school that doesn't.

> But I've never asked those who do whether they have a precise account of what moral values are, as a distinct entity from desires, maybe they have a good and useful account of values, where they somehow reliably serve the aggregate of our desires, that they just never explain because they think everyone knows it intuitively, or something. I don't. They seem too messy to prove correctness of.

Why is the definition of values and the addition of "moral" not enough?

Definitions (from google):

[moral] values: [moral] principles or standards of behaviour; one's judgement of what is important in life.

principle: a fundamental truth or proposition that serves as the foundation for a system of belief or behaviour or for a chain of reasoning.

I'd argue for a slightly softer definition of principle, particularly it should account for: moral values and principles can be conclusions, they don't have to be taken as axiomatic, however, they are *general* and apply universally (or near-universally).

> They seem too messy to prove correctness of.

Sure, but we can still learn things about them, and we can still reason about whether they're wrong or right.

Here's a relevant extract from BoI (about 20% through the book, in ch5 - there's a fair amount of presumed reading at this point)

In the case of moral philosophy, the empiricist and justificationist misconceptions are often expressed in the maxim that ‘you can’t derive an ought from an is’ (a paraphrase of a remark by the Enlightenment philosopher David Hume). It means that moral theories cannot be deduced from factual knowledge. This has become conventional wisdom, and has resulted in a kind of dogmatic despair about morality: ‘you can’t derive an ought from an is, therefore morality cannot be justified by reason’. That leaves only two options: either to embrace unreason or to try living without ever making a moral judgement. Both are liable to lead to morally wrong choices, just as embracing unreason or never attempting to explain the physical world leads to factually false theories (and not just ignorance).
Certainly you can’t derive an ought from an is, but you can’t derive a factual theory from an is either. That is not what science does. The growth of knowledge does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations. And, although factual evidence and moral maxims are logically independent, factual and moral explanations are not. Thus factual knowledge can be useful in criticizing moral explanations.
For example, in the nineteenth century, if an American slave had written a bestselling book, that event would not logically have ruled out the proposition ‘Negroes are intended by Providence to be slaves.’ No experience could, because that is a philosophical theory. But it might have ruined the explanation through which many people understood that proposition. And if, as a result, such people had found themselves unable to explain to their own satisfaction why it would be Providential if that author were to be forced back into slavery, then they might have questioned the account that they had formerly accepted of what a black person really is, and what a person in general is – and then a good person, a good society, and so on.
Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-22T17:57:08.926Z · LW · GW
> In what way are the epistemologies actually in conflict?

Well, they disagree on how to judge ideas, and why ideas are okay to treat as 'true' or not.

There are practical consequences to this disagreement; some of the best CR thinkers claim MIRI are making mistakes that are detrimental to the future of humanity+AGI, for **epistemic** reasons no less.

> My impression is that it is more just a case of two groups of people who maybe don't understand each other well enough, rather than a case of substantiative disagreement between the useful theories that they have, regardless of what DD thinks it is.

I have a sense of something like this, too, both in the way LW and CR "read" each other, and in the more practical sense of agreement in the outcome of many applications.

I do still think there is a substantive disagreement, though. I also think DD is one of the best thinkers wrt CR and broadly endorse ~everything in BoI (there are a few caveats, a typo and improvements to how-to-vary, at least; I'll mention if more come up. The yes/no stuff I mentioned in another post is an example of one of these caveats). I mention endorsing BoI b/c if you wanted to quote something from BoI it's highly likely I wouldn't have an issue with it (so it's a good source of things for critical discussion).

> Bayes does not disagree with true things, nor does it disagree with useful rules of thumb.

CR agrees here, though there is a good explanation of "rules of thumb" in BoI that covers how, when, and why rules of thumb can be dangerous and/or wrong.

> Whatever it is you have, I think it will be conceivable from bayesian epistemological primitives, and conceiving it in those primitives will give you a clearer idea of what it really is.

This might be a good way to try to find disagreements between BE (Bayesian Epistemology) and CR in more detail. It also tests my understanding of CR (and maybe a bit of BE too).

I've given some details on the sorts of principles in CR in my replies^1. If you'd like to try this, do you have any ideas on where to go next? I'm happy to provide more detail with some prompting about the things you take issue with or you think need more explanation / answering criticisms.

[1]: or, at least my sub-school of thought; some of the things I've said are actually controversial within CR, but I'm not sure they'll be significant.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-22T17:46:41.527Z · LW · GW

It's not uncommon for competing ideas to have that sort of relationship. This is a good thing, though, because you have ways of making progress: e.g. compare the two ideas to come up with an experiment or create a more specific goal. Typically refuting one of those ideas will also answer or refute the criticism attached to it.

If a theory doesn't offer some refutation for competing theories then that fact is (potentially) a criticism of that theory.

We can also come up with criticisms that are sort of independent of where they came from, like a new criticism is somewhat linked to idea A but idea A can be wrong in some way without implying the criticism was also wrong. It doesn't make either theory A or B more likely or something when this happens; it just means there are two criticisms not one.

And it's always possible both are wrong, anyway.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-22T17:30:13.160Z · LW · GW

I'm happy to do this. On the one hand I don't like that lots of replies creates more pressure to reply to everything, but I think we'll probably be fine focusing on the stuff we find more important if we don't mind dropping some loose ends. If they become relevant we can come back to them.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-22T09:49:24.364Z · LW · GW
> > CR says that truth is objective
> I'd say bayesian epistemology's stance is that there is one completely perfect way of understanding reality, but that it's perhaps provably unattainable to finite things.
> You cannot disprove this, because you have not attained it. You do not have an explanation of quantum gravity or globally correlated variation in the decay rate of neutrinos over time or any number of other historical or physical mysteries.
> [...]
> It's good to believe that there's an objective truth and try to move towards it, but you also need to make peace with the fact that you will almost certainly never arrive at it

Yes, knowledge creation is an unending, iterative process. It could only end if we come to the big objective truth, but that can't happen (the argument for why is in BoI - the beginning of infinity).

> We sometimes talk about aumann's agreement theorem, the claim that any two bayesians who, roughly speaking, talk for long enough, will eventually come to agree about everything.

I think this is true of any two *rational* people with sufficient knowledge, and it's rationality, not Bayesianism, that's important. If two partially *irrational* bayesians talk, then there's no reason to think they'd reach agreement on ~everything.

There is a subtle case with regards to creative thought, though: take two people who agree on ~everything. One of them has an idea, they now don't agree on ~everything (but can get back to that state by talking more).

WRT "sufficient knowledge": the two ppl need methods of discussing which are rational, and rational ways to resolve disagreements and impasse chains. They also need the right attitudes about solving problems, namely that any problem they run into in the discussion can be solved and that one or both of them can come up with ways to deal with *any* problem when it arises.

> > taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense
> Which means "wrong" is no longer a meaningful word. Do you think you can operate without having a word like "wrong"? Do you think you can operate without that concept?

If it were meaningless I wouldn't have had to add "in an absolute sense". Just because an explanation is wrong in an *absolute* sense (i.e. it doesn't perfectly match reality) does not mean it's not *useful*. Fallibilism generally says it's okay to believe things that are false (which, in some respect, all explanations are); however, there are conditions on when that's okay, like: there are no known unanswered criticisms and no alternatives.

Since BoI there has been more work on this problem and the reasoning around when to call something "true" (practically speaking) has improved - I think. Particularly:

  • Knowledge exists relative to *problems*
  • Whether knowledge applies or is correct or not can be evaluated rationally because we have *goals* (sometimes these goals are not specific enough, and there are generic ways of making your goals arbitrarily specific)
  • Roughly: true things are explanations/ideas which solve your problem, have no known unanswered criticism (i.e. are not refuted), and no alternatives which have no known unanswered criticisms
  • something is wrong if the conjecture that it solves the problem is refuted (and that refutation is unanswered)
  • note: a criticism of an idea is itself an idea, so can be criticised (i.e. the first criticism is refuted by a second criticism) - this can be recursive and potentially go on forever (tho we know ways to make sure it doesn't). See the sketch below.
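
A minimal sketch of that recursion (my own toy model, not a formal statement of yes/no philosophy): an idea counts as refuted only if at least one of its criticisms is itself non-refuted.

```python
# Toy model of refuted/non-refuted: an idea is refuted iff it has at least one
# criticism that is itself non-refuted. Criticisms are just ideas, so the same
# check applies to them (this toy assumes the criticism tree is finite/acyclic).

class Idea:
    def __init__(self, text: str):
        self.text = text
        self.criticisms: list["Idea"] = []

def is_refuted(idea: Idea) -> bool:
    return any(not is_refuted(c) for c in idea.criticisms)

theory = Idea("seasons are caused by the Earth's distance from the Sun")
c1 = Idea("the hemispheres have opposite seasons at the same distance")
theory.criticisms.append(c1)
print(is_refuted(theory))  # True: c1 stands unanswered, so the theory is refuted

c1.criticisms.append(Idea("(an answer to c1 would go here)"))
print(is_refuted(theory))  # False: c1 is now refuted, so the theory stands again
```
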
> I think DD sometimes plays inflamatory word games, defining things poorly on purpose.

I think he's in a tough spot to try and explain complex, subtle relationships in epistemology using a language where the words and grammar have been developed, in part, to be compatible with previous, incorrect epistemologies.

I don't think he defines things poorly (at least typically); and would acknowledge an incomplete/fuzzy definition if he provided one. (Note: one counterexample is enough to refute this claim I'm making)

> > there are rational ways to choose *exactly one* explanation (or zero if none hold up)
> If you point a gun at a bayesian's head and force them to make a bet against a single outcome, they're always supposed to be able to. It's just, there are often reasons not to.

I think you misunderstand me.

Let's say you wanted a pet. We need to make a conjecture about what to buy you that will make you happy (hopefully without developing regret later). The possible set of pets to start with is all the things that anyone has ever called a pet.

With something like this there will be lots of other goals (background goals) which we need to satisfy but don't normally list. An example is that the pet doesn't kill you, so we remove snakes, elephants, and other things that might hurt you. There are other background goals like the life of the pet or ongoing cost; adopting you a cat with operable cancer isn't a good solution.

There are maybe other practical goals too, like: it should be an animal (no pet rocks), should be fluffy (so no fish, etc), shouldn't cost more than $100, and yearly cost is under $1000 (excluding medical, but you get health insurance for that).

Maybe we do this sort of refinement a bit more and get a list like: cat, dog, rabbit, mouse.

You might be *happy* with any of them, but can you be *more happy* with one than any other; is there a *best* pet? **Note: this is not an optimisation problem** b/c we're not turning every solution into a single unit (e.g. your 'happiness index'); we're providing *decisive reasons* for why an option should or shouldn't be included. We've also been using this term "happy" but it's more than just that, it's got other important things in there -- the important thing, though, is that it matches your *preference* (i.e. each of the goals we introduce is in fact a goal of yours; put another way: the conditions we introduce correspond directly and accurately to a goal).

This is the sort of case where there's no gun to anyone's head, but we can continue to refine down to a list of exactly **one** option (or zero). Let's say you wanted an animal you could easily play with -> then rabbit and mouse are excluded, so we have options: cat, dog. If you'd prefer an animal that wasn't a predator, both cat and dog are excluded and we get to zero (so we need to come up with new options or remove a goal). If instead you wanted a pet that you could easily train to use a litter tray, well, we can exclude a dog so you're down to one. Let's say the litter tray is the condition you imposed.

What happens if I remember ferrets can be pets and I suggest that? Well, now we need a *new* goal to find which of the cat or ferret you'd prefer.

Note: for most things we don't go to this level of detail b/c we don't need to; like if you have multiple apps to choose from that satisfy all your goals you can just choose one. If you find out a reason it's not good, then you've added a new goal (if you weren't originally mistaken, that is) and can go back to the list of other options.

Note 2: The method and framework I've just used wrt the pet problem is something called yes/no philosophy and has been developed by Elliot Temple over the past ~10+ years. Here are some links:

  • Argument · Yes or No Philosophy
  • Curiosity – Rejecting Gradations of Certainty
  • Curiosity – Critical Rationalism Epistemology Explanations
  • Curiosity – Critical Preferences and Strong Arguments
  • Curiosity – Rationally Resolving Conflicts of Ideas
  • Curiosity – Explaining Popper on Fallible Scientific Knowledge
  • Curiosity – Yes or No Philosophy Discussion with Andrew Crawshaw

Note 3: During the link-finding exercise I found this: "All ideas are either true or false and should be judged as refuted or non-refuted and not given any other status – see yes no philosophy." (credit: Alan Forrester) I think this is a good way to look at it; *technically and epistemically speaking:* true/false is not a judgement we can make, but refuted/non-refuted *is*. We use refuted/non-refuted as a proxy for false/true when making decisions, because (as fallible beings) we cannot do any better than that.

I'm curious about how a bayesian would tackle that problem. Do you just stop somewhere and say "the cat has a higher probability so we'll go with that"? Do you introduce goals like I did to eliminate options? Is the elimination of those options equivalent to something like: reducing the probability of those options being true to near-zero? (or absolute zero?) Can a bayesian use this method to eliminate options without doing probability stuff? If a bayesian *can*, what if I conjecture that it's possible to *always* do it for *all* problems? If that's the case there would be a way to decisively reach a single answer - so no need for probability. (There's always the edge case that there was a mistake somewhere, but I don't think there's a meaningful answer to problems like "P(a mistake in a particular chain of reasoning)" or "P(the impact of a mistake is that the solution we came to changes)" -- note: those P(__) statements are within a well defined context like an exact and particular chain of reasoning/explanation.)
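
If it helps, here's a rough sketch of the elimination process I described above (my own toy code; the specific goal predicates are hypothetical stand-ins for goals you'd actually hold):

```python
# Decisive elimination rather than scoring: each goal either rules an option
# out or leaves it standing; we never combine goals into a single
# "happiness index" to optimise.

options = ["cat", "dog", "rabbit", "mouse", "ferret"]

goals = {
    "easy to play with": lambda pet: pet not in {"rabbit", "mouse"},
    "can be trained to use a litter tray": lambda pet: pet in {"cat", "ferret"},
}

def refine(options, goals):
    surviving = list(options)
    for name, passes in goals.items():
        surviving = [pet for pet in surviving if passes(pet)]
        print(f"after {name!r}: {surviving}")
    return surviving

remaining = refine(options, goals)
# 0 options left  -> remove a goal or conjecture new options
# 1 option left   -> that's the non-refuted choice
# 2+ options left -> we need a new goal to distinguish them (cat vs ferret here)
```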

> Why believe anything!

So we can make decisions.

> The great thing is that you don't need to have beliefs to methodically do your best to optimize expected utility

Yes you do - you need a theory of expected utility; how to measure it, predict it, manipulate it, etc. You also need a theory of how to use things (b/c my expected utility of amazing tech I don't know how to use is 0). You need to believe these theories are true, otherwise you have no way to calculate a meaningful value for expected utility!
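
To spell out the dependence (a sketch of the standard formula, nothing CR-specific): every ingredient of an expected-utility calculation is itself a theory you have to treat as true.

```latex
% Standard expected utility of an action a over an outcome space O:
\[
  \mathrm{EU}(a) \;=\; \sum_{o \in O} P(o \mid a)\, U(o)
\]
% O (what can happen), P(o|a) (a predictive model), and U (how to value and
% use each outcome) are all theories. If U is wrong -- e.g. it assigns high
% utility to amazing tech you don't actually know how to use -- then EU(a)
% is a meaningless number, however carefully you compute it.
```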

> You can operate well amid uncertainty

Yes, I additionally claim we can operate **decisively**.

> In conclusion I don't see many substantial epistemological differences.

It matters more for big things, like SENS and MIRI. Both are working on things other than key problems; there is no good reason to think they'll make significant progress b/c there are other more foundational problems.

I agree practically a lot of decisions come out the same.

> I've heard that DD makes some pretty risible claims about the prerequisites to creative intelligence (roughly, that values must, from an engineering feasibility perspective, be learned, that it would be in some way hard to make AGI that wouldn't need to be "raised" into a value system by a big open society, that a thing with 'wrong values' couldn't participate in an open society [and open societies will be stronger] and so wont pose a major threat), but it's not obvious to me how those claims bear at on bayesian epistemology.

I don't know why they would be risible -- nobody has a good reason why his ideas are wrong, to my knowledge. They refute a lot of the fear-mongering that happens about AGI. They provide reasons for why a paperclip machine isn't going to turn all matter into paperclips. They're important because they refute big parts of theories from thinkers like Bostrom. That's important because time, money, and effort are being spent in the course of taking Bostrom's theories seriously, even though we have good reasons to think they're not true. That could be time, money, and effort spent on more important problems like figuring out how creativity works. That's a problem which, if solved, would actually lead to the creation of an AGI.

Calling unanswered criticisms *risible* seems irrational to me. Sure unexpected answers could be funny the first time you hear them (though this just sounds like ppl being mean, not like it was the punchline to some untold joke) but if someone makes a serious point and you dismiss it because you think it's silly, then you're either irrational or you have a good, robust reason it's not true.

[...] and a real process of making one AGI would tend to take a long time and involve a lot of human intervention?

He doesn't claim this at all. From memory the full argument is in Ch7 of BoI (though it has dependencies on some/all of the content in the first 6 chapters, and some subtleties are elaborated on later in the book). He expressly deals with the case where an AGI can run like 20,000x faster than a human (i.e. arbitrarily fast). He also doesn't presume it needs to be raised like a human child or take the same resources/attention/etc.

Have you read much of BoI?

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-22T08:29:21.096Z · LW · GW

On the note of *qualia* (providing in case it helps)

DD says this in BoI when he first uses the word:

Intelligence in the general-purpose sense that Turing meant is one of a constellation of attributes of the human mind that have been puzzling philosophers for millennia; others include consciousness, free will, and meaning. A typical such puzzle is that of qualia (singular quale, which rhymes with ‘baalay’) – meaning the subjective aspect of sensations. So for instance the sensation of seeing the colour blue is a quale. Consider the following thought experiment. You are a biochemist with the misfortune to have been born with a genetic defect that disables the blue receptors in your retinas. Consequently you have a form of colour blindness in which you are able to see only red and green, and mixtures of the two such as yellow, but anything purely blue also looks to you like one of those mixtures. Then you discover a cure that will cause your blue receptors to start working. Before administering the cure to yourself, you can confidently make certain predictions about what will happen if it works. One of them is that, when you hold up a blue card as a test, you will see a colour that you have never seen before. You can predict that you will call it ‘blue’, because you already know what the colour of the card is called (and can already check which colour it is with a spectrophotometer). You can also predict that when you first see a clear daytime sky after being cured you will experience a similar quale to that of seeing the blue card. But there is one thing that neither you nor anyone else could predict about the outcome of this experiment, and that is: what blue will look like. Qualia are currently neither describable nor predictable – a unique property that should make them deeply problematic to anyone with a scientific world view (though, in the event, it seems to be mainly philosophers who worry about it).

and under "terminology" at the end of the chapter:

Quale (plural qualia) The subjective aspect of a sensation.

This is in Ch7 which is about AGI.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-16T20:36:24.275Z · LW · GW
> But maybe there could be something reasonably describable as a bayesian method. But I don't work with enough with non-bayesian philosophers to be able to immediately know how we are different, well enough to narrow in on it.

I don't know how you'd describe Bayesianism atm but I'll list some things I think are important context or major differences. I might put some things in quotes as a way to be casual but LMK if any part is not specific enough or ambiguous or whatever.

  • both CR and Bayesianism answer Qs about knowledge and judging knowledge; they're incompatible b/c they make incompatible claims about the world but overlap.
  • CR says that truth is objective
    • explanations are the foundation of knowledge, and it's from explanations that we gain predictive power
    • no knowledge is derived from the past; that's an illusion b/c we're already using pre-existing explanations as foundations
      • new knowledge can be created to explain things about the past we didn't understand, but that's new knowledge in the same way the original explanation was once new knowledge
        • e.g. axial tilt theory of seasons; no amount of past experience helped understand what's *really* happening, someone had to make a conjecture in terms of geometry (and maybe Newtonian physics too)
    • when we have two explanations for a single phenomenon they're either the same, both wrong, or one is "right"
      • "right" is different from "true" - this is where fallibilism comes in (note: I don't think you can talk about CR without talking about fallibilism; broadly they're synonyms)
        • taken to logical conclusions it means roughly that all our theories are wrong in an absolute sense and we'll discover more and more better explanations about the universe to explain it
          • this includes ~everything: anything we want to understand requires an explanation: quantum physics, knowledge creation, computer sciences, AGI, how minds work (which is actually the same general problem as AGI) - including human minds, economics, why people choose particular ice-cream flavors
          • DD suggests in *the beginning of infinity* that we should rename scientific theories scientific "misconceptions" because that's more accurate
        • anyone can be mistaken on anything
      • there are rational ways to choose *exactly one* explanation (or zero if none hold up)
    • if we have a reason that some explanation is false, then there is no amount of "support" which makes it less likely to be false (this is what is meant by 'criticism'). No objectively true thing has an objectively true reason that it's false.
      • so we should believe only those things for which there are no unanswered criticisms
        • this is why some CR ppl are insistent on finishing and concluding discussions - if two people disagree then one must have knowledge of why the other is wrong, or they're both wrong (or both don't know enough, etc)
          • to refuse to finish a discussion is either denying the counterparty the opportunity to correct an error (which was evidently important enough to start the discussion about) - this is anti-knowledge and irrational, *or* it's to deny that you have an error (or that the error can be corrected) which is also anti-knowledge and irrational.
          • there are maybe things to discuss about practicality but even if there are good reasons to drop conversations for practical purposes sometimes, it doesn't explain why it happens so much.

That was less focused on differences/incompatibilities than I had in mind originally, but hopefully it gives you some ideas.

> Is the bayesian method... trying always to understand things on the math/decision theory level? Confidently; deutsch is not doing that.

Unless it's maths/decision theory related, that's right. CR/Fallibilism is more about reasoning; e.g. an internal contradiction means an idea is wrong; there's 0 probability it's correct. Maybe someone alters the idea so it doesn't have a contradiction, which means it needs to be judged again.

> His understanding of AGI is utterly anthropomorphic

I don't think that's the case. I think his understanding/theories of AGI don't have anything to do with humans (besides that we'd create one - excluding aliens showing up or whatever). There's a separate explanation for why AGI isn't going to arise randomly e.g. out of a genetic ML algorithm.

> If that argument doesn't make sense to you, well that might mean that we've just identified something that bayesian/decision theoretic reasoning can do, that can't be done without it.

Well, we don't agree about fish, but whether it makes sense or not depends on your meaning. If you mean that I understand your reasoning, I think I do. If you mean that I think the reasoning is okay: maybe it follows from your principles, but I don't think it's *right*. Like I think there are issues with it such that the explanation and conclusion shouldn't be used.

ps: I realize that's a lot of text to dump all at once, sorry about that. Maybe it's a good idea to focus on one thing?

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-16T20:05:28.699Z · LW · GW
> I would be interested to know how Mirror Chamber strikes you though, I haven't tried to get non-bayesians to read it.

Will the Mirror Chamber explain what "anthropic measure" (or the anthropic measure function) is?

I ended up clicking through to this and I guess that the mirror chamber post is important but not sure if I should read something else first.

I started reading, and it's curious enough (and short enough) I'm willing to read the rest, but wanted to ask the above first.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-16T08:02:33.032Z · LW · GW
> [...] critrats [...] let themselves wholely believe probably wrong theories in the expectation that this will add up to a productive intellectual ecosystem

As someone who suspects you'd consider them a 'critrat', this feels wrong to me. I can't speak for other CR ppl, ofc, and some CR ppl aren't good at it (like any epistemology), but for me I don't think what you describe would add up to "a productive intellectual ecosystem".

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-16T08:00:15.114Z · LW · GW
In this case, I don't think he has practiced a method of holding multiple possible theories and acting with reasonable uncertainty over all them. That probably would sound like a good thing to do to most popperians, but they often seem to have the wrong attitudes about how (collective) induction happens and might not be prepared to do it;

I'm not sure what this would look like in practice. If you have two competing theories and don't need to act on them, there's no issue. If they're not mutually exclusive, there's no issue. If the *specific* action you'd take is the same regardless of which theory is true, there's no issue. So it seems like the crux must be multiple competing, mutually exclusive theories which we need to act on.

In the crux case there are ways to deal with it so that you only *have* to act on one. Which method to use depends on your goals and time constraints. Some acceptable approaches: choose the theory that can be disproved most quickly, or the one that does the least damage if it's wrong, or choose to investigate the difference between the two (e.g. do research aimed at finding a criticism of one of them, ideally focused on the intersection - something that bears on both theories - rather than on something that could only affect one of them).
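To make that tie-breaking concrete, here's a minimal sketch in Python. This is my own illustration, not a canonical CR procedure; the names (`Theory`, `pick_theory_to_act_on`), the numeric estimates, and the rule itself are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Theory:
    name: str
    time_to_test: float     # rough guess: how long until it could plausibly be refuted
    damage_if_wrong: float  # rough guess: cost of acting on it if it turns out to be false

def pick_theory_to_act_on(a: Theory, b: Theory, deadline: float) -> Theory:
    """Pick one of two mutually exclusive theories to act on.

    If at least one could be tested before the deadline, act on the one that's
    more quickly disproved (so the disagreement gets resolved soonest);
    otherwise act on the one that does less damage if it's wrong.
    """
    if min(a.time_to_test, b.time_to_test) <= deadline:
        return a if a.time_to_test <= b.time_to_test else b
    return a if a.damage_if_wrong <= b.damage_if_wrong else b

# hypothetical usage:
fast_risky = Theory("quickly testable plan", time_to_test=2.0, damage_if_wrong=5.0)
slow_safe = Theory("hard-to-test plan", time_to_test=10.0, damage_if_wrong=1.0)
print(pick_theory_to_act_on(fast_risky, slow_safe, deadline=3.0).name)  # -> "quickly testable plan"
```

The point isn't the particular scoring rule; it's that acting under competing theories doesn't require assigning credences to them - a decision rule over testability and downside is enough.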

I think that to convince a Popperian their ideas are incomplete, you need to find an example of a problem the Bayesian method can deal with but CR can't.

A note on my use of 'Bayesian': sorry if that isn't the right term, btw. If it's not, I hope you still know what I mean when I use it.

Comment by Max Kaye (max-kaye) on misc raw responses to a tract of Critical Rationalism · 2020-08-16T07:50:35.241Z · LW · GW
I think it's fairly clear from this that he doesn't have solomonoff induction internalized, he doesn't know how many of his objection to bayesian metaphysics it answers.

I suspect that, for DD, it's not about *how many* but about *all*. If I come up with 10 reasons Bayesianism is wrong (so 10 criticisms), and 9 of them are answered adequately, the 1 that remains is as bad as the 10: *any* unanswered criticism is a reason not to believe an idea. So convincing DD (or any decent Popperian) that an idea is wrong can't rely on incomplete rebuttals; the idea needs to be *uncriticised* (answered criticisms don't count here, though those answers can themselves be criticised; that entire chain can be long, and all of it needs to be resolved). There are also ideas answering questions like "what happens when you get to an 'I don't know' point?" or "what happens with two competing ideas, both of which are uncriticised?"
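Here's a toy way to picture that rule. This is my own framing, not something from DD; answers are modelled (simplistically) as criticisms of criticisms, and the names are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    text: str
    criticisms: list = field(default_factory=list)  # criticisms of this idea (also Idea objects)

def unrefuted(idea: Idea) -> bool:
    """An idea stands only if every criticism of it has itself been refuted,
    i.e. no criticism of it currently stands. Answers to criticisms are
    modelled here as criticisms of those criticisms."""
    return all(not unrefuted(c) for c in idea.criticisms)

# hypothetical usage:
crit = Idea("criticism 1 of Bayesianism")
crit.criticisms.append(Idea("an answer to criticism 1"))  # the answer refutes the criticism
bayes = Idea("Bayesianism", criticisms=[crit, Idea("criticism 2, unanswered")])
print(unrefuted(bayes))  # False: one unanswered criticism is enough
```

Note how answering 9 of 10 criticisms changes nothing in this model - the idea only stands once every branch of the chain is resolved.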

Clarifying point: some ideas (like MWI, string theory, etc.) are very difficult to criticise by showing a contradiction with evidence, but the fact that two competing ideas exist means they're either compatible in a way we don't realise or they offer some criticisms of each other, even if we can't easily judge the quality of those criticisms at the time.

Note: I'm not a Bayesian; DD's book *The Beginning of Infinity* convinced me that Popper's foundation for epistemology (including the ideas that built on and improved it) was decisively better.

Comment by Max Kaye (max-kaye) on The Law of Least Effort Contributes to the Conjunction Fallacy · 2020-08-16T03:50:20.777Z · LW · GW
Evidence to the contrary, please?

here

Before October 2014, copyright law permitted use of a work for the purpose of criticism and review, but it did not allow quotation for other more general purposes. Now, however, the law allows the use of quotation more broadly. So, there are two exceptions to be aware of, one specifically for criticism and review and a more general exception for quotation. Both exceptions apply to all types of copyright material, such as books, music, films, etc.

https://www.copyrightuser.org/understand/exceptions/quotation/ - the first link on Google. There are more details about the conditions there, particularly what you'd have to show in order to prove infringement. Good luck ¯\_(ツ)_/¯

Quoting is a copyright violation in every jurisdiction I know of, if it's done en masse.

"en masse" is vague.

Wow, you know about a lot of different legal frameworks. How does copyright violation work in Tuvalu and Mauritius? I've always wondered.

-- general comments --

It's trivial to see that your idea of quoting is incomplete because most instances of quoting you see aren't copyright violations (like news, youtube commentary, academic papers, whatever).

However, you obviously care about copyright violations deeply, so I suggest you get in touch with google too; they are worse offenders.

https://webcache.googleusercontent.com/search?q=cache:1fkfDXctehAJ:https://www.lesswrong.com/+&cd=1&hl=en&ct=clnk&gl=au

Since you care about *COPYRIGHT INFRINGEMENT* and not *BEING CRITICISED* surely this blatant infringement of your copyright is a much larger priority. The probability of someone seeing material which is infringing your copyright is orders of magnitude larger on google than on a small random website.

---

Edit/update/mini-post-mortem: I made this post because of an emotional reaction to the post above it by @gjm, which I shouldn't have done. Some points were fine, but I was sarcastic ("Wow, you ...") and treated @gjm's ideas unfairly, e.g. by using language like "trivial" to make his ideas sound less reasonable than they might be (TBH IANAL so really it's dishonest of me to act with such certainty). Those statements were socially calibrated (to some degree) to try and either upset/annoy gjm or impact stuff around social status. Since I'd woken up recently (like less than 30min before posting) and was emotional I should have known better than to post those bits (maybe I should have avoided posting at all). There's also the last paragraph, "Since you care about ..." part, which at best is an uncharitable interpretation and at worst is putting words in gjm's mouth (which isn't okay).

For those reasons I'd like to apologise to gjm for those parts. I feel it'd be dishonest to remove them, so I'm adding this update instead.