Posts

AI origin question 2015-11-01T20:35:18.911Z
[LINK] Steven Pinker on "The false allure of group selection" 2012-06-19T20:14:15.842Z
[LINK] "Straight and crooked thinking," by Robert H. Thouless 2011-06-04T02:46:07.975Z
[LINK] Subculture roles (by Brad Hicks) 2011-05-18T03:00:43.252Z
Gödel and Bayes: quick question 2011-04-14T06:12:05.618Z
Gettier in Zombie World 2011-01-23T06:44:29.137Z

Comments

Comment by hairyfigment on Am I confused about the "malign universal prior" argument? · 2024-08-29T01:31:37.611Z · LW · GW

I don't see how any of it can be right. Getting one algorithm to output Spongebob wouldn't cause the SI to watch Spongebob; even a less silly claim in that vein would still be false. The Platonic agent would know the plan wouldn't work, and thus wouldn't do it.

Since no individual Platonic agent could do anything meaningful alone, and they plainly can't communicate with each other, they can only coordinate by means of reflective decision theory. That's fine, we'll just assume that's the obvious way for intelligent minds to behave. But then the SI works the same way, and knows the Platonic agents will think that way, and per RDT it refuses to change its behavior based on attempts to game the system. So none of this ever happens in the first place.

(This is without even considering the serious problems with assuming Platonic agents would share a goal to coordinate on. I don't think I buy it. You can't evolve a desire to come into existence, nor does an arbitrary goal seem to require it. Let me assure you, there can exist intelligent minds which don't want worlds like ours to exist.)

Comment by hairyfigment on Optimistic Assumptions, Longterm Planning, and "Cope" · 2024-07-18T08:29:07.698Z · LW · GW

https://arxiv.org/abs/1712.05812

It's directly about inverse reinforcement learning, but that should be strictly stronger than RLHF. It seems incumbent on those who disagree to explain why throwing away information here would be enough of a normative assumption (contrary to every story about wishes).

Comment by hairyfigment on TurnTrout's shortform feed · 2024-07-09T06:40:12.945Z · LW · GW

> this always helps in the short term,

You seem to have 'proven' that evolution would use that exact method if it could, since evolution never looks forward and must always build on prior adaptations which provided immediate gain. By the same token, of course, evolution doesn't have any knowledge, but if "knowledge" corresponds to any simple changes it could make, then those changes will obviously happen.

Comment by hairyfigment on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-15T20:45:11.733Z · LW · GW

Well that's disturbing in a different way. How often do they lose a significant fraction of their savings, though? How many are unvaccinated, which isn't the same as loudly complaining about the shot's supposed risks? The apparent lack of Flat Earthers could point to them actually expecting reality to conform to their words, and having a limit on the silliness of the claims they'll believe. But if they aren't losing real money, that could point to it being a game (or a cost of belonging).

Comment by hairyfigment on I was raised by devout Mormons, AMA [&|] Soliciting Advice · 2024-03-14T00:59:05.521Z · LW · GW

The answer might be unhelpful due to selection bias, but I'm curious to learn your view of QAnon. Would you say it works like a fandom for people who think they aren't allowed to read or watch fiction? I get the strong sense that half the appeal - aside from the fun of bearing false witness - is getting to invent your own version of how the conspiracy works. (In particular, the pseudoscientific FNAF-esque idea at the heart of it isn't meant to be believed, but to inspire exegesis like that on the Kessel Run.) This would be called fanfic or "fanwank" if they admitted it was based on a fictional setting. Is there something vital you think I'm missing?

Comment by hairyfigment on Most People Don't Realize We Have No Idea How Our AIs Work · 2023-12-25T04:16:46.367Z · LW · GW

There have, in fact, been numerous objections to genetically engineered plants and, by implication, everything in the second category. You might not realize how wary the public is/was of engineered biology, on the grounds that nobody understood how it worked in terms of exact internal details. The reply that sort of convinced people - though it clearly didn't calm every fear about new biotech - wasn't that we understood it in that sense. It was that humanity had been genetically engineering plants via cultivation for literal millennia, so empirical facts allowed us to rule out many potential dangers.

Comment by hairyfigment on The Limits of Artificial Consciousness: A Biology-Based Critique of Chalmers’ Fading Qualia Argument · 2023-12-18T01:49:33.127Z · LW · GW

> Note that it requires the assumption that consciousness is material

Plainly not, assuming this is the same David J. Chalmers.

Comment by hairyfigment on Evaluating the historical value misspecification argument · 2023-10-05T23:21:30.629Z · LW · GW

This would make more sense if LLMs were directly selected for predicting preferences, which they aren't. (RLHF tries to bridge the gap, but this apparently breaks GPT's ability to play chess - though I'll grant the surprise here is that it works at all.) LLMs are primarily selected to predict human text or speech. Now, I'm happy to assume that if we gave humans a D&D-style boost to all mental abilities, each of us would create a coherent set of preferences from our inconsistent desires, which vary and may conflict at a given time even within an individual. Such augmented humans could choose to express their true preferences, though they still might not. If we gave that idealized solution to LLMs, it would just boost their ability to predict what humans or augmented humans would say. The augmented-LLM wouldn't automatically care about the augmented-human's true values.

While we can loosely imagine asking LLMs to give the commands that an augmented version of us would give, that seems to require actually knowing how to specify how a D&D ability-boost would work for humans - which will only resemble the same boost for AI at an abstract mathematical level, if at all. It seems to take us back to the CEV problem of explaining how extrapolation works. Without being able to do that, we'd just be hoping a better LLM would look at our inconsistent use of words like "smarter," and pick the out-of-distribution meaning we want, for cases which have mostly never existed. This is a lot like what "Complexity of Wishes" was trying to get at, as well as the longstanding arguments against CEV. Vaniver's comment seems to point in this same direction.

Now, I do think recent results are some evidence that alignment would be easier for a Manhattan Project to solve. It doesn't follow that we're on track to solve it.

Comment by hairyfigment on Meta Questions about Metaphilosophy · 2023-09-01T08:15:10.664Z · LW · GW

The classification heading "philosophy," never mind the idea of meta-philosophy, wouldn't exist if Aristotle hadn't tutored Alexander the Great. It's an arbitrary concept which implicitly assumes we should follow the aristocratic-Greek method of sitting around talking (or perhaps giving speeches to the Assembly in Athens). Moreover, people smarter than either of us have tried this dead-end method for a long time with little progress. Decision theory makes for a better framework than Kant's ideas; you've made progress not because you're smarter than Kant, but because he was banging his head against a brick wall. So to answer your question: if you've given us any reason to think the approach of looking for "meta-philosophy" is promising, or that it's anything but a proven dead-end, I don't recall it.

Comment by hairyfigment on Ten Thousand Years of Solitude · 2023-08-21T03:19:31.206Z · LW · GW

Oddly enough, not all historians are total bigots, and my impression is that the anti-Archipelago version of the argument existed in academic scholarship - perhaps not in the public discourse - long before JD. E.g. McNeill published a book about fragmentation in 1982, whereas GG&S came out in 1997.

Comment by hairyfigment on Ten Thousand Years of Solitude · 2023-08-20T22:53:23.534Z · LW · GW

Perhaps you could see my point better in the context of Marxist economics? Do you know what I mean when I say that the labor theory of value doesn't make any new predictions, relative to the theory of supply and demand? We seldom have any reason to adopt a theory if it fails to explain anything new, and its predictive power in fact seems inferior to that of a rival theory. That's why the actual historians here are focusing on details which you consider "not central" - because, to the actual scholars, Diamond is in fact cherry-picking topics which can't provide any good reason to adopt his thesis. His focus is kind of the problem.

Comment by hairyfigment on Ten Thousand Years of Solitude · 2023-08-20T08:51:10.267Z · LW · GW

> The first chapter that's most commonly criticized is the epilogue - where Diamond puts forth a potential argument for why Europe, and not China, was the major colonial power. This argument is not central to the thesis of the book in any way,

It is, though, because that's a much harder question to answer. Historians think they can explain why no American civilization conquered Europe, and why the reverse was more likely, without appeal to Diamond's thesis. This renders it scientifically useless, and leaves us without any clear reason to believe it, unless he could take his thesis farther.

The counter-Diamond argument seems to be the opposite of Scott Alexander's "Archipelago" idea. Constant war between similar cultures led to the development and spread of highly efficient government or state institutions, especially when it came to war. Devereaux writes, "Any individual European monarch would have been wise to pull the brake on these changes, but given the continuous existential conflict in Europe no one could afford to do so and even if they did, given European fragmentation, the revolutions – military, industrial or political – would simply slide over the border into the next state."

Comment by hairyfigment on Summary of and Thoughts on the Hotz/Yudkowsky Debate · 2023-08-18T00:48:22.252Z · LW · GW

I do see selves, or personal identity, as closely related to goals or values. (Specifically, I think the concept of a self would have zero content if we removed everything based on preferences or values; roughly 100% of humans who've ever thought about the nature of identity have said it's more like a value statement than a physical fact.) However, I don't think we can identify the two. Evolution is technically an optimization process, and yet has no discernible self. We have no reason to think it's actually impossible for a 'smarter' optimization process to lack identity, and yet form instrumental goals such as preventing other AIs from hacking it in ways which would interfere with its ultimate goals. (The latter are sometimes called "terminal values.")

Comment by hairyfigment on What The Lord of the Rings Teaches Us About AI Alignment · 2023-08-01T21:14:15.927Z · LW · GW

So, what does LotR teach us about AI alignment? I thought I knew what you meant until near the end, but I actually can't extract any clear meaning from your last points. Have you considered stating your thesis in plain English?

Comment by hairyfigment on The UAP Disclosure Act of 2023 and its implications · 2023-07-22T07:30:43.419Z · LW · GW

You left out, 'People naively thinking they can put this discussion to bed by legally requiring disclosure,' though politicians would likely know they can't stop conspiracy theorists just by proving there's no conspiracy.

Comment by hairyfigment on All AGI Safety questions welcome (especially basic ones) [July 2023] · 2023-07-22T07:28:15.351Z · LW · GW

Just as humans find it useful to kill a great many bacteria, an AGI would want to stop humans from e.g. creating a new, hostile AGI. In fact, it's hard to imagine an alternative which doesn't require a lot of work, because we know that in any large enough group of humans, one of us will take the worst possible action. As we are now, even if we tried to make a deal to protect the AI's interests, we'd likely be unable to stop someone from breaking it.

I like to use the silly example of an AI transcending this plane of existence, as long as everyone understands this idea appears physically impossible. If somehow it happened anyway, that would mean there existed a way for humans to affect the AI's new plane of existence, since we built the AI, and it was able to get there. This seems to logically require a possibility of humans ruining the AI's paradise. Why would it take that chance? If killing us all is easier than either making us wiser or watching us like a hawk, why not remove the threat?

I'm not sure I understand your point about massive resource use. If you mean that SI would quickly gain control of so many stellar resources that a new AGI would be unable to catch up, it seems to me that:

1. people would notice the Sun dimming (or much earlier signs), panic, and take drastic action like creating a poorly-designed AGI before the first one could be assured of its safety, if it didn't stop us;

2. keeping humans alive while harnessing the full power of the Sun seems like a level of inconvenience no SI would choose to take on, if its goals weren't closely aligned with our own.

Comment by hairyfigment on Why it's so hard to talk about Consciousness · 2023-07-20T08:52:46.458Z · LW · GW

Have you actually seen orthonormal's sequence on this exact argument? My intuitions say the "Martha" AI described therein, which imitates "Mary," would in fact have qualia; this suffices to prove that our intuitions are unreliable (unless you can convincingly argue that some intuitions are more equal than others). Moreover, it suggests a credible answer to your question: integration is necessary in order to "understand experience" because we're talking about a kind of "understanding" which necessarily stems from the internal workings of the system, specifically the interaction of the "conscious" part with the rest.

(I do note that the addendum to the sequence's final post should have been more fully integrated into the sequence from the start.)

Comment by hairyfigment on [deleted post] 2023-07-19T23:04:33.015Z

The obvious reply would be that ML now seems likely to produce AGI, perhaps alongside minor new discoveries, in a fairly short time. (That at least is what EY now seems to assert.) Now, the grandparent goes far beyond that, and I don't think I agree with most of the additions. However, the importance of ML sadly seems well-supported.

Comment by hairyfigment on UFO Betting: Put Up or Shut Up · 2023-06-13T10:13:23.558Z · LW · GW

Hesitant to bet while sick, but I'll offer max bet $20k at 25:1.

Comment by hairyfigment on What can we learn from Bayes about reasoning? · 2023-05-05T22:18:09.941Z · LW · GW

The basic definition of evidence is more important than you may think. You need to start by asking what different models predict. Related: it is often easier to show how improbable the evidence is according to the scientific model, than to get any numbers at all out of your alternative theory.
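
A toy comparison in that spirit (the numbers are entirely made up, just to show the shape of the calculation):

    # Toy likelihood comparison; the numbers are invented for illustration only.
    # Evidence E counts against model A only relative to what a rival model B
    # predicts: a low P(E|A) by itself proves nothing until the alternative
    # theory produces its own number.
    p_e_given_a = 0.02   # the scientific model: E would be fairly surprising
    p_e_given_b = 0.01   # the alternative, once forced to commit to a number
    likelihood_ratio = p_e_given_b / p_e_given_a
    print(f"Likelihood ratio B:A = {likelihood_ratio:.2f}")  # < 1, so E favours A after all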

Comment by hairyfigment on The Rocket Alignment Problem, Part 2 · 2023-05-01T22:26:36.492Z · LW · GW

Even focusing on that doesn't make your claim appear sensible, because such laws will neither happen soon enough, nor in a sufficiently well-aimed fashion, without work from people like the speaker. You also implied twice that tech CEOs would take action on their own - the quote is in the grandparent - and in the parent you act like you didn't make that bizarre claim.

Comment by hairyfigment on The Rocket Alignment Problem, Part 2 · 2023-05-01T20:38:38.873Z · LW · GW

> Instead it just means that Bob shouldn't rely on his company doing the fastest and easiest thing and having it turn out fine. Instead Bob should expect to make sacrifices, either burning down a technical lead or operating in (or helping create) a regulatory environment where the fastest and easiest option isn't allowed.

The above feels so bizarre that I wonder if you're trying to reach Elon Musk personally. If so, just reach out to him. If we assume there's no self-reference paradox involved, we can safely reject your proposed alternatives as obviously impossible; they would have zero credibility even if AI companies weren't in an arms race, which appears impossible to stop from the inside unless all the CEOs involved can meet at Bohemian Grove.

Comment by hairyfigment on My Assessment of the Chinese AI Safety Community · 2023-04-27T04:19:58.408Z · LW · GW

See, that makes it sound like my initial response to the OP was basically right, and you don't understand the argument being made here. At least one Western reading of these new guidelines was that, if they meant anything, then the bureaucratic obstacle they posed for AGI would greatly reduce the threat thereof. This wouldn't matter if people were happy to show initiative - but if everyone involved thinks volunteering is stupid, then whose job is it to make sure the official rules against a competitive AI project won't stop it from going forward? What does that person reliably get for doing the job?

Comment by hairyfigment on My Assessment of the Chinese AI Safety Community · 2023-04-26T18:48:27.813Z · LW · GW

All of that makes sense except the inclusion of "EA," which sounds backwards. I highly doubt Chinese people object to the idea of doing good for the community, so why would they object to helping people do more good, according to our best knowledge?

Comment by hairyfigment on Contra Yudkowsky on AI Doom · 2023-04-24T02:07:42.436Z · LW · GW

I note in passing that the elephant brain is not only much larger than any human brain, but also has many more neurons. Since I've no reason to believe the elephant brain is maximally efficient, making the same claim for our brains should require much more evidence than I'm seeing.

Comment by hairyfigment on Could a superintelligence deduce general relativity from a falling apple? An investigation · 2023-04-23T20:34:20.299Z · LW · GW

What are you trying to argue for? I'm getting stuck on the oversimplified interpretation you give for the quote. In the real world, smart people such as Leibniz raised objections to Newton's mechanics at the time, objections which sound vaguely Einsteinian and not dependent on lots of data. The "principle of sufficient reason" is about internal properties of the theory, similar to Einstein's argument for each theory of relativity. (Leibniz's argument could also be given a more Bayesian formulation, saying that if absolute position in space is meaningful, then a full description of the 'initial state' of the universe needs to contain additional complexity which has zero predictive value in order to specify that location.) Einstein, in the real world, expressed confidence in general relativity prior to experimental confirmation. What Eliezer is talking about seems different in degree, but not in kind.

Comment by hairyfigment on Pausing AI Developments Isn't Enough. We Need to Shut it All Down · 2023-04-08T10:12:53.077Z · LW · GW

Out of curiosity, what do you plan to do when people keep bringing up Penrose?

Comment by hairyfigment on Monthly Roundup #5: April 2023 · 2023-04-03T23:05:05.177Z · LW · GW

Pretty sure that doesn't begin to address the reasons why a paranoid dictator might invade Taiwan, and indeed would undo a lot of hard work spent signaling that the US would defend Taiwan without committing us to nuclear war.

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-31T22:00:30.032Z · LW · GW

Pretty sure this is my last comment, because what you just quoted about soundness is, in fact, a direct consequence of Löb's Theorem. For any proposition P, Löb's Theorem says that □(□P→P)→□P. Let P be a statement already disproven, e.g. "2+2=5". Asserting soundness for P means proving □P→P, which gives □(□P→P), so Löb yields □P. We already had □¬P, so now we have □(¬P & P), which is what inconsistency means. Again, it seemed like you understood this earlier.
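
Spelled out as a sketch, using the same notation and treating the soundness schema for this one P as the assumption under test:

  1. Assume soundness for P: □P→P.
  2. The theory proves its own theorems, so □(□P→P).
  3. Löb's Theorem: □(□P→P)→□P, hence □P.
  4. But P was already disproven, so □¬P, and together □(¬P & P): inconsistency.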

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-31T19:55:28.169Z · LW · GW

https://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem

A coherent formal system can't fully define truth for its own language. It can give more limited definitions for the truth of some statement, but often this is best accomplished by just repeating the statement in question. (That idea is also due to Tarski: 'snow is white' is true if and only if snow is white.) You could loosely say (very loosely!) that a claim, in order to mean anything, needs to point to its own definition of what it would mean for that claim to be true. Any more general definition of truth, for a given language, needs to appeal to concepts which can't be expressed within that language, in order to avoid a self-reference paradox.

So, there's no comparison between applying the concept in your last paragraph to individual theorems you've already proven, like "2+2=4" - "my system thinks it has proved it, and believes that's good enough to act on", and the application to all hypothetical theorems you might prove later, like "2+2=5". Those two ideas have different meanings - the latter can't even be expressed within the language in a finite way, though it could be an infinite series of theorems or new axioms of the form □P→P - and they have wildly different consequences. You seem to get this when it comes to hypothetical proofs that eating babies is mandatory.

Comment by hairyfigment on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T01:04:19.471Z · LW · GW

Here's some more:

> A majority (55%) of Americans are now worried at least somewhat that artificially intelligent machines could one day pose a risk to the human race’s existence. This marks a reversal from Monmouth’s 2015 poll, when a smaller number (44%) was worried and a majority (55%) was not.

https://www.monmouth.edu/polling-institute/reports/monmouthpoll_us_021523/

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-31T00:53:49.819Z · LW · GW

The first part of the parent comment is perfectly true for a specific statement - obviously not for all P - and in fact this was the initial idea which inspired the theorem. (For the normal encoding, "This statement is provable within PA," is in fact true for this exact reason.) The rest of your comment suggests you need to more carefully distinguish between a few concepts:

  1. what PA actually proves
  2. what PA would prove if it assumed □P→P for all claims P
  3. what is actually true, which (we believe) includes category 1 but emphatically not 2.

Comment by hairyfigment on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T00:28:28.327Z · LW · GW

This may be what I was thinking of, though the data is more ambiguous or self-contradictory: https://www.vox.com/future-perfect/2019/1/9/18174081/fhi-govai-ai-safety-american-public-worried-ai-catastrophe

Comment by hairyfigment on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T23:44:45.061Z · LW · GW

I'll look for the one that asked about the threat to humanity, and broke down responses by race and gender. In the meantime, here's a poll showing general unease and bipartisan willingness to legally restrict the use of AI: https://web.archive.org/web/20180109060531/http://www.pewinternet.org/2017/10/04/automation-in-everyday-life/

Plus:

> A SurveyMonkey poll on AI conducted for USA TODAY also had overtones of concern, with 73% of respondents saying that they would prefer if AI was limited in the rollout of newer tech so that it doesn’t become a threat to humans.

> Meanwhile, 43% said smarter-than-humans AI would do “more harm than good,” while 38% said it would result in “equal amounts of harm and good.”

I do note, on the other side, that the general public seems more willing to go Penrose, sometimes expressing or implying a belief in quantum consciousness unprompted. That part is just my own impression.

Comment by hairyfigment on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T21:06:17.179Z · LW · GW

> The average person on the street is even further away from this I think.

This contradicts the existing polls, which appear to say that everyone outside of your subculture is much more concerned about AGI killing everyone. It looks like if it came to a vote, delaying AGI in some vague way would win by a landslide, and even Eliezer's proposal might win easily.

Comment by hairyfigment on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T21:00:29.809Z · LW · GW

It would've been even better for this to happen long before the year of the prediction mentioned in this old blog-post, but this is better than nothing.

Comment by hairyfigment on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T20:07:08.523Z · LW · GW

> Because the United Nations is a body chiefly concerned with enforcing international treaties, I imagine it would be incentivized to support arguments in favor of increasing its own scope and powers.

You imagine falsely, because your premise is false. Not only is the UN not a body in that sense; its actions are largely controlled by a "Security Council" of powerful nations which try to serve their own interests (modulo hypotheticals about one of their governments being captured by a mad dog) and have no desire to serve the interests of the UN as such. This is mostly by design. We created the UN to prevent world wars, hence it can't act on its own to start a world war.

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-30T19:51:49.052Z · LW · GW

I don't know that I follow. The question, here and in the context of Löb's Theorem, is about hypothetical proofs. Do you trust yourself enough to say that, in the hypothetical where you experience a proof that eating babies is mandatory, that would mean eating babies is mandatory?

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-30T19:47:16.428Z · LW · GW

I don't even understand why you're writing □(□P→P)→□P→P, unless you're describing what the theory can prove. The last step, □P→P, isn't valid in general; that's the point of the theorem! If you're describing what formally follows from full self-trust, from the assumption that □P→P for every P, then yes, the theory proves lots of false claims, one of which is a denial that it can prove 2+2=5.

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-30T05:39:12.357Z · LW · GW

If you're asking how we get to a formal self-contradiction, replace "False" with a statement the theory already disproves, like "0 does not equal 0." I don't understand your conclusion above; the contradiction comes because the theory already proves ¬False and now proves False, so the claim about "¬□False" seems like a distraction.

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-30T05:22:55.831Z · LW · GW

Do you also believe that if you could "prove" eating babies was morally required, eating babies would be morally required? PA obviously believes Löb's theorem itself, and indeed proves the soundness of all its actual proofs, which is what I said above. What PA doesn't trust is hypothetical proofs.

Comment by hairyfigment on Why do the Sequences say that "Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent."? · 2023-03-28T17:56:01.852Z · LW · GW

How do you interpret "soundness"? It's being used to mean that a proof of X implies X, for any statement X in the formal language of the theory. And yes, Löb's Theorem directly shows that PA cannot prove its own soundness for any set of statements save a subset of its own theorems.

Comment by hairyfigment on You Don't Exist, Duncan · 2023-03-04T23:42:40.310Z · LW · GW

Go ahead and test the prediction from the start of that thread, if you like, and verify that random people on the street will often deny the existence of the other two types. (The prediction also says not everyone will deny the same two.) You already know that NTs - asked to imagine maximal, perfect goodness - will imagine someone who gets upset about having the chance to save humanity by suffering for a few days, but who will do it anyway if Omega tells him it can't be avoided.

Comment by hairyfigment on [Link] A community alert about Ziz · 2023-02-25T08:01:21.102Z · LW · GW

It sure sounds like you think outsiders would typically have the "common sense" to avoid Ziz. What do you think such an outsider would make of this comment?

Comment by hairyfigment on [Link] A community alert about Ziz · 2023-02-24T20:55:28.115Z · LW · GW

There's this guy Michael Vassar who strikes me - from afar - as a failed cult leader, and Ziz as a disciple of his who took some followers in a different direction. Even before this new information, I thought her faith sounded like a breakaway sect of the Church of Asmodeus.

Michael Vassar was one of the inspirations for Eliezer's Professor Quirrell, but otherwise seems to have little influence.

Comment by hairyfigment on Rationality-related things I don't know as of 2023 · 2023-02-11T22:39:03.968Z · LW · GW

While it's arguably good for you to understand the confusion which led to it, you might want to actually just look up Solomonoff Induction now.

Comment by hairyfigment on Rationality-related things I don't know as of 2023 · 2023-02-11T09:46:03.859Z · LW · GW

> Occam's razor. Is it saying anything other than P(A) >= P(A & B)?

Yes, this is the same as the argument for (the abstract importance of) Solomonoff Induction. (Though I guess you might not find it convincing.)

We have an intuitive sense that it's simpler to say the world keeps existing when you turn your back on it. Likewise, it's an intuitively simpler theory to say the laws of physics will continue to hold indefinitely, than to say the laws will hold up until February 12, 2023 at midnight Greenwich Mean Time. The law of probability which you cited only captures a small part of this idea, since it says that last theory has at least as much probability as the 'simple' version. We could add a rule saying nearly all of that probability accrues to the model where the laws keep holding, but what's the general form of that rule?

Occam's original formulation about not multiplying entities doesn't help us much, as we could read this to say we shouldn't assume the world keeps existing when unobserved. That's the opposite of what we want. Newton's version was arguably better, talking about "properties" which should be tentatively assumed to hold generally when we can't find any evidence to the contrary, but then we could reasonably ask what "property" means.

SI comes from the idea that we should look for the minimum message length which predicts the data, and we could see SI in particular as an attempt to solve the grue-bleen problem. The "naturalized induction" school says this is technically still wrong, but it represents a major advance.
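
As a toy sketch of that minimum-message-length idea (my own illustration, nothing like Solomonoff's actual construction, and the 8-bits-per-character encoding is an arbitrary stand-in for program length):

    # Toy simplicity prior: weight each hypothesis by 2^-(description length in
    # bits), then renormalize. The clause "...until 2023-02-12" adds bits but no
    # predictive power, so the indefinite version absorbs nearly all the probability.
    hypotheses = {
        "laws hold indefinitely": "laws hold",
        "laws hold until 2023-02-12": "laws hold until 2023-02-12",
    }

    def bits(description: str) -> int:
        return 8 * len(description)  # crude stand-in for program length

    raw = {name: 2.0 ** -bits(desc) for name, desc in hypotheses.items()}
    total = sum(raw.values())
    for name, weight in raw.items():
        print(f"{name}: {weight / total:.3e}")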

Comment by hairyfigment on You Don't Exist, Duncan · 2023-02-04T04:43:39.063Z · LW · GW

Except, if you Read The Manual, you might conclude that in fact those people also can't understand that you exist.

Comment by hairyfigment on Covid 12/22/22: Reevaluating Past Options · 2022-12-23T02:52:45.809Z · LW · GW

Stop lying about observed risk of death by ethnicity.

Comment by hairyfigment on Covid 12/15/22: China’s Wave Begins · 2022-12-16T03:18:15.812Z · LW · GW

Well, current events seem to have confirmed that China couldn't keep restrictions in place indefinitely, and the fact that they even tried - together with the cost of stopping - suggests that it would've been a really good idea to protect their people using the best vaccine. China could presumably have just stuck it in everyone by force of law. What am I missing here?