Posts

Pittsburgh meetup Nov. 20 2010-11-16T21:03:42.630Z
Bay Area Meetup Saturday 6/12 2010-06-09T18:18:17.750Z
Pittsburgh Meetup: Saturday 9/12, 6:30PM, CMU 2009-09-10T03:06:21.614Z
Pittsburgh Meetup: Survey of Interest 2009-08-27T16:18:27.792Z

Comments

Comment by Nick_Tarleton on Preference Inversion · 2025-01-03T20:20:09.433Z · LW · GW

Let's look at preference for eating lots of sweets, for example. Society tries to teach us not to eat too much sweets because it's unhealthy, and from the perspective of someone who likes eating sweets, this often feels coercive. Your explanation applied here would be that upon reflection, people will decide "Actually, eating a bunch of candy every day is great" -- and no doubt, to a degree that is true, at least with the level of reflection that people actually do.

However when I decided to eat as much sweet as I wanted, I ended up deciding that sweets were gross, except in very small amounts or as a part of extended exercise where my body actually needs the sugar. What's happening here is that society has a bit more wisdom than the candy loving kid, tries clumsily to teach the foolish kid that their ways are wrong and they'll regret it, and often ends up succeeding more in constraining behavior than integrating the values in a way that the kid can make sense of upon reflection.

The OP addresses cases like this:

One thing that can cause confusion here - by design - is that perverted moralities are stabler if they also enjoin nonperversely good behaviors in most cases. This causes people to attribute the good behavior to the system of threats used to enforce preference inversion, imagining that they would not be naturally inclined to love their neighbor, work diligently for things they want, and rest sometimes. Likewise, perverted moralities also forbid many genuinely bad behaviors, which primes people who must do something harmless but forbidden to accompany it with needlessly harmful forbidden behaviors, because that's what they've been taught to expect of themselves.

I agree that the comment you're replying to is (narrowly) wrong (if understanding 'prior' as 'temporally prior'), because someone might socially acquire a preference not to overeat sugar before they get the chance to learn they don't want to overeat sugar. ISTM this is repaired by comparing not to '(temporally) prior preference' but something like 'reflectively stable preference absent coercive pressure'.

Comment by Nick_Tarleton on Benito's Shortform Feed · 2025-01-03T01:47:51.149Z · LW · GW

I can easily imagine an argument that: SBF would be safe to release in 25 years, or for that matter tomorrow, not because he'd be decent and law-abiding, but because no one would trust him and the only crimes he's likely to (or did) commit depend on people trusting him. I'm sure this isn't entirely true, but it does seem like being world-infamous would have to mitigate his danger quite a bit.

More generally — and bringing it back closer to the OP — I feel interested in when, and to what extent, future harms by criminals or norm-breakers can be prevented just by making sure that everyone knows their track record and can decide not to trust them.

Comment by Nick_Tarleton on Comment on "Death and the Gorgon" · 2025-01-01T23:31:01.547Z · LW · GW

Though — I haven't read all of his recent novels, but I think — none of those are (for lack of a better word) transhumanist like Permutation City or Diaspora, or even Schild's Ladder or Incandescence. Concretely: no uploads, no immortality, no artificial minds, no interstellar civilization. I feel like this fits the pattern, even though the wildness of the physics doesn't. (And each of those four earlier novels seems successively less about the implications of uploading/immortality/etc.)

Comment by Nick_Tarleton on Double's Shortform · 2025-01-01T02:12:14.493Z · LW · GW

In practice, it just requires hardware with limited functionality and physical security — hardware security modules exist.

An HSM-analogue for ML would be a piece of hardware that can have model weights loaded into its nonvolatile memory, can perform inference, but doesn't provide a way to get the weights out. (If it's secure enough against physical attack, it could also be used to run closed models on a user's premises, etc.; there might be a market for that.)
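
For concreteness, a minimal sketch of the host-side interface such a device might expose (the Python framing and all names here are hypothetical, not any real product's API):

```python
class InferenceHSM:
    """Hypothetical host-side API for an HSM-like inference device.

    Weights can be written into the device's nonvolatile memory and used for
    inference, but no call exists to read them back out.
    """

    def load_weights(self, encrypted_blob: bytes) -> None:
        """Write model weights into tamper-resistant storage (write-only)."""
        raise NotImplementedError

    def infer(self, tokens: list[int]) -> list[float]:
        """Run a forward pass inside the device; only the outputs leave it."""
        raise NotImplementedError

    def attest(self) -> bytes:
        """Return a signed statement of firmware version and loaded-model hash."""
        raise NotImplementedError

    # Deliberately absent: any export_weights()-style method.
```

The security property lives in what the interface omits, plus the physical tamper resistance backing that omission.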

Comment by Nick_Tarleton on 2025 Prediction Thread · 2024-12-30T04:41:54.188Z · LW · GW

This doesn't work. (Recording is Linux Firefox; same thing happens in Android Chrome.)

An error is logged when I click a second time (and not when I click on a different probability):

[GraphQL error]: Message: null value in column "prediction" of relation "ElicitQuestionPredictions" violates not-null constraint, Location: line 2, col 3, Path: MakeElicitPrediction instrument.ts:129:35

Comment by Nick_Tarleton on 2025 Prediction Thread · 2024-12-30T03:41:53.508Z · LW · GW

How can I remove an estimate I created with an accidental click? (Said accidental click is easy to make on mobile, especially because the way reactions work there has habituated me to tapping to reveal hidden information and not expecting doing so to perform an action.)

Comment by Nick_Tarleton on What happens next? · 2024-12-29T16:59:49.832Z · LW · GW

If specifically with IQ, feel free to replace the word with "abstract units of machine intelligence" wherever appropriate.

By calling it "IQ", you were (EDIT: the creator of that table was) saying that gpt4o is comparable to a 115 IQ human, etc. If you don't intend that claim, if that replacement would preserve your meaning, you shouldn't have called it IQ. (IMO that claim doesn't make sense — LLMs don't have human-like ability profiles.)

Comment by Nick_Tarleton on What happens next? · 2024-12-29T16:46:43.189Z · LW · GW

Learning on-the-fly remains, but I expect some combination of sim2real and muZero to work here.

Hmm? sim2real AFAICT is an approach to generating synthetic data, not to learning. MuZero is a system that can learn to play a bunch of games, with an architecture very unlike LLMs. This sentence doesn't typecheck for me; what way of combining these concepts with LLMs are you imagining?

Comment by Nick_Tarleton on shortplav · 2024-12-28T22:21:31.448Z · LW · GW

I don't think it much affects the point you're making, but the way this is phrased conflates 'valuing doing X oneself' and 'valuing that X exist'.

Comment by Nick_Tarleton on Probability of death by suicide by a 26 year old · 2024-12-15T03:39:17.431Z · LW · GW

Among 'hidden actions OpenAI could have taken that could (help) explain his death', I'd put harassment well above murder.

Of course, the LessWrong community will shrug it off as a mere coincidence because computing the implications is just beyond the comfort level of everyone on this forum.

Please don't do this.

Comment by Nick_Tarleton on daijin's Shortform · 2024-12-10T15:46:03.834Z · LW · GW

Multiple democracies do have or have had wealth taxes.

Comment by Nick_Tarleton on Hazard's Shortform Feed · 2024-12-08T23:02:40.952Z · LW · GW

I've gotten things from Michael's writing on Twitter, but also wasn't distinguishing him/Ben/Jessica when I wrote that comment.

Comment by Nick_Tarleton on Sapphire Shorts · 2024-12-08T18:24:01.834Z · LW · GW

I can attest to something kind of like this; in mid-late 2020, I

  • already knew Michael (but had been out of touch with him for a while) and was interested in his ideas (but hadn't seriously thought about them in a while)
  • started doing some weird intense introspection (no drugs involved) that led to noticing some deeply surprising things & entering novel sometimes-disruptive mental states
  • noticed that Michael/Ben/Jessica were talking about some of the same things I was picking up on, and started reading & thinking a lot more about their online writing
    • (IIRC, this noticing was not entirely conscious — to some extent it was just having a much stronger intuition that what they were saying was interesting)
  • didn't directly interact with any of them during this period, except for one early phone conversation with Ben which helped me get out of a very unpleasant state (that I'd gotten into by, more or less, decompartmentalizing some things about myself that I was unprepared to deal with)

Comment by Nick_Tarleton on Hazard's Shortform Feed · 2024-12-08T17:44:03.136Z · LW · GW

I have understood and become convinced of some of Michael's/Ben's/Jessica's stances through a combination of reading their writing and semi-independently thinking along similar lines, during a long period of time when I wasn't interacting with any of them, though I have interacted with all of them before and since.

Comment by Nick_Tarleton on Hazard's Shortform Feed · 2024-12-08T17:41:16.450Z · LW · GW

... those posts are saying much more specific things than 'people are sometimes hypocritical'?

"Can crimes be discussed literally?":

  • some kinds of hypocrisy (the law and medicine examples) are normalized
  • these hypocrisies are / the fact of their normalization is antimemetic (OK, I'm to some extent interpolating this one based on familiarity with Ben's ideas, but I do think it's both implied by the post, and relevant to why someone might think the post is interesting/important)
  • the usage of words like 'crime' and 'lie' departs from their denotation, to exclude normalized things
  • people will push back in certain predictable ways on calling normalized things 'crimes'/'lies', related to the function of those words as both description and (call for) attack
  • "There is a clear conflict between the use of language to punish offenders, and the use of language to describe problems, and there is great need for a language that can describe problems. For instance, if I wanted to understand how to interpret statistics generated by the medical system, I would need a short, simple way to refer to any significant tendency to generate false reports. If the available simple terms were also attack words, the process would become much more complicated."

This seems 'unsurprising' to me in, and only in, an antimemetic Everybody Knows sense.

"Guilt, Shame, and Depravity":

  • hypocrisy is often implemented through internal dissociation (shame)
  • ashamed people form coalitions around a shared interest in hiding information
  • [some modeling of/claims about how these coalitions work]
  • [some modeling of the incentives/conditions that motivate guilt vs. shame]

This is a bit more detailed than 'people are sometimes hypocritical'; and I don't think of the existence of ashamed coalitions to cover up norm violations in general (as opposed to relatively-more-explicitly-coordinated coalitions to cover up more-specific kinds of violations) as a broadly unsurprising claim. The degree to which shame can involve forgetting one's own actions & motives, which Ben describes, certainly felt like a big important surprise when I (independently, two years before that post) consciously noticed it in myself.

My guess would be that the bailey is something like "everyone is 100% hypocritical about everything 100% of the time, all people are actually 100% stupid and evil; except maybe for the small group of people around Michael Vassar" or something like that.

I haven't picked up this vibe from them at all (in writing or in person); I have sometimes picked up a vibe of 'we have uniquely/indispensably important insights'. YMMV, of course.

Comment by Nick_Tarleton on D0TheMath's Shortform · 2024-11-15T23:57:00.444Z · LW · GW

Embarrassingly, that was a semi-unintended reaction — I would bet a small amount against that statement if someone gave me a resolution method, but am not motivated to figure one out, and realized this a second after making it — that I hadn't figured out how to remove by the time you made that comment. Sorry.

Comment by Nick_Tarleton on Scissors Statements for President? · 2024-11-07T23:40:10.767Z · LW · GW

It sounds to me like the model is 'the candidate needs to have a (party-aligned) big blind spot in order to be acceptable to the extremists(/base)'. (Which is what you'd expect, if those voters are bucketing 'not-seeing A' with 'seeing B'.)

(Riffing off from that: I expect there's also something like, Motive Ambiguity-style, 'the candidate needs to have some, familiar/legible(?), big blind spot, in order to be acceptable/non-triggering to people who are used to the dialectical conflict'.)

Comment by Nick_Tarleton on How can I get over my fear of becoming an emulated consciousness? · 2024-07-08T17:32:08.953Z · LW · GW

if my well-meaning children successfully implement my desire never to die, by being uploaded, and "turn me on" like this with sufficient data and power backups but lack of care; or if something else goes wrong with the technicians involved not bothering to check if the upload was successful in setting up a fully virtualized existence complete with at least emulated body sensations, or do not otherwise check from time to time to ensure this remains the case;

These don't seem like plausible scenarios to me. Why would someone go to the trouble of running an upload, but be this careless? Why would someone running an upload not try to communicate with it at all?

Comment by Nick_Tarleton on What percent of the sun would a Dyson Sphere cover? · 2024-07-05T04:23:37.032Z · LW · GW

A shell in a Matrioshka brain (more generally, a Dyson sphere being used for computation) reradiates 100% of the energy it captures, just at a lower temperature.
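
For intuition, the steady-state energy balance (standard blackbody bookkeeping, added here for clarity; R is the shell radius, σ the Stefan-Boltzmann constant):

\[ L_\odot \;=\; 4\pi R^2 \,\sigma T_{\text{shell}}^4 \quad\Longrightarrow\quad T_{\text{shell}} = \left(\frac{L_\odot}{4\pi\sigma R^2}\right)^{1/4} \]

The captured power all comes back out; only the temperature drops with radius (roughly 390 K for a shell at 1 AU, lower for larger shells).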

Comment by Nick_Tarleton on Actually, Power Plants May Be an AI Training Bottleneck. · 2024-06-20T22:00:42.759Z · LW · GW

The AI industry people aren't talking much about solar or wind, and they would be if they thought it was more cost effective.

I don't see them talking about natural gas either, but nuclear or even fusion, which seems like an indication that whatever's driving their choice of what to talk about, it isn't short-term cost-effectiveness.

Comment by Nick_Tarleton on Actually, Power Plants May Be an AI Training Bottleneck. · 2024-06-20T18:27:39.605Z · LW · GW

I doubt it (or at least, doubt that power plants will be a bottleneck as soon as this analysis says). Power generation/use varies widely over the course of a day and of a year (seasons), so the 500 GW number is an average, and generating capacity is overbuilt; this graph on the same EIA page shows generation capacity > 1000 GW and non-stagnant (not counting renewables, it declined slightly from 2005 to 2022 but is still > 800 GW):

This seems to indicate that a lot of additional demand[1] could be handled without building new generation, at least (and maybe not only) if it's willing to shut down at infrequent times of peak load. (Yes, operators will want to run as much as possible, but would accept some downtime if necessary to operate at all.)

This EIA discussion of cryptocurrency mining (estimated at 0.6% to 2.3% of US electricity consumption!) is highly relevant, and seems to align with the above. (E.g. it shows increased generation at existing power plants with attached crypto mining operations, mentions curtailment during peak demand, and doesn't mention new plant construction.)

  1. ^

    Probably not as much as implied by the capacity numbers, since some of that capacity is peaking plants and/or just old, meaning not only inefficient, but sometimes limited by regulations in how many hours it can operate per year. But still.
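
As a back-of-envelope check on the headroom implied above, using only the round figures already cited (illustrative only):

```python
# Rough headroom estimate from the figures cited above (illustrative only).
avg_generation_gw = 500    # average US generation rate (the "500 GW" figure)
capacity_gw = 1000         # total capacity per the EIA graph (non-renewables alone > 800 GW)

avg_headroom_gw = capacity_gw - avg_generation_gw
print(f"average headroom: ~{avg_headroom_gw} GW")
# Usable headroom is smaller in practice: demand peaks well above the average,
# and some of that capacity is old or hour-limited peaking plant (see footnote 1).
```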

Comment by Nick_Tarleton on A study on cults and non-cults - answer questions about a group and get a cult score · 2024-06-20T17:25:38.779Z · LW · GW

Confidentiality: Any information you provide will not be personally linked back to you. Any personally identifying information will be removed and not published. By participating in this study, you are agreeing to have your anonymized responses and data used for research purposes, as well as potentially used in writeups and/or publications.

Will the names (or other identifying information if it exists, I haven't taken the survey) of the groups evaluated potentially be published? I'm interested in this survey, but only willing to take it if there's a confidentiality assurance for that information, even independent of my PII. (E.g., I might want to take it about a group without potentially contributing to public association between that group and 'being a cult'.)

Comment by Nick_Tarleton on When is "unfalsifiable implies false" incorrect? · 2024-06-15T04:36:38.992Z · LW · GW

The hypothetical bunker people could easily perform the Cavendish experiment to test Newtonian gravity, there just (apparently) isn't any way they'd arrive at the hypothesis.

Comment by Nick_Tarleton on Web-surfing tips for strange times · 2024-05-31T18:44:17.092Z · LW · GW

As a counterpoint, I use Firefox as my primary browser (I prefer a bunch of little things about its UI), and this is a complete list of glitches I've noticed:

  • The Microsoft account login flow sometimes goes into a loop of asking me for my password
  • Microsoft Teams refuses to work ('you must use Edge or Chrome')
  • Google Meet didn't use to support background blurring, but does now
  • A coworker reported that a certain server BMC web interface didn't work in Firefox, but did in Chrome (on Mac) — I found (on Linux, idk if that was the relevant difference) it broke the same way in both, which I could get around by deleting a modal overlay in the inspector

Comment by Nick_Tarleton on Non-Disparagement Canaries for OpenAI · 2024-05-31T18:19:00.732Z · LW · GW

(I am not a lawyer)

The usual argument (e.g.) for warrant canaries being meaningful is that the (US) government has much less legal ability to compel speech (especially false speech) than to prohibit it. I don't think any similar argument holds for private contracts; AFAIK they can require speech, and I don't know whether anything is different if the required speech is known by both parties to be false. (The one relevant search result I found doesn't say there's anything preventing such a contract; Claude says there isn't, but it could be thrown out on grounds of public policy or unconscionability.)

I would think this 'canary' still works, because it's hard to imagine OpenAI suing, or getting anywhere with a suit, for someone not proactively lying (when silence could mean things besides 'I am subject to an NDA'). But, if a contract requiring false speech would be valid,

  • insofar as this works it works for different reasons than a warrant canary
  • it could stop working, if future NDAs are written with it in mind

(Quibbles aside, this is a good idea; thanks for making it!)

Comment by Nick_Tarleton on "No evidence" as a Valley of Bad Rationality · 2020-03-30T17:33:47.963Z · LW · GW

Upvoted, but weighing in the other direction: Average Joe also updates on things he shouldn't, like marketing. I expect the doctor to have moved forward some in resistance to BS (though in practice, not as much as he would if he were consistently applying his education).

Comment by Nick_Tarleton on "No evidence" as a Valley of Bad Rationality · 2020-03-30T17:27:07.079Z · LW · GW

And the correct reaction (and the study's own conclusion) is that the sample is too small to say much of anything.

(Also, the "something else" was "conventional treatment", not another antiviral.)

Comment by Nick_Tarleton on Zeynep Tufekci on Why Telling People They Don't Need Masks Backfired · 2020-03-18T08:12:36.106Z · LW · GW

I find the 'backfired through distrust'/'damaged their own credibility' claim plausible, it agrees with my prejudices, and I think I see evidence of similar things happening elsewhere; but the article doesn't contain evidence that it happened in this case, and even though it's a priori likely and worth pointing out, the claim that it did happen should come with evidence. (This is a nitpick, but I think it's an important nitpick in the spirit of sharing likelihood ratios, not posterior beliefs.)
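
(The distinction in odds form, for reference; nothing here is specific to this case:

\[ \frac{P(H\mid E)}{P(\neg H\mid E)} \;=\; \frac{P(H)}{P(\neg H)} \times \frac{P(E\mid H)}{P(E\mid \neg H)} \]

i.e. posterior odds = prior odds × likelihood ratio. The article supplies the prior-odds part; what's missing is the likelihood-ratio part, evidence from this particular case.)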

Comment by Nick_Tarleton on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T20:35:02.330Z · LW · GW

if there's a domain where the model gives two incompatible predictions, then as soon as that's noticed it has to be rectified in some way.

What do you mean by "rectified", and are you sure you mean "rectified" rather than, say, "flagged for attention"? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn't try to be consistent. I believe 'immediately update your model somehow when you notice an inconsistency' is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follow it], and I don't think this belief is opposed to "rationalism", which should only require not indefinitely tolerating inconsistency.)

Comment by Nick_Tarleton on How long does SARS-CoV-2 survive on copper surfaces · 2020-03-11T22:40:55.348Z · LW · GW

On the other hand:

We found that viable virus could be detected... up to 4 hours on copper...

Comment by Nick_Tarleton on How long does SARS-CoV-2 survive on copper surfaces · 2020-03-07T19:41:50.646Z · LW · GW

Here's a study using a different coronavirus.

Brasses containing at least 70% copper were very effective at inactivating HuCoV-229E (Fig. 2A), and the rate of inactivation was directly proportional to the percentage of copper. Approximately 10³ PFU in a simulated wet-droplet contamination (20 µl per cm²) was inactivated in less than 60 min. Analysis of the early contact time points revealed a lag in inactivation of approximately 10 min followed by very rapid loss of infectivity (Fig. 2B).

Comment by Nick_Tarleton on How long does SARS-CoV-2 survive on copper surfaces · 2020-03-07T19:35:14.546Z · LW · GW

That paper only looks at bacteria and does not knowably carry over to viruses.

Comment by Nick_Tarleton on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-31T00:36:21.268Z · LW · GW

I don't see you as having come close to establishing, beyond the (I claim weak) argument from the single-word framing, that the actual amount or parts of structure or framing that Dragon Army has inherited from militaries are optimized for attacking the outgroup to a degree that makes worrying justified.

Comment by Nick_Tarleton on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-05-21T19:58:53.751Z · LW · GW

Doesn't work in incognito mode either. There appears to be an issue with lesserwrong.com when accessed over HTTPS — over HTTP it sends back a reasonable-looking 301 redirect, but on port 443 the TCP connection just hangs.
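
A minimal way to check for this kind of symptom (a stdlib-only Python sketch; the timeout value is arbitrary):

```python
import socket

HOST = "lesserwrong.com"

for port in (80, 443):
    try:
        # Plain TCP connect. Per the report above, port 80 answers (and the HTTP
        # response is a 301 redirect), while connections to port 443 just hang.
        with socket.create_connection((HOST, port), timeout=10):
            print(f"port {port}: TCP connection established")
    except OSError as exc:
        print(f"port {port}: failed ({exc})")
```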

Comment by Nick_Tarleton on "Taking AI Risk Seriously" (thoughts by Critch) · 2018-05-21T03:44:15.088Z · LW · GW

Similar meta: none of the links to lesserwrong.com currently work due to, well, being to lesserwrong.com rather than lesswrong.com.

Comment by Nick_Tarleton on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-16T01:17:32.159Z · LW · GW

Further-semi-aside: "common knowledge that we will coordinate to resist abusers" is actively bad and dangerous to victims if it isn't true. If we won't coordinate to resist abusers, making that fact (/ a model of when we will or won't) common knowledge is doing good in the short run by not creating a false sense of security, and in the long run by allowing the pattern to be deliberately changed.

Comment by Nick_Tarleton on Mental Illness Is Not Evidence Against Abuse Allegations · 2018-05-16T01:12:43.988Z · LW · GW

This post may not have been quite correct Bayesianism (... though I don't think I see any false statements in its body?), but regardless there are one or more steel versions of it that are important to say, including:

  • persistent abuse can harm people in ways that make them more volatile, less careful, more likely to say things that are false in some details, etc.; this needs to be corrected for if you want to reach accurate beliefs about what's happened to someone
  • arguments are soldiers; if there are legitimate reasons (that people are responding to) to argue against someone or see them as dangerous, this is likely to bleed over to dismissing other things they say more than is justified, especially if there are other motivations to do so
  • the intelligent social web makes some people both more likely to be abused, and less likely to be believed
    • whether someone seems "off" depends to some extent on how the social web wants them to be perceived, independent of what they're doing
    • seriously I don't know how to communicate using words just how powerful (I claim) this class of effects is
  • there are all kinds of reasons that not believing claims about abuse is often just really convenient; this sounds obvious but I don't see people accounting for it well; this motivation will take advantage of whatever rationalizations it can

Comment by Nick_Tarleton on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-06T23:26:04.312Z · LW · GW

IMO, the "legitimate influence" part of this comment is important and good enough to be a top-level post.

Comment by Nick_Tarleton on Give praise · 2018-05-01T00:06:37.500Z · LW · GW

This is simply instrumentally wrong, at least for most people in most environments. Maybe people and an environment could be shaped so that this was a good strategy, but the shaping would actually have to be done and it's not clear what the advantage would be.

Comment by Nick_Tarleton on Give praise · 2018-05-01T00:00:34.731Z · LW · GW

My consistent experience of your comments is one of people giving [what I believe to be, believing that I understand what they're saying] the actual best explanations they can, and you not understanding things that I believe to be comprehensible and continuing to ask for explanations and evidence that, on their model, they shouldn't necessarily be able to provide.

(to be upfront, I may not be interested in explaining this further, due to limited time and investment + it seeming like a large tangent to this thread)

Comment by Nick_Tarleton on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning · 2016-01-28T16:27:07.803Z · LW · GW

I don't see how we anything-like-know that deep NNs with 'sufficient training data' would be sufficient for all problems. We've seen them be sufficient for many different problems and can expect them to be sufficient for many more, but all?

Comment by Nick_Tarleton on LessWrong 2.0 · 2015-12-05T20:36:45.403Z · LW · GW

A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating.

Comment by Nick_Tarleton on The Problem with AIXI · 2014-03-22T17:18:47.189Z · LW · GW

Other possible implications of this scenario have been discussed on LW before.

Comment by Nick_Tarleton on Is my view contrarian? · 2014-03-14T02:21:45.483Z · LW · GW

This shouldn't lead to rejection of the mainstream position, exactly, but rejection of the evidential value of mainstream belief, and reversion to your prior belief / agnosticism about the object-level question.

Comment by Nick_Tarleton on Building Phenomenological Bridges · 2013-12-25T21:37:48.813Z · LW · GW

Solving that problem seems to require some flavor of Paul's "indirect normativity", but that's broken and might be unfixable as I've discussed with you before.

Do you have a link to this discussion?

Comment by Nick_Tarleton on Open Thread, November 1 - 7, 2013 · 2013-11-25T20:35:58.898Z · LW · GW

Why not go a step further and say that 1 copy is the same as 0, if you think there's a non-moral fact of the matter? The abstract computation doesn't notice whether it's instantiated or not. (I'm not saying this isn't itself really confused - it seems like it worsens and doesn't dissolve the question of why I observe an orderly universe - but it does seem to be where the GAZP points.)

Comment by Nick_Tarleton on Open Thread, November 1 - 7, 2013 · 2013-11-08T05:04:24.482Z · LW · GW

I wonder if it would be fair to characterize the dispute summarized in/following from this comment on that post (and elsewhere) as over whether the resolutions to (wrong) questions about anticipation/anthropics/consciousness/etc. will have the character of science/meaningful non-moral philosophy (crisp, simple, derivable, reaching consensus across human reasoners to the extent that settled science does), or that of morality (comparatively fuzzy, necessarily complex, not always resolvable in principled ways, not obviously on track to reach consensus).

Comment by Nick_Tarleton on No Universally Compelling Arguments in Math or Science · 2013-11-08T01:10:55.481Z · LW · GW

Where Recursive Justification Hits Bottom and its comments should be linked for their discussion of anti-inductive priors.

(Edit: Oh, this is where the first quote in the post came from.)

Comment by Nick_Tarleton on No Universally Compelling Arguments in Math or Science · 2013-11-08T00:06:39.358Z · LW · GW

Measuring optimization power requires a prior over environments. Anti-inductive minds optimize effectively in anti-inductive worlds.

(Yes, this partially contradicts my previous comment. And yes, the idea of a world or a proper probability distribution that's anti-inductive in the long run doesn't make sense as far as I can tell; but you can still define a prior/measure that orders any finite set of hypotheses/worlds however you like.)
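
For reference, the notion of optimization power I have in mind (roughly the definition from "Measuring Optimization Power") is

\[ \mathrm{OP}(x^*) \;=\; -\log_2 \mu\big(\{\,x : U(x) \ge U(x^*)\,\}\big), \]

where x* is the realized outcome, U the preference ordering, and μ a measure over possible outcomes. The measure μ is where the prior over environments enters: the same behavior can count as strongly optimizing under one measure and not at all under an anti-inductive one.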

Comment by Nick_Tarleton on No Universally Compelling Arguments in Math or Science · 2013-11-05T05:58:49.820Z · LW · GW

I agree with the message, but I'm not sure whether I think things with a binomial monkey prior, or an anti-inductive prior, or that don't implement (a dynamic like) modus ponens on some level even if they don't do anything interesting with verbalized logical propositions, deserve to be called "minds".