Comments

Comment by countingtoten on Somerville Mask Usage · 2020-07-01T02:56:49.150Z · score: 3 (4 votes) · LW · GW

This seems like vastly more mask usage than I've observed in a crowded suburb. I can't give precise numbers, but it feels like fewer than a third are wearing them, despite distancing being very difficult if you walk in some areas.

Comment by countingtoten on Likelihood of hyperexistential catastrophe from a bug? · 2020-06-19T18:06:11.583Z · score: 1 (1 votes) · LW · GW

No. First, people thinking of creating an AGI from scratch (i.e., one comparable to the sort of AI you're imagining) have already warned against this exact issue and talked about measures to prevent a simple change of one bit from having any effect. (It's the problem you don't spot that'll kill you.)

Second, GPT-2 is not near-perfect. It does pretty well at a job it was never intended to do, but if we ignore that context it seems pretty flawed. Naturally, its output was nowhere near maximally bad. The program did indeed have a silly flaw, but I assume that's because it's more of a silly experiment than a model for AGI. Indeed, if I try to imagine making GPT-N dangerous, I come up with the idea of an artificial programmer that uses vaguely similar principles to auto-complete programs and could thus self-improve. Reversing the sign of its reward function would then make it produce garbage code or non-code, rendering it mostly harmless.

Again, it's the subtle flaw you don't spot in GPT-N that could produce an AI capable of killing you.

Comment by countingtoten on Likelihood of hyperexistential catastrophe from a bug? · 2020-06-19T08:38:16.324Z · score: 1 (1 votes) · LW · GW

The only way I can see this happening with non-negligible probability is if we create AGI along more human lines - e.g., uploaded brains which evolve through a harsh selection process that wouldn't be aligned with human values. In that scenario, it may be near certain. Nothing is closer to a mind design capable of torturing humans than another human mind - we do that all the time today.

As others point out, though, the idea of a sign being flipped in an explicit utility function is one that people understand and are already looking for. More than that, it would only produce minimal human-utility if the AI had a correct description of human utility. Otherwise, it would just use us for fuel and building material. The optimization part also has to work well enough. Everything about the AGI, loosely speaking, has to be near-perfect except for that one bit. This naively suggests a probability near zero. I can't imagine a counter-scenario clearly enough to make me change this estimate, if you don't count the previous paragraph.

Comment by countingtoten on Studies On Slack · 2020-05-13T21:25:27.433Z · score: 1 (1 votes) · LW · GW

This shows why I don't trust the categories. The ability to let talented people go in whatever direction seems best will almost always be felt as freedom from pressure.

Comment by countingtoten on How effective are tulpas? · 2020-03-10T19:51:50.459Z · score: 1 (1 votes) · LW · GW

One, the idea was to pick a fictional character I preferred, but could not easily come to believe in. (So not Taylolth, may death come swiftly to her enemies.) Two, I wanted to spend zero effort imagining what this character might say or do. I had the ability to picture Kermit.

Comment by countingtoten on How effective are tulpas? · 2020-03-10T19:47:24.224Z · score: 1 (1 votes) · LW · GW

I meant that as a caution - though it is indeed fictional evidence, and my lite version IRL seems encouraging.

I really think you'll be fine taking it slow. Still, if you have possible risk factors, I would:

  • Make sure you have the ability to speak with a medical professional on fairly short notice.
  • Remind yourself that you are always in charge inside your own head. People who might know tell me that hearing this makes you safer. It may be a self-proving statement.

Comment by countingtoten on How effective are tulpas? · 2020-03-10T10:32:59.022Z · score: 3 (5 votes) · LW · GW

Sy, is that you?

I started talking to Kermit the Frog, off and on, many months ago. I had this idea after seeing an article by an ex-Christian who appeared never to have made predictions about her life using a truly theistic model, but who nevertheless missed the benefits she recalls getting from her talks with Jesus. Result: Kermit has definitely comforted me once or twice (without the need for 'belief') and may have helped me to remember useful data/techniques I already knew, but mostly nothing much happens.

Now, as an occasional lucid dreamer who once decided to make himself afraid in a dream, I tend not to do anything that I think is that dumb. I have not devoted much extra effort or time to modelling Kermit the Frog. However, my lazy experiment has definitely yielded positive results. Perhaps you could try your own limited experiment first?

Comment by countingtoten on Criticism as Entertainment · 2020-01-10T10:38:45.467Z · score: 0 (3 votes) · LW · GW

Pitch Meeting is at least bringing up actual problems with Game of Thrones season 8. But I dare you to tell if early Game of Thrones was better or worse than season 8, based on the Pitch Meeting.

That's gonna be super easy, barely an inconvenience. The video for S8 not only feels more critical than usual; it also gives specific examples of negative changes from previous seasons, plus a causal explanation (albeit partial) that the video keeps coming back to. One might even say the characters harp on their explanation for S8 being worse than early seasons.

Also, looking at the PM video on GOT up to that point, the chief criticism I find (and for once it seems like pretty explicit criticism of the art in question) seems to be about the show getting old and also worse as it goes on. Actual quote:

Well, eventually we're going to start running out of main characters, in later seasons, so they're all going to develop pretty thick plot armor.

Of course, whether that makes the early seasons better or whether their failures took a while to become obvious is a difficult question. PM seems to suggest the former, given their fairly blatant assertion that S8 is worse due to being inexplicably rushed. However, that is technically a different question. It leaves open the possibility that the showrunners' ultimate failure was determined by their approach at the start, as the earlier video suggested.

Comment by countingtoten on Criticism as Entertainment · 2020-01-10T10:26:10.190Z · score: 1 (1 votes) · LW · GW

I have to add that Pitch Meeting does not, in fact, label itself "review," though it does contain implied assertions. (Some implied claims are even about media, rather than about screenwriters and studio executives.)

The specific example/challenge/instance of meta-shitting seems odd, though. For one, GOT Season 8 was a major news story rather than something you needed to learn from Pitch Meeting. I'm going to make the rest a separate comment.

Comment by countingtoten on What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address. · 2019-12-03T21:58:31.259Z · score: 1 (1 votes) · LW · GW

Somewhat unsurprisingly, claim 1 had the least support.

Admittedly, this is what shminux objected to. Beforehand I would have expected more resistance based on people already believing the future is uncertain, casting doubt on claim 2 and especially the "tractable" part of claim 3. If I had to steelman such views, they might sound something like, 'The way to address this problem is to make sure sensible people are in charge, and a prerequisite for being sensible is not giving weird-sounding talks for 3 people.'

How sure are you that the people who showed up were objecting out of deeply-held disagreements, and not out of a sense that objections are good?

Comment by countingtoten on What I talk about when I talk about AI x-risk: 3 core claims I want machine learning researchers to address. · 2019-12-03T21:48:00.542Z · score: -1 (2 votes) · LW · GW

That number was presented as an example ("e.g.") - but more importantly, all the numbers in the range you offer here would argue for more AI alignment research! What we need to establish, naively, is that the probability is not super-exponentially low for a choice between 'inter-galactic civilization' and 'extinction of humanity within a century'. That seems easy enough if we can show that nothing in the claim contradicts established knowledge.

I would argue the probability for this choice existing is far in excess of 50%. As examples of background info supporting this: Bayesianism implies that "narrow AI" designs should be compatible on some level; we know the human brain resulted from a series of kludges; and the superior number of neurons within an elephant's brain is not strictly required for taking over the world. However, that argument is not logically necessary.

(Technically you'd have to deal with Pascal's Mugging. However, I like Hansonian adjustment as a solution, and e.g. I doubt an adult civilization would deceive its people about the nature of the world.)

Comment by countingtoten on Is requires ought · 2019-10-30T06:33:07.172Z · score: 1 (1 votes) · LW · GW

I almost specified, 'what would it be without the confusing term "ought" or your gerrymandered definition thereof,' but since that was my first comment in this thread I thought it went without saying.

Comment by countingtoten on Is requires ought · 2019-10-30T05:33:21.581Z · score: 1 (1 votes) · LW · GW

Do you have a thesis that you argue for in the OP? If so, what is that thesis?

Are you prepared to go down the other leg of the dilemma and say that the "true oughts" do not include any goal which would require you to, e.g., try to have correct beliefs? Also: the Manhattan Project.

Comment by countingtoten on Is requires ought · 2019-10-29T21:50:27.368Z · score: 2 (3 votes) · LW · GW

Why define goals as ethics (knowing that definitions are tools that we can use and replace depending on our goal of the moment)? You seem to be saying that 'ought' has a structure which can also be used to annihilate humanity or bring about unheard-of suffering. That does not seem to me like a useful perspective.

Seriously, just go and watch "Sorry to Bother You."

Comment by countingtoten on What's going on with "provability"? · 2019-10-13T20:22:35.851Z · score: 4 (3 votes) · LW · GW

Arguably, the numbers we care about. Set theory helpfully adds that second-order arithmetic (arithmetic using the language of sets) has only a single model (up to what is called 'isomorphism', meaning a precise analogy) and that Godel's original sentence is 'true' within this abstraction.

Comment by countingtoten on A Critique of Functional Decision Theory · 2019-09-14T00:45:00.963Z · score: 2 (3 votes) · LW · GW

In part IV, can you explain more about what your examples prove?

You say FDT is motivated by an intuition in favor of one-boxing, but apparently this is false by your definition of Newcomb's Problem. FDT was ultimately motivated by an intuition that it would win. It also seems based on intuitions regarding AI, if you read that post - specifically, that a robot programmed to use CDT would self-modify to use a more reflective decision theory if given the chance, because that choice gives it more utility. Your practical objection about humans may not be applicable to MIRI.

As far as your examples go, neither my actions nor my abstract decision procedure controls whether or not I'm Scottish. Therefore, one-boxing gives me less utility in the Scot-Box Problem and I should not do it.

Exception: Perhaps Scotland in this scenario is known for following FDT. Then FDT might in fact say to one-box (I'm not an expert) and this may well be the right answer.

Comment by countingtoten on The AI Timelines Scam · 2019-07-14T16:51:35.950Z · score: 5 (3 votes) · LW · GW

Assuming you mean the last blockquote, that would be the Google result I mentioned which has text, so you can go there, press Ctrl-F, and type "must fail" or similar.

You can also read the beginning of the PDF, which talks about what can and can't be programmed while making clear this is about hardware and not algorithms. See the first comment in this family for context.

Comment by countingtoten on The AI Timelines Scam · 2019-07-14T07:10:57.841Z · score: 9 (5 votes) · LW · GW

Again, he plainly says more than that. He's challenging "the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer." He asserts as fact that certain types of cognition require hardware more like a human brain. Only two out of four areas, he claims, "can therefore be programmed." In case that's not clear enough, here's another quote of his:

since Area IV is just that area of intelligent behavior in which the attempt to program digital computers to exhibit fully formed adult intelligence must fail, the unavoidable recourse in Area III to heuristics which presuppose the abilities of Area IV is bound, sooner or later, to run into difficulties. Just how far heuristic programming can go in Area III before it runs up against the need for fringe consciousness, ambiguity tolerance, essential/inessential discrimination, and so forth, is an empirical question. However, we have seen ample evidence of trouble in the failure to produce a chess champion, to prove any interesting theorems, to translate languages, and in the abandonment of GPS.

He does not say that better algorithms are needed for Area IV, but that digital computers must fail. He goes on to falsely predict that clever search together with "newer and faster machines" cannot produce a chess champion. AFAICT this is false even if we try to interpret him charitably, as saying more human-like reasoning would be needed.

Comment by countingtoten on The AI Timelines Scam · 2019-07-14T02:26:10.423Z · score: -1 (2 votes) · LW · GW

I couldn't have written an equally compelling essay on biases in favor of long timelines without lying, I think,

Then perhaps you should start here.

Comment by countingtoten on The AI Timelines Scam · 2019-07-14T02:23:00.304Z · score: 15 (6 votes) · LW · GW

Hubert Dreyfus, probably the most famous historical AI critic, published "Alchemy and Artificial Intelligence" in 1965, which argued that the techniques popular at the time were insufficient for AGI.

That is not at all what the summary says. Here is roughly the same text from the abstract:

Early successes in programming digital computers to exhibit simple forms of intelligent behavior, coupled with the belief that intelligent activities differ only in their degree of complexity, have led to the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer. Attempts to simulate cognitive processes on computers have, however, run into greater difficulties than anticipated. An examination of these difficulties reveals that the attempt to analyze intelligent behavior in digital computer language systematically excludes three fundamental human forms of information processing (fringe consciousness, essence/accident discrimination, and ambiguity tolerance). Moreover, there are four distinct types of intelligent activity, only two of which do not presuppose these human forms of information processing and can therefore be programmed. Significant developments in artificial intelligence in the remaining two areas must await computers of an entirely different sort, of which the only existing prototype is the little-understood human brain.

In case you thought he just meant greater speed, he says the opposite on PDF page 71. Here is roughly the same text again from a work I can actually copy and paste:

It no longer seems obvious that one can introduce search heuristics which enable the speed and accuracy of computers to bludgeon through in those areas where human beings use more elegant techniques. Lacking any a priori basis for confidence, we can only turn to the empirical results obtained thus far. That brute force can succeed to some extent is demonstrated by the early work in the field. The present difficulties in game playing, language translation, problem solving, and pattern recognition, however, indicate a limit to our ability to substitute one kind of "information processing" for another. Only experimentation can determine the extent to which newer and faster machines, better programming languages, and cleverer heuristics can continue to push back the frontier. Nonetheless, the dramatic slowdown in the fields we have considered and the general failure to fulfill earlier predictions suggest the boundary may be near. Without the four assumptions to fall back on, current stagnation should be grounds for pessimism.

This, of course, has profound implications for our philosophical tradition. If the persistent difficulties which have plagued all areas of artificial intelligence are reinterpreted as failures, these failures must be interpreted as empirical evidence against the psychological, epistemological, and ontological assumptions. In Heideggerian terms this is to say that if Western Metaphysics reaches its culmination in Cybernetics, the recent difficulties in artificial intelligence, rather than reflecting technological limitations, may reveal the limitations of technology.

If indeed Dreyfus meant to critique 1965's algorithms - which is not what I'm seeing, and certainly not what I quoted - it would be surprising for him to get so much wrong. How did this occur?

Comment by countingtoten on The Schelling Choice is "Rabbit", not "Stag" · 2019-06-08T02:07:31.885Z · score: 5 (3 votes) · LW · GW

Meandering conversations were important to him, because it gave them space to actually think. I pointed to examples of meetings that I thought had gone well, that ended will google docs full of what I thought had been useful ideas and developments. And he said "those all seemed like examples of mediocre meetings to me – we had a lot of ideas, sure. But I didn't feel like I actually got to come to a real decision about anything important."

Interesting that you choose this as an example, since my immediate reaction to your opening was, "Hold Off On Proposing Solutions." More precisely, my reaction was that I recall Eliezer saying he recommended this before any other practical rule of rationality (to a specific mostly white male audience, anyway) and yet you didn't seem to have established that people agree with you on what the problem is.

It sounds like you got there eventually, assuming "the right path for the organization" is a meaningful category.

Comment by countingtoten on Stories of Continuous Deception · 2019-06-02T07:29:26.844Z · score: 1 (1 votes) · LW · GW

I think this is being presented because a treacherous turn requires deception.

As I've mentioned before, that is technically false (unless you want a gerrymandered definition).

Comment by countingtoten on A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI · 2019-05-22T23:18:41.398Z · score: 4 (3 votes) · LW · GW

Smiler AI: I'm focusing on self-improvement. A smarter, better version of me would find better ways to fill the world with smiles. Beyond that, it's silly for me to try predicting a superior intelligence.

Comment by countingtoten on A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI · 2019-05-22T18:46:33.083Z · score: 5 (2 votes) · LW · GW

Mostly agree, but I think an AGI could be subhuman in various ways until it becomes vastly superhuman. I assume we agree that no real AI could consider literally every possible course of action when it comes to long-term plans. Therefore, a smiler could legitimately dismiss all thoughts of repurposing our atoms as an unprofitable line of inquiry, right up until it has the ability to kill us. (This could happen even without crude corrigibility measures, which we could remove or allow to be absent from a self-revision because we trust the AI.) It could look deceptively like human beings deciding not to pursue an Infinity Gauntlet to snap our problems away.

Comment by countingtoten on Comment section from 05/19/2019 · 2019-05-22T08:41:44.497Z · score: -13 (4 votes) · LW · GW

I deny that your approach ever has an advantage over recognizing that definitions are tools which have no truth values, and then digging into goals or desires.

Comment by countingtoten on A Treacherous Turn Timeline - Children, Seed AIs and Predicting AI · 2019-05-22T07:53:36.855Z · score: 6 (4 votes) · LW · GW

The core of the disagreement between Bostrom (treacherous turn) and Goertzel (sordid stumble) is about how long steps 2. and 3. will take, and how obvious the seed AI's unalignment will look like during these steps.

Really? Does Bostrom explicitly call this the crux?

I'm worried at least in part that AGI (for concreteness, let's say a smile-maximizer) won't even see a practical way to replace humanity with its tools until it far surpasses human level. Until then, it honestly seeks to make humans happy in order to gain reward. Since this seems more benevolent than most humans - who proverbially can't be trusted with absolute power - we could become blasé about risks. This could greatly condense step 4.

Comment by countingtoten on Comment section from 05/19/2019 · 2019-05-19T22:29:41.480Z · score: 1 (3 votes) · LW · GW

Not every line in 37 Ways is my "standard Bayesian philosophy," nor do I believe much of what you say follows from anything standard.

This probably isn't our central disagreement, but humans are Adaptation-Executers, not Fitness-Maximizers. Expecting humans to always use words for Naive Bayes alone seems manifestly irrational. I would go so far as to say you shouldn't expect people to use them for Naive Bayes in every case, full stop. (This seems to border on subconsciously believing that evolution has a mind.) If you believe someone is making improper inferences, stop trying to change the subject and name an inference you think they'd agree with (that you consider false).

Comment by countingtoten on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-16T08:46:37.422Z · score: 1 (1 votes) · LW · GW

Comment status: I may change my mind on a more careful reading.

Other respondents have mentioned the Mathematical Macrocosm Hypothesis. My take differs slightly, I think. I believe you've subtly contradicted yourself. In order for your argument to go anywhere you had to assume that an abstract computation rule exists in the same sense as a real computer running a simulation. This seems to largely grant Tegmark's version of the MMH (and may be the first premise I reject here). ETA: the other branch of your dilemma doesn't seem to engage with the functionalist view of qualia, which says that the real internal behavior or relationships within a physical system are what matter.

Now, we're effectively certain that our world is fundamentally governed by mathematical laws of physics (whether we discover the true laws or not). Dualist philosophers like Chalmers seem to grant this point despite wanting to say that consciousness is different. I think Chalmers freely grants that your consciousness - despite being itself non-physical, on his view - is wholly controlled by physical processes in your brain. This seems undisputed among serious people. (You can just take certain chemicals or let yourself get hungry, and see how your thoughts change.)

So, on the earlier Tegmark IV premise, there's no difference between you and a simulation. You are a simulation within an abstract mathematical process, which exists in exactly the same way as an arithmetical sequence or the computational functions you discuss. You are isomorphic to various simulations of yourself within abstract computations.

Chalmers evidently postulates a "bridging law" in the nature of reality which makes some simulations conscious and not others. However, this seems fairly arbitrary, and in any case I also recall Chalmers saying that a person uploaded (properly) to a computer would be conscious. I certainly don't see anything in his argument to prevent this. If you don't like the idea of this applying to more abstract computations, I recommend you reject Tegmark and admit that the nature of reality is still technically an open problem.

Comment by countingtoten on Can Bayes theorem represent infinite confusion? · 2019-03-22T23:44:24.337Z · score: 9 (2 votes) · LW · GW

Using epsilons can in principle allow you to update. However, the situation seems slightly worse than jimrandomh describes. It looks like you need P(E|h) - the probability of the evidence if H is false - in order to get a precise answer. Also, the missing info that jim mentioned is already enough in principle to let the final answer be any probability whatsoever.

If we use log odds (the framework in which we could literally start with "infinite certainty") then the answer could be anywhere on the real number line. We have infinite (or at least unbounded) confusion until we make our assumptions more precise.
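To make that concrete (a minimal sketch, writing ¬H for "H is false"): in log-odds form, Bayes' theorem reads

\log O(H \mid E) = \log O(H) + \log \frac{P(E \mid H)}{P(E \mid \neg H)},

so the update is undefined until we pin down P(E|¬H), and a prior log-odds of ±∞ stays at ±∞ no matter how large the finite likelihood ratio is.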

Comment by countingtoten on mAIry's room: AI reasoning to solve philosophical problems · 2019-03-18T07:04:28.401Z · score: 3 (2 votes) · LW · GW

One is phrased or presented as knowledge. I don't know the best way to approach this, but to a first approximation the belief is the one that has an explicit probability attached. I know you talked about a Boolean, but there the precise claim given a Boolean value was "these changes have happened", described as an outside observer would, and in my example the claim is closer to just being the changes.

Your example could be brought closer by having mAIry predict the pattern of activation, create pointers to memories that have not yet been formed, and thus formulate the claim, "Purple looks like n_p." Here she has knowledge beforehand, but the specific claim under examination is incomplete or undefined because that node doesn't exist.

Comment by countingtoten on Want to Know What Time Is? · 2019-03-09T09:40:58.931Z · score: 1 (1 votes) · LW · GW

The first potential problem I see is that "information available" should be relative to an information-storage device like a human brain, whereas time in (my limited understanding of) physics is relative to a rock or other physical frame of reference. Those seem different.

If we try to remove that problem then we get a new one (which might exist anyway in a less acute form). When we take as our "given phenomenon" something large in spatial area, like 'the Earth at exactly 4:40 am EST in this frame of reference,' we find vastly more available information than we could have - even in principle, I would think - for many phenomena we would consider to take more time. So this definition doesn't seem to match the word.

Comment by countingtoten on mAIry's room: AI reasoning to solve philosophical problems · 2019-03-09T09:14:03.779Z · score: 4 (2 votes) · LW · GW

Really, no link to orthonormal's sequence?

I think you haven't zeroed in on the point of the Mary's Room argument. According to this argument, when Mary exclaims, "So that's what red looks like!" she is really pointing to a non-verbal belief she was previously incapable of forming. (I don't mean the probability of her statement, but the real claim to which she attached the probability.) So it won't convince any philosophers if you talk about mAIry setting a preexisting Boolean.

Of course this argument fails to touch physicalism - some version of mAIry could just form a new memory and acquire effective certainty of the new claim that "Red looks like {memories of red}," a claim which mAIry was previously incapable of even formulating. (Note that, e.g., this claim could be made false by altering her memories or showing her a green apple while telling her "Humans call this color 'red'." The claim is clearly meaningful, though a more carefully formulated version might be tautological.) However, the OP as written doesn't quite touch Mary's Room.

Comment by countingtoten on Avoiding Jargon Confusion · 2019-02-23T05:27:23.204Z · score: 1 (1 votes) · LW · GW

You mean to say that deliberate anti-epistemology, which combines dehumanization with anthropomorphism, turns out to be bad?

Comment by countingtoten on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-23T00:11:53.554Z · score: 7 (2 votes) · LW · GW

I mean, physically assaulting anyone is a crime, so the OP arguably violates one of these existing rules. This is definitely true (technically) if he suggested doing anything like that with newcomers to a LW meetup unless they specifically say not to. While we likely want a looser approach to enforcement (compared to a zero-tolerance policy that would ban Duncan), it sounds to me like you should tell him not to do it again.

Comment by countingtoten on Beware of black boxes in AI alignment research · 2018-01-21T21:41:26.798Z · score: 6 (2 votes) · LW · GW

When I saw the title, I thought, 'But we want to decompose problems in FAI theory to isolate questions we can answer. This suggests heavy use of black boxes.' I wondered if perhaps he was trying to help people who were getting everything wrong (in which case I think a positive suggestion has more chance of helping than telling people what to avoid). I was pleased to see the post actually addressed a more intelligent perspective, and has much less to do with your point or mine.

Comment by countingtoten on Entitlement, Covert Contracts, Social Libertarianism, and Related Concepts · 2017-11-21T10:41:09.582Z · score: 5 (2 votes) · LW · GW

I don't think horrible people would have disliked Kurt Godel?

Say what now?

If horrible people like you, that does usually mean you aren't doing enough for the people they hate.

Comment by countingtoten on Against Modest Epistemology · 2017-11-18T03:00:37.435Z · score: 2 (1 votes) · LW · GW

Like many people in the past year, I frequently wonder if I'm dreaming while awake. This seems to make up >10% of the times I've tested it. I'm also running out of ways to say that I mean what I say.

You may be right that the vast majority of the time (meaningful cough) when humans wonder if they're dreaming, they are. People who know that may account for nearly all exceptions.

Comment by countingtoten on Against Modest Epistemology · 2017-11-17T21:22:37.610Z · score: 2 (1 votes) · LW · GW

That's actually not quite right - my dream *content* varies widely in how mundane it is. My point is that I learned to recognize dreams not by practicing the thought 'This experience is too vivid to be a dream,' but by practicing tests which seemed likely to work.

Comment by countingtoten on Against Modest Epistemology · 2017-11-17T19:45:59.099Z · score: 2 (1 votes) · LW · GW

The part about sensory data sounds totally wrong to me personally, and of course you know where this is going (see also). My dream self, by contrast, can in fact notice logical flaws or different physics and conclude that I'm dreaming.

Comment by countingtoten on Against Modest Epistemology · 2017-11-17T18:50:06.791Z · score: 2 (1 votes) · LW · GW

No, seriously, what you're saying sounds like nonsense. Number one, dreams can have vivid stimuli that I recall explicitly using as evidence that I wasn't dreaming; of course I've also thought I was performing mundane tasks. Number two, how does dream-you tell the difference without having "tested the behavior of written words, or used some other more-or-less reliable test?"

Comment by countingtoten on Status Regulation and Anxious Underconfidence · 2017-11-17T10:02:34.989Z · score: 3 (3 votes) · LW · GW

Um, you just refuted a crackpot claim on the object level, using the kind of common-sense argument that I (a layman) heard from a physics teacher in high school. ETA: This may illustrate a problem with the neat, bright-line categories you're assuming.

On a similar note: I remember a speech given by a young-Earth creationist that I think differs from lesser crankdom mainly in being more developed. As the lie aged it needed to birth more lies in response to the real world of entangled truths. And while I couldn't refute everything the guy said - that's the point of a Gish Gallop - I knew a cat probably couldn't be a vegetarian.

Comment by countingtoten on Hero Licensing · 2017-11-17T09:49:06.861Z · score: 12 (4 votes) · LW · GW

I don't see it. Maybe you think fox epistemology wouldn't donate to MIRI, which is presumably what Eliezer cares about? But what he claims repeatedly is that we should judge situations just as you say, and he offers a way to do this.

Comment by countingtoten on Hero Licensing · 2017-11-17T09:42:26.878Z · score: 2 (1 votes) · LW · GW

So, if people want more social status, then their behavior in your narrative feels obviously wrong to me. Choosing that behavior feels like it would encourage others to slap their own efforts down. In practice, maybe few people share my decision procedure and I 'should' slap other status-seekers in order to make room for myself (though the latter doesn't strictly follow). But even if that's true, I don't think it informs my instinctive reaction. (I do pity physics cranks who don't inconvenience me personally or harm anything I care about. That loss meme always slightly horrified me, though I admit I don't know the guy's comic well.)

Are you arguing that most people don't seek increased status, or that they don't think this way?

I get that we tend to overestimate our suffering/work relative to that of others, but that doesn't automatically make us hate everyone who wants another dollar in their bank account. Does it?

Another puzzling feature of your diagnosis: if most people treat status as a resource like money, then why wouldn't they try to award it for service to their tribe? That feels like a natural compromise between status-seekers and those who want to stay big fish in some small pond. The alternative described in the OP seems, well, obviously cultish. It suggests a pond in which big fish claim divine right to rule (as opposed to, e.g., claiming their rule benefits all fish) and everyone goes along with this for some reason.

Comment by countingtoten on Against Modest Epistemology · 2017-11-14T23:26:45.335Z · score: 2 (1 votes) · LW · GW

I assume you believe you're awake because you've tried to levitate, or tested the behavior of written words, or used some other more-or-less reliable test?

Comment by countingtoten on In defence of epistemic modesty · 2017-11-08T23:32:38.241Z · score: 5 (4 votes) · LW · GW

OP seems like a good argument for the weak claim you apply to your own field, but then goes off the rails. For now I'll note two points that seem definitely wrong.

1:

Bayesian accounts of epistemology seem to go haywire if we think one should have a credence in Bayesian epistemology itself,

On a practical level this just seems false. On an abstract level probability doesn't deal with uncertainty about mathematical questions, but MIRI and others have made progress on this very issue. I think true modesty would lead you to see such issues as eminently solvable. (This is around the point where you seem to stop arguing for the standard you apply to yourself, on questions you care about, and start making more sweeping claims.)

I peripherally note that if you reject the notion of a degree of credence justified by your assumptions and evidence, you suddenly have a problem explaining what your thesis even means and why (by your lights) anyone should care. But I don't think you actually do reject it (and you haven't expressly questioned any other assumptions of Cox's Theorem or the strengthened versions thereof).

2:

(e.g. the agreement of the U.S. and German governments with the implied view of the physicists). This is a lot more involved, but the expected ‘accuracy yield per unit time spent’ may still be greater than (for example) making a careful study of the relevant physics.

This is partly an artifact of the example, but I do not think a layman at the time could get any useful information at all by your method - not without getting shot. Also, you forgot to include a timeframe in the question. This makes theoretical arguments much more relevant than usual (see also: cryonics). It doesn't take much study of physics to realize that a large positively-charged atomic nucleus could, in principle, fly apart. Knowing what that would mean takes more science, but Special Relativity was already decades old.

Comment by countingtoten on There's No Fire Alarm for Artificial General Intelligence · 2017-11-06T22:21:11.151Z · score: 2 (1 votes) · LW · GW

I want to pursue this slightly. Before recent evidence - which caused me to update in a vague way towards shorter timelines - my uncertainty looked like a near-uniform distribution over the next century with 5% reserved for the rest of time (conditional on us surviving to AGI). This could obviously give less than a 10% probability for the claim "5-10 years to strong AI" and the likely destruction of humanity at that time. Are you really arguing for something lower, or are you "confident" the way people were certain (~80%) Hillary Clinton would win?

Comment by countingtoten on The Copernican Revolution from the Inside · 2017-11-06T07:26:05.419Z · score: 5 (2 votes) · LW · GW

They had arguments about physics that the OP weirdly downplays. Like I said below: Copernicus disliked the equant because it contradicted the most straightforward reading of Ptolemy's own physics; Kepler unambiguously disproved scholastic physics. Also, Galileo discovered Galilean relativity. He definitely made enough observations to show this last idea had something to it, unlike the scholastic explanation of heavenly bodies.

Comment by countingtoten on Moloch's Toolbox (1/2) · 2017-11-06T05:28:38.752Z · score: 5 (2 votes) · LW · GW

He's not that libertarian in the political sense, though probably more than either of us.

Comment by countingtoten on The Copernican Revolution from the Inside · 2017-11-06T04:13:14.122Z · score: -3 (2 votes) · LW · GW

By completely ignoring physics until Galileo, you paint a deceptive picture.

In Aristotle's physics, each god inspired a different but equally regular and circular motion in the heavens.

Copernicus objected to the equant because it was not a regular circular motion. It just modified another circle, which seems like an obvious contradiction. If we treat it as a motion added to the system, it would be something like motion along a (rotating) radius. The planet would go back and forth in a straight line that happens to produce a modified circle. Now, we could imagine that all of these circles are conceptual rather than being actual motions added together. We could say that the deities involved compel the actual motion of the planet in its (single) crystal sphere to act as if influenced by other, imaginary circles. But that would seem to require a more active role for the deities, leading to awkward questions. That seems like the major reason why people called Copernicus more coherent and elegant.

Kepler - as you point out and then ignore - showed that all previous systems gave false predictions, and you could get true ones (according to the observations of the time) by using ellipses. That was the end of the Church's Aristotelian physics. At that point, their model of the heavens and physics in general was provably wrong.

Notice what Tycho Brahe's system doesn't have? Guess what was also missing from the chief attempt to defend Brahe against Galileo. Abandoning Aristotle's physics of perfect circles would have removed most of the actual reason for thinking the heavens and the Earth followed different rules to begin with.