Ah! That sounds like a great one!
So, folks like Chris Ferguson are presumably doing both activities (judging how much evidence they have as well as accurately translating brain estimates into numerical estimates).
But if I go find a consistently successful poker player who does not translate brain estimates to numerical estimates, then I could see how that person does on calibration exercises. That sounds like a fun experiment. Now I just need to get the grant money ...
Sidenote, but how would I narrow down to the successful poker players who don't translate brain estimates to numerical estimates? I mean, I could always ask them up front, but how would I interpret an answer like "I don't really use numbers all that much. I just go by feel." Is that a brain that's translating brain-based estimates to numerical estimates, then throwing away the numbers because of childhood mathematical scarring? Or is that a brain that's doing something totally outside translating brain-based estimates to numerical estimates?
Gatsby believed in the green light, the orgastic future that year by year recedes before us. It eluded us then, but that's no matter — tomorrow we will run faster, stretch out our arms farther... And one fine morning —
-- F. Scott Fitzgerald, The Great Gatsby
I always liked Fitzgerald's portrayal of what Something to Protect feels like.
Happy New Year's resolutions, all.
I'm having difficulty replacing your quotation with its referent. Could you describe an activity I could do that would demonstrate that I was judging how much evidence I have on a given issue?
Hey, that's me! I didn't think we had other LWers down here either. PM sent, let's meet up after the holidays.
My thought was that maybe the human decision maker has multiple utility functions, and when you try to combine them into one function, some parts of the originals don't necessarily translate well... it sounds like the "shards of desire" are actually a bunch of different utility functions.
I hereby request a research-filled thread on what to do when you feel like you're in this situation, which I believe has been called "welfare economics" in the literature.
It sounds like you're measuring your success by the impact you have on the person you are directly communicating with.
What happens if you measure success by your impact on the rest of your audience?
Interesting position! I can't speak for James, but I want to engage with this. Let's pretend, for the scope of this thread, that I made the statement about the proper role of skepticism.
I'm happy to endorse your wording. I agree it's more precise to talk about "claims" than "things" in this context.
Quick communication check. When you say "increased" you're implying at least two distinct levels of skepticism. From your assertion, I gather that difficult-to-measure claims like "there exist good leaders, people who can improve the performance of the rest of their team" will face your higher level of skepticism.
Could you give me an example of a claim that faces your lower level of skepticism?
[S]kepticism should be directed at things that are actually untrue rather than things that are difficult to measure.
The last question was asked for the first time, half in jest, on May 21, 2061 ...
Thank you. Heuristics like these are, I think, the meta-skill I'm trying to learn at the same time.
Thanks for sharing your experience!
In case you or any other LWers would find these interesting, here are some resources I've enjoyed:
- Dan Carlin's Hardcore History podcast. This is what got me hooked on history in the first place. [Edit: I see you mentioned this in the comments! Well ... seconded!]
- Philip Tetlock's Expert Political Judgment book. Really, anything that Tetlock has done (I think some LWers are involved in his Good Judgment Project). For my money, this is the steelman against using geopolitical insight for forecasting purposes.
I personally worry about moving from "reading history for insight" to "reading history for insight porn". What actions do you take to push back against that tendency?
Finally, FWIW, this sentence jumped off the page when I read it:
I don't read much fiction any more, because most fiction can't compete with the sheer weight, drama, and insightfulness of history.
There was a time in my life when I would have emphatically agreed. These days I have to disagree, though. I've taken to reading history for the experience of viewing the same events from multiple conflicting perspectives. I feel like it widens my set of available reference classes for common issues. Since shifting to view history as a "reference class generator" I've picked up literature as a "way of being in the world generator".
Note: Here's what I'm not saying. I'm not saying you or anybody else should have the same experience I do. I am saying to watch out for mind-projection at "most fiction can't compete with ... history". It's more accurate to say that your experience of most fiction can't compete with your experience of history ... which isn't really the same thing at all. Especially since you can probably change both experiences, either with some effort or just by waiting a while.
Done! Wish I had had a scanner handy going in, I'm curious about the digit ratio.
I'm curious about this "liquid water is wet" statement. Obviously I agree, but for the sake of argument, could you taboo "is" and tell me the statement again? I'm trying to understand how your algorithm feels from the inside.
If you're curious how to quantify fractions of statements, you might enjoy this puzzle I heard once. Suppose you're an ecological researcher and you need to know the number of fish in a large lake. How would you get a handle on that number?
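Spoiler, in case you want to check an answer later: the classic approach is mark and recapture. Tag a first catch, release it, and see what fraction of a second catch carries tags. A minimal Python sketch, with made-up numbers:

```python
import random

def lincoln_petersen(tagged, second_sample, tagged_in_second):
    # If tagged fish mix back in uniformly, the tag fraction in the
    # second catch approximates the tag fraction in the whole lake:
    #   tagged / N  ~=  tagged_in_second / second_sample
    # Solve for N. (Breaks down if no tagged fish are recaught.)
    return tagged * second_sample / tagged_in_second

# Sanity check against a known population of 10,000 fish, 500 tagged.
random.seed(0)
lake = [True] * 500 + [False] * 9_500
second_catch = random.sample(lake, 400)
print(lincoln_petersen(500, 400, sum(second_catch)))  # near 10,000
```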
After describing
blind certainty, a close-mindedness that amounts to an imprisonment so total that the prisoner doesn't even know he's locked up.
David Foster Wallace continues
The point here is that I think this is one part of what teaching me how to think is really supposed to mean. To be just a little less arrogant. To have just a little critical awareness about myself and my certainties. Because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded. I have learned this the hard way, as I predict you will, too.
There is a real joy in doing mathematics, in learning ways of thinking that explain and organize and simplify. One can feel this joy discovering new mathematics, rediscovering old mathematics, learning a way of thinking from a person or text, or finding a new way to explain or to view an old mathematical structure.
This inner motivation might lead us to think that we do mathematics solely for its own sake. That’s not true: the social setting is extremely important. We are inspired by other people, we seek appreciation by other people, and we like to help other people solve their mathematical problems.
The entire essay is a beautiful discussion of success and failure in practicing the art of mathematics. Changing the things that need to be changed, much of it applies to practicing the art of rationality.
Could you give this some more context? My reaction was to downvote.
The word "only" gives me vibes like "language exerts a trivial or insignificant influence on our consciousness". I don't know any of Kroetz's plays, but given that he is a playwright I feel like I'm getting the wrong vibe.
I'm only familiar with open source tools, but git will do this with "git diff --no-index --word-diff FILE1 FILE2" (the --no-index lets it compare two arbitrary files), and Emacs diff has the "ediff-toggle-autorefine" command. IMO you still need to insert line breaks before they become useful.
GNU has wdiff though I've never used it: https://www.gnu.org/software/wdiff/ (update: the git command above seems to do the same thing)
I'm still looking for an online diff tool that makes the word-level differences obvious. That would be ideal here (my web skills are too weak to make it happen this month).
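In the meantime, here's the rough fallback I'd sketch with Python's standard difflib (the word_diff helper and its markers are my own, chosen to mimic git's):

```python
import difflib

def word_diff(old: str, new: str) -> str:
    """Word-level diff: deletions as [-...-], insertions as {+...+}."""
    matcher = difflib.SequenceMatcher(a=old.split(), b=new.split())
    out = []
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op == "equal":
            out.extend(matcher.a[a0:a1])
        if op in ("delete", "replace"):
            out.append("[-" + " ".join(matcher.a[a0:a1]) + "-]")
        if op in ("insert", "replace"):
            out.append("{+" + " ".join(matcher.b[b0:b1]) + "+}")
    return " ".join(out)

print(word_diff("the quick brown fox", "the slow brown dog"))
# -> the [-quick-] {+slow+} brown [-fox-] {+dog+}
```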
Is there a convenient place to see just what changed from the old to the new?
Online diff tools aren't usefully handling the paragraphs when I copy-paste, and my solution of download -> insert line breaks -> run through my favorite diff program is probably inconvenient for most.
This is the most forceful version I've seen (I assumed it had been posted before, discovered it probably hasn't been, and won't start a new thread since it's too similar):
But by definition, there can’t be any particular feeling associated with simply being wrong. Indeed, the whole reason it’s possible to be wrong is that, while it is happening, you are oblivious to it. When you are simply going about your business in a state you will later decide was delusional, you have no idea of it whatsoever. You are like the coyote in the Road Runner cartoons, after he has gone off the cliff but before he has looked down. Literally in his case and figuratively in yours, you are already in trouble when you feel like you’re still on solid ground. So I should revise myself: it does feel like something to be wrong. It feels like being right.
But I'm not comfortable endorsing either of these quotes without a comment.
chipaca's quote (and friends) suggest to me that
- my "being wrong" and "being right" are complementary hypotheses, and
- my subjective feelings are not evidence either way.
Schulz's quote (and book) suggest to me that
- my "being wrong" is broadly and overwhelmingly true (my map is not the territory), and
- my subjective feeling of being right is in fact evidence that I am very wrong.
I'd prefer to emphasize that "You are already in trouble when you feel like you’re still on solid ground," or said another way:
Becoming less wrong feels different from the experience of going about my business in a state that I will later decide was delusional.
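To make my gloss concrete in likelihood terms (my framing, not either author's): chipaca's reading says the feeling carries no information, while Schulz's says it points, weakly, the wrong way.

```latex
% chipaca's reading: the feeling is not evidence either way
\frac{P(\text{feels right} \mid \text{right})}{P(\text{feels right} \mid \text{wrong})} = 1
% Schulz's reading: the feeling is itself evidence of being wrong
\frac{P(\text{feels right} \mid \text{right})}{P(\text{feels right} \mid \text{wrong})} < 1
```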
I'm more an outsider than a regular participant here on LW, but I have been boning up on rhetoric for work. I'm thrown by this in a lot of ways.
I notice that I'm confused.
Good for private rationality, bad for public rhetoric? What does your diagram of the argument's structure look like?
As for me, I'd want this to be the most important conclusion in the summary.
But in fact most goals are dangerous when an AI becomes powerful
As written, I don't get that effect: the evidence for the statement comes after it, and later on it is restated in a diluted form:
goals that seem safe ... can lead to extremely pathological behaviour if the AI becomes powerful
Do you want a different statement as the most important conclusion? If so, which one? If not, why do you believe the argument works best when structured this way? As opposed to, e.g., an alternative that puts the concrete evidence farther up and the abstract statement "Most goals are dangerous when an AI becomes powerful" somewhere towards the end.
Related point: I get frequent feelings of inconsistency when reading this summary.
- I'm encouraged to imagine the AI as a super committee of
Edison, Bill Clinton, Plato, Oprah, Einstein, Caesar, Bach, Ford, Steve Jobs, Goebbels, Buddha, etc.
- then I'm told not to anthropomorphize the AI.
Or
- I'm told the AI's motivations are what "we actually programmed into it",
- then I'm asked to worry about the AI's motivation to lie.
Note that I'm talking about a rhetorical, a/k/a surface-level, feeling of inconsistency here.
You seem like a nice guy.
Let's put on a halo. Isn't the easiest way to appear trustworthy to first appear attractive?
I was surprised this summary didn't produce emotions around this cluster of questions:
- Who are you?
- Do I like you?
- Do I respect your opinion?
Did you intend to skip over all that? If so, is it because you expect your target audience already has their answers?
Shut up and take my money!
There are so many futuristic scenarios out there. For various reasons, these didn't hit me in the gut.
The scenarios painted in the paragraph that starts with
Our society is setup to magnify the potential of such an entity, providing many routes to great power.
are very easy for me to imagine.
Unfortunately, that works against your summary for me. My imagination consistently conjures human beings.
- Wall Street banker.
- Political lobbyist for an industry that I dislike.
- (Nobody comes to mind for the "replace almost every worker in the service sector" scenario.)
- Chairman of the Federal Reserve.
- Anonymous Eastern European hacker.
The feeling that "these are problems I am familiar with, and my society is dealing with them through normal mechanisms" makes it hard for me to feel your message about novel risks demanding novel solutions. Am I unique here?
Inversely, the scenarios in the next paragraph, the one that starts with
Of course, simply because an AI could be extremely powerful
are difficult for me to seriously imagine. You acknowledge this problem later on, with
Humans don’t expect this kind of behaviour
Am I unique in feeling that as dismissive and condescending? Is there an alternative phrasing that takes into account my humanity yet still gets me afraid of this UFAI thing? I expect you have all gotten together, brainstormed scenarios of terrifying futures, trotted them out among your target audience, kept the ones that caused fear, and iterated on that a few times. Just want to check that my feelings are in the minority here.
Break any of these rules
I really enjoy Luke's post here: http://lesswrong.com/lw/86a/rhetoric_for_the_good/
It's a list of rules. Do you like using lists of rules as springboards for checking your rhetoric? I do. I find my writing improves when I try both sides of a rule that I'm currently following / breaking.
To what nugget of rationality does this point?
The idea that a self-imposed external constraint on action can actually enhance our freedom by releasing us from predictable and undesirable internal constraints is not an obvious one. It is hard to be Ulysses.
-- Reid Hastie & Robyn Dawes (Rational Choice in an Uncertain World)
The "Ulysses" reference is to the famous Ulysses pact in the Odyssey.
While I don't read scientific literature that much, I do make formal predictions pretty often. Typically any time I notice something I'm interested in that will be easy to check in the future.
Will I get to bed on time today? Will I be early for the meeting tomorrow? Etc.
I second the anecdotal evidence that this is a "live" exercise. Sidenote: it took me way too long to realize I needed to write all my predictions down. I spent a few weeks thinking I was completely excellent at predicting things.
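For anyone curious what writing them down might look like, here's a minimal Python sketch (the log entries are invented, and Brier scoring is just one standard way to grade them):

```python
# Each entry: (claim, stated probability, what actually happened).
log = [
    ("In bed by 23:00 tonight", 0.8, True),
    ("Early for tomorrow's meeting", 0.9, False),
]

# Brier score: mean squared gap between probability and outcome.
# 0.0 is perfect; always saying 50% scores 0.25.
brier = sum((p - outcome) ** 2 for _, p, outcome in log) / len(log)
print(f"Brier score over {len(log)} predictions: {brier:.3f}")
```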
I endorse (with the possibly-expected caveat about Wilson score ranking).
Unfortunately, I can't (don't know how to?) hack the LW backend. Is that something I can look into?
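For reference, the caveat is about ranking by the lower bound of the Wilson score interval rather than by raw score. A minimal Python sketch, assuming simple up/down vote counts (function name and defaults are mine):

```python
import math

def wilson_lower_bound(upvotes, total, z=1.96):
    """Lower bound of the Wilson score interval on the true upvote rate
    (z = 1.96 is roughly 95% confidence). Sorting by this keeps a lone
    +1 comment from outranking one sitting at 95 of 100."""
    if total == 0:
        return 0.0
    p = upvotes / total
    denom = 1 + z * z / total
    center = p + z * z / (2 * total)
    spread = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (center - spread) / denom

print(wilson_lower_bound(1, 1))     # ~0.21
print(wilson_lower_bound(95, 100))  # ~0.89
```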
I beseech you, in the bowels of Christ, think it possible that you may be mistaken.
-- Oliver Cromwell
Previously posted two years ago. I'm curious if some things bear repeating. Is there any accepted timeframe for duplicates?
That's an interesting prediction. Have you tried it? Can you predict what you'd do after filling the notebook?
In my imagination, I'd probably wind up in one of two states:
- Feeling tricked and asking myself "What was the point of that?"
- Feeling accomplished and waiting for the next instruction.