Comments
The game aspect is trying to get a higher "score" of hi-fives at the end of each day. Sort of like Tetris or Bejeweled where you always run out of space/time eventually, but can play again to improve your score.
When working in a textiles warehouse I would make it fun by imagining someone I'd met walking down a familiar street while showing off the shirt/hat/etc. I had just sorted/tagged/profiled, in a ridiculous fashion-show montage, then turning to me with a smile and a wink or thumbs-up and saying, "Thanks, man!" or similar after I finished X items, depending on the day's quota. The person would then step into a crowd behind me cheering me on, whom I would imagine turning around and hi-fiving one at a time after arbitrary milestones to celebrate my progress.
To come up with this idea I asked myself who would be disappointed if no one in the world were willing to do any job resembling mine anymore and what would they be losing, then optimized the generated examples for salience and awesomeness.
Took the entire survey and all extra credit questions in one go; minus the ACT, SAT2400, and Respectable(tm) IQ scores, since I don't have them, and the ≤140-character LW description, because I was starting to get tired after the 40-minute IQ test.
So much fun! I'm very curious to see the results.
Meetup report! We had a total of 4 attendees plus a well-behaved infant. Much lower than usual, but not unexpected due to scheduling issues.
Meta-meetup discussion: Nominated a planner for the next meetup. It has been suggested that, in the future, if an organizer/presenter cannot make it to their meetup, it not be postponed unless at least 3-7 days' notice can be given (since not everyone checks their email, Facebook, and/or the Less Wrong posting daily).
Presentation: Skipped in favor of scouting the area as a location for future meetups. While I'm currently re-working the whole thing (found some critical flaws), it should be ready again by the meetup after next if there is interest.
Location Impressions: While the Wine Loft's menu isn't designed for eating a full meal, the indoor seating and ambiance are great for running group exercises or just being social. Very lounge-like, with couches, cushions, and easily moveable, low-to-the-ground tables. The outdoor seating is a little cramped and loud for my tastes, as it's small and adjacent to the main thoroughfare off the expressway, but well shaded and cool this time of year. The place is also mostly empty on Sundays prior to 9 p.m., so we should be able to conquer a nook fairly easily even without a reservation; and that shouldn't be a problem as long as our expected group size doesn't fall below 6.
The Greene itself is pleasant to walk through, with wide sidewalks, lampposts, outdoor cafés, wall art, and non-repeating architecture. There is a small patch of greenery in the center which hosts events, some of them musical. There is a Books & Co. just across the alley from the Wine Loft: a spacious, two-story bookstore with a podium and seating for a presentation area, should we decide to run events for the public (such as educational material for CFAR) or start an ancillary Less Wrong book group. There is also a Funny Bone comedy club nearby that has shows every Sunday at 7 p.m., though I don't know how good the performers are.
Food choices in the area tend toward the upscale, but Choe's Asian Gourmet seems the most promising in terms of both price and the menu preferences of which I've been made aware. For future meetups I'd recommend having dinner there, then migrating to the Wine Loft for drinks, planning and rationality games and exercises.
Though we missed many of our regulars, a good time was had and much data gathered!
An example of "having the child occupied by some solitary activity" from my past: Almost as soon as I could walk, my parents started sending me on quests to find and retrieve various items throughout grocery stores, then put them back and find another if they weren't quite what was asked for. Wasted almost none of their time while keeping me entertained and feeling (while learning to be) useful to them in that context.
...you should NOT paint your room and lose your deposit if you are not decently-off financially.
Unless the apartment owners and managers only care what it looks like when you leave and you can afford to add a few layers of white base paint just before doing so, to avoid losing the deposit. Such policies are often clearly delineated in the lease contract, and you can sometimes negotiate leniency with the management as long as you do so in writing and have it attached to the contract pre-signature. YMMV
I was not aware of this rumor. How did you come to the conclusion it is widespread, and why do you think it's worth taking seriously?
For those who've never used a command line interface and find them intimidating (one of my hurdles on the way to learning to program), I'd recommend Learn Code the Hard Way: The Command Line Crash Course. The exercises are designed to trip you up and force you to figure some things out for yourself, which has quickly increased my confidence and self-reliance so far.
I have not finished the book, but am already getting slightly addicted to "commanding" my computer to do my bidding instead of having to dig my way through Windows Explorer and context menus to get anything done. Am I right in thinking this may be good prep for migrating to Linux?
This wasn't done. "My enemy is status signalling" is a moderately effective general purpose attack against positions one doesn't like but doesn't apply here (except in the Hansonian "Everything is Signalling" sense.)
I don't consider Vaniver an enemy, but will forgo brevity and taboo "status" to better show where I think I disagree with you:
I agree with the content of the message: that frivolous use of the word "rationality" and its conjugates in post titles needs to be curtailed and prevented.
I object to that message's delivery, which seems to me to imply that an acceptable reaction to those who make that mistake is, "That was so stupid, I'm not even going to explain why you're wrong. Just do what I say." That they're worth little enough to the community as to make them acceptable targets of public ridicule. If I had made the mistake, I would feel alienated by this.
And this isn't relevant. In fact, familiarity with the sequences would be in some ways negatively useful in the context (given that it may give the assumption that such usages of Rational in titles was the endorsed norm.)
You're right. What I meant was closer to, "insufficiently exposed to those portions of the sequences that warn against improper uses of words as to have internalized a certain level of caution about how they communicate," but I hadn't recalled the confounding counterexamples you reference (as mentioned here) at the time.
I also notice that "misinterpreting the joke" has little to do with my actual objection and will amend the great-grandparent accordingly. Thank you for prompting me to clarify.
Is the temporary amusement of some at the sniping of those others' status worth potentially alienating them from the community, even if they number less than "most"? I do not want such "ridicule of the less socially experienced and/or quick to read sequences" norms to become prevalent here.
Downvoted because, while I agree with the content of the message [1], I object to the way it was delivered, which seems to me to imply that an acceptable reaction to those who make the mistake is, "That was so stupid, I'm not even going to explain why you're wrong. Just do what I say." That they're worth little enough to the community as to be acceptable targets of ridicule. If I had been publicly admonished in this way, I would feel alienated.
[1] Frivolous use of the word "rationality" and its conjugates in post titles needs to be curtailed and prevented.
Edited to clarify. (Thanks, wedrifid!) Original text follows for context, but please disregard.
Downvoted for status signalling at the expense of newcomers who can reasonably be expected to not have read A Human's Guide to Words yet, without at least linking to an accessible explanation for those who might misinterpret the joke.
Seconding "The Tripods Trilogy" by John Christopher. It was my introduction to sci-fi and had a strong emotional impact.
I found this person's anecdotes and analogies helpful for thinking about self-optimization in more concrete terms than I had been previously.
A common mental model for performance is what I'll call the "error model." In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.
But we could also consider the "bug model" of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you'll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can't be quantified along a single axis as less or more severe. A bug gets everything that it affects wrong. And fixing bugs doesn't improve your performance in a continuous fashion; you can fix a "little" bug and immediately go from getting everything wrong to everything right. You can't really describe the accuracy of a buggy program by the percent of questions it gets right; if you ask it to do something different, it could suddenly go from 99% right to 0% right. You can only define its behavior by isolating what the bug does.
Often, I think mistakes are more like bugs than errors. My clinkers weren't random; they were in specific places, because I had sub-optimal fingerings in those places. A kid who gets arithmetic questions wrong usually isn't getting them wrong at random; there's something missing in their understanding, like not getting the difference between multiplication and addition. Working generically "harder" doesn't fix bugs (though fixing bugs does require work).
Once you start to think of mistakes as deterministic rather than random, as caused by "bugs" (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.
You stop thinking of people as "stupid."
Tags like "stupid," "bad at _", "sloppy," and so on, are ways of saying "You're performing badly and I don't know why." Once you move it to "you're performing badly because you have the wrong fingerings," or "you're performing badly because you don't understand what a limit is," it's no longer a vague personal failing but a causal necessity. Anyone who never understood limits will flunk calculus. It's not you, it's the bug.
This also applies to "lazy." Lazy just means "you're not meeting your obligations and I don't know why." If it turns out that you've been missing appointments because you don't keep a calendar, then you're not intrinsically "lazy," you were just executing the wrong procedure. And suddenly you stop wanting to call the person "lazy" when it makes more sense to say they need organizational tools.
"Lazy" and "stupid" and "bad at _" are terms about the map, not the territory. Once you understand what causes mistakes, those terms are far less informative than actually describing what's happening.
Web app idea: I'm posting this comment immediately and without editing so I don't forget the idea before I get a chance to write it down/work it out more, as I have to leave my computer soon.
- Display a short passage that illustrates something irrational that people do or think, with instructions for the reader to enter into a text box the first or most important thing that came to mind and then press a "ready" button.
- Ready button reveals a question of the form, "Were your thoughts similar to any of the following?" with a list of questions/remarks you would hope a rationalist would (or wouldn't) ask/make.
- Yes/No buttons save text box and button answers, clear the text box and question fields and replace the passage with a new one.
No priming by reading questions before passages. Writing their thoughts before seeing the question will hopefully keep people honest. Saving text box with answers allows answer auditing. Each passage's irrationality may be more or less obvious depending on a person's background. Same with desired/undesired thinking examples with questions (that's what we're measuring with this though, isn't it?).
Positive example question: Yes = +1, No = 0
Negative example question: Yes = -1, No = 0
With an even split between positive and negative example questions, a perfect rationalist should score half the number of questions asked. More questions answered = more confidence in the estimate. Wider range of topics addressed in the questions = more confidence in the estimate.
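A minimal sketch of that scoring rule, just to pin down the arithmetic (the function and data shapes are hypothetical, not part of the app spec):

```python
def score_responses(responses):
    """responses: list of (question_type, answered_yes) pairs, where question_type
    is 'positive' (a thought we hope the reader had) or 'negative' (one we hope
    they didn't). Positive: Yes = +1, No = 0. Negative: Yes = -1, No = 0."""
    total = 0
    for question_type, answered_yes in responses:
        if question_type == 'positive' and answered_yes:
            total += 1
        elif question_type == 'negative' and answered_yes:
            total -= 1
    return total

# With an even positive/negative split, a perfect respondent answers Yes to every
# positive question and No to every negative one, scoring half the questions asked.
sample = [('positive', True), ('negative', False), ('positive', True), ('negative', True)]
print(score_responses(sample))  # 1 + 0 + 1 - 1 = 1
```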
Edited to add: I created a storyboard for the app's testing process here and have started a list of example passages with desired/undesired responses here.
Happiness, as a state of mind in humans, seems less to me about how strong the "orgasms" are than how frequently they occur without lessening the probability they will continue to occur. So what problems might there be with maximizing total future happy seconds experienced in humans, including emulations thereof (other than describing with sufficient accuracy the concepts of 'human' and 'happiness' to a computer)?
I think doing so would extrapolate to increasing population and longevity to within resource constraints and diminishing returns on improving average happiness uptime and existential risk mitigation, which seem to me to be the crux of people's intuitions about the Felix and Wireheading problems.
After sleeping on this and thinking about it all day at work, I made a game. I'd like to make the wording more ritualistic and provide descriptive play examples eventually, but here's a good untested first draft: (Note: I did not read any comments before posting this.)
ETA: I will be editing the rules to remove/redirect perverse incentives and add classroom/tournament formats and examples, but may not always have this post completely up to date. The most recent version can always be found here.
The Un-Naming for 2-6 players
Materials:
- A deck of words (stack of notecards, dictionary/internet, pen)
- Tokens (poker chips, M&M's, pennies, etc.)
- A timer (egg timer, mini-hourglass, mobile phone app, etc.)
- A lighter and ashtray (see optional rules)
Rule 1: Unless stated otherwise within the rules, players must never read any of The Un-Naming’s cards aloud, sign their translation or interpretation via gesture, or reveal them face up to each other (cards should be embossed with braille when used with visually handicapped players).
Rule 2: Choose a player to go first by any arbitrary means available. This person will henceforth be referred to as the Describer.
Rule 3: Choose a method of passing the Describer position from one player to the next, such that all players hold the position an equal number of times during this session of the game.
Rule 4: The Un-Naming has at least four phases: Description, Abstraction, Concretization, and Passing The Torch. A single repetition of each phase will henceforth be referred to as a Cycle.
Rule 5: Choose a maximum time for each Cycle to last. At the beginning of each Cycle the Describer will set a timer for this amount.
Rule 6: If the timer goes off before a Cycle is complete, the players must finish the phase they’re on and then move directly to Passing The Torch.
Rule 7: The Describer may tell a player to “Stop,” move on to another player, then return immediately to the previous player if that player hesitates too long before providing a description.
Rule 8: Choose a maximum number of Cycles to complete this session, after which tokens will be counted and the game will end. This number must be a multiple of the number of players.
Rule 9: Play starts with the Description phase. Follow all phase instructions until the game ends.
Optional Rule 10: The players with the fewest tokens after all cycles have completed lose their names for the next hour (or the rest of the social event) and cannot be referred to by them.
Description:
- The Describer starts the Cycle timer, draws a word from the deck, then describes one usage of the word without uttering it or any direct synonyms (see Rationalist Taboo), in that order.
- Proceed to Abstraction or Concretization.
- This phase may only occur once per Cycle.
Abstraction:
- The Describer points to another player and utters the word “Abstract.”
- The designated player describes a category of which e believes the previously described usage to be an example without uttering the words for, or any direct synonyms of, either.
- If the Describer believes the described category matches the usage e described, e hands the designated player a Token.
- Repeat the previous three steps until all players other than the Describer have abstracted.
- Proceed to Concretization or Ascension.
- This phase may only occur once per Cycle.
Concretization:
- The Describer points to another player and utters the word “Concretize.”
- The designated player describes what e believes to be an example of the previously described usage without uttering the words for, or any direct synonyms of, either.
- If the Describer believes the described example matches the usage e described, e hands the designated player a Token.
- Repeat the previous three steps until all players other than the Describer have concretized.
- Proceed to Abstraction or Descension.
- This phase may only occur once per Cycle.
Passing the Torch
- Once Description, Abstraction and Concretization have occurred, the Describer hands the Cycle timer to the next player in line for the position, but keeps the Word card on his person for tallying purposes.
- The new Describer starts the next Cycle. Optional: The Describer burns his Word in effigy before passing the lighter, ashtray and Cycle timer along.
Ascension: Optional Phase (replaces Passing The Torch and Description)
- If all other players are handed tokens during the Abstraction phase, the Describer may declare “Ascension!”
- The Describer chooses another player’s category description to be a description of the next Word and that player becomes the Describer as though Passing The Torch.
- The new Describer writes the new Word on a notecard, re-starts the Cycle timer and repeats the category description. This counts as a new Cycle.
- Proceed to Abstraction.
Descension: Optional Phase (replaces Passing The Torch and Description)
- If all other players are handed tokens during the Concretization phase, the Describer may declare “Descension!”
- The Describer chooses another player’s example description to be a description of the next Word and that player becomes the Describer as though Passing The Torch.
- The new Describer writes the new Word on a notecard, re-starts the Cycle timer and repeats the Example description. This counts as a new Cycle.
- Proceed to Concretization.
I think the most difficult part of implementing this will be finding words that will place the group near the middle of the abstraction-concreteness lattice. Primary colors and emotional states should work well as a starting point.
Yep! I and my father will be going anyway.
After some thought, I hereby create Max Agency! Plucky comic superhero mascot of Zenith Agency (Z.A. Huzzah!) ...for Consequential Action (Z.A.C.A.) The acronym for which happens to be Max's battlecry, but only when shouted in triplicate of course!
Now that I have a word, the idea of an agency without agents (only aspiring agents) tickles me tremendously.
Other thoughts: Agency Institute for Rationality Training (A.I.R. Training)
Agency Foundation for Applied Rationality (A.F.A.R.)
I like how SIAI's name references both the event you're working toward and the method of achieving it. Is there a single word that describes a watershed event that would indicate the rationality institute's direct success, like "Singularity" does an intelligence explosion? That supporters could rally around and label themselves by (singularitarian)? A word for approximating the ideal Bayesian updater, for felling akrasia, for actually changing one's mind? Can we create or annex one?
Exaltation, Transcendence, Apotheosis, Enlightenment, Upload, Elevation, Laudation, Upgrade, Epiphanic, and Ideate come to mind, but what I'm looking for is something more like "the act (event) of becoming your best self" in a word. Too many of these have strong religious connotations for me.
Super Sapiens! ...I mean sapience.
Adroit Acumen
Elevated Erudition
Superb Sagacity
Crack Contemplation
In this case it's redirecting minds. That's the ultimate goal isn't it?
Now that would be completely unacceptable indeed. Is, say, being on the business end of the mental health system in the worst way possible something like that? For myself, I don't consider a life with something like that to be worth living.
So, the only reason you're still alive is that you haven't bothered (or been able) to verify whether you've forgotten thoughts you don't remember having had? My sympathies.
Born and raised in Price Hill on the edge of Delhi. I have no recent close-up photos of myself, but you can probably find an old one by googling my username. Otherwise I'll be the pale, nearsighted ginger with a ponytail, and some pi on his shirt.
Holding off on proposing locations. I am not familiar with the northern half of Cincinnati.
Ooh! This excites me. I'll start looking at possible venues here in Cincy when I get off work today. I can also ping the local skeptic and atheist meet-up groups to see if there are any LessWrong readers among them who missed the poll and this posting (as I almost did) and have them reply.
Elena Huston - Future In My Hands: An anthem against status quo bias, the sunk cost fallacy, and appeals to authority (interpreting each even quatrain as a denigration of the prior odd quatrain).
I don't know what donating my time to SI would entail other than writing, so I find it difficult to imagine in a positive frame. I may be able to get around this by training myself on the five-second level to instead mentally contrast a charity's desired future outcomes with the present (or your favorite charity's desired future outcomes, when tempted to switch) when asked, but how many others in my position will do so?
So where can I find anecdotes about how awesome and fun it is to be saving the world through FAI research and how rewarding it is to see your work have a direct impact, so I have something vicariously available to imagine when you ask me to donate my time?
If you have three arbiters and require at least two of them to be party to any transaction and to the creation of new arbiters, one can be a trusted or paid third party without risking theft, account freezes, or unauthorized arbiter creation, and you can safely recover from losing a single device.
I am ignorant of the details necessary to implement this and how difficult it might be.
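To make the 2-of-3 idea concrete, here is a toy threshold-check sketch; the arbiter names and function are made up for illustration, and this is not an actual Bitcoin multisig implementation.

```python
# Hypothetical arbiter names for illustration only; this sketches the 2-of-3
# threshold logic, not real Bitcoin multisig machinery.
AUTHORIZED_ARBITERS = {"device_wallet", "backup_wallet", "escrow_service"}
REQUIRED_SIGNERS = 2  # 2-of-3 threshold

def action_approved(signers):
    """A transaction, or the creation of a new arbiter, goes through only if
    at least two of the three recognized arbiters have signed off on it."""
    return len(set(signers) & AUTHORIZED_ARBITERS) >= REQUIRED_SIGNERS

print(action_approved({"backup_wallet", "escrow_service"}))  # True: losing one device is recoverable
print(action_approved({"escrow_service"}))                   # False: a single arbiter cannot act alone
```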
There is no problem with "Munchkinism." The problem is that in old RPGs the rules imply poorly designed tactical battle simulation games with some elements of strategy (witness the lack of challenge once the system is fully understood), while the advertising implies a social interaction and story-telling game without giving the necessary rules to support it. Thus different people think they're playing different games together, and social interaction devolves into what people imagine they would do given a hypothetical situation without consequences (at least until the consequences are made explicit, violating their expectations as you note in your example).
Yes, that would be fair. Are you aware of any good methods for learning and practicing to be more concise?
On top of that, I expect there are already plenty of non-native, dedicated translators and interpreters for a given language gap. Oops, thank you both.
Oops. I realize now that I was confusing the definition of belief used here with the definition used for the game (a principled to-do list), so the idea isn't as applicable as I originally thought, but I'll try to answer you anyway.
As a player you can change your character's beliefs almost as often as you like and the game rewards you for tailoring them to the context of each scene you enact, with different rewards depending on whether you act in accordance with them or undermine them (this encourages you to have conflicting beliefs, which increases the drama of the shared story). Then, between game sessions, all players involved nominate those beliefs you appear never to undermine for promotion to trait-hood (indicating you've fulfilled your character's goals and they no longer need testing), and those you appear always to undermine for changing. Traits often give game mechanical bonuses and penalties, but can take almost a full story arc of deliberate undermining before being nominated for change.
Conflict in the game is handled in a very specific way. You describe your intent (what you want your character to achieve in the story) and how it is achieved; the GM declares the skill rolls or other game mechanics required and sets the stakes (consequences for failure). If neither the GM nor any of the players can think of an interesting direction a failed roll could take the story in, then no roll is made, you get what you wanted, and the group moves on to the next, more interesting, conflict. Otherwise, the stakes are negotiated and you choose whether to roll or change your mind. Once a roll is made, its results are irreversible within the fiction.
To a large degree it is up to the GM to create interesting and painful stakes with which to challenge your beliefs, so your mileage will vary.
for just about any language there are huge numbers of native speakers who speak professional-level English
Exception: Sign Languages, though they have relatively small populations.
Re-reading this post reminded me of Burning Wheel, a tabletop role-playing game whose reward system actively encourages questioning, testing, and acting upon the goals and beliefs of a fictional character created by the player, but simultaneously and subversively places the character in complex situations that force the player to change those beliefs over time as a result of the conflicts they cause (and somewhat according to chance). The player has to accept that his character may become something completely alien to how it started during the course of play, yet continue to empathize with it in order to be rewarded for acting out its actions in the fiction.
Would (re)designing such a game around further encouraging elements of rationality be too close to Dark Arts? (Luke Crane, the game's creator, sometimes speaks about game design as a form of mind control at the gaming conventions he frequents.)