Any chance you'd release this under a non-copyleft license?
But also note that while the past may be fixed, your knowledge of the past is probabilistic. I assume there is evidence you could encounter that would convince you that Putin ordering airstrikes in Syria didn't actually happen.
I take notes on my phone. Compared to a paper notebook, I think the big tradeoffs are that writing is less easy, but the phone is more convenient, takes up less physical space, and produces an end result that is easier to back up and work with in many ways.
Is human mind space finite? Just bring us all back please, I'll be in there somewhere.
I don't think there's a precise threshold, but when I use the phrase "change the world", I'm pretty confident that my interlocutor is thinking of people like Steve Jobs and companies like Apple. They are not thinking of people like me, who don't have our own Wikipedia articles, or of companies like the ones I work for, which don't have widely recognized names and aren't credited with inventing or popularizing important product categories that millions of people now use every day.
People who "change the world" make big political, technological, or scientific changes and bring them into the lives of many people.
I am a recent graduate of the University of Toronto, where talks on campus that are viewed as opposing or questioning feminism have had their advertisements torn down, while mobs organized by the student union show up to harass and physically block attendees and take other disruptive actions like pulling fire alarms. I expect this would generalize to suppression of other forms of un-PC speech and thinking.
That said, the administration at UofT seems to respond to these incidents more reasonably than the UCLA administration in the article (i.e. they didn't thoughtlessly capitulate), and my experience from taking courses across science, social science, and humanities faculties is that the atmosphere in general is definitely not extreme to the level of fire alarm pulling. I would guess that the extreme elements are mostly local to a small number of particular academic subject areas like women's studies, but that this minority has significant power to influence what is acceptable speech and thought.
I used to be creeped out by house centipedes, but I decided to get along with them after reading that they are generally harmless to humans and useful to have around because they kill all sorts of other household pests.
I think just remembering that they are a good thing and thinking of them as being on my team was helpful. I also gave cool names to the ones living in my basement (e.g. Zeus, Odin, Xerxes) and talked to them e.g. "Hi, centipedes. Keep up the good work, but please do try to stay away from me during the day, and remember our deal: you can live here, but my species has an ingrained fear of you guys, so if you drop down onto me from the ceiling or something I'm probably going to instinctively smash you."
My worry in that case would be present conditions bleeding into the memory and evaluation of those earlier pings. For example, I'd expect that when you're hungry your relative ranking of past ping moments is going to change to more heavily weigh moments when you were eating.
Your school might have useful resources. If there is a career center, go there and see what kind of resources and help are available. There could be a student internship program, student job boards, career fairs, etc. Professors sometimes have work opportunities as well (they might announce these, or you may have to ask).
I've read that the CEO of Levi's recommends washing jeans very infrequently.
Won't they smell? I have a pretty clean white-collar lifestyle, but I'm concerned about wearing mine even once or twice between machine washing. Is it considered socially acceptable to re-wear jeans?
Did the survey. Thank you once again, Yvain.
Did the survey. Thanks, Yvain.
Instrumental stupidity is the art of winning at Pinkie Pie's utility function.
Looks like a very useful list. One comment: I found the example in 2(a) a bit complicated and very difficult to parse.
Took the whole survey. My preferred political label of (Radical) Centrist survived all explicit radio buttons.
The phrasing I used there is indefensible, but the general idea I'm trying to get at is that many people acquire tons of things which in theory increase their power but in practice don't, because they are never on hand when needed. Added to this are the tons of things many people acquire whose uselessness goes unnoticed because of a general failure to criticize potential acquisitions for power-increasing ability at all.
My default position towards things is hate. I hate stuff. It gets dusty, it has to be managed, it takes up space. A room with lots of stuff in it is cognitively difficult for the brain to process; having lots of stuff around actually drains your mental energy.
http://www.paulgraham.com/stuff.html
I generally dislike owning things that I can't physically carry with me at all times (because "if you don't have something with you, owning it probably hasn't made you more powerful"). Consequently, the majority of what I own I carry with me. The only real exceptions are clothes and books, and I launched a project this week to replace the latter with digital versions which is moving at a decent rate of ten shelves a day.
Aren't comics like that the source of the cached thought we're trying to improve on here?
Celestia here doesn't seem to be having fun.
I think this is the big one. Sure, Celestia says that death is bad. She also describes her life as prolonged suffering and says that she envies mortals because immortals have purpose but don't actually live. The opinions and example of Celestia aren't necessarily to be taken as the theme of the work itself, but I can understand why people might be confused.
*Arrives late to the party*
Really great story, iceman. Some comments:
* Running the story through a beta group of non-LW bronies would definitely be a good idea to catch which ideas may need more explanation.
* I really like how it's repeatedly shown that when you interact with a super-intelligence, even if it's just free conversation, the state of mind you leave in is probably going to be the state of mind it wants you to leave in. As others have said, this could be driven home even more strongly by showing CelestAI strongly tailoring her interaction to different humans. Maybe even have her directly contradict herself in the information she uses to persuade people to upload.
* Related to the above, I can imagine a non-LW reader getting to chapter 5 and forever after wondering why Lars or anyone else never tries to shut Celestia down. I'm not sure how obvious an intuition it is that by the time you notice a super-intelligence doing things that make you want to stop it, it's probably already far too late to stop it. In this case I assumed Celestia would have made sure that she is already invincible by the standards of human technology before launching any plan that's going to antagonize people, but an unsophisticated reader might not get that.
* This may just be me, but I'd prefer a bit more closure on the cosmic-scale story. What *is* Celestia's plan against running out of matter? Slow the clock speed of her servers over time? Bust open the physics textbooks and hope for a useful paradigm shift?
Anyway, very good stuff.
Daily I measure weight and workout performance.
Monthly I look at my receipts and spending and whether there were any large deviations from my budget.
Every six months I pick an unexceptional week and log everything I do by category (e.g. sleeping, preparing food, working, studying). I create summary tables for the typical week and use them to choose two or three improvements to implement in the next iteration.
Pragmatically, if I non-blindly pick some representation of Turing machines that looks simple to me (e.g. the one Turing used), I don't really doubt that it's within a few thousand bits of the "right" version of Solomonoff induction, whatever that means.
Why not?
So is there then a pragmatic assumption that can be made? Maybe we assume that if I pick a Turing machine blindly, without specifically designing it for a particular output string, it's unlikely to be strongly biased towards that string.
Solomonoff Induction is supposed to be a formalization of Occam’s Razor, and it’s confusing that the formalization has a free parameter in the form of a universal Turing machine that is used to define the notion of complexity.
I'm very confused about this myself, having just read this introductory paper on the subject.
My understanding is that an "ideal" reference UTM would be a universal turing machine with no bias towards any arbitrary string, but rigorously defining such a machine is an open problem. Based on our observation of UTMs, the more arbitrary simplifications a Turing Machine makes, the longer its compiler will have to be on other UTMs. This is called the Short Compiler Assumption. Since we can't agree on a single ideal reference UTM, we instead approximate it by limiting ourselves to a class of "natural" UTMs which are mutually interpretable within a constant. The smaller the constant, the less arbitrary simplification the UTMs in the class will tend to make.
This seems to mesh with the sequences post on Occam's Razor:
What if you don't like Turing machines? Then there's only a constant complexity penalty to design your own Universal Turing Machine that interprets whatever code you give it in whatever programming language you like.
What I'm confused about is this constant penalty. Is it just "some constant" or is it knowable and small? Is there a specific UTM for which we can always write a short compiler on any other UTM?
I'm getting out of my depth here, but I'd guess that there is an upper bound on how complex you can make certain instructions across all UTMs, because UTMs must (a) be finite, and (b) at the lowest level implement a minimal set of instructions, including a functionally complete set of logical connectives. So, for example, say I take as my "nonbiased" UTM one that, aside from the elementary operations of the machine on its tape, jump instructions, etc., has only a minimal number of instructions implementing a minimal complete operator set with a single connective: {NAND} or {NOR}. My understanding is that any Universal Turing Machine is itself going to have to implement the basic machine instructions and a complete set of connectives somewhere in its instruction set, and converting between {NAND} or {NOR} and any other complete set of connectives can be done with a trivially short encoding.
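If I've understood the invariance theorem correctly (this is my gloss, not a quote from the paper linked above), the "constant penalty" is exactly the length of a cross-compiler: for any two universal machines $U$ and $V$ there is a constant $c_{UV}$, independent of the string $x$, such that

$$K_U(x) \le K_V(x) + c_{UV}$$

where $c_{UV}$ is the length of a program for $U$ that interprets $V$'s programs. So the constant is knowable in principle for any particular pair of machines, but there's no machine-independent guarantee that it's small, which I think is the crux of my confusion.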
1) How can I know whether this belief is true?
Expose it to tests. For example, you might stick your head out a window and look up. The theory that the sky is green strongly predicts that you should see green and only very weakly allows for you to see anything else (your eyes may occasionally play tricks on you, perhaps you are looking close to a sunset, etc.).
2) How can I assign a probability to it to test its degree of truthfulness? 3) How can I update this belief?
If you knew absolutely nothing about skies other than that they were some colour, you would start with a symmetrical prior distributed across a division of colour space. You would then update your belief every time you come into contact with entangled information. For example, every time you look up at the sky and see a colour, your posterior probability that the sky is that colour goes up at the expense of alternative colours. Observing other people describe what they see when they look at the sky and learning about how vision works and the chemical composition of the sky are also good examples of evidence you could use.
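Here's a toy sketch of that updating process, with made-up likelihoods; the colour categories and the numbers are purely illustrative, not anything principled:

```python
# Toy Bayesian update over possible sky colours.
# All likelihood values are invented for illustration.

colours = ["blue", "green", "grey"]
prior = {c: 1 / 3 for c in colours}  # symmetrical prior: you know nothing

# P("it looks blue when I glance up" | true colour), allowing for
# tricks of the eye, sunsets, etc.
likelihood = {"blue": 0.9, "green": 0.05, "grey": 0.05}

def update(belief, likelihood):
    """Return the posterior after one observation."""
    unnormalized = {c: belief[c] * likelihood[c] for c in belief}
    total = sum(unnormalized.values())
    return {c: p / total for c, p in unnormalized.items()}

posterior = prior
for _ in range(3):  # look up at the sky three times
    posterior = update(posterior, likelihood)

print(posterior)  # probability mass concentrates on "blue" with each look
```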
In practice, manually updating every belief you have all the time is far too arduous, and most people collect large amounts of beliefs and information prior to learning much about statistics anyway. Because of this, the prior probability you assign to your beliefs will often have to be a quick approximation.
Cue for noticing rationalization: In a live conversation, I notice that the time it takes to give the justification for a conclusion when prompted far exceeds the time it took to generate the conclusion to begin with.
I think the condition from the beginning is that you're picking a unique girlfriend who knows all microphysical facts about your universe, including the content of any letters you have or will ever write.
I think that was an unfair clipping. The context of that quote was the OP's statement about the usefulness of getting clarification of language usage.
Let's suppose that this usage is in fact more common than the two that I cited as "correct". It seems to be either false or meaningless. What is Bob saying here?
You said in the OP that the more common usage takes the phrase to refer to any exception. So from that, Bob probably means that the brown bear you saw is an exception.
How does Rationalist Taboo help us?
Seeing as how Bob probably means that the brown bear is an exception, his argument is poor. So I would then say something like, "since you agree that there is an exception, you should agree that not all bears are black or white". If he disagrees, then he isn't using the common meaning after all and I would ask him to taboo the phrase "exception that proves the rule" to find out what he does mean.
This is my thought as well. Every one of the examples given I would attribute to dialectal differences between common usage and the more technical and jargon-filled language used by scientists and science fans. SaidAchmiz even admits that for some of these, the usage he doesn't like is more common, which is a big hint. My understanding is that speakers very rarely adopt usage which will be misunderstood by the language group they typically speak with.
“hmm, is that really what you meant to say?” is often met with absurd arguments to the effect that no, this phrasing is not nonsensical after all, these words mean what I want them to, and who the hell are you to try to legislate usage, anyway?
Isn't this exactly why we have the technique of Rationalist Taboo? It doesn't matter whether the meaning someone ascribes to a word seems stupid to you, once you understand what they mean by the word, and they understand what you mean by the word, you can move on. The best ways I've found to do this are to coin two new words (I like to prepend the word in question with the name of the person whose meaning we are trying to capture), or to always replace the word with its intended substance for the rest of the discourse.
The point I was making is that you attempt to maximise your utility function. Your utility function decreases when you learn of a bad thing happening
I think you're still confused. Utility functions operate on outcomes and universes as they actually are, not as you believe them to be. Learning of things doesn't decrease or increase your utility function. If it did, you could maximize it by taking a pill that made you think it was maximized. Learning that you were wrong about the utility of the universe is not the same as changing the utility of the universe.
I believe APMason's point is that your benchmarks are testing for anti-non-mainstreamism
The flaw I'd point out is that Clippy's utility function is "utility = number of paperclips in the world", not "utility = number of paperclips Clippy thinks are in the world".
Learning about the creation or destruction of paperclips does not actually increase or decrease the number of paperclips in the world.
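A quick sketch of the distinction (the `World` class and the numbers are invented here just to make the point concrete):

```python
# Illustrative sketch: utility is a function of the actual world state,
# not of the agent's beliefs about that state.

class World:
    def __init__(self, paperclips):
        self.paperclips = paperclips

def clippy_utility(world):
    """Clippy's utility depends on the real paperclip count."""
    return world.paperclips

world = World(paperclips=100)
clippy_believed_count = 100

# Clippy receives a (possibly mistaken) report that 10 clips were destroyed.
clippy_believed_count -= 10

# The belief changed; the world, and hence the utility, did not.
assert clippy_believed_count == 90
assert clippy_utility(world) == 100
```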
As near as I can tell I'm -want/+like/-approve on both wireheading and emperor-like superiority.
My general feeling has been that we explicitly declare Crocker's Rules or act as though they are an implicit norm so frequently that we might as well add the declaration to the site banner. I always considered this to be a byproduct of a community whose group knowledge includes acknowledgment of emotional biases, debate-as-warfare conceptual metaphors, etc.
I've never understood how severe food yield issues from overpopulation are supposed to come about. If the population is increasing far faster than we can increase the food yields, wouldn't the price of food massively increase and stop people from being able to afford to have children? Is the idea that the worldwide agricultural system would be gradually overtaxed and then collapse within a short period? If not, what were all the people eating the day before catastrophic overpopulation is declared?
Wow, you guys did an awesome job. Very funny to read through and the music is hilarious.
*snicker* Vampire Thomas Nagel knows what it's like to be a bat
I took the survey. Thanks for putting this together.
The accuracy has been doubled!
Extensive use of abbreviations and acronyms was primarily a convenience for writers when writing was done by hand and then by typewriter; there is less justification for it now that most writing is done by computer.
I don't agree. My impression is that the popularity of abbreviations and acronyms is being driven by the rise in popularity of text messaging, which is usually done from phones with tiny, unusable keyboards; instant messaging, which is done in real time; and social networking site based communication, which often has hard limits on message length (e.g. Twitter).
they have a lot of advantages over the rest of the software world. They have a single product: one program that flies one spaceship. They understand their software intimately, and they get more familiar with it all the time. The group has one customer, a smart one. And money is not the critical constraint
Not that the methods here don't have their place, but it seems to me that this is a point-by-point list of exactly why the methodology used by this team is not used generally.
The average software project may involve many different products and many different programmers, making it difficult for anyone involved to become intimately knowledgeable about the work or for standardized programming practices to be enforced everywhere. There are usually very tight deadline and budget constraints, and the clients may or may not know exactly what they want, so specifications are usually impossible to nail down and getting quick user feedback becomes very important.
The software design classes at my university teach Agile software development methods. The main idea is breaking a project down into small iterations, each one implementing a subset of the complete specification. After each iteration, the software can be handed off to testers and clients/users for evaluation, allowing for requirements to change dynamically during the development process. Agile seems to be the exact opposite of what is advocated in this article (invest money and resources at the beginning of the project and have a concrete specification before you write a single line of code).
Not prepared to advocate professional juries, but off the top of my head, I'd have a professional juror train in law, statistics, demographics, forensic science, and cognitive biases.
My point is that using violence to silence intellectual adversaries is very different from using violence against a perceived wartime enemy.
"Any sufficiently advanced magic is indistinguishable from technology." - Larry Niven
"Any sufficiently analyzed magic is indistinguishable from science!" - Agatha Heterodyne / Cinderella (explaining what Niven meant), Girl Genius
Except that based on videos and letters left behind, the hijackers considered Americans to be not just intellectual adversaries, but wartime ones. I believe the majority of the hijackers cited American military presence in the Middle East and military and economic support of Israel to that effect.
If my tenants paid rent with a piece of paper that said "moneeez" on it, I wouldn't call it paying rent.
Or they pay you with forged bills. You think you'll be able to deposit them at the bank and spend them to buy stuff, but what actually happens is the bank freezes your account and the teller at the store calls the police on you.
RA really has moved the goalposts on WA, which is one of those Dark Arts that we're not supposed to employ, even unintentionally.
It would certainly be annoying and a bit questionable to bring up your points of disagreement one at a time like RA did, but as long as he stops to update after receiving the information from WA, I don't know if I'd call this moving the goalposts.
Paul Graham said something very similar about figuring out a program:
"I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging.
For a long time I felt bad about this, just as I once felt bad that I didn't hold my pencil the way they taught me to in elementary school. If I had only looked over at the other makers, the painters or the architects, I would have realized that there was a name for what I was doing: sketching. As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you're writing them, just as writers and painters and architects do."
As a CS student currently at university, my experience has been identical. I also can't help but notice a similarity between these ideas and the methods of agile software development, where at each iteration you produce a working approximation of the final software, taking into account new information and new specification changes as you go.
Agreed, they can definitely get a bit absurd. This one is one of my favourites:
The short story "The Road Not Taken" by Harry Turtledove is also a good one if you can find it.
The concept is popular on 4chan's /tg/ board where they're called "humanity" stories or "humanity, fuck yeah" stories. Here's one archive of such threads:
http://suptg.thisisnotatrueending.com/archive.html?tags=humanity