I believe Alicorn meant to claim that the larger class of electric vehicles for ~1 person -- scooters, tricycles, skateboards, ebikes, etc. -- is about to take off in a big way, because there are a lot more people who would buy them if they knew about them/saw their friends using them than there are using them now.
And our first child, Merlin Miles Blume, was born October 12th, 2016 =)
I think for me the problem is that I'm not being Bayesian. I can't make my brain assign 50% probability in a unified way. Instead, half my brain is convinced the hotel's definitely behind me, half is convinced it's ahead, they fail to cooperate on the epistemic prisoner's dilemma and instead play tug-of-war with the steering wheel. And however I decide to make up my mind, they don't stop playing tug-of-war with the steering wheel.
...amusingly enough, "sit in my room and write posts on Less Wrong" turned out to be a pretty good move, in retrospect.
Further update for future biographers: we got married on September 21st at the UC Berkeley botanical garden; Kenzi officiated and Yvain gave a toast =)
Thank you for causing me to read that =)
"Anyone who wants to make disturbing the peace a crime is probably guilty of 'hating freedom'"
No, they have different priorities from you.
I've always assumed it meant "flight from death"
Harry frowned. "Well, I could listen to it, or the Dark Lord... oh, my parents. Those who had thrice defied him. They were also mentioned in the prophecy, so they could hear the recording?"
"If James and Lily heard anything different from what Minerva reported," Albus said evenly, "they did not say so to me."
"You took James and Lily there? " Minerva said.
"Fawkes can go to many places," Albus said. "Do not mention the fact."
Frankly, this reads like a non-answer to me.
Fantastic work, thank you =)
For anyone else unpacking the zip file, note you'll want to create a new directory to unzip it into, not just unzip it in the middle of your home folder.
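If you'd rather script it than click around, something like this works (a minimal sketch; the archive and directory names below are placeholders, not the real filenames):

    import pathlib
    import zipfile

    # Extract into a fresh directory rather than spraying files into the
    # current one. Both names here are hypothetical -- substitute your own.
    dest = pathlib.Path("lw_archive")
    dest.mkdir(exist_ok=True)
    with zipfile.ZipFile("lw_archive.zip") as zf:
        zf.extractall(dest)

extractall is perfectly happy to write straight into your working directory if you let it, which is exactly the mess being warned about above.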
Physicalism is the radical notion that people are made of things that aren't people.
Er, walking on a narrow ledge 300 feet off the ground is still a bad idea because, y'know, even with something simple like walking, sometimes you roll a natural 1 and trip.
This seems to be a serious problem. What do you do when you have enough vague procrastinatory ugh-fields that just reading good advice about procrastination makes you deeply afraid that you're going to have to think about one of them, so you wind up afraid to read/process it?
Thanks XD
I mean, assuming that sea piracy to fund efficient charity is good, media piracy to save money that you can give to efficient charity is just obviously good.
I skimmed the options too quickly -- I'd have picked "not offensive" if I'd noticed it.
See also: success myth
I feel like there is a bias against reproduction on LessWrong.
Is there? I kinda hope not.
I'm not sure what "supernatural" means. Out of the ordinary? But isn't deep rationalism out of the ordinary? What are we talking about?
In the local parlance, "supernatural" is used to describe theories that have mental thingies in them whose behavior can't be explained in terms of a bunch of interacting non-mental thingies. Pretty sure the definition originates with Richard Carrier.
You get 14 points anyway! ^_^
Qbrfa'g frrz jebat gb fnl gung vg'f ng yrnfg gur vapbzr ybfg, gubhtu, juvpu vf nyy lbh arrq gb bireqrgrezvar na nafjre.
Is income before or after taxes?
Yeah, wouldn't stay selected.
I think you're being oversensitive -- if I said the NYC Swing Dancing Club had two babies, I don't think anyone would bat an eye.
This is a really good post.
If I can bother your mathematical logician for just a moment...
Hey, are you conscious in the sense of being aware of your own awareness?
Also, now that Eliezer can't ethically deinstantiate you, I've got a few more questions =)
You've given a not-isomorphic-to-numbers model for all the prefixes of the axioms. That said, I'm still not clear on why we need the second-to-last axiom ("Zero is the only number which is not the successor of any number.") -- once you've got the final axiom (recursion), I can't seem to visualize any not-isomorphic-to-numbers models.
Also, how does one go about proving that a particular set of axioms has all its models isomorphic? The fact that I can't think of any alternatives is (obviously, given the above) not quite sufficient.
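(A stab at partially answering my own questions, which may well be wrong: dropping that axiom seems to let in a finite loop, and the categoricity proofs I've seen all build the isomorphism by recursion. Sketch below, notation entirely mine:)

    % Candidate counter-model if "zero is the only non-successor" is dropped:
    \[ M = \mathbb{Z}/3\mathbb{Z}, \qquad S(x) = x + 1 \pmod{3}. \]
    % S is injective, and any subset of M containing 0 and closed under S
    % is all of M, so induction holds; but 0 = S(2), so M is a three-element
    % loop, not the naturals.

    % Categoricity sketch: given models (M, 0_M, S_M) and (N, 0_N, S_N),
    % define h : M -> N by recursion:
    \[ h(0_M) = 0_N, \qquad h(S_M(x)) = S_N(h(x)). \]
    % Induction in M shows h is defined on all of M; induction in N shows
    % h is onto; injectivity of S plus "zero is not a successor" make h
    % one-to-one. So any two models of the full axiom list are isomorphic.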
Oh, and I remember this story somebody on LW told, there were these numbers people talked about called...um, I'm just gonna call them mimsy numbers, and one day this mathematician comes to a seminar on mimsy numbers and presents a proof that all mimsy numbers have the Jabberwock property, and all the mathematicians nod and declare it a very fine finding, and then the next week, he comes back, and presents a proof that no mimsy numbers have the Jabberwock property, and then everyone suddenly loses interest in mimsy numbers...
Point being, nothing here definitely justifies thinking that there are numbers, because someone could come along tomorrow and prove ~(2+2=4) and we'd be done talking about "numbers". But I feel really really confident that that won't ever happen and I'm not quite sure how to say whence this confidence. I think this might be similar to your last question, but it seems to dodge RichardKennaway's objection.
I've seen some (old) arguments about the meaning of axiomatizing which did not resolve in the answer, "Because otherwise you can't talk about numbers as opposed to something else," so AFAIK it's theoretically possible that I'm the first to spell out that idea in exactly that way, but it's an obvious-enough idea and there's been enough debate by philosophically inclined mathematicians that I would be genuinely surprised to find this was the case.
If memory serves, Hofstadter uses roughly this explanation in GEB.
Central planning means pushing the planners' goals into everyone's individual incentives. Humans aren't IGF maximizers, but they will respond to financial incentives.
With central planning, more women than men makes sense, and this system has central planning. Not everyone is just trying to maximize IGF.
Fable of the Dragon Tyrant would make a good animated short, I think.
OK, let's say you're looking down at a full printout of a block universe. Every physical fact for all times specified. Then let's say you do Solomonoff induction on that printout -- find the shortest program that will print it out. Then for every physical fact in your printout, you can find the nearest register in your program it was printed out of. And then you can imagine causal surgery -- what happens to your program if cosmic rays change that register at that moment in the run. That gives you a way to construe counterfactuals, from which you can get causality.
ETA: There are still some degrees of freedom in how this gets construed, though. Like, what if the printout I'm compressing has all its info time-reversed -- it starts out with details about what we'd call the future, then the present, then the past. Then I'd imagine that the shortest program that'd print that out would process everything forward, store it in an accumulator, then run a reversal on that accumulator to print it out -- the problem being that the registers it printed from might be downstream of where the value was actually computed. It seems like you need some extra magic to be sure of what you mean by "pretend this fact here had gone the other way".
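To make the construal concrete, here's a toy sketch in Python. Everything in it is invented for illustration (the one-variable "physics" especially), and the real version would involve actually finding the shortest generating program rather than assuming one:

    def run(universe_steps, intervene_at=None, forced_value=None):
        """Print out a toy block universe; optionally force one register mid-run."""
        state = 0
        printout = []
        for t in range(universe_steps):
            if t == intervene_at:
                state = forced_value  # the cosmic-ray bit flip
            printout.append(state)
            state += 1  # stand-in for physics
        return printout

    factual = run(10)                                          # [0, 1, ..., 9]
    counterfactual = run(10, intervene_at=5, forced_value=100)
    # Facts after step 5 change; facts before it don't. That asymmetry is
    # what lets us read causal direction off the program.

In this forward-running program the register for time t is written before any later facts, so the surgery propagates the intuitive way; on the time-reversed printout in the ETA, the same surgery would land on the accumulator after most of the computation had already happened, which is exactly the ambiguity flagged above.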
This question seems decision-theory complete. If you can reify causal graphs in situations where you're in no state of uncertainty, then you should be able to reify them to questions like "what is the output of this computation here" and you can properly specify a wins-at-Newcomb's-problem decision theory.
I am still trying to figure out how to Have Computers correctly, because they suffer from this weird constraint where they're only really useful if I can carry them all over, but if I do that I lose them all the time.
(Symptomatically, I'm typing this on your broken/cast-off macbook =P)
a 15 minute break every 90 minutes
People can work for 90 minutes?! Like... without stopping?
Ah, gotcha =)
Sorry, what do you mean by "pass an ideological Turing test"? The version I'm familiar with gets passed by people, not definitions.
"Sexism" is a short code. Not only that, it's a short code which has already been given a strong negative affective valence in modern society. Fights about its definition are fights about how to use that short code. They're fights over a resource.
That code doesn't even just point to a class of behaviors or institutions -- it points to an argument, an argument of the form "these institutions favor this gender and that's bad for these reasons". Some people would like it to point more specifically to an argument that goes something like "If, on net, society gives more benefits to one gender, and puts more burdens on the other, then that's unfair, and we should care about fairness." Others would like it to point to "If someone makes a rule that applies differently to men and women, there's a pretty strong burden of proof that they're not making a suboptimal rule for stupid reasons. Someone should probably change that rule". The fight is over which moral argument will come to mind quickly, will seem salient, because it has the short code "sexism".
If I encounter a company where the men have a terrible dress code applied to them, but there's one women's restroom for every three men's restrooms, the first argument might not have much to say, but the second might move me to action. Someone who wants me to be moved to action would want me to have the second argument pre-cached and available.
In particular, I'm not a fan of the first definition, because it motivates a great big argument. If there's a background assumption that "sexism" points to problems to be solved, then the men and the women in the company might wind up in a long, drawn-out dispute over whose oppression is worse, and who is therefore a target of sexism, and deserving of aid. The latter definition pretty directly implies that both problems should be fixed if possible.
I'll take a shot.
What we choose to measure affects what we choose to do. If I adopt the definition above, and I ask a wish machine to "minimize sexism", maybe it finds that the cheapest thing to do is to ensure that for every example of institutional oppression of women, there's an equal and opposite oppression of men. That's...not actually what I want.
So let's work backwards. Why do I want to reduce sexism? Well, thinking heuristically, if we accept as a given that men and women are interchangeable for many considerations, we can assume that anyone treating them differently is behaving suboptimally. In the office in the example, the dress code can't be all that helpful to the work environment, or the women would be subject to it. Sexism can be treated as a pointer to "cheap opportunities to improve people's lives". The given definition cuts off that use.
In a Truman Show situation, the simulators would've shown us white pin-pricks for thousands of years, and then started doing actual astrophysics simulations only when we got telescopes.
The other day Yvain was reading aloud from Feser and I said I wished Feser would read The Simple Truth. I don't think this would help quite as much.
The Simple Truth sought to convey the intuition that truth is not just a property of propositions in brains, but of any system successfully entangled with another system. Once the shepherd's leveled up a bit in his craftsmanship, the sheep can pull aside the curtain, drop a pebble into the bucket, and the level in the bucket will remain true without human intervention.
Paths are made by walking
--Franz Kafka (quoted in Joy of Clojure)
I can pick up a mole (animal) and throw it. Anything I can throw weighs one pound. One pound is one kilogram.
--Randall Munroe, A Mole of Moles
Sometimes magic is just someone spending more time on something than anyone else might reasonably expect
--Teller (source)
Your confidence is inspiring, but I'd bet some false trichotomies are more obvious than others. (Though I can't immediately think of any examples of subtler false trichotomies to rattle off, so yeah)
Well, no one's voting for her anyway.
Best pony? [pollid:13]
At risk of failing to JFGI: can someone quickly summarize what remaining code work we'd like done? I've started wading into the LW code, and am not finding it quite as impenetrable as last time, so concrete goals would be good to have.
...that really should have occurred to me first.