Comments

Comment by Pfft on Strong men are socialist - how to use a study's own data to disprove it · 2017-06-04T16:56:44.485Z · LW · GW

So in the case of this particular paper, some other researchers did ask for the raw data, got it, and carried out exactly the analysis I was interested in. So I guess it's a happy ending, except that I didn't get to write a tumblr post back when there was a lot of media buzz about it. :)

Comment by Pfft on Strong men are socialist - how to use a study's own data to disprove it · 2017-06-04T00:27:25.689Z · LW · GW

This is amazingly great (I laughed out loud at the "Biceps-controlled socialism" graph), but I feel it only works because the original study authors made the rookie mistake of publishing their data set. The only time I have wanted to try something similar (for the brain mosaic paper), I hoped it would be possible to extract the data from the diagram, but no: the jpg in the pdf is too low-resolution for that to work.

Comment by Pfft on Offenders' deadly thoughts may hold answer to reducing crime · 2017-01-15T01:23:11.017Z · LW · GW

Ok, so we should identify criminals with "thoughts of committing deadly violence, regardless of action", and then "many of these offenders should probably never be released from confinement". A literal thought crime.

Comment by Pfft on Open thread, Dec. 19 - Dec. 25, 2016 · 2016-12-20T17:26:38.290Z · LW · GW

Yes, there will always be some off-by-one errors, so the best we can hope for is to pick the convention that creates fewer of them. That said, the fact that most programming languages choose the zero-based convention seems to suggest that that's the best one.

There's also the revealed word of our prophet Dijkstra: EWD831 - Why numbering should start at zero.
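To make the point concrete, here's a tiny Python sketch of Dijkstra's half-open-interval argument (my own example, not from the EWD note):

```python
# With zero-based indexing and half-open ranges, a slice's length is just
# (end - start), and adjacent slices tile the list with no +1/-1 fiddling.
xs = list(range(10))
a, b, c = 0, 4, 10
first, second = xs[a:b], xs[b:c]      # [0..3] and [4..9]
assert len(first) == b - a and len(second) == c - b
assert first + second == xs           # no off-by-one adjustments needed
```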

Comment by Pfft on My problems with Formal Friendly Artificial Intelligence work · 2016-12-08T21:26:35.845Z · LW · GW

Yeah.

I think the orthodox MIRI position is not that logical proofs are necessary, or even the most efficient way, to make a super-intelligence. It's that humans need formal proofs to be sure that the AI will be well-behaved. A random kludgy program might be much smarter than your carefully proven one, but that's cold comfort if it then proceeds to kill you.

Comment by Pfft on Open thread, Nov. 7 - Nov. 13, 2016 · 2016-11-10T17:58:15.455Z · LW · GW

I mean, you can literally build an EmDrive yourself, but you definitely can't measure the tiny thrust yourself. You still need to trust the experts there, no?

Comment by Pfft on Open thread, Nov. 7 - Nov. 13, 2016 · 2016-11-10T17:57:19.615Z · LW · GW

Apart from the question about whether it produces any thrust, there is also the question of whether it will lead to any interesting scientific discoveries. For example, if it turns out that there was a bit of contaminating material that evaporated, the thrust is real but the space-faring implications are not...

Comment by Pfft on Open thread, Nov. 7 - Nov. 13, 2016 · 2016-11-10T17:53:37.453Z · LW · GW

Eh, elections seem hard to update on, though. Before the election, I thought Clinton was 70% likely to win or so, because that's what Nate Silver said. Then Trump won. Was I wrong? Maybe, but it's not statistically significant at even p = 0.05.

So just looking at U.S. presidential elections, you'll never have enough data to see if you're calibrated or not. I guess you can seriously geek out on politics, and follow and make predictions for lots of local and foreign elections too. At that point it's a serious hobby, though, and I'm much more of a casual.

Comment by Pfft on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-28T03:11:40.858Z · LW · GW

Any suggestions?

Comment by Pfft on Open thread, Oct. 24 - Oct. 30, 2016 · 2016-10-28T03:08:40.236Z · LW · GW

It sounds pretty spectacular!

I found one paper about comets crashing into the sun, but unfortunately they don't consider comets as big as yours--the largest one is "Hale-Bopp sized", which they take to be 10^15 kg (which already seems a little low; Wikipedia suggests 10^16 kg).

I guess the biggest uncertainty is how common such big comets are (that is, how often we should expect to see one crash into the sun). In particular, I think the known sun-grazing comets are much smaller than the big comet you consider.

Also, I wonder a bit about your 1 second. The paper says,

The primary response, which we consider here, will be fast formation of a localized hot airburst as solar atmospheric gas passes through the bow-shock. Energy from this airburst will propagate outward as prompt electromagnetic radiation (unless or until bottled up by a large increase in optical depth of the surrounding atmosphere as it ionizes), then in a slower secondary phase also involving thermal conduction and mass motion as the expanding hot plume rises.

If a lot of the energy reaching the Earth comes from the prompt radiation, then it should arrive in one big pulse. On the other hand, if the comet plunges deep into the sun, and most of the energy is absorbed and then transmitted via thermal conduction and mass motion, then that must be a much slower process. By comparison, a solar flare involves between 10^20 and 10^25 J, and it takes several minutes to develop.
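For a rough sense of scale, here's my own back-of-envelope (assuming the comet arrives at roughly the Sun's escape velocity and all of its kinetic energy is released):

```python
# Back-of-envelope kinetic energy of a "Hale-Bopp sized" comet hitting the Sun.
m_comet = 1e15        # kg, the paper's Hale-Bopp-sized figure
v_impact = 6.18e5     # m/s, roughly the solar escape velocity at the surface
energy = 0.5 * m_comet * v_impact**2
print(f"{energy:.1e} J")   # ~1.9e26 J, vs. 1e20-1e25 J for a large solar flare
```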

Comment by Pfft on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-29T14:45:19.245Z · LW · GW

See Wikipedia. The point is that T does not just take the input n to the program to be run; it also takes an argument x which encodes the entire list of steps the program e would execute on that input. In particular, the length of the list x is the number of steps. That's why T can be primitive recursive.
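Here's a toy sketch of the idea (my own toy "machine", nothing like the actual arithmetized encoding): the checker only loops over the supplied trace, so all the work it does is bounded.

```python
def valid_halting_trace(step, halted, start, trace):
    """T-style check: is `trace` a complete run that starts at `start`,
    follows `step` at every position, and ends in a halted configuration?
    The only loop is over len(trace), so the work is bounded by the trace."""
    if not trace or trace[0] != start:
        return False
    for i in range(len(trace) - 1):          # bounded loop
        if halted(trace[i]) or step(trace[i]) != trace[i + 1]:
            return False
    return halted(trace[-1])

# Toy "program": count down to zero and halt.
step, halted = (lambda n: n - 1), (lambda n: n == 0)
print(valid_halting_trace(step, halted, 3, [3, 2, 1, 0]))   # True
print(valid_halting_trace(step, halted, 3, [3, 2, 1]))      # False: not halted yet
```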

Comment by Pfft on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-28T17:10:41.509Z · LW · GW

The claim as stated is false. The standard notion of a UTM takes a representation of a program and interprets it. That's not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that's not quite what's usually meant by a universal machine.

I think the fact that you just need one loop is interesting, but it doesn't go as far as you claim; if an angel gives you a program, you still don't know how many steps to run it for, so you still need that one unbounded loop.
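As a sketch of what I mean (my own toy version, not the literal U/T decomposition): the interpreter below needs exactly one unbounded loop, the search for the step at which the program halts; everything inside each iteration is bounded.

```python
def run(step, halted, start):
    """Simulate until the halting test succeeds. The `while` is the single
    unbounded loop; per-iteration work is bounded, so being told the number
    of steps up front would leave only bounded work."""
    config, trace = start, [start]
    while not halted(config):        # the one unbounded loop
        config = step(config)
        trace.append(config)
    return trace

print(run(lambda n: n - 1, lambda n: n == 0, 3))   # [3, 2, 1, 0]
```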

Comment by Pfft on Iterated Gambles and Expected Utility Theory · 2016-06-03T15:23:43.600Z · LW · GW

I'm not sure what you have in mind for the treatment of risk in finance. People will be concerned about risk in the sense that they compute a probability distribution of the possible future outcomes of their portfolio, and try to optimize it to limit possible losses. Some institutional actors, like banks, have to compute a "value at risk" measure (the loss of value in the portfolio at the bottom 5th percentile), and have to put up collateral based on that.

But those are all things that happen before a utility computation; they are all consistent with valuing a portfolio based on the average of some utility function of its monetary value. Finance textbooks do not talk much about this; they just assume that investors have some preference over expected returns and variance in returns.
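Something like this toy historical-simulation calculation (my own made-up numbers, and much cruder than what a bank actually does):

```python
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=1000)   # simulated return history
portfolio_value = 1_000_000

# 95% one-day VaR: the loss at the 5th percentile of the return distribution.
var_95 = -np.percentile(daily_returns, 5) * portfolio_value
print(f"1-day 95% VaR: ${var_95:,.0f}")
```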

Comment by Pfft on Iterated Gambles and Expected Utility Theory · 2016-06-01T21:19:21.978Z · LW · GW

It is very standard in economics, game theory, etc., to model risk aversion as a concave utility function. If you want some motivation for why, then e.g. the Von Neumann–Morgenstern utility theorem shows that a suitably idealized agent will maximize expected utility. But in general, the proof is in the pudding: the theory works in many practical cases.
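A toy example of how concavity produces risk aversion (my own numbers, using log utility):

```python
import math

gamble = [(0.5, 1_000), (0.5, 100_000)]                 # 50/50 between $1k and $100k
expected_value = sum(p * x for p, x in gamble)          # $50,500
eu_gamble = sum(p * math.log(x) for p, x in gamble)     # ~9.21
eu_sure = math.log(expected_value)                      # ~10.83
# The sure thing beats the gamble in expected utility: the agent is risk averse.
print(expected_value, eu_gamble, eu_sure)
```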

Of course, if you want to study exactly how humans make decisions, then at some point this will break down. E.g. the decision process predicted by Prospect Theory is different from maximizing expected utility. So in general, the exact flavour of risk aversion exhibited by humans seems different from what von Neumann–Morgenstern would predict.

But at that point, you have to start thinking whether the theory is wrong, or the humans are. :)

Comment by Pfft on April 2016 Media Thread · 2016-04-08T01:58:10.622Z · LW · GW

She eventually gives him the carrot pen so he can delete the recording, no?

Comment by Pfft on Lesswrong 2016 Survey · 2016-03-28T03:43:31.978Z · LW · GW

I took the survey!

Comment by Pfft on Open Thread March 21 - March 27, 2016 · 2016-03-21T14:30:18.728Z · LW · GW

I write down one line (about 80 characters) about what things I did each day. Originally I intended to write down "accomplishments" in order to incentivise myself into being more accomplished, but it has since morphed into also being a record of notable things that happened, and a lot of free-form whining about how bad certain days are. It's kind of nice to be able to go back and figure out when exactly something in the past happened, or generally reminisce about what was going on some years ago.

Comment by Pfft on Look for Lone Correct Contrarians · 2016-03-17T20:56:02.724Z · LW · GW

There is Omnilibrium, which does the vote SVD-ing thing.

Comment by Pfft on Open Thread March 7 - March 13, 2016 · 2016-03-11T16:21:23.705Z · LW · GW

He is a historian, studying the history of science. That subject is exactly about studying what people (scientists) are saying.

Comment by Pfft on Does Kolmogorov complexity imply a bound on self-improving AI? · 2016-02-18T17:22:43.131Z · LW · GW

I think Shane Legg's universal intelligence itself involves Kolmogorov complexity, so it's not computable and will not work here. (Also, it involves a function V, encoding our values; if human values are irreducibly complex, that should add a bunch of bits.)

In general, I think this approach seems too good to be true? An intelligent agent is one which performs well in its environment. But don't the "no free lunch" theorems show that you need to know what the environment is like in order to do that? Intuitively, that's what should cause the Kolmogorov complexity to go up.

Comment by Pfft on Open Thread, January 11-17, 2016 · 2016-02-17T05:24:10.078Z · LW · GW

For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a "social culture", the disagreement with maps typically comes from enemies and assholes! Friends don't make their friends update their maps; they always keep an extra map for each friend.

I figured this was an absurd caricature, but then this thing floated by on tumblr:

So when arguing against objectivity, they said, don’t make the post-modern mistake of saying there is no truth, but rather that there are infinite truths, diverse truths. The answer to the white, patriarchal, heteronormative, massively racist and ableist objectivity is DIVERSITY of subjectivities. And this, my friends, is called feminist epistemology: the idea that rather than searching for a unified truth to fuck all other truths we can understand and come to know the world through diverse views, each of which offers their own valid subjective view, each valid, each truthful. How? by interrupting the discourses of objectivity/normativity with discourses of diversity.

Objective facts: white, patriarchal, heteronormative, massively racist and ableist?

Comment by Pfft on Does Kolmogorov complexity imply a bound on self-improving AI? · 2016-02-14T22:05:02.112Z · LW · GW

Eliezer wrote a blog post about this question!

Comment by Pfft on February 2016 Media Thread · 2016-02-12T20:55:10.171Z · LW · GW

Realistic kissing simulator to get over the fear of kissing

Ok, this is pretty amazing.

Comment by Pfft on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T18:04:24.693Z · LW · GW

I guess because people want to live in the existing cities? It's not like there is nowhere to live in California--looking at some online apartment listings you can rent a 2 bedroom apt in Bakersfield CA for $700/month. But people still prefer to move to San Francisco and pay $5000/month.

Comment by Pfft on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-27T17:53:38.769Z · LW · GW

In animal training it is said that the best way to get rid of an undesired behaviour is to train the animal to perform an incompatible behaviour. For example, if you have a problem with your dog chasing cats, train it to sit whenever it sees a cat -- it can't sit and chase at the same time. Googling "incompatible behavior" or "Differential Reinforcement of an Incompatible Behavior" yields lots of discussion.

The book Don't Shoot the Dog talks a lot about this, and suggests that the same should be true for people. (This is a very Less Wrong-style book: half of it is very expert advice on animal training, half of it is animal-training-inspired self-help, which is probably on much less solid ground, but presented in a rational, scientific, extremely appealing style.)

Comment by Pfft on Open Thread, January 11-17, 2016 · 2016-01-15T16:08:03.768Z · LW · GW

Nitpick: it would be better to write "also a theorem of epistemic logic", since there are other modal logics where it is not provable. (E.g. just modal logic K).

Comment by Pfft on Open Thread, January 11-17, 2016 · 2016-01-15T16:00:10.994Z · LW · GW

I guess your theory is the same as what Alice Maz writes in the linked post. But I'm not at all convinced that that's a correct analysis of what Piper Harron is writing about. In the comments to Harron's post there are some more concrete examples of what she is talking about, which do indeed sound a bit like one-upping. I only know a couple of mathematicians, but from what I hear there are indeed lots of social games even in math---it's not a pure preserve where only facts matter.

(And in general, I feel Maz's post is a bit too saccharine, insofar as it seems to say that one-upmanship and status and posturing do not exist at all in "nerd" culture, and it's all just people joyfully sharing gifts of factual information. I guess it can be useful as a first-order approximation to guide your own interactions, but it seems dangerously lossy to try to fit the narratives of other people (e.g., Harron) into that model.)

Comment by Pfft on Rationalist Magic: Initiation into the Cult of Rationatron · 2015-12-09T21:48:46.026Z · LW · GW

What are previous examples of people on LW applying mental techniques and getting into seriously harmful states?

Comment by Pfft on Open thread, Nov. 09 - Nov. 15, 2015 · 2015-11-10T19:17:44.308Z · LW · GW

Source: been making my own jam for years, had plenty of time to experiment.

So did you actually make jam without sugar and then store it for years before eating it?

Comment by Pfft on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-14T16:57:13.565Z · LW · GW

In the story the superhappies propose to self-modify to appreciate complex art, not just simple porn, and they say that humans and babyeaters will both think that is an improvement. So to some degree the superhappies (with their very ugly spaceships) are repulsive to humans, although not as strongly repulsive as the babyeaters.

Comment by Pfft on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-13T17:19:25.615Z · LW · GW

they are moral and wouldn't offer a deal unless it was beneficial according to both utility functions being merged (not just according to their value of happiness).

I guess whether it is beneficial or not depends on what you compare to? They say,

The obvious starting point upon which to build further negotiations, is to combine and compromise the utility functions of the three species until we mutually satisfice, providing compensation for all changes demanded.

So they are aiming for satisficing rather than maximizing utility: according to all three before-the-change moralities, the post-change state of affairs should be acceptable, but not necessarily optimal. Consider these possibilities:

1) Baby-eaters are modified to no longer eat sentient babies; humans are unchanged; Superhappies like art.

2) Baby-eaters are modified to no longer eat sentient babies; humans are pain-free and eat babies; Superhappies like art.

3) Baby-eaters, humans, and Superhappies are all unchanged.

I think the intention of the author is that, according to pre-change human morality, (1) is the optimal choice, (2) is bad but acceptable, and (3) is unacceptable. The superhappies in the story claim that (2) is the only alternative that is acceptable to all three pre-change moralities. So the super-happy ending is beneficial in the sense that it avoids (3), but it's a "bad" ending because it fails to get (1).

Comment by Pfft on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-08T13:47:40.505Z · LW · GW

Sure, I think that was annoying. But it's not the stated reason for the ban.

Comment by Pfft on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-07T16:57:12.141Z · LW · GW

Also, "monogamy versus hypergamy" has been discussed on Less Wrong since the dawn of time. See e.g. this post and discussion in comments, from 2009. Deciding now that this topic is impermissible crimethink seems like a pretty drastic narrowing of allowed thoughts.

Comment by Pfft on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-07T14:15:01.666Z · LW · GW

I... what? As I understand the comment, he wanted to ban sex outside marriage. Describing that as "women should be distributed to men they don't want sex with" seems ridiculously exaggerated.

I agree that his one-issue thing was tiresome, and perhaps there is some argument for making "being boring and often off-topic" a bannable offense in itself. But this moderation action seems poorly thought through.

Edit: digging through his comment history finds this comment, where he writes it would be better to marry daughters off as young virgins. So I guess he did hold the view Nancy ascribed to him, even if it was not in evidence in the comment she linked to.

Comment by Pfft on September 2015 Media Thread · 2015-09-03T22:54:24.296Z · LW · GW

The ending is a bit rushed. Here's hoping the sequel is good, it just arrived in the mail.

I thought the sequel was more boring. The structure of the books doesn't really work very well as a series, I feel. The things that I found most appealing about Justice were the new kind of narrator (in the flashbacks, when the same events are described from multiple viewpoints of the same character), and the gradual puzzle of figuring out how the universe works. But at the end of Justice that's all over, there is just a single ancillary left, and the whodunnit-mystery has been explained. So then Sword is a lot less novel, just another space opera...

Comment by Pfft on Stupid Questions September 2015 · 2015-09-03T16:14:00.136Z · LW · GW

Yes: http://manu.sporny.org/2011/public-domain-genome/

Comment by Pfft on Typical Sneer Fallacy · 2015-09-01T18:03:57.109Z · LW · GW

I'm not sure he actually enjoyed it (e.g. 1, 2), be it through fault-finding or otherwise...

Comment by Pfft on Meta post: Did something go wrong? · 2015-09-01T15:57:14.345Z · LW · GW

I feel this only raises more questions. :)

Comment by Pfft on Proper posture for mental arts · 2015-09-01T00:42:11.517Z · LW · GW

The description of the use of posture in aikido is super interesting!

I'm a little worried that analogizing "mental arts" to martial arts might lead the imagination in the wrong direction--it evokes ideas like "flexible" or "balanced" etc. But thinking about the mental states in which I get a lot of research done, the biggest one by far is when I'm trying to prove some annoying guy wrong in an inconsequential comment thread on tumblr. If I could only harness that motivation, I'd be set for life. Thinking about aikido practitioners primes me for things like "zen-like and serene", not "peeved and petty".

Comment by Pfft on Words per person year and intellectual rigor · 2015-08-31T23:15:33.062Z · LW · GW

Upvoted, but mostly for the first paragraph and photo. :)

Comment by Pfft on Open Thread - Aug 24 - Aug 30 · 2015-08-31T22:45:55.545Z · LW · GW

Just calling the problem undecidable doesn't actually solve anything. If you can prove it's undecidable, it creates the same paradox. If no Turing machine can know whether or not a program halts, and we are also Turing machines, then we can't know either.

I guess the answer to this point is that when constructing the proof that H(FL, FL) loops forever, we assume that H can't be wrong. So we are working in an extended set of axioms: the program enumerates proofs given some set of axioms T, and the English-language proof in the tumblr post uses the axiom system T + "T is never wrong" (we can write this T+CON(T) for short).

Now, this is not necessarily a problem. If you have reason to think that T is consistent, then most likely T+CON(T) is consistent also (except in weird cases). So if we had some reason to adopt T in the first place, then working in T+CON(T) is also a reasonable choice. (Note that this is different from working in a system which can prove its own consistency, which would be bad. The difference is that in T+CON(T), there is no way to prove that proofs using the additional "T is never wrong" axiom are correct).

More generally, the lesson of Gödel's incompleteness theorem is that it does not make sense to say that something is "provable" without specifying which proof system you are using, because there is no natural choice for an "ideal" system; they are all flawed. The tumblr post seems paradoxical because it implicitly shifts between two different axiom sets. In particular, it says

If there is no way for H to prove whether it halts or not, then we can’t prove it either.

but a correct statement is that we can't prove it either using the same set of axioms as H used; we have to use some additional ones.

Comment by Pfft on What's in a name? That which we call a rationalist… · 2015-08-04T00:09:43.272Z · LW · GW

The voicing thing is known as rendaku. Generally it's a bit of a mystery when it will and will not happen. This thesis lists a bunch of proposed rules, two of which seem relevant:

  • Rendaku is favoured if the compound words are native Japanese (yamatokotoba). This might be the reason for kozukai vs. mahoutsukai: ko is native Japanese and mahou is Sino-Japanese. So by analogy, one would not expect voicing for beizutsukai.

  • Noun+Verb compounds exhibit rendaku if the noun is an "adverbial modifier", but not if it's a direct object. In "using magic" 魔法を使う, magic is a direct object, so no voicing. On the other hand, kozukai ('little servant'?) is an Adjective+Verb compound, which explains the voicing.

In any case, I guess the upshot is that we should expect beizutsukai, without rendaku.

Comment by Pfft on Stupid Questions August 2015 · 2015-08-03T23:13:59.694Z · LW · GW

I would imagine that using foxes gives you a lot more to work with, though. Foxes in nature live in pairs or small groups. The children stay around the parent for a long time. So they already have mechanisms in place for social behaviours. (And even if they are not expressed, there probably are some latent possibilities shared among mammals? E.g. this article about the evolution of housecats notes that they independently evolved a lot of the same behaviours that lion prides use to socialise, even though wildcats are solitary.)

Comment by Pfft on Magnetic rings (the most mediocre superpower) A review. · 2015-08-02T01:51:53.833Z · LW · GW

How about amortizing it among LessWrong users? If there are enough interested people we can pool up to buy a pair; each person in the pool gets to keep them for (say) a month, and then mails them in an envelope to the next person. Maybe everyone has to write an experience report as a Less Wrong comment, too.

Comment by Pfft on How to win the World Food Prize · 2015-08-02T00:48:10.412Z · LW · GW

Indians call sterile mosquitos CIA agents (Washington Post, December 10, 1974).

"The history of genetic control trials against culicine mosquitoes in India in the mid-1970s shows how opposition can have far-reaching consequences. After several years of work on field testing of the mating competitiveness of sterile male mosquitoes, accusations that the project was meant to obtain data for biologic warfare using yellow fever were launched in the press and taken up by opposition politicians. Shortly afterward, a well-prepared attempt to eradicate an urban Ae. aegypti population by sterile male releases was banned by the government of India 2 days before its launch." (x)

In 1987, a book (“Once Again About the CIA”) was published by Novosti, with the quote: "The CIA Directorate of Science and Technology is continuously modernizing its inventory of pathogenic preparations, bacteria and viruses and studying their effect on man in various parts of the world. To this end, the CIA uses American medical centers in foreign countries. A case in point was the Pakistani Medical Research Center in Lahore… set up in 1962 allegedly for combating malaria." (x)

Genetically modified mosquitoes set off uproar in Florida Keys - "A lot of people just don't trust the FDA and this private company to tell us the truth"

Comment by Pfft on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-28T15:19:44.720Z · LW · GW

My impression is that chemical weapons were very effective in the Iran-Iraq war (e.g.), despite the gas mask having been invented.

Comment by Pfft on Philosophy professors fail on basic philosophy problems · 2015-07-17T15:07:33.288Z · LW · GW

Coming up: the post is promoted to Main; it is re-released as a MIRI whitepaper; Nick Bostrom publishes a book-length analysis; The New Yorker features a meandering article illustrated by a tasteful watercolor showing a trolley attacked by a Terminator.

Comment by Pfft on Crazy Ideas Thread · 2015-07-15T04:44:38.921Z · LW · GW

As I understood it, the reaction mass for Orion comes from the chemical explosives used to implode the bomb. (The bomb design would be quite unusual, with several tons of explosives acting on a very small amount of plutonium).

Comment by Pfft on Rationality Quotes Thread May 2015 · 2015-05-08T23:18:07.383Z · LW · GW

you do realize that Arthur Chu's actions have no bearing on whether GamerGate is terrible or not, right?

Ben Cotton

Comment by Pfft on May 2015 Media Thread · 2015-05-08T20:18:11.527Z · LW · GW

Yeah, the Barbie book seems kind of unfortunate. On the other hand, lambdaphagy wrote an also hilarious/depressing post about the criticism of the book: women writing about their experiences in IT is very problematic.