Posts

Effective Altruism, YouTube, and AI (talk by Lê Nguyên Hoang) 2018-11-17T19:21:19.247Z · score: 3 (1 votes)
rattumb debate: Are cognitive biases a good thing ? 2018-07-26T07:38:11.797Z · score: 3 (4 votes)
[deleted] 2018-07-24T16:06:17.467Z · score: 8 (4 votes)
The Craft And The Codex 2018-07-09T10:50:14.131Z · score: 13 (8 votes)
Some Remarks on the Nature of Political Conflict 2018-07-04T12:31:07.840Z · score: 16 (9 votes)
spaced repetition & Darwin's golden rule 2018-06-28T10:54:28.259Z · score: 8 (5 votes)
Loss aversion is not what you think it is 2018-06-20T15:56:03.510Z · score: 12 (11 votes)
How many philosophers accept the orthogonality thesis ? Evidence from the PhilPapers survey 2018-06-16T12:11:49.892Z · score: 5 (9 votes)
Dissolving Scotsmen 2018-05-10T16:08:11.862Z · score: 12 (3 votes)

Comments

Comment by paperclip-minimizer on The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments · 2019-06-02T16:58:38.998Z · score: 1 (1 votes) · LW · GW

How does this interact with time preference? As stated, an elementary consequence of this theorem is that either lending (and pretty much every other capitalist activity) is unprofitable, or arbitrage is possible.

Comment by paperclip-minimizer on Two Small Experiments on GPT-2 · 2019-02-25T16:10:18.296Z · score: 0 (2 votes) · LW · GW

That would be a good argument if it were merely a language model, but if it can answer complicated technical questions (and presumably any other question), then it must have the necessary machinery to model the external world, predict what it would do in such and such circumstances, etc.

Comment by paperclip-minimizer on Two Small Experiments on GPT-2 · 2019-02-25T11:47:07.855Z · score: 1 (3 votes) · LW · GW

My point is, if it can answer complicated technical questions, then it is probably a consequentialist that models itself and its environment.

Comment by paperclip-minimizer on How could "Kickstarter for Inadequate Equilibria" be used for evil or turn out to be net-negative? · 2019-02-23T10:43:46.474Z · score: 4 (2 votes) · LW · GW

But this leads to a moral philosophy question: are time-discounting rates okay, and is your future self actually less important in the moral calculus than your present self?

Comment by paperclip-minimizer on Two Small Experiments on GPT-2 · 2019-02-23T10:11:08.710Z · score: -1 (5 votes) · LW · GW

If an AI can answer a complicated technical question, then it evidently has the ability to use resources to further its goal of answering said complicated technical question, else it couldn't answer a complicated technical question.

Comment by paperclip-minimizer on Blackmail · 2019-02-23T10:05:54.678Z · score: 4 (3 votes) · LW · GW

But don't you need a gears-level model of how blackmail is bad to think about how dystopian a hypothetical legal-blackmail society is?

Comment by paperclip-minimizer on Two Small Experiments on GPT-2 · 2019-02-23T10:03:27.301Z · score: 4 (2 votes) · LW · GW

There was discussion of tips on how to produce good Moloch content in the /r/slatestarcodex subreddit.

Comment by paperclip-minimizer on Two Small Experiments on GPT-2 · 2019-02-23T07:53:08.975Z · score: 3 (4 votes) · LW · GW

The world being turned into computronium in order to solve the AI alignment problem would certainly be an ironic end to it.

Comment by paperclip-minimizer on Implications of GPT-2 · 2019-02-20T17:55:40.972Z · score: 2 (2 votes) · LW · GW

My point is that it would be a better idea to use the prompt "What follows is a transcript of a conversation between two people:".

Comment by paperclip-minimizer on Blackmail · 2019-02-20T10:55:49.337Z · score: 4 (4 votes) · LW · GW

Note the framing. Not “should blackmail be legal?” but rather “why should blackmail be illegal?” Thinking for five seconds (or minutes) about a hypothetical legal-blackmail society should point to obviously dystopian results. This is not a subtle. One could write the young adult novel, but what would even be the point.

Of course, that is not an argument. Not evidence.

What? From a consequentialist point of view, of course it is. If a policy (and "make blackmail legal" is a policy) probably has bad consequences, then it is a bad policy.

Comment by paperclip-minimizer on Implications of GPT-2 · 2019-02-20T10:52:35.194Z · score: 1 (1 votes) · LW · GW

That was how it was trained, but Gurkenglas is saying that GPT-2 could make human-like conversation because Turing test transcripts are in the GPT-2 dataset; in fact, it is the conversations between humans in the dataset that would enable GPT-2 to make human-like conversation and thus potentially pass the Turing test.

Comment by paperclip-minimizer on Blackmail · 2019-02-20T10:48:43.469Z · score: 6 (4 votes) · LW · GW

But if the blackmail information is a good thing to publish, then blackmailing is still immoral, because the information should be published and people should be incentivized to publish it, not to withhold it. We, as a society, should ensure that if, say, someone routinely engages in kidnapping children to harvest their organs, and someone else knows this, then she is incentivized to send this information to the relevant authorities rather than keep it to herself, for reasons that I hope are obvious.

Comment by paperclip-minimizer on Implications of GPT-2 · 2019-02-18T21:58:50.781Z · score: -4 (2 votes) · LW · GW

I'm not sure what you're trying to say. I'm only saying that if your goal is to have an AI generate sentences that look like they were written by humans, then you should get a corpus with a lot of sentences written by humans, not sentences written by other, dumber programs. I do not see why anyone would disagree with that.

Comment by paperclip-minimizer on Implications of GPT-2 · 2019-02-18T15:14:22.967Z · score: 1 (1 votes) · LW · GW

It would make much more sense to train GPT-2 using discussions between humans if you want it to pass the Turing Test.

Comment by paperclip-minimizer on “She Wanted It” · 2018-11-19T19:50:20.687Z · score: 0 (5 votes) · LW · GW

You need to define your terms so that what you are saying has pragmatic consequences in the real world of actual things; otherwise you are merely arguing by definition.

Comment by paperclip-minimizer on “She Wanted It” · 2018-11-18T19:05:25.357Z · score: 2 (4 votes) · LW · GW

If your definition of the right to exit being blocked is that broad, then there is practically no such thing as the right to exit not being blocked, and the claim in your original comment is useless.

Comment by paperclip-minimizer on “She Wanted It” · 2018-11-18T08:41:55.701Z · score: 2 (3 votes) · LW · GW

I can only link to another Pervocracy post.

Comment by paperclip-minimizer on “She Wanted It” · 2018-11-18T08:15:20.269Z · score: 9 (9 votes) · LW · GW

Excellent article! You might want to add some trigger warnings, though.

edit: why so many downvotes in so little time?

Comment by paperclip-minimizer on Open Thread October 2018 · 2018-10-14T10:39:25.396Z · score: 2 (2 votes) · LW · GW

Hey admins: the "ë" in "Michaël Trazzi" renders weirdly, probably a bug in your Unicode handling.

Comment by paperclip-minimizer on Does This Fallacy Have A Name? · 2018-10-14T10:34:26.594Z · score: 3 (2 votes) · LW · GW

Actually we all fall prey to this particular one without realizing it, in one aspect or another.

At least, you do. (With apologies to Steven Brust)

Comment by paperclip-minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-14T06:58:04.429Z · score: 1 (1 votes) · LW · GW

A high-Kolmogorov-complexity system is still a system.

Comment by paperclip-minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-12T16:37:49.065Z · score: 1 (1 votes) · LW · GW

I'm not sure what it would even mean to not have a Real Moral System. The actual moral judgments must come from somewhere.

Comment by paperclip-minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-01T18:15:09.353Z · score: 1 (1 votes) · LW · GW

Using PCA on utility functions could be an interesting research subject for wannabe AI risk experts.
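A minimal sketch of what that research idea might look like (the setup is entirely hypothetical: random agents, random outcomes, my own variable names): represent each agent's utility function as a vector of utilities over a shared outcome set, then extract the population's principal components via SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_outcomes = 50, 10
# Each row is one agent's utility function, as utilities over 10 shared outcomes.
U = rng.normal(size=(n_agents, n_outcomes))

# PCA: center the population, then SVD; rows of Vt are principal axes
# in outcome-space, i.e. the main directions along which utilities vary.
U_centered = U - U.mean(axis=0)
_, s, Vt = np.linalg.svd(U_centered, full_matrices=False)
explained = s**2 / np.sum(s**2)  # fraction of variance per component
```

With real data, the leading components would summarize the main axes of moral disagreement across the population.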

Comment by paperclip-minimizer on The Tails Coming Apart As Metaphor For Life · 2018-10-01T18:14:21.838Z · score: 1 (1 votes) · LW · GW

I don't see the argument. I have an actual moral judgement that painless extermination of all sentient beings is evil, and so is tiling the universe with meaningless sentient beings.

Comment by paperclip-minimizer on The Scent of Bad Psychology · 2018-09-19T12:26:39.706Z · score: 1 (1 votes) · LW · GW

don’t trust studies that would be covered in the Weird News column of the newspaper

-- Ozy

Comment by paperclip-minimizer on Fundamental Philosophical Problems Inherent in AI discourse · 2018-09-18T09:15:59.834Z · score: 2 (2 votes) · LW · GW

Good post. Some nitpicks:

There are many models of rationality from which a hypothetical human can diverge, such as VNM rationality of decision making, Bayesian updating of beliefs, certain decision theories or utilitarian branches of ethics. The fact that many of them exist should already be a red flag on any individual model’s claim to “one true theory of rationality.”

VNM rationality, Bayesian updating, decision theories, and utilitarian branches of ethics all cover different areas. They aren't incompatible, and actually fit together rather neatly.

As a Jacobin piece has pointed out

This is a Jacobite piece.

A critique of Pro Publica is not meant to be an endorsement of Bayesian justice system, which is still a bad idea due to failing to punish bad actions instead of things correlated with bad actions.

Unless you're omniscient, you can only punish things correlated with bad actions.

Comment by paperclip-minimizer on Decision Theory with F@#!ed-Up Reference Classes · 2018-09-05T15:12:33.169Z · score: 1 (3 votes) · LW · GW

While this may seem like merely a niche issue, given the butterfly effect and a sufficiently long timeline with the possibility of simulations, it is almost guaranteed that any decision will change.

I think you accidentally words.

Comment by paperclip-minimizer on Four kinds of problems · 2018-09-05T15:04:09.939Z · score: -22 (5 votes) · LW · GW

.

Comment by paperclip-minimizer on Open Thread August 2018 · 2018-09-05T13:51:23.009Z · score: 3 (2 votes) · LW · GW

Noticing an unachievable goal may force it to have an existential crisis of sorts, resulting in self-termination.

Do you have reasoning behind this being true, or is this baseless anthropomorphism?

It should not hurt an aligned AI, as it by definition conforms to the humans' values, so if it finds itself well-boxed, it would not try to fight it.

So it is a useless AI?

Comment by paperclip-minimizer on Open Thread August 2018 · 2018-09-05T13:49:23.083Z · score: 0 (3 votes) · LW · GW

Your whole comment is founded on a false assumption. Look at Bayes' formula. Do you see any mention of whether your probability estimate is "just your prior" or "the result of a huge amount of investigation and very strong reasoning"? No? Well, this means that it doesn't affect how much you'll update.
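To make the point concrete, here is a toy sketch (function name and numbers are my own, purely illustrative): two agents whose priors are numerically equal update identically, no matter where those priors came from.

```python
def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E) -- only the numbers appear."""
    return likelihood * prior / evidence_prob

# Agent A's prior of 0.3 is a gut guess; Agent B's 0.3 came from years of study.
posterior_a = bayes_update(prior=0.3, likelihood=0.8, evidence_prob=0.5)
posterior_b = bayes_update(prior=0.3, likelihood=0.8, evidence_prob=0.5)
assert posterior_a == posterior_b  # the provenance of the prior is invisible
```

The formula has no input slot for "how the prior was obtained", which is the whole point.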

Comment by paperclip-minimizer on Open Thread August 2018 · 2018-09-05T13:42:57.664Z · score: 1 (1 votes) · LW · GW

"self-aware" can also be "self-aware" as in, say, "self-aware humor"

Comment by paperclip-minimizer on Open Thread August 2018 · 2018-09-05T13:41:05.952Z · score: 1 (1 votes) · LW · GW

I don't see why negative utilitarians would be more likely than positive utilitarians to support animal-focused effective altruism over (near-term) human-focused effective altruism.

Comment by paperclip-minimizer on Open Thread August 2018 · 2018-09-05T13:37:16.965Z · score: 1 (1 votes) · LW · GW

This actually made me not read the whole sequence.

Comment by paperclip-minimizer on Don't Get Distracted by the Boilerplate · 2018-08-15T17:08:25.483Z · score: 4 (3 votes) · LW · GW

[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

How can an agent fail to conform to the completeness axiom? It literally just says "either the agent prefers A to B, or B to A, or is indifferent between them". Offer me an example of an agent that doesn't conform to the completeness axiom.

Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.

The entire rest of the section is a straightforward application of the theorem. The objection is that X doesn't happen in real life, and the counter-objection is that something like X does happen in real life, meaning the theorem does apply.

As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.

Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement makes sense if you replace "VNM" with "Dutch book"? Your behavior does not really send the vibe of someone who wants to approach this complicated issue honestly, and more that of someone looking for Internet debate points.

More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.

  • If I cross the street, I make a bet about whether a car will run over me.
  • If I eat a pizza, I make a bet about whether the pizza will taste good.
  • If I post this comment, I make a bet about whether it will convince anyone.
  • etc.
Comment by paperclip-minimizer on Don't Get Distracted by the Boilerplate · 2018-08-14T19:33:25.450Z · score: 3 (1 votes) · LW · GW

This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.

VNM is used to show why you need to have a utility function if you don't want to get Dutch-booked. It's not something the OP invented; it's the whole point of VNM. One wonders what you thought VNM was about.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

That we face trade-offs in the real world is a claim under dispute?

Ditto.

Another way of phrasing it is that we can model "ignore" as a choice, and derive the VNM theorem just as usual.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

Ditto.

Once again, please provide some real-world examples of when this applies.

OP said it: every time we make a decision under uncertainty. Every decision under uncertainty can be modeled as a bet, and Dutch book theorems are derived as usual.

Comment by paperclip-minimizer on Don't Get Distracted by the Boilerplate · 2018-08-14T18:10:58.664Z · score: 1 (1 votes) · LW · GW

Is Aumann robust to untrustworthiness?

Comment by paperclip-minimizer on Top Left Mood · 2018-08-14T15:47:53.857Z · score: 3 (1 votes) · LW · GW

This two-dimensional model is weird.

  • I can imagine pure mania: assigning a 100% probability to everything going right
  • I can imagine pure depression: assigning a 100% probability to everything going wrong
  • I can imagine pure anxiety: a completely flat probability distribution over things going right or wrong

But I can't imagine a pure top-left mood. This leads me to think that the mood square is actually a mood triangle, and that there is no top-left mood, only a spectrum of moods between anxiety and mania.

Comment by paperclip-minimizer on Survey: Help Us Research Coordination Problems In The Rationalist/EA Community · 2018-07-26T16:47:35.033Z · score: 3 (2 votes) · LW · GW

cough cough

Comment by paperclip-minimizer on [deleted] · 2018-07-25T18:50:27.161Z · score: 2 (2 votes) · LW · GW

This is excellent advice. Are you a moderator?

Comment by paperclip-minimizer on [deleted] · 2018-07-25T17:08:05.098Z · score: 3 (3 votes) · LW · GW

I don't know. This makes me anxious about writing critical posts in the future. I was about to start writing another post similarly criticizing an article written by someone else, and I don't think I'm going to do so.

Comment by paperclip-minimizer on [deleted] · 2018-07-25T10:35:01.694Z · score: 0 (0 votes) · LW · GW
Comment by paperclip-minimizer on [deleted] · 2018-07-25T06:59:44.170Z · score: 2 (2 votes) · LW · GW

Can I ask you what you mean by this?

Comment by paperclip-minimizer on The Craft And The Codex · 2018-07-11T20:53:27.761Z · score: 5 (3 votes) · LW · GW

I've never heard of a prank like this; it sounds weird.

Comment by paperclip-minimizer on The Craft And The Codex · 2018-07-10T10:53:37.022Z · score: 7 (3 votes) · LW · GW

More generally, commenting isn't a good way to train oneself as a rationalist, but blogging is.

Comment by paperclip-minimizer on Some Remarks on the Nature of Political Conflict · 2018-07-09T09:48:19.234Z · score: 0 (2 votes) · LW · GW

I'm not sure what you mean.

Comment by paperclip-minimizer on Some Remarks on the Nature of Political Conflict · 2018-07-05T08:12:56.319Z · score: 9 (4 votes) · LW · GW

This isn't what "conflict theory" means. Conflict theory is a specific theory about the nature of conflict, one that says conflict is inevitable. It doesn't simply mean that conflict exists.

Comment by paperclip-minimizer on Some Remarks on the Nature of Political Conflict · 2018-07-04T18:32:12.357Z · score: 3 (1 votes) · LW · GW

I don't agree with your pessimism. To reuse your example, if you formalize the utility created by freedom and by equality, you can compare the two and pick the most efficient policies.

Comment by paperclip-minimizer on spaced repetition & Darwin's golden rule · 2018-06-28T17:23:33.656Z · score: 5 (2 votes) · LW · GW

Fixed ;)

Comment by paperclip-minimizer on Loss aversion is not what you think it is · 2018-06-24T11:47:32.633Z · score: 2 (1 votes) · LW · GW

The author explains very clearly the difference between "people hate losses more than they like gains" and loss aversion. Loss aversion is people hating losing $1 when they have $2 more than they like gaining $1 when they have $1, even though in both cases it is the difference between having $1 and having $2.
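A toy sketch of the distinction (the coefficient λ = 2 is my own illustrative choice, though close to commonly cited estimates): loss aversion is about reference-dependence, valuing *changes* from the status quo, not about the shape of utility over total wealth.

```python
LAMBDA = 2.0  # loss-aversion coefficient: losses weighted twice as much as gains

def prospect_value(change: float) -> float:
    """Kahneman-Tversky-style value of a *change* from the reference point."""
    return change if change >= 0 else LAMBDA * change

# Both scenarios move between $1 and $2 of total wealth, yet feel different:
felt_gain = prospect_value(+1)   # at $1, gaining $1: valued at +1.0
felt_loss = prospect_value(-1)   # at $2, losing  $1: valued at -2.0
assert abs(felt_loss) > felt_gain  # the loss looms larger than the equal gain
```

A concave utility function over total wealth alone cannot produce this asymmetry, since both scenarios span the same two wealth levels.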

Comment by paperclip-minimizer on Remembering the passing of Kathy Forth. · 2018-06-23T16:35:54.954Z · score: 11 (5 votes) · LW · GW

I think we do disagree on whether it's a good idea to widely spread the message "HEY SUICIDAL PEOPLE HAVE YOU REALIZED THAT IF YOU KILL YOURSELF EVERYONE WILL SAY NICE THINGS ABOUT YOU AND WORK ON SOLVING PROBLEMS YOU CARE ABOUT LET'S MAKE SURE TO HIGHLIGHT THIS EXTENSIVELY".