Comment by khafra on Ethical Injunctions · 2017-02-27T15:38:57.147Z · score: 0 (0 votes) · LW · GW

Tangentially, there's an upcoming six-episode Netflix series, “The Heavy Water War,” that should cover both this event and the sabotage of the heavy-water production facility that led up to it.

Comment by khafra on Slate Star Codex Notes on the Asilomar Conference on Beneficial AI · 2017-02-08T22:02:58.205Z · score: 0 (0 votes) · LW · GW

It should be posted, but by someone who can more rigorously describe its application to an optimizer than "probably needs to be locally smooth-ish."

Comment by khafra on Slate Star Codex Notes on the Asilomar Conference on Beneficial AI · 2017-02-08T12:21:50.628Z · score: 3 (3 votes) · LW · GW

Point 8, about the opacity of decision-making, reminded me of something I'm surprised I haven't seen on LW before:

LIME, Local Interpretable Model-agnostic Explanations, can show a human-readable explanation for the reason any classification algorithm makes a particular decision. It would be harder to apply the method to an optimizer than to a classifier, but I see no principled reason why an approach like this wouldn't help understand any algorithm that has a locally smooth-ish mapping of inputs to outputs.
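The core trick is simple enough to sketch without the lime library itself: perturb the input near the point you care about, query the black box, and fit a locally weighted linear model whose coefficients serve as the explanation. Everything below (function names, kernel width, noise scale) is illustrative, not the actual lime-package API:

```python
import numpy as np

def local_explanation(predict, x, n_samples=5000, width=0.75, seed=0):
    """Toy LIME-style explainer: fit a weighted linear model around x.

    `predict` maps a batch of inputs to class-1 probabilities; all
    names and constants here are illustrative, not LIME's real API.
    """
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(size=(n_samples, x.size))        # perturb locally
    y = predict(Z)                                      # query the black box
    w = np.exp(-((Z - x) ** 2).sum(axis=1) / width**2)  # proximity weights
    A = np.hstack([Z, np.ones((n_samples, 1))])         # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
    return coef[:-1]  # per-feature local importances

# A black-box classifier that, near the origin, leans mostly on feature 0
black_box = lambda Z: 1 / (1 + np.exp(-(3 * Z[:, 0] + 0.2 * Z[:, 1])))
weights = local_explanation(black_box, np.zeros(2))
```

Here `weights` should assign most of the local importance to feature 0; the real LIME adds sparsity constraints and interpretable representations for text and images on top of this.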

Comment by khafra on How to escape from your sandbox and from your hardware host · 2015-08-31T14:29:40.677Z · score: 3 (3 votes) · LW · GW

provably secure software mechanisms rely on an idealized model of hardware

In my experience, they also define an attacker model against which to secure. There are no guarantees against attackers with greater access, or abilities, than specified in the model.

Comment by khafra on Crazy Ideas Thread · 2015-07-22T10:51:23.481Z · score: 0 (0 votes) · LW · GW

Dave Asprey says, with a reasonably large set of referenced studies, that it's the mold in food which reduces your fed performance.

Comment by khafra on Are consequentialism and deontology not even wrong? · 2015-06-04T23:02:13.258Z · score: 0 (0 votes) · LW · GW

Related SMBC.

Comment by khafra on Debunking Fallacies in the Theory of AI Motivation · 2015-05-14T12:12:50.724Z · score: 2 (2 votes) · LW · GW

If you think this is wrong, take it up with the people whose work I am both quoting and analyzing in this paper, because THAT IS WHAT THEY ARE CLAIMING. I am not the one saying that "the AI is programmed with good intentions", that is their claim.

I think I spotted a bit of confusion: the programmers of the "make everyone happy" AI had good intentions. But the AI itself does not have good intentions, because the intent "make everyone happy" is not good, albeit in a way that its programmers did not think of.

Comment by khafra on A pair of free information security tools I wrote · 2015-05-07T13:20:09.386Z · score: 1 (1 votes) · LW · GW

If that data is encrypted (assuming no headers or footers or obvious block divisions), then it will appear to an attacker like random bytes. Whether or not that's distinguishable from the original image depends on whether the low bits of the original image are observably nonrandom, and that's not something I know offhand

It's super-easy to spot in a histogram, so much so that there's ongoing research into making it less detectable.
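A quick sketch of the classic version of that histogram check (synthetic stand-in data, not real steganalysis tooling): overwriting the least-significant-bit plane with random, e.g. encrypted, bits equalizes the counts of each value pair (2k, 2k+1), so a chi-square statistic over those pairs drops sharply compared to the cover image:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256 * 256
# Stand-in cover image: intensities roughly Gaussian, as in many photos
cover = np.clip(rng.normal(128, 10, N).round(), 0, 255).astype(np.uint8)

# "Embedding" an encrypted payload: overwrite the LSB plane with random bits
stego = (cover & 0xFE) | rng.integers(0, 2, N, dtype=np.uint8)

def pair_chi2(a):
    """Chi-square statistic over (2k, 2k+1) histogram pairs.

    LSB embedding of random data equalizes each pair, driving the
    statistic down -- the signature the histogram attack looks for.
    """
    h = np.bincount(a, minlength=256).astype(float)
    even, odd = h[0::2], h[1::2]
    exp = (even + odd) / 2
    m = exp > 5  # ignore near-empty pairs
    return float(np.sum((even[m] - exp[m]) ** 2 / exp[m]))

print(pair_chi2(cover), pair_chi2(stego))
```

On this toy data the cover statistic comes out several times larger than the stego statistic, which is exactly the detectability the ongoing research tries to remove.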

Comment by khafra on Gasoline Gal looks under the hood (post 1 of 3) · 2015-05-05T12:22:01.841Z · score: 2 (2 votes) · LW · GW

Presumably it's in conflict with the instrumental values of retaining resources which could be used for other terminal values (the money she would save, going with the fuel cell), and the combination of instrumental and terminal values represented by the improved acceleration of the fuel cell.

Comment by khafra on How to sign up for Alcor cryo · 2015-04-28T11:35:50.131Z · score: 6 (6 votes) · LW · GW

Do you have plans for when your term life insurance expires, but you're still alive (which is, actuarially speaking, fairly certain)?

Comment by khafra on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-16T11:42:44.425Z · score: 2 (2 votes) · LW · GW

2) A fanfic where smarter versions of canon characters fight each other with complicated plots.

Hogwarts Battle School

Comment by khafra on Open thread, Mar. 2 - Mar. 8, 2015 · 2015-03-03T16:18:41.728Z · score: 7 (7 votes) · LW · GW

...supporters say the opposition leader was assassinated to silence him...

I see headlines like this fairly regularly.

Does anybody know of a list of notable opposition leaders, created when all members of the list were alive? Seems like it could be educational to compare the death rate of the list (a) across countries, and (b) against their respective non-notable demographics.

Comment by khafra on Harper's Magazine article on LW/MIRI/CFAR and Ethereum · 2014-12-15T11:58:25.826Z · score: 0 (0 votes) · LW · GW

I just want to know about the actuary from Florida; I didn't think we had any other LW'ers down here.

Comment by khafra on Rationality Quotes November 2014 · 2014-12-05T11:42:54.777Z · score: 2 (2 votes) · LW · GW

Your observation of the reading on the scale is true, of course. Your observation that the weight is 51 grams is false.

"This weight masses 51 grams" is not an observation, it's a theory attempting to explain an observation. It just seems so immediate, so obvious and inarguable, that it feels like an observation.

Comment by khafra on Rationality Quotes November 2014 · 2014-12-04T19:44:11.271Z · score: 1 (3 votes) · LW · GW

No observation is false. Any explanation for a given observation may, with finite probability, be false; no matter how obvious and inarguable it may seem.

Comment by khafra on Why I will Win my Bet with Eliezer Yudkowsky · 2014-12-02T12:33:21.704Z · score: 0 (0 votes) · LW · GW

an AI will not have a fanatical goal of taking over the world unless it is programmed to do this.

It is true that an AI could end up going “insane” and trying to take over the world, but the same thing happens with human beings

Are you asserting that all the historic conquerors and emperors who've taken over the world were insane? Is it physically impossible for an agent to rationally plan to take over the world, as an intermediate step toward some other, intrinsic goal?

there is no reason that humans and AIs could not work together to make sure this does not happen

If the intelligence difference between the smartest AI and other AIs and humans remains similar to the difference between an IQ 180 human and an IQ 80 human, Robin Hanson's Malthusian hellworld is our primary worry, not UFAI. A strong singleton taking over the world is only a concern if a strong singleton is possible.

If you program an AI with an explicit or implicit utility function which it tries to maximize...


But if you program an AI without an explicit utility function, just programming it to perform a certain limited number of tasks, it will just do those tasks.

Yes, and then someone else will, eventually, accidentally create an AI which behaves like a utility maximizer, and your AI will be turned into paperclips just like everything else.

Comment by khafra on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-30T18:29:33.003Z · score: 2 (2 votes) · LW · GW

Are there lists of effective charities for specific target domains? For social reasons, I sometimes want to donate to a charity focused on some particular cause; but given that constraint, I'd still like to make my donation as effective as possible.

Comment by khafra on Link: Elon Musk wants gov't oversight for AI · 2014-10-29T11:55:23.107Z · score: 8 (8 votes) · LW · GW

Do not spam high-status people, and do not communicate with high-status people in a transparent attempt to affiliate with them and claim some of their status for yourself.

Comment by khafra on 2014 Less Wrong Census/Survey · 2014-10-23T13:14:03.554Z · score: 46 (46 votes) · LW · GW

I would have given a response for digit ratio if I'd known about the steps to take the measurement before opening the survey, or if it were at the top of the survey, or if I could answer on a separate form after submitting the main survey. I didn't answer because I was afraid that if I took the time to do so, the survey form, or my https connection to it, or something else would time out, and I would lose all the answers I had entered.

Comment by khafra on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-10T14:43:59.473Z · score: 0 (0 votes) · LW · GW

To implement Bayes' Theorem, the prior of something must be known

Not quite the way I'd put it. If you know the exact prior for the unique event you're predicting, you already know the posterior. All you need is a non-pathologically-terrible prior, although better ones will get you to a good prediction with fewer observations.
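A quick beta-binomial illustration of that point (the numbers are made up): two sharply different priors, fed the same coin-flip data, end up with nearby posteriors, whereas a pathological prior that assigns zero probability to the truth would never move at all:

```python
def posterior_mean(alpha, beta, heads, tails):
    """Mean of the Beta posterior after binomial coin-flip evidence."""
    return (alpha + heads) / (alpha + beta + heads + tails)

heads, tails = 72, 28                            # shared observations
optimist = posterior_mean(9, 1, heads, tails)    # prior mean 0.9
skeptic = posterior_mean(1, 9, heads, tails)     # prior mean 0.1
print(round(optimist, 3), round(skeptic, 3))     # both near the empirical 0.72
```

With more flips the two posterior means squeeze together around the empirical frequency; the prior only controls how many observations that takes.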

Comment by khafra on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-04T01:03:12.070Z · score: 2 (2 votes) · LW · GW

That one shows up in fiction every now and then, but if they're galaxy-spanning, there's no particular reason for them to have avoided eating all the stars unless we're completely wrong about the laws of physics. The motivation might not exactly be "hiding," but it'd have to be something along the lines of a nature preserve; and would require a strong singleton.

Comment by khafra on Open thread, Sept. 29 - Oct.5, 2014 · 2014-10-03T17:17:38.147Z · score: 3 (3 votes) · LW · GW

Alien-wise, most of the probability-mass not in the "Great Filter" theory is in the "they're all hiding" theory, right? Are there any other big events in the outcome space?

I intuitively feel like the "they're all hiding" theories are weaker and more speculative than the Great Filter theories, perhaps because including agency as a "black box" within a theory is bad, as a rule of thumb.

But, if most of the proposed candidates for the GF look weak, how do the "they're all hiding" candidates stack up? What is there, besides the Planetarium Hypothesis and Simulationism? Are there any that don't require a strong Singleton?

Comment by khafra on Rationality Quotes October 2014 · 2014-10-03T13:51:37.355Z · score: 1 (3 votes) · LW · GW

Don't let a summary of reality distract you from reality, even if it's an accurate summary.

-- Steven Kaas

Comment by khafra on Rationality Quotes September 2014 · 2014-09-26T15:05:45.608Z · score: 5 (5 votes) · LW · GW

It's really weird how [Stop, Drop, and Roll] is taught pretty much yearly but personal finance or ethics usually just have one class at the end of high school.

-- CornChowdah, on reddit

Comment by khafra on Rationality Quotes September 2014 · 2014-09-08T15:20:57.508Z · score: 3 (3 votes) · LW · GW

what to do when your best effort to fill the gap is thin and unconvincing - the simplest solution would appear to be to go back to the person proposing the position that you are critically commenting about (or someone else who shares his views on the subject), and simply asking.

So, you go back to the person you're going to argue against, before you start the argument, and ask them about the big gap in their original position? That seems like it could carry the risk of kicking off the argument a little early.

Comment by khafra on Anthropics doesn't explain why the Cold War stayed Cold · 2014-09-03T13:35:46.618Z · score: 1 (1 votes) · LW · GW

The objection is that the number of observers able to ask questions about royal siblings barely depends on the actual number of royal siblings historically present, while the number of observers able to ask questions about the absence of large thermonuclear exchanges depends heavily on whether such exchanges actually occurred.

Comment by khafra on Politics is hard mode · 2014-07-22T11:54:37.147Z · score: 2 (2 votes) · LW · GW

Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"?

Yes, this is what I thought of when I read this:

In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.”

It's not that politics isn't important to get right, it's just that talking about it has negative expected value. Nearly every political argument between two people makes at least one person further entrenched in error.

Maybe "politics is like that scene in a thriller where the two guys are fighting to reach a single gun; but in this case the handle and trigger are actually poisoned."

Comment by khafra on Rationality Quotes July 2014 · 2014-07-10T15:31:33.235Z · score: 3 (3 votes) · LW · GW

Learning can occur without theory. I spent years researching and developing systems to do just that.

If you're talking about unsupervised classification algorithms, don't they kinda make their theory as they learn? At least, in the "model," or "lossy compression" sense of "theory." Finding features that cluster well in a data set is forming a theory about that data set.
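A minimal sketch of that framing, on toy data with plain k-means: the learned centroids are the "theory," and replacing each point by its centroid is the lossy compression:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: two well-separated blobs the algorithm is never told about
data = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 0.5, (100, 2))])

def kmeans(X, k, iters=20):
    centers = X[:: len(X) // k][:k].copy()  # deterministic spread-out init
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

centers, labels = kmeans(data, 2)
# The "lossy compression": 2 centroids + 200 labels instead of 200 points
mse = float(((data - centers[labels]) ** 2).mean())
```

The two centroids plus the label assignments compress the dataset at some reconstruction cost (`mse`); finding features or clusters that make that cost small is, in the model-as-compression sense, forming a theory about the data.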

Comment by khafra on Open thread, 7-14 July 2014 · 2014-07-08T14:23:24.256Z · score: 1 (3 votes) · LW · GW

Has anybody written up a primer on "what if utility is lexically ordered, or otherwise not quite measurable in real numbers"? Especially in regard to dust specks?

Comment by khafra on Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming · 2014-06-26T11:54:22.039Z · score: 7 (7 votes) · LW · GW

Yeah, in the AGW case it sounds like the question's more like "to what extent is your belief the result of climate models, and to what extent is it the result of a linear regression model?"

Comment by khafra on Life insurance for Cryonics, how many years? · 2014-05-27T17:35:31.201Z · score: 0 (0 votes) · LW · GW

Your chances of dying before middle age are relatively small. Your chances of dying in a way that renders your brain preservable, before middle age, are astronomically small. Thus, although whole life costs around 2^3 times as much as term, whole life provides something around 2^8 times the benefit.
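A back-of-the-envelope version of that claim, with loudly made-up numbers (none of these are actuarial figures):

```python
# All probabilities below are illustrative placeholders, not actuarial data.
p_die_during_term = 0.05        # dying before a typical term policy expires
p_preservable_if_young = 0.10   # young deaths skew toward accidents, etc.
p_preservable_if_old = 0.80     # deaths in old age are more often "slow"

# Probability each policy type ends up funding an actual preservation
term_ev = p_die_during_term * p_preservable_if_young
whole_ev = term_ev + (1 - p_die_during_term) * p_preservable_if_old
benefit_ratio = whole_ev / term_ev
print(round(benefit_ratio, 1))
```

On these toy numbers whole life funds a preservation roughly 150x as often for roughly 8x the premium, which is the shape of the argument even if the exact 2^8 figure depends on real actuarial tables.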

Comment by khafra on Botworld: a cellular automaton for studying self-modifying agents embedded in their environment · 2014-04-21T17:18:20.673Z · score: 0 (0 votes) · LW · GW

If you define waste heat, neural patterns, and so on as "output channels" then sure, we can say you only interact via I/O (although the line between I and O is fuzzy enough and your control over the O is small enough that I'd personally object to the distinction).

Also, even with perfect control of your own cognition, you would be restricted to a small subset of possible output strings. Outputting bits on multiple channels, each of which is dependent on the others, constrains you considerably; although I'm not sure whether the effect is lesser or greater than having output as a side effect of computation.

As I mentioned in a different context, it reminds me of UDT, or of the 2048 game: Every choice controls multiple actions.

Comment by khafra on Open thread, 21-27 April 2014 · 2014-04-21T16:15:36.467Z · score: 4 (4 votes) · LW · GW

Rejection Therapy is focused in that direction.

Comment by khafra on Open Thread April 16 - April 22, 2014 · 2014-04-16T13:09:55.887Z · score: 2 (2 votes) · LW · GW

Yet another possible failure mode for naive anthropic reasoning.

Comment by khafra on Open Thread April 8 - April 14 2014 · 2014-04-09T14:14:29.117Z · score: 6 (6 votes) · LW · GW

Since one big problem with neural nets is their lack of analyzability, this geometric approach to deep neural networks seems likely to be useful.

Comment by khafra on Polling Thread · 2014-04-08T11:18:33.214Z · score: 1 (1 votes) · LW · GW

My "other" vote is alternate-day fasting, which I've been doing all year. Not sure if that's what you're looking for, but I feel like it's a dietary restriction, and benefits my health.

Comment by khafra on Rationality Quotes April 2014 · 2014-04-03T13:53:38.089Z · score: 4 (4 votes) · LW · GW

And his textbook on the new univalent foundations of mathematics in homotopy type theory is here.

Comment by khafra on Open thread, 24-30 March 2014 · 2014-03-31T11:24:30.654Z · score: 0 (0 votes) · LW · GW

It is that latter property which makes it well suited for proofs in homotopy theory (and category theory). Most of the examples in slides you link to are about homotopy theory.

I found a textbook after reading the slides, which may be clearer. I really don't think their mathematical aspirations are limited to homotopy theory, after reading the book's introduction--or even the small text blurb on the site:

Homotopy type theory offers a new “univalent” foundation of mathematics, in which a central role is played by Voevodsky’s univalence axiom and higher inductive types. The present book is intended as a first systematic exposition of the basics of univalent foundations, and a collection of examples of this new style of reasoning.

Comment by khafra on Open thread, 24-30 March 2014 · 2014-03-28T18:12:40.170Z · score: -1 (1 votes) · LW · GW

Well, I don't really math; but the way I understand it, computable universe theory suggests Solomonoff's Universal prior, while the ZFC-based mathematical universe theory--being a superset of the computable--suggests a larger prior; thus weirder anthropic expectations. Unless you need to be computable to be a conscious observer, in which case we're back to SI.

Comment by khafra on Open thread, 24-30 March 2014 · 2014-03-28T16:43:26.012Z · score: 1 (7 votes) · LW · GW

Apparently, founding mathematics on Homotopy Type Theory instead of ZFC makes automated proof checking much simpler and more elegant. Has anybody tried reformulating Max Tegmark's Level IV Multiverse using Homotopy Type Theory instead of sets to see if the implied prior fits our anthropic observations better?

Comment by khafra on [Link] Zack Weinersmith's One-Liner Generator · 2014-03-27T11:43:09.535Z · score: 1 (1 votes) · LW · GW

This is the same as Schmidhuber's compression-based theory of aesthetics, right?

Comment by khafra on Making LessWrong notable enough for its own Wikipedia page · 2014-03-17T13:58:45.522Z · score: 0 (0 votes) · LW · GW

Hacker News had a semi-joking strategy, "everyone post articles on Haskell internals*" on days following media exposure. It actually seemed to work pretty well--but I don't know if we have enough posting volume, and enough un-posted articles on the mathematical side of decision theory and anthropics to use a similar strategy.

*(edit: it was Erlang internals; gjm's memory is better than mine).

Comment by khafra on Irrationality Game III · 2014-03-13T15:33:10.437Z · score: 0 (0 votes) · LW · GW

I hope I haven't been misleading.

I don't think you've been any more misleading than a dualist is pretty much required to be. The basic ambiguities of dualism do, of course, remain:

  1. How does the non-spacetime stuff produce subjective experience, when spacetime stuff can't?

  2. How does your subjective experience correlate with the environment and actions of your material body, just as if there were two-way causation going on? (even when you reduce causation to a Pearl-style net, or to the large-scale behavior of many individually time-reversible components, this question remains).

Comment by khafra on Rationality Quotes March 2014 · 2014-03-13T13:54:36.419Z · score: 5 (5 votes) · LW · GW

Who does more good with their life--the person who contributes a large amount of money to efficient charities while avoiding the people nearby, or the person who ignores anyone more than 100 miles away while being nice to his mother, his employer, and the man he meets in the train?

Comment by khafra on Open thread, 11-17 March 2014 · 2014-03-13T12:11:05.422Z · score: 3 (3 votes) · LW · GW

Since it was during the military there's nothing I can do about this misdiagnosis

Even if the VA didn't want to give you partial disability, have you talked with veterans' advocates? They might be able to help you get compensation.

Comment by khafra on Irrationality Game III · 2014-03-13T10:55:56.336Z · score: 8 (8 votes) · LW · GW

Interesting. So, you have Robin Hanson's belief that we won't get a strong singleton; but you lack his belief that emulated minds will be able to evaluate each other's abilities with enough confidence that trade (taking into account the expected value of fighting) will be superior to fighting? That's quite the idiosyncratic position, especially for 95% confidence.

Comment by khafra on Irrationality Game III · 2014-03-13T10:46:08.782Z · score: 4 (4 votes) · LW · GW

By "any subcomponent," do you mean that the powerset of the universe is composed of conscious entities, even when light speed and expansion preclude causal interaction within the conscious entity? Because, if the universe is indeed spatially infinite, that means that the set of conscious entities is the infinity of the continuum; and I'm really confused by what that does to anthropic reasoning.

Comment by khafra on Irrationality Game III · 2014-03-13T10:41:13.268Z · score: 1 (1 votes) · LW · GW

What does "spacetime" mean? Is the real "you" neither a causal descendent, nor a causal ancestor, of any of your body's actions? I'd have to put that down somewhere around argmin probability.

Or do you just mean that you consider the real you to be something like a platonic computation, which your material body instantiates? That's not too far off from some realms of LW semi-orthodoxy.

Comment by khafra on Open Thread February 25 - March 3 · 2014-03-12T13:09:50.435Z · score: 0 (0 votes) · LW · GW

Don't you have to define the receiver as well as the transmitter, to have any idea about the channel bandwidth? I mean, if the "outside world" is the Dark Lords of the Matrix, the theoretical maximum output bandwidth is the processing speed of the mind.

Comment by khafra on Rationality Quotes March 2014 · 2014-03-10T15:32:09.458Z · score: 5 (5 votes) · LW · GW

... the controlling factor, the root cause, of risk is dependence, particularly dependence on the expectation of stable system state. Yet the more technologic the society becomes, the greater the dynamic range of possible failures. When you live in a cave, starvation, predators, disease, and lightning are about the full range of failures that end life as you know it and you are well familiar with each of them. When you live in a technologic society where everybody and everything is optimized in some way akin to just-in-time delivery, the dynamic range of failures is incomprehensibly larger and largely incomprehensible.

-- Dan Geer

(rationality applicability: antifragility & disjunctive prediction vs. optimization for conjunctive prediction)

[Link] Walking Through Doors Causes Forgetting

2011-11-21T14:56:47.207Z · score: 5 (6 votes)

Amateur Cryonics (one guy packed in dry ice) Festival Seeks Buyer

2011-06-17T16:57:41.276Z · score: 4 (7 votes)

Free Thought Film Festival: Tampa traditional rationalist gathering this weekend (13-15 May)

2011-05-14T00:00:12.341Z · score: 1 (2 votes)

Article on quantified lifelogging

2010-11-15T16:38:02.521Z · score: 0 (3 votes)