Posts

Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels 2015-08-15T22:45:01.389Z

Comments

Comment by Transfuturist on Lesswrong 2016 Survey · 2016-03-28T23:59:00.186Z · LW · GW

Most of those who haven't ever been on Less Wrong will provide data for that distinction. It isn't noise.

Comment by Transfuturist on Lesswrong 2016 Survey · 2016-03-27T00:42:42.640Z · LW · GW

This is a diaspora survey, for the pan-rationalist community.

Comment by Transfuturist on Lesswrong 2016 Survey · 2016-03-27T00:39:57.842Z · LW · GW

I have taken the survey. I did not treat the metaphysical probabilities as though I had a measure over them, because I don't.

Comment by Transfuturist on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-24T17:57:16.042Z · LW · GW

I guess the rejection is more based on the fact that his message seems like it violates deep-seated values on your end about how reality should work than his work being bullshit.

Lumifer rejects him because he thinks Simon Anhold is simply a person who isn't serious but a hippy.

How about you let Lumifer speak for Lumifer's rejection, rather than tilting at straw windmills?

Comment by Transfuturist on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T23:42:03.027Z · LW · GW

The equivocation on 'created' in those four points is enough reason to ignore them entirely.

Comment by Transfuturist on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T23:06:58.716Z · LW · GW

I'm curious why this was downvoted. Was it the last statement, which has political context?

Comment by Transfuturist on Open Thread Feb 22 - Feb 28, 2016 · 2016-02-22T20:32:13.176Z · LW · GW

Are there any egoist arguments for (EA) aid in Africa? Does investment in Africa's stability and economic performance offer any instrumental benefit to a US citizen who does not terminally care about the welfare of Africans?

Comment by Transfuturist on The Ultimate Testing Grounds · 2015-12-20T00:06:39.927Z · LW · GW

We don't need to describe the scenarios precisely physically. All we need to do is describe them in terms of the agent's epistemology, with the same sort of causal surgery as described in Eliezer's TDT. Full epistemological control means you can test your AI's decision system.

This is a more specific form of the simulational AI box. The rejections of simulational boxing I've seen rely on treating the AI as a black box, free to act and sense with no observation possible, somehow gaining knowledge of the parent world through inconsistencies and probabilities, and escaping through bugs in its containment program. White-box simulational boxing can completely compromise the AI's apparent reality and actual abilities.

Comment by Transfuturist on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-07T16:26:00.954Z · LW · GW

Stagnation is actually a stable condition. It's "yay stability" vs. "boo instability," and "yay growth" vs. "boo stagnation."

Comment by Transfuturist on Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI · 2015-10-07T15:26:45.889Z · LW · GW

(Ve could also be copied, but it would require copying the whole world.)

Why would that be the case? And if it were the case, why would that be a problem?

Comment by Transfuturist on Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI · 2015-10-07T06:25:14.248Z · LW · GW

Resurrect one individual, filling gaps with random quantum noise.

Resurrect all possible individuals with all combinations of noise.

That is a false trichotomy. You're perfectly capable of deciding to resurrect some sparse coverage of the distribution, and those differences are not useless. In addition, "the subject is almost exactly resurrected in one of the universes" is true of both options two and three, and you don't have to refer to spooky alternate histories to get it in the first place.

Comment by Transfuturist on Deliberate Grad School · 2015-10-07T03:57:08.925Z · LW · GW

8(

Comment by Transfuturist on Deliberate Grad School · 2015-10-07T01:45:06.911Z · LW · GW

Quals are the GRE, right?

Comment by Transfuturist on Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI · 2015-10-07T01:32:26.295Z · LW · GW

...Okay? One in ten sampled individuals will be gay. You can do that. Does it really matter when you're resurrecting the dead?

Your own proposal is to only sample one and call the inaccuracy "acausal trade," which isn't even necessary in this case. The AI is missing 100 bits. You're already admitting many-worlds. So the AI can simply draw those 100 bits out of quantum randomness, and each of the 2^100 possible completions will be realized as a different individual in some Everett branch. The incorrect ones you could call "acausal travelers," even though you're just wrong. There will still be the "correct" individual, the exact descendant of this reality's instance, in one of the Everett branches. The fact that it is "correct" doesn't even matter; there is only ever "close enough," but the "correct" one is there.

Comment by Transfuturist on Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI · 2015-10-06T20:37:41.368Z · LW · GW

What's wrong with gaps? This is probabilistic in the first place.

Comment by Transfuturist on Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI · 2015-10-06T15:07:16.259Z · LW · GW

No, that's easy to grasp. I just wonder what the point is. Conservation of resources?

Comment by Transfuturist on Digital Immortality Map: How to collect enough information about yourself for future resurrection by AI · 2015-10-06T06:59:24.476Z · LW · GW

The evidence available about any dead person produces a distribution over human brains, given enough computation. The more evidence there is, the more focused the distribution. Given post-scarcity, the FAI could simply produce many samples from each distribution.

This is certainly a clever way of producing mind-neighbors. I find problems with these sorts of schemes for resurrection, though. Socioeconomic privilege, tragedy of the commons, and data rot, to be precise.

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-06T01:32:34.211Z · LW · GW

You're confusing the intuitive notion of "simple" with "low Kolmogorov complexity"

I am using the word "simple" to refer to "low K-complexity." That is the context of this discussion.

It does if you look at the rest of my argument.

The rest of your argument is fundamentally misinformed.

Step 1: Simulate the universe for a sufficiently long time.

Step 2: Ask the entity now filling up the universe, "Is this an agent?"

Simulating the universe to identify an agent is the exact opposite of a short referent. Anyway, even if simulating a universe were tractable, it does not provide a low complexity for identifying agents in the first place. Once you're done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are then left with the concept of universe-filling AGIs, not agents.

You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
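One rough way to put that distinction in symbols, as a sketch in standard Kolmogorov-complexity notation (the symbols R, S, and T below are illustrative assumptions, not anything stated in the quoted argument):

```latex
% R: transition rule ("the physics"), S: initial state, x_T: configuration after T steps.
% Specifying the whole configuration by simulation is cheap:
K(x_T) \;\le\; K(\langle R, S, T\rangle) + O(1),
% where \langle\cdot\rangle is any fixed encoding of the triple. But this bounds the
% complexity of the entire configuration x_T only; picking an agent out of x_T costs
% additional locating bits, and a general predicate "is this subsystem an agent?" is a
% different object whose complexity this bound says nothing about.
```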

What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well "reducing entropy" as a concept does have low Kolmogorov complexity.

It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. "For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible." That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that such a specification is possible, you have to exhibit a simple algorithm, and your universe-filling algorithm is not simple.

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-05T19:47:12.080Z · LW · GW

It reminded me of reading Simpsons comics, is all.

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-05T15:05:30.358Z · LW · GW

Krusty's Komplexity Kalkulator!

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-05T04:18:39.299Z · LW · GW

Doesn't that undermine the premise of the whole "a godless universe has low Kolmogorov complexity" argument that you're trying to make?

Again, there is a difference between the complexity of the dynamics defining state transitions, and the complexity of the states themselves.

But, the AGI can. Agentiness is going to be a very important concept for it. Thus it's likely to have a short referent to it.

What do you mean by "short referent"? Yes, it will likely be an often-used concept, so the internal symbol signifying it is likely to be short, but that says absolutely nothing about the complexity of the concept itself. If you want to say that "agentiness" is a K-simple concept, perhaps you should demonstrate that by explicating a precise computational definition of an agent detector, and show that it doesn't fail on any conceivable edge cases.

Saying that it's important doesn't mean it's simple. "For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must have low Kolmogorov complexity."

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-04T05:51:34.397Z · LW · GW

An AGI has low Kolmogorov complexity since it can be specified as "run this low Kolmogorov complexity universe for a sufficiently long period of time".

That's a fundamental misunderstanding of complexity. The laws of physics are simple, but the configurations of the universe that runs on them can be incredibly complex. The amount of information needed to specify the configuration of any single cubic centimeter of space is literally unfathomable to human minds. Running a simulation of the universe until intelligences develop inside of it is not the same as specifying those intelligences, or intelligence in general.

Also the AGI to be successful is going to have to be good at detecting agents so it can dedicate sufficient resources to defeating/subverting them. Thus detecting agents must have low Kolmogorov complexity.

The convenience of some hypothetical property of intelligence does not act as a proof of that property. Please note that we are in a highly specific environment, where humans are the only sapients around and animals are the only immediately recognizable agents. There are sci-fi stories about your "necessary" condition being exactly false, in which humans do not recognize some intelligence because it is not structured in a way that humans are capable of recognizing.

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-04T05:24:54.890Z · LW · GW

I could stand to meet a real-life human. I've heard they exist, but I've had such a hard time finding one!

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-04T05:18:43.194Z · LW · GW

I don't think mind designs are dependent on their underlying physics. The physics is a substrate, and as long as it provides general computation, intelligence would be achievable in a configuration of that physics. The specifics of those designs may depend on how those worlds function, like how jellyfish-like minds may be different from bird-like minds, but not the common elements of induction, analysis of inputs, and selection of outputs. That would mean the simplest a priori mind would have to be computed by the simplest provision of general computation, however. An infinitely divine Turing Machine, if you will.

That doesn't mean a mind is more basic than physics, though. That's an entirely separate issue. I haven't ever seen a coherent model of God in the first place, so I couldn't begin to judge the complexity of its unproposed existence. If God is a mind, then what substrate does it rest on?

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-04T04:59:35.523Z · LW · GW

Two or three people confused about K-complexity doesn't herald the death of LW.

Comment by Transfuturist on Rationality Quotes Thread September 2015 · 2015-10-04T04:52:31.566Z · LW · GW

The non-triviality arises from technical considerations

The laws of physics as we know them are very simple, and we believe that the true underlying laws may be even simpler. Meanwhile, a mind existing outside of physics is somehow a more consistent and simple explanation than humans having hardware in the brain that promotes hypotheses involving human-like agents behind everything, which explains away every religion ever? Minds are not simpler than physics. This is not a technical controversy.

Comment by Transfuturist on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-18T01:34:54.857Z · LW · GW

I especially like the question "Is it ethical to steal truffle mushrooms and champagne to feed your family?" That's an intuitive concept fairly voiced. Calculating the damage to the trolley is somewhat ridiculous, however.

Comment by Transfuturist on Open thread, Aug. 17 - Aug. 23, 2015 · 2015-08-18T01:27:27.363Z · LW · GW

The Open Thread is for all posts that don't necessitate their own thread. The media thread is for recommendations on entertainment. I don't see why comments should be necessary to bring a paper to LW's attention, especially in the Open Thread, and others clearly disagree with your opinion on this matter.

Comment by Transfuturist on Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels · 2015-08-17T22:22:25.155Z · LW · GW

I believe in this instance he was reasoning alethically. De facto you are not necessarily correct.

Comment by Transfuturist on Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels · 2015-08-16T01:40:52.976Z · LW · GW

I'm not claiming this is conclusive evidence of danger; I'm just concerned.

Comment by Transfuturist on Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels · 2015-08-16T01:37:05.158Z · LW · GW

Thanks. I only saw this press release and I was concerned that there might be danger.

Comment by Transfuturist on Top 9+2 myths about AI risk · 2015-07-03T08:08:32.012Z · LW · GW

As an example of number 10, consider the Optimalverse. The friendliest death of self-determination I ever did see.

Unfortunately, I'm not quite sure of the point of this post, considering you're posting a reply to news articles on a forum filled with people who already understand the mistakes those articles made in the first place. Perhaps it's meant as a repository of rebuttals to common misconceptions posited in the future?

Comment by Transfuturist on [Link] Self-Representation in Girard’s System U · 2015-06-25T21:36:18.638Z · LW · GW

This is no such advancement for AI research. This only provides the possibility of typechecking your AI, which is neither necessary nor sufficient for self-optimizing AI programs.

Comment by Transfuturist on The paperclip maximiser's perspective · 2015-05-04T21:07:32.490Z · LW · GW

She is preserving paperclip-valuing intelligence by protecting herself from the potential threat of non-paperclip-valuing intelligent life, and can develop interstellar travel herself.

It's a lonely job, but someone has to make the maximum possible amount of paperclips. Someone, and only one. Anyone else would be a waste of paperclip-material.

Comment by Transfuturist on Gasoline Gal looks under the hood (post 1 of 3) · 2015-05-04T20:59:53.874Z · LW · GW

It is not irrational, because preferences are arational. Now, Gal might be mistaken about her preferences, but she is the current best source of evidence on her preferences, so I don't see how her actions in this case are irrational either. She's an engine specialist and a mechanic, so it's perfectly understandable that she would want something she knew how to maintain and repair.

Comment by Transfuturist on Is Determinism A Special Case Of Randomness? · 2015-05-04T20:52:11.910Z · LW · GW

That doesn't get rid of randomness, it pushes it into the observer.

Comment by Transfuturist on Is my theory on why censorship is wrong correct? · 2015-04-17T23:19:43.688Z · LW · GW

You seem to be opposed to the nature of your species. This can't be very good for your self-esteem.

Comment by Transfuturist on Boxing an AI? · 2015-03-28T02:14:42.663Z · LW · GW

What use is such an AI? You can't even use the behavior of its utility function to predict a real-world agent because it would have such a different ontology. Not to mention the fact that GoL boards of the complexity needed for anything interesting would be massively intractable.

Comment by Transfuturist on Boxing an AI? · 2015-03-28T02:11:16.427Z · LW · GW

Hardcoding has nothing to do with it.

Comment by Transfuturist on Boxing an AI? · 2015-03-27T19:39:08.554Z · LW · GW

Well, actually, I think it could. Given that we want the AI to function as a problem solver for the real world, it would necessarily have to learn about aspects of the real world, including human behavior, in order to create solutions that account for everything in the real world that might throw off the accuracy of a lesser model.

Comment by Transfuturist on Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion · 2015-03-27T19:32:43.922Z · LW · GW

Thanks for the explanation. Do you have any alternatives?

Comment by Transfuturist on Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion · 2015-03-27T02:22:26.864Z · LW · GW

I disagree with John Rawls's veil-of-ignorance theory and even find it borderline disgusting (he is just assuming everybody is a risk-averse coward)

Um, what? What's wrong with risk-aversion? And what's wrong with the Veil of Ignorance? How does that assumption make the concept disgusting?

Comment by Transfuturist on Political topics attract participants inclined to use the norms of mainstream political debate, risking a tipping point to lower quality discussion · 2015-03-27T02:16:55.526Z · LW · GW

I think usernames would have to be anonymized, as well.

Comment by Transfuturist on Open thread, Mar. 16 - Mar. 22, 2015 · 2015-03-26T04:38:00.001Z · LW · GW

This was very informative, thank you.

Comment by Transfuturist on Open thread, Mar. 16 - Mar. 22, 2015 · 2015-03-24T23:22:07.924Z · LW · GW

Not so. I'm trying to figure out how to find the maximum entropy distribution for simple types, and recursively defined types are a part of that. This applies not only to strings but to sequences of all sorts, and I'm attempting to allow for the possibility of error correction in these techniques. What is the point of doing statistics on coin flips? Once you learn something about the flip result, you basically just know it.

Comment by Transfuturist on Open thread, Mar. 16 - Mar. 22, 2015 · 2015-03-23T20:51:55.811Z · LW · GW

The result of some built-in string function length(s), which, depending on the implementation of the string type, either returns the header integer stating the size or counts up to the terminator symbol and returns that count.
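For concreteness, a minimal sketch of those two implementations in C; the struct and function names are hypothetical, not any particular library's:

```c
#include <stddef.h>

/* Length-prefixed representation: the size is stored in a header field. */
struct PString {
    size_t len;   /* header integer stating the size */
    char  *data;  /* len bytes of content, no terminator required */
};

size_t pstring_length(const struct PString *s) {
    return s->len;            /* O(1): just read the header */
}

/* Terminated representation: count symbols until the terminator. */
size_t cstring_length(const char *s) {
    size_t n = 0;
    while (s[n] != '\0')      /* O(n): scan for the sentinel */
        n++;
    return n;
}
```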

Comment by Transfuturist on Open thread, Mar. 16 - Mar. 22, 2015 · 2015-03-23T07:38:28.518Z · LW · GW

If this post doesn't get answered, I'll repost in the next open thread. A test to see if more frequent threads are actually necessary.

I'm trying to make a prior probability mass distribution for the length of a binary string, and then generalize it to strings over alphabets of any size. I'm struggling to find one that has the right properties under the log-odds transformation while still obeying the laws of probability. The one I like the most is P(len = x) = 1/(x+2), since under log-odds a string of length x needs log₂(x+1) bits of evidence to reach even odds, which roughly matches the number of bits needed to write x. For a length of 15, that is all 4 bits of 1111, so its cost is 4 bits.

The problem is that the sum of 1/(x+2) over all lengths diverges, making it an improper prior. Are there restrictions under which I can still use this improper prior, or is there a way to find a proper prior with similarly desirable qualities?
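Working both claims out explicitly, assuming the prior assigns mass 1/(x+2) to each length x = 0, 1, 2, ...:

```latex
% Prior odds that a string has length x:
\frac{P(\mathrm{len}=x)}{1 - P(\mathrm{len}=x)}
  = \frac{1/(x+2)}{(x+1)/(x+2)}
  = \frac{1}{x+1},
\qquad \text{so reaching even odds costs } \log_2(x+1) \text{ bits}
\quad (x = 15 \Rightarrow 4 \text{ bits}).

% Normalization fails because the total mass is a tail of the harmonic series:
\sum_{x=0}^{\infty} \frac{1}{x+2} \;=\; \sum_{k=2}^{\infty} \frac{1}{k} \;=\; \infty.
```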

Comment by Transfuturist on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-16T07:27:46.449Z · LW · GW

Or 3141 May 9.

Comment by Transfuturist on [FINAL CHAPTER] Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 122 · 2015-03-16T07:25:48.063Z · LW · GW

The dark side is Voldemort's thought patterns. In other words, Voldemort is constantly in the dark side.

Comment by Transfuturist on Rationality: From AI to Zombies · 2015-03-14T05:53:15.531Z · LW · GW

Ah, so about as large as it takes for a fanfic to be good. :P