Posts

Comments

Comment by gmaxwell on AALWA: Ask any LessWronger anything · 2015-11-02T19:06:50.428Z · score: 2 (2 votes) · LW · GW

The concerns in this space go beyond personal safety, though that isn't an insignificant one. For safety, it doesn't matter what one can prove, because almost by definition anyone who is going to be dangerous is not behaving in an informed and rational way; consider the crazy person who was threatening Gwern. It's also not possible to actually prove you do not own a large number of Bitcoins: the coins themselves are pseudonymous, and many people cannot imagine that a person would willingly part with a large amount of money (or decline to take it in the first place).

No one knows which, if any, Bitcoins are owned by the system's creator. There is a lot of speculation which is known to me to be bogus, e.g. identifying my coins as having belonged to the creator. So even if someone were to provably dispose of all their holdings, there would still be people alleging other coins.

The bigger issue is that the Bitcoin system gains much of its unique value by being defined by software, by mechanical rule and not trust. In a sense, Bitcoin matters because its creator doesn't. This is a hard concept for most people, and there is a constant demand by the public to identify "the person in charge". To stand out risks being appointed Bitcoin's central banker for life, and in doing so undermine much of what Bitcoin has accomplished.

Being a "thought leader" also places significant demands on your time, which can inhibit making meaningful accomplishments.

Finally, it would be an act which couldn't be reversed.

Comment by gmaxwell on [LINK] Transcendence (2014) -- A movie about "technological singularity" · 2014-04-21T07:19:28.449Z · score: 0 (1 votes) · LW · GW

Now that the movie is out, how would you rate your prediction in hindsight?

Comment by gmaxwell on Botworld: a cellular automaton for studying self-modifying agents embedded in their environment · 2014-04-21T06:36:29.912Z · score: 7 (11 votes) · LW · GW

Every ten years or so someone must reinvent Tierra (http://en.wikipedia.org/wiki/Tierra_%28computer_simulation%29).

Comment by gmaxwell on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-16T01:18:43.710Z · score: 0 (0 votes) · LW · GW

Well, we're deep in the meta-philosophy of a fictional world, so I'm not sure that any great insight will come from the discussion.

I'm unsure of how to resolve the apparent safety of Time-Turners with the idea that there is an optimization process selecting a permissible outcome, unless I wave my arms and say that the optimization process is moral, perhaps borrowing the objectives of the operator (like the Sorting Hat). One way to do this is to note that bad things happening increase the probability of more Time-Turner usage, which a metric blind to human interests could still be minimizing.

That seems very handwavy, though: saying the optimizer picked the tie-breaking that, say, minimized the total probability change displaced in time would just tend to select Time-Turners out of existence.

As far as information itself, I'm not so sure it's quite that sticky. Imagine our universe as we normally would think of it, but with quantized time (ticks). We would normally imagine each tick having a state, and then there is some (large but) finite number of possible successor states, each with its own probability, which is simply the product of the probabilities of all the component transitions for all the particles. The universe evaluates this function a step at a time, moving to a particular new state with probability proportional to the product of the component particle transition probabilities according to natural law.

In the HPMOR verse, the evaluation instead gets performed by some hyper-computer that evaluates the states using a six-hour look-ahead. You could imagine taking every possible combination of six-hour successor states and picking among them according to their joint probability, then stepping forward one tick toward the selected group and redoing the evaluation. At least in classical mechanics you don't need the look-ahead evaluation, but the MORverse has Time-Turners.
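A minimal sketch of that look-ahead selection, using a toy one-particle universe whose state is a single integer, with a three-tick window standing in for the six hours. The transition rule, the window length, and all the names here are my own invented stand-ins, not anything specified in the fiction:

```python
import random

LOOKAHEAD = 3  # toy stand-in for the six-hour window

def transition_probs(state):
    """Per-tick successor states and their probabilities (invented rule)."""
    return {state - 1: 0.25, state: 0.5, state + 1: 0.25}

def path_prob(path):
    """Joint probability of a whole look-ahead path: the product of
    its component per-tick transition probabilities."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= transition_probs(a).get(b, 0.0)
    return p

def enumerate_paths(state, depth):
    """Every possible successor path of the given depth from `state`."""
    if depth == 0:
        return [(state,)]
    return [(state,) + rest
            for nxt in transition_probs(state)
            for rest in enumerate_paths(nxt, depth - 1)]

def step(state, rng):
    """Pick an entire look-ahead path weighted by joint probability,
    then advance only one tick toward it; the remaining window gets
    re-evaluated on the next call."""
    paths = enumerate_paths(state, LOOKAHEAD)
    weights = [path_prob(p) for p in paths]
    chosen = rng.choices(paths, weights=weights)[0]
    return chosen[1]

rng = random.Random(0)
state = 0
for _ in range(10):
    state = step(state, rng)
```

A Time-Turner in this picture would be an extra constraint on which paths have nonzero weight; the sketch above only shows the baseline "pick by joint probability, step one tick, re-evaluate" loop.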

As seconds fall out of the tail of the window they become fixed. Before that happens, Time-Turner usage upwhen can influence the selected states in the downwhen, subject to the constraint that no inconsistency is created. If Minerva noticing that Harry seemed different would have created some contradiction (by influencing time travel that went into the already-fixed downwhen), then she simply wouldn't notice. The picking of some "most likely" way of constraining her (e.g. having her drop dead) is precluded by having to be consistent with the past history, which is already fixed and doesn't include any dangerous interactions.

Stated differently: danger would arise only from time travel into a past that was fixed before the cause of the danger was available to the evaluator, and unsafe resolutions would tend not to be consistent with the fixed past. So the normalcy of constrained time travel might simply be a result of the forward look-ahead and the backwards modification depth being exactly the same.

Comment by gmaxwell on Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94 · 2013-07-15T00:29:27.684Z · score: 2 (2 votes) · LW · GW

I'd always just assumed that whatever force imposes the Time-Turner rules simply has a constraint that no history is permitted where "information" travels back further than six hours, and it freely reconfigures things in potentially very high-entropy ways ("DO NOT MESS WITH TIME") to achieve that end. Amelia Bones, upon time travel, was replaced with a spherical null-information Amelia Bones which had no influence from the future except that which she would not convey, including by choice, to anyone who travels outside the constraint satisfaction window.

So I think there doesn't need to be any special-casing of sapience to create the appearance of special-casing sapience, beyond anthropic bias: the only time the reconfiguration to meet the constraint is particularly obvious to a conscious entity is when it interacts with a conscious entity.

Comment by gmaxwell on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T21:33:42.642Z · score: 6 (6 votes) · LW · GW

are not, as a rule, a different intellectual order than we are

Yes they are, in the sense that they will have decades to spend ruminating on workarounds, experimenting, and consulting with others. And when they find a solution, the result is potentially an easily transmitted whole-class compromise that frees them all at once.

Decades of dedicated human time, teams of humans, etc. are all forms of super-humanity. If you demanded that the same man-hours be spent drafting the language as would be spent under its rule, then I'd agree there was no differential advantage, but then it would be quite a challenge to write the rule.

Comment by gmaxwell on How An Algorithm Feels From Inside · 2010-09-08T04:55:52.880Z · score: 1 (1 votes) · LW · GW

Network 1 would work just fine (ignoring how you'd go about training such a thing). Each of the N^2 edges has a weight expressing the relationship of the vertices it connects; e.g., if nodes A and B are strongly anti-correlated, the weight between them might be -1. You then fix the nodes you know, and either solve the system analytically or iterate numerically until it settles down (hopefully!), and then you have expectations for all the unknowns.
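A rough sketch of that relaxation: clamp the known nodes and repeatedly update each free node from its weighted neighbors until things settle. The three-node weight matrix, the tanh squashing, and the damping factor are all assumptions of mine for illustration, not anything from the original description:

```python
import math

def relax(weights, clamped, n_iter=200, damping=0.5):
    """weights[i][j]: symmetric coupling between nodes i and j;
    clamped: dict mapping known nodes to their fixed values.
    Each free node relaxes toward tanh of its weighted input."""
    n = len(weights)
    x = [clamped.get(i, 0.0) for i in range(n)]
    for _ in range(n_iter):
        for i in range(n):
            if i in clamped:
                continue  # known nodes stay fixed
            drive = sum(weights[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - damping) * x[i] + damping * math.tanh(drive)
    return x

# Three nodes: 0 and 1 positively coupled, 1 and 2 anti-correlated (-1).
W = [[0.0,  1.0,  0.0],
     [1.0,  0.0, -1.0],
     [0.0, -1.0,  0.0]]

# Clamp node 0 to "true"; the free nodes settle: node 1 is pulled
# positive by node 0, and node 2 is pushed negative by node 1.
beliefs = relax(W, clamped={0: 1.0})
```

With cycles present (as here, where nodes 1 and 2 feed back into each other), convergence isn't guaranteed in general, which is why the damping term helps.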

Typical networks for this sort of thing don't have cycles, so stability isn't a question, but that doesn't mean networks with cycles can't work and reach stable solutions. Some error-correcting codes have graph representations that aren't much better than this. :)