Comments

Comment by vitor on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T21:56:20.137Z · score: 2 (2 votes) · LW · GW

Did you also test what other software (OptaPlanner, as mentioned by HungryHobo, or any SAT solver or similar tool) can do to improve those same schedules?

Did you run your software on some standard benchmark? There exists a thing called the International Timetabling Competition, with publicly available datasets.

Sorry to be skeptical, but scheduling is an NP-hard problem with many practical applications, and tons of research has already been done in this area. I will grant that many small organizations don't have the know-how to set up an automated tool, so there may still be a niche for you, especially if you target a specific market segment and focus on making it as painless as possible.

Comment by vitor on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T21:34:13.720Z · score: 3 (3 votes) · LW · GW

This will not have any practical consequences whatsoever, even in the long term. It is already possible to perform reversible computation (see the paper by Bennett linked in the article), for which such lower bounds don't apply. The idea is very simple: make sure that your individual logic gates are reversible, so you can uncompute everything after reading out the results. This is most easily achieved by writing each gate's output to a separate wire. For example, an OR gate, instead of mapping 2 inputs to 1 output like

(x,y) --> (x OR y),

would map 3 inputs to 3 outputs like

(x, y, z) --> (x, y, z XOR (x OR y)),

causing the gate to be its own inverse.
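
To make the construction concrete, here is a minimal Python sketch of such a gate (the function name `reversible_or` is mine, chosen for illustration):

```python
def reversible_or(x, y, z):
    """Map (x, y, z) -> (x, y, z XOR (x OR y)).

    x and y pass through unchanged, and the OR result is XORed onto
    the extra wire z, so no information is erased.
    """
    return x, y, z ^ (x | y)

# The gate is its own inverse: applying it twice restores any input.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert reversible_or(*reversible_or(x, y, z)) == (x, y, z)

# With the extra wire initialized to 0, it carries x OR y afterwards.
print(reversible_or(1, 0, 0))  # (1, 0, 1)
```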

Secondly, I understand that the Landauer bound is so extremely small that worrying about it in practice is like worrying about the speed of light while designing an airplane.
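
For a rough sense of scale (the femtojoule figure for a present-day gate is only a ballpark assumption on my part):

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300                            # room temperature, K

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
print(landauer)                    # ~2.9e-21 J

cmos_gate = 1e-15                  # assumed ~1 fJ per switching event
print(cmos_gate / landauer)        # ~3.5e5, orders of magnitude above the bound
```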

Finally, I don't know how controversial the Landauer bound is among physicists, but I'm skeptical in general of any experimental result that violates established theory. Recall that not long ago there were experiments that appeared to show FTL communication, which ultimately turned out to be a sensor/timing problem. I can imagine many ways in which measurement errors could sneak in, given the very small amount of energy being measured here.

Comment by vitor on My Kind of Moral Responsibility · 2016-05-04T13:11:45.124Z · score: 0 (0 votes) · LW · GW

The real danger, of course, is being utterly convinced Christianity is true when it is not.

The actions described by Lumifer are horrific precisely because they are balanced against a hypothetical benefit, not a certain one. If there is only an epsilon chance of Christianity being true, but the utility loss of eternal torment is infinite, should you take radical steps anyway?

In a nutshell, Lumifer's position is just hedging against Pascal's mugging, and IMHO any moral system that doesn't do so is not appropriate for use out here in the real world.

Comment by vitor on Open Thread March 21 - March 27, 2016 · 2016-03-22T16:18:04.547Z · score: 10 (10 votes) · LW · GW

Your problem is called a clustering problem. First of all, you need to decide how you measure your error (information loss, as you call it). Typical error norms are l1 (sum of individual errors), l2 (sum of squared errors, which penalizes larger errors more), and l-infinity (maximum error).
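
For concreteness, here is how the three norms score the same toy vector of per-point errors (nothing here is specific to your data):

```python
errors = [0.2, 0.1, 0.5, 0.3]      # |data point - its category's representative|

l1   = sum(errors)                 # total error
l2   = sum(e**2 for e in errors)   # sum of squared errors
linf = max(errors)                 # worst single error
print(l1, l2, linf)                # 1.1 0.39 0.5
```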

Once you select a norm, there always exists a partition that minimizes your error, and there are a bunch of heuristic algorithms for finding it, e.g. k-means clustering. Luckily, since your data is one-dimensional and you have very few categories, you can just brute-force it (for 4 categories you need to correctly place 3 boundaries, and naively trying all possible positions takes only O(n^3) time).
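
Here is a minimal sketch of that brute-force approach under the l2 (sum of squared errors) norm; using each category's mean as its representative is my choice for illustration:

```python
from itertools import combinations

def sse(segment):
    """Sum of squared errors of a segment against its mean."""
    if not segment:
        return 0.0
    mean = sum(segment) / len(segment)
    return sum((x - mean) ** 2 for x in segment)

def best_4_way_split(data):
    """Try every placement of 3 boundaries and keep the cheapest."""
    xs = sorted(data)
    n = len(xs)
    best = None
    # Boundaries i < j < k split xs into xs[:i], xs[i:j], xs[j:k], xs[k:].
    for i, j, k in combinations(range(1, n), 3):
        cost = sse(xs[:i]) + sse(xs[i:j]) + sse(xs[j:k]) + sse(xs[k:])
        if best is None or cost < best[0]:
            best = (cost, (i, j, k))
    return best

# Groups the data into [1, 2, 2], [10, 11], [20, 21], [35].
print(best_4_way_split([1, 2, 2, 10, 11, 20, 21, 35]))
```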

Hope this helps.