Posts

SSC Zürich February Meetup 2020-01-25T17:21:46.229Z · score: 2 (1 votes)

Comments

Comment by vitor on Disasters · 2020-01-23T03:04:10.879Z · score: 6 (3 votes) · LW · GW

What is the "total cost of ownership" of supplies? Keeping a stockpile fresh requires ongoing maintenance. In addition to the direct cost and space required, you regularly have to devote time and attention to it. It just doesn't seem practical, especially for a large supply of fresh water.

An additional cost is the hassle of moving/selling/giving away your stockpile if you move. If you have deep roots in the place you live, this might not be a large consideration, but I've moved at least once every 6 years all my life, sometimes considerably more often (future projection: even more often). "Unnecessary" stuff like this just adds an extra burden to an already burdensome time.

How would you say the value of having supplies changes as you reduce from 14 days to something smaller? It's non-linear for sure, so maybe there's a smaller stockpile that's a good compromise, e.g. 3 days of food and water. Another way of phrasing the question: where does the "two weeks" reference point in your post come from?

Comment by vitor on Use-cases for computations, other than running them? · 2020-01-23T02:29:55.800Z · score: 7 (4 votes) · LW · GW

Any semantic question about the program (semantic = about the input/output relation, as opposed to syntactic = about the source code itself). Note that this is uncomputable due to Rice's theorem. "Output" includes whether the computation halts.
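
To sketch why (this is the standard reduction from the halting problem; the function names are purely illustrative):

```python
# Hypothetical: suppose decides_outputs_42(p, x) could tell us whether
# program p returns 42 on input x -- a semantic property. Then we could
# decide halting, a contradiction. Hence Rice's theorem.

def halts(p, i, decides_outputs_42):
    def q(x):
        p(i)       # loops forever exactly when p does not halt on i
        return 42  # reached exactly when p(i) halts
    # q outputs 42 on every input iff p halts on i
    return decides_outputs_42(q, 0)
```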

Find a semantically equivalent computation that fulfills/maximizes some criterion:

  • more obviously correct
  • shorter in length
  • faster to run (on hardware x or model of computation y)
  • doesn't use randomness
  • doesn't leak info through side-channels
  • compliant with design pattern x
  • any other task done by a source-to-source compiler (see the sketch after this list)
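
As a tiny concrete illustration of a semantics-preserving source-to-source transformation, here's a constant-folding sketch using Python's `ast` module (the example program is made up; `ast.unparse` needs Python 3.9+):

```python
import ast

class ConstantFolder(ast.NodeTransformer):
    """Replace binary operations on constants with their value,
    leaving the program's input/output behavior unchanged."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first
        if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
            try:
                value = eval(compile(ast.Expression(node), "<fold>", "eval"))
                return ast.copy_location(ast.Constant(value), node)
            except Exception:
                pass  # e.g. division by zero: leave the source unchanged
        return node

tree = ast.parse("y = (2 + 3) * x + 60 * 60")
folded = ast.fix_missing_locations(ConstantFolder().visit(tree))
print(ast.unparse(folded))  # y = 5 * x + 3600
```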

Find a computation that is semantically equivalent after applying some mapping to the input and/or output:

  • runs on encrypted input/output pairs (homomorphic encryption)
  • computation is reversible (required before running on a quantum computer)
  • redundantly encoded output, extra metadata added to the output, etc. Example: run a program on untrusted hardware in such a way that the result can be trusted (hardware exposed to radiation in outer space, Folding@home, etc.); see the sketch after this list
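
For the untrusted-hardware case, the simplest version of the mapping is plain redundancy with a majority vote. A toy sketch (real schemes, e.g. for radiation-hardened systems, are considerably more sophisticated):

```python
from collections import Counter

def run_trusted(computation, x, replicas=3):
    """Run the same computation several times on unreliable hardware
    and accept the majority answer. Tolerates a minority of faults."""
    results = [computation(x) for _ in range(replicas)]
    answer, count = Counter(results).most_common(1)[0]
    if count <= replicas // 2:
        raise RuntimeError("no majority: too many faulty executions")
    return answer
```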

Any question about the set of programs performing the same computation.

  • this computation must take at least x time
  • this computation cannot be done by any program of length less than x (Kolmogorov complexity); a toy version follows below
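
Kolmogorov complexity is uncomputable in general, but a toy version over a tiny expression language shows the flavor (the alphabet, length cap, and use of `eval` are all illustrative shortcuts):

```python
from itertools import product

def shortest_expression(target, alphabet="123456789+*", max_len=5):
    """Find the shortest arithmetic expression over `alphabet` that
    evaluates to `target` -- a toy 'shortest program' measure."""
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            expr = "".join(chars)
            try:
                if eval(expr) == target:
                    return expr
            except Exception:
                continue  # skip syntactically invalid strings
    return None

print(shortest_expression(100))  # e.g. '1+99'
```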

Treat the program as an anthropological artifact.

  • deduce the state of mind of the person that wrote the program
  • deduce the social environment in which the program was written
  • deduce the technology level required to make running the program practical
  • etc.

(Thanks for reminding me why I love CS so much!)

Comment by vitor on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T21:56:20.137Z · score: 2 (2 votes) · LW · GW

Did you also test what other software (OptaPlanner as mentioned by HungryHobo, any SAT solver or similar tool) can do to improve those same schedules?

Did you run your software on a standard benchmark? There is such a thing as the International Timetabling Competition, with publicly available datasets.

Sorry to be skeptical, but scheduling is an NP-hard problem with many practical applications, and tons of research has already been done in this area. I will grant that many small organizations don't have the know-how to set up an automated tool, so there may still be a niche for you, especially if you target a specific market segment and focus on making it as painless as possible.
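
For illustration, here's what a tiny timetabling constraint looks like as a SAT instance, sketched with the python-sat package (`pip install python-sat`); the instance and the encoding are made up for the example:

```python
from pysat.solvers import Glucose3

# Variables: v(c, t) is true iff class c is scheduled in timeslot t.
# 2 classes x 2 timeslots, both classes taught by the same teacher.
def v(c, t):
    return c * 2 + t + 1  # map (class, slot) to a positive integer

with Glucose3() as solver:
    for c in (0, 1):
        solver.add_clause([v(c, 0), v(c, 1)])    # each class gets some slot
        solver.add_clause([-v(c, 0), -v(c, 1)])  # ...but at most one slot
    for t in (0, 1):
        solver.add_clause([-v(0, t), -v(1, t)])  # teacher can't be in two places
    if solver.solve():
        # e.g. [1, -4, -2, 3]: class 0 in slot 0, class 1 in slot 1
        print(solver.get_model())
```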

Comment by vitor on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T21:34:13.720Z · score: 3 (3 votes) · LW · GW

This will not have any practical consequences whatsoever, even in the long term. It is already possible to perform reversible computation (paper by Bennett linked in the article), for which such lower bounds don't apply. The idea is very simple: make sure each individual logic gate is reversible, so you can uncompute everything after reading out the results. This is most easily achieved by writing the gate's output to a separate wire. For example, instead of mapping 2 inputs to 1 output like

(x, y) --> (x OR y),

a reversible OR gate maps 3 inputs to 3 outputs like

(x, y, z) --> (x, y, z XOR (x OR y)),

making the gate its own inverse.
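
A quick sketch checking that this gate really is its own inverse, exhaustively over all 8 inputs:

```python
from itertools import product

def reversible_or(x, y, z):
    # (x, y, z) --> (x, y, z XOR (x OR y))
    return x, y, z ^ (x | y)

# Applying the gate twice returns the original inputs, so no
# information is destroyed anywhere in the circuit.
assert all(
    reversible_or(*reversible_or(x, y, z)) == (x, y, z)
    for x, y, z in product((0, 1), repeat=3)
)
```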

Secondly, I understand that the Landauer bound is so extremely small that worrying about it in practice is like worrying about the speed of light while designing an airplane.
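
To put rough numbers on that (the per-gate figure below is an order-of-magnitude assumption, not a measured value):

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

landauer = k_B * T * log(2)  # minimum energy to erase one bit
print(f"{landauer:.2e} J")   # ~2.87e-21 J

# Rough assumption: a modern logic gate dissipates on the order of a
# femtojoule per switching event -- several orders of magnitude above
# the Landauer limit, so the bound is nowhere near binding in practice.
print(1e-15 / landauer)      # ~3e5
```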

Finally, I don't know how controversial the Landauer bound is among physicists, but I'm skeptical in general of any experimental result that violates established theory. Recall that just a while ago there were experiments that appeared to show faster-than-light neutrinos, but the anomaly was ultimately traced to a sensor/timing problem. I can imagine many ways in which measurement errors could sneak their way in, given the very small amount of energy being measured here.

Comment by vitor on My Kind of Moral Responsibility · 2016-05-04T13:11:45.124Z · score: 0 (0 votes) · LW · GW

> The real danger, of course, is being utterly convinced Christianity is true when it is not.

The actions described by Lumifer are horrific precisely because they are balanced against a hypothetical benefit, not a certain one. If there is only an epsilon chance of Christianity being true, but the utility loss of eternal torment is infinite, should you take radical steps anyway?

In a nutshell, Lumifer's position is just hedging against Pascal's mugging, and IMHO any moral system that doesn't do so is not appropriate for use out here in the real world.

Comment by vitor on Open Thread March 21 - March 27, 2016 · 2016-03-22T16:18:04.547Z · score: 10 (10 votes) · LW · GW

Your problem is called a clustering problem. First of all, you need to decide how you measure your error (information loss, as you call it). Typical error norms are l1 (sum of individual errors), l2 (sum of squares of errors, which penalizes larger errors more) and l-infinity (maximum error).

Once you select a norm, there always exists a partition that minimizes your error, and there are a bunch of heuristic algorithms to find it, e.g. k-means clustering. Luckily, since your data is one-dimensional and you have very few categories, you can just brute-force it: for 4 categories you need to place 3 boundaries correctly, and naively trying all possible positions takes only O(n^3) time (see the sketch below).
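
A minimal sketch of that brute-force approach, assuming the l2 norm and data that fits in memory (names and the sample data are illustrative):

```python
from itertools import combinations

def best_4_clustering(values):
    """Brute-force the 3 boundary positions that split sorted `values`
    into 4 contiguous groups, minimizing squared error to group means."""
    xs = sorted(values)
    n = len(xs)

    def sq_error(group):
        mean = sum(group) / len(group)
        return sum((x - mean) ** 2 for x in group)

    best_cost, best_split = float("inf"), None
    # Boundaries are cut points between indices: i < j < k.
    for i, j, k in combinations(range(1, n), 3):
        groups = [xs[:i], xs[i:j], xs[j:k], xs[k:]]
        cost = sum(sq_error(g) for g in groups)
        if cost < best_cost:
            best_cost, best_split = cost, groups
    return best_split, best_cost

groups, cost = best_4_clustering([1, 2, 2, 10, 11, 30, 31, 90])
print(groups)  # [[1, 2, 2], [10, 11], [30, 31], [90]]
```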

Hope this helps.