Open thread, Jul. 11 - Jul. 17, 2016

post by MrMind · 2016-07-11T07:09:41.124Z · LW · GW · Legacy · 127 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

127 comments

Comments sorted by top scores.

comment by Daniel_Burfoot · 2016-07-12T17:39:13.777Z · LW(p) · GW(p)

I have some software I am thinking about packaging up and releasing as open-source, but I'd like to gauge how interesting it is to people other than me.

The software is a highly usable implementation of arithmetic encoding. AE completely handles the problem of encoding, so in order to build a custom compressor for some dataset, all you have to do is supply a probability model for the data type(s) you are compressing (I call this "BYOM" - Bring Your Own Model).

One of the key technical difficulties of data compression is that you need to keep the encoder and decoder in exact sync, or the whole procedure goes entirely off the rails. This problem is especially acute for the use case of AE, where you are potentially changing the model in response to every event. My software makes it very easy to guarantee that the sender/receiver are in sync, and at the same time it reduces the amount of code you have to write (basically you don't write a separate encoder and decoder, you just write one class that is used for both, depending on the configuration).
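
To make the BYOM idea concrete, here is a minimal sketch of the kind of model interface involved (the names are illustrative only, not the actual API of the library):

    // Illustrative "Bring Your Own Model" interface; hypothetical names,
    // not the actual API of the library described above.
    public interface ProbabilityModel {
        // Distribution over the possible next symbols given everything
        // seen so far; entries must be non-negative and sum to 1.
        double[] nextSymbolDistribution();

        // Update internal state after a symbol has been coded. Called
        // identically on the sending and receiving sides, which is what
        // keeps encoder and decoder in sync.
        void observe(int symbol);
    }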

Replies from: Gunnar_Zarncke, ChristianKl, Gunnar_Zarncke, Gunnar_Zarncke, akvadrako, Lumifer
comment by Gunnar_Zarncke · 2016-07-15T05:52:01.816Z · LW(p) · GW(p)

Summary after your explanations: Yes, I'm really interested in an open-source packaging of your library. Preferably on GitHub.

comment by ChristianKl · 2016-07-13T20:46:09.583Z · LW(p) · GW(p)

I read a bit of what you previously wrote about your approach but I didn't read your full book.

I think a bunch of Quantified Self applications would profit from good compression. For example, it's relatively interesting to sample galvanic skin response at very short intervals, e.g. every 5 ms. The same goes for accelerometer data. It would also be interesting to see what kind of data you can draw from the noisy heart-rate data on smartwatches at shorter time intervals.

Smartwatches could easily gather that data at finer time resolution than they currently do, but they have relatively limited storage.
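
For a rough sense of scale (assuming 16-bit samples): sampling every 5 ms is 200 Hz, i.e. 200 × 2 B = 400 B/s per channel, which is about 1.4 MB per hour or roughly 35 MB per day before compression.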

In practice I think it will depend a lot on how easy it is to use your software.

Maybe you could also have a gamified version. You have a website and every week there's a dataset that gets published. Only half of the data is released. Every participant can enter their own model via the website, and the person whose model compresses the unreleased part of the data best wins.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-07-14T21:17:21.614Z · LW(p) · GW(p)

Thanks for the feedback.

You have a website and every week there's a dataset that gets published.

A couple years ago (wow, is LessWrong really that old?) I challenged people to Compress the GSS, but nobody accepted the offer...

Replies from: ChristianKl
comment by ChristianKl · 2016-07-15T20:47:22.379Z · LW(p) · GW(p)

The minimum time investment to participate in the GSS challenge is probably hours. For most people it's not even clear what steps are involved in building a model for compressing a dataset. It's not really gamified. I think it would be possible to have a website that allows people to make up a model in a minute to take part in the tournament.

A one-minute model might be bad, but it might get people in the mood for engaging with the game.

I also think that a QS dataset might be more interesting than compressing the GSS. Promotion-wise, it could be promoted via the QS website (I might still have posting privileges, or could simply ask; I doubt people would have a problem).

Of course it might be that I misunderstand the issue and it's not possible to build the website in a way that allows people to provide one-minute models.

Replies from: gwern
comment by gwern · 2016-07-16T22:35:16.162Z · LW(p) · GW(p)

I also think that a QS dataset might be more interesting than compressing the GSS. Promotion-wise, it could be promoted via the QS website (I might still have posting privileges, or could simply ask; I doubt people would have a problem).

I dunno if it would be all that interesting. If someone wants to work on predictive modeling of datasets every week or month in a tournament format, they can just use Kaggle (and win with XGBoost or a residual network, likely). I have fat/muscle/weight data on myself from an Omron scale going back 2 years with multiple measurements on most days; this is a reasonably interesting dataset because one can quantify measurement error, the variables are interrelated with one or two latent variables, there are definite nontrivial time trends, and it's easy to generate holdout data (if the tournament runs 1 month, then there's an additional 1 month of data which no one, including the organizer, has had access to, with which to score contributions at the end) - but I doubt anyone would bother participating. I have an even bigger QS dataset incorporating all my recorded data of all kinds at a daily granularity, somewhere around 100+ summary variables, but the missingness is so high that it would be unpleasant to work with (I've been having a great deal of difficulty just getting lavaan/blavaan to run on it), and likewise I doubt there would be much interest in a competition. There needs to be some sort of incentive: either prizes, inherently interesting data, or some important intellectual/scientific point to it. Kaggles with a lot of participation have big prizes or sexy datasets like the Higgs boson or whales.

Replies from: ChristianKl
comment by ChristianKl · 2016-07-17T11:05:43.976Z · LW(p) · GW(p)

There needs to be some sort of incentive: either prizes, inherently interesting data, or some important intellectual/scientific point to it.

I think there is a scientific point for those QS datasets that can be automatically measured at high granularity. Very frequently, people measure less data because they don't want to store everything a single sensor can produce.

Currently, accelerometer data gets compressed into the variable "steps". That variable has the advantage of an intuitive meaning, but it's likely not the best possible variable to gather when doing scientific work about how Pokemon Go leads people to do more exercise.

Replies from: gwern
comment by gwern · 2016-07-17T16:51:59.196Z · LW(p) · GW(p)

Doesn't that have as much to do with battery life and software engineering effort as anything? Those sensors could already log data in much more detail by streaming into an off-the-shelf compressor like xz, but they don't, because good compression inherently requires a lot of computation/battery life and adds complexity compared to naive methods. There don't seem to be many use cases where people have already plugged in zpaq but that just isn't enough and they need even better compression.

Replies from: ChristianKl
comment by ChristianKl · 2016-07-17T17:53:29.307Z · LW(p) · GW(p)

I think translating accelerometer data into steps is effectively a form of data compression. But it's a form of data compression that's optimized not for leaving important features of the data intact, but for giving users a variable they think they understand.

comment by Gunnar_Zarncke · 2016-07-13T19:13:54.881Z · LW(p) · GW(p)

Is it something like this? http://www.cipr.rpi.edu/research/SPIHT/EW_Code/FastAC_Readme.pdf

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-07-13T19:48:41.162Z · LW(p) · GW(p)

Thanks for posting this link; it contains a good illustration of the problem with using separate encoder/decoder implementations.

See how they have separate encoder/decoder implementations on pages 8-9 of the document? That strategy is very, very error-prone. It is very hard for the programmer to ensure that the encoder and decoder are performing exactly the same updates, and even the slightest off-by-one error will cause the process to fail completely (I spent many hours trying to debug sync problems like this). This problem becomes more painful as you attempt to build more and more sophisticated compressors.

With my library, there is no separation of encoder and decoder logic; it is effectively the same code. That basically guarantees there will be no sync problems; since developing this technique I haven't had a single one.
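
As a minimal sketch of that pattern (illustrative names again; ArithmeticCoder stands in for the low-level bit-stream coder, and ProbabilityModel is the interface sketched earlier in the thread):

    // One class serving as both encoder and decoder; the model update is
    // shared, so the two sides cannot drift out of sync.
    public class SymbolCodec {
        public enum Mode { ENCODE, DECODE }

        // Hypothetical low-level coder interface.
        public interface ArithmeticCoder {
            int encode(int symbol, double[] dist); // writes symbol, returns it
            int decode(double[] dist);             // reads the next symbol
        }

        private final Mode mode;
        private final ProbabilityModel model;
        private final ArithmeticCoder coder;

        public SymbolCodec(Mode mode, ProbabilityModel model, ArithmeticCoder coder) {
            this.mode = mode;
            this.model = model;
            this.coder = coder;
        }

        // ENCODE: writes 'symbol' and returns it unchanged.
        // DECODE: ignores the argument and returns the decoded symbol.
        public int process(int symbol) {
            double[] dist = model.nextSymbolDistribution();
            int result = (mode == Mode.ENCODE)
                    ? coder.encode(symbol, dist)
                    : coder.decode(dist);
            model.observe(result); // identical update on both sides
            return result;
        }
    }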

comment by Gunnar_Zarncke · 2016-07-13T19:10:21.336Z · LW(p) · GW(p)

Which language?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-07-13T20:02:38.280Z · LW(p) · GW(p)

Java.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-07-13T22:51:25.351Z · LW(p) · GW(p)

Great. I'm interested. Performance-wise it may not be the best choice, but for reusability it's good. I wonder about the overhead of your abstraction.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-07-14T21:07:39.192Z · LW(p) · GW(p)

Thanks for the feedback!

Re: performance, my implementation is not performance-optimized, but in my experience Java is very fast. According to this benchmark, Java is only about 2x slower than pure C (also known as "portable assembly").

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-07-15T05:50:34.886Z · LW(p) · GW(p)

Yeah, the benchmarks game. But arithmetic coding and the implied bit twiddling aren't exactly Java's strength. On the other hand, in this case the overhead of your in-sync de/encoding abstraction may be decisive.

comment by akvadrako · 2016-08-07T13:05:33.031Z · LW(p) · GW(p)

Just post it on GitHub with no effort. If you start getting pull requests or issues logged, you'll have your answer.

comment by Lumifer · 2016-07-12T18:09:09.440Z · LW(p) · GW(p)

Two basic questions:

(1) What are the immediate practical applications?

(2) How qualified must the user be? (The "all you have to do is supply a probability model" part is worrying :-/)

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-07-13T13:02:36.182Z · LW(p) · GW(p)

Basically, if you have your own dataset that you want to compress with a special-purpose model, you could try doing that. You could try out compression-based tricks for computer vision, like in this paper. You could use it as part of an information theory course if you wanted to show students a real example of compression in practice.

In my view it is quite easy to use, but you still need to be a programmer with some knowledge of stats and information theory.

Replies from: Lumifer
comment by Lumifer · 2016-07-13T14:39:15.523Z · LW(p) · GW(p)

So it's more of a library and less of an application?

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2016-07-13T16:26:14.533Z · LW(p) · GW(p)

Yes.

comment by HungryHobo · 2016-07-12T13:05:02.655Z · LW(p) · GW(p)

Some interesting news: the first autonomous soft-tissue surgery. It sounds like a notable breakthrough in machine vision was involved in distinguishing all the messy, fleshy internals of the (porcine) patient.

http://www.popularmechanics.com/science/health/a20718/first-autonomous-soft-tissue-surgery/

comment by gwern · 2016-07-12T23:58:22.886Z · LW(p) · GW(p)

I've written a summary of 'result-blind peer review' with all the references I could find: https://en.wikipedia.org/wiki/Scholarly_peer_review#Result-blind_peer_review Anyone know of more?

comment by morganism · 2016-07-11T23:32:58.187Z · LW(p) · GW(p)

Growing and interlinking neurons

First ever optic nerves regrown in a mammal.

http://sciencebulletin.org/archives/3026.html

and neurons bathed in IGF interconnect more densely and more often

http://www.kurzweilai.net/neurons-grown-from-stem-cells-in-a-dish-reveal-clues-about-autism

Texting changes brain waves to new pattern

http://sciencebulletin.org/archives/2623.html

comment by James_Miller · 2016-07-11T21:46:52.423Z · LW(p) · GW(p)

How does being nervous influence your ability stats? Being nervous improves my mental abilities (I usually did better on standardized tests than I did on practice ones and I can tell that my recall is much better when I'm nervous), but I get clumsier and less articulate. Interestingly, when I'm nervous I come across as being far less intelligent than I normally do, even though the reverse is true.

Replies from: Gram_Stone, Unnamed
comment by Gram_Stone · 2016-07-11T23:14:48.859Z · LW(p) · GW(p)

Adrenaline can improve function in a number of domains. So it may be that someone with test anxiety or some other performance anxiety could, in certain situations they perceive as threatening, have a concentration of adrenaline that improves performance rather than impairing it, and this is never recognized as arising from the same mechanism as test or performance anxiety because that amount of adrenaline doesn't cause detrimental symptoms. Conceivably the amount of adrenaline, and the effects thereof, could be different across different domains.

Maybe that happens to you. What do you think?

EDIT: This might lead to empirical evidence. Anxiety may decrease performance when attention has to be switched between tasks, but may improve performance when the task is difficult and singular. Think social situations vs. exams.

Replies from: James_Miller
comment by James_Miller · 2016-07-11T23:55:37.112Z · LW(p) · GW(p)

Yes, this could be what's happening with me.

Replies from: niceguyanon
comment by niceguyanon · 2016-07-13T16:39:19.477Z · LW(p) · GW(p)

The same happens with certain physical activities; race-day magic is so common.

comment by Unnamed · 2016-07-13T21:08:19.696Z · LW(p) · GW(p)

See: Yerkes-Dodson law and research on "optimal level of arousal".

comment by morganism · 2016-07-11T23:52:08.530Z · LW(p) · GW(p)

"In deep learning, architecture engineering is the new feature engineering"

Trying to set up algorithms so they are not limited by the assumptions of the developers or of the databases.

http://smerity.com/articles/2016/architectures_are_the_new_feature_engineering.html

An AI was used to optimize and align a set of lasers for producing Bose-Einstein condensates, and it solved the optimization problem in one hour.

"Fast machine-learning online optimization of ultra-cold-atom experiments"

http://www.nature.com/articles/srep25890

SETI research organizational optimization, and examining the definitions of what alien life actually is

Alien Mindscapes—A Perspective on the Search for Extraterrestrial Intelligence

http://online.liebertpub.com/doi/10.1089/ast.2016.1536

a nice little writeup on using SQL

http://www.sohamkamani.com/blog/2016/07/07/a-beginners-guide-to-sql/

and a free and large book on maths (warning: PDF)

www3.nd.edu/~powers/ame.60611/M.pdf

comment by Gunnar_Zarncke · 2016-07-14T20:30:08.315Z · LW(p) · GW(p)

By now I have read (or skimmed) so many reviews of Age of Em that I probably could have read the book myself...

Anyway.

I thought about the two future paths: a Hansonian em future and a machine AI future. I wondered how to reconcile the (seeming?) contradiction between them. Then the idea occurred to me that maybe both can be (mostly? partly?) true:

If FAI is possible, it will likely make ems possible shortly after. If it is friendly as assumed, then exploitation of the ems (loss of human value), as feared in many comments, is ruled out by construction. What remains is essentially human in social regards and efficient in economic terms.

If Robin Hanson's assumptions about economics and society are correct, then I think the world created by FAI may play out as he describes. At least for as long as it takes to go to the next level. Which may be quite short. As predicted by Hanson too.

Replies from: MrMind
comment by MrMind · 2016-07-15T06:54:26.740Z · LW(p) · GW(p)

I now regard reading a book as a not-so-trivial investment of time and energy, given the huge quantity of possible books I could be reading right now.
Is there any particular reason to believe Hanson's beliefs? So that it makes sense to anticipate the future the way he does?

Replies from: pcm
comment by pcm · 2016-07-15T18:45:26.317Z · LW(p) · GW(p)

There's no particular reason to believe all of his predictions. But that's also true of anyone else who makes as many predictions as the book does (on similar topics).

When you say "anticipate the future the way he does", are you asking whether you should believe there's a 10% chance of his scenario being basically right?

Nobody should have much confidence in such predictions, and when Robin talks explicitly about his confidence, he doesn't sound very confident.

Good forecasters consider multiple models before making predictions (see Tetlock's work). Reading the book is a better way for most people to develop an additional model of how the future might be than reading new LW comments.

Replies from: MrMind
comment by MrMind · 2016-07-18T07:55:44.178Z · LW(p) · GW(p)

Nobody should have much confidence in such predictions

If your model doesn't even get to 10%, then I say: unless you have hundreds of competing models in your mind (who has?), do not even bother.
Your comment helped me reach the conclusion that reading AoE would be a waste of time.

comment by Sable · 2016-07-12T15:54:59.302Z · LW(p) · GW(p)

I went to a party recently, and the host provided the food. At the end of the party, there was an awful lot left over, and my understanding is that most of it went to waste.

I had a thought when this was happening: if I were the host, why not keep track of how much food my guests actually ate, and try adjusting the amount of food at my next party to match?

The host was not a rationalist, as I suspect most hosts aren't, but upon researching the issue, I found that there doesn't seem to be a widespread solution.

There are charities that focus on "recycling" food waste, and there are plenty of suggestions for how much food to bring to parties of various size, and yet I still have the experience of purchasing/preparing far too much food for parties, and almost every party I go to has far too much food available.

What exactly is going on, and how can it be made better? It seems to me as if this is a reasonably low-hanging fruit - getting people to properly estimate how much food people actually consume at parties in order to reduce food waste. It's the sort of calculation any restaurant with an all-you-can-eat buffet has clearly made in order to determine their price point.

Is this a publicity issue, that people don't realize they can optimize the amount of food they purchase and prepare? Or is it psychological, related to akrasia or a bias? I've been told that a host's greatest fear is that they run out of food, but why? Is the way to attack this problem through exposing that fear as unfounded?

This is one of the first external questions I've considered since committing fully to instrumental rationality.

I'd like to hear everyone's thoughts on the matter.

Thanks.

TL;DR:

Why do people waste food at parties? Is this a solvable problem?

Replies from: Lumifer, Dagon, Gunnar_Zarncke, entirelyuseless, Viliam
comment by Lumifer · 2016-07-12T17:22:02.852Z · LW(p) · GW(p)

The reason parties are oversupplied with food is that the incentives are asymmetrical. Specifically, the loss from having too much food is considerably smaller than the loss from having too little food.

Having insufficient food is a significant loss of status since you failed as a host to provide proper hospitality. There are a bunch of obvious historical and cultural reasons why not being able to feed your guests is a bad thing, status-wise.

Having too much food is just a matter of some wasted money and/or having to eat leftovers for a few days. Not a big deal at all nowadays.

Replies from: MrMind
comment by MrMind · 2016-07-13T08:16:56.177Z · LW(p) · GW(p)

That calories are used as a social lubricant irks me a lot. I understand why it was so in the past, but we live in a world filled to the brim with food; do we really need tens of thousands of calories at every social gathering?
The answer is obviously not; indeed, it would be beneficial to lower the amount circulating... But as Lumifer spotted, and as wannabe rationalists often overlook, what appears as waste and irrationality is actually a situation optimized for status.
Ignoring status is almost always a bad idea, BUT: we can always treat it as just another constraint.
Given that we need to optimize for both status and waste reduction, what could we do?

  • coordinate with a charity to pick-up the leftovers
  • use food that can be easily refrigerated and consumed gradually later
  • have food in stages, so that variety masks lack of abundance (and pressure people into eating leftovers)
  • repackage leftovers and offer them as parting gifts ...

These are just from a less-than-five-minute brainstorming session; I'm sure someone invested in this would come up with much more interesting and creative ideas.

Replies from: Lumifer, Gunnar_Zarncke, gjm, jsteinhardt
comment by Lumifer · 2016-07-13T14:38:14.984Z · LW(p) · GW(p)

and pressure people into eating leftovers

In the Western world where obesity is rampant, why do you want to pressure people into eating more?

Generally speaking, the party-leftovers issue doesn't strike me as much of a problem. I suggest doing a back-of-the-envelope calculation of the harm it causes.

Replies from: MrMind
comment by MrMind · 2016-07-14T07:08:29.189Z · LW(p) · GW(p)

why do you want to pressure people into eating more?

Well, because that's what the problem statement asked for! But yeah, it's probably a forgotten purpose: what should be optimized is the amount of food not wasted, not how much food remains at the end of the party.

the party-leftovers issue doesn't strike me as much of a problem

It isn't, indeed! But it's a nice simple little world; I took it as an exercise in rationality.

comment by Gunnar_Zarncke · 2016-07-13T19:36:21.842Z · LW(p) · GW(p)

I like your suggestions. Asking people whether they want to take leftovers is an option I have seen used a lot.

comment by gjm · 2016-07-13T10:30:16.635Z · LW(p) · GW(p)

pressure people into eating leftovers

That doesn't sound to me like it's compatible with "optimizing for status".

Replies from: MrMind
comment by MrMind · 2016-07-13T12:15:01.717Z · LW(p) · GW(p)

The sentence was perhaps ambiguous: I meant that the pressure to eat leftovers derives from the stages, from the fact that that particular food will no longer be available in x minutes. You know, the usual scarcity trick.
Not that the host should encourage attendees to finish their plates :)

Replies from: ChristianKl
comment by ChristianKl · 2016-07-13T20:51:10.855Z · LW(p) · GW(p)

I meant that the pressure to eat leftovers derives from the stages, from the fact that that particular food will no longer be available in x minutes. You know, the usual scarcity trick.

I think that frequently people don't want to eat the last thing because it means that others can't eat the last thing, but social norms might vary.

comment by jsteinhardt · 2016-07-13T15:45:52.406Z · LW(p) · GW(p)

I don't think this is really a status thing, more a "don't be a dick to your guests" thing. Many people get cranky if they are hungry, and putting 30+ cranky people together in a room is going to be a recipe for unpleasantness.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-07-13T19:35:14.526Z · LW(p) · GW(p)

But there is a difference between having an amount appropriate to avoid crankiness and more than can be eaten.

Replies from: jsteinhardt
comment by jsteinhardt · 2016-07-13T21:01:50.643Z · LW(p) · GW(p)

But like, there's variation in how much food people will end up eating, and at least some of that is not variation that you can predict in advance. So unless you have enough food that you routinely end up with more than can be eaten, you are going to end up with a lot of cranky people a non-trivial fraction of the time. You're not trying to peg production to the mean consumption, but (e.g.) to the 99th percentile of consumption.
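
As a worked example (with made-up numbers): if total consumption were roughly normal with mean 30 and standard deviation 5 meal-equivalents, provisioning at the 99th percentile means buying about 30 + 2.33 × 5 ≈ 42 meal-equivalents, so ending up with visible leftovers most of the time is the expected outcome of a sensible policy.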

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-07-13T22:55:11.065Z · LW(p) · GW(p)

You seem to think that people who are not completely satiated are automatically cranky. That doesn't match my observation.

Also, you may have multiple dishes. For example, we mostly start with a collaboratively prepared soup, which will thereby be the right size by construction. Later we have some snacks or sweets or fruit: first the fresh ones, later, if needed, packaged ones.

Replies from: jsteinhardt
comment by jsteinhardt · 2016-07-14T01:17:27.506Z · LW(p) · GW(p)

I don't think I need that for my argument to work. My claim is that if people get, say, less than 70% of a meal's worth of food, an appreciable fraction (say at least 30%) will get cranky.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-07-14T20:16:33.097Z · LW(p) · GW(p)

Then maybe we have different experiences. Or differently selected people around us.

comment by Dagon · 2016-07-12T18:12:06.700Z · LW(p) · GW(p)

For most problems like this, it's worth solving once or twice at small scale before you look for general solutions. How many parties have you thrown (or guided the food procurement for), and what have you found that makes for better estimation of needs?

Have you talked with caterers or other experts in such estimation? It would be interesting to learn how they decide when to risk too little vs too much, and the clever tricks they have to control consumption (which will make the estimates more accurate). For instance, having lots of cheap starches and limited meat, along with explicit or subtle rationing, can lead to high waste measured by weight or calories, but fairly low waste measured by cost.

Replies from: Lumifer
comment by Lumifer · 2016-07-12T18:21:04.239Z · LW(p) · GW(p)

Not sure caterers will be helpful, since they're paid for what they bring to the party and don't care at all whether it gets eaten or not. Similarly, the all-you-can-eat buffets have lots of data from which to estimate how much an average customer eats, and they have the law of large numbers on their side, too.

For house parties the usual answer is just experience. After a few missteps most people can develop a workable idea of the amount of food needed without formulating a full Bayesian model or even a simple spreadsheet. Of course there is some uncertainty, and the incentives make the host provide the amount at the top end of the reasonable estimate interval.

comment by Gunnar_Zarncke · 2016-07-13T19:33:19.221Z · LW(p) · GW(p)

Just to provide a data point: I regularly host get-togethers of friends that you might call parties, and usually nothing, or only a very small amount, is thrown away.

I'm wondering whether this might be specific to Germany. Here there is some social pressure to avoid wasting stuff (together with a strong trend toward sustainability).

comment by entirelyuseless · 2016-07-13T15:20:58.761Z · LW(p) · GW(p)

"why not keep track of how much food my guests actually ate, and try adjusting the amount of food at my next party to match?"

Because the amount of food that people eat is not an absolute value, but a function of how much is there. If you do that adjustment, and then continue to do that adjustment, you will end up with a situation without any food. That is true both at parties and in any other situation, like meals served to people who otherwise will have nothing to eat, at least to a first approximation -- if the last situation is absolute, you will get people eating some food, but it will not be enough to live on.

comment by Viliam · 2016-07-13T07:58:13.997Z · LW(p) · GW(p)

why not keep track of how much food my guests actually ate, and try adjusting the amount of food at my next party to match?

I guess there is not a fixed amount of food brought per guest, but rather a random distribution. The host's goal is not to make sure that the average "food brought" equals the average "food desired", but rather that with, say, 95% probability the current "food brought" is at least 90% of "food desired" (feel free to change the numbers to fit your experience). Also, the host is hedging against the possibility that the few guests who usually come with hands full of food suddenly can't come, or for some random reason come empty-handed.

There are charities that focus on "recycling" food waste

I guess the best way to improve the world is to have a list of such charities in your neighborhood ready in a printed form, and give it to the host if they are interested.

Replies from: gjm
comment by gjm · 2016-07-13T09:56:16.328Z · LW(p) · GW(p)

I agree with all that and would add:

  • "Too much food" is a much less fun-killing failure mode than "Not enough food".
  • You'd like guests to have a decent choice of things to eat even at the start when not so much has been brought and at the end when lots has been eaten. In particular, plenty of choice at the end of the party => lots of food left over.
  • At least some party food keeps well and serves nicely as snack food, so if you have too much you just eat it later. (Or maybe bring it to another party. Check those best-before dates!)
  • Having too much food kinda suggests "this person has lots of generous friends and/or limitless resources" whereas having too little kinda suggests "this person has no generous friends and is in financial trouble". Which message would you rather be sending to your party guests?
  • The wastage isn't super-expensive anyway. What fraction of your income do you spend on party food?
comment by HungryHobo · 2016-07-12T13:50:44.923Z · LW(p) · GW(p)

If true this has some spectacular implications for computing (long term).

http://phys.org/news/2016-07-refutes-famous-physical.html

"Now, an experiment has settled this controversy. It clearly shows that there is no such minimum energy limit and that a logically irreversible gate can be operated with an arbitrarily small energy expenditure. Simply put, it is not true that logical reversibility implies physical irreversibility, as Landauer wrote."

Some of the limits of computation (how much you could theoretically do with a certain amount of energy) are based on what appear to have been incorrect beliefs about information processing and entropy.

It will push the research towards "zero-power" computing: the search for new information processing devices that consume less energy. This is of strategic importance for the future of the entire ICT sector that has to deal with the problem of excess heat production during computation.

It will call for a deep revision of the "reversible computing" field. In fact, one of the main motivations for its own existence (the presence of a lower energy bound) disappears.

Replies from: Vitor, Lumifer
comment by Vitor · 2016-07-12T21:34:13.720Z · LW(p) · GW(p)

This will not have any practical consequences whatsoever, even in the long term. It is already possible to perform reversible computation (see the paper by Bennett linked in the article), for which such lower bounds don't apply. The idea is very simple: just make sure that your individual logic gates are reversible, so you can uncompute everything after reading out the results. This is most easily achieved by writing the gate's output to a separate wire. For example, an OR gate, instead of mapping 2 inputs to 1 output like

(x, y) --> (x OR y),

it would map 3 inputs to 3 outputs like

(x, y, z) --> (x, y, z XOR (x OR y)),

causing the gate to be its own inverse.
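
A quick standalone check (a sketch for illustration, not from the linked paper) that this 3-bit gate really is its own inverse:

    // Applying the reversible OR construction twice returns every 3-bit
    // input unchanged, since z XOR (x OR y) XOR (x OR y) == z.
    public class ReversibleOr {
        static int[] gate(int x, int y, int z) {
            return new int[] { x, y, z ^ (x | y) };
        }

        public static void main(String[] args) {
            for (int x = 0; x <= 1; x++)
                for (int y = 0; y <= 1; y++)
                    for (int z = 0; z <= 1; z++) {
                        int[] once = gate(x, y, z);
                        int[] twice = gate(once[0], once[1], once[2]);
                        if (twice[0] != x || twice[1] != y || twice[2] != z)
                            throw new AssertionError("not an involution");
                    }
            System.out.println("Gate is its own inverse on all 8 inputs.");
        }
    }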

Secondly, I understand that the Landauer bound is so extremely small that worrying about it in practice is like worrying about the speed of light while designing an airplane.
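
For scale: the bound is kT ln 2 per erased bit, which at room temperature (T ≈ 300 K) is about 1.38 × 10^-23 J/K × 300 K × 0.693 ≈ 3 × 10^-21 J; practical logic gates dissipate many orders of magnitude more than that.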

Finally, I don't know how controversial the Landauer bound is among physicists, but I'm skeptical in general of any experimental result that violates established theory. Recall that just a while ago there were some experiments that appeared to show FTL communication, which ultimately turned out to be a sensor/timing problem. I can imagine many ways in which measurement errors sneak their way in, given the very small amount of energy being measured here.

Replies from: Gunnar_Zarncke, Douglas_Knight
comment by Gunnar_Zarncke · 2016-07-13T19:16:49.804Z · LW(p) · GW(p)

While you can always make the computation reversible, it comes at a price: carrying around a larger and larger number of bits, which takes space and time to communicate and store.

comment by Douglas_Knight · 2016-07-12T23:33:34.913Z · LW(p) · GW(p)

I think that the Landauer limit is controversial. But if it's wrong, one should be able to explain why at the level of theory. What ordinary models of physics say about their gate is much more convincing than an experiment. How did they design their gate if they don't have a competing theory?

comment by Lumifer · 2016-07-12T14:26:43.404Z · LW(p) · GW(p)

As far as I can see, the experiment has shown that what was considered to be the lower bound is actually not.

However, I don't understand how the claim of "no lower bound at all" necessarily follows. For all we know there is just a different, lower (lower bound).

Replies from: HungryHobo
comment by HungryHobo · 2016-07-12T14:46:37.087Z · LW(p) · GW(p)

I found it odd as well, but I think it's because it implies that the theoretical reason for that lower bound may be invalid.

There will likely turn out to be a different theoretical lower bound for some other reason, but right now we don't have that theoretical reason.

comment by lsparrish · 2016-07-17T18:17:39.533Z · LW(p) · GW(p)

Found this great YouTube channel by a guy named Isaac Arthur, covering a variety of space topics. It has videos on Dyson spheres, colonizing the Moon, and even concepts for very long-term survival of civilizations and people past the heat death of the universe. Very rational and comprehensive.

comment by Gunnar_Zarncke · 2016-07-17T14:41:13.020Z · LW(p) · GW(p)

[Link] Slashdot "New Study Shows Why Big Pharma Hates Medical Marijuana"

Christopher Ingraham writes in the Washington Post that a new study shows that painkiller abuse and overdose are significantly lower in states with medical marijuana laws and that when medical marijuana is available, pain patients are increasingly choosing pot over powerful and deadly prescription narcotics.

--

Replies from: root
comment by root · 2016-07-17T16:33:50.908Z · LW(p) · GW(p)

I've read (mostly things by Ron Maimon) that marijuana* can actually impair your ability to do calculations (and by extension, I'd also assume your ability to make decisions), and I'm curious if there's any truth to that.

  • Is there a difference between marijuana, medical marijuana, weed, insert_name_here? They seem to be used interchangeably. At least they seem to cause a similar if not the exact same effect.
Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2016-07-18T05:41:07.823Z · LW(p) · GW(p)

There are long-term effects, but the impact seems to be not fully clear (Wikipedia). On the other hand, there are many known side effects of the normal drugs used for the same purpose.

Besides the medical properties, there are also the social properties of a drug.

See also the AMS report Brain science, addiction and drugs.

comment by morganism · 2016-07-15T22:06:09.048Z · LW(p) · GW(p)

Now you can raise your neural networks at home, and then send them to school in the cloud, on GPUs.

"Today, researchers and developers can train their neural nets locally, and deploy them to Algorithmia’s scalable, cloud infrastructure, where they become smart API endpoints for other developers to use."

http://blog.algorithmia.com/2016/07/cloud-hosted-deep-learning-models/

A course in machine learning, printed or electronic

http://ciml.info/

comment by MrMind · 2016-07-13T12:23:18.791Z · LW(p) · GW(p)

I've been reading a slice of the Neoreactionary - Anti-Neoreactionary discussions on Slate Star Codex.
A problem I've seen is that people are too hung up on a positive/negative affiliation with the passage of time. The controversy seems to revolve mostly around "the past was good / the past was bad".
Who cares how the past was?
Just tell me what your values are and what political / social system you think serves them best!
It doesn't matter if it comes from the past, the Bible, Lord of the Rings or utopian literature. Just discuss the model! It's mostly fiction anyway.

(this mini-rant is directed at nobody in particular. I'll likely never have the occasion to discuss with a Neoreactionary)

Replies from: Viliam, Vaniver, ChristianKl
comment by Viliam · 2016-07-14T10:55:24.562Z · LW(p) · GW(p)

past = outside view

For example, if in the past people have repeatedly suggested a plan to create a paradise on Earth, and the plan, when realized, repeatedly ended with bloodshed and poverty, and now someone suggests the same plan again... I guess that's a reason to suspect it probably wouldn't end well. At the very least, the proponent should explain why exactly the previous instances have failed and what exactly they are planning to do differently today to avoid that specific failure.

But there is a difference between using the past as an outside view, i.e. conservatism; and worshipping the "past as my modern mind imagines it", i.e. neoconservatism / neoreaction. The latter is, ironically, in some aspects similar to the progressives who are worshipping the fictional future -- similar approach to modelling society, different aesthetics (or as you called it "positive / negative affiliation with the passage of time").

Replies from: MrMind
comment by MrMind · 2016-07-15T07:03:06.685Z · LW(p) · GW(p)

I would be a little more radical, but you said what I thought better than I could.

comment by Vaniver · 2016-07-13T15:09:16.230Z · LW(p) · GW(p)

I think a lot of political questions hinge on what's possible, and also what the consequences of policies are. If someone says "I think we should arrange marriages instead of letting individuals pick," then the immediate questions to settle are 1) will people allow such a policy to be put in place / comply with it, and 2) what will the consequences be?

(There's also the "does this align with principles" deontological question, but this is relatively easy to answer without looking at the past or present so I'll ignore it.)

And the past provides our primary data source to answer those sorts of questions. Yes, we can imagine multiple different causal effects of attempting to arrange marriages, but how those interplay with each other and shake out is hard to know. But other people tried that for us, and so we can investigate their experiments and come to a judgment.

Replies from: MrMind
comment by MrMind · 2016-07-14T07:19:26.202Z · LW(p) · GW(p)

The problem I see in using the past as evidence is that the further we go from our era, the more what we know is mostly made up.
True, we have documents and evidence and so on, but they only paint a relatively sketchy picture of what the society was; we mostly make up the details in a reasonable manner. Plus we don't get any statistical data on things like happiness, income, etc.
The risk of mistaking noise for signal is so high that it's probably worth throwing it all away, especially when the starting point of the conversation is "People were happier / sadder in xth century, so we should / shouldn't do as they did".
How can you possibly know?

Replies from: Vaniver
comment by Vaniver · 2016-07-14T14:21:40.386Z · LW(p) · GW(p)

The problem I see in using the past as evidence is that the further we go from our era, the more what we know is mostly made up.

Sure, quality of data degrades with distance, both in space and time. But I don't think it degrades to the point where it actually is worth throwing it all away.

How can you possibly know?

Is this a serious question, or a statement of anti-epistemology? (That is, all knowledge is uncertain, and so the right question is "how did you get to the level of uncertainty you have" rather than "how do you justify pretending that there is no uncertainty?")

Replies from: MrMind
comment by MrMind · 2016-07-15T07:01:22.239Z · LW(p) · GW(p)

But I don't think it degrades to the point where it actually is worth throwing it all away.

It's not only that data becomes more scarce; it also becomes noisier. Case in point: many people believe the Gospels to be a semi-accurate narration of what happened during that era, but actually they were compiled much later, and historically contemporary sources are both scarce and paint a completely different picture.
The further we go, the higher the possibility of bogus evidence.

Is this a serious question, or a statement of anti-epistemology?

A bit of both, I guess. A cautionary tale, but also a question I would definitely ask if I were discussing with someone holding that point of view.

comment by ChristianKl · 2016-07-13T20:22:08.207Z · LW(p) · GW(p)

It doesn't matter if it comes from the past, the Bible, Lord of the Rings or utopian literature. Just discuss the model! It's mostly fiction anyway.

I very much prefer people who base their political beliefs on empirics about the real world to people who just base their political beliefs on made-up fantasy. I don't think there's a good reason to treat both the same.

comment by Thomas · 2016-07-11T10:19:34.745Z · LW(p) · GW(p)

I have a neat software solution for something. Is it kosher to discuss it here, or would it be considered just another spamming attempt?

Replies from: Elo
comment by Elo · 2016-07-11T11:43:52.669Z · LW(p) · GW(p)

Try us. Are you selling to us? If yes, then maybe not so great to do (however, Squirrelinhell released hasteworm just recently and no one complained). If no, then idea sharing is good.

Replies from: Thomas
comment by Thomas · 2016-07-11T13:37:33.746Z · LW(p) · GW(p)

Say that you have a school with about 100 teachers, 1000 students, 25 rooms... each having their own demands and constraints.

Now, you want an optimal schedule - who doesn't? For that I have software that does it automatically, not semi-automatically like everyone else.

I want to test it on several real-life examples from North American and Australian primary and secondary schools. For free, of course.

I am looking for a principal or their assistant to try this together over Skype.

Replies from: sdr, Vaniver, Viliam
comment by sdr · 2016-07-12T12:16:40.646Z · LW(p) · GW(p)

Heads up about the business side of this: selling to primary & secondary schools, especially outside of the US, is 8/10 difficult.

Specifically, even if the teachers are fully championing your solution, they do not wield any sort of purchasing authority (and sure as hell won't pay from their own wallet). The purchasing authority's incentive structure does not align with "teacher happiness", "optimal schedule", or most things one would imagine being the mission of the school. It is, however, critical for them to control all software used inside the school, and they might actively discourage using non-approved vendors.

Replies from: Viliam, Thomas
comment by Viliam · 2016-07-13T07:48:26.802Z · LW(p) · GW(p)

Whose job is it typically to create the schedule? Do those people have political power in schools?

If your marketing point is "better schedules", then yes, it is about the benefit for teachers and students, and no one important cares about that. However, if your marketing point is "easier to make schedules", suddenly the school administration has an incentive to care.

comment by Thomas · 2016-07-12T17:41:47.221Z · LW(p) · GW(p)

Purely economically driven decisions should win eventually.

For example, we once reduced the number of school buses from 4 to 3. 20% of students, or 160, come by bus. That's 3 full buses or 4 not-so-full buses. It's important, however, that every arriving student has a class right away; otherwise they may want to come on a later bus, overcrowding it.

Getting those students to arrive just in time with only 3 buses was a logistical nightmare. But it was just another constraint for the digital evolution of the school schedule.

Another big saving is eliminating the afternoon school shift. We have already evolved 2 such cases.

Replies from: Lumifer
comment by Lumifer · 2016-07-12T18:03:25.439Z · LW(p) · GW(p)

Purely economically driven decisions should win eventually.

Only in the realm of spherical cows in vacuum.

Also known as "The markets can stay irrational longer than you can stay solvent".

comment by Vaniver · 2016-07-11T15:32:32.358Z · LW(p) · GW(p)

For that I have software that does it automatically, not semi-automatically like everyone else.

What optimization method are you using under the hood, if you don't mind me asking?

Replies from: Thomas
comment by Thomas · 2016-07-11T15:59:38.265Z · LW(p) · GW(p)

Evolution. Schedules compete for survival. Every second, 10,000 or so are born, and most are killed by the control program, which lets only the top schedules live, according to the 30+ criteria set in the script.

Random (but perhaps clever) mutation and non-random selection: that's what's under the hood.

At first, the top schedule is a random one and not feasible at all. After a million (or a billion, it depends) generations, the first feasible one appears, and from there on evolution produces better and better schedules.

For every processor core, at least one evolution is going on, each at least slightly different. The program can spread across many computers, and there may be 100 or more parallel evolutions running. They talk occasionally (via the internet) and exchange their champions.

It has been a 10-year-long real-life experiment, which went very well: a lot of schools, teachers, and students involved, and some academic papers published. Now it's time to spread it.
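
For readers unfamiliar with the approach, such a loop might look roughly like this (an illustrative sketch with a single toy constraint; the representation, weights, and numbers are placeholders, not the actual implementation):

    import java.util.Random;

    // Toy version of the described loop: breed many mutants of the current
    // champion, score them against weighted constraints, keep the best.
    public class ScheduleEvolution {
        static final Random RNG = new Random();

        // A schedule as a flat assignment: event index -> time slot.
        static int[] mutate(int[] parent, int slots) {
            int[] child = parent.clone();
            child[RNG.nextInt(child.length)] = RNG.nextInt(slots); // one random change
            return child;
        }

        // Weighted penalty, lower is better. Placeholder constraint: no two
        // events in the same slot. Hard constraints get huge weights (the
        // comments below mention weights from 0 up to 10^12).
        static long penalty(int[] s) {
            long p = 0;
            for (int i = 0; i < s.length; i++)
                for (int j = i + 1; j < s.length; j++)
                    if (s[i] == s[j]) p += 1_000; // this constraint's weight
            return p;
        }

        public static void main(String[] args) {
            int events = 50, slots = 60, broodSize = 200;
            int[] best = RNG.ints(events, 0, slots).toArray(); // random start

            for (int gen = 0; gen < 2_000 && penalty(best) > 0; gen++)
                for (int i = 0; i < broodSize; i++) {
                    int[] child = mutate(best, slots);
                    if (penalty(child) <= penalty(best)) best = child; // the rest die
                }
            System.out.println("final penalty: " + penalty(best));
        }
    }

A real deployment would differ mainly in the mutation operators, the full weighted constraint set, and in running many such populations in parallel that occasionally exchange champions.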

Replies from: HungryHobo, Viliam
comment by HungryHobo · 2016-07-12T12:57:43.893Z · LW(p) · GW(p)

So, a parallel genetic algorithm-based scheduling app with (ranked?) constraints?

In what way is it more automatic than existing similar apps?

Presumably you still need to give it a list of constraints (say a few thousand), possibly in a spreadsheet, some soft, some hard, and it spits out a few of the top solutions, or presumably an error if the hard constraints cannot be met?

What can it do that, say, OptaPlanner can't do?

Replies from: Thomas
comment by Thomas · 2016-07-12T16:45:09.735Z · LW(p) · GW(p)

parallel genetic algorithm

I wouldn't say it's a "genetic algorithm"; I prefer the term "evolution algorithm".

In what way is it more automatic than existing similar apps?

We did some testing. For example, we took some existing schedules and optimized them with our tool. The difference was substantial.

We also did some packings of circles inside a square and of spheres inside a cube, denser than had previously been achieved.

We have built some 3D crosswords, 8 by 8 by 8 letters, with no black field at all - a field filled with English words.

I don't know if OptaPlanner can do the same. I think not.

spreadsheet, some soft, some hard, and it spits out a few of the top solutions

Every constraint has its own user-specified weight, from 0 to 10^12, and every integer inside this interval is allowed. This is the measure of how soft or hard a constraint is.

Replies from: Vitor, Lumifer
comment by Vitor · 2016-07-12T21:56:20.137Z · LW(p) · GW(p)

Did you also test what other software (OptaPlanner as mentioned by HungryHobo, any SAT solver or similar tool) can do to improve those same schedules?

Did you run your software on some standard benchmark? There exists a thing called the international timetabling competition, with publicly available datasets.

Sorry to be skeptical, but scheduling is an NP-hard problem with many practical applications, and tons of research has already been done in this area. I will grant that many small organizations don't have the know-how to set up an automated tool, so there may still be a niche for you, especially if you target a specific market segment and focus on making it as painless as possible.

Replies from: Thomas
comment by Thomas · 2016-07-13T05:25:32.316Z · LW(p) · GW(p)

We did some benchmarks. Sometimes we did it well, sometimes not that well.

For example, in the case of the Job Shop Scheduling benchmarks we were unable to break a single record. There are records waiting to be broken in the JSS area, but we haven't broken a single one.

But we still hold some (years-old) packing records right now.

One may say that JSS is the basis of all scheduling and that packing is not. In fact, real-life scheduling is more complicated than either of those benchmarks. We have many more constraints in real life. And it turns out that many constraints somehow help the evolution find trade-offs.

Replies from: HungryHobo
comment by HungryHobo · 2016-07-13T10:24:37.446Z · LW(p) · GW(p)

If you're the holder of some records for certain problem types, then that grabs my interest.

I'd suggest leading with that, since it's a strong point.

Replies from: gjm, Thomas
comment by gjm · 2016-07-13T15:49:04.324Z · LW(p) · GW(p)

Not necessarily for their target market.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2016-07-14T13:04:13.566Z · LW(p) · GW(p)

I believe that being flexible about target markets is one of the major ways businesses grow.

comment by Thomas · 2016-07-14T16:15:03.839Z · LW(p) · GW(p)

The best way to win over principals is to show them that a ridiculously complex constraint can be applied and calculated automatically.

  • 4.5 school hours of S per week (4 hours on odd weeks and 5 hours on even weeks)
  • when there is a fifth hour in the week, this hour may be the second hour of subject S on that day
  • if it is on the same day, it should be immediately after the previous hour of subject S
  • in the above case, it must be the last hour for the teacher
  • three classes of students are divided into 5 groups for subject S
  • there are 4 teachers for those 5 groups; one teacher teaches groups 2 and 4
  • there is a given list of students for groups 1, 3 and 5, and a combined list of students for groups 2 and 4
  • the computer should divide the combined list into two separate lists (2 and 4), but they must not differ in size by more than 4 students
  • as one of those groups (2 or 4) is always idle, subject M, which is equally divided, must be taught then - or S should be the first hour of the day
  • and there are only 4 hours of subject M per week
  • there are only 3 teachers of M
  • there are also 3 hours of subject A per week for those same students, in 5 differently composed groups
  • there are 5 teachers of A, but one of them also teaches group number 1 of S
  • it would be nice, but not mandatory, if the number of waiting hours for students were 0

This is a real-life example that I discussed an hour ago with one of the teachers (a math teacher) at one of our schools. It is not the most complex demand we have had, by far.

S = Slovenian language, M = Math, A = Anglescina (guess what that is)

Replies from: HungryHobo
comment by HungryHobo · 2016-07-15T09:22:21.090Z · LW(p) · GW(p)

Fair enough. I was underwhelmed by your initial post describing it, but I agree that showing your system can handle weird constraints in real examples is an excellent demonstration.

The record thing to me just happens to be a good demonstration that you're not just another little startup with some crappy scheduling software; you're actually at the top of the field in some areas.

comment by Lumifer · 2016-07-12T17:24:35.040Z · LW(p) · GW(p)

If your algorithm is actually best in class for this problem, there are serious applications for it outside of schools.

Replies from: Thomas
comment by Thomas · 2016-07-12T18:15:10.425Z · LW(p) · GW(p)

I know that. But my focus in this thread is North America's schools as a big market.

But yes: how good is this algorithm really? Where is its optimal domain?

I guess evolving algorithms is the best use: either from a previously known algorithm, or from scratch, or from data. Like evolving Kepler's law from planetary data. I wrote a post about that here, a few years ago.

http://lesswrong.com/lw/9pl/automatic_programming_an_example/

Replies from: Lumifer
comment by Lumifer · 2016-07-12T18:33:54.985Z · LW(p) · GW(p)

North America's schools as a big market

The thing is, it's a very fragmented market. US schools are local, basically run at the town level, so for you it is essentially a retail market with a large number of customers, each of which buys little. I'm guessing that you'll need a large sales organization to break in.

Replies from: HungryHobo
comment by HungryHobo · 2016-07-13T10:26:36.438Z · LW(p) · GW(p)

Or possibly to find an existing company selling office/organization/planning software that already has a big share of the market, and sell them a license to the tech.

comment by Viliam · 2016-07-12T07:57:08.346Z · LW(p) · GW(p)

Does the solution space support this? I can imagine a schedule that only violates 1 criterion, but the nearest correct solution is far away from it. (Seems to me the schedules are similar to 3-SAT in this aspect.)

Replies from: Thomas
comment by Thomas · 2016-07-12T19:04:06.588Z · LW(p) · GW(p)

Does the solution space support this? I can imagine a schedule that only violates 1 criterion, but the nearest correct solution is far away from it.

This is indeed a big and fundamental problem. If only 1 criterion is violated and this persists for many millions of generations, the control program sees this semi-solution as worse and worse. Much worse than one missing 2 or 4 criteria. So it's then killed.

It's even more complicated than that. Several such tricks are employed and this problem almost vanishes.

comment by Viliam · 2016-07-11T14:19:00.887Z · LW(p) · GW(p)

I used to make schedules with aSc TimeTables years ago. There is a free demo available. Could you compare your approach with this app?

The application is fully automatic, in the sense that you first enter all the data and constraints, and then you run the computation. (There is an option to put some items on specific place and "lock" them there, but this is strongly discouraged unless you really know what you are doing. Essentially, you can do it to speed up the computation if you have a logical proof that certain things must be done some way, but the application can't notice that and keeps wasting CPU time with alternative approaches.) As far as I know, the numbers of teachers / students / rooms / subjects are not limited, but of course their number has an impact on the complexity of the computation.

Replies from: Thomas
comment by Thomas · 2016-07-11T14:46:47.901Z · LW(p) · GW(p)

The complexity of constraints for each student or teacher is much greater with our software than with aSc. The scripting language is much more complex and enables you to describe pretty much every whim one might have: the different speeds at which teachers travel between two locations, after how many classes a break is mandatory for a specified teacher... and many more.

It's primarily student-oriented, and every student can have a very different curriculum than everybody else.

Still, all this will be automatically calculated and then optimized.

Now, we want to see how it will behave in practice for North America and Australia.

Replies from: Viliam
comment by Viliam · 2016-07-11T14:49:15.203Z · LW(p) · GW(p)

Sounds great! I hope you don't expect average teachers to write the scripts though.

Replies from: Thomas
comment by Thomas · 2016-07-11T15:13:16.146Z · LW(p) · GW(p)

Not an average teacher, no. But at every other school there is at least one teacher who is able to do it (for the entire school, of course). Some like to work in pairs when scripting it.

I thought I might find some among the readers and contributors here as well. I'm looking for people with this (hard) problem.

comment by Bound_up · 2016-07-13T19:38:23.917Z · LW(p) · GW(p)

I've been thinking about belief as anticipation versus belief as association.

Some people associate with beliefs like they associate with sports teams. Asking them to provide evidence for their belief is like asking them to provide evidence for their sports team being "the best."

And beliefs as anticipation you know, I'm sure.

My question is: What are signs of a "belief" being an anticipation versus it being a mere association (or other non-anticipating belief)?

One is the attempt to defend against falsification: "If you REALLY believed you wouldn't be making excuses in advance, you would confidently accept a test that you knew would show how right you were."

Got any other useful ones?

Replies from: Viliam, ChristianKl
comment by Viliam · 2016-07-14T07:48:57.327Z · LW(p) · GW(p)

This is further complicated by the fact that even the anticipation-beliefs are probabilistic. So you can have a "belief" that says "I love my sports team", and a belief that says "I expect my team to win (probability 80%)".

So in both cases it is possible for the team to lose and the person to keep their belief.
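(A worked update with assumed numbers makes this concrete: suppose the prior that the team is strong is 0.8, a strong team loses a given match with probability 0.3, and a weak one with probability 0.7. A single loss only moves the anticipation-belief to about 0.63, so it survives, and the "I love my team" belief was never about outcomes in the first place:)

```latex
P(\text{strong} \mid \text{loss})
  = \frac{P(\text{loss} \mid \text{strong})\, P(\text{strong})}
         {P(\text{loss} \mid \text{strong})\, P(\text{strong})
          + P(\text{loss} \mid \text{weak})\, P(\text{weak})}
  = \frac{0.3 \cdot 0.8}{0.3 \cdot 0.8 + 0.7 \cdot 0.2}
  \approx 0.63
```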

comment by ChristianKl · 2016-07-13T20:40:21.974Z · LW(p) · GW(p)

I don't think the test does what you propose. I can both strongly identify with a belief and at the same time make anticipations based on the belief.

Replies from: Brillyant
comment by Brillyant · 2016-07-13T21:12:31.417Z · LW(p) · GW(p)

Three types exist.

1) Belief as association AND Belief as anticipation

2) Belief as anticipation ONLY

3) Belief as association ONLY

Only type 3 beliefs would leave the believer making excuses in advance. They don't actually believe a claim to be true (anticipation), but they believe that assenting to the belief is important (association).

See Dennett's Belief in Belief and Sagan's Garage Dragon for more info.

I don't think it's quite as cut and dried as this, by the way. People have their personal probabilities in regard to how strongly they hold anticipatory beliefs. It's not all or nothing.

Replies from: ChristianKl
comment by ChristianKl · 2016-07-14T20:03:22.635Z · LW(p) · GW(p)

Empirically I don't find this to be the case. I think most skeptics do have beliefs of anticipation that various paranormal effects won't happen. At the same time, if you bring a skeptic into situations where his beliefs about the domain might reasonably get challenged, he might make excuses in advance.

People have their personal probabilities in regard to how strongly they hold anticipatory beliefs. It's not all or nothing.

Most people don't use probability for their beliefs. They use mental processes such as the availability heuristic, which doesn't correspond directly to probabilities.

See Dennett's Belief in Belief and Sagan's Garage Dragon for more info.

Neither Dennett nor Sagan is a psychologist or has comparable experience working with beliefs in other contexts. If you treat their discussions, which are essentially about ontology, as discussions about how humans reason, you are going to make mistakes.

Replies from: Brillyant, Jiro
comment by Brillyant · 2016-07-22T16:16:11.291Z · LW(p) · GW(p)

Most people don't use probability for their beliefs. They use mental processes such as the availability heuristic, which doesn't correspond directly to probabilities.

I meant "personal probability" as the confidence with which people intuit a belief as actually anticipatory (vs. a belief they merely assent to as an association). This level of confidence is on a sliding scale (vs. all or nothing).

Replies from: ChristianKl
comment by ChristianKl · 2016-07-31T16:55:41.632Z · LW(p) · GW(p)

Moat-and-bailey. I don't think there was any suggestion in the above post that by probability you meant something that doesn't follow Kolmogorov's axioms and where you can't directly apply Bayes' rule. Especially on LW, I think it's valuable not to call things 'probability' when they don't follow those axioms and therefore aren't what's usually meant by 'probability'.
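(For reference, the axioms in question, together with Bayes' rule:)

```latex
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \quad \text{for pairwise disjoint } A_i,
\qquad
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```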

Replies from: gjm, Brillyant
comment by gjm · 2016-07-31T21:00:57.500Z · LW(p) · GW(p)

Moat-and-bailey

I don't normally point out typos (and it's probably better on balance for LW not to be the sort of nitpicky place where everyone does) but this one is (1) almost exactly backwards and (2) sufficiently plausible-sounding to be dangerous :-). It's motte and bailey. The motte is the raised mound with a fortification on it. The moat is the big ditch around the castle, usually filled with water.

Replies from: ChristianKl, Lumifer
comment by ChristianKl · 2016-08-01T09:50:21.804Z · LW(p) · GW(p)

Thanks.

comment by Lumifer · 2016-08-01T15:59:31.304Z · LW(p) · GW(p)

The bailiffs got drunk on Baileys, crossed the moat, and demolished the motte leaving nothing but bay leaves and motes of dust floating in the air...

comment by Brillyant · 2016-08-02T16:18:24.157Z · LW(p) · GW(p)

Okay.

My point was only that there is a spectrum. Some beliefs are anticipatory (i.e. people actually believe them) and others are just associations (i.e. people don't believe them, but they find the idea of saying they believe in them to be so important they swear up and down they believe in them)...

But most beliefs are somewhere in the grey middle, with people assigning a "gut feeling probability" to each belief, without doing any math.

Replies from: ChristianKl
comment by ChristianKl · 2016-08-02T19:38:10.973Z · LW(p) · GW(p)

With those semantics, people not only have a "gut feeling probability" but also a "heart feeling probability" and various similar "probabilities". Those don't have to be the same, and depending on the context the person is going to use a different one.

Replies from: Brillyant
comment by Brillyant · 2016-08-03T13:00:50.993Z · LW(p) · GW(p)

Meh. Not really. In American English, "gut feeling" has a strong connotation of essentially instinct or intuition.

Here's a definition I found via Google's first page results: "an instinct or intuition; an immediate or basic feeling or reaction without a logical rationale"

This is what I meant. I think that would be clear to a high percentage of readers.

Replies from: ChristianKl
comment by ChristianKl · 2016-08-03T13:22:01.034Z · LW(p) · GW(p)

Here again is the problem that you're not looking at the way humans actually reason, but at the abstract concepts defined in the dictionary.

The way terms are defined in the dictionary has little to do with the empirical reality that some people give different intuitive answers when they feel into their gut than when they feel into their heart.

comment by Jiro · 2016-07-15T14:45:47.432Z · LW(p) · GW(p)

At the same time, if you bring a skeptic into situations where his beliefs about the domain might reasonably get challenged, he might make excuses in advance.

I can guess that if you were to meet a flat-earther with the intent of engaging with his ideas, you would start thinking of what things he might show you and why those things wouldn't actually demonstrate a flat earth. That does not mean you are making "excuses in advance".

"He's probably going to show me how ships disappear on the horizon, but I know that is affected by air refraction." "Oh, you're just making an excuse in advance."

Replies from: ChristianKl
comment by ChristianKl · 2016-07-15T20:44:23.708Z · LW(p) · GW(p)

That does not mean you are making "excuses in advance".

What empirical standard would you use to classify things as making excuses in advance?

Replies from: Jiro
comment by Jiro · 2016-07-16T06:36:53.415Z · LW(p) · GW(p)

I don't know, but I'm pretty sure that "I can respond to any claim he's likely to make" isn't it. I'm not sure there is such a thing at all, short of having your idea be outright unfalsifiable.

Replies from: ChristianKl
comment by ChristianKl · 2016-07-16T17:24:53.960Z · LW(p) · GW(p)

It seems like there is something the OP means by "making excuses in advance". It might not be what you think would rightly be called "making excuses in advance".

I don't think that category exists in a way where it can be successfully used to distinguish people who have anticipations and are identified with a belief from people who are just identified with it.

comment by turchin · 2016-07-12T21:26:02.318Z · LW(p) · GW(p)

"Superintelligence cannot be contained: Lessons from Computability Theory" http://arxiv.org/pdf/1607.00913.pdf

"Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible."

Replies from: gjm, torekp
comment by gjm · 2016-07-13T10:19:02.454Z · LW(p) · GW(p)

This paper frames the problem as "look at a program and figure out whether it will be harmful" and correctly observes that there is no way to solve that problem with perfect accuracy if the programs being analysed are arbitrary. But its arguments have nothing to say about, e.g., whether there's some way of preventing harm as it's about to happen; nor about whether it is possible to construct a program that provably does something useful without harming humans.

E.g., imagine a world where it is known that the only way to harm humans is to press a certain big red button labelled "Harm the Humans". The arguments in this paper show that there is no general procedure for deciding whether a computer with the ability to press this button will do so. But they don't rule out the possibility that you can make a useful machine with no access to the button, or a useful machine with a little bit of hardware in it that blows it up if it gets too close to the button.

(There are reasons to be concerned about such machines because in practice you probably can't causally isolate them from the button in the way required. The paper's introductory material discusses some such reasons. But they play no role in the technical argument of the paper, at least on the cursory reading I've given it.)
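(For concreteness, here is the standard diagonalization behind the paper's technical claim, sketched in Python; would_press_button and press_button are hypothetical names, and the decider stub exists only for the sake of contradiction:)

```python
def would_press_button(program_source: str) -> bool:
    # Hypothetical perfect harm-detector: returns True iff running
    # `program_source` would ever press the big red button.
    raise NotImplementedError  # assumed to exist, for contradiction

CONTRARY = '''
if would_press_button(CONTRARY):
    pass             # detector says "presses" -> do nothing harmful
else:
    press_button()   # detector says "safe" -> press the button
'''

# Whatever would_press_button(CONTRARY) returns, CONTRARY does the
# opposite, so no total, always-correct detector can exist. Note this
# rules out a *general* decider only; it says nothing against
# safeguards that act at the moment the button is about to be pressed.
```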

Replies from: turchin
comment by turchin · 2016-07-13T21:31:31.010Z · LW(p) · GW(p)

I think that it is difficult, but may be possible, to create a superintelligent program which will provably do some formally specified thing.

But the main problem is that we can't formally specify what "harming humans" means. Or we can, but we can't be sure that it is a safe definition.

So it results in some kind of circularity: we could prove that the machine will do X, but we can't prove that X is actually good and safe.

We may try to return the burden of proof to the machine: we must prove that it will prove that X is really good and safe. I have bad feelings about the computability of this task.

That is why I am generally skeptical of the idea of a mathematical proof of AI safety. It doesn't provide 100 percent safety, because the proof can have holes in it and the task is too complex to be solved in time.

Replies from: gjm
comment by gjm · 2016-07-13T23:06:15.733Z · LW(p) · GW(p)

This is a real and important difficulty, but it isn't what the paper is about -- they assume one can always readily tell whether people are being harmed.

comment by torekp · 2016-07-13T00:51:46.782Z · LW(p) · GW(p)

Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world

What is the notion of "includes" here? Edit: from pp 4-5:

This means that a superintelligent machine could simulate the behavior of an arbitrary Turing machine on arbitrary input, and hence for our purpose the superintelligent machine is a (possibly identical) super-set of the Turing machines. Indeed, quoting Turing, “a man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine”

comment by Gleb_Tsipursky · 2016-07-12T18:06:28.733Z · LW(p) · GW(p)

Did some rationality-informed commenting for my university television about guns and racism.