Comment by zeitpolizei on European Community Weekend 2019 · 2019-06-20T22:21:46.268Z · score: 3 (2 votes) · LW · GW

Some information regarding the start and end times:

  • Lunch + Shuttle (included) 30.08. 12:00-14:00 @ Akazienstraße 27, 10823 Berlin
  • Regular check-in 15:00-16:00 @ Badeweg 1, 14129 Berlin
  • Check-out on 02.09 until 10:00
  • Meeting rooms are free to use for co-working until 15:00 on 02.09, after the event

Prompts for eliciting blind spots/bucket errors/bugs

2019-04-01T19:41:59.677Z · score: 13 (8 votes)
Comment by zeitpolizei on In My Culture · 2019-03-11T00:50:20.327Z · score: 20 (6 votes) · LW · GW
Most importantly, this framing is always about drawing contrasts: you're describing ways that your culture _differs_ from that of the person you're talking to. Keep this point in the forefront of your mind every time you use this method: you are describing _their_ culture, not just yours. [...] So, do not ever say something like "In my culture we do not punish the innocent" unless you also intend to say "Your culture punishes the innocent" -- that is, unless you intend to start a fight.

Does this also apply to your own personal culture (whether aspiring or as-is), or "just" the broader context culture?

Because in my (aspiring) culture simple statements of fact are generally interpreted at face value and further evidence is required to make less charitable interpretations. This is especially true for interpretations that assume the speaker has made some kind of judgement.

So, let's go meta here and see whether I intended to say "Your culture generally makes less charitable interpretations of statements than mine." I guess the answer is yes, though I would like to point out the distinction here between personal culture and broader context culture, hence my question at the beginning. [Writing this I'm also realizing it's really difficult to disentangle statements about culture from judgments. I'm noticing cognitive dissonance because I actually do think my culture is better, but I don't like myself being judgmental.]

Now why did I write the comment above? Because in my culture-as-is the language used in the OP ("always", "do not ever") is too strong given my epistemic status.

Again, we can analyze the intent of this "In my culture"-statement. Here my intent is to say "your culture uses language differently from mine" OR "My epistemic status is different from yours."

Not a direct response to your comment, but related and gives background to my initial question: In my aspiring culture a straightforward question (whatever that means) is by default meant and interpreted (primarily) as an expression of genuine curiosity about the answer.

Thinking about and writing this comment, I've realized that my own culture may be a lot more idiosyncratic than I thought. I also found it really interesting to see my initial prompt to write this post (an immediate gut reaction of "I don't agree with that") dissolve into an understanding of how the disagreement can be due to either cultural or epistemic differences.

NB: There is some entanglement here between intentions, interpretations and responses. In describing a "perfect" culture, intentions and interpretations can be freely interchanged to a large extent, because if everyone has the same culture, they will make the correct assumptions about other people's intents and states of mind. So saying "In my culture people say X because they want Y" is equivalent to saying "In my culture, when someone says X, people know that that person wants Y".

There is also, to an extent, a disconnect between the epistemic status of your interpretation of the other person's state of mind and your own reaction, because different reactions entail different costs. Even if an uncharitable interpretation has the highest probability of being correct, it often makes sense to act under the assumption that a more charitable interpretation is correct.

Rationality Retreat in Cologne Area, Germany, Spring 2019

2019-03-04T21:36:27.401Z · score: 22 (8 votes)
Comment by ZeitPolizei on [deleted post] 2019-02-10T14:25:01.812Z

test comment

Comment by ZeitPolizei on [deleted post] 2019-02-10T14:06:57.900Z
Comment by zeitpolizei on Rationality Retreat in Europe: Gauging Interest · 2018-08-14T13:24:39.111Z · score: 1 (1 votes) · LW · GW

I agree in principle, though this of course depends on how far in advance it is announced. If it's reasonable to expect to fill 20 slots with two months' advance notice, that gives more flexibility in planning.

Comment by zeitpolizei on Rationality Retreat in Europe: Gauging Interest · 2018-08-14T13:20:10.812Z · score: 1 (1 votes) · LW · GW

"Retreat" in the sense of a spiritual retreat, but with the topic of rationality instead of meditation or spirituality, following the same principle as, e.g., the Czech EA retreat.

"Rationality" as it is generally understood on LessWrong. So this is aimed at people who aspire to be more rational and want to interact with like-minded people.

Comment by zeitpolizei on Rationality Retreat in Europe: Gauging Interest · 2018-08-14T08:44:03.204Z · score: 1 (1 votes) · LW · GW

Good point! I hadn't really thought of Facebook and the local groups for advertising.

Rationality Retreat in Europe: Gauging Interest

2018-08-13T20:33:29.932Z · score: 9 (7 votes)
Comment by zeitpolizei on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2018-03-11T17:43:42.974Z · score: 18 (4 votes) · LW · GW

[Note: mostly just me trying to order my thoughts, kind of hoping someone can see and tell me where my confusion comes from]

So the key insight regarding suffering seems to be that pain is not equal to suffering. Instead there is a mental motion (flinching away from pain) that produces (or is equal to?) suffering. And whereas most people see pain as intrinsically bad, Looking allows you to differentiate between the pain and the flinching away, realizing that pain in and of itself is not bad. It also allows you to get rid of the flinching away, thus eliminating the suffering, but without eliminating the pain. But is the flinching away intrinsically bad? Or is it also possible to defuse from the flinching in a way that makes it less unpleasant?

And then, is there also an equivalent for good experiences? Pain is to suffering as pleasure is to…? Is there a mental motion of turning towards, or welcoming an experience, which is ultimately responsible for seeing pleasurable experiences as good? And if the flinching away is in some way intrinsically bad, is this opposite motion intrinsically good?

Now, once you get that pain is not equal to suffering, and you've thus managed to eliminate suffering for you personally, what reasons remain to try to change something about what you expect to experience in the future? If you still care about other people suffering, then of course there is plenty to do, to reduce other people's suffering by reducing the pain they experience. But it wouldn't really be about the pain, just the reaction to the pain.

Then, suppose we somehow managed to make it so that all people (or conscious entities) no longer experience the suffering that comes from flinching away from pain. Would there still be reasons to make the world "better", or would we be content with things just unfolding however they do, because as long as we don't suffer over it, nothing is intrinsically bad? Is the kind of suffering that comes from flinching away from pain maybe the only thing that is bad in a morally relevant way? Once suffering is out of the picture, what kinds of wants, preferences, reasons or values remain that actually make a difference to how the world is supposed to look?

Intuitively, a world in which suffering is eliminated by getting rid of an aversion to pain feels very much like a world where everyone is wireheaded, and would contain very little value, if any. I have a sense that it is a bad thing if people are feeling okay (or great, in the case of actual wireheading) while the world is actually really shitty. Now, is this sense due to values I hold that go beyond pleasure and suffering, and are they stable under reflection? Or is the correct conclusion that yes, once nobody suffers anymore, it doesn't matter if the rest of the world looks really bad? Does it feel so bad simply because I still have the alief that pain is intrinsically bad, and Looking would allow me to see that pain really is, in a way, irrelevant?

if you truly step outside your entire motivational system, then that leaves the part that just stepped out with no motivational system,

And if you see yourself going to the store to get some food, well, why not go along with that? After all, to stop acting as you always have, would require some special motivation to do so.

Even if you do manage to defuse from everything that causes you suffering, your existing personality and motivational system will still be in charge of what it is that you Look at in the future.

These quotes, as well as what I remember others saying about enlightenment, make it sound like there is still ultimately a "self" or "I" that is the one that "steps outside your motivational system", "sees yourself going to the store", "manages to defuse", or "sees through the illusion of the self". But if I understand correctly, what actually happens is that there is a conscious process that makes one of these motions, but it doesn't have any privileged position and is no more the "true self" than e.g. the urge to go to the store. So ultimately all these different parts, insights, and thoughts are just part of the same single person. I would have initially expected this to mean that there would be feedback between the different parts (e.g. realizing pain isn't so bad should also eliminate the motivations for avoiding pain). But upon reflection, it seems like those kinds of insights are only possible because there is no feedback between the different parts? I feel like I may be mixing together some things here that are actually separate.

Comment by zeitpolizei on Book Review: Consciousness Explained · 2018-03-10T19:08:36.240Z · score: 2 (2 votes) · LW · GW
Another theme of the book that reaches its crescendo…

The paragraph beginning with this sentence is duplicated and butchered.

Comment by zeitpolizei on The Intelligent Social Web · 2018-02-23T11:20:25.878Z · score: 42 (11 votes) · LW · GW

I haven't really understood where the fakeness in the framework is. The other comments also don't seem to acknowledge that it is a fake framework, which I interpret as people taking it at face value to be true or real. I suspect I haven't quite understood what is meant by "fake framework".

I'm currently seeing two main ways in which I can make the fakeness make sense to me:

  1. People do step out of their roles quite often in real life, breaking the expectations of the web. So the framework works better for broad-strokes predictions than for specific behavior. Or rather, there is a lot of behavior not accounted for in the framework.
  2. Just like "every model is wrong", every framework is fake, and this is a framework that is "less fake" than others.

A thing that's rather irrelevant to the actual topic at hand, but that I feel like sharing:

  • Me reading this: "Oh neat, links about attachment theory, this is acutely relevant to me. Let's check out the 'how to tell if your partner is X' links."
  • Reading the links: "Hm yeah, seems like I'm anxious-preoccupied and the person I'm interested in could be dismissive-avoidant."
  • Reading this again after a few hours, because I felt I hadn't understood it completely: "Oh cool, let's also look at this link 'Ending the Anxious-Avoidant Dance', that seems like it's exactly what I need to know right now."
  • I continue reading the article and…

    In the husband/​wife script mentioned above, there’s a tendency for the “wife” to get excited when “she” learns about the relationship script, because it looks to “her” like it suggests how to save the relationship — which is “her” enacting “her” role.

Wait a second. Aw fuck me, this is exactly what's happening to me right now! My mood instantly improved by a ton and I kept laughing for several minutes.

PS: It's probably more helpful to point your Attachment Theory link to here instead.

Comment by zeitpolizei on Two kinds of Agency · 2018-02-08T22:09:13.901Z · score: 3 (1 votes) · LW · GW

Ah, you seem to automatically interpret "math is useless" as meaning "math is useless to me". But people can also mean, and that's what I was trying to get at, that math is of no use for anything, to anyone. That would be the "X is good" belief being threatened, as Kaj pointed out.

Comment by zeitpolizei on Two kinds of Agency · 2018-02-08T09:50:51.365Z · score: 3 (1 votes) · LW · GW

Is there a difference between what you are describing and simply having a more or less nuanced view on the matter? It seems like you're confirming exactly what Paul Graham describes. You've made your identity as a mathematician smaller and are thus no longer threatened by people expressing certain opinions on math. But there are still things that are fundamental to your identity as a mathematician, that need protecting. If someone says "math is useless" does that not evoke a feeling of needing to defend maths?

Comment by zeitpolizei on The Craft & The Community - A Post-Mortem & Resurrection · 2018-01-13T19:39:28.942Z · score: 3 (1 votes) · LW · GW

There is now a map and a preregistration database.

Comment by zeitpolizei on Happiness Is a Chore · 2017-12-20T12:27:33.554Z · score: 12 (4 votes) · LW · GW

Could you paint a more detailed picture of what you mean by happiness? There is a wide range of things that can be called happiness, and I assume you only mean some of them. In particular, I don't think you mean the happiness you feel when you get a reward, because that's what we are actually optimized for achieving.

Comment by zeitpolizei on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T21:48:20.192Z · score: 1 (1 votes) · LW · GW

It seems no one on LW is able to explain to you how and why people want different material. To my mind, Kaj's explanation is perfectly clear. I'm afraid it's up to you to figure it out for yourself. Until you do, people will keep giving you invalid arguments, or downvote and ignore you.

Comment by zeitpolizei on Open Letter to MIRI + Tons of Interesting Discussion · 2017-12-03T04:07:28.397Z · score: 1 (1 votes) · LW · GW

how will writing it again change anything?

Why should anyone answer this question? Kaj has already written an answer to it above, but you don't understand it. How will writing it again change anything? You still won't understand it. This request for an explanation makes no sense: it's not that you understand the answer, have some complaint, and want it improved in some way; you simply won't understand it.

You claim you want to be told when you're mistaken, but you completely dismiss any and all arguments. You're just saying "these people obviously haven't spent hundreds of hours learning and thinking about CR, so there is no way they can have any valid opinion about it", and you won't engage with their arguments at a level where they are willing to listen and able to understand.

Comment by zeitpolizei on Any Good Criticism of Karl Popper's Epistemology? · 2017-11-29T21:52:07.320Z · score: 12 (6 votes) · LW · GW
I'm tapping out.
Comment by zeitpolizei on The Mad Scientist Decision Problem · 2017-11-29T17:28:07.243Z · score: 6 (2 votes) · LW · GW

I suspect you may be thinking of the thing where people prefer, e.g., (A1) a 100% chance of winning 100€ (how do I make a dollar sign?) to (A2) a 99% chance of winning 105€, but at the same time prefer (B2) a 66% chance of winning 105€ to (B1) a 67% chance of winning 100€. This is indeed irrational, because it means you can be exploited. But depending on your utility function, it is not necessarily irrational to prefer both A1 to A2 and B1 to B2.
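To make the last claim concrete, here is a minimal sketch (my own illustration, not from the thread) using an assumed, strongly risk-averse utility function u(x) = 1 - exp(-x/20):

```python
import math

def u(x):
    # Assumed, strongly risk-averse utility function (illustrative only).
    return 1 - math.exp(-x / 20)

# Expected utilities of the four lotteries from the comment.
eu_a1 = 1.00 * u(100)   # (A1) 100% chance of winning 100
eu_a2 = 0.99 * u(105)   # (A2) 99% chance of winning 105
eu_b1 = 0.67 * u(100)   # (B1) 67% chance of winning 100
eu_b2 = 0.66 * u(105)   # (B2) 66% chance of winning 105

# This single utility function ranks A1 over A2 *and* B1 over B2:
assert eu_a1 > eu_a2 and eu_b1 > eu_b2
```

By contrast, no expected-utility maximizer can prefer both A1 to A2 and B2 to B1: A1 over A2 means u(100) > 0.99 * u(105), which (for positive u) already implies 0.67 * u(100) > 0.66 * u(105), i.e. B1 over B2. That combined pattern is the exploitable one.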

Comment by zeitpolizei on Any Good Criticism of Karl Popper's Epistemology? · 2017-11-29T11:02:01.620Z · score: 14 (4 votes) · LW · GW
The current topic is epistemology, not the color of the sky, so you don't get to gloss over epistemology as you might in a conversation about some other topic.

So, because the discussion in general is about epistemology, you won't accept any arguments whose epistemology isn't specified, even if the topic of the argument doesn't pertain directly to epistemology; but if the discussion is about something else, you will just engage with the arguments regardless of the epistemology others are using?

That seems… unlikely to work well (if the topic is epistemology) and inconsistent.

I'd like to reiterate that I would really appreciate a link to an example where somebody convinced you to change your mind. Failing that: you've mentioned elsewhere that you often changed your mind in discussions with David Deutsch. If you could reproduce, or at least sketch, a discussion you've had with him, I would be very interested.

Comment by zeitpolizei on Any Good Criticism of Karl Popper's Epistemology? · 2017-11-29T09:11:33.169Z · score: 11 (3 votes) · LW · GW

Given some data and multiple competing hypotheses that explain the data equally well, the laws of probability tell us that the simplest hypothesis is the likeliest. We call this principle of preferring simpler hypotheses Occam's Razor, and using it works well in practice: in machine learning, for example, a simpler model will often generalize better. That is how I know Occam's Razor is "any good". It is a tool for problems of the kind described in the italicized text above; it makes no claims regarding arguments or criticisms.

I don't really see why I would need a coherent/perfect/complete epistemology to make this kind of argument or come to that conclusion. It seems to me you are saying that any claims not attained via the One True Epistemology are useless/invalid/wrong: that you wouldn't even accept someone saying the sky is blue unless that person first showed you they are using the right epistemology.

I notice that I don't know what an argument that you would accept could even look like. You're a big fan of having discussions written down in public. Could you link to an example where you argued for one position and then changed your mind because of somebody else's argument(s)?

Comment by zeitpolizei on Any Good Criticism of Karl Popper's Epistemology? · 2017-11-29T04:02:40.992Z · score: 11 (3 votes) · LW · GW
how do you know Occam's Razor is any good?

Imo chapter 28 of this book gives a good sense why Occam's Razor is good. I'll try to explain it here briefly as I understand it.

Suppose we have a class of simple models with three free binary parameters, and a class of more complex models with ten free binary parameters. We also have some data, and we want to know which model to choose to explain it. A priori, each of the simple model's parameter sets has a probability of 1/8 of being the best one, whereas for the complex model each has a probability of only 1/1024. As we observe the data, probability mass moves between the parameter sets. Given an equally good fit between data and model, the best simple model will always have a higher probability than the best complex model: for one thing, because it started with a higher probability; for another, because there will be several complex models fitting the data about equally well. E.g. there may be 8 complex models which all fit the data better than the second-best simple model, so the probability mass needs to be shared among all of those.

A complex model needs to fit the data better in order to gain enough probability mass to beat out the simpler model.

So even if we do not penalize complex models just for being more complex, we still favour simpler ones.
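The numbers above can be checked directly. A minimal sketch (my own illustration, assuming equal prior mass on the two model classes and an all-or-nothing likelihood):

```python
# Two model classes, equal prior mass, uniform prior within each class.
n_simple, n_complex = 2**3, 2**10        # 8 vs. 1024 parameter settings

prior_simple = 0.5 / n_simple            # 1/16 per simple parameter set
prior_complex = 0.5 / n_complex          # ~0.00049 per complex parameter set

# Suppose the data is fit equally well (likelihood 1) by the best simple
# setting and by 8 different complex settings; all other settings fit so
# badly that their likelihood is effectively 0.
evidence = prior_simple + 8 * prior_complex
best_simple = prior_simple / evidence    # posterior of the best simple model
best_complex = prior_complex / evidence  # posterior of each best complex model

print(round(best_simple, 3), round(best_complex, 4))  # 0.941 0.0074
```

Even though no explicit complexity penalty was applied anywhere, the best simple model ends up 128 times more probable than any of the equally well-fitting complex ones, purely because its class spreads its prior mass over fewer parameter sets.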

Comment by zeitpolizei on List of civilisational inadequacy · 2017-11-24T16:00:29.914Z · score: 2 (1 votes) · LW · GW

No, because the main reason I recommended this is that I only have a vague understanding of what is meant by civilisational inadequacy.

Comment by zeitpolizei on List of civilisational inadequacy · 2017-11-23T23:37:41.601Z · score: 5 (2 votes) · LW · GW

It would be nice to include a definition of what is meant by civilisational inadequacy, or at least a link to a reference.

Comment by ZeitPolizei on [deleted post] 2017-10-20T17:25:37.089Z

OK, if I'm interpreting this correctly, "consistency" could be said to be the ability to make a plan and follow through with it, barring new information or unexpected circumstances. So the actions the CDT agent has available aren't just "say yes" and "say no" but also "say yes, get into the car, and bring the driver $1000 once you are in the city", interpreting all of that as a single action.

However in that case, it is not necessary to distinguish between detecting lies and simulating.

Comment by ZeitPolizei on [deleted post] 2017-10-20T15:30:40.236Z

How is detecting lies fundamentally different from simulation? What is a lie? If I use a memory charm on myself, such that I honestly believe I will pay $1000, but only until I arrive in the city, would that count as a lie? Isn't the whole premise of the hitchhiker problem that the driver cannot be tricked? You're just saying "Ah, but if the driver can be tricked in this way, this type of decision theorist can trick her!"

Comment by zeitpolizei on Productive Disagreement Practice Thread: Double Crux · 2017-10-09T10:24:48.203Z · score: 1 (1 votes) · LW · GW

Is anything known about how many people who weren't already rationalists have been inspired by HPMOR to make a serious effort at being rational and changing the world, and (even harder to find out) what they have actually done as a result?

I have been keeping track of which people have read at least parts of HPMOR either directly or indirectly because of my recommendation, so I think I can give at least a rough idea of what the answer may look like.

All of this is only as far as I know; I haven't directly asked most of these people about it.

Including myself, I know of 14 people who read (parts of) HPMOR (excluding the people I've met through LW/EA of course). Of those:

  • 3 (including myself) I would consider actively involved in the LW community, having read a lot of the rationalist materials and going to real-life meetups

  • 3 are interested in rationality but haven't actually looked into it that much

  • 6 don't seem interested in learning more about rationality

  • 2 I don't know anything about

The first group probably has the highest percentage of "people who […] have been inspired by HPMOR to make a serious effort at being rational and changing the world", but it's not inconceivable that such people also exist in the other groups. And only the first group could be captured with a LW survey.

Comment by zeitpolizei on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family · 2017-10-09T09:38:20.350Z · score: 2 (2 votes) · LW · GW

Right now I'm exploring the possibility of setting up a site similar to yourmorals so that the survey can be effectively broken up and hosted in a way where users can sign in and take different portions of it at their leisure.

It may be worth collaborating with the EA community on this, since there is considerable overlap, both in participants and in the kinds of surveys people may be interested in.

Comment by zeitpolizei on Map of the AI Safety Community · 2017-09-26T09:04:06.218Z · score: 2 (1 votes) · LW · GW

I'd consider putting FRI closer to Effective Altruism, since they are also concerned with suffering more generally.

Do you have criteria for including fiction? Other relevant fiction I am aware of:

Also Vernor Vinge is spelled with an 'o'.

Comment by zeitpolizei on Voting Weight Discussion · 2017-09-24T07:09:15.506Z · score: 2 (2 votes) · LW · GW

I think both private non-anonymous reactions and public anonymous reactions are likely to be valuable, whereas public non-anonymous reactions could be potentially harmful and private anonymous reactions seem mostly useless.

"I've seen this" coming from the parent poster and "nice post" are valuable feedback for the author of the post/comment, but less useful information for other people, so they would best be private and non-anonymous.

Reactions that say something about the content of a comment, like "interesting" or "confusing" are more useful if they are public and anonymous.

Comment by zeitpolizei on Voting Weight Discussion · 2017-09-24T06:24:41.502Z · score: 1 (1 votes) · LW · GW

StackExchange also has a minimum reputation requirement for votes to count. When you try to vote on something it displays a box saying the vote was recorded but doesn't change the publicly displayed vote count.

What I don't like about the way it is implemented on StackExchange is that it seems it's not possible to take back a vote until you have enough reputation to vote at all.

Besides protecting the vote count from being distorted by newcomers, I think the main advantage is that it makes it much harder to farm a bunch of karma with sockpuppet accounts.

Comment by zeitpolizei on Understanding Policy Gradients · 2017-09-14T01:38:09.586Z · score: 0 (0 votes) · LW · GW

From what I've read so far, I think Information Theory, Inference and Learning Algorithms does a rather good job of conveying the intuitions behind topics.

Comment by zeitpolizei on Idea for LessWrong: Video Tutoring · 2017-07-09T03:46:08.116Z · score: 0 (0 votes) · LW · GW

It reminds me a lot of the "mastermind group" thing, where we had weekly hangouts to talk about our goals etc. The America/Europe group eventually petered out (see here for retrospective by regex), the Eurasia/Australia group appears to be ongoing albeit with only two (?) participants.

There have also been online reading groups for the sequences, iirc. I don't know how those went though.

forums, wikis, open source software

I see a few relevant differences:

  • number of participants: If there are very many people, most of whom are only sporadically active, you still get nice progress/activity. The main advantage of this video tutoring idea is personalization, which would not work with many participants.
  • small barrier to entry, small incremental improvements: somewhat related to the last point, people can post a single comment, fix a single bug, or only correct a few spelling mistakes on a wiki, and then never come back, but it will still have helped.
  • independence/asynchronicity: also kind of related to small barrier to entry. For video tutoring you need at least two people agreeing on a time and keeping that time free. In all the other cases everyone can pretty much "come and go" whenever they want. In principle it would be possible to do everything with asynchronous communication. In practice you will also have some real-time communication e.g. via IRC channels.
  • Pareto contribution: I don't actually have data on this, but especially on small open source projects and wikis the bulk of the work is probably done by a single contributor, who is really passionate about it and keeps it running.
Comment by zeitpolizei on Against lone wolf self-improvement · 2017-07-07T16:29:13.015Z · score: 2 (2 votes) · LW · GW

Relevant recent post.

Comment by zeitpolizei on Idea for LessWrong: Video Tutoring · 2017-06-26T09:51:43.855Z · score: 1 (1 votes) · LW · GW

I think this is a great idea, likely to have positive value for participants. So going Hamming questions on this, I think two things are important.

  1. I think the most likely way this is going to "fail" is that a few people will get together, meet about three times, and then it will just peter out, as participants are not committed enough to participate long-term. Right now, I don't think I personally would participate without a good reason to believe participants will keep showing up, such as financial incentives.
  2. Don't worry too much about doing it the Right Way from the beginning. If you get some people together, just start with the first best thing that comes to mind and iterate.
Comment by zeitpolizei on Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world? · 2017-05-13T15:02:51.975Z · score: 0 (0 votes) · LW · GW

To survive, and increase one's power are instrumentally convergent goals of any intelligent agent, which means that evolution does not select for any specific type of mind, ethics, or final values.

But, as you also point out, evolution "selects on the criterion of ingroup reproductive fitness", which does select for a specific type of mind and ethics, especially if you add the constraint that the agent should be intelligent. As far as I am aware, all of the animals considered the most intelligent are social animals (octopuses may be an exception?). The most important aspect of an evolutionary algorithm is the fitness function, and the fitness function the real world imposes seems to be one that, where it selects for intelligence, also selects for sociability, something that generally seems like it would increase friendliness.

True, humans rebelled against and overpowered evolution

Evolution among humans is as real as it is among any life form. Until every human being born is actually genetically designed from scratch, there is evolutionary pressure favoring some genes over others.

Careless evolution managed to create humans on her first attempt at intelligence, but humans, given foresight and intelligence, have an extreme challenge making sure an AI is friendly? How can we explain this contradiction?

Humans are not friendly. There are countless examples of humans, who, if they had the chance, would make the world into a place that is significantly worse for most humans. The reason none of them have succeeded yet is that so far no one has had a power/intelligence advantage so large that they could just impose their ideas unilaterally onto the rest of humanity.

Comment by zeitpolizei on European Soylent alternatives · 2017-03-01T14:12:57.236Z · score: 0 (0 votes) · LW · GW

There is also Bertrand, which is organic. Judging by the ingredients it should be pretty tasty, but it costs 9€ per day.

Comment by zeitpolizei on Self medicating for Schizophrenia with - cigarettes ? · 2017-01-24T11:20:29.518Z · score: 2 (2 votes) · LW · GW

Slate Star Codex has also written about this: http://slatestarcodex.com/2016/01/11/schizophrenia-no-smoking-gun/

Comment by zeitpolizei on Seeking better name for "Effective Egoism" · 2016-11-26T16:36:08.884Z · score: 4 (4 votes) · LW · GW

What's wrong with (instrumental) Rationality?

Comment by zeitpolizei on Help with Bayesian priors · 2016-08-14T16:41:09.894Z · score: 0 (0 votes) · LW · GW

Yeah, the estimates will always be subjective to an extent. But whether you choose historic figures, or all humans and fictional characters that ever existed, or whatever, it shouldn't make a huge difference to your results, because in Bayes' formula the ratio P(C|E)/P(C)¹ should always be roughly the same, regardless of the filter.

¹ C: coin exists
E: person existed
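By Bayes' theorem, the footnoted ratio is just the likelihood ratio of the evidence: P(C|E)/P(C) = P(E|C)/P(E). A tiny sketch with made-up numbers (not from the original discussion):

```python
# Illustrating P(C|E) / P(C) = P(E|C) / P(E) with assumed probabilities.
p_c = 0.3             # prior: the coin exists (made up)
p_e_given_c = 0.9     # P(person existed | coin exists), made up
p_e_given_not_c = 0.1 # P(person existed | coin does not exist), made up

p_e = p_e_given_c * p_c + p_e_given_not_c * (1 - p_c)  # total probability: 0.34
p_c_given_e = p_e_given_c * p_c / p_e                  # Bayes' theorem

ratio = p_c_given_e / p_c
assert abs(ratio - p_e_given_c / p_e) < 1e-12          # the two ratios agree
print(round(ratio, 3))  # 2.647
```

So as long as the chosen filter leaves P(E|C)/P(E) roughly unchanged, the size of the update from the evidence is roughly the same.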

Comment by zeitpolizei on The AI in Mary's room · 2016-05-27T23:57:02.440Z · score: 1 (1 votes) · LW · GW

why can't Mary just look at the neural spike trains of someone seeing red?

Why can't we just eat a picture of a plate of spaghetti instead of actual spaghetti? Because a representation of some thing is not the thing itself. Am I missing something?

Comment by zeitpolizei on The AI in Mary's room · 2016-05-26T17:28:15.773Z · score: 0 (0 votes) · LW · GW

The AI analogue would be: If the AI has the capacity to wirehead itself, it can make itself enter the color perception subroutines. Whether something new is learned depends on the remaining brain architecture. I would say that in the case of humans, it is clear that whenever something new is experienced, the human learns what that experience feels like. I reckon that some people with strong visualization abilities (in a broad sense) can know what an experience feels like without experiencing it first hand, by synthesizing a new experience from previously known ones. But in most cases there is a difference between imagining a sensation and experiencing it.

In the case of the AI, either no information is passed between the color perception subroutine and the main processing unit, in which case the AI may have a new experience but not learn anything new; or some representation of the experience of being in the subroutine is saved to memory, in which case something new is learned.

Comment by zeitpolizei on Suggest best book as an introduction to computational neuroscience · 2016-04-30T04:02:52.271Z · score: 1 (1 votes) · LW · GW

From the cover text of How to Build a Brain it seems the main focus is on the architecture of Spaun, and I suspect it does not actually give a proper introduction to other areas of computational neuroscience. That said, I wouldn't be surprised if it is the most enjoyable book on the topic that you can find. I have read Computational Neuroscience by Hanspeter Mallot, which is very short, weird, and not very good. I'm currently about halfway through Theoretical Neuroscience by Dayan and Abbott. My impression is that it might be decent for people with a strong physics/math background, OK if you have some prior knowledge of the topics (e.g. having attended a lecture), and rather bad otherwise.

Edit: My prof told me about Information Theory, Inference, and Learning Algorithms (legal free online version), which is, as the title implies, more about information theory and learning algorithms (so more mathy), but from the perspective of neuroscience, so it's missing a lot of the typical topics of computational neuroscience. I have just started reading it, but so far it seems really well written (4.35 rating on Goodreads), and it also contains exercises and reflection questions.

Comment by zeitpolizei on [Link] Using mindkillers to promote rationality · 2015-09-12T17:00:29.377Z · score: 3 (3 votes) · LW · GW

All the links direct me to Ohio State University email login.

Comment by zeitpolizei on September 2015 Media Thread · 2015-09-04T21:46:20.252Z · score: 1 (1 votes) · LW · GW

Human Learning and Memory, by David A. Lieberman (2012)

A well-written overview of current knowledge about human learning and memory. Of special interest:

  • the use of reinforcement as a teacher, parent, or pet-owner, or for self-improvement
  • for me personally: a strategy to combat insomnia (results pending)
  • implications of memory research for study strategies

Comment by zeitpolizei on Stupid Questions August 2015 · 2015-08-07T06:21:42.999Z · score: 0 (0 votes) · LW · GW

I used a measuring cup (IIRC 75 ml) for the powder. My typical meal would be three cups of powder and 300 ml of water. It's quite thick that way; my friend used more water.

Comment by zeitpolizei on Stupid Questions August 2015 · 2015-08-06T20:51:51.696Z · score: 1 (1 votes) · LW · GW

When I used home-made soylent, I first put in (all) the water, then the powder. My shaker also has a plastic grid inset in the lid. Putting in the water first also lets you see exactly how much water you have (my shaker is a transparent measuring cup). I don't remember ever having any problems.

Comment by zeitpolizei on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-05T16:08:36.915Z · score: 0 (0 votes) · LW · GW

Good point. When I wrote down the predictions, I just used my usual unrealistically optimistic estimate of "this is in principle doable in this time and I want to do it", i.e. my usual "planning" mode, without considering how often I usually fail to execute my "plans". So in this case I adjusted neither my optimism nor my plans; I only put my estimate of success into actual numbers for the first time (and hoped that would do the trick).

Comment by zeitpolizei on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-04T11:03:03.182Z · score: 1 (1 votes) · LW · GW

What does "if used ethically" mean?

I was thinking mainly along the lines of using them in regular combat vs. indiscriminately killing protesters.
Autonomous weapons should eventually be better than humans at (a) hitting targets, thus reducing combatant casualties on the side that uses them, and (b) differentiating between combatants and non-combatants, thus reducing civilian casualties. This is working under the assumption that something like a guard robot would accompany a patrolling squad. Something like a swarm of small drones that sweeps a city to find and subdue all combatants is of course a different matter.

The US is already using its drones in Pakistan in a way that violates many passages of international law, like shooting at people who rescue wounded people.

I wasn't aware of this; do you have a source on that? Regardless, from what I know, the number of civilian casualties from drone strikes is definitely too high.

Comment by zeitpolizei on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-03T21:50:22.284Z · score: 2 (2 votes) · LW · GW

Using Prediction Book (or other prediction software) for motivation

Does anyone have experience with documenting things you need to do in PredictionBook (or something similar), and the effect it has on motivation/actually doing those things? Basically, is it possible to boost your productivity by making more optimistic predictions? I've been dabbling with PredictionBook and tried it with two (related) things I had to do, which did not work at all.

Thoughts, experiences?

Comment by zeitpolizei on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-03T17:03:22.011Z · score: 4 (4 votes) · LW · GW

Trying to summarize here:

The open letter says: "If we allow autonomous weapons, a global arms race will make them much cheaper and much more easily available to terrorists, dictators etc. We want to prevent this, so we propose to outlaw autonomous weapons."

The author of the article argues that the technology will get developed either way and will be cheaply available, and then goes on to say that autonomous weapons would reduce casualties in war.

I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties. The actual questions are how much (more) damage someone without ethical qualms can do with autonomous weapons, and whether we can implement policies to minimize their availability to people we don't want to have them.

I think the main problem with this whole discussion was already mentioned elsewhere: robotics and AI experts aren't experts on politics and don't know what the actual effects of a ban on autonomous weapons would be.