Posts

strangepoop's Shortform 2019-09-10T17:04:58.375Z · score: 3 (1 votes)

Comments

Comment by strangepoop on Bíos brakhús · 2019-09-26T09:34:39.969Z · score: 1 (1 votes) · LW · GW

I think a counterexample to "you should not devote cognition to achieving things that have already happened" is being angry at someone who has revealed they've betrayed you, which might acause them to not have betrayed you.

Comment by strangepoop on strangepoop's Shortform · 2019-09-10T17:04:58.545Z · score: 5 (3 votes) · LW · GW

Is metarationality about (really tearing open) the twelfth virtue?

It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void.

(this is also why metarationality has always felt like it only provides comments where Eliezer would've just given you the code)

The parts that don't quite seem to follow are where meaning-making and epistemology collide. I can try to see it as "all models are false, some models are useful", but I'm not sure that's the right perspective.

Comment by strangepoop on If physics is many-worlds, does ethics matter? · 2019-07-28T09:05:50.460Z · score: 1 (1 votes) · LW · GW

I want to ask this because I think I missed it the first few times I read Living in Many Worlds: are you similarly unsatisfied with our response to suffering that's already happened, like the twelfth century Eliezer asks about? It is, in Eliezer's boldface, "just as real" too. Do you feel the same "deflation" and "incongruity"?

I expect that you might think (as I once did) that the notion of "generalized past" is a contrived but well-intentioned analogy to manage your feelings.

But that's not so at all: once you've redone your ontology, so that the naive idea of time is no longer necessarily fundamental and thinking in terms of causal links comes much closer to how reality is arranged, it's not a stretch at all. If anything, it follows that you must try to think and feel correctly about the generalized past once you've been given this information.

Of course, you might modus tollens here.

Comment by strangepoop on Go Do Something · 2019-05-21T17:45:46.082Z · score: 5 (4 votes) · LW · GW

Soares also did a good job of impressing this in Dive In:

In my experience, the way you end up doing good in the world has very little to do with how good your initial plan was. Most of your outcome will depend on luck, timing, and your ability to actually get out of your own way and start somewhere. The way to end up with a good plan is not to start with a good plan, it's to start with some plan, and then slam that plan against reality until reality hands you a better plan.

The idea doesn't have to be good, and it doesn't have to be feasible, it just needs to be the best incredibly concrete plan that you can come up with at the moment. Don't worry, it will change rapidly when you start slamming it into reality. The important thing is to come up with a concrete plan, and then start executing it as hard as you can — while retaining a reflective state of mind updating in the face of evidence.

Comment by strangepoop on The concept of evidence as humanity currently uses it is a bit of a crutch. · 2019-05-21T17:33:41.603Z · score: 2 (2 votes) · LW · GW

I don't think the "idea of scientific thinking and evidence" has so much to do with throwing away information as adding reflection, post which you might excise the cruft.

Being able to describe what you're doing, i.e. usefully compress existing strategies-in-use, is probably going to be helpful regardless of your level of intelligence, because it allows you to cheaply tweak your strategies when either the situation or the goal is perturbed.

Comment by strangepoop on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-15T21:19:13.437Z · score: 1 (1 votes) · LW · GW

To elaborate further on 4: your example of the string "1" being a conscious agent because you can "unpack" it into an agent really feels like it shouldn't count: you're just throwing away the "1" and replaying a separate recording of something that was conscious. This sounds about as much of a non sequitur as "I am next to this pen, so this pen is conscious".

We could, however, make it more interesting by making the computation depend "crucially" on the input. But what counts?

Suppose I have a program that turns noise into a conscious agent (much like generative models can turn a noise vector into a face, say). If we now seed this with a waterfall, is the waterfall now a part of the computation, enough to be granted some sentience/moral patienthood? I think the usual answer is "all the non-trivial work is being done by the program, not the random seed", as Scott Aaronson seems to say here. (He also makes the interesting claim of "has to participate fully in the arrow of time to be conscious", which would disqualify caching and replaying.)

But this can be made a little more confusing, because it's hard to tell which bit is non-trivial from the outside: suppose I save and encrypt the consciousness-generating program. The result looks like random noise from the outside, and will pass all randomness tests. Now another program uses the stored key to decrypt and run it. From the outside, you might disregard the random-seed-looking thingy and instead try to analyze the decryption program, thinking that's where the magic is.
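To make the asymmetry concrete, here is a toy sketch in Python (my own illustration; the `program` bytes and the one-time-pad scheme are hypothetical stand-ins, not anything from the post). The stored blob passes as random noise, while the decryptor is a trivial XOR loop, so naively analyzing the decryptor misses where the structure actually lives:

    import os

    # Hypothetical stand-in for the interesting computation (the "consciousness-generating program").
    program = b"def agent(observation): return decide(observation)"

    key = os.urandom(len(program))                           # the stored key
    ciphertext = bytes(p ^ k for p, k in zip(program, key))  # looks like pure noise from the outside

    def decrypt(blob, key):
        # The "magic-looking" part: a trivial XOR that carries almost none of the structure.
        return bytes(b ^ k for b, k in zip(blob, key))

    assert decrypt(ciphertext, key) == program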

I'd love to hear about ideas to pin down the difference between Seeding and Decrypting in general, for arbitrary interpretations. It seems within reach, and like a good first step, since the two lie on roughly opposite ends of a spectrum of "cruciality" when the system breaks down into two or more modules.

Comment by strangepoop on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-15T21:13:08.842Z · score: 1 (1 votes) · LW · GW

Responses to your four final notes:

1. This is, as has been remarked in another comment, pretty much Dust theory. See also Moravec's concise take on the topic, referenced in the Dust theory FAQ. Searching for it on LW might also turn up previous discussions.

2. "that was already there"? What do you mean by this? Would you prefer to use the term 'magical reality fluid' instead of "exists"/"extant"/"real"/"there" etc, to mark your confusion about this? If you instead feel like you aren't confused about these terms, please provide (a link to) a solution. You can find the problem statement in The Anthropic Trilemma.

3. Eliezer deals with this using average utilitarianism, depending on whether or not you agree with rescuability (see below).

4. GAZP vs GLUT talks about the difference between a cellphone transmitting information about consciousness vs the actual conscious brain on the other end, and generalizes it to arbitrary "interpretations". That is, there are parts of the computation that are merely "interpreting" (informing you about consciousness) and others that are "actually" instantiating it. It may not be clear yet what exactly the crucial difference is, but I think it might be possible to rescue the difference, even if you can construct continuums to mess with the notion. This is of course deeply tied to 2.

----

It may seem that my takeaway from your post is mostly negative; this is not the case. I appreciate this post: it was very well organized despite tackling some very hairy issues, which made it easier to respond to. I do feel like LW could solve this somewhat satisfactorily; perhaps some people already have, and don't bother pointing the rest of us to it, or are lost in the noise?

Comment by strangepoop on Epistemic Tenure · 2019-03-07T11:29:57.774Z · score: 1 (1 votes) · LW · GW

it is not as though rationality consisted of some singular epistemesis score that can be raised or lowered

I feel like this is fighting the hypothesis. As Garrabrant says:

Attention is a conserved resource, and attention that I give to Bob is being taken away from attention that could be directed toward GOOD ideas.

It doesn't matter whether or not you think it is possible to track rationality through some singular epistemesis score. The question is: you have limited attentional resources and the problem OP outlined; "rationality" is probably complicated; what do you do anyway?

How you divvy them up is the score. Or, to replace the symbol with the substance: if you're in charge of divvying those resources, then your particular algorithm will decide what your underlings consider status/currency, and can backpropagate into their minds.

Comment by strangepoop on Out to Get You · 2019-03-05T22:04:04.207Z · score: 1 (1 votes) · LW · GW

Maybe you meant "cutting corners" rather than cutting corners? ie you did understand the distinction between the thing and the appearance of the thing, you just forgot to add the quotes.

Comment by strangepoop on Epistemic Tenure · 2019-03-04T11:00:05.776Z · score: 1 (1 votes) · LW · GW

I think your "attentional resources" are just being Counterfactually Mugged here, so if you're okay with that, you ought to be okay with some attention being diverted away from "real" ideas, if you're reasonably confident in your construction of the counterfactual "Bob’s idea might HAVE BEEN good".

This way of looking at it also says that tenure is a bad metaphor: your confidence in the counterfactual being true can change over time.

(If you then insist that this confidence in your counterfactual is also something that affects Bob, which it kind of does, then I'm afraid we're encountering an instance of the unfair problem class in the wild, and I don't know what to do.)

As an aside, this makes me think: what happens when all consumers in the market are willing to get counterfactually mugged? Where I'm not able to return my defective phone because prediction markets said it would have worked? I suppose this is not very different from the concept of force majeure, only systematized.

Comment by strangepoop on Unconscious Economics · 2019-02-27T15:10:28.938Z · score: 29 (14 votes) · LW · GW

It's worth noting that David Friedman's Price Theory clearly states this in the very first chapter, just three paragraphs down:

The second half of the assumption, that people tend to find the correct way to achieve their objectives, is called rationality. This term is somewhat deceptive, since it suggests that the way in which people find the correct way to achieve their objectives is by rational analysis--analyzing evidence, using formal logic to deduce conclusions from assumptions, and so forth. No such assumption about how people find the correct means to achieve their ends is necessary.

One can imagine a variety of other explanations for rational behavior. To take a trivial example, most of our objectives require that we eat occasionally, so as not to die of hunger (exception--if my objective is to be fertilizer). Whether or not people have deduced this fact by logical analysis, those who do not choose to eat are not around to have their behavior analyzed by economists. More generally, evolution may produce people (and other animals) who behave rationally without knowing why. The same result may be produced by a process of trial and error; if you walk to work every day, you may by experiment find the shortest route even if you do not know enough geometry to calculate it. Rationality in this sense does not necessarily require thought. In the final section of this chapter, I give two examples of things that have no minds and yet exhibit rationality.

I don't think it counts as a standard textbook, but it is meant to be a textbook.

On the whole, I think it's perfectly okay for economists to mostly ignore how equilibria are achieved, since, as you pointed out, there are so many juicy results popping out from just the fact that they are achieved on average.

Also, I enjoyed the examples in your post!

Comment by strangepoop on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-02-25T21:35:18.842Z · score: 1 (1 votes) · LW · GW

effortless pattern-recognition is what machine learning can do today, while effortful attention, and explicit reasoning (which seems to be a subset of effortful attention) is generally beyond ML’s current capabilities.

Just to be clear, are you or aren't you (or neither) saying that this is only a matter of scale?

It seems to me like you're saying it could indeed be only a matter of scale, and we're just in the stage of figuring out what the right dimension to amp up is ("be coherent for longer").

Comment by strangepoop on Double-Dipping in Dunning--Kruger · 2018-11-28T08:58:06.957Z · score: 3 (3 votes) · LW · GW

See, the problem is that now that I've also internalized this (seemingly true) lesson, the +15% might double-boost my ass numbers.

But maybe if we accumulate enough lessons we can get increasingly close to the truth by adding these "higher order terms"?

I don't think so - the error bars do not necessarily diminish. For example:

  • Ass number for drawing ability percentile: ~70%
  • Dunning Kruger correction: ~50%
  • Double-dip correction: ~65%

Did I do it right? I have no idea. Every step might have already been taken into account in the first asstimate. Every system-2 patch that we discover might have immediately patched system-1.

One (admittedly lazy) way out is to chuck all context-sensitive formal rules like 'add/subtract X%' and leave it entirely to system-1: play calibration games for skill-percentiles.

Comment by strangepoop on If You Want to Win, Stop Conceding · 2018-11-26T12:33:46.275Z · score: 9 (2 votes) · LW · GW

I hope we haven't forgotten Stuck in the Middle With Bruce and Soares' Have No Excuses, which starts with a quote from Bonds That Make Us Free.

I think one reason people end up using a minimax strategy is that it's just easier to compute than EV-maximization.

But more importantly, it just feels like there are no downsides - it's free insurance!

If you want to have a convincing excuse, however, you might actively impede your chances. (It might be possible to distance yourself from the excuse so the insurance is actually free, but I think this is unlikely/hard.)

If you've already hedged so hard that you've bet a lot against yourself, you might have sufficiently changed the payoffs to make losing rational, especially if you also add a penalty to Being Mediocre. This is why posts like this are needed.

Comment by strangepoop on Unrolling social metacognition: Three levels of meta are not enough. · 2018-08-30T07:39:30.115Z · score: 5 (5 votes) · LW · GW

Somehow this comment was really inspiring! I'm glad this exchange happened, so maybe I should upvote grandparent too? :P

BTW,

incredulity-as-attack

[not] acknowledging that your first concern had been addressed

We have terms for these! They are, respectively, stonewalling and logical rudeness.

I'm still split on how I feel about jargon, and of course it's good that you didn't use any here, but jargon does give the concepts you describe some legitimacy (for better or worse). Legitimacy helps especially in cases where such expressions are dismissed as over-reactions unique to you, and are thus assumed to be your responsibility to fix, by some implicit jargon-efficiency argument ("if this were a thing to be concerned about, we'd have a name for it!").

Comment by strangepoop on Simplicio and Sophisticus · 2018-07-22T21:53:52.909Z · score: 3 (3 votes) · LW · GW

"... natural science has shown a curious mixture of rationalism and irrationalism. Its prevalent tone of thought has been ardently rationalistic within its own borders, and dogmatically irrational beyond those borders. In practice such an attitude tends to become a dogmatic denial that there are any factors in the world not fully expressible in terms of its own primary notions devoid of further generalization. Such a denial is the self-denial of thought."

- A.N. Whitehead, Process and Reality

I can't really tell yet, but David Chapman's work seems to be trying to hint at this phenomenon all the time. See his How to Think Real Good, for example, even if you don't agree with his characterization of Bayesian rationality. There's also Fixation and Denial, where he goes into some failure modes when dealing with hard-to-fully-formalize things. Meta-rationality seems to be mostly about this, AFAICT.

I have to say, most of Chapman's stuff feels like pure lampshading, i.e. acknowledging that there is a problem and then simply moving on. I suppose he's building up to more practical advice.

If you're getting frustrated (I certainly am) that all everyone seems to be doing about this is offering loose and largely unhelpful tips, I think that's something Alan Perlis anticipated: "One can't proceed from the informal to the formal by formal means."

(of course, that's just another restatement of the fact that there is a problem.)

Comment by strangepoop on Osmosis learning: a crucial consideration for the craft · 2018-07-18T06:19:50.485Z · score: 3 (2 votes) · LW · GW

See also: "show, don't tell"/the iceberg theory in writing and the monad tutorial fallacy in functional programming. These are weakish evidence for the existence of this phenomenon, although they still reside in the lingual realm.

[posting a double comment because it is sufficiently different and the previous one is already too long]

Comment by strangepoop on Sleeping Beauty Resolved? · 2018-07-16T11:04:16.458Z · score: 1 (1 votes) · LW · GW

I'd say your reply is at least a little bit of logical rudeness, but I'll take the "Sure, ...".

I was pointing specifically at the flaw* in bringing Everett branches into the discussion at all, not at whether the context happened to be changing here.

I wouldn't really mind the logical rudeness (if it is so), except for the missed opportunity of engaging more fully with your fascinating comment! (see also *)

It's also nice to see that the followup to the OP starts with a discussion of why it's a good/easy first rule to, like I said, just ban non-timeless propositions, even if we can eventually come up with a workable system that deals with them well.

(*) As noted in GP, it's still not clear to me that this is a flaw, only that I couldn't come up with anything in five minutes! Part of the reason I replied was in the hopes that you'd have a strong defense of "everettian-indexicals", because I'd never thought of it that way before!

Comment by strangepoop on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T17:58:29.799Z · score: 8 (4 votes) · LW · GW

What rossry said, but also: why do you expect to be "winning" all arms races here? Genes in other people may have led to the development of meme-hacks that, unbeknownst to you, are giving someone else an edge in a zero-sum game.

In particular, they might call you fat or stupid or incompetent and you might end up believing it.

Comment by strangepoop on Mathematical Mindset · 2018-07-13T13:25:30.030Z · score: 4 (3 votes) · LW · GW

For mathematics is not about proofs; it is about definitions. The essence of great mathematics is coming up with a powerful definition that results in short proofs.

Or in software terms, coming up with a powerful and elegant exposed API/top-level functions that don't require peeking into the abstraction (which would imply a "longer" "proof", following Curry-Howard).
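To sketch what I mean in code (a toy example of my own, with hypothetical interval functions; Python): with the right exposed abstraction, client code stays short, the way a powerful definition yields short proofs.

    # Weak API: exposes raw data, so every caller must redo the merging work
    # ("peek into the abstraction") before answering any question.
    def raw_intervals(events):
        return sorted(events)

    # Powerful API: the exposed concept is a merged, non-overlapping timeline.
    def merged_timeline(events):
        merged = []
        for start, end in sorted(events):
            if merged and start <= merged[-1][1]:
                merged[-1] = (merged[-1][0], max(merged[-1][1], end))
            else:
                merged.append((start, end))
        return merged

    # The "short proof": total busy time is a one-liner against the powerful API.
    def total_busy_time(events):
        return sum(end - start for start, end in merged_timeline(events))

    print(total_busy_time([(1, 3), (2, 5), (7, 8)]))  # 5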

Comment by strangepoop on An Agent is a Worldline in Tegmark V · 2018-07-13T13:11:35.049Z · score: 4 (3 votes) · LW · GW

I'm a little confused.

It seemed to me that the way Tegmark had put it, level IV is meta-closed: any consistent map of (even possibly eventually inconsistent) maps is still just a consistent map in level IV; it doesn't have to model any particular territory, it just has to be a mathematical "structure". Maybe you're saying that this is actually in level V and my view of level IV is too inclusive (but I think Tegmark would disagree with you, see esp. appendix A of his original paper), or maybe I missed your point altogether.

It's not even clear that there would be a notion of an "agent" in every level IV universe (in fact I'd say it's clear that this is NOT the case), so I think the idea of a worldline between them would not be well-defined. Nevertheless, I'm fine with non-standard uses of terms if they help communicate the idea you have once you've clarified your usage of them (but don't canonize independently in an already nebulous territory!), but I'm having some trouble with that.

So can you clarify what you mean, particularly by level IV? (reasonably precise English is fine :P)

ETA: Okay, given your Mathematical Mindset post, I'm doubly fine with your redefinition, but I still want it :3

Comment by strangepoop on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T12:08:32.568Z · score: 2 (2 votes) · LW · GW

I suppose you mean the fallibility of memory. I think Garrabrant meant it tautologically though (i.e., as the definition of "past").

Comment by strangepoop on Bayesian Probability is for things that are Space-like Separated from You · 2018-07-13T11:54:17.605Z · score: 17 (4 votes) · LW · GW

I think the LW zeitgeist doesn't really engage with this.

Really? I feel quite the opposite, unless you're saying we could do still more. I think LW is actually one of the few communities that take this sort of non-dualism/naturalism in arriving at a probabilistic judgement (and all its meta levels) seriously. We've been repeatedly exposed to the fact that Newcomblike problems are everywhere, starting long ago and then again relatively recently with Simler's wonderful post on crony beliefs (and now his even more delightful book with Hanson, of course).

ETA: I'm missing quite a few posts that were even older (Wei Dai's? Drescher's? yvain had something too IIRC); it'd be nice if someone else who does remember posted them here.

Comment by strangepoop on A Sarno-Hanson Synthesis · 2018-07-13T10:57:03.638Z · score: 4 (4 votes) · LW · GW

why private? :(

Comment by strangepoop on A Sarno-Hanson Synthesis · 2018-07-13T10:56:18.526Z · score: 8 (2 votes) · LW · GW

Related to the sleep example, because you didn't say exactly this, and it makes a stronger case:

I noticed some time ago that my misery when waking up was a negotiation tactic from when my parents would wake me up. They were nice enough to let me sleep a little longer if I looked sufficiently upset at being woken up. It became obvious recently that I was pattern-matching my alarm to a parent. How do I know this? Because I knew that if I started loudly singing a cheery tune with a smile on my face I'd automatically become less miserable, but I never did, because I didn't want to be less miserable, even though my parents weren't around anymore. I started doing it when I realized this, and it works pretty well!

(There was one more problem, of feeling like I'm manipulating myself, which seems at first to be at odds with building self-loyalty. I think this went away as I got more comfortable with the idea of sometimes "wanting to be manipulated" for my own success, of desiring less freedom (which would be sacrilege to my younger self). Reading about Kegan's model of adult development and experimenting with BDSM helped me get there somehow.)

One problem with applying this thesis (did I mention I wholeheartedly agree with it?) is that it's hard to refrain from inadvertently reinforcing such negotiation tactics when someone else looks miserable (like my parents did), i.e. Ferberization is painful (not to mention patronizing when done to adults). I think it's possible to be honest about it with someone reasonable and smart enough to grasp the subtleties, and then usually only after they're done having their episode, but there's no good solution to this AFAIK. Else we wouldn't have hard problems of income redistribution either - the problem of helping those who need it without inducing weakness/dependence.

BTW, is there an economic term for this specific problem?

Comment by strangepoop on Osmosis learning: a crucial consideration for the craft · 2018-07-13T09:51:58.586Z · score: 3 (3 votes) · LW · GW

I love this post! If you've ever noticed that short workshops conducted by an expert seem to teach you a lot more, and faster, than spending weeks on something despite videos and docs, I think you ought to agree with toonalfrink's thesis.

I especially like the very crisply stated problem of asymmetrically engineering the subsymbolic flow towards "better" equilibria. I'll have fun thinking about that for a while.

(In fact, I'd say the first two 'areas of inquiry' you mention are really just natural subproblems of the third - bandwidth throttling is probably what we'll attempt to bind to "knowledge intelligence" once we know how to quantify and price it.)


A developed theory might also be applied negatively (as in, mainly throttling) to less domain-specific things like containing toxicity and misery, when you have less choice in the company you keep! I have personally tried to model this problem as multiplayer meditation: distracting terrible thoughts can arise also from minds outside your own, and you can, with focused practice, decide their influence on you. I think your model captures this more generally, and makes it more of a systematic communal effort.


When you talk about "verbal communication" though, it's not clear whether you're referring only to their attempts at talking about how they got so smart (or rich etc) or if you're also including specific object-level problems that they solve out loud (in a blogpost where they just apply their smarts, say), which could be said to allow for a sort of one-way, low-bandwidth osmosis.

Of course, the problem with a blogpost versus seeing them in-person (or even in-video, or some other neutral feed) is that they are themselves doing the filtering of what's notable - and as we all know around here, people usually have a poor idea of what is and isn't obvious to others. But this might have more to do with memorability than underspecification. I've noticed I often forget certain pieces of advice, but as a human, I tend to have good retention of the mannerisms of people. Another possible cause is the ability to ask questions without trivial inconveniences (like having to wait a long time for an answer, or gesturing at pictures). Yet another one is seeing a live demonstration of things actually working, so you're more likely to try it consistently rather than throwing away the whole thing when it doesn't work the first couple of times.

(Notice that memorability vs underspecification vs interactivity vs demo aren't distinguished when comparing workshops and self-study. Can we think of other factors?)


Anyway, I think we could start measuring this with increasingly-less-than-in-person media to start singling out what the factors really are, so we can continue to avoid meeting real people :P

Comment by strangepoop on Against accusing people of motte and bailey · 2018-06-05T17:06:46.465Z · score: 8 (3 votes) · LW · GW

All the actual problems are located around the tendency to hold every individual who calls themselves an X accountable for (simultaneously) all the opinions ever propounded under the label of X.

Agreed.

Responding to reductions is like responding to insults: if they don't spring out of genuine confusion or you've run out of your good deeds for the day, you don't have to respond.

I mean, if a surefire way to get you to give me information is to hold your reputation hostage, then you're going to be spending an awful lot of time fielding queries from strangers.

Comment by strangepoop on When is unaligned AI morally valuable? · 2018-05-25T18:34:41.357Z · score: 2 (1 votes) · LW · GW

Upvoted; this was exactly my reaction to this post. However, you may want to look at the link to alignment in the OP. Christiano is using "alignment" in a very narrow sense. For example, from the linked post:

The definition is intended de dicto rather than de re. An aligned A is trying to “do what H wants it to do.” Suppose A thinks that H likes apples, and so goes to the store to buy some apples, but H really prefers oranges. I’d call this behavior aligned because A is trying to do what H wants, even though the thing it is trying to do (“buy apples”) turns out not to be what H wants: the de re interpretation is false but the de dicto interpretation is true.

... which rings at least slightly uncomfortable to my ears.

Comment by strangepoop on When is unaligned AI morally valuable? · 2018-05-25T15:45:58.616Z · score: 3 (2 votes) · LW · GW

I'm curious, how does this work out for fellow animals?

I.e., if you value (human and non-human) animals directly in your utility function, and then you also use UDT, are you not worried about double-counting the relevant intuitions, and ending up being too "nice", or being too certain that you should be "nice"?

Perhaps it is arguable that that is precisely what's going on when we end up caring more for our friends and family?

Comment by strangepoop on Sleeping Beauty Resolved? · 2018-05-24T20:32:12.117Z · score: 2 (1 votes) · LW · GW

There is a relevant distinction: the machinery being used (logical assignment) has to be stable for the duration of the proof/computation. Or perhaps, the "consistency" of the outcome of the machinery is defined in terms of such stability.

For the original example, you'd have to make sure that you finish all relevant proofs within a period in or within a period in . If you go across, weird stuff happens when attempting to preserve truth, so banning non-timeless propositions makes things easier.

You can't always walk around while doing a proof if one of your propositions is "I'm standing on Second Main". You could, however, be standing still in any one place whether or not it is true. ksvanhorn might call this a space parametrization, if I understand him correctly.

So here's the problem: I can't imagine what it would mean to carry out a proof across Everett branches. Each prover would have a different proof, but each one would be valid in its own branch across time (like standing in any one place in the example above).

I think a refutation of that would be at least as bizarre as carrying out a proof across space while keeping time still (note: if you don't keep time still, you're probably still playing with temporal inconsistencies), so maybe come up with a counterexample like that? I'm thinking something along the lines of code=data might allow it, but I couldn't come up with anything.

Comment by strangepoop on Paper Trauma · 2018-02-14T23:51:48.404Z · score: 8 (3 votes) · LW · GW

Have any of you tried Wacom Tablet + Inkscape?

It's (nearly) everything people have asked for above, and my favorite way to think using System 3, as some call it.

I've had a similar love as Critch for large canvases to dump my thoughts in. This usually meant desks in the library that I would messily (and unashamedly) scribble on from top to bottom, then lose the next day. When I really wanted to save those notes I ended up taking some hasty pictures (that I never revisited because they were tiny).

But with Inkscape, consider - infinite canvas and resolution, combinations of text, freehand and other vector graphics, zoom & pan for focusing or looking at the big picture, permanent digital storage (and little environmental waste), colors (like others have said, this one changes everything, but especially so without trivial inconveniences or penalties on experimentation), and my personal favorite - guaranteed neatness, with the ability to group and move objects around. (I really love the fact that all strokes automatically become objects, because I find it very difficult to keep tidy while in a manic frenzy; this lets you draw fast and edit later, as in writing.)

The only drawback is that it isn't nearly as portable. And that I can't seem to dot the i's in freehand sometimes because vector pens don't make points.

Comment by strangepoop on Open thread, October 2 - October 8, 2017 · 2017-10-03T22:08:33.539Z · score: 2 (2 votes) · LW · GW

Can someone help me out with Paul Christiano's email/contact info? Couldn't find it anywhere online.

I might be able to discuss possibilities for implementing his Impact Certificate ideas with some very capable people here in India.

Comment by strangepoop on Markets are Anti-Inductive · 2017-09-13T10:14:19.124Z · score: 1 (1 votes) · LW · GW

Is it unfair to say that prediction markets will deal with all of these cases?

I understand that's like responding to "This is a complicated problem that may remain unsolved, it is not clear that we will be able to invent the appropriate math to deal with this." with "But Church-Turing thesis!".

But all I'm saying is that it does apply generally, given the right apparatus.

Comment by strangepoop on The Majority Is Always Wrong · 2017-06-08T15:05:14.396Z · score: 0 (0 votes) · LW · GW

Isn't this just a special case of Berkson's paradox?

Comment by strangepoop on Open thread, May 15 - May 21, 2017 · 2017-05-19T12:59:47.387Z · score: 0 (0 votes) · LW · GW

Exclusion isn't always socially appropriate. If I take a cab home every day (which I pay for), and a friend can literally take a free ride because her place is on the way, should I "exclude" her if she doesn't want to share the cost? She claims it doesn't cost me extra; I'd be paying for the cab anyway if she lived somewhere else.

But of course I can come up with un-excludable externalities:

I share a house that's in pretty bad shape, and I decide to get some fresh painting done. This is a net benefit to all the housemates, but we would each value it differently. I want this slightly more than all the others do. So I have to pay the entire amount.

Comment by strangepoop on Open thread, May 15 - May 21, 2017 · 2017-05-18T23:40:32.299Z · score: 1 (1 votes) · LW · GW

Incidentally, Gary Drescher makes the same (citation-free) statement in a footnote in Chapter 7 - Deriving Ought from Is:

Utilitarian bases for capitalism—arguments that market forces promote the greatest good—are another matter, best suited for other books. For here, suffice it to note that even in theory, an unconstrained market does not promote the greatest good overall, but rather the greatest good weighted by the participants’ relative wealth.

I remember asking for a reference about a year ago on LWIRC, but that didn't help much.

Comment by strangepoop on Open thread, May 15 - May 21, 2017 · 2017-05-18T23:11:33.061Z · score: 0 (0 votes) · LW · GW

Can you help me with this?

It seems to me:

'reductionism/naturalism' + 'continuity of consciousness in time' + 'no tiny little tags on particles that make up a conscious mind' = 'patternism'

Are you saying that there's something wrong with the latter two summands? Or that it doesn't quite add up?

Comment by strangepoop on Open thread, May 15 - May 21, 2017 · 2017-05-18T22:13:45.515Z · score: 0 (0 votes) · LW · GW

See my reply to Oscar_Cunningham below; I'm not sure if Egan's law is followed exactly (it never is, otherwise you've only managed to make the same predictions as before, with a complexity penalty!)

Comment by strangepoop on Open thread, May 15 - May 21, 2017 · 2017-05-18T22:05:22.751Z · score: 0 (0 votes) · LW · GW

I don't think that follows exactly. Specifically, that "you're acting for the sake of things which you won't experience".

You are correct in your pricing of quantum flips according to payoffs adjusted by the Born rule.

But the payoffs from your dead versions don't count, assuming you can only find yourself in non-dead continuations. I don't know if this is an established position (Bostrom or Carroll have almost surely written about it) or just outright stupidity, but it seems to me that this assumption (of only finding yourself alive) shrinks your ensemble of future states, leaving your decision-theoretic judgements to deal only with the alive ones.

If I'm offered a bet of being given $0 or $100 over two flips of a fair quantum coin, with payoffs:

|00> -> $0

|11> -> $100

|01> -> certain immediate death

|10> -> certain immediate death

I'd still price it at $50, rather than $25.
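To spell out the arithmetic (a minimal sketch in Python, just restating the bet above; the numbers are the stated payoffs, nothing new): unconditionally the bet is worth $25, but conditional on finding yourself in a surviving branch it's worth $50.

    # Payoffs for the two quantum coin flips; None marks branches where "you" don't survive.
    outcomes = {"00": 0, "11": 100, "01": None, "10": None}

    # Unconditional expectation over all four equally likely branches (death counted as $0):
    ev_all = sum(v if v is not None else 0 for v in outcomes.values()) / len(outcomes)  # 25.0

    # Expectation conditional on finding yourself alive (the "anthropic" pricing above):
    alive = [v for v in outcomes.values() if v is not None]
    ev_alive = sum(alive) / len(alive)  # 50.0

    print(ev_all, ev_alive)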

You could say, a little vaguely, that the others are physical possibilities, but they're not anthropic possibilities.

As for "I can still act for the sake of things which I won't experience" in general, where you care about dead versions, apart from you being able to experience such, you might find Living in Many Worlds helpful, specifically this bit:

Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the 12th century, which are also beyond your ability to affect. But the 12th century is not your responsibility, because it has, as the quaint phrase goes, "already happened". I would suggest that you consider every world which is not in your future, to be part of the "generalized past".

If you care about other people finding you dead and mourning you though, then the case would be different, and you'd have to adjust your payoffs accordingly.

Note again though, this should have nothing necessarily to do with QM (all of this would hold in a large enough classical universe).

As for me, personally, I don't think I buy immortality, but then I'd have to modus tollens out a lot of stuff (like stepping into a teleporter, or even perhaps the notion of continuity).

Comment by strangepoop on Open thread, May 15 - May 21, 2017 · 2017-05-18T21:18:21.522Z · score: 1 (1 votes) · LW · GW

Is there some nice game-theoretic solution that deals with the 'free rider problem', in the sense of making everyone pay in proportion to their honest valuation? Like how Vickrey auctions reveal honest prices, or Sperner's lemma can help with envy-free rent division?

Comment by strangepoop on Open thread, May 15 - May 21, 2017 · 2017-05-15T23:36:23.654Z · score: 4 (4 votes) · LW · GW

Why does patternism [the position that you are only a pattern in physics and any continuations of it are you/you'd sign up for cryonics/you'd step into Parfit's teleporter/you've read the QM sequence]

not imply

subjective immortality? [you will see people dying, other people will see you die, but you will never experience it yourself]

(contingent on the universe being big enough for lots of continuations of you to exist physically)

I asked this on the official IRC, but only feep was kind enough to oblige (and had a unique argument that I don't think everyone is using).

If you have a completely thought-out explanation for why it does imply that, you ought never to be worried about what you're doing leading to your death (maybe a painful existence, but never death), because there would be a version of you that would miraculously escape it.

If you bite that bullet as well, then I would like you to formulate your argument cleanly, then answer this (rot13):

jul jrer lbh noyr gb haqretb narfgurfvn? (hayrff lbh pbagraq lbh jrer fgvyy pbafpvbhf rira gura)

ETA: This is slightly different from a Quantum Immortality question (although resolutions might be similar) - there is no need to involve QM or its interpretations here: even in a classical universe (as long as it's large enough), if you're a patternist, you can expect to "teleport" to another exact clone somewhere that manages to live.

Comment by strangepoop on Open thread, Dec. 19 - Dec. 25, 2016 · 2016-12-19T16:24:43.194Z · score: 1 (1 votes) · LW · GW

Can someone recommend a book on economics basics with the same level of force and completeness as a Jaynes/Drescher/Pearl/Nozick/Dawes?

I mean, with powerful freeing laws (I feel like this is exactly analogous to EY's requiredism in the free will sequence) that can let my imagination wander without fear of fooling myself too much.

I realize that this may be asking for too much given the nature of the field, but anything that is close will do.