niplav's Shortform

post by niplav · 2020-06-20T21:15:06.105Z · LW · GW · 35 comments

Comments sorted by top scores.

comment by niplav · 2021-01-31T14:32:43.012Z · LW(p) · GW(p)

I have the impression that very smart people have many more ideas than they can write down & explain adequately, and that these kinds of ideas especially get developed in & forgotten after conversations among smart people.

For some people, their comparative advantage could be to sit in on conversations between smart people, record & take notes, and summarize the outcome of the conversations (or, alternatively, just interview smart people, let them explain their ideas & ask for feedback, and then write them down so that others understand them).

Replies from: Viliam, adamShimi
comment by Viliam · 2021-02-04T21:52:35.687Z · LW(p) · GW(p)

If you interview smart people on a video and publish it on your channel, it will be almost like Joe Rogan, who is quite popular. If you prioritize "smart" over "popular", and interview some of them multiple times (under the assumption they have many more ideas than they can write down), it may be less generally popular, but also more original, so you could create your own niche. In addition, you could have a blog with short summaries of the interviews, with the videos embedded (which would make it easier to move to another platform if YouTube bans you).

I think this would be awesome. If you are tempted to try it, and only wait for encouragement, then by all means go ahead, you have my blessing! (I would be somewhat tempted myself, but my spoken English is not very good.)

By the way, the ideas don't even have to be that special. Someone explaining their daily job can already be interesting enough. I mean, worst case, you don't care, so you skip this specific video. That means the potential material would not be scarce, and you are only limited by the time you want to spend doing this.

comment by adamShimi · 2021-01-31T14:59:24.956Z · LW(p) · GW(p)

I regularly think about this. My big question is how to judge whether your comparative advantage is doing that, instead of doing the research/having the ideas yourself.

comment by niplav · 2020-10-06T07:40:43.362Z · LW(p) · GW(p)

Thoughts on "Fillers Neglect Framers" in the context of LW.

  • LW seems to have more framers as opposed to fillers
  • This is because
    • framing probably has slightly higher status (framing posts get referenced more often, since they create useful concept handles)
    • framing requires more intelligence, whereas filling requires meticulous research, and because most LWers are hobbyists, they don't really have a lot of time for research
    • there is no institutional outside pressure to create more fillers (except high status fillers such as Gwern)
  • The ratio of fillers to framers seems to have increased since LW's inception (I am least confident in this claim)
    • Perhaps this is a result of EY leaving behind a relatively accepted frame
  • I am probably in the 25th percentile-ish of intelligence among LWers, and enjoy researching and writing filling posts. Therefore, my comparative advantage is probably writing filling-type posts as opposed to framing ones.
comment by niplav · 2021-04-09T21:31:30.104Z · LW(p) · GW(p)

The child-in-a-pond thought experiment is weird, because people use it in ways it clearly doesn't work for (especially in arguing for effective altruism).

For example, it observes that you would act altruistically in a situation where the drowning child is near you, and then assumes that you ought to care about people far away as much as about people near you. People usually don't argue against this second step, but they very well could. But the thought experiment gives no justification for that extension of the circle of moral concern; it just assumes it.

Similarly, it says nothing about how effectively you ought to use your resources, only that you probably ought to be more altruistic in a stranger-encompassing way.

But not only does this thought experiment not argue for the things people usually use it for, it's also not good for arguing that you ought to be more altruistic!

Underlying it is a theme that plays a role in many thought experiments in ethics: they appeal to game-theoretic intuition for useful social strategies, but say nothing of what these strategies are useful for.

Here, if people catch you standing idly by while a child drowns in a pond, you're probably going to be excluded from many communities, or even punished. And this schema occurs very often! Unwilling organ donors, trolley problems, and violinists.

Bottom line: Don't use the drowning child argument to argue for effective altruism.

Replies from: habryka4
comment by habryka (habryka4) · 2021-04-09T23:29:23.335Z · LW(p) · GW(p)

I don't know, I think it's a pretty decent argument. I agree it sometimes gets overused, but I do think that, given its assumptions, "you care about people far away as much as people close by", "there are lots of people far away you can help much more than people close by", and "here is a situation where you would help someone close by, so you might also want to help the people far away in the same way" are all part of a totally valid logical chain of inference that seems useful to have in discussions on ethics.

Like, you don't need to take it to an extreme, but it seems locally valid and totally fine to use, even if not all the assumptions that make it locally valid are always fully explicated.

Replies from: Dagon, niplav
comment by Dagon · 2021-04-10T02:52:41.424Z · LW(p) · GW(p)

On self-reflection, I just plain don't care about people far away as much as those near to me.  Parts of me think I should, but other parts aren't swayed.  The fact that a lot of the motivating stories for EA don't address this at all is one of the reasons I don't listen very closely to EA advice.  

I am (somewhat) an altruist.  And I strive to be effective at everything I undertake.  But I'm not an EA, and I don't really understand those who are.

Replies from: habryka4
comment by habryka (habryka4) · 2021-04-10T04:33:15.700Z · LW(p) · GW(p)

Yep, that's fine. I am not a moral prescriptivist who tells you what you have to care about. 

I do think that you are probably going to change your mind on this at some point in the next millennium if we ever get to live that long, and I do have a bunch of arguments that feel relevant, but I don't think it's completely implausible you really don't care.

I do think that not caring about how far away people are is pretty common, and building EA on that assumption seems fine. Not all clubs and institutions need to be justifiable to everyone.

comment by niplav · 2021-04-10T09:54:15.567Z · LW(p) · GW(p)

Right, my gripe with the argument is that these first two assumptions are almost always unstated, and most of the time when people use the argument, they "trick" people into agreeing with assumption one.

(for the record, I think the first premise is true)

comment by niplav · 2020-10-15T22:22:45.996Z · LW(p) · GW(p)

I just updated my cryonics cost-benefit analysis with

along with some small fixes and additions.

The basic result has not changed, though. It's still worth it.

comment by niplav · 2020-09-25T17:31:55.345Z · LW(p) · GW(p)

Adblockers have positive externalities: they remove much of the incentive to make websites addictive.

Replies from: Viliam, mr-hire
comment by Viliam · 2020-09-27T09:04:21.178Z · LW(p) · GW(p)

If adblockers become too popular, websites will update to circumvent them. It will be a lot of work at the beginning, but it is probably possible.

Currently, most ads are injected by JavaScript that downloads them from a different domain. That allows adblockers to block anything coming from a different domain, so the ads are blocked relatively easily.

The straightforward solution would be to move ad injection to the server side. The PHP (or whatever language) code generating the page would contact the ad server, download the ad, and inject it into the generated HTML file. From the client perspective, it is now all coming from the same domain; it is even part of the same page. The client cannot see the interaction between server and third party.

The problem with this solution is that it makes it too easy for the server to cheat: to download a thousand extra ads without displaying them to anyone. The advertising companies would have to find a way to protect themselves from fraud.

But if smart people start thinking about it, they will probably find a solution. The solution doesn't have to work perfectly, only statistically. For example, the server displaying the ad could also take the client's fingerprint and send it to the advertising company. Now this fingerprint can of course either be real, or fictional if the server is cheating. But the advertising company could cross-compare fingerprints coming from thousands of servers. If many different servers report having noticed the same identity, the identity is probably real. If a server reports too many identities that no one else has ever seen, the identities are probably made up. The advertising company would suspect fraud if the fraction of unique identities reported by one server exceeded, say, 20%. Something like this.
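
A minimal sketch of that cross-comparison heuristic (the server names, fingerprints, and the 20% threshold are made-up illustrations, not a real ad-network API):

```python
from collections import defaultdict

# reports[server] = client fingerprints the server claims to have shown ads to
reports = {
    "site-a": ["fp1", "fp2", "fp3"],
    "site-b": ["fp2", "fp3", "fp1"],
    "site-c": ["x1", "x2", "x3", "fp1"],  # cheater: mostly invented identities
}

# For each fingerprint, record which servers have ever reported it.
servers_per_fingerprint = defaultdict(set)
for server, fingerprints in reports.items():
    for fp in fingerprints:
        servers_per_fingerprint[fp].add(server)

def suspicious(server, threshold=0.2):
    """Flag a server if too many of its reported fingerprints were never seen elsewhere."""
    fingerprints = set(reports[server])
    unique = sum(1 for fp in fingerprints if servers_per_fingerprint[fp] == {server})
    return unique / len(fingerprints) > threshold

for server in reports:
    print(server, "suspicious:", suspicious(server))
# site-a and site-b share their fingerprints, so they pass; site-c gets flagged.
```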

comment by Matt Goldenberg (mr-hire) · 2020-09-25T18:52:09.285Z · LW(p) · GW(p)

They also have negative externalities, moving websites from price discrimination models that are available to everyone, to direct pay models that are only available to people who can afford that.

comment by niplav · 2020-09-02T19:50:38.270Z · LW(p) · GW(p)

Mistake theory/conflict theory seem more like biases (often unconscious, hard to correct in the moment of action) or heuristics (should be easily over-ruled by object-level considerations).

comment by niplav · 2020-06-20T21:15:06.685Z · LW(p) · GW(p)

[epistemic status: tried it once] Possible life improvement: Drinking cold orange juice in the shower. Just did it, it felt amazing.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-06-22T14:58:16.779Z · LW(p) · GW(p)

Cold shower or hot shower?

Replies from: niplav
comment by niplav · 2020-06-23T11:53:12.645Z · LW(p) · GW(p)

Hot shower.

comment by niplav · 2021-04-11T21:23:14.430Z · LW(p) · GW(p)

Life is quined matter.

Replies from: Spiracular
comment by Spiracular · 2021-04-11T21:40:15.052Z · LW(p) · GW(p)

DNA is a quine, when processed by DNA Replicase.

Although with a sufficiently complex (or specific) compiler or processor, any arbitrary stream of information can be made into a quine.

You can embed information in either of instructions/substrate, or compiler/processor. Quines usually seem limited to describing instructions/substrate, but that's not the only place information can be drawn from. I've kinda come to think of it as a 2-term operation (ex: "this bit of quine code is a quine wrt python").
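
For concreteness, a minimal sketch of the "quine wrt Python" case, using a standard self-printing program:

```python
# (This comment is not part of the quine; the two lines below print themselves exactly.)
s = 's = %r\nprint(s %% s)'
print(s % s)
```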

(More physical quines: Ink scribbles on a sheet of paper are a quine wrt a copy-machine. At about the far-end of "replicative complexity gets stored in the processor," you have the "activated button" of a button-making machine (which activates with a button, and outputs buttons in an active state). I think the "activated button" here is either a quine, or almost a quine.)

The cool thing about life is that it is both a quine, and its own processor. (And also, sorta its own power source.)

I find it simpler to call systems like this "living" (at least while active/functional), since they're meaningfully more than just quines.

Viruses are definitely quines, though. Viruses and plasmids are non-living quines that compile and run on living biological systems.

Replies from: niplav
comment by niplav · 2021-04-13T12:56:17.826Z · LW(p) · GW(p)

Isn't life then a quine running on physics itself as a substrate?

I hadn't considered thinking of quines as two-place, but that's obvious in retrospect [LW · GW].

Replies from: Spiracular
comment by Spiracular · 2021-04-14T22:28:55.585Z · LW(p) · GW(p)

It's sorta non-obvious. I kinda poked at this for hours, at some point? It took a while for me to settle on a model I liked for this.

Here are the full notes for what I came up with. [LW(p) · GW(p)]

Physics: Feels close. Hm... biological life as a self-compiler on a physics substrate?

DNA or gametes seem really close to a "quine" for this: plug it into the right part of an active compiler, and it outputs many instances of its own code + a customized compiler. Although it crashes/gets rejected if the compiler is too different (ex: plant & animal have different regulatory markers & different sugar-related protein modifications).

I don't have a fixed word for the "custom compiler" thing yet ("optimized compiler"? "coopted compiler"? "spawner"? "Q-spawner"?). I have seen analogous stuff in other places, and I'm tempted to call it a somewhat common pattern, though. (ex: vertically-transmitted CS compiler corruption, or viruses producing tumor micro-environments that are more favorable to replicating the virus)

comment by niplav · 2020-10-16T10:33:32.491Z · LW(p) · GW(p)

Two-by-two for possibly important aspects of reality and related end-states:

|  | Coordination is hard | Coordination is easy |
| --- | --- | --- |
| Defense is easier | Universe fractured into many parties, mostly stasis | Singleton controlling everything |
| Attack is easier | Pure Replicator Hell | Few warring factions? Or descent into Pure Replicator Hell? |
Replies from: Dagon, daniel-kokotajlo
comment by Dagon · 2020-10-16T16:28:27.005Z · LW(p) · GW(p)

I think there are multiple kinds of attacks, which matter in this matrix.  Destruction/stability is a different continuum from take control of resources/preserve control of resources.   

I also think there's no stasis - the state of accessible resources (heat gradients, in the end) in the universe will always be shifting until it's truly over.  There may be multi-millennia equilibria, where it feels like stasis on a sufficiently-abstract level, but there's still lots of change.  As a test of this intuition, the Earth had been static for millions of years, and then there was a shift when it started emitting patterned EM radiation, which has been static (though changing in complexity and intensity) for 150 years or so.

I agree with Daniel, as well, that coordination and attack-to-control leads to singleton (winner takes all).  I think coordination and attack-to-destroy may lead to singleton, or may lead to cyclic destruction back to pre-coordination levels.  Defense and coordination is kind of hard to define - perfect cooperation is a singleton, right? But then there's neither attack nor defense.  I kind of hope defense and coordination leads to independence and trade, but I doubt it's stable - eventually the better producer gets enough strength that attack becomes attractive.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-10-16T10:41:55.813Z · LW(p) · GW(p)

Shouldn't the singleton outcome be in the bottom right quadrant? If attack is easy but so is coordination, the only stable solution is one where there is only one entity (and thus no one to attack or be attacked.) If by contrast defense is easier, we could end up in a stable multipolar outcome... at least until coordination between those parties happen. Maybe singleton outcome happens in both coordination-is-easy scenarios.

comment by niplav · 2020-07-19T18:27:26.359Z · LW(p) · GW(p)

Idea for an approach to gauge how close GPT-3 is to "real intelligence": generalist forecasting!

Give it the prompt: "Answer with a probability between 0 and 1. Will the UK's Intelligence and Security committee publish the report into Russian interference by the end of July?", repeat for a bunch of questions, grade it at resolution.

Similar things could be done for range questions: "Give a 50% confidence interval on values between -35 and 5. What will the US Q2 2020 GDP growth rate be, according to the US Bureau of Economic Analysis Advance Estimate?".

Perhaps include the text of the question to allow more priming.
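
A minimal sketch of how the grading could work at resolution, using the Brier score (the questions, forecasts, and outcomes below are made up for illustration):

```python
# Each entry: (question text, model's forecast in [0, 1], outcome at resolution as 0/1).
forecasts = [
    ("Will the UK's Intelligence and Security Committee publish the report "
     "into Russian interference by the end of July?", 0.7, 1),
    ("Will event X happen by date Y?", 0.2, 0),  # placeholder second question
]

def brier_score(entries):
    """Mean squared difference between forecast probabilities and binary outcomes.
    0 is perfect; always answering 0.5 scores 0.25."""
    return sum((p - outcome) ** 2 for _, p, outcome in entries) / len(entries)

print(brier_score(forecasts))  # compare against human forecasters on the same questions
```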

Upsides: It seems to me that making predictions is a huge part of intelligence, and it is relatively easy to check and compare with humans.

Downsides: Resolution will not be available for quite some time, and when the results are in, everybody will already be interested in the next AI project. Results only arrive "after the fact".

comment by niplav · 2021-04-03T18:45:16.008Z · LW(p) · GW(p)

After nearly half a year and a lot of procrastination, I fixed (though definitely didn't finish) my post on range and forecasting accuracy [LW · GW].

It's definitely still rough around the edges; I will hopefully fix the charts, re-analyze the data, and add a limitations section.

comment by niplav · 2020-12-22T11:51:05.981Z · LW(p) · GW(p)

epistemic status: a babble

While listening to this podcast yesterday, I was startled by the section on the choice of Turing machine in Solomonoff induction, and the fact that it seems arbitrary. (I had been thinking about similar questions, but in relation to Dust Theory).

One approach could be to take the set of all possible Turing machines T, and for each pair of Turing machines t₁, t₂ ∈ T find the shortest/fastest program p₁→₂ so that t₁ emulates t₂ when running p₁→₂. Then, if p₁→₂ is shorter/faster than p₂→₁, one can say that t₂ is simpler than t₁ (I don't know if this would always intuitively be true). For example, Python-with-witch can emulate Python-without-witch with a shorter program than vice versa (this will only make sense if you've listened to the podcast).

Maybe this would give a total order over Turing machines with a maximum. One could then take the simplest Turing machine and use that one for Solomonoff induction.
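
A toy sketch of the proposed relation, under one reading of the idea above (the emulation costs are invented numbers standing in for shortest-emulation-program lengths, which are uncomputable in general):

```python
# emulation_cost[(a, b)] stands in for the length of the shortest program on which
# machine a emulates machine b; real values are uncomputable, these are made up.
emulation_cost = {
    ("with_witch", "without_witch"): 10,   # dropping a primitive is cheap to emulate
    ("without_witch", "with_witch"): 500,  # implementing the witch from scratch is expensive
}

def simpler_than(a, b):
    """a is simpler than b iff b can emulate a with a shorter program than a needs to emulate b."""
    return emulation_cost[(b, a)] < emulation_cost[(a, b)]

print(simpler_than("without_witch", "with_witch"))  # True under the toy numbers
```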

comment by niplav · 2020-09-26T07:28:03.019Z · LW(p) · GW(p)

Happy Petrov Day everyone :-)

comment by niplav · 2020-09-05T13:46:04.068Z · LW(p) · GW(p)

Politics supervenes on ethics and epistemology (maybe you also need metaphysics, not sure about that).

There are no degrees of freedom left for political opinions.

Replies from: niplav
comment by niplav · 2020-09-05T14:10:53.603Z · LW(p) · GW(p)

Actually not true in general. If "ethics" is a complete & transitive preference ordering over universe trajectories, then yes; otherwise not necessarily.

comment by niplav · 2020-08-12T23:20:32.625Z · LW(p) · GW(p)

I feel like this meme is related to the troll bridge problem [LW · GW], but I can't explain how exactly.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-08-13T01:10:56.075Z · LW(p) · GW(p)

In both cases, the reasoner reasons poorly (in the meme, due to being bad at math; in the bridge problem, due to unexpected paradoxes in standard mathematical formulations). Where they diverge is that in the meme, the poor logic (presumably) maximizes the decider's benefit, whereas in the bridge problem, the poor logic minimizes the benefit the agent gets. But in both cases, the poor logic is somewhat self-reinforcing (though even more so in the bridge problem than in the meme).

comment by niplav · 2020-09-18T20:25:32.152Z · LW(p) · GW(p)

If we don't program philosophical reasoning into AI systems, they won't be able to reason philosophically.

Replies from: gworley
comment by G Gordon Worley III (gworley) · 2020-09-19T00:39:55.125Z · LW(p) · GW(p)

This is an idea that's been talked about here before, but it's not even exactly clear what philosophical reasoning is or how to train for it, let alone if it's a good idea to teach an AI to do that.

comment by niplav · 2020-12-22T11:50:30.448Z · LW(p) · GW(p)

epistemic status: a babble

While listening to this podcast yesterday, I was startled by the section on the choice of Turing machine in Solomonoff induction, and the fact that it seems arbitrary. (I had been thinking about similar questions, but in relation to Dust Theory).

One approach could be to take the set of all possible Turing machines T, and for each pair of Turing machines t₁, t₂ ∈ T find the shortest/fastest program p₁→₂ so that t₁ emulates t₂ when running p₁→₂. Then, if p₁→₂ is shorter/faster than p₂→₁, one can say that t₂ is simpler than t₁ (I don't know if this would always intuitively be true).

Maybe this would give a total order over Turing machines, with one or more maxima. One could then take the simplest Turing machine and use that one for Solomonoff induction.