What am I fighting for? 2021-04-20T23:27:26.416Z
Could degoogling be a practice run for something more important? 2021-04-17T00:03:42.790Z
[Recruiting for a Discord Server] AI Forecasting & Threat Modeling Workshop 2021-04-10T16:15:07.420Z
Aging: A Surprisingly Tractable Problem 2021-04-08T19:25:42.287Z
Averting suffering with sentience throttlers (proposal) 2021-04-05T10:54:09.755Z
TASP Ep 3 - Optimal Policies Tend to Seek Power 2021-03-11T01:44:02.814Z
Takeaways from the Intelligence Rising RPG 2021-03-05T10:27:55.867Z
Reading recommendations on social technology: looking for the third way between technocracy and populism 2021-02-24T11:48:06.451Z
Quinn's Shortform 2021-01-16T17:52:33.020Z
Is it the case that when humans approximate backward induction they violate the markov property? 2021-01-16T16:22:21.561Z
Infodemics: with Jeremy Blackburn and Aviv Ovadya 2021-01-08T15:44:57.852Z
Chance that "AI safety basically [doesn't need] to be solved, we’ll just solve it by default unless we’re completely completely careless" 2020-12-08T21:08:47.575Z
Announcing the Technical AI Safety Podcast 2020-12-07T18:51:58.257Z
How ought I spend time? 2020-06-30T16:53:53.787Z
Have general decomposers been formalized? 2020-06-27T18:09:06.411Z
Do the best ideas float to the top? 2019-01-21T05:22:51.182Z
on wellunderstoodness 2018-12-16T07:22:19.250Z


Comment by Quinn (quinn-dougherty) on Could degoogling be a practice run for something more important? · 2021-04-22T12:24:03.711Z · LW · GW

I think the litmus test for the value of reducing dependency on a given product/technology is whether we think it's empowering or enfeebling. Consider arithmetic calculators: is it empowering to delegate boring stuff to subroutines freeing up your mind to do harder stuff, or is it enfeebling because it reduces incentive to learn to do mental arithmetic well? Dependence can be a problem in either case.

Each product needs to be assessed individually.

Comment by Quinn (quinn-dougherty) on Open and Welcome Thread - April 2021 · 2021-04-05T17:12:50.852Z · LW · GW

I'm trying to decide whether I'm going to write up a thought I had about longtermism.

I think there are two schools of thought: that the graph of a value function over time is continuous, or that it is discontinuous. The continuous school holds that you get near-term evidence about long-term consequences; the discontinuous school does not interpret local perturbations this way at all.

I'm sure this is covered in one of the many posts about longtermism, and the language of continuous functions could either make it clearer or less clear depending on the audience.

Comment by Quinn (quinn-dougherty) on Takeaways from the Intelligence Rising RPG · 2021-03-05T11:30:34.527Z · LW · GW

I can't post a complete ruleset, but I can add some insight: each party had "stats" representing hard power, soft power, budget, that sort of thing. Each turn you could spend "talent" stats on research arbitrarily, and you could take two "actions", which were GM-mediated expenditures of things like soft power and budget. The game board was a list of papers and products that could be unlocked; unlocking papers released new products onto the board.

Comment by Quinn (quinn-dougherty) on Reading recommendations on social technology: looking for the third way between technocracy and populism · 2021-02-24T20:46:38.606Z · LW · GW

Isn't increasing the competence of the voter akin to increasing the competence of the official, by proxy? I'm pattern-matching this to yet another push-pull compromise between the ends of the spectrum, with a strong lean toward technocracy's side.

I'm assuming I'll have to read Brennan for his response to the criticism that this was tried in the U.S. and made a lot of people very upset / is widely regarded as a bad move.

I agree with Gerald Monroe about the overall implementation problems even if you assume it wouldn't just be a proxy for race or class war (which I think is a hefty "if").

Just doesn't seem like "off the spectrum" thinking to me, though it may be the case that reading Brennan will improve my appreciation of the problem.

Comment by Quinn (quinn-dougherty) on Scott and Rohin doublecrux on AI with human models · 2021-02-22T18:16:17.682Z · LW · GW

Should I be subscribed to a particular YouTube channel where these things get posted?

Comment by Quinn (quinn-dougherty) on Anki decks by LW users · 2021-02-21T14:49:31.346Z · LW · GW

Quick Bayes Table, by alexvermeer. A simple deck of cards for internalizing conversions between percent, odds, and decibels of evidence.

link broken
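While the link is down, the conversions the deck drills are standard and easy to sketch: odds are p/(1-p), and decibels of evidence are 10·log10(odds). A minimal illustration (function names are my own, not from the deck):

```python
import math

def percent_to_odds(p):
    """Probability (0-1) to odds, e.g. 0.75 -> 3.0, i.e. 3:1 in favor."""
    return p / (1 - p)

def odds_to_decibels(odds):
    """Decibels of evidence: 10 * log10(odds). Even odds = 0 dB."""
    return 10 * math.log10(odds)

def percent_to_decibels(p):
    """Compose the two conversions: probability straight to decibels."""
    return odds_to_decibels(percent_to_odds(p))
```

So 75% confidence is 3:1 odds, or about 4.8 decibels of evidence.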

Comment by Quinn (quinn-dougherty) on Deducing Impact · 2021-02-12T19:03:33.449Z · LW · GW
Comment by Quinn (quinn-dougherty) on Lessons I've Learned from Self-Teaching · 2021-01-25T12:23:27.640Z · LW · GW

Leverage the Pareto principle, get 80% of the benefit out of the key 20/30/40% of the concepts and exercises, and then move on.

This is hard to operationalize with respect to difficulty. I find that the hardest exercises are the likeliest to be skipped (after struggling with them for an hour or two), but it doesn't follow that I can expect the easier ones (which I happened to have completed) to lie in that key 20%.

Comment by Quinn (quinn-dougherty) on Quinn's Shortform · 2021-01-16T17:59:13.686Z · LW · GW

::: latex :::

Comment by Quinn (quinn-dougherty) on Quinn's Shortform · 2021-01-16T17:57:39.185Z · LW · GW

:::what about this:::

:::hm? x :: Bool -> Int -> String :::

Comment by Quinn (quinn-dougherty) on Quinn's Shortform · 2021-01-16T17:52:33.634Z · LW · GW

testing latex in spoiler tag

Testing code block in spoiler tag

Comment by Quinn (quinn-dougherty) on Infodemics: with Jeremy Blackburn and Aviv Ovadya · 2021-01-08T15:46:11.239Z · LW · GW

7pm on Thursday the 14th in New York, 4pm in San Francisco.

Comment by Quinn (quinn-dougherty) on Announcing the Technical AI Safety Podcast · 2020-12-08T19:02:02.930Z · LW · GW

When I submitted to Pocket Casts it said we were already on it :)

Comment by Quinn (quinn-dougherty) on Have general decomposers been formalized? · 2020-07-10T15:11:55.172Z · LW · GW

Thank you Abram. Yes, factored cognition is more what I had in mind. However, I think it's possible to speak of decomposition generally enough to say that PCA/SVD is a decomposer, albeit an incredibly parochial one that's not very useful to factored cognition.

Like, my read of IDA is that the distillation step is proposing a class of algorithms, and we may find that SVD was a member of that class all along.
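To make the "parochial decomposer" sense concrete: SVD factors a matrix into simpler pieces that recompose it exactly, which is decomposition plus faithful recombination, just over a very narrow domain. A toy sketch (the matrix is an arbitrary example of mine):

```python
import numpy as np

# A toy matrix standing in for some "task" to be decomposed.
M = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# SVD decomposes M into simpler factors: M = U @ diag(S) @ Vt.
U, S, Vt = np.linalg.svd(M)

# The factors recompose the original exactly (up to float error).
reconstructed = U @ np.diag(S) @ Vt
assert np.allclose(reconstructed, M)
```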

Comment by Quinn (quinn-dougherty) on How ought I spend time? · 2020-06-30T23:43:23.413Z · LW · GW

I'll check out Lynette's post.

I'd like to take a shot at technical AI alignment

Comment by Quinn (quinn-dougherty) on How ought I spend time? · 2020-06-30T21:14:48.829Z · LW · GW

What granularity of time are you talking about? When you "never maintain 1 and 2 at the same time", is that any given minute, or any given decade?

I would say every couple months is an opportunity to either pivot or continue.

Comment by Quinn (quinn-dougherty) on Have general decomposers been formalized? · 2020-06-28T17:35:55.391Z · LW · GW

Sorry, I think I might have a superficial understanding of encoders and embeddings. Would you be able to try pointing out for me how decomposition is performed in that case (or point me toward a favorite reading on the subject)? When I think of feeding a sentence into an encoder, I can think of multiple ways in which some compositional structure might be inferred.

I'm drawing up a proof of concept with seq2seq learners right now, but my hypothesis is that they will be inadequate decomposers suitable only for benchmarking a baseline.

Comment by Quinn (quinn-dougherty) on The Politics of Age (the Young vs. the Old) · 2019-03-30T04:35:35.852Z · LW · GW

"SITG-suffrage": sorry, by this point OP and I had established "right to vote weighted by stake" as a concept, using the words "skin-in-the-game", so SITG was an acronym for skin-in-the-game, and suffrage referred to the right to vote.

Parents are different from any other group in my comment because I was referencing Richard Kennaway's question "Does having children whose future you care about also count as skin in the game?"

Comment by Quinn (quinn-dougherty) on The Unexpected Philosophical Depths of the Clicker Game Universal Paperclips · 2019-03-30T04:29:21.883Z · LW · GW

A year or two before the paperclip version came out I played a lot of AdVenture Capitalist (and its sequel, wait for it, AdVenture Communist). I was wondering to myself whether reinforcement learning researchers would find it interesting, and whether DeepMind would start training up agents to compete in AdVenture Capitalist tournaments.

Comment by Quinn (quinn-dougherty) on The Politics of Age (the Young vs. the Old) · 2019-03-27T18:29:16.647Z · LW · GW

Does having children whose future you care about also count as skin in the game?

Unclear. There's a lot to unpack, because we don't know the distributions of 1. narcissism or 2. epistemic competence across parents. I.e., we can't assume that what parents say is in their kids' interests actually serves their kids' interests (whether through willful misdirection or through earnest mistakes).

Or you can say that your skin-in-the-game factor is proportional to how much you've already invested in the status quo. If you've spent 50 years working towards a goal, it seems unfair that a 16-year-old know-nothing should be able, on a whim, to throw all of that away.

I don't mean to guilt-by-association dismiss this, but it strongly reminds me of the property/land interpretation of SITG-suffrage.

The risk of 16-year-old know-nothings throwing things away on a whim is measured against the risk of bad "tradition is the democracy of the dead" / "most insolent of tyrannies is to govern from beyond the grave" scenarios. Which equilibrium is worse: a civilization unable to cooperate across lifetimes (because kids constantly throw everything away and start over, reinventing wheels and repeating mistakes), or one where adults only inherit agency at age 70, by which point all they care about is the same stuff the previous 70+ cohort cared about? I think "epistemically defer to the elderly when it seems wise to do so" is a more beneficial heuristic than "we owe the elderly deference for the sacrifices they made before I was born", and if we're going to bet on the distribution of how responsibly we expect these heuristics to scale, I'd much rather bet on the former.

Comment by Quinn (quinn-dougherty) on The Politics of Age (the Young vs. the Old) · 2019-03-24T20:20:08.536Z · LW · GW

A skin-in-the-game vote multiplier based on age might look like "mean lifespan minus your age". That's the logical consequence of saying that people who have to put up with outcomes longer ought to weigh more heavily in shaping them. It should floor out at around 1 at the upper limit, and the lower limit should come from the enforceability of anti-fraud measures (i.e., effectiveness at stopping parents from using kids who can't walk yet for extra votes) rather than from anyone's intuitions about when kids can think for themselves.

If some experts got together and said that brain development, knowledge, wisdom, etc. peak at age N, then you'd want a concave multiplier with its maximum at N.

With functions like these, averages between them, etc., there's a lot of material to play with in terms of starting with one-person-one-vote and fixing its weirdness with multipliers.

Maybe the latest in voting theory, or the current state of quadratic voting research, has already considered all this and come up with something more promising.
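The two multipliers above can be sketched as toy functions (the mean lifespan of 80, the peak age of 45, and the width parameter are assumed illustration values, not claims):

```python
def remaining_lifespan_weight(age, mean_lifespan=80):
    """Multiplier proportional to years left living with the outcome,
    floored at 1 near the upper limit as described above."""
    return max(1, mean_lifespan - age)

def peak_competence_weight(age, peak_age=45, width=30):
    """Multiplier peaked at a hypothesized competence peak N,
    falling off smoothly on either side."""
    return max(0.0, 1 - ((age - peak_age) / width) ** 2)
```

Averaging or otherwise combining such functions gives the "material to play with" mentioned above.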

Comment by Quinn (quinn-dougherty) on Do the best ideas float to the top? · 2019-02-24T16:42:18.867Z · LW · GW

IMO, this is what I briefly suggested by linking to Scott's Against Murderism with the words "misleading compression", i.e., I think describing a policy as murderistic and optimizing for stories are each instances of misleading compression.

If it’s only stories which matter, yet you split your efforts between stories and reality, then you will likely be outcompeted by someone who spent all of their resources on crafting good stories.

This is 100% what I find alarming about misinformation (both the malicious kind and the emergent/inadequate kind), and I don't know a reason why alignment via debate would be resilient.

Comment by Quinn (quinn-dougherty) on Do the best ideas float to the top? · 2019-02-24T16:25:46.303Z · LW · GW

Sorry. The point was NAT; density_{1,2,3} was scaffolding devised for the MVB (minimum viable blogpost). I imagine that NAT has already been discovered, discussed, problematized, etc. somewhere, but I couldn't find it. I have a background assumption that attention economists are competent and well-intentioned people, so I trust that they have the situation under control.

Comment by Quinn (quinn-dougherty) on Do the best ideas float to the top? · 2019-02-24T16:17:50.124Z · LW · GW

Thanks for your comment.

Likewise, what level do you want a NAT to be implemented at? Personal behavior? Structure of group blog sites? Social norms?

  • personal behavior: probably not viable without a dystopian regime of microchips embedded into brains.

  • structure of group blog sites: maybe-- these things have been suggested and tried; e.g., I can't tell you how many times I've seen a reddit comment lamenting the incentives of their upvote system.

  • weirdly, I found out about the Brave browser last week (weird because it's apparently been around for a while): it attempts to overthrow advertising with an attention-measuring coin. This is great news!

  • I was thinking a lot about NAT reading this paper. In the context of debate judges, NAT is a bit of a "last minute jerry-rig / frantically shore up the levee" solution, something engineers would stumble upon in an elaborate and convoluted debugging process--- the exact opposite of the kind of solutions alignment researchers are interested in.

if an AI comes to you and says, “I would like to design the particle accelerator this way because,” and then makes to you an inscrutable argument about physics, you’re faced with this tough choice. You can either sign off on that decision and see if it has good consequences, or you can be like, no, don’t do that ’cause I don’t understand it. -- Paul Christiano

Comment by Quinn (quinn-dougherty) on Functional silence: communication that minimizes change of receiver's beliefs · 2019-02-14T00:13:41.941Z · LW · GW

Not much to add, but: Yes, you nailed it, I see it in the world all the time.

Comment by Quinn (quinn-dougherty) on Do the best ideas float to the top? · 2019-01-22T22:00:47.677Z · LW · GW

This one had slipped by me, so thanks for pointing me to it. It'll take me at least a week to read and digest. I'll add a comment here (eventually) if I have anything to say.

Comment by Quinn (quinn-dougherty) on Do the best ideas float to the top? · 2019-01-22T21:57:38.154Z · LW · GW

I don't know a lot about evolution, but I suspect any benefits of building on memetics work directly would fall under the umbrella of "what about when we're tipping the scale in favor of some ingroup?". I defined density_3 as a placeholder for this along with all maximization related issues, and then said "we'll ignore this for now and focus on more basic foundations". I don't know if I'll return to it, but if I do, it'll take me a really really long time.

Comment by Quinn (quinn-dougherty) on What makes people intellectually active? · 2019-01-20T20:39:46.647Z · LW · GW

I'm not trying to hold it constant, I'm just trying to understand a relatively low standard, because that's the part I feel confused about. It seems relatively much easier to look at bad intellectual output and say how it could have been better, think about the thought processes involved, etc. Much harder to say what goes into producing output at all vs not doing so.

I think I understand the distinction, and I think if it were as simple as "people undershoot their actual capacities in favor of humility / don't want to risk wasting anybody's time", everyone would have adjusted social norms to remedy it by now.


Comment by Quinn (quinn-dougherty) on What makes people intellectually active? · 2019-01-18T05:34:05.851Z · LW · GW

I was thinking in a very different direction upon reading "a lot of people also find that writing down your ideas, causes you to have even more ideas." I know what you mean in the context of a reinforcement system, but I think it misses the more pressing phenomenon, at least in my experience of working on ideas while uncertain whether I'm inventing or indulging.

The "even more ideas" part sounds to me like a sort of (combinatorial) explosion, whereas my stroke of inspiration is usually much more problematic, much less elegant than I thought. Sometimes this also means much less original than I thought, but that isn't a bad thing: convincing oneself that something is being discovered is often the most effective way of grokking it! You don't really lose anything when you find out that it's, in fact, old news.

Sometimes all this means is it will take more work than I thought, to follow the idea through. Other times it means this is the wrong rabbit hole.

But I think many of us AD(H)Ders develop a suspicion of, or even hostility to, this "indulgent" signal, the internal phenomenon of believing oneself to be creative, because we've too easily looted the reward (been superficially creative) at the expense of rigor ("why finish all those exercises in that boring book when this system I just wrote down is totally AGI already?").

(At the same time, you and Lawrence Block are 100% right about nurturing/environment as well)

All in all, I can't wrap my head around "what is the difference between a producer and a consumer of thought?" because the question as posed seems to hold rigor, even quality, constant/irrelevant.

(Many years ago a composer told me that when Schoenberg was at UCLA, young composers had to spend hours with him for 3+ years just analyzing Mozart before he would consider looking at your music, compared to conservatories now, where you're expressing yourself from day one. There is doubtless an analogy to AI risk: which culture is more productive?)