Posts

Managing AI Risks in an Era of Rapid Progress 2023-10-28T15:48:25.029Z
What is your financial portfolio? 2023-06-28T18:39:15.284Z
Sama Says the Age of Giant AI Models is Already Over 2023-04-17T18:36:22.384Z
A Particular Equilibrium 2023-02-08T15:16:52.265Z
Idea: Learning How To Move Towards The Metagame 2023-01-10T00:58:35.685Z
What Does It Mean to Align AI With Human Values? 2022-12-13T16:56:37.018Z
Algon's Shortform 2022-10-10T20:12:43.805Z
Does Google still hire people via their foobar challenge? 2022-10-04T15:39:35.260Z
What's the Least Impressive Thing GPT-4 Won't be Able to Do 2022-08-20T19:48:14.811Z
Minerva 2022-07-01T20:06:55.948Z
What is the solution to the Alignment problem? 2022-04-30T23:19:07.393Z
Competitive programming with AlphaCode 2022-02-02T16:49:09.443Z
Why capitalism? 2015-05-03T18:16:02.562Z
Could you tell me what's wrong with this? 2015-04-14T10:43:49.478Z
I'd like advice from LW regarding migraines 2015-04-11T17:52:04.900Z
On immortality 2015-04-09T18:42:35.626Z

Comments

Comment by Algon on eggsyntax's Shortform · 2024-07-20T12:06:45.161Z · LW · GW

I'm writing a page for AIsafety.info on scaffolding, and was struggling to find a principled definition. Thank you for this!

Comment by Algon on Algon's Shortform · 2024-07-17T14:03:26.756Z · LW · GW

When tracking an argument in a comment section, I like to skip to the end to see if either of the arguers winds up agreeing with the other. Which tells you something about how productive the argument is. But when using the "hide names" feature on LW, I can't do that, as there's nothing distinguishing a cluster of comments as all coming from the same author. 

I'd like a solution to this problem. One idea that comes to mind is to hash all the usernames in a particular post and a particular session, so you can check if the author is debating someone in the comments without knowing the author's LW username. This is almost as good as full anonymity, as my status measures take a while to develop, and I'll still get the benefits of being able to track how beliefs develop in the comments.
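A minimal sketch of what I have in mind (purely hypothetical names and fields, nothing to do with LW's actual codebase): hash username + post + a per-session salt, so labels are stable within a post and session but unlinkable elsewhere.

```python
import hashlib
import secrets

# Hypothetical sketch, not LW's actual implementation: give each commenter a label
# that is stable within one post and one browsing session, but unlinkable across
# posts/sessions because the salt is regenerated every session.

session_salt = secrets.token_hex(16)

def pseudonym(username: str, post_id: str) -> str:
    digest = hashlib.sha256(f"{session_salt}:{post_id}:{username}".encode()).hexdigest()
    return f"Commenter-{digest[:8]}"

# Same author within the same post gets the same label, so you can see that a
# cluster of comments shares an author without learning who they are:
assert pseudonym("alice", "post-123") == pseudonym("alice", "post-123")
assert pseudonym("alice", "post-123") != pseudonym("bob", "post-123")
```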

@habryka 

Comment by Algon on Dalcy's Shortform · 2024-07-15T17:39:46.008Z · LW · GW

I'm not sure what you mean by operational vs axiomatic definitions. 

But Shannon was unaware of the usage of entropy in statistical mechanics. Instead, he was inspired by Nyquist and Hartley's work, which introduced ad-hoc definitions of information in the case of uniform probability distributions.

And in his seminal paper, "A mathematical theory of communication", he argued in the introduction for the logarithm as a measure of information because of practicality, intuition and mathematical convenience. Moreover, he explicitly derived the entropy of a distribution from three axioms: 
1) that it be continuous wrt. the probabilities, 
2) that it increase monotonically in the number of outcomes for uniform distributions, 
3) and that it be a weighted sum of the entropies of sub-systems. 
See section 6 for more details.
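For reference, Shannon shows that the only form satisfying these three axioms is, up to a positive constant $K$:

$$H = -K\sum_{i=1}^{n} p_i \log p_i$$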

I hope that answers your question.

Comment by Algon on Fluent, Cruxy Predictions · 2024-07-13T11:36:11.347Z · LW · GW

Whether I would get an article written, or a part of a website set up, by Friday. I was sure I wouldn't, and I didn't. But the predictions I made weren't cruxy.

Comment by Algon on Fluent, Cruxy Predictions · 2024-07-11T01:16:21.160Z · LW · GW

If this feels at least somewhat compelling, what if you just got yourself to Fatebook right now, and make a couple predictions that'll resolve within couple days, or a week? Fatebook will send you emails reminding you about it, which can help bootstrap a habit.

Done.

Comment by Algon on The Standard Analogy · 2024-07-08T15:00:25.213Z · LW · GW

I find it ironic that Simplicia's position in this comment is not too far from my own, and yet my reaction to it was "AIIIIIIIIIIEEEEEEEEEE!". The shrieking is about everyone who thinks about alignment having illegible models from the perspective of almost everyone else, of which this thread is an example.

Comment by Algon on The Standard Analogy · 2024-07-08T14:32:06.158Z · LW · GW

EEA

What the heck does "EEA" mean?

Comment by Algon on Is anyone working on formally verified AI toolchains? · 2024-06-21T20:56:26.029Z · LW · GW

I was thinking of unit tests generated from some spec for helping with that part. If someone could build such a spec/tool and share it, said spec/tool could be extensively analysed and iterated upon. 

Comment by Algon on My AI Model Delta Compared To Christiano · 2024-06-20T11:12:44.299Z · LW · GW

I'd like to try another analogy, which makes some potential problems for verifying output in alignment more legible. 

Imagine you're a customer and ask a programmer to make you an app. You don't really know what you want, so you give some vague design criteria. You ask the programmer how the app works, and they tell you, and after a lot of back and forth discussion, you verify this isn't what you want. Do you know how to ask for what you want, now? Maybe, maybe not. 

Perhaps the design space you're thinking of is small, perhaps you were confused in some simple way that the discussion resolved, perhaps the programmer worked with you earnestly to develop the design you're really looking for, and pointed out all sorts of unknown unknowns. Perhaps.

I think we could wind up in this position. The position of a non-expert verifying an expert's output, with some confused and vague ideas about what we want from the expert. We won't know the good questions to ask the expert, and will have to rely on the expert to help us. If ELK is easy, then that's not a big issue. If it isn't, then that seems like a big issue.

Comment by Algon on My AI Model Delta Compared To Christiano · 2024-06-19T22:11:09.375Z · LW · GW

generation can be easier than validation because when generating you can stay within a subset of the domain that you understand well, whereas when verifying you may have to deal with all sorts of crazy inputs.

Attempted rephrasing: you control how you generate things, but not how others do, so verifying their generations can expose you to stuff you don't know how to handle.

Example: 
"Writing code yourself is often easier than validating someone else's code"
 

Comment by Algon on Emrik Quicksays · 2024-06-18T18:34:10.226Z · LW · GW

I messed up. I meant to comment on another comment of yours, the one replying to niplav's post about fat tails disincentivizing compromise. That was the one I really wished I could bookmark. 

Comment by Algon on Emrik Quicksays · 2024-06-18T16:18:24.654Z · LW · GW

This comment is making me wish I could bookmark comments on LW. @habryka,

Comment by Algon on Richard Ngo's Shortform · 2024-06-17T18:10:07.672Z · LW · GW

I'm working on this right now, actually. Will hopefully post in a couple of weeks.

This sounds cool. 

That seems reasonable. But I do think there's a group of people who have internalized bayesian rationalism enough that the main blocker is their general epistemology, rather than the way they reason about AI in particular.

I think your OP didn't give enough details as to why internalizing Bayesian rationalism leads to doominess by default. Like, Nora Belrose is firmly Bayesian and is decidedly an optimist. Admittedly, I think she doesn't think a Kolmogorov prior is a good one, but I don't think that makes you much more doomy either. I think Jacob Cannell and others are also Bayesian and non-doomy. Perhaps I'm using "Bayesian rationalism" differently than you are, which is why I think your claim, as I read it, is invalid. 

I think the point of 6 is not to say "here's where you should end up", but more to say "here's the reason why this straightforward symmetry argument doesn't hold".

Fair enough. However, how big is the asymmetry? I'm a bit sceptical there is a large one. Based off my interactions, it seems like ~ everyone who has seriously thought about this topic for a couple of hours has radically different models, w/ radically different levels of doominess. This holds even amongst people who share many lenses (e.g. Tyler Cowen vs Robin Hanson, Paul Christiano vs. Scott Aaronson, Steve Hsu vs Michael Nielsen etc.). 

There's still something importantly true about EU maximization and bayesianism. I think the changes we need will be subtle but have far-reaching ramifications. Analogously, relativity was a subtle change to newtonian mechanics that had far-reaching implications for how to think about reality.

I think we're in agreement over this. (I think Bayesianism is less wrong than EU maximization, and probably a very good approximation in lots of places, like Newtonian physics is for GR.) But my contention is over Bayesian epistemology tripping many rats up when thinking about AI x-risk. You need some story which explains why sticking to Bayesian epistemology is tripping up so many people here in particular. 

Any epistemology will rule out some updates, but a problem with bayesianism is that it says there's one correct update to make. Whereas radical probabilism, for example, still sets some constraints, just far fewer.

Right, but in radical probabilism the type of beliefs is still a real-valued function, no? Which is in tension w/ many disparate models that don't get compressed down to a single number. In that sense, the refined formalism is still rigid in a way that your description is flexible. And I suspect the same is true for Infra-Bayesianism, though I understand that even less well than radical probabilism. 

Comment by Algon on Richard Ngo's Shortform · 2024-06-13T23:18:25.174Z · LW · GW

I think this post doesn't really explain why rats have high belief in doom, or why they're wrong to do so. Perhaps ironically, there is a better version of this post on both counts, one which isn't so focused on how rats get epistemology wrong and the social/meta-level consequences: a post which focuses on the object-level implications for AI of a theory of rationality which looks very different from the AIXI-flavoured rat-orthodox view.

I say this because those sorts of considerations convinced me that we're much less likely to be buggered. I.e. I no longer believe EU maximization is/will be a good description by default of TAI or widely economically productive AGI, mildly superhuman AGI or even ASI, depending on the details. Which is partly due to a recognition that the arguments for EU maximization are weaker than I thought, that arguments for LDT being convergent are lacking, that the notions of optimality we do have are very weak, and due to the existence and behaviour of GPT-4, Claude Opus etc. 

6 seems too general a claim to me. Why wouldn't it work for 1% vs 10%, and likewise 0.1% vs 1%? I.e. why doesn't this suggest that you should round P(doom) down to zero? Also, I don't even know what you mean by "most" here. Like, are we quantifying over methods of reasoning used by current AI researchers right now? Over all time? Over all AI researchers and engineers? Over everyone in the West? Over everyone who's ever lived? Etc. 

And it seems to me like you're implicitly privileging ways of combining these opinions that get you 10% instead of 1% or 90%, which is begging the question. Of course, you could reply that a P(doom) of 10% is confused, that it isn't really your state of knowledge, that lumping all your sub-agents' models into a single number is too lossy etc. But then why mention that 90% is a much stronger prediction than 10% instead of saying they're roughly equally confused? 

7 I kinda disagree with. Those models of idealized reasoning you mention generalize Bayesianism/Expected Utility Maximization. But they are not far from the Bayesian or EU frameworks. Like Bayesianism, they do say there are correct and incorrect ways of combining beliefs, and that beliefs should be isomorphic to certain structures, unless I'm horribly mistaken. Which sure is not what you're claiming to be the case in your above points. 

Also, a lot of rationalists already recognize that these models are addressing flaws in Bayesianism like logical omniscience, embeddedness etc. Like, I believed this at least around 2017, and probably earlier. Also, note that these models of epistemology are not in tension with a strong belief that we're buggered. Last I checked, the people who invented these models believe we're buggered. I think they may imply that we're a little less doomed than the EU maximization theory suggests, but I don't think this is a big difference. IMO this is not a big enough departure to do the work that your post requires. 

 

Comment by Algon on My AI Model Delta Compared To Yudkowsky · 2024-06-10T17:44:25.252Z · LW · GW

The AI Optimists (i.e. the people in the associated Discord server) have a lot of internal disagreement[1], to the point that I don't think it's meaningful to talk about the delta between John and them.  That said, I would be interested in specific deltas e.g. with @TurnTrout, in part because he thought we'd get death by default and now doesn't think that, has distanced himself from LW, and if he replies, is more likely to have a productive argument w/ John than Quintin Pope or Nora Belrose would. Not because he's better, but because I think John and him would be more legible to each other. 

  1. ^

    Source: I'm on the AI Optimists Discord server and haven't seen much to alter my prior belief that  ~ everyone in alignment disagrees with everyone else.

Comment by Algon on 0. CAST: Corrigibility as Singular Target · 2024-06-08T12:43:32.211Z · LW · GW

This sounds like the sequence that I have wanted to write on corrigibility since ~2020 when I stopped working on the topic. So I am excited to see someone finally writing the thing I wish existed!

Comment by Algon on The Natural Abstraction Hypothesis: Implications and Evidence · 2024-06-01T15:26:40.280Z · LW · GW

However, this also feels like a feature of the "valley of confused abstractions". Humans didn't evolve based on individual snapshots of reality, we evolved with moving pictures as input data. If a human in the ancestral environment, when faced with a tiger leaping out of the bushes, decided to analyse its fur patterns rather than register the large orange shape moving straight for them, it's unlikely they would have returned from their first hunting trip. When we see a 2D image, we naturally "fill in the gaps", and create a 3D model of it in our heads, but current image classifiers don't do this, they just take a snapshot. For reasons of architectural bias, they are more likely to identify the abstraction of texture than the arguably far more natural abstraction of shape. Accordingly, we might expect video-based models to be more likely to recognise shapes than image-based models, and hence be more robust to this failure mode.

If this were true, then you'd expect NeRFs and Sora to be a lot less biased towards textures, right? 

Comment by Algon on What do coherence arguments actually prove about agentic behavior? · 2024-06-01T12:02:43.497Z · LW · GW

I've observed much the same thing. And my conclusion is that we don't have any proofs that ever greater intelligence converges to EU-maximizing agents.

 

Edit: We don't even have an argument that meets the standards of rigour in physics. 
 

Comment by Algon on Less Anti-Dakka · 2024-06-01T11:04:38.639Z · LW · GW

What were the benefits?

Comment by Algon on Web-surfing tips for strange times · 2024-05-31T18:44:09.464Z · LW · GW

Google’s auto-translation tools

Are there Chrome-native tools which are better than Google Translate for webpages? Because you can use the latter on Firefox and it works pretty well.

EDIT: Fixed the link to google translate. Thanks, @the gears to ascension for pointing that out!

Comment by Algon on keltan's Shortform · 2024-05-30T18:46:35.097Z · LW · GW

That's fair, but it sounds like a personal preference. I asked because maybe you knew there was something unusually bad about small flats in the Bay Area that even folks like me would find annoying. 

Comment by Algon on keltan's Shortform · 2024-05-30T12:10:22.175Z · LW · GW
  • you should be worried someone convinces you to move to the bay. it's not worth it. like, literally entirely for cost of housing reasons, no other reason, everything else is great, there's a reason people are there anyway. but phew, the niceness comes with a honkin price tag. and no, living in a 10ft by 10ft room to get vaguely normal sounding rent is not a good idea, even though it's possible.

Why's this not a good idea? 10ft by 10ft is a lot of room. More than I had in some flats when I went to university.

Comment by Algon on My Hammer Time Final Exam · 2024-05-22T21:05:40.604Z · LW · GW

Starting steps for chores/longer-term tasks I've been putting off. Things that reduce the problem a little, in other words.

Comment by Algon on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-19T20:49:28.278Z · LW · GW

Thanks!

Comment by Algon on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-19T20:45:50.384Z · LW · GW

I can't see a link to any LW dialog at the top.

Comment by Algon on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-18T23:52:01.288Z · LW · GW

Oh, I see! That makes a lot more sense. But he should really write up/link to his project then, or his collaborator's project.

Comment by Algon on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-18T22:17:16.441Z · LW · GW

Alas, I think it's quite unlikely that this article will make somebody fund me. It's just that I noticed how extremely slow I am (without collaborators) to create a proper grant application.

IDGI. Why don't you work w/ someone to get funding? If you're 15x more productive, then you've got a much better shot at finding/filling out grants and then getting funding for you and your partner. 

EDIT:
Also, you're a game dev and hence good at programming. Surely you could work for free as an engineer at an AI alignment org or something and then shift into discussions w/ them about alignment? 

Comment by Algon on TurnTrout's shortform feed · 2024-05-18T13:33:14.708Z · LW · GW

Because future rewards are discounted

Don't you mean future values? Also, AFAICT, the only thing going on here that separates online from offline RL is that offline RL algorithms shape the initial value function to give conservative behaviour. And so you get conservative behaviour.
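To illustrate the kind of value-shaping I mean, here's a toy tabular sketch (purely illustrative pessimistic initialization, not any particular published algorithm): actions absent from the dataset keep their low initial value, so the greedy policy stays conservative.

```python
import numpy as np

def offline_q_learning(dataset, n_states, n_actions, gamma=0.99, lr=0.1,
                       epochs=50, pessimistic_init=-10.0):
    # Start every Q-value low; only (s, a) pairs that appear in the offline
    # dataset ever get pushed up, so unseen actions remain unattractive.
    Q = np.full((n_states, n_actions), pessimistic_init)
    for _ in range(epochs):
        for s, a, r, s_next, done in dataset:
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += lr * (target - Q[s, a])
    return Q  # acting greedily w.r.t. Q keeps you close to the data's support
```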

Comment by Algon on Why you should learn a musical instrument · 2024-05-18T11:35:49.942Z · LW · GW

Since you seem interested in nootropics, I wonder if you've read Gwern's list of nootropic self-experiments? He covers a lot of supplements, some of which are pretty obscure AFAICT.

EDIT: https://gwern.net/nootropic/nootropics

Comment by Algon on Why you should learn a musical instrument · 2024-05-18T10:59:05.169Z · LW · GW

No, it's just that my prior says nootropics almost never work, so I was wondering if you had some data suggesting this one did, e.g. by doing an RCT on yourself or using signal processing techniques to detect whether supplementing this stuff led to a causal change in reflex times or so forth.

EDIT: Though I am vegan and I'm really ignorant about what makes for a good diet. So I'd be curious to hear why it's helpful for vegans to take this stuff.

Comment by Algon on Why you should learn a musical instrument · 2024-05-18T10:40:51.602Z · LW · GW

Why do you think DHA algae powder works?

Comment by Algon on My Hammer Time Final Exam · 2024-05-17T13:40:03.722Z · LW · GW

Having a large set of small 2-10 minutes task on the screen may thus feel (incorrectly) overwhelming. 

The size of a task on the screen is a leaky abstraction (of it's length in time).

This is a valuable insight and makes reading this whole post worth it for me. And the obvious thought for how to correct this error is to attach time-estimates to small tasks and convert them into a time-period view on a calendar. That way, it feels like "oh, I need 20 minutes to do all my chores today, better set a pomodoro" instead of "I have 20 things to do! 20 is big!"

Will have to try this. TEST: It doesn't look that big, though I'm including starting steps of longer-term tasks. Hmm, this doesn't feel that bad, though maybe that's the endorphins from deciding to test this talking. 

BTW I gave a strong upvote because I want to see more rationality related content on LW. Otherwise I would've given a normal upvote, or maybe not even that. Nevertheless, that still means this post gets a strong upvote.
 

Comment by Algon on Why you should learn a musical instrument · 2024-05-16T13:44:39.375Z · LW · GW

I did notice that I learned much quicker than I have in the past when I’ve tried to learn instruments. Which tells me that my current character build optimisation towards learning and memory is working. That was a good data point to update on.

Wot. 

Please explain!

Comment by Algon on The Best Tacit Knowledge Videos on Every Subject · 2024-05-14T12:18:40.282Z · LW · GW

This is not a video, but I think it counts as a useful example of tacit knowledge. 

Domain: Google-fu

Link: https://gwern.net/search-case-studies

Person: Gwern

Background: Creator of consistently thorough essays

Why: Gwern talks you through what he did to hunt down obscure resources on the internet and in the process shows you how much dakka you could bring to bear on googling things you don't know.

Comment by Algon on Against Student Debt Cancellation From All Sides of the Political Compass · 2024-05-13T20:30:28.511Z · LW · GW

Wow, I didn't realize just how bad student debt cancellation is from so many perspectives. Now I want more policy critiques like this. 

Comment by Algon on Deep Honesty · 2024-05-08T23:25:27.912Z · LW · GW

John Carmack is a famously honest man. To illustrate this, I'll give you two stories. When Carmack was a kid, he desperately wanted the Macs in his school's computer lab. So he and a buddy tried to steal some. They got caught because Carmack's friend was too fat to get through the window. Carmack went to juvie. When the counselor asked him whether, if he knew he wouldn't get caught, he would do it again, Carmack answered yes to this counterfactual.

Later, when working as a young developer, Carmack and his fellow employees would take the company workstations home to code games over the weekend. Their boss eventually noticed this and wondered if they were borrowing company property without permission. He quickly hit on a foolproof plan to catch them: just ask Carmack because he cannot tell a lie. Carmack said yes. 

These stories aren't really a response to your point. I just find them to be hilarious examples of the inability to lie. They're also an existence proof of someone being unable to lie but still doing very well. 

Comment by Algon on Designing for a single purpose · 2024-05-08T22:00:37.555Z · LW · GW

Ah, that makes sense! Well, it does seem to work out for some businesses, in particular East Asian business conglomerates. Let me quote from a Commoncog article on the topic of nearly every company having an equilibrium point past which further growth is difficult w/o a line of capital.  

Chinese businessmen and the SME Loop

With a few notable exceptions, the vast majority of successful traditional Chinese businessmen have chosen the route of escaping the SME loop by pursuing additional — and completely different — lines of businesses. This has led to the prevalence of ‘Asian conglomerates’ — where a parent holding company owns many subsidiaries in an incredibly diverse number of industries: energy, edible oils, shipping, real estate, hospitality, telecommunications and so on. The benefit of this structure has been to subsidise new business units with the profits of other business units.

Why a majority of Chinese businessmen chose this route remains a major source of mystery for me. When I left the point-of-sale business in late 2017, I wondered what steps my boss would take to escape the SME loop. And I began to wonder if the first generation of traditional Chinese businessmen chose the route of multiple diversified businesses because it was the easiest way to escape the SME loop ... or if perhaps there was something about developing markets that caused them to expand this way.

(And if so, why are there less such conglomerates in the West? Why are these conglomerates far more common in Asia? These are interesting questions — but the answers aren’t readily available to me; not for a few decades, and not until I’ve have had the experience of growing such businesses.)

Perhaps the right way to think about this is that the relentless pursuit of growth led them to expand into adjacent markets — and the markets for commodities and infrastructure was ripe for the taking in the early years of South East Asia’s development.

Here, we see that Chinese businessmen expand to keep up their free cash flow to fund their attempts to innovate enough to keep growing to larger scales. 

Comment by Algon on Designing for a single purpose · 2024-05-08T21:47:55.250Z · LW · GW

Uh, Brian did cut out a great deal of fat from Airbnb and the company clearly survived its brush with death due to COVID-19. So I don't see why you'd say it didn't work.

Comment by Algon on Designing for a single purpose · 2024-05-08T20:25:38.834Z · LW · GW

Brian Chesky, a co-founder of Airbnb, claimed that their company did get bloated and they lost focus before COVID happened, and they had to cut the fat or die. And he claimed this error is common amongst late-stage startups. From "The Social Radars: Brian Chesky, Co-Founder & CEO of Airbnb (Part II)". So I think turning into an octopus is something that happens to successful startups, and is probably what's happening to Dropbox.

Comment by Algon on Some Experiments I'd Like Someone To Try With An Amnestic · 2024-05-07T19:28:06.960Z · LW · GW

Even though I think the comment was useful, it doesn't look to me like it was as useful as the typical 139-karma comment, as I expect LW readers to be fairly unlikely to start popping benzos after reading this post. IMO it should've gotten like 30-40 karma. Even 60 wouldn't have been too shocking to me. But 139? That's way more karma than anything else I've posted. 
 

I don't think it warrants this much karma, and I now share @ryan_greenblatt's concerns about the ability to vote on Quick Takes and Popular Comments introducing algorithmic virality to LW. That sort of thing is typically corrosive to epistemic hygiene as it changes the incentives of commenting more towards posting applause-lights. I don't think that's a good change for LW, as I think we've got too much group-think as it is. 

Comment by Algon on Thomas Kwa's Shortform · 2024-05-07T15:16:30.664Z · LW · GW

I think you should write it. It sounds funny, and a bunch of people have been calling out what they see as bad arguments that alignment is hard lately, e.g. TurnTrout, QuintinPope, ZackMDavis, and karma-wise they did fairly well. 

Comment by Algon on Some Experiments I'd Like Someone To Try With An Amnestic · 2024-05-07T10:56:09.444Z · LW · GW

@habryka this comment has an anomalous amount of karma. It showed up on popular comments, I think, and I'm wondering if people liked the comment when they saw it there, which led to a feedback loop of more eyeballs on the comment, more likes, more eyeballs etc. If so, is that the intended behaviour of the popular comments feature? It seems like it shouldn't be.

Comment by Algon on GDP per capita in 2050 · 2024-05-06T18:17:41.448Z · LW · GW

Good point. I grabbed the dataset of GDP per capita vs life expectancy for almost all nations from Our World in Data, log-transformed GDP per capita and got a correlation of 0.85.
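Roughly what I did, as a sketch (the file and column names here are placeholders, not Our World in Data's actual schema):

```python
import numpy as np
import pandas as pd

# One row per country; column names are assumed for illustration only.
df = pd.read_csv("owid_gdp_vs_life_expectancy.csv")
df = df.dropna(subset=["gdp_per_capita", "life_expectancy"])
corr = np.corrcoef(np.log(df["gdp_per_capita"]), df["life_expectancy"])[0, 1]
print(f"Correlation of log GDP per capita with life expectancy: {corr:.2f}")
```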

Comment by Algon on GDP per capita in 2050 · 2024-05-06T17:22:40.460Z · LW · GW

Although GDP per capita is distinct from this expanded welfare metric, the correlation between GDP per capita and this expanded welfare metric is very strong at 0.96, though there is substantial variation across countries, and welfare is more dispersed (standard deviation of 1.51 in logs) than is income (standard deviation of 1.27 in logs).[9]

I checked the paper and it looks like they're comparing welfare by "how much more would person X from the US have to consume to move to another country i?" Which results in equations like this:
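Roughly, reconstructing from memory (a sketch of the Jones–Klenow-style decomposition, not the paper's exact notation):

$$\log \lambda_i \;\approx\; \underbrace{(e_i - e_{US})\,\bar{u}}_{\text{life expectancy}} \;+\; \underbrace{\log c_i - \log c_{US}}_{\text{consumption}} \;+\; \underbrace{v(\ell_i) - v(\ell_{US})}_{\text{leisure}} \;-\; \underbrace{\tfrac{1}{2}\left(\sigma_i^2 - \sigma_{US}^2\right)}_{\text{inequality}}$$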

which says what the factor λ_i should be in terms of differences in life expectancy, consumption, leisure and inequality. So I suppose it isn't surprising that it's quite correlated with GDP, given the individual correlations at play here, but I am surprised that it is so strongly correlated, since I'd expect e.g. life expectancy vs GDP to correlate at maybe 0.8[1]. Which is a fair bit weaker than a 0.96 correlation!

  1. ^

    I checked. It's 0.67. 

Comment by Algon on GDP per capita in 2050 · 2024-05-06T15:47:13.920Z · LW · GW

This looks cool and I  want to read it in detail, but I'd like to push back a bit against an implicit take that I thought was present here: namely, that GDP takes into account major technological breakthroughs. Let me just quote some text from this article: What Do GDP Growth Curves Really Mean?
 

More generally: when the price of a good falls a lot, that good is downweighted (proportional to its price drop) in real GDP calculations at end-of-period prices.

… and the way we calculate real GDP in practice is to use prices from a relatively recent year. We even move the reference year forward from time to time, so that it’s always near the end of the period when looking at long-term growth.

Real GDP Mainly Measures The Goods Which Are Revolutionized Least

Now let’s go back to our puzzle about growth since 1960, and electronics in particular.

The cost of a transistor has dropped by a stupidly huge amount since 1960 - I don’t have the data on hand, but let’s be conservative and call it a factor of 10^12 (i.e. a trillion). If we measure in 1960 prices, the transistors on a single modern chip would be worth billions. But instead we measure using recent prices, so the transistors on a single modern chip are worth… about as much as a single modern chip currently costs. And all the world’s transistors in 1960 were worth basically-zero.

1960 real GDP (and 1970 real GDP, and 1980 real GDP, etc) calculated at recent prices is dominated by the things which are expensive today - like real estate, for instance. Things which are cheap today are ignored in hindsight, even if they were a very big deal at the time.

In other words: real GDP growth mostly tracks production of goods which aren’t revolutionized. Goods whose prices drop dramatically are downweighted to near-zero, in hindsight.

When we see slow, mostly-steady real GDP growth curves, that mostly tells us about the slow and steady increase in production of things which haven’t been revolutionized. It tells us approximately-nothing about the huge revolutions in e.g. electronics.
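To make the mechanism concrete, here's a toy two-good calculation with entirely made-up numbers (not from the article): the same quantity data gives wildly different "real growth" depending on which year's prices you value it at.

```python
# Quantities and prices for a made-up two-good economy.
quantities = {"transistors": {1960: 1e6, 2020: 1e13},
              "housing":     {1960: 100, 2020: 150}}
prices     = {"transistors": {1960: 1.0, 2020: 1e-6},
              "housing":     {1960: 1e4, 2020: 5e5}}

def real_gdp(quantity_year, price_year):
    # Value one year's output at another (fixed) year's prices.
    return sum(quantities[g][quantity_year] * prices[g][price_year] for g in quantities)

for base in (1960, 2020):
    growth = real_gdp(2020, base) / real_gdp(1960, base)
    print(f"Real GDP growth 1960->2020 at {base} prices: {growth:,.1f}x")

# At 1960 prices the transistor explosion dominates (~5,000,000x growth);
# at 2020 prices the revolutionized good nearly vanishes (~1.7x growth).
```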

Comment by Algon on Some Experiments I'd Like Someone To Try With An Amnestic · 2024-05-04T22:15:32.725Z · LW · GW

Important notice: benzodiazepines are serious business. Benzo withdrawals are amongst the worst experiences a human can go through, and combinations of benzos with alcohol, barbiturates, opioids or tricyclic antidepressants are very dangerous; benzos played a role in 31% of the estimated 22,767 deaths from prescription drug overdose in the United States.

If you're experimenting with benzos, please be very careful!

Comment by Algon on How to write Pseudocode and why you should · 2024-05-04T14:48:25.288Z · LW · GW

Probably it would have been worse as a perpetual draft.

Comment by Algon on How to write Pseudocode and why you should · 2024-05-03T15:51:09.631Z · LW · GW

An example of you writing pseudocode would've helped a great deal, especially if it illustrated what you thought was a core skill. 

Comment by Algon on The Mom Test: Summary and Thoughts · 2024-05-01T23:12:30.693Z · LW · GW

Thank you for this, I'm conducting user interviews right now and there were some surprising things in your review, as well as obviously good ideas that I would probably have missed. Organizing meetups in the field would not have occurred to me, and is a good idea. 

Comment by Algon on On Not Pulling The Ladder Up Behind You · 2024-04-29T23:57:19.644Z · LW · GW

that's still the second worst thing that's happened when running a megameetup from my perspective.

You can't just say that and not elaborate!