GeneSmith's Shortform
post by GeneSmith · 2024-09-07T05:09:46.961Z · LW · GW · 26 comments
Comments sorted by top scores.
comment by GeneSmith · 2024-09-06T23:52:18.503Z · LW(p) · GW(p)
Billionaires read LessWrong. I have personally had two reach out to me after a viral blog post I made back in December of last year.
The way this works is almost always that someone the billionaire knows will send them an interesting post and they will read it.
Several of the people I've mentioned this to seemed surprised by it, so I thought it might be valuable information for others.
↑ comment by Vladimir_Nesov · 2024-09-07T14:34:00.185Z · LW(p) · GW(p)
That's not the kind of thing that's good to legibly advertise.
↑ comment by ChristianKl · 2024-09-08T09:47:37.178Z · LW(p) · GW(p)
I think an important point here is that GeneSmith actually wrote a post that's of high quality and interest to billionaires that people pass around.
The mechanism he described is not about billionaires reading random posts on the front page but about high-value posts being passed around. Billionaires have networks that help them get sent posts that are valuable to them.
↑ comment by Vladimir_Nesov · 2024-09-08T19:24:57.657Z · LW(p) · GW(p)
The point I'm making doesn't depend on the truth of the claim or the validity of the argument (from the GeneSmith post followup) that suggests it. What I'm suggesting implies that public, legible discussion of the truth of the claim or the validity of the arguments is undesirable.
↑ comment by ChristianKl · 2024-09-08T19:37:18.193Z · LW(p) · GW(p)
I think there's a pretty strong default that discussing the truth of claims that actually matter to the decisions people make is worthwhile on LessWrong.
Saying that we can speak about the truth of some things, but not about those that actually motivate real-world decisions, seems to me like it's not good for LessWrong culture.
↑ comment by Vladimir_Nesov · 2024-09-11T17:31:29.384Z · LW(p) · GW(p)
Sure, that's a consideration, but it's a global consideration that still doesn't depend on the truth of the claim or the validity of the argument. Yes Requires the Possibility of No [LW · GW]: not discussing a Yes requires not discussing a No, and conversely. In the grandparent comment, I merely indicated that failing to discuss the truth of the claim or the validity of the argument is consistent with the point I was making.
↑ comment by Ben Pace (Benito) · 2024-09-07T15:21:42.945Z · LW(p) · GW(p)
Why not?
↑ comment by Vladimir_Nesov · 2024-09-07T15:28:13.921Z · LW(p) · GW(p)
If the claim is sufficiently true and becomes sufficiently legible to casual observers, this shifts the distribution of new users, and behavior of some existing users, in ways that seem bad overall.
↑ comment by Viliam · 2024-09-07T17:06:34.647Z · LW(p) · GW(p)
Tomorrow, everyone will have their Patreon account added to their LW profile, and all new articles will be links to Substack, where the second half of the article is available for paying subscribers only. :D
↑ comment by Ben Pace (Benito) · 2024-09-07T20:45:02.702Z · LW(p) · GW(p)
To be clear, I wish more LW users had Patreons linked to from their profiles/posts. I would like people to have the option of financially supporting great writers and thinkers on LessWrong.
I agree that the second thing sounds very damaging to public discourse.
↑ comment by Adele Lopez (adele-lopez-1) · 2024-09-07T21:05:13.814Z · LW(p) · GW(p)
Consider finding a way to integrate Patreon or similar services into the LW UI then. That would go a long way towards making it feel like a more socially acceptable thing to do, I think.
↑ comment by Viliam · 2024-09-08T20:46:28.160Z · LW(p) · GW(p)
That could be great especially for people who are underconfident and/or procrastinators.
For example, I don't think anyone would want to send any money to me, because my blogging frequency is like one article per year, and the articles are perhaps occasionally interesting, but nothing world-changing. I'm like 99% sure about this. But just in the hypothetical case that I am wrong... or if in the future my frequency and quality of blogging increase but I forget to set up a way to sponsor me... if I found out too late that I was leaving money on the table while spending 8 hours a day at a job that doesn't really align with my values, I would be really angry.
The easiest solution could be something like this: if someone has a Patreon link, put it in their profile; if they don't, show a button like "dude, too bad you don't have a Patreon account, otherwise I would donate $X per month right now". If someone clicks it and specifies a number, remember it, and when the total of hypothetical missed donations reaches a certain threshold, for example $1000 a month, display a notification to the author. That should be motivating enough to set up the account. And when the account is finally added to the profile, all users who clicked the button in the past would be notified about it. -- So if people actually want to send you money, you will find out. And if they don't, you don't need to embarrass yourself by setting up and publishing the account.
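As a rough sketch of the bookkeeping this would need (all names and the threshold value are invented for illustration; this has nothing to do with the actual LW codebase):

```python
# Hypothetical sketch of the "missed donations" idea: readers pledge an amount,
# and the author is nudged once the hypothetical total crosses a threshold.
from collections import defaultdict

MONTHLY_THRESHOLD = 1000  # dollars/month before the author gets nudged (made up)

class PledgeTracker:
    def __init__(self):
        self.pledges = defaultdict(dict)   # author_id -> {user_id: $/month pledged}
        self.notified_authors = set()

    def record_pledge(self, author_id, user_id, dollars_per_month):
        """A reader clicked 'I would donate $X/month if you had a Patreon'."""
        self.pledges[author_id][user_id] = dollars_per_month
        total = sum(self.pledges[author_id].values())
        if total >= MONTHLY_THRESHOLD and author_id not in self.notified_authors:
            self.notified_authors.add(author_id)
            print(f"Nudge author {author_id}: readers would pledge ${total}/month.")

    def on_patreon_added(self, author_id):
        """The author finally added a Patreon link: tell everyone who pledged."""
        for user_id in self.pledges[author_id]:
            print(f"Notify user {user_id}: author {author_id} now has a Patreon.")

tracker = PledgeTracker()
tracker.record_pledge("viliam", "reader_1", 600)
tracker.record_pledge("viliam", "reader_2", 500)   # crosses the $1000 threshold
tracker.on_patreon_added("viliam")
```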
I also have some negative feelings about it. I think the most likely reason is that websites that offer the option of payment are often super annoying about it. Like, shoving the "subscribe" button in your face all the time, etc. That's usually because the website itself gets a cut of the money sent. If this incentive does not exist, I think the LW developers could make this option very unobtrusive. Like, maybe only when you make a strong upvote, display a small "$" icon next to the upvote arrow with the tooltip "would you like to support the author financially?", and only after clicking on it show the Patreon link, or the "too bad you don't have a Patreon" button. Also, put the same "$" icon in the author's profile. -- The idea is that only people who bother to look at the author's profile or who made a strong upvote would be interested in sending money, so the option should only be displayed to them. Furthermore, hiding the information behind a small "$" icon that needs to be clicked first makes it as unobtrusive as possible. (Even less obtrusive than having the Patreon link directly in the profile, which is how people would do it now.)
Linkposts to articles that are subscriber-only should be outright banned. (And if they are not, I would downvote them.) If you require payment for something, don't shove it in my face. It is okay to make a few free articles and use them as advertisements for the paid ones. But everyone who votes on an article should see the same content. -- That's how it de facto works now anyway; I don't really remember seeing a paid article linked from LW.
Basically, if someone wants to get paid and believes they will get the readers, there is already a standard way to do that: make a Substack account, post some free and paid articles there, and link the free ones from LW. The advantage of my proposal is the feedback for authors who were not aware that they have a realistic option to get paid for their writing. Plus, if we have a standardized UI for that, authors do not need to think about whether to put the links in their profiles or their articles, how much would be too annoying, and how much would mean leaving money on the table.
↑ comment by Saul Munn (saul-munn) · 2024-10-04T21:27:31.668Z · LW(p) · GW(p)
I wish more LW users had Patreons linked to from their profiles/posts. I would like people to have the option of financially supporting great writers and thinkers on LessWrong.
is this something you’ve considered building into LW natively?
↑ comment by Vladimir_Nesov · 2024-09-07T17:19:25.994Z · LW(p) · GW(p)
A norm is more effective when it acts at all the individual, relatively insignificant steps, so that they don't add up. The question of whether the steps are pointing in the right direction is the same for all of them, so it could as well be considered seriously at the first opportunity, even when it's not a notable event on the object level.
↑ comment by Viliam · 2024-09-07T17:54:28.266Z · LW(p) · GW(p)
For the record, the ":D" at the end of my comment only meant that I don't think that literally everyone will do this tomorrow. But yes, the temptation to slightly move in given direction is real -- I can feel it myself (unfortunately I have no Patreon account and no product to sell), though I will probably forget this tomorrow -- and some people will follow the nudge more than the others. Also, new people may be tempted to join for the wrong reasons.
On the other hand, even before it was said explicitly, this hypothesis was... not too surprising, in my opinion. I mean, we already knew that some rich people support LW financially; it would make sense if they also read it occasionally. Also, we already had lots of people trying to join LW for the wrong reasons; most of them fail. So I think the harm of saying this explicitly is small.
↑ comment by Ben Pace (Benito) · 2024-09-11T18:21:44.307Z · LW(p) · GW(p)
For the record, I think regular users being aware of the social and financial incentives on the site is worth the costs of people goodharting on them. We have a whole system for checking content from new users, which the team goes through daily to make sure it meets certain quality bars, and I still think that having a 100+ karma or curated post basically always requires genuinely attempting to make a valuable intellectual contribution to the world. That's not a perfect standard, but it has held up in the face of a ton of other financial incentives (be aware that starting safety researcher salaries at AI capabilities companies are like $300k+).
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-09-07T19:32:19.334Z · LW(p) · GW(p)
Why would the shift be bad? More politics, more fakery, less honest truth-seeking? Yeah that seems bad. There are benefits too though (e.g. makes people less afraid to link to LW articles). Not sure how it all shakes out.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2024-09-07T04:02:27.786Z · LW(p) · GW(p)
Yep. Other important people (in government, in AGI research groups) do too.
↑ comment by lesswronguser123 (fallcheetah7373) · 2024-09-08T09:16:49.397Z · LW(p) · GW(p)
I thought it was kind of known that a few billionaires were rationalist-adjacent in a lot of ways, given that effective altruism caught on with billionaire donors. Also, in the emails released by OpenAI (https://openai.com/index/openai-elon-musk/) there is a link to slatestarcodex forwarded to Elon Musk in 2016, and Elon attended Eliezer's conference IIRC. There are quite a few places in adjacent circles that already hint at this possibility, like basedbeffjezos's followers being billionaires, etc. I was kind of predicting that some of them would read popular things on here as well, since they probably have overlapping peer groups.
↑ comment by Alex K. Chen (parrot) (alex-k-chen) · 2024-09-08T20:25:49.703Z · LW(p) · GW(p)
It's one of the most important issues ever, and has a chance of solving mass instability/unhappiness caused by wide inequality in IQs in the population, by giving the less-endowed a shot to increase their intelligence.
comment by GeneSmith · 2022-05-23T01:42:48.245Z · LW(p) · GW(p)
How are people here dealing with AI doomerism? Thoughts about the future of AI and specifically the date of creation of the first recursively self-improving AGI have invaded almost every part of my life. Should I stay in my current career if it is unlikely to have an impact on AGI? Should I donate all of my money to AI-safety-related research efforts? Should I take up a career trying to convince top scientists at DeepMind to stop publishing their research? Should I have kids if that would mean a major distraction from work on such problems?
More than anything though, I've found the news of progress in the AI field to be a major source of stress. The recent drops in Metaculus estimates of how far we are from AGI have been particularly concerning. And very few people outside of this tiny almost cult-like community of AI safety people even seem to understand the unbelievable level of danger we are in right now. It often feels like there are no adults anywhere; there is only this tiny little island of sanity amidst a sea of insanity.
I understand how people working on AI safety deal with the problem; they at least can actively work on the problem. But how about the rest of you? If you don't work directly on AI, how are you dealing with these shrinking timelines and feelings of existential pointlessness about everything you're doing? How are you dealing with any anger you may feel towards people at large AI orgs who are probably well-intentioned but nonetheless seem to be actively working to increase the probability of the world being destroyed? How are you dealing with thoughts that there may be less than a decade left until the world ends?
comment by GeneSmith · 2023-05-12T15:59:32.023Z · LW(p) · GW(p)
This may seem like small peanuts compared to AI ending the world, but I think it will be technically possible to de-anonymize most text on the internet within the next 5 years.
Analysis of writing style and a single author's idiosyncrasies has a long history of being used to reveal the true identity of anonymous authors. It's how the Unabomber was caught and also how JK Rowling was revealed as the author of The Cuckoo's Calling.
Up until now, it was never really viable to perform this kind of analysis at scale. Matching up the authors of various works also required a single person to have read much of an author's previous writing.
I think LLMs are going to make textual fingerprinting at a global scale possible within the next 5 years (if not already). This in turn implies that any archived writing you've done under a pseudonym will be attributable to you.
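To make the threat model concrete, here is a toy sketch of the classical, pre-LLM version of this attack: character n-gram fingerprints compared by cosine similarity (the corpus and author names are invented). An LLM-based version would swap the hand-built features for learned embeddings, but the pipeline has the same shape.

```python
# Toy stylometric authorship attribution; requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_texts = {  # writing samples with known authorship (invented)
    "alice": "I reckon the model is overfit; the loss curves look suspicious.",
    "bob": "Honestly, the whole approach feels brittle and under-specified.",
}
anonymous_text = "I reckon the loss curves look suspicious and overfit."

# Character n-grams capture idiosyncratic spelling, punctuation, and phrasing.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
vectors = vectorizer.fit_transform(list(known_texts.values()) + [anonymous_text])

# Compare the anonymous text against each known author.
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()
for author, score in zip(known_texts, scores):
    print(f"{author}: similarity {score:.2f}")
```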
comment by GeneSmith · 2023-05-26T20:25:54.530Z · LW(p) · GW(p)
It seems likely that there is a massive inefficiency in the stock market right now: the stocks of companies likely to benefit from AGI are massively underpriced. I think the market is just now starting to wake up to how much value could be captured by NVIDIA, TSMC, and some of the more consumer-facing giants like Google and Microsoft.
If people here actually believe that AGI is likely to come sooner than almost anyone expects and have a much bigger impact than anyone expects, it makes sense to buy these kinds of stocks because they are likely underpriced right now.
In the unlikely event that AGI goes well, you'll be one of the few who stand to gain the most from the transition.
I basically already made this bet to a very limited degree a few months ago and am currently up about 20% on my investment. It's possible of course that NVIDIA and TSMC could crash, but that seems unlikely in the long run.
comment by GeneSmith · 2023-05-12T16:05:51.747Z · LW(p) · GW(p)
I think it's time for more people in AI Policy to start advocating for an AI pause.
It seems very plausible to me that we could be within 2-5 years of recursively self-improving AGI, and we might get an AGI-light computer virus before then (Think ChaosGPT v2).
Pausing AI development actually seems like a pretty reasonable thing to most normal people. The regulatory capacity of the US government is its most functional piece, and bureaucrats put in charge of regulating something love to slow down progress.
Both the hardware and software aspects need to be targeted: strict limits on training new state-of-the-art models, and a program to limit sales of graphics cards and other hardware that can train the latest models.
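To give one concrete handle on what a training limit could look like, compute thresholds are the usual proposal. Below is a back-of-the-envelope sketch using the standard ~6 FLOPs per parameter per token approximation for dense transformers; the cap value and the example run are purely hypothetical, not actual policy figures.

```python
# Hypothetical compute-cap check; the cap and the example run are made up.
HYPOTHETICAL_CAP_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training cost: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

run = training_flops(n_params=70e9, n_tokens=1.4e12)  # e.g. a 70B model on 1.4T tokens
print(f"Estimated training compute: {run:.2e} FLOPs")
print("Over the hypothetical cap" if run > HYPOTHETICAL_CAP_FLOPS else "Under the hypothetical cap")
```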
comment by GeneSmith · 2023-01-03T07:08:15.346Z · LW(p) · GW(p)
If we are in a simulation, it implies an answer to the question of "Why do I exist?"
Suppose the following assumptions are true:
- The universe is a construct of some larger set of simulations designed by some meta-level entity who is chiefly concerned with the results of the simulation
- The cost of computation to that entity is non-zero
If true, these assumptions imply a specific answer to the question "Why do I exist?" Specifically, they imply that you exist because you are computationally irreducible.
By computationally irreducible, I mean that the state of the universe cannot be computed in any manner more efficient than simulating your life.
If it could, and the assumptions stated above hold true, it seems extremely likely that the simulation designer would have run a more efficient algorithm capable of producing the same results.
Perhaps this argument is wrong. It's certainly hard to speculate about the motivations of a universe-creating entity. But if correct, it implies a kind of meaning for our lives: there's no better way to figure out what happens in the simulation than you living your life. I find that to be a strangely comforting thought.
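For readers unfamiliar with the term, the textbook illustration of computational irreducibility is an elementary cellular automaton like Wolfram's Rule 30: as far as anyone knows, the only way to learn its state at step N is to actually run all N steps. A minimal sketch:

```python
# Rule 30: the new cell is left XOR (center OR right). Conjectured to be
# computationally irreducible -- no known shortcut beats step-by-step simulation.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width, steps = 64, 32
row = [0] * width
row[width // 2] = 1            # start with a single live cell in the middle
for _ in range(steps):
    row = rule30_step(row)
print("Center cell after", steps, "steps:", row[width // 2])
```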
comment by GeneSmith · 2022-11-08T19:47:52.512Z · LW(p) · GW(p)
FTX has just collapsed; Sam Bankman-Fried's net worth is probably quite low

Huge news from the crypto world this morning: FTX (Sam Bankman-Fried's company and the third largest crypto exchange in the world) has paused customer withdrawals and announced it is entering negotiations with Binance to be acquired. The rumored acquisition price is $1.
This has major implications for the EA/Rationalist space, since Sam is one of the largest funders of EA causes. From what I've read his net worth is tied up almost entirely in FTX stock and its proprietary cryptocurrency, FTT.
I can't find a source right now, but I think Sam's giving accounted for about a third of all funding in the EA space. So this is going to be a painful downsizing.
The story of what happened is complicated. I'll probably write something about it later.
Just read this: https://forum.effectivealtruism.org/posts/yjGye7Q2jRG3jNfi2/ftx-will-probably-be-sold-at-a-steep-discount-what-we-know
comment by GeneSmith · 2021-04-24T21:29:19.427Z · LW(p) · GW(p)
Does anyone have a good method to estimate the number of COVID cases India is likely to experience in the next couple of months? I realize this is a hard problem but any method I can use to put bounds on how good or how bad it could be would be helpful.
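One crude way to put bounds on it (not a definitive answer; the numbers below are placeholders, not real Indian case data): fit a short-run exponential growth rate to recent daily counts and project it forward under a few growth-rate scenarios. Anything more serious would need an SIR-style model plus an under-reporting correction.

```python
# Rough scenario-based projection from recent daily case counts (placeholder data).
recent_daily_cases = [200_000, 230_000, 260_000, 300_000, 350_000]
days = len(recent_daily_cases) - 1
growth = (recent_daily_cases[-1] / recent_daily_cases[0]) ** (1 / days) - 1

scenarios = [("optimistic (growth rate halves)", growth / 2),
             ("current trend continues", growth),
             ("pessimistic (growth rate 1.5x)", growth * 1.5)]
for label, r in scenarios:
    total = sum(recent_daily_cases[-1] * (1 + r) ** d for d in range(1, 61))
    print(f"{label}: ~{total:,.0f} reported cases over the next 60 days")
```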