gwern's Shortform

post by gwern · 2021-04-24T21:39:14.128Z · LW · GW · 16 comments


comment by gwern · 2024-03-17T23:56:09.976Z · LW(p) · GW(p)

Warning for anyone who has ever interacted with "robosucka" or been solicited for a new podcast series in the past few years: https://www.tumblr.com/rationalists-out-of-context/744970106867744768/heads-up-to-anyone-whos-spoken-to-this-person-i

Replies from: metachirality
comment by metachirality · 2024-03-18T06:13:52.983Z · LW(p) · GW(p)

"Who in the community do you think is easily flatterable enough to get to say yes, and also stupid enough to not realize I'm making fun of them."

I think anyone who says anything like this should stop and consider whether it is more likely to come out of the mouth of the hero or the villain of a story.

Replies from: Viliam, lahwran
comment by Viliam · 2024-03-18T08:32:09.843Z · LW(p) · GW(p)

I think the people who say such things don't really care, and would probably include your advice in the list of quotes they consider funny. (In other words, this is not a "mistake theory" situation.)

EDIT:

The response is too harsh, I think. There are situations where this is useful advice. For example, if someone is acting under peer pressure, telling them this may provide a useful outside view. As Asch's Conformity Experiment [LW · GW] teaches us, the first dissenting voice can be extremely valuable. It just seems unlikely that this is the case with robosucka.

Replies from: metachirality
comment by metachirality · 2024-03-18T12:22:42.250Z · LW(p) · GW(p)

You're correct that this isn't something that can be told to someone who is already in the middle of doing the thing. They mostly have to figure it out for themselves.

comment by the gears to ascension (lahwran) · 2024-03-18T09:14:53.755Z · LW(p) · GW(p)

I think anyone who says anything like this should stop and consider whether it is more likely to come out of the mouth of the hero or the villain of a story.

 

->

anyone who is trying to [do terrible thing] should stop and consider whether that might make them [a person who has done terrible thing]

can you imagine how this isn't a terribly useful thing to say.

Replies from: Quadratic Reciprocity
comment by Quadratic Reciprocity · 2024-03-18T14:36:41.368Z · LW(p) · GW(p)

Advice of this specific form has been helpful for me in the past. Sometimes I don't notice immediately when the actions I'm taking are not ones I would endorse after a bit of thinking (particularly when they're fun and good for me in the short term but bad for others or for me in the longer term). This is also why having rules to follow for myself is helpful (e.g. never lying or breaking promises).

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-03-18T15:01:35.188Z · LW(p) · GW(p)

hmm, fair. I guess it does help if the person is doing something bad by accident, rather than because they intend to. just, don't underestimate how often the latter happens either, or something. or overestimate it, would be your point in reply, I suppose!

comment by gwern · 2023-07-03T22:35:45.197Z · LW(p) · GW(p)

I have some long comments I can't refind now (weirdly) about the difficulty of investing based on AI beliefs (or forecasting in general): similar to catching falling knives, timing is all-important and yet usually impossible to nail down accurately; specific investments are usually impossible if you aren't literally founding the company, and indexing 'the entire sector' is definitely impossible. Even if you had an absurd amount of money, you could try to index and just plain fail - there is no index which covers, say, OpenAI.

Apropos, Matt Levine comments on one attempt to do just that:

Today the Wall Street Journal has a funny and rather cruel story about how SoftBank Group went all-in on artificial intelligence in 2018, invested $140 billion in the theme, and somehow … missed it … entirely?

The AI wave that has jolted up numerous tech stocks has also had little effect on SoftBank’s portfolio of publicly traded tech stocks it backed as startups—36 companies including DoorDash and South Korean e-commerce company Coupang.

This is especially funny because it also illustrates timing problems:

SoftBank missed out on huge gains at AI-focused chip maker Nvidia: The Tokyo-based investor put around $4 billion into the company in 2017, only to sell its shares in 2019. Nvidia stock is up about 10 times since.

Oops. EDIT: this is especially hilarious to read in March 2024, given the gains Nvidia has made since July 2023!

Part of the problem was timing: For most of the six years since Son raised the first $100 billion Vision Fund, pickings were slim for generative AI companies, which tended to be smaller or earlier in development than the type of startup SoftBank typically backs. In early 2022, SoftBank nearly completely halted investing in startups when the tech sector was in the midst of a chill and SoftBank was hit with record losses. It was then that a set of buzzy generative AI companies raised funds and the sector began to gain steam among investors. Later in the year, OpenAI released ChatGPT, causing the simmering interest in the area to boil over. SoftBank’s competitors have spent recent months showering AI startups with funding, leading to a wide surge in valuations to the point where many venture investors warn of a growing bubble for anyone entering the space.

Oops.

Also, people are quick to tell you how it's easy to make money, just follow $PROVERB, after all, markets aren't efficient, amirite? So, in the AI bubble, surely the right thing is to ignore the AI companies who 'have no moat' and focus on the downstream & incumbent users and invest in companies like Nvidia ('sell pickaxes in a gold rush, it's guaranteed!'):

During the years that SoftBank was investing, it generally avoided companies focused specifically on developing AI technology. Instead, it poured money into companies that Son said were leveraging AI and would benefit from its growth. For example, it put billions of dollars into numerous self-driving car tech companies, which tend to use AI to help learn how humans drive and react to objects on the road. Son told investors that AI would power huge expansions at numerous companies where, years later, the benefits are unclear or nonexistent. In 2018, he highlighted AI at real-estate agency Compass, now-bankrupt construction company Katerra, and office-rental company WeWork, which he said would use AI to analyze how people communicate and then sell them products.

Oops.

tldr: Investing is hard; in the future, even more so.

Replies from: lc
comment by lc · 2023-09-14T16:38:55.290Z · LW(p) · GW(p)

Sure, investing pre-slow-takeoff is a challenge. But if your model says something crazy like 100% YoY GDP growth by 2030, then NASDAQ futures (which do include exposure to OpenAI, by virtue of Microsoft's 50% stake) seem like a pretty obvious choice.

comment by gwern · 2021-04-24T21:47:50.065Z · LW(p) · GW(p)

Humanities satirical traditions: I always enjoy the CS/ML/math/statistics satire in the annual SIGBOVIK and Ig Nobels; physics has arXiv April Fools papers (like "On the Impossibility of Supersized Machines") & journals like Special Topics; and medicine has the BMJ Christmas issue, of course.

What are the equivalents in the humanities, like sociology or literature? (I asked a month ago on Twitter and got zero suggestions...) EDIT: as of March 2024, no equivalents have been found.

comment by gwern · 2021-04-24T21:39:16.652Z · LW(p) · GW(p)

Normalization-free Bayes: I was musing on Twitter about what the simplest possible still-correct computable demonstration of Bayesian inference is, that even a middle-schooler could implement & understand. My best candidate so far is ABC Bayesian inference*: simulation + rejection, along with the 'possible worlds' interpretation.

Someone noted that rejection sampling is simple but needs normalization steps, which adds complexity back. I recalled that somewhere on LW many years ago someone had a comment about a Bayesian interpretation where you don't need to renormalize after every likelihood computation, and every hypothesis just decreases at different rates; as strange as it sounds, it's apparently formally equivalent. I thought it was by Wei Dai, but I can't seem to refind it because queries like 'Wei Dai Bayesian decrease' obviously pull up way too many hits, it's probably buried in an Open Thread somewhere, my Twitter didn't help, and Wei Dai didn't recall it at all when I asked him. Does anyone remember this?

* I've made a point of using ABC in some analyses simply because it amuses me that something so simple still works, even when I'm sure I could've found a much faster MCMC or VI solution with some more work.
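To make the ABC idea concrete, here is a minimal sketch (my own illustration, not from the original thread) for a coin of unknown bias: simulate 'possible worlds' from a discrete prior, reject the worlds whose simulated flips don't match the observation, and read the posterior straight off the survivors. There is no likelihood formula and no normalization step; each hypothesis simply survives rejection at a different rate.

```python
import random

def coinflip(p):
    """Return 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

# Observation: 7 heads out of 10 flips of a coin with unknown bias.
observed_heads, n_flips = 7, 10

# Discrete prior over 'possible worlds': candidate biases, all equally likely.
candidate_biases = [i / 10 for i in range(11)]

surviving_worlds = []
for _ in range(100_000):
    bias = random.choice(candidate_biases)            # sample a world from the prior
    simulated = sum(coinflip(bias) for _ in range(n_flips))
    if simulated == observed_heads:                   # rejection: keep only matching worlds
        surviving_worlds.append(bias)

# The posterior is just the frequency of each bias among the survivors.
for b in candidate_biases:
    print(f"bias={b:.1f}: {surviving_worlds.count(b) / len(surviving_worlds):.3f}")
```

Swapping in a different simulator and a different match criterion (e.g. 'within 1 head of the observation') is all it takes to reuse the same recipe elsewhere.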


Incidentally, I'm wondering if the ABC simplification can be taken further to cover subjective Bayesian decision theory as well: if you have sets of possible worlds/hypotheses (let's say discrete for convenience), and you do only penalty updates, as rejection sampling of worlds that don't match the current observation (like AIXI), can you then implement decision theory normally by defining a loss function and minimizing it over actions? In which case you can get Bayesian decision theory without probabilities, calculus, MCMC, VI, etc., or anything more complicated than a list of numbers and a few computational primitives like coinflip().
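Continuing the sketch above (again my own illustration, with a made-up bet and loss function), the decision step then amounts to scoring each candidate action against each surviving world and taking the action with the lowest total loss; the totals never need to be normalized, since dividing by the number of survivors doesn't change which action wins:

```python
import random

def coinflip(p):
    """Return 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

# Rejection step from the sketch above, condensed: sample candidate coin biases
# ('worlds') from a uniform prior and keep those whose simulated 10-flip runs
# reproduce the observed 7 heads.
candidate_biases = [i / 10 for i in range(11)]
surviving_worlds = [b for b in (random.choice(candidate_biases) for _ in range(100_000))
                    if sum(coinflip(b) for _ in range(10)) == 7]

# Decision step: score each action against each surviving world by simulating the
# next flip in that world, then pick the action with the lowest total loss.
def loss(action, bias):
    heads = coinflip(bias) == 1
    return 0 if (action == "bet_heads") == heads else 1

actions = ["bet_heads", "bet_tails"]
best = min(actions, key=lambda a: sum(loss(a, w) for w in surviving_worlds))
print(best)  # usually "bet_heads" after observing 7 heads in 10 flips
```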

Replies from: Wei_Dai, eigen
comment by Wei Dai (Wei_Dai) · 2021-04-25T00:43:07.008Z · LW(p) · GW(p)

Doing another search, it seems I made at least one comment that is somewhat relevant, although it might not be what you're thinking of: https://www.greaterwrong.com/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy/comment/kuY5LagQKgnuPTPYZ [LW(p) · GW(p)]

comment by eigen · 2021-04-25T00:41:46.738Z · LW(p) · GW(p)

Funny that you have your own great LessWrong whale, as I do, and that you also recall it may be from Wei Dai (while he doesn't recall it):

 https://www.lesswrong.com/posts/X4nYiTLGxAkR2KLAP/?commentId=nS9vvTiDLZYow2KSK

comment by gwern · 2022-01-22T02:43:39.313Z · LW(p) · GW(p)

Danbooru2021 is out. We've gone from n=3m to n=5m (w/162m tags) since Danbooru2017. Seems like all the anime you could possibly need to do cool multimodal text/image DL stuff, hint hint.

comment by gwern · 2021-04-24T22:09:28.493Z · LW(p) · GW(p)

2-of-2 escrow: what is the exploding Nash equilibrium? Did it really originate with NashX? I've been looking for the history & real name of this concept for years now and have failed to refind it. Anyone?
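For anyone who hasn't run into the idea, here is my own stripped-down illustration of the payoff structure I take the question to refer to (made-up numbers, and ignoring the value of the underlying trade itself): both parties lock a deposit into a 2-of-2 arrangement that pays out only if both sign off, so refusing to sign destroys both deposits.

```python
# Hypothetical payoff table for a mutually-assured-destruction 2-of-2 escrow:
# each side locks a deposit that is returned only if both parties sign off;
# if either refuses, both deposits are destroyed.
DEPOSIT = 100  # made-up stake each party locks up

def payoffs(buyer_signs: bool, seller_signs: bool) -> tuple[int, int]:
    if buyer_signs and seller_signs:
        return (0, 0)                 # both deposits returned
    return (-DEPOSIT, -DEPOSIT)       # the 'explosion': everyone loses their stake

for b in (True, False):
    for s in (True, False):
        print(f"buyer_signs={b}, seller_signs={s} -> {payoffs(b, s)}")
# Signing weakly dominates refusing, so mutual cooperation is the equilibrium,
# which is what lets two strangers transact without a trusted third party.
```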

comment by Jonas Kgomo (jonas-kgomo) · 2022-07-12T21:06:05.464Z · LW(p) · GW(p)

Gwern, I wonder what you think about this question I asked a while ago on causality, in relation to the article you posted on Reddit. Do we need more general causal agents for addressing issues in RL environments?

Apologies for posting here; I didn't know how to mention/tag someone on a post in LW.

https://www.lesswrong.com/posts/BDf7zjeqr5cjeu5qi/what-are-the-causality-effects-of-an-agents-presence-in-a?commentId=xfMj3iFHmcxjnBuqY [LW(p) · GW(p)]

comment by gwern · 2022-02-04T02:32:40.941Z · LW(p) · GW(p)