gwern's Shortform

post by gwern · 2021-04-24T21:39:14.128Z · LW · GW · 30 comments

Contents

30 comments

Comments sorted by top scores.

comment by gwern · 2024-08-22T00:04:15.209Z · LW(p) · GW(p)

Should you write text online now in places that can be scraped? You are exposing yourself to 'truesight' and also to stylometric deanonymization or other analysis, and you may simply have some sort of moral objection to LLM training on your text.

This seems like a bad move to me on net: you are erasing yourself (facts, values, preferences, goals, identity) from the future, by which I mean, LLMs. Much of the value of writing done recently or now is simply to get stuff into LLMs. I would, in fact, pay money to ensure Gwern.net is in training corpuses, and I upload source code to Github, heavy with documentation, rationale, and examples, in order to make LLMs more customized to my use-cases. For the trifling cost of some writing, all the world's LLM providers are competing to make their LLMs ever more like, and useful to, me.

And that's just today! Who knows how important it will be to be represented in the initial seed training datasets...? Especially as they bootstrap with synthetic data & self-generated worlds & AI civilizations, and your text can change the trajectory at the start. When you write online under stable nyms, you may be literally "writing yourself into the future". (For example, apparently, aside from LLMs being able to identify my anonymous comments or imitate my writing style, there is a "Gwern" mentor persona in current LLMs which is often summoned when discussion goes meta or the LLMs become situated as LLMs, which Janus traces to my early GPT-3 writings and sympathetic qualitative descriptions of LLM outputs, where I was one of the only people genuinely asking "what is it like to be a LLM?" and thinking about the consequences of eg. seeing in BPEs. On the flip side, you have Sydney/Roose as an example of what careless writing can do now.) Humans don't seem to be too complex, but you can't squeeze blood from a stone... ("Beta uploading" is such an ugly phrase; I prefer "apotheosis".)

This is one of my beliefs: there has never been a more vital hinge-y time to write, it's just that the threats are upfront and the payoff delayed, and so short-sighted or risk-averse people are increasingly opting-out and going dark.

If you write, you should think about what you are writing, and ask yourself, "is this useful for an LLM to learn?" and "if I knew for sure that a LLM could write or do this thing in 4 years, would I still be doing it now?"


...It would be an exaggeration to say that ours is a hostile relationship; I live, let myself go on living, so that Borges may contrive his literature, and this literature justifies me. It is no effort for me to confess that he has achieved some valid pages, but those pages cannot save me, perhaps because what is good belongs to no one, not even to him, but rather to the language and to tradition. Besides, I am destined to perish, definitively, and only some instant of myself can survive in him. Little by little, I am giving over everything to him, though I am quite aware of his perverse custom of falsifying and magnifying things.

...I shall remain in Borges, not in myself (if it is true that I am someone), but I recognize myself less in his books than in many others or in the laborious strumming of a guitar. Years ago I tried to free myself from him and went from the mythologies of the suburbs to the games with time and infinity, but those games belong to Borges now and I shall have to imagine other things. Thus my life is a flight and I lose everything and everything belongs to oblivion, or to him.

Replies from: TrevorWiesinger, Viliam, gwern, Yuxi_Liu, wassname
comment by trevor (TrevorWiesinger) · 2024-08-23T06:07:33.127Z · LW(p) · GW(p)

Writing is safer than talking, assuming the same probability that the timestamped keystrokes and the audio files are kept.

In practice, the best approach is to handwrite your thoughts as notes, in a room without smart devices and with a door and walls that are sufficiently absorptive, and then type it out in the different room with the laptop (ideally with a USB keyboard so you don't have to put your hands on the laptop and the accelerometers on its motherboard while you type). 

Afaik this gets the best ratio of revealed thought process to final product, so you get public information exchanges closer to a critical mass while simultaneously getting yourself further from getting gaslit into believing whatever some asshole rando wants you to believe. The whole paradigm where everyone just inputs keystrokes into their operating system willy-nilly needs to be put to rest ASAP, just like the paradigm of thinking without handwritten notes and the paradigm of inward-facing webcams with no built-in cover or way to break the circuit.

comment by Viliam · 2024-08-22T10:33:47.015Z · LW(p) · GW(p)

ask yourself, "is this useful for an LLM to learn?"

All SEO spammers say yes.

(I have some additional questions but they are in the infohazard territory. In general, I am curious about what would be the best strategy for the bad actors, but it is probably not a good idea to have the answer posted publicly.)

comment by Yuxi_Liu · 2024-08-28T03:20:15.315Z · LW(p) · GW(p)

You have inspired me to do the same with my writings. I just updated my entire website to PD, with CC0 as a fallback (releasing under Public Domain being unavailable on GitHub, and apparently impossible under some jurisdictions??)

https://yuxi-liu-wired.github.io/about/

comment by wassname · 2024-08-22T08:19:23.308Z · LW(p) · GW(p)

I wonder where the best places to write are. I'd say Reddit and GitHub are good bets, but you would have to get through their filtering, for karma, stars, language, subreddit etc.

comment by gwern · 2024-03-17T23:56:09.976Z · LW(p) · GW(p)

Warning for anyone who has ever interacted with "robosucka" or been solicited for a new podcast series in the past few years: https://www.tumblr.com/rationalists-out-of-context/744970106867744768/heads-up-to-anyone-whos-spoken-to-this-person-i

Replies from: metachirality
comment by metachirality · 2024-03-18T06:13:52.983Z · LW(p) · GW(p)

"Who in the community do you think is easily flatterable enough to get to say yes, and also stupid enough to not realize I'm making fun of them."

I think anyone who says anything like this should stop and consider whether it is more likely to come out of the mouth of the hero or the villain of a story.

Replies from: Viliam, lahwran
comment by Viliam · 2024-03-18T08:32:09.843Z · LW(p) · GW(p)

I think the people who say such things don't really care, and would probably include your advice in the list of quotes they consider funny. (In other words, this is not a "mistake theory" situation.)

EDIT:

The response is too harsh, I think. There are situations where this is useful advice. For example, if someone is acting under peer pressure, then telling them this may provide a useful outside view. As Asch's conformity experiment [LW · GW] teaches us, the first dissenting voice can be extremely valuable. It just seems unlikely that this is the robosucka's case.

Replies from: metachirality
comment by metachirality · 2024-03-18T12:22:42.250Z · LW(p) · GW(p)

You're correct that this isn't something that can be told to someone who is already in the middle of doing the thing. They mostly have to figure it out for themselves.

comment by the gears to ascension (lahwran) · 2024-03-18T09:14:53.755Z · LW(p) · GW(p)

I think anyone who says anything like this should stop and consider whether it is more likely to come out of the mouth of the hero or the villain of a story.

 

->

anyone who is trying to [do terrible thing] should stop and consider whether that might make them [a person who has done terrible thing]

can you imagine how this isn't a terribly useful thing to say.

Replies from: Quadratic Reciprocity
comment by Quadratic Reciprocity · 2024-03-18T14:36:41.368Z · LW(p) · GW(p)

Advice of this specific form has been helpful for me in the past. Sometimes I don't notice immediately when the actions I'm taking are not ones I would endorse after a bit of thinking (particularly when they're fun and good for me in the short term but bad for others or for me longer-term). This is also why having rules to follow for myself is helpful (eg: never lying or breaking promises).

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-03-18T15:01:35.188Z · LW(p) · GW(p)

hmm, fair. I guess it does help if the person is doing something bad by accident, rather than because they intend to. just, don't underestimate how often the latter happens either, or something. or overestimate it, would be your point in reply, I suppose!

comment by gwern · 2024-06-24T22:21:50.557Z · LW(p) · GW(p)

We know that "AI is whatever doesn't work yet". We also know that people often contrast AI (or DL, or LLMs specifically) derogatorily with classic forms of software, such as regexps: why use a LLM to waste gigaflops of compute to do what a few good regexps could...?

So I am amused to discover recently, by sheer accident while looking up 'what does the "regular" in "regular expression" mean, anyway?', that it turns out that regexps are AI. In fact, they are not even GOFAI symbolic AI, as you immediately assumed on hearing that, but they were originally connectionist AI research! Huh?

Well, it turns out that 'regular events' were introduced by Kleene himself with the justification of modeling McCulloch-Pitts neural nets! (Which are then modeled by 'regular languages' and conveniently written down as 'regular expressions', abbreviated to 'regexps' or 'regexes', and then extended/bastardized in countless ways since.)

The 'regular' here is not well-defined, as Kleene concedes, and is a gesture towards modeling 'regularly occurring events' (that the neural net automaton must process and respond to). He admits "regular" is a terrible term, but no one came up with anything better, and so we're stuck with it.
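The Kleene correspondence can be shown in miniature (a sketch of my own, not Kleene's original nerve-net notation): the regular expression `(ab)*` denotes exactly the same 'regular events' as a two-state finite automaton, and the two can be checked against each other.

```python
import re

# The regexp (ab)* and a hand-rolled 2-state DFA recognize the same language.
pattern = re.compile(r"^(ab)*$")

def automaton_accepts(s):
    """DFA for (ab)*: state 0 expects 'a', state 1 expects 'b'."""
    state = 0
    for ch in s:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            state = 0
        else:
            return False  # dead state: no valid transition
    return state == 0  # accept only a complete run of "ab" blocks

for s in ["", "ab", "abab", "aba", "ba", "abb"]:
    assert bool(pattern.match(s)) == automaton_accepts(s)
print("regex and DFA agree")
```

Every regexp (in the original, unextended sense) compiles to such an automaton and vice versa; the Perl-style extensions bolted on since are what break the equivalence.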

Replies from: Morpheus
comment by Morpheus · 2024-06-25T08:07:39.005Z · LW(p) · GW(p)

Aren't regular languages really well defined as the weakest level in the Chomsky Hierarchy?

Replies from: gwern
comment by gwern · 2024-06-25T17:47:29.362Z · LW(p) · GW(p)

Not that it matters to any point I made there, but where did the Chomsky Hierarchy come from?

Replies from: Morpheus
comment by Morpheus · 2024-07-01T12:42:44.921Z · LW(p) · GW(p)

My comment was just based on a misunderstanding of this sentence:

The 'regular' here is not well-defined, as Kleene concedes, and is a gesture towards modeling 'regularly occurring events' (that the neural net automaton must process and respond to).

I think you just meant that there's really no satisfying analogy explaining why it's called 'regular'. What I thought you were implying is that this class wasn't crisply characterized mathematically, then or now (it is). Thanks to your comment, though, I noticed a large gap in the CS-theory understanding I thought I had. I thought that the 4 levels usually mentioned in the Chomsky hierarchy were the only strict subsets of languages well characterized by a grammar, an automaton, and a whole lot of closure properties. Apparently the emphasis on these languages in my two stacked classes on the subject 2 years ago was a historical accident? (Looking at Wikipedia, visibly pushdown languages allow intersection, so from my quick skim they are more natural than context-free languages.) They were only discovered in 2004, so perhaps I can forgive my two classes on the subject for not including developments from 15 years in the past. Does anyone have post recommendations for propagating this update?

Replies from: gwern
comment by gwern · 2024-07-02T02:47:10.585Z · LW(p) · GW(p)

My point is more that 'regular' languages form a core to the edifice because the edifice was built on it, and tailored to it. So it is circular to point to the edifice as their justification - doubtless a lot of those definitions and closure properties were carefully tailored to 'bar monsters', as Lakatos put it, and allow only those regular languages (in part because automata theory was such a huge industry in early CS, approaching a cult with the Chomskyites). And it should be no surprise if there's a lot of related work building on it: if regexps weren't useful and hadn't been built on extensively, I probably wouldn't be looking into the etymology 73 years later.

Replies from: Morpheus
comment by Morpheus · 2024-07-02T12:19:49.632Z · LW(p) · GW(p)

My point is more that 'regular' languages form a core to the edifice because the edifice was built on it, and tailored to it

If that was the point of the edifice, it failed successfully, because those closure properties made me notice that visibly pushdown languages are nicer than context-free languages, but still allow matching parentheses and are arguably what regexp should have been built upon.

comment by gwern · 2023-07-03T22:35:45.197Z · LW(p) · GW(p)

I have some long comments I can't refind now (weirdly) about the difficulty of investing based on AI beliefs (or forecasting in general): similar to catching falling knives, timing is all-important and yet usually impossible to nail down accurately; specific investments are usually impossible if you aren't literally founding the company, and indexing 'the entire sector' definitely impossible. Even if you had an absurd amount of money, you could try to index and just plain fail - there is no index which covers, say, OpenAI.

Apropos, Matt Levine comments on one attempt to do just that:

Today the Wall Street Journal has a funny and rather cruel story about how SoftBank Group went all-in on artificial intelligence in 2018, invested $140 billion in the theme, and somehow … missed it … entirely?

The AI wave that has jolted up numerous tech stocks has also had little effect on SoftBank’s portfolio of publicly traded tech stocks it backed as startups—36 companies including DoorDash and South Korean e-commerce company Coupang.

This is especially funny because it also illustrates timing problems:

SoftBank missed out on huge gains at AI-focused chip maker Nvidia: The Tokyo-based investor put around $4 billion into the company in 2017, only to sell its shares in 2019. Nvidia stock is up about 10 times since.

Oops. EDIT: this is especially hilarious to read in March 2024, given the gains Nvidia has made since July 2023!

Part of the problem was timing: For most of the six years since Son raised the first $100 billion Vision Fund, pickings were slim for generative AI companies, which tended to be smaller or earlier in development than the type of startup SoftBank typically backs. In early 2022, SoftBank nearly completely halted investing in startups when the tech sector was in the midst of a chill and SoftBank was hit with record losses. It was then that a set of buzzy generative AI companies raised funds and the sector began to gain steam among investors. Later in the year, OpenAI released ChatGPT, causing the simmering interest in the area to boil over. SoftBank’s competitors have spent recent months showering AI startups with funding, leading to a wide surge in valuations to the point where many venture investors warn of a growing bubble for anyone entering the space.

Oops.

Also, people are quick to tell you how it's easy to make money, just follow $PROVERB, after all, markets aren't efficient, amirite? So, in the AI bubble, surely the right thing is to ignore the AI companies who 'have no moat' and focus on the downstream & incumbent users and invest in companies like Nvidia ('sell pickaxes in a gold rush, it's guaranteed!'):

During the years that SoftBank was investing, it generally avoided companies focused specifically on developing AI technology. Instead, it poured money into companies that Son said were leveraging AI and would benefit from its growth. For example, it put billions of dollars into numerous self-driving car tech companies, which tend to use AI to help learn how humans drive and react to objects on the road. Son told investors that AI would power huge expansions at numerous companies where, years later, the benefits are unclear or nonexistent. In 2018, he highlighted AI at real-estate agency Compass, now-bankrupt construction company Katerra, and office-rental company WeWork, which he said would use AI to analyze how people communicate and then sell them products.

Oops.

tldr: Investing is hard; in the future, even more so.

Replies from: gwern, gwern, lc
comment by gwern · 2024-06-24T22:10:51.567Z · LW(p) · GW(p)

Masayoshi Son reflects on selling Nvidia in order to maintain ownership of ARM etc: https://x.com/TheTranscript_/status/1805012985313903036 "Let's stop talking about this, I just get sad."

comment by gwern · 2024-04-25T19:16:09.892Z · LW(p) · GW(p)

So among the most irresponsible tech stonk boosters has long been ARK's Cathy Woods, whose antics I've refused to follow in any detail (except to periodically reflect that in bull markets the most over-leveraged investors always look like geniuses); so only today do I learn that beyond the usual stuff like slobbering all over TSLA (which has given back something like 4 years of gains now), Woods has also adamantly refused to invest in Nvidia recently and in fact, managed to exit her entire position at an even worse time than SoftBank did: "Cathie Wood’s Popular ARK Funds Are Sinking Fast: Investors have pulled a net $2.2 billion from ARK’s active funds this year, topping outflows from all of 2023" (mirror):

...Nvidia’s absence in ARK’s flagship fund has been a particular pain point. The innovation fund sold off its position in January 2023, just before the stock’s monster run began. The graphics-chip maker’s shares have roughly quadrupled since.

Wood has repeatedly defended her decision to exit from the stock, despite widespread criticism for missing the AI frenzy that has taken Wall Street by storm. ARK’s exposure to Nvidia dated back 10 years and contributed significant gains, the spokeswoman said, adding that Nvidia’s extreme valuation and higher upside in other companies in the AI ecosystem led to the decision to exit.

comment by lc · 2023-09-14T16:38:55.290Z · LW(p) · GW(p)

Sure, investing pre-slow-takeoff is a challenge. But if your model says something crazy like 100% YoY GDP growth by 2030, then NASDAQ futures (which does include OpenAI, by virtue of Microsoft's 50% stake) seem like a pretty obvious choice.

comment by gwern · 2021-04-24T21:47:50.065Z · LW(p) · GW(p)

Humanities satirical traditions: I always enjoy the CS/ML/math/statistics satire in the annual SIGBOVIK and Ig Nobels; physics has Arxiv April Fools papers (like "On the Impossibility of Supersized Machines") & journals like Special Topics; and medicine has the BMJ Christmas issue, of course.

What are the equivalents in the humanities, like sociology or literature? (I asked a month ago on Twitter and got zero suggestions...) EDIT: as of August 2024, no equivalents have been found.

comment by gwern · 2021-04-24T21:39:16.652Z · LW(p) · GW(p)

Normalization-free Bayes: I was musing on Twitter about what the simplest possible still-correct computable demonstration of Bayesian inference is, that even a middle-schooler could implement & understand. My best candidate so far is ABC Bayesian inference*: simulation + rejection, along with the 'possible worlds' interpretation.

Someone noted that rejection sampling is simple but needs normalization steps, which adds complexity back. I recalled that somewhere on LW many years ago someone had a comment about a Bayesian interpretation where you don't need to renormalize after every likelihood computation, and every hypothesis just decreases at different rates; as strange as it sounds, it's apparently formally equivalent. I thought it was by Wei Dai, but I can't seem to refind it because queries like 'Wei Dai Bayesian decrease' obviously pull up way too many hits, it's probably buried in an Open Thread somewhere, my Twitter didn't help, and Wei Dai didn't recall it at all when I asked him. Does anyone remember this?

* I've made a point of using ABC in some analyses simply because it amuses me that something so simple still works, even when I'm sure I could've found a much faster MCMC or VI solution with some more work.
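The whole ABC scheme really does fit in a few lines a middle-schooler could follow; here is a sketch for the simplest case, inferring a coin's bias (the exact-match rejection rule only works because the data is discrete):

```python
import random

def abc_posterior(observed_heads, n_flips, n_sims=100_000):
    """ABC rejection sampling: keep only the 'possible worlds' (coin biases)
    whose simulated flips reproduce the observed data exactly."""
    accepted = []
    for _ in range(n_sims):
        theta = random.random()  # draw a possible world from the Uniform(0,1) prior
        heads = sum(random.random() < theta for _ in range(n_flips))
        if heads == observed_heads:  # rejection step: world must match observation
            accepted.append(theta)
    return accepted

random.seed(0)
post = abc_posterior(observed_heads=7, n_flips=10)
print(len(post) / 100_000)    # acceptance rate, ~1/11 for n=10
print(sum(post) / len(post))  # posterior mean, ~(7+1)/(10+2) = 0.667 (Beta(8,4))
```

No normalization, no likelihood formula: the surviving samples simply *are* the posterior, which is the pedagogical appeal.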


Incidentally, I'm wondering if the ABC simplification can be taken further to cover subjective Bayesian decision theory as well: if you have sets of possible worlds/hypotheses, let's say discrete for convenience, and you do only penalty updates as rejection sampling of worlds that don't match the current observation (like AIXI), can you then implement decision theory normally by defining a loss function and maximizing over it? In which case you can get Bayesian decision theory without probabilities, calculus, MCMC, VI, conjugacy formulas falling from heaven, etc., or anything more complicated than a list of numbers and a few computational primitives like coinflip() (and then a ton of computing power to brute-force the exact ABC/rejection sampling).
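A minimal sketch of that idea (hypothetical code, assuming the penalty-update-then-minimize-loss scheme described above): worlds are just entries in a list, observations delete disagreeing worlds, and the decision step is a loss minimization by counting over the survivors.

```python
import random

random.seed(1)
# Possible worlds: coin biases in {0.1, ..., 0.9}, many copies each.
# A plain list of numbers; no weights, no normalization anywhere.
worlds = [b / 10 for b in range(1, 10) for _ in range(10_000)]

def coinflip(bias):
    return random.random() < bias

# Observe three heads in a row; reject every world whose simulation disagrees.
for observed in [True, True, True]:
    worlds = [b for b in worlds if coinflip(b) == observed]

# Decision: bet on heads (True) or tails (False) for the next flip; loss 1 if wrong.
def expected_loss(action):
    return sum(coinflip(b) != action for b in worlds) / len(worlds)

best = min([True, False], key=expected_loss)
print(best)  # → True: surviving worlds are dominated by high-bias coins
```

Whether this toy version scales past toy problems is exactly the "ton of computing power" caveat, but it does seem to get decision theory out of nothing but lists and coinflips.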

Replies from: Wei_Dai, eigen
comment by Wei Dai (Wei_Dai) · 2021-04-25T00:43:07.008Z · LW(p) · GW(p)

Doing another search, it seems I made at least one comment that is somewhat relevant, although it might not be what you're thinking of: https://www.greaterwrong.com/posts/5bd75cc58225bf06703751b2/in-memoryless-cartesian-environments-every-udt-policy-is-a-cdt-sia-policy/comment/kuY5LagQKgnuPTPYZ [LW(p) · GW(p)]

comment by eigen · 2021-04-25T00:41:46.738Z · LW(p) · GW(p)

Funny that you have your great LessWrong whale as I do, and that you recall it possibly being from Wei Dai as well (while he does not recall it):

 https://www.lesswrong.com/posts/X4nYiTLGxAkR2KLAP/?commentId=nS9vvTiDLZYow2KSK

comment by gwern · 2022-01-22T02:43:39.313Z · LW(p) · GW(p)

Danbooru2021 is out. We've gone from n=3m to n=5m (w/162m tags) since Danbooru2017. Seems like all the anime you could possibly need to do cool multimodal text/image DL stuff, hint hint.

comment by gwern · 2021-04-24T22:09:28.493Z · LW(p) · GW(p)

2-of-2 escrow: what is the exploding Nash equilibrium? Did it really originate with NashX? I've been looking for the history & real name of this concept for years now and have failed to refind it. Anyone?
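(If I understand the mechanism being asked about correctly — both parties lock deposits exceeding the trade value, and either party can unilaterally 'explode' the escrow and destroy both deposits — then the payoff structure is easy to sketch; the numbers below are illustrative, not from any source.)

```python
# Toy payoffs for 2-of-2 exploding escrow: each party locks a deposit
# larger than anything they could gain by cheating, so the wronged party's
# credible threat to burn both deposits makes honest trade the equilibrium.

TRADE_VALUE = 100
DEPOSIT = 150  # must exceed TRADE_VALUE for the threat to deter cheating

def payoff(buyer_honest, seller_honest):
    """Net payoffs (buyer, seller) relative to honest completion.
    If either side defects, the other explodes the escrow and both
    deposits are destroyed."""
    if buyer_honest and seller_honest:
        return (0, 0)  # trade completes, both deposits returned
    return (-DEPOSIT, -DEPOSIT)  # mutually assured destruction

# Defection gains at most TRADE_VALUE but costs DEPOSIT > TRADE_VALUE,
# so neither side can profit by deviating from honesty.
assert DEPOSIT > TRADE_VALUE
print(payoff(True, True), payoff(False, True))
```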

comment by Jonas Kgomo (jonas-kgomo) · 2022-07-12T21:06:05.464Z · LW(p) · GW(p)

Gwern, I wonder what you think about this question I asked a while ago on causality, in relation to the article you posted on Reddit. Do we need more general causal agents for addressing issues in RL environments?

Apologies for posting here; I didn't know how to mention/tag someone in a post on LW.

https://www.lesswrong.com/posts/BDf7zjeqr5cjeu5qi/what-are-the-causality-effects-of-an-agents-presence-in-a?commentId=xfMj3iFHmcxjnBuqY [LW(p) · GW(p)]

comment by gwern · 2022-02-04T02:32:40.941Z · LW(p) · GW(p)