LessWrong 2.0 Reader
This is a fun slice of life. I'm glad y'all had a good time!
abhimanyu-pallavi-sudhir on Abhimanyu Pallavi Sudhir's Shortform
Oh right, lol, good point.
shoshannah-tekofsky on Dyslucksia
Oh interesting! Maybe I'm wrong. I'm more curious about something like a survey on the topic now.
lorxus on Dyslucksia
“Anyway, my prediction is that non-dyslectics do not subvocalize - it's much too slow. You can't read faster than you speak in that case.”
Maybe I'm just weird, but I totally do sometimes subvocalize, but incredibly quickly. Almost clipped or overlapping to an extent, in a way that can only really work inside your head? And that way it can go faster than you can physically speak. Why should your mental voice be limited by the limits of physical lips, tongue, and glottis, anyway?
lorxus on Open Thread Spring 2024
Excellent, thanks!
olli-jaerviniemi on D0TheMath's Shortform
“Much research on deception (Anthropic's recent work, trojans, jailbreaks, etc.) is not targeting "real" instrumentally convergent deception reasoning, but learned heuristics.”
If you have the slack, I'd be interested in hearing/chatting more about this, as I'm working (or trying to work) on the "real" "scary" forms of deception. (E.g. do you think that this paper [LW · GW] has the same failure mode?)
jacques-thibodeau on jacquesthibs's Shortform
Anybody know how Fathom Radiant (https://fathomradiant.co/) is doing?
They’ve been working on photonics compute for a long time, so I’m curious whether anyone knows when they expect it to have practical effects on compute.
Also, Sam Altman and Scott Gray at OpenAI are both investors in Fathom. Not sure when they invested.
I’m guessing it’s still a long-term bet at this point.
OpenAI also hired someone who worked at PsiQuantum recently. My guess is that they are hedging their bets on the compute end and generally looking for opportunities on that side of things. Here’s his bio:
Ben Bartlett I'm currently a quantum computer architect at PsiQuantum working to design a scalable and fault-tolerant photonic quantum computer. I have a PhD in applied physics from Stanford University, where I worked on programmable photonics for quantum information processing and ultra high-speed machine learning. Most of my research sits at the intersection of nanophotonics, quantum physics, and machine learning, and basically consists of me designing little race tracks for photons that trick them into doing useful computations.
bogdan-ionut-cirstea on Bogdan Ionut Cirstea's Shortform
For the pretraining-finetuning paradigm, this link is now made much more explicit in Cross-Task Linearity Emerges in the Pretraining-Finetuning Paradigm, which also links to model ensembling through logit averaging.
fowlertm on fowlertm's Shortform
YouTube can generate those automatically, or you can rip the .mp4 with an online service (just Google around; there are tons), then pass it to something like Otter.ai.
pendertif on There’s no such thing as a tree (phylogenetically)
“taxonomy is not automatically a great category for regular usage.”
This is great, and I love the specific example of trees as a failure to classify a large set into subsets.
Something that’s not exactly the same problem, but rhymes, is genre classification for content discovery. Consider Spotify playlists. There are millions of songs and hundreds of classified genres. Genres are classified much like species/genus taxonomies: two songs share a genre if they share a common musical ancestor. Led Zeppelin and the Beatles are different, but both derive from traditions of electric guitar, which grew out of the blues, which… and so on. So we say Led Zeppelin and the Beatles are the same genre, “Rock”. You can do this kind of classification in much greater detail to carve out new genres and subgenres.
However, when it comes to utility and discovery, genres underperform. Despite being the same genre, few parties shuffle between the melancholic “Yesterday” and the screaming “Ramble On”. People seek songs that are similar to others in strategy, NOT in tradition. As you said:
“tree is a strategy. Wood is a strategy. Fruit is a strategy. A fish is also a strategy”
Successful user-created playlists on Spotify (public ones with lots of likes) tend NOT to use genre. They tend to be called something like “rainy tuesday drive home from work” or “music school nerds playing ALL the notes”. Rather than carving out a subset using genre (a playlist is a subset of music), they define it by strategy.
A failure in taxonomy.
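The tradition-vs-strategy contrast above can be sketched in a toy example. All of the song data, feature axes, and numbers below are made up purely for illustration; they're not real Spotify features:

```python
# Toy sketch (hypothetical data): grouping songs by shared ancestry ("genre",
# i.e. tradition) versus by feature similarity ("strategy"), to show how the
# two carve out different subsets of the same library.

songs = {
    # name: (genre lineage root, (energy, mood) feature vector) -- invented values
    "Yesterday":  ("rock", (0.2, 0.1)),   # melancholic Beatles ballad
    "Ramble On":  ("rock", (0.9, 0.8)),   # loud Led Zeppelin track
    "Rainy Jazz": ("jazz", (0.25, 0.15)), # different lineage, similar *feel*
}

def same_genre(a, b):
    """Tradition-based grouping: shared ancestor => same group."""
    return songs[a][0] == songs[b][0]

def similar_strategy(a, b, tol=0.3):
    """Strategy-based grouping: close in feature space => same group."""
    (e1, m1), (e2, m2) = songs[a][1], songs[b][1]
    return abs(e1 - e2) <= tol and abs(m1 - m2) <= tol

# Tradition puts Yesterday and Ramble On together; strategy does not:
print(same_genre("Yesterday", "Ramble On"))        # True
print(similar_strategy("Yesterday", "Ramble On"))  # False
# Strategy groups Yesterday with Rainy Jazz, cutting across genre lines:
print(similar_strategy("Yesterday", "Rainy Jazz")) # True
```

A "rainy tuesday drive home" playlist is, in effect, a `similar_strategy` query, which is why it cuts across genre boundaries that the taxonomy treats as fundamental.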