Posts

.CLI's Shortform 2024-06-06T22:02:51.701Z

Comments

Comment by .CLI (Changbai Li) on .CLI's Shortform · 2024-12-04T07:15:35.582Z · LW · GW

How did safety engineering get invented in different disciplines, and how does its invention relate to engineering and theory?

Inspired by davidad's tweets: 1, 2, 3

It seems like common sense that a deeper (theoretical) understanding helps both engineering and safety engineering. Which of the two do you think theory helps more? And which development did more to grow theory research?

My intuition is that:

  1. First we started building things by trial and error, guided by empirical results.
  2. We formulated some safety best practices, but they were all heuristics derived from that trial and error.
  3. Then we started gaining theoretical understanding of what we were doing.
  4. Only then did we become able to advance "safety engineering".
  5. At the same time, we also got much better at building the thing itself - much better at engineering.

How well does this mesh with real life? In the case of bridges, safety engineering was invented separately, well after we understood how to build bridges - and well after we had built a lot of them. The pioneers of safety engineering often had formal math backgrounds. This seems to match the intuition above.

That said -

  • We did build a lot of bridges, and a lot of them failed, before safety engineering came about. And how much did the theories behind safety engineering help with bridge capability?
  • Did the field advance through novel theoretical work, or more through the application of existing theories?
  • Relatedly: did safety engineering require an entirely different set of theories that have little to do with bridge capability? (This seems obviously true to me: for example, environmental wear-and-tear and the process of metal rusting do not affect capability, but we need to understand them for safety.)
Comment by .CLI (Changbai Li) on .CLI's Shortform · 2024-09-05T21:43:04.266Z · LW · GW

is this part of the reason so many AI researchers think it's cool and enlightened to not believe in highly general architectures

I do hear the No Free Lunch theorem thrown around when an architecture fails to solve some problem its inductive bias doesn't fit. But I think it's just thrown around as a vibe.

Comment by .CLI (Changbai Li) on A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] · 2024-09-04T23:57:45.493Z · LW · GW

Love the post! http://pragmaticaisafety.com/ is down for me right now though. Do the authors still endorse this sequence?

Comment by .CLI (Changbai Li) on A Bird's Eye View of the ML Field [Pragmatic AI Safety #2] · 2024-09-04T23:56:10.513Z · LW · GW

Just spent one year in academia; my experience trying to talk to researchers about AGI matches what Dan wrote about.

Comment by .CLI (Changbai Li) on .CLI's Shortform · 2024-09-01T04:39:52.930Z · LW · GW

(ramblingly) Does the No Free Lunch theorem imply that there's no single technique that would always work for AGI alignment? My initial thought is probably not: the theorem only says that the performance of all optimization algorithms is identical when averaged over all possible problems. AGI alignment is a subset of those problems, so on that subset some algorithms can still outperform others.
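The reasoning above can be illustrated with a toy sketch (the problem family and optimizers here are my own hypothetical choices, not anything from the NFL theorem itself): once we average over only a restricted subset of problems - say, 1-D convex quadratics - rather than over all possible problems, the equal-performance conclusion no longer applies, and an optimizer that exploits the subset's structure can dominate one that doesn't.

```python
import random

def gradient_descent(c, steps=50, lr=0.1):
    # Minimize f(x) = (x - c)^2 starting from x = 0,
    # exploiting the structure of the problem family (smooth, convex).
    x = 0.0
    for _ in range(steps):
        x -= lr * 2 * (x - c)  # gradient of (x - c)^2 is 2(x - c)
    return (x - c) ** 2

def random_search(c, steps=50, lo=-10.0, hi=10.0):
    # Same evaluation budget, but ignores problem structure entirely.
    return min((random.uniform(lo, hi) - c) ** 2 for _ in range(steps))

random.seed(0)
# A *subset* of all possible problems: quadratics with varying minima.
minima = [-7.0, -2.5, 0.3, 4.0, 9.0]
avg_gd = sum(gradient_descent(c) for c in minima) / len(minima)
avg_rs = sum(random_search(c) for c in minima) / len(minima)

# On this restricted family, gradient descent reliably beats random search,
# even though NFL says they tie when averaged over ALL possible problems.
print(avg_gd < avg_rs)
```

The analogy to alignment would be: if "alignment problems" share enough structure, a single technique could in principle work across all of them without contradicting NFL.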

Comment by .CLI (Changbai Li) on Language and Capabilities: Testing LLM Mathematical Abilities Across Languages · 2024-07-29T06:28:48.826Z · LW · GW

GPT-4 can do math because it has learned particular patterns associated with tokens, including heuristics for certain digits, without fully learning the abstract generalized pattern.

This finding seems consistent with prior literature, such as this, which found that performance deteriorates rapidly when the multiplication task has an unseen computational graph. The keyword "shortcut learning" may also be worth checking out.

Comment by .CLI (Changbai Li) on The Best Tacit Knowledge Videos on Every Subject · 2024-07-14T19:42:45.491Z · LW · GW

Game Design

The videos under this category fit the label "game development" better. Game design is more focused on designing rules, mechanics, and sometimes narratives, rather than on programming.

Comment by .CLI (Changbai Li) on North Oakland: Projects, July 9th · 2024-07-05T16:43:47.446Z · LW · GW

Is the event happening on June 11th or July 9th?

Comment by .CLI (Changbai Li) on .CLI's Shortform · 2024-06-06T22:02:52.079Z · LW · GW

I think there should be more effort put into researching the limits of controllability for self-improving machines. That aspect of rapid self-improvement seems pretty important to me, since it's there regardless of which architecture we use to get to the singularity. If the singularity is dangerous no matter how we get there, or how aligned our first try is, then, [clears throat and raises sign] don't build AGI?

Comment by .CLI (Changbai Li) on More on the Apple Vision Pro · 2024-02-13T19:29:41.208Z · LW · GW

I bought the device and watched Interstellar on top of Mt. Hood with the stars as the background. It was a phenomenal experience. That said, having to bear the weight of the device for 2.5 hours, plus other limits such as FOV and lens glare, makes me hesitant to say movies are the one killer app right now. I don't think there is a killer app yet - Apple wants us to come up with that.

Comment by .CLI (Changbai Li) on Where are the red lines for AI? · 2022-08-06T10:32:17.676Z · LW · GW

The strategic awareness property would be an interesting one to measure. Which existing systems would you say are more or less strategically aware? Are there examples we could point to, like the social media algorithm one?