Posts

What is complexity science? (Not computational complexity theory) How useful is it? What areas is it related to? 2020-09-26T09:15:50.446Z
Classification of AI alignment research: deconfusion, "good enough" non-superintelligent AI alignment, superintelligent AI alignment 2020-07-14T22:48:04.929Z
Why take notes: what I get from notetaking and my desiderata for notetaking systems 2020-05-29T21:46:10.221Z
Is there software for goal factoring? 2020-02-18T19:55:37.764Z
Hard Problems in Cryptocurrency: Five Years Later - Buterin 2019-11-24T09:38:20.045Z
Philip's Shortform 2019-09-14T12:30:37.482Z
Reneging prosocially by Duncan Sabien 2019-06-18T18:52:46.501Z
How to determine if my sympathetic or my parasympathetic nervous system is currently dominant? 2019-05-31T20:40:30.664Z
AI Safety Prerequisites Course: Revamp and New Lessons 2019-02-03T21:04:16.213Z
Fundamentals of Formalisation Level 7: Equivalence Relations and Orderings 2018-08-10T15:12:46.683Z
Fundamentals of Formalisation Level 6: Turing Machines and the Halting Problem 2018-07-23T09:46:42.076Z
Fundamentals of Formalisation Level 5: Formal Proof 2018-07-09T20:55:04.617Z
Fundamentals of Formalisation Level 4: Formal Semantics Basics 2018-06-16T19:09:16.042Z
Fundamentals of Formalisation Level 3: Set Theoretic Relations and Enumerability 2018-06-09T19:57:20.878Z
Idea: OpenAI Gym environments where the AI is a part of the environment 2018-04-12T22:28:20.758Z

Comments

Comment by philip_b (crabman) on "The Solomonoff Prior is Malign" is a special case of a simpler argument · 2024-11-21T05:41:40.488Z · LW · GW

If you think you might be in a solipsist simulation, you could try adding some chaotic randomness to your decisions. For example, go outside under some trees and wait until a leaf or a seed hits the left half of your face; if it does, choose one course of action, and if it hits the other half, choose another. If you do this multiple times in your life, each of your decisions will depend on the state of the whole Earth and on all your previous decisions, since weather is chaotic. The simulators would then be unable to get good predictions about you from a solipsist simulation. A potential counterargument is that they could analyze your thinking and hardcode this binary random choice, i.e. hardcode the memory of the seed hitting your left side. But then an intelligent process would need to analyze your thinking to try to isolate the randomness, and you could make your strategy's dependence on randomness even more complicated.

Comment by philip_b (crabman) on "It's a 10% chance which I did 10 times, so it should be 100%" · 2024-11-18T04:50:09.934Z · LW · GW

Nice. I have a suggestion for how to improve the article: put a clearly stated theorem somewhere in the middle, in its own block, like in academic math articles.

Comment by philip_b (crabman) on Internal music player: phenomenology of earworms · 2024-11-16T05:16:45.697Z · LW · GW

Why do you hate earworms? To me, they are mildly pleasant. The only moments when I wish I didn't have an earworm are when I'm trying to remember another tune for musicianship purposes and the earworm prevents me from doing that.

Comment by philip_b (crabman) on Am I confused about the "malign universal prior" argument? · 2024-08-28T09:40:12.967Z · LW · GW

Instead of inspecting all programs in the UP, just inspect all programs with length less than n. As n grows, this covers more and more of the total probability mass in the UP, and the mass covered this way approaches 1. What to do about the non-halting programs? Well, just run all the programs for m steps, I guess. I think this is the approximation of the UP that is implied.
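As a sketch of what I mean (everything here is a toy of my own: `toy_run` stands in for a real universal machine, and the bitstrings aren't prefix-free, so the accumulated mass isn't bounded by 1):

```python
from itertools import product

def toy_run(program, max_steps):
    # Toy stand-in for a universal machine: read the bitstring as a binary
    # number k and "halt" after k steps iff k <= max_steps.
    k = int("".join(map(str, program)), 2)
    return k if k <= max_steps else None  # None = no halt within the budget

def approx_up_mass(n, m):
    # Sum 2^(-len(p)) over all programs of length <= n halting within m steps.
    mass = 0.0
    for length in range(1, n + 1):
        for program in product((0, 1), repeat=length):
            if toy_run(program, m) is not None:
                mass += 2.0 ** (-length)
    return mass
```

Increasing n and m together makes the covered mass grow monotonically toward the total halting mass of the machine.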

Comment by philip_b (crabman) on Quick look: applications of chaos theory · 2024-08-19T14:34:01.269Z · LW · GW

Well, now I'm wondering - is neural network training chaotic?
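One crude way to probe this empirically is to run two training trajectories from nearly identical initializations and watch whether their distance grows exponentially (chaos) or shrinks. A sketch under toy assumptions of my own (least squares, which is convex, so here we expect contraction; the point is the measurement, not the answer):

```python
import numpy as np

def divergence_curve(lr=0.1, steps=50, eps=1e-6, seed=0):
    # Finite-difference Lyapunov probe: evolve two weight vectors that start
    # eps apart under identical gradient descent and record their distance.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(32, 4))
    y = rng.normal(size=32)
    w1 = rng.normal(size=4)
    w2 = w1 + eps * rng.normal(size=4)  # perturbed twin run

    def step(w):
        grad = X.T @ (X @ w - y) / len(y)
        return w - lr * grad

    gaps = []
    for _ in range(steps):
        w1, w2 = step(w1), step(w2)
        gaps.append(np.linalg.norm(w1 - w2))
    return gaps
```

For this convex toy the gap shrinks geometrically; a chaotic training process would instead show the gap growing exponentially, and the growth rate would estimate the largest Lyapunov exponent.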

Comment by philip_b (crabman) on Quick look: applications of chaos theory · 2024-08-19T14:30:20.139Z · LW · GW

This is awesome, I would love more posts like this. Out of curiosity, how many hours have you and your colleague spent on this research?

Comment by philip_b (crabman) on Exposure can’t rule out disasters · 2024-08-16T10:31:18.164Z · LW · GW

In my personal experience, exposure therapy did help me with the fear of such "extreme" risks.

Comment by philip_b (crabman) on Rabin's Paradox · 2024-08-14T21:41:37.108Z · LW · GW

At the very beginning of the post, I read: "Quick psychology experiment". Then I read: "Right now, if I offered you a bet ...". Because of this, I thought about a potential real-life situation (the author actually offering me this bet), not a platonic ideal situation. I declined both bets. Not because they are bad bets in an abstract world, but because I don't trust the author in the first bet and I trust them even less in the second bet.

If you rejected the first bet and accepted the second bet, just that is enough to rule you out from having any utility function consistent with your decisions.

Under this interpretation, no, it doesn't.

Could you, the author, please modify the thought experiment to indicate that it is assumed that I completely trust the one who is proposing the bet to me? And, maybe discuss other caveats too. Or just say that it's Omega who's offering me the bet.

Comment by philip_b (crabman) on We’re not as 3-Dimensional as We Think · 2024-08-04T18:29:45.169Z · LW · GW

So you say humans don't reason about the space and objects around them by keeping 3d representations. You think that instead the human brain collects a bunch of heuristics for what the response should be to a 2d projection of 3d space, given different angles: an incomprehensible mishmash of neurons, like an artificial neural network that identifies a digit from an image without any CNN layers, just memorizing rules for every type of picture at every angle the way a fully connected layer would.

Comment by philip_b (crabman) on Viliam's Shortform · 2024-07-27T18:12:01.216Z · LW · GW

I guess I was not clear enough. In your original post, you wrote "On one hand, there are countably many definitions ..." and "On the other hand, Cantor's diagonal argument applies here, too. ...". So, you talked about two statements - "On one hand, (1)", "On the other hand, (2)". I would expect that when someone says "On one hand, ..., but on the other hand, ...", what they say in those ellipses should contradict each other. So, in my previous comment, I just wanted to point out that (2) does not contradict (1), because countable infinity + 1 is still countable infinity.

take all the iterations you need, even infinitely many of them

Could you clarify how I would construct that?

For example, what is the "next cardinality" after countable?

I didn't say "the next cardinality". I said "a higher cardinality".

Comment by philip_b (crabman) on Viliam's Shortform · 2024-07-22T11:06:55.569Z · LW · GW

Ok, so let's say you've been able to find countably infinitely many real numbers and you now call them "definable". You apply Cantor's diagonal argument to generate one more number that's not in this set (and you go from the language to the metalanguage when doing this). Countably infinite + 1 is still only countably infinite. How would you get to a higher cardinality of "definable" objects? I don't see an easy way.
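To be concrete about the "+1" step, here is a finite sketch of the diagonal construction (purely illustrative, my own toy): one application produces a single new sequence that escapes the listed rows.

```python
def diagonal_escape(table):
    # Given the first len(table) rows of an enumeration of digit sequences,
    # construct a sequence that differs from row i in position i.
    return [(table[i][i] + 1) % 10 for i in range(len(table))]

rows = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
d = diagonal_escape(rows)  # differs from every listed row in at least one place
```

Adding `d` back into the enumeration gives a countable set again, so each round of diagonalization only ever yields another countable set.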

Comment by philip_b (crabman) on How do natural sciences prove causation? · 2024-06-28T12:09:14.944Z · LW · GW

To check if A causes B, you can check what happens when you intervene and modify A, and also what happens when you intervene and modify B. That's not always possible though. You can consult "Causality: Models, Reasoning, and Inference" by Pearl for more details.
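To illustrate the asymmetry (a toy structural model of my own, not an example from Pearl's book): intervening on A shifts B's distribution, while intervening on B leaves A's distribution unchanged, and that asymmetry is what identifies A as the cause.

```python
import random

def sample(do_a=None, do_b=None):
    # Structural causal model: A := fair coin; B := A xor 10% noise.
    # A do()-style intervention replaces a variable's mechanism with a constant.
    a = (random.random() < 0.5) if do_a is None else do_a
    b = (a ^ (random.random() < 0.1)) if do_b is None else do_b
    return a, b

def rate(xs):
    return sum(xs) / len(xs)

random.seed(0)
p_b_do_a = rate([sample(do_a=True)[1] for _ in range(10_000)])  # ~0.9: B follows A
p_a_do_b = rate([sample(do_b=True)[0] for _ in range(10_000)])  # ~0.5: A is unmoved
```

Observationally A and B are symmetric (each predicts the other); only the interventions reveal the direction of the arrow.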

Comment by philip_b (crabman) on On Claude 3.5 Sonnet · 2024-06-24T13:02:44.747Z · LW · GW

They commit to not using your data to train their models without explicit permission.

I've just registered on their website because of this article. During registration, I was told that conversations flagged by their automated system, which monitors whether you are following their terms of use, are regularly reviewed by humans and used to train their models.

Comment by philip_b (crabman) on Do you believe in hundred dollar bills lying on the ground? Consider humming · 2024-05-24T07:34:23.576Z · LW · GW

When learning to sing, humming is used to extend your range higher. Not sure if it's used to extend it lower.

Comment by philip_b (crabman) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-19T10:23:37.680Z · LW · GW

Replied in PM.

Comment by philip_b (crabman) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-19T05:27:48.131Z · LW · GW

I would like to recommend to Johannes that he try to write and post content in a way that evokes less cringe in people. I know it does evoke cringe, because I personally feel it.

Still, I think that there isn’t much objectively bad about this post. I’m not saying the post is very good or convincing. I think its style is super weird but that should be considered to be okay in this community. These thoughts remind me of something Scott Alexander once wrote - that sometimes he hears someone say true but low status things - and his automatic thoughts are about how the person must be stupid to say something like that, and he has to consciously remind himself that what was said is actually true.

Also, all these thoughts about this social reality sadden me a little - why oh why is AI safety such a status-concerned and “serious business” area nowadays?

Comment by philip_b (crabman) on Why you should learn a musical instrument · 2024-05-16T08:53:24.062Z · LW · GW

I've been learning to play diatonic harmonica for the last 2 years. This is my first instrument and I can confirm that learning an instrument (and music theory) is a lot of fun and it has also taught me some new things about how to learn things in general.

Comment by philip_b (crabman) on Do you believe in hundred dollar bills lying on the ground? Consider humming · 2024-05-16T08:49:15.378Z · LW · GW

I hum all the time anyway.

Comment by philip_b (crabman) on Dyslucksia · 2024-05-10T05:49:30.164Z · LW · GW

Unless I don’t recognize the sounds. It’s like asking me to beatbox the last 5 seconds of the gurgling of a nearby river. How the fudge would I do that?

Wait, are there people who can do that?

I think that's pretty easy :)

Comment by philip_b (crabman) on The power of finite and the weakness of infinite binary point numbers · 2024-04-20T08:55:30.177Z · LW · GW

Please go, study math fundamentals properly, and then come back. What you wrote doesn't make much sense.

Comment by crabman on [deleted post] 2024-01-22T13:26:37.628Z

I think this last edit is bad.

Comment by philip_b (crabman) on johnswentworth's Shortform · 2023-12-27T15:33:47.267Z · LW · GW

Is there any "native" textbook that is pragmatic and explains how to use Bayesian methods in practice (perhaps in some narrow domain)?

Comment by philip_b (crabman) on Wireheading and misalignment by composition on NetHack · 2023-10-29T13:46:03.278Z · LW · GW

Did the model randomly stumble upon this strategy? Or was there an idea pitched by the language model, something like "hey, what if we try to hallucinate and maybe we can hack the game that way"?

Comment by philip_b (crabman) on PortAudio M1 Latency · 2023-10-12T17:20:46.731Z · LW · GW

Are you able to play sounds using other programs (e.g. open a YouTube video in the background) while getting great latency in reaper or in something similar to reaper?

Comment by philip_b (crabman) on PortAudio M1 Latency · 2023-10-12T05:47:57.756Z · LW · GW

I've been thinking of buying an M1 MacBook because everyone says that Apple's sound system is great and works out of the box correctly with low latency and no problems, unlike Windows+Wasapi, Windows+ASIO, and Linux. I want to use it for music stuff without an external audio interface. How true is this and would you recommend it?

Comment by philip_b (crabman) on luciaquirke's Shortform · 2023-10-12T05:45:00.562Z · LW · GW

You say Vast.AI is the "most reliable provider". In my experience, it's an unreliable mess, with sometimes buggy, improperly working servers and a non-existent support service. I would say the same about runpod.io. On the other hand, LambdaLabs has been very reliable in my experience and has much better UX. The main problem with LambdaLabs is that nowadays it pretty often has no available servers.

Comment by philip_b (crabman) on Revisiting the Manifold Hypothesis · 2023-10-02T14:35:26.336Z · LW · GW

This sounds similar to whether a contemporary machine learning model can break a cryptographic cipher, a hash function, or something like that.

Comment by philip_b (crabman) on The Lightcone Theorem: A Better Foundation For Natural Abstraction? · 2023-05-15T04:39:15.631Z · LW · GW

Can you formulate the theorem statement in the precise, self-contained way usually used in textbooks and papers, so that a reader can understand it just by reading it and looking up the definitions used?

Comment by philip_b (crabman) on [Linkpost] GatesNotes: The Age of AI has begun · 2023-03-22T08:37:29.478Z · LW · GW

I have a kinda-unrelated question. Does Bill Gates write GatesNotes completely by himself, just because he wants to? Or is it a marketing/PR thing written by other people? If it's the former, I want to read it. If it's the latter, I don't.

Comment by philip_b (crabman) on Johannes C. Mayer's Shortform · 2022-12-15T16:16:27.906Z · LW · GW

Do you mean "What do you want me to do" in the tone of voice that means "There's nothing to do here, bugger off"? Or do you mean "What do you want me to do?" in the tone of voice that means "I'm ready to help with this. What should I do to remedy the problem?"?

Comment by philip_b (crabman) on Basic building blocks of dependent type theory · 2022-12-15T15:58:11.234Z · LW · GW

I have recently read The Little Typer by Friedman and Christiansen. I suspect that this book can serve as an introduction similarly to this (planned, so far) sequence of posts. However, the book is not concise at all.

Comment by philip_b (crabman) on Jailbreaking ChatGPT on Release Day · 2022-12-02T15:36:33.695Z · LW · GW

Are those instructions for making a Molotov cocktail and for hotwiring a car real? They look like something someone who's only seen it done in movies would do. Same question for methamphetamine, except that recipe looks more plausible.

Comment by philip_b (crabman) on Refactoring Myself: 4 Years Later · 2022-11-20T22:01:39.568Z · LW · GW

Thanks for writing this update! I think your English skills have improved a lot.

Comment by philip_b (crabman) on Help Me Refactor Myself I am Lost · 2022-11-15T19:39:20.023Z · LW · GW

I've just read your previous two posts. I, too, will be interested to read another post of yours.

Comment by philip_b (crabman) on The Alignment Community Is Culturally Broken · 2022-11-13T22:37:00.244Z · LW · GW

I am (was) an X% researcher, where X < Y. I wish I had given up on AI safety earlier; I suspect it would've been better for me if AI safety resources explicitly said things like "if you're less than Y, don't even try", although I'm not sure I would've believed them. Now I'm glad that I'm no longer trying to do AI safety, and instead I just work at a well-paying, relaxed job doing practical machine learning. So I think pushing too many EAs into AI safety will make those EAs suffer much more, as happened to me. I don't want that to happen, and I want the AI alignment community to keep saying "you should stay if and only if you're better than Y".

Actually, I wish there were more selfishly-oriented resources for AI alignment. With normal universities and jobs, people analyze how to get into them, have a fulfilling career, earn good money, not burn out, etc. As a result, people can read all this and properly analyze whether it makes sense for them to try to get into those jobs or universities for their own good. But with a career in AI safety, this is not the case: all the resources look out not only for the reader, but also for the whole EA project. I think this can easily burn people out.

Comment by philip_b (crabman) on Consider Taking Zinc Every Time You Travel · 2022-11-13T14:08:28.787Z · LW · GW

I still take these zinc lozenges when I suspect that I might be coming down with a common cold. I feel like they help me somewhat. Maybe my colds have been shorter since I started taking zinc, but I'm not sure; I haven't been tracking any data explicitly. I guess I'll keep taking zinc for the common cold as long as I don't get further evidence that it doesn't work.

Comment by philip_b (crabman) on The Slippery Slope from DALLE-2 to Deepfake Anarchy · 2022-11-05T15:04:36.714Z · LW · GW

Comment by philip_b (crabman) on Writing Russian and Ukrainian words in Latin script · 2022-10-25T21:47:07.375Z · LW · GW

Perhaps you can just use the international phonetic alphabet?

Comment by philip_b (crabman) on Baby Monitor with Delay · 2022-10-03T16:36:05.313Z · LW · GW

I don't know how to square that with the idea that one shouldn't ignore their crying kids. I have no idea how kids' crying at night works. Is it possible that a parent should just suck it up and come comfort the baby every time she cries? Maybe you can comfort her when she's crying but not give her the reward of soothing her until she falls asleep? Is it possible that she cries at night because she doesn't get enough cuddles during the day, or because the room looks scary, or something like that? I don't know enough about the situation; I don't have any kids of my own or any practical experience dealing with them. Maybe you can be there with her in her room when she cries but still make it so that she learns to self-soothe and put herself to sleep? Like, idk, stay with her but don't rock her to sleep, or something like that.

Comment by philip_b (crabman) on Baby Monitor with Delay · 2022-10-03T15:22:02.566Z · LW · GW

Ok, I don't know more than that about addressing children's crying. I just thought that ignoring it is (almost always?) bad but I'm not sure.

Comment by philip_b (crabman) on Baby Monitor with Delay · 2022-10-03T14:54:12.217Z · LW · GW

I'm not sure how to read this; where are you on the continuum from "I heard it's bad" to "I read all the papers and came to a deep considered view"?

I also thought so when I read your post. I'm at the "The book 'The Boy Who Was Raised as a Dog' says so" point. The book is not about sleep in particular, it's about psychological trauma in childhood, especially the one obtained from neglect.

Also, I think this might cause the child to develop an avoidant attachment style (there's no point in crying or asking others for help; they won't come anyway).

Comment by philip_b (crabman) on How do I find tutors for obscure skills/subjects (i.e. fermi estimation tutors) · 2022-09-16T11:35:31.842Z · LW · GW

I also don't know how to find tutors for narrow subjects. For instance, I would like a little bit of tutoring about

  • panoptic segmentation
  • dependent types

but I don't know how to find one.

Comment by philip_b (crabman) on Encultured AI Pre-planning, Part 2: Providing a Service · 2022-08-20T22:23:10.078Z · LW · GW

The link to the next post in this post is broken.

Comment by philip_b (crabman) on Announcing Encultured AI: Building a Video Game · 2022-08-20T22:14:50.840Z · LW · GW

Is this the beginning of Friendship is Optimal?

Comment by philip_b (crabman) on Dwarves & D.Sci: Data Fortress · 2022-08-07T13:12:40.387Z · LW · GW

What role do I, the data scientist dwarf, have?

Comment by philip_b (crabman) on Unifying Bargaining Notions (1/2) · 2022-07-26T00:26:43.662Z · LW · GW

In the first part, the two respective properties of the two definitions of chaaness you mentioned apply after rescaling and shifting of utility functions is done, right? I.e., the properties actually say "after rescaling and shifting the points, if you move the Pareto-frontier points for a player up, they should get more utility" and "untaken options are irrelevant if you don't change the scale after removing them". Now, I don't see why these properties are interesting and what they correspond to in real life. In contrast, if they applied before rescaling and shifting, then they would be quite interesting. So, can you please elaborate why they are interesting as they are and what they actually mean as they are?

Comment by philip_b (crabman) on Carrying the Torch: A Response to Anna Salamon by the Guild of the Rose · 2022-07-06T17:24:22.329Z · LW · GW

I just want to say that your described solution to "Problem 1: Differentiating effective interventions from unfalsifiable woo" suggests to me that your curriculum would be mostly useless for me, and maybe for many other people as well, because it won't go deep enough. I think either I've already gotten everything I can get from shallow interventions "like better nutrition, using your speaking voice more effectively, improving your personal financial organization, emergency preparedness, and implementing a knowledge management system", or they were never that good in the first place. Personally, I am focusing on psychotherapy right now. It's unfortunate that it consists mostly of borderline-unfalsifiable woo but that's all we've got.

Comment by philip_b (crabman) on D&D.Sci June 2022: A Goddess Tried To Reincarnate Me Into A Fantasy World, But I Insisted On Using Data Science To Select An Optimal Combination Of Cheat Skills! · 2022-06-04T11:58:55.083Z · LW · GW

My solution:

I choose Radiant Splendor and Enlightenment, simply because out of all champions with a personality like mine, it had the highest win frequency. And it even has a solid number of samples: 244. Basically, I narrowed the dataset down to only the rows with the same personality as mine. Perhaps I could get some more info from the other rows, but that would require spending more time.

Comment by philip_b (crabman) on D&D.Sci June 2022: A Goddess Tried To Reincarnate Me Into A Fantasy World, But I Insisted On Using Data Science To Select An Optimal Combination Of Cheat Skills! · 2022-06-04T11:03:57.620Z · LW · GW

Does the order of the two skills matter? Of course, I can check this from data, but perhaps you'd be willing to just answer this straight away so that I won't have to.

Comment by philip_b (crabman) on Another Calming Example · 2022-06-04T10:36:17.989Z · LW · GW

I am glad to hear that.