Posts

How do health systems work in adequate worlds? 2024-02-09T00:54:38.443Z
Questions about Solomonoff induction 2024-01-10T01:16:58.595Z
Chomsky on ChatGPT (link) 2023-03-09T07:00:09.935Z
(Link) I'm Missing a Chunk of My Brain 2022-09-05T02:10:04.912Z
Is population collapse due to low birth rates a problem? 2022-08-26T15:28:24.475Z
Is there any evidence that handwashing does anything to prevent COVID? 2022-07-25T07:34:31.211Z
ETH is probably undervalued right now 2022-06-19T02:20:15.752Z
Why I don't believe in doom 2022-06-07T23:49:17.551Z
We will be around in 30 years 2022-06-07T03:47:22.375Z
How does the world look like 10 years after we have deployed an aligned AGI? 2022-04-19T11:34:35.951Z
Can you help me find this book? 2022-03-02T10:19:11.649Z
Hedging omicron impact to supply chains 2022-01-17T18:49:16.684Z
A common misconception about the evolution of viruses 2022-01-06T10:00:36.706Z
Thank you, Queensland 2022-01-05T14:21:21.501Z
How the response to a pandemic affecting mainly children would look like? 2022-01-05T10:15:49.223Z
What can we learn from traditional societies? 2021-10-05T00:24:35.098Z
[Book review] Who we are and how we got here 2021-09-30T04:26:56.053Z
Beware of small world puzzles 2021-08-30T06:00:55.054Z
What is the name of this fallacy? 2021-08-11T02:10:43.894Z
How to make more people interested in rationality? 2021-07-11T02:30:15.457Z
Can Bitcoin transition from PoW to PoS? 2021-05-14T03:43:10.925Z
Does anyone know this webpage about Machine Learning? 2021-04-16T01:03:43.804Z

Comments

Comment by mukashi (adrian-arellano-davin) on Questions about Solomonoff induction · 2024-01-11T03:24:15.776Z · LW · GW

Thank you for the comprehensive answer and for correcting the points where I wasn't clear. Also, thank you for pointing out that the Kolmogorov complexity of a string is the length of the shortest program that outputs that string.

The complexity of the algorithms was totally arbitrary and for the sake of the example.

I still have some doubts, but everything is more clear now (see my answer to Charlie Steiner also)

Comment by mukashi (adrian-arellano-davin) on Questions about Solomonoff induction · 2024-01-11T03:20:01.293Z · LW · GW

I think that re-reading your answer made something click, so thanks for that.

The observed data is not **random**, because randomness is not a property of the data itself.
The hypotheses that we want to evaluate are not random either, because we are analysing Turing machines that generate those data deterministically.

If the data is HTHTHT, we do not test a Python script doing:

import random
random.choices(["H", "T"], k=6)

What we test instead is something more like

["H"] + ["T"] + ["H"] + ["T"] + ["H"] + ["T"]

And

["HT"]*3

In this case, this last script will be simpler and, for that reason, will receive a higher prior.

If we apply this in a Bayesian setting, the likelihood of all these hypotheses is necessarily 1, so the posterior probability just becomes the prior (divided by a normalising factor), which decreases exponentially with the length of the program. This makes sense because it agrees with Occam's razor.
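To make that concrete, here is a rough sketch (my own, not from the thread) where source length in characters stands in for Kolmogorov complexity, which is uncomputable. Both candidate programs reproduce HTHTHT exactly, so their likelihood is 1 and the posterior is just the normalised prior:

```python
# Two candidate programs (as source strings) that both output HTHTHT.
# Character count is a crude stand-in for Kolmogorov complexity;
# the true quantity is uncomputable.
programs = {
    "verbose": '["H"]+["T"]+["H"]+["T"]+["H"]+["T"]',
    "compact": '["HT"]*3',
}

# Likelihood is 1 for both (they reproduce the data exactly), so the
# unnormalised posterior is just the prior 2^-length.
posterior = {name: 2.0 ** -len(src) for name, src in programs.items()}

# Normalise over the hypotheses under consideration.
total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}

print(posterior)
```

The shorter program ends up with almost all of the posterior mass, which is the Occam's razor behaviour described above.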

The thing I still struggle to see is how I connect this framework with the probabilistic hypotheses that I want to test, such as "the data was generated by a fair coin". One possibility that I see (but I am not sure this is the correct thing) is testing all the possible strings generated by an algorithm like this:

import random

i = 0
while True:
    random.seed(i)
    print(random.choices(["H", "T"], k=6))
    i += 1

The likelihood of strings like HHTHTH is 0, so we remove them, and then we are left only with the algorithms that are consistent with the data.
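A minimal sketch of that filtering step (my own illustration, with an arbitrary cap of 1000 seeds since the real enumeration never terminates):

```python
import random

observed = ["H", "T", "H", "T", "H", "T"]

# Run the seeded "random" program for many seeds and keep only the
# seeds whose output reproduces the observed data; every other seed
# gives the data likelihood 0 and is discarded.
consistent = []
for seed in range(1000):  # arbitrary finite cap, just for illustration
    random.seed(seed)
    if random.choices(["H", "T"], k=6) == observed:
        consistent.append(seed)

print(f"{len(consistent)} of 1000 seeds reproduce the observed data")
```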

I'm not totally sure about the last part.

Comment by mukashi (adrian-arellano-davin) on Questions about Solomonoff induction · 2024-01-10T06:21:25.348Z · LW · GW

The part I understood is that you weight the programs based on their length in bits: the longer the program, the less weight it has. This makes total sense.

I am not sure that I understand the prefix thing, and I think that's relevant. For example, it is not clear to me whether, once I consider a program that outputs 0101, I will simply ignore other programs that output that same thing plus one bit (e.g. 01010).

I also still find fuzzy (and now at least I can put my finger on it) the part where Solomonoff induction is extended to deal with randomness.

Let me see if I can make my question more specific:

Let's imagine for a second that we live in a universe where only the following programs could be written:

  • A) A program that deterministically produces a given sequence of 5 digits (there are 2^5 of these programs)
  • B) A program that deterministically produces a given sequence of 6 digits (there are 2^6 of them)
  • C) A program that produces 5 random coin flips with p=0.5

The programs in A have 5 bits of Kolmogorov complexity each. The programs in B have 6 bits. Program C has 4 bits.

We observe the sequence O = HTHHT

I measure the likelihood for each possible model. I discard the models with L = 0

A) There is a model here with likelihood 1

B) There are 2 models here, each of them with likelihood 1 too

C) This model has likelihood 2^-5

Then, things get murky:

Will the priors for each model be 2^-5 for model A, 2^-6 for model B, and 2^-4 for model C, according to their Kolmogorov complexity?
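Plugging those numbers into Bayes' rule gives (a sketch under the assumption that the priors really are 2^-K, with one representative program per class):

```python
# Toy universe from the comment above: priors set to 2^-K from each
# program's Kolmogorov complexity, likelihoods of the observed
# sequence O = HTHHT under each model.
models = {
    "A (matching 5-digit program)": {"prior": 2 ** -5, "likelihood": 1.0},
    "B (matching 6-digit program)": {"prior": 2 ** -6, "likelihood": 1.0},
    "C (5 fair coin flips)":        {"prior": 2 ** -4, "likelihood": 2 ** -5},
}

# Bayes: posterior is prior times likelihood, then normalised.
unnorm = {m: v["prior"] * v["likelihood"] for m, v in models.items()}
total = sum(unnorm.values())
posterior = {m: p / total for m, p in unnorm.items()}

for m, p in posterior.items():
    print(f"{m}: {p:.3f}")
```

Model A ends up with posterior 0.64, B with 0.32, and C with only 0.04: the deterministic 5-digit program wins, and the "fair coin" model is heavily penalised by its 2^-5 likelihood despite its shorter description.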

Comment by mukashi (adrian-arellano-davin) on Questions about Solomonoff induction · 2024-01-10T05:26:09.843Z · LW · GW

Yes, this is something I can see easily, but I am not sure how Solomonoff induction accounts for that 

Comment by mukashi (adrian-arellano-davin) on Questions about Solomonoff induction · 2024-01-10T05:23:29.600Z · LW · GW

I think this is pointing to what I don't understand: how do you account for hypotheses that explain data generated randomly? How do you compare a hypothesis which is a random number generator with some parameters against a hypothesis which has some deterministic component?

Is there a way to understand this without reading the original paper (which will probably take me quite a long time)?

When you came to understand this, what was the personal process that took you from knowing about probabilities and likelihoods to understanding Solomonoff induction? Did you have to read the original sources, or did you find good explanations somewhere?

I also don't get whether this is a calculation you can do in a single step or whether it is a continuous thing. In other words, would Solomonoff induction work only if we assume that we keep observing new data?

Sorry for the stupid questions, as you can see, I am confused.

Comment by mukashi (adrian-arellano-davin) on Compensating for Life Biases · 2024-01-10T01:27:25.678Z · LW · GW

I have followed a similar strategy using Anki cards. However, I think that allocating a specific time slot to review your principles and then "act" on them is probably much more effective than passively reminding yourself of those principles. I will adopt this.

Comment by mukashi (adrian-arellano-davin) on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-18T09:20:30.419Z · LW · GW

What happens if there is more than one powerful agent just playing the charade game? Is there any good article about what happens in a universe where multiple AGIs are competing among themselves? I normally find only texts that assume that once we get AGI we all die, so there is no room for these scenarios.

Comment by mukashi (adrian-arellano-davin) on "Humanity vs. AGI" Will Never Look Like "Humanity vs. AGI" to Humanity · 2023-12-17T01:23:29.563Z · LW · GW

I have been (and I am not the only one) very put off by the trend over the last months and years of doomerism pervading LW, with things like "we have to get AGI right on the first try or we all die" repeated constantly as dogma.

To someone who is very skeptical of the classical doomer position (i.e., that AGI will build nanofactories and kill everyone at once), this post is very persuasive and compelling. This is something I could see happening. This post serves as an excellent example for those seeking effective ways to convince skeptics.
 

Comment by mukashi (adrian-arellano-davin) on The Handbook of Rationality (2021, MIT press) is now open access · 2023-10-10T11:24:33.754Z · LW · GW

Thank you so much for sharing, it looks great

Comment by mukashi (adrian-arellano-davin) on Memory bandwidth constraints imply economies of scale in AI inference · 2023-09-18T04:54:45.715Z · LW · GW

This thing?
https://www.scientificamerican.com/article/what-is-the-memory-capacity/
 

Comment by mukashi (adrian-arellano-davin) on Memory bandwidth constraints imply economies of scale in AI inference · 2023-09-18T01:03:13.671Z · LW · GW

Many of the calculations of the brain's capacity are based on wrong assumptions. Is there an original source for that 2.5 PB figure? This video is very relevant to the topic if you have some time to check it out:

 

Comment by mukashi (adrian-arellano-davin) on Introducing Fatebook: the fastest way to make and track predictions · 2023-07-31T13:43:11.580Z · LW · GW

Thanks so much🙏

Comment by mukashi (adrian-arellano-davin) on Introducing Fatebook: the fastest way to make and track predictions · 2023-07-12T11:33:50.707Z · LW · GW

Same as I would do in Slack! I simply have some work groups on Discord; that's why.

Comment by mukashi (adrian-arellano-davin) on Introducing Fatebook: the fastest way to make and track predictions · 2023-07-12T02:24:46.382Z · LW · GW

Is this available for Discord?

Comment by mukashi (adrian-arellano-davin) on Introducing bayescalc.io · 2023-07-08T00:40:14.161Z · LW · GW

Great! Can you make it so that, if I input P for hypothesis A, 1 - P appears automatically for hypothesis B?

Comment by mukashi (adrian-arellano-davin) on 60+ Possible Futures · 2023-06-27T11:24:58.907Z · LW · GW

This should be curated. Just reading this list is a good exercise for those people who attribute a very high probability to a single possible scenario.

Comment by mukashi (adrian-arellano-davin) on We Are Less Wrong than E. T. Jaynes on Loss Functions in Human Society · 2023-06-05T13:16:19.544Z · LW · GW

I don't see why Jaynes is wrong. I guess it depends on the interpretation? If two humans are chasing the same thing and there is a limited amount of it, of course they are in conflict with each other. Isn't that what Jaynes is pointing at?

Comment by mukashi (adrian-arellano-davin) on The challenge of articulating tacit knowledge · 2023-06-01T07:52:06.683Z · LW · GW

Good post, I hope to read more from you

Comment by mukashi (adrian-arellano-davin) on The Crux List · 2023-06-01T03:32:32.302Z · LW · GW

Yeah, sorry about that. I didn't put much effort into my last comment.

Defining intelligence is tricky, but to paraphrase EY, it's probably wise not to get too specific since we don't fully understand intelligence yet. In the past, people didn't really know what fire was. Some would just point to it and say, "Hey, it's that shiny thing that burns you." Others would invent complex, intellectual-sounding theories about phlogiston, which were entirely off base. Similarly, I don't think the discussion about AGI and doom scenarios gets much benefit from a super precise definition of intelligence. A broad definition that most people agree on should be enough, like "Intelligence is the capacity to create models of the world and use them to think."

But I do think we should aim for a clearer definition of AGI (yes, I realize 'intelligence' is part of the acronym). What I mean is, we could have a vaguer definition of intelligence, but AGI should be better defined. I've noticed different uses of 'AGI' here on LessWrong. One definition is a machine that can reason about a wide variety of problems (some of which may be new to it) and learn new things. Under this definition, GPT-4 is pretty much an AGI. Another common definition on this forum is that an AGI is a machine capable of wiping out all humans. I believe we need to separate these two definitions, as that's really where the core of the crux lies.

Comment by mukashi (adrian-arellano-davin) on The Crux List · 2023-06-01T00:27:14.907Z · LW · GW

What is an AGI? I have seen a lot of "no true Scotsman" around this one.

Comment by mukashi (adrian-arellano-davin) on The bullseye framework: My case against AI doom · 2023-05-31T04:25:24.027Z · LW · GW

I guess the crux here for most people is the timescale. I agree actually that things can get eventually very bad if there is no progress in alignment etc, but the situation is totally different if we have 50 or 70 years to work on that problem or, as Yudkowsky keeps repeating, we don't have that much time because AGI will kill us all as soon as it appears.

Comment by mukashi (adrian-arellano-davin) on The bullseye framework: My case against AI doom · 2023-05-30T14:48:53.450Z · LW · GW

The standard argument you will probably hear is that AGI will be capable of killing everyone because it can think so much faster than humans. I haven't yet seen doomers engage seriously with the argument about capabilities. I agree with everything you said here, and to me these arguments are obviously right.

Comment by mukashi (adrian-arellano-davin) on Book Review: How Minds Change · 2023-05-27T02:41:54.378Z · LW · GW

Any source you would recommend to know more about the specific practices of Mormons you are referring to?

Comment by mukashi (adrian-arellano-davin) on Where do you lie on two axes of world manipulability? · 2023-05-26T12:30:50.388Z · LW · GW

The Babbage example is the perfect one. Thank you, I will use it

Comment by mukashi (adrian-arellano-davin) on Where do you lie on two axes of world manipulability? · 2023-05-26T12:21:44.471Z · LW · GW

This would clearly put my point in a different place from the doomers

Comment by mukashi (adrian-arellano-davin) on Where do you lie on two axes of world manipulability? · 2023-05-26T07:22:47.161Z · LW · GW

I would place myself also in the right upper quadrant, close to the doomers, but I am not one of them. 

The reason is that the exact meaning of "tractable for an SI" is not very clear to me. I do think that nanotechnology/biotechnology can progress enormously with SI, but the problem is not only developing the required knowledge; it is also creating the economic conditions to make these technologies possible, building the factories, making new machines, etc. For example, nowadays, in spite of the massive worldwide demand for microchips, there are very, very few factories (and for some specific technologies the number of factories is n=1). Will we get there eventually? Yes. But not at the speed that EY fears.

I think you summarised pretty well my position in this paragraph:

"I think another common view on LW is that many things are probably possible in principle, but would require potentially large amounts of time, data, resources, etc. to accomplish, which might make some tasks intractable, if not impossible, even for a superintelligence. "

So I do think that EY believes in "magic" (even more so after reading his tweet), but some people might not like the term, and I understand that.

In my case, using the word magic does not refer only to breaking the laws of physics. Magic might also refer to someone who holds such a simplified model of the world that they think you can build, in a matter of days, all those factories, machines and working nanotechnology (on the first try), then successfully deploy them everywhere killing everyone, and that we will get to that point in a matter of days AND that there won't be any other SI that could work to prevent those scenarios. I don't think I am misrepresenting EY's point of view here; correct me otherwise.

If someone believed that a good group of engineers working for one week on a spacecraft model could successfully land it 30 years later on an asteroid close to Proxima Centauri, would you call that magical thinking? I would. There is nothing beyond the realm of physics here! But it assumes so many things and is so stupidly optimistic that I would simply dismiss it as nonsense.

Comment by mukashi (adrian-arellano-davin) on The way AGI wins could look very stupid · 2023-05-13T12:27:46.490Z · LW · GW

I agree with this take, but do those plans exist, even in theory?

Comment by mukashi (adrian-arellano-davin) on Fatebook for Slack: Track your forecasts, right where your team works · 2023-05-12T02:24:40.182Z · LW · GW

This is fantastic. Is there anything remotely like this available for Discord?

Comment by mukashi (adrian-arellano-davin) on A more grounded idea of AI risk · 2023-05-11T10:21:49.147Z · LW · GW

I don't see how that implies that everyone dies.

It's like saying that weapons are dangerous: imagine what would happen if they fell into the wrong hands. Well, it does happen, and sometimes that has bad consequences, but there is no logical connection between that and everyone dying, which is what doom means. Do you want to argue that LLMs are dangerous? Fine. No problem with that. But doom is not that.

Comment by mukashi (adrian-arellano-davin) on Thriving in the Weird Times: Preparing for the 100X Economy · 2023-05-09T08:33:54.737Z · LW · GW

Thanks for this post. It's refreshing to hear about how this technology will impact our lives in the near future without any references to it killing us all

Comment by mukashi (adrian-arellano-davin) on Contra Yudkowsky on AI Doom · 2023-04-24T08:47:25.507Z · LW · GW

There are some other assumptions that go into Eliezer's model that are required for doom. I can think of one very clearly which is:

5.  The transition to that god-AGI will be so quick that other entities won't have time to also reach superhuman capabilities. There are no "intermediate" AGIs that can be used to work on alignment-related problems, or even as a defence against unaligned AGIs.

Comment by mukashi (adrian-arellano-davin) on What would “The Medical Model Is Wrong” look like? · 2023-04-21T07:18:02.302Z · LW · GW

I wish you recover soon with all my heart

Comment by mukashi (adrian-arellano-davin) on What would “The Medical Model Is Wrong” look like? · 2023-04-21T07:17:06.751Z · LW · GW

I believe I have found a perfect example where the "Medical Model is Wrong," and I am currently working on a post about it. However, I am swamped with other tasks, so I wonder if I will ever finish it.

In my case, I am highly confident that my model is correct, while the majority of the medical community is wrong.  Using your bullet points:

1. Personal: I have personally experienced this disease and know that the standard treatments do not work.

2. Anecdotal: I am aware of numerous cases where the conventional treatment has failed. In fact, I am not aware of any cases where it has been successful.

3. Research papers: I came across a research paper from 2022 that shares the same opinion as mine.

4. Academics: Working in academia, I am well aware of its limitations. In this specific case, there is a considerable amount of inertia and a lack of communication between different subfields, as accurately described in the book "Inadequate Equilibria" by EY.

5. Medical: Most doctors hold the same opinion because they are influenced by their education. Therefore, if 10 doctors provide the same response, it should not be considered as 10 independent opinions.

6. Countercultural experts: No idea here.

7. Communities: I have not explored this extensively, but completing the post I am talking about might be the beginning.

8. Someone claims to have completely made the condition disappear using arbitrary methods. I am not personally aware of any such cases, but I suspect that it is feasible and could potentially be relatively simple.

9. Models: I have a precise mechanistic model of the disease and why the treatments fail to cure it. I work professionally in a field closely related to this disease.

In summary, my confidence comes from: 1. being an expert in a closely related field and understanding what other people are missing and, above all, why they are missing it; 2. having a mechanistic model; and 3. finding publications that express similar opinions.


 

Comment by mukashi (adrian-arellano-davin) on What is the best source to explain short AI timelines to a skeptical person? · 2023-04-13T05:51:43.573Z · LW · GW

Yes, I agree. I think it is important to remember that achieving AGI and doom are two separate events. Many people around here do make a strong connection between them, but not everyone. I'm in the camp that we are 2 or 3 years away from AGI (it's hard to see why GPT-4 does not qualify as that), but I don't think that implies the imminent extinction of human beings. It is much easier to convince people of the first point because the evidence is already out there.

Comment by mukashi (adrian-arellano-davin) on What is the best source to explain short AI timelines to a skeptical person? · 2023-04-13T05:46:16.761Z · LW · GW

Has he tried interacting with GPT-4 personally? I can't think of a better way. It convinced even Bryan Caplan, who had bet publicly against it.

Comment by mukashi (adrian-arellano-davin) on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-10T08:47:42.212Z · LW · GW

I would certainly appreciate knowing the reason for the downvotes

Comment by mukashi (adrian-arellano-davin) on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-10T08:39:01.850Z · LW · GW

I guess I will break my recently self-imposed rule of not talking about this anymore. 

I can certainly envision a future where multiple powerful AGIs fight against each other and are used as weapons; some might be rogue AGIs and some others might be at the service of human-controlled institutions (such as Nation States). To put it more clearly: I have trouble imagining a future where something along these lines DOES NOT end up happening.

But, this is NOT what Eliezer is saying. Eliezer is saying:

The alignment problem has to be solved ON THE FIRST TRY, because once you create this AGI we are dead in a matter of days (maybe weeks or months; it does not matter). If someone thinks that Eliezer is saying something else, I think they are not listening properly. Eliezer can have many flaws, but lack of clarity is not one of them.

In general, I think this is a textbook example of the Motte and Bailey fallacy. The Motte is: AGI can be dangerous, AGI will kill people, AGI will be very powerful. The Bailey is: AGI creation means the imminent destruction of all human life, and therefore we need to stop all development now.

I never discussed the Motte. I do agree with that. 

Comment by mukashi (adrian-arellano-davin) on Ng and LeCun on the 6-Month Pause (Transcript) · 2023-04-10T06:21:04.080Z · LW · GW

But I think they do believe what they say. Is it maybe that they are pointing to something else when using the word AGI? In fact, I do not even know if there is a commonly accepted definition of AGI.

Comment by mukashi (adrian-arellano-davin) on Ng and LeCun on the 6-Month Pause (Transcript) · 2023-04-09T14:06:08.752Z · LW · GW

I don't see either how some people can say that AGI will take decades when GPT-4 is already almost there.

Comment by mukashi (adrian-arellano-davin) on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-08T07:34:06.626Z · LW · GW

That's a possibility

Comment by mukashi (adrian-arellano-davin) on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-08T02:55:13.774Z · LW · GW

Certainly no paperclips

Comment by mukashi (adrian-arellano-davin) on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-07T08:45:30.184Z · LW · GW

Your comment is sitting at positive karma only because I strong-upvoted it. It is a good comment, but people on this site are very biased in the opposite direction. And this bias is going to drive non-doomers away from this site eventually (probably many have already left), and LW will continue descending in a spiral of non-rationality. I really wonder how people in 10 or 15 years, when we are still around in spite of powerful AGI being widespread, will rationalize that a community devoted to the development of rationality ended up being so irrational. And that was my last comment criticising doomers; every time I do it, it costs me a lot of karma.

Comment by mukashi (adrian-arellano-davin) on Eliezer Yudkowsky’s Letter in Time Magazine · 2023-04-06T01:38:42.793Z · LW · GW

I can't agree more with you. But this is a complicated position to maintain here on LW, and one that gives you a lot of negative karma.

Comment by mukashi (adrian-arellano-davin) on New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development · 2023-04-05T03:19:40.445Z · LW · GW

One of the many ways this could backfire badly is by allowing authoritarian states like China to take the lead in the development of AIs.

Comment by adrian-arellano-davin on [deleted post] 2023-04-03T02:54:51.209Z

+1 here

Comment by mukashi (adrian-arellano-davin) on Eliezer's Videos · 2023-03-31T19:14:48.931Z · LW · GW

Sorry, I assumed you posted that just before the interview

Comment by mukashi (adrian-arellano-davin) on Eliezer's Videos · 2023-03-31T01:58:12.619Z · LW · GW

Well, it seems it is your lucky day:

Comment by mukashi (adrian-arellano-davin) on [Linkpost] GatesNotes: The Age of AI has begun · 2023-03-22T22:26:48.272Z · LW · GW

What do you mean by true AI?

Comment by mukashi (adrian-arellano-davin) on [Linkpost] GatesNotes: The Age of AI has begun · 2023-03-22T05:21:55.638Z · LW · GW

I am not sure how anyone could say that "[N]one of the breakthroughs of the past few months have moved us substantially closer to strong AI" unless he hasn't really followed the breakthroughs of the past few months or has read only bad secondhand reports.

Comment by mukashi (adrian-arellano-davin) on What fact that you know is true but most people aren't ready to accept it? · 2023-03-10T00:41:36.627Z · LW · GW

I have no idea about that topic specifically. What I would suggest is: read the literature yourself. That will, at least, let you ask better questions when meeting the dentist.