Posts

What to do if a nuclear weapon is used in Ukraine? 2022-10-19T18:43:58.486Z
Would "Manhattan Project" style be beneficial or deleterious for AI Alignment? 2022-08-04T19:12:44.560Z
Impactful data science projects 2022-04-11T04:27:51.133Z
A proposed system for ideas jumpstart 2021-12-14T21:01:00.506Z
Grading scheme from Valentine Smith 2021-10-23T19:29:23.842Z
Why there are no online CFAR workshops? 2021-09-05T15:02:54.192Z
Erratum for "From AI to Zombies" 2021-08-12T04:39:05.779Z
Which animals can suffer? 2021-06-01T03:42:43.331Z
Which activities do you prefer to better recover productivity? 2021-06-01T01:01:42.384Z
How one uses set theory for alignment problem? 2021-05-29T00:28:01.832Z
Is nuclear war indeed unlikely? 2021-05-23T23:14:54.331Z
Simulation theology: practical aspect. 2021-05-05T02:20:59.684Z
Nutrition for brain. 2021-03-17T05:00:37.053Z
Chaotic era: avoid or survive? 2021-02-22T01:34:41.317Z
Quadratic, not logarithmic 2021-02-08T03:42:34.781Z
Singularity&phase transition-2. A priori probability and ways to check. 2021-02-08T02:21:27.825Z
Can we model technological singularity as the phase transition? 2020-12-26T03:20:19.726Z

Comments

Comment by Just Learning on Impactful data science projects · 2023-06-01T16:54:50.998Z · LW · GW

So far nothing; I was distracted by other things in my life. Yes, let's chat! frombranestobrains@gmail.com

Comment by Just Learning on What to do if a nuclear weapon is used in Ukraine? · 2022-10-20T14:20:52.124Z · LW · GW

After the rest of the USA is destroyed, a very unstable situation (especially taking into account how many people have guns) is quite likely. In my opinion, countries (and remote parts of countries) that will not be under attack at all are much better off.

Comment by Just Learning on Can we model technological singularity as the phase transition? · 2022-07-17T13:32:44.752Z · LW · GW

Thank you for your research! First of all, I don't expect the non-human parameter to give a clear power law, since we need to add humans as well. Of course, close to the singularity the impact of humans will be very small, but maybe we are not that close yet. Now for the details:


Compute:
1. Yes, Moore's law was quite a steady exponential for quite a while, but we indeed should multiply it.
2. The graph shows just a five-year period, and not the number of chips produced but revenue. Five years is too short a period for any conclusions, and I am not sure the fluctuations in revenue aren't driven mainly by market price rather than by the quantity produced.

Data storage:
Yes, I saw that one before; it seems more like they just drew a nice picture than plotted real data.

General remarks:

I agree with the point that the appearance of AGI can be largely random. I can see two mechanisms that could potentially make it less random. First, we may need a lot of computational resources, data storage, etc. to create it, so as soon as a lab or company reaches the threshold, it happens easily with already existing algorithms. Second, we may need a lot of digitized data to train AGI, so the transition again happens only once we have that much data.
Lastly, notice that the creation of AGI is not yet a singularity in the mathematical sense. It will certainly accelerate our progress, but not to infinity, so if the data predict, for example, a singularity in 2030, that likely means AGI earlier than that.

How trustworthy would this prediction be? That depends on the amount of data and the noise. If we have just 10-20 data points scattered all over the graph, so that you can connect the dots any way you like - not really. If, instead, we are lucky and the control parameter happens to be something easily measurable (something for which you can get just-in-time statistics, like the number of papers on arXiv right now, so that we can get really a lot of data points) and the parameter continues to change as the theory predicts, it would be quite a strong argument for the timeline.

It is not very likely that the control parameter will be that easily measurable and will follow a power law that well. I think it is a very-high-risk, very-high-gain project (very high gain because, if the prediction is very clear, it will be possible to persuade more people that the problem is important).
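To make the idea concrete, here is a toy sketch of the kind of fit I have in mind. Everything in it is synthetic and illustrative (the model form, grid range, and numbers are my own assumptions, not data from the post): assume the control parameter diverges as y(t) = A / (t_c - t)^k and recover the critical time t_c by a grid search over candidates, fitting log y = log A - k·log(t_c - t) by ordinary least squares at each candidate.

```python
# Hypothetical sketch: recovering a finite-time singularity y(t) = A/(t_c - t)**k
# from (noise-free, synthetic) yearly observations. Real data would be noisy and
# the recovered t_c correspondingly uncertain.
import math

def fit_power_law(ts, ys, tc_grid):
    """Grid-search t_c; for each candidate, fit log y = log A - k*log(t_c - t)
    by ordinary least squares and keep the candidate with the smallest error."""
    best = None
    for tc in tc_grid:
        if tc <= max(ts):
            continue  # the critical time must lie after the last observation
        xs = [math.log(tc - t) for t in ts]
        vs = [math.log(y) for y in ys]
        n = len(xs)
        mx = sum(xs) / n
        mv = sum(vs) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxv = sum((x - mx) * (v - mv) for x, v in zip(xs, vs))
        slope = sxv / sxx            # equals -k for a perfect power law
        intercept = mv - slope * mx  # equals log A
        sse = sum((v - (intercept + slope * x)) ** 2 for x, v in zip(xs, vs))
        if best is None or sse < best[0]:
            best = (sse, tc, -slope, math.exp(intercept))
    _, tc, k, a = best
    return tc, k, a

# Synthetic data generated with true t_c = 2030, k = 1, A = 100
years = list(range(2000, 2021))
values = [100.0 / (2030 - t) for t in years]
tc_grid = [2021 + 0.5 * i for i in range(40)]  # candidate t_c from 2021 to 2040.5
tc_hat, k_hat, a_hat = fit_power_law(years, values, tc_grid)
print(tc_hat, round(k_hat, 3))  # recovers t_c ≈ 2030, k ≈ 1 on clean data
```

With real, noisy data the interesting question is exactly the one above: how wide the range of t_c values with near-minimal error is. If many candidate dates fit almost equally well, the prediction is the "connect the dots any way you like" case.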




 

Comment by Just Learning on A proposed system for ideas jumpstart · 2021-12-15T08:51:30.783Z · LW · GW

You are making a good point. Indeed, a system that rewards authors and experts would be quite complicated, so I was thinking about it on a purely volunteer basis (so in the initial stages it is non-profit). Then, if a group of people willing to work on the project forms, they may turn it into a business project. If the original author of the idea is in the project, they may get something; otherwise, no - the idea is already donated, and there are no donations back. I will update the initial post to clarify this point.

As to your idea, I am totally not an expert in this field. Hopefully, we will find experts for all our ideas (I also have a couple).

Comment by Just Learning on Grading scheme from Valentine Smith · 2021-10-26T08:06:34.116Z · LW · GW

Thank you very much, it does!
I think your answer is worth publishing as a separate post. It would be relevant for everyone who teaches.

Comment by Just Learning on Why there are no online CFAR workshops? · 2021-09-07T20:36:50.245Z · LW · GW

It would be very interesting to look at the results of this experiment in more detail.

Yes, maybe I didn't explain what I mean very well; however, gjm (see the comments below) seems to get it. The point is not that CFAR is very much like Lifespring (though I may have sounded like that). The point is that certain techniques (team spirit, deep emotional connections, etc.) are likely to be used in such workshops, and they will almost certainly make participants love the workshop, the organizers, and the other participants - yet their effect on a participant's life can be significantly weaker than this emotional change of mind. These techniques work much less well in online workshops, so this was one of the reasons I tried to understand why CFAR does not hold them. Another reason was resentment towards CFAR for not doing it, since online workshops would be much more convenient for me.

Comment by Just Learning on Why there are no online CFAR workshops? · 2021-09-06T20:12:20.822Z · LW · GW

Are there any proven benefits of meditation retreats in comparison with regular meditation?

Comment by Just Learning on Why there are no online CFAR workshops? · 2021-09-06T18:50:06.697Z · LW · GW

Ok, your point makes sense.

Basically, I am trying to figure out for myself whether going to the workshop would be beneficial for me. I do believe that CFAR is not simply trying to get as much money as possible. However, I am concerned that people who attend the workshop are strongly biased towards liking it, not because it really helps, but because of psychological mechanisms akin to Lifespring's. I am not saying that CFAR is doing this intentionally; it could have arisen on its own. Maybe these mechanisms are even beneficial to whatever CFAR is doing, but they definitely make evaluation harder.

Comment by Just Learning on Why there are no online CFAR workshops? · 2021-09-06T08:46:56.642Z · LW · GW

"When I was talking to Valentine (head of curriculum design at the time) a while ago he said that the spirit is the most important thing about the workshop."

Now, this already sounds a little disturbing and reminiscent of Lifespring. Of course, the spirit is important, but I thought the workshop was going to arm us with tools we can use in real life, not only in an emotional state of comradeship with like-minded rationalists.

Comment by Just Learning on Why there are no online CFAR workshops? · 2021-09-06T08:39:04.800Z · LW · GW

I can understand your point, but I am not persuaded yet. Let me clarify why. During the year and a half of COVID, in-person workshops were not possible. During this time there were people who would have strongly benefited from a workshop, and for whom a workshop would have been timely (for example, they were making a career choice). Some of them could have arranged private places for the duration of a workshop. It seems that for them, during this time, an online workshop would certainly have been more beneficial than no workshop at all. Moreover, conducting at least one online workshop would have been a good experiment yielding useful information. It is totally not obvious to me why the priors that "an online workshop is useless or harmful, taking opportunity cost into account" are so high that this experiment should not be conducted.

Yes, I hope someone from CFAR can explain it better to me.

Comment by Just Learning on Why there are no online CFAR workshops? · 2021-09-05T21:35:03.687Z · LW · GW

It is a good justification for this behavior, but it does not seem to be the most rational choice. Indeed, one could require that participants in an online workshop have a private space (their own bedroom, an office, a hotel room, a remote spot in a park - whatever fits). I am pretty sure there is a significant number of people who would prefer an online workshop to an offline one (especially when all offline workshops are canceled due to COVID), and who have or can find a private space for the duration of the workshop. Saying that we are not doing it because some people lack privacy is like a restaurant refusing to offer meat to anyone because some of its customers are vegan. Of course, an online workshop is not for everyone, but there are people for whom it would work.

Comment by Just Learning on Why there are no online CFAR workshops? · 2021-09-05T16:53:22.951Z · LW · GW

I agree that for some people physical contact (hugs, handshakes, etc.) indeed means a lot. However, that is not everyone. Moreover, even if an online workshop is less effective due to the lack of this spirit, is it really so ineffective that it is worse than no workshop at all? Finally, why not just try? It sounds like something that should be tried at least once, and if it fails - well, then we see that it fails.

Yes, I hope someone who attended CFAR (or is even somehow affiliated with it) will see this question and give their answer.

Comment by Just Learning on Erratum for "From AI to Zombies" · 2021-08-17T05:18:30.333Z · LW · GW

Are there any other examples where rationality guides you faster than the scientific approach? If so, it would be good to collect and mention them. If not, I am pretty suspicious about the QM one as well.

Comment by Just Learning on Which animals can suffer? · 2021-06-03T04:17:19.403Z · LW · GW

First of all, it is my mistake - in the paper they used "pain" more as a synonym for suffering. They wanted to verify that the animal avoids tissue damage (heat, pinching, electric shock, etc.) not just on the spot, but learns to avoid it. Avoiding it right there is simply nociception, which can be seen in many low-level animals.

I don't know much about the examples you mentioned. For example, bacteria certainly can't learn to avoid stimuli associated with something bad for them. (Well, they can on the scale of evolution, but not as a single bacterium.)

Comment by Just Learning on Which animals can suffer? · 2021-06-02T04:45:37.634Z · LW · GW

If it is, does that mean we should consider all artificial neural network training to be animal experimentation? Should we put up something like "code welfare is also animal welfare"?

Comment by Just Learning on Which animals can suffer? · 2021-06-01T19:22:54.040Z · LW · GW

I agree with the point about a continuous ability to suffer rather than a threshold. I totally agree that there is no objective answer; we can't measure suffering. The problem, however, is that this leaves a practical question with no clear solution: namely, how we should treat other animals and our code.

Comment by Just Learning on Which animals can suffer? · 2021-06-01T19:15:19.146Z · LW · GW

Let me try to rephrase it in terms of something that can be done in a lab and see if I get your point correctly. We should conduct experiments with humans, identifying what causes suffering and with what intensity, and what happens in the brain during it. Then, if an animal has the same brain regions, it is capable of suffering; otherwise, it is not. But this won't be the functional approach, and we can't extrapolate it blindly to AI.

If we want the functional approach, we can only look at behavior: what we do when we suffer, what we do afterwards, etc. Then a being suffers if it demonstrates the same behavior. Here the problem will be how to generalize from human behavior to animals and AI.

Comment by Just Learning on Which animals can suffer? · 2021-06-01T18:39:11.713Z · LW · GW

I like the idea. Basically, you suggest taking the functional approach and advancing it. What do you think this type of process could be?

Comment by Just Learning on How one uses set theory for alignment problem? · 2021-05-30T20:38:46.022Z · LW · GW

Thank you!

Comment by Just Learning on How one uses set theory for alignment problem? · 2021-05-29T20:25:29.995Z · LW · GW

Thank you, but it is again like saying: "Oh, to solve a physics problem you need calculus. Calculus uses real numbers. The most elegant way to introduce real numbers is from rational numbers, built from natural numbers via Peano axiomatics. So let's make physicists study Peano axiomatics, set theory, and formal logic."

In any area of math, you need some set theory and logic - but usually in an amount that can be covered in one or two pages.

Comment by Just Learning on How one uses set theory for alignment problem? · 2021-05-29T19:30:31.359Z · LW · GW

Thank you, but I would say it is too general an answer. For example, suppose your problem is to figure out planetary motion. You need calculus, that's clear. So, according to this logic, you would first need to look at the building blocks: introduce natural numbers using Peano axioms, then study their properties, then introduce rationals, and only then construct real numbers. And this is fun; I really enjoyed it. But does it help to solve the initial problem? Not at all. You can just introduce real numbers immediately. Or, if you care only about solving mechanics problems, you can work with the "intuitive" calculus of infinitesimals, as Newton himself did. It is not mathematically rigorous, but you can solve everything you need.
So, when you study other areas of math (like probability theory, for example), you need some knowledge of set theory, that's right. But that set theory is not something profound that has to be studied separately; it can be introduced in a couple of pages. I don't know much about decision theory - does it use more?

Comment by Just Learning on Is nuclear war indeed unlikely? · 2021-05-27T05:04:31.837Z · LW · GW

It is worrisome indeed. I would say it definitely does not help and only increases the risk. However, I don't think this country-that-must-not-be-named would start a nuclear war first, simply because it has too much to lose and its non-nuclear options are excellent. This may change in the future - so yes, there is some probability as well.

Comment by Just Learning on Is nuclear war indeed unlikely? · 2021-05-27T04:50:29.402Z · LW · GW

That is exactly the problem. Suppose the Plutonia government sincerely believes that as soon as other countries are protected, they will help the people of Plutonia overthrow the government - and they have some reasons for such a belief. Then (in their model of the world) a world protected from them is a deadly threat, basically capital punishment. A nuclear war, however horrible, leaves bomb shelters where they can survive, with enough food inside just for themselves to live out their natural lives.

Comment by Just Learning on Is nuclear war indeed unlikely? · 2021-05-27T04:39:15.607Z · LW · GW

The problem is that retaliation is not immediate (missiles take a few hours to reach their targets). For example, Plutonia can demonstratively destroy one target and declare that any attempt at retaliation will be retaliated against in double: as soon as the other country launches N missiles, Plutonia launches 2N.

Comment by Just Learning on Is nuclear war indeed unlikely? · 2021-05-24T16:48:39.677Z · LW · GW

Yes, absolutely, it is the underlying thesis. 

Comment by Just Learning on Is nuclear war indeed unlikely? · 2021-05-24T16:46:07.745Z · LW · GW

Well, a "democratic transition" will not necessarily solve that (just as the end of the Cold War did not completely resolve the problem), you are right - so actually the probability must be higher than I estimated, which is even worse news.
Are there any other options for decreasing the risk?

From a Russian perspective: well, I didn't discuss it with officials in the government, only with friends who support the current government. So I can only say what they think and feel, and of course this is just anecdotal evidence. When I explicitly discussed the possibility of nuclear war with one of them, he said that this possibility is small, and that as long as escalation benefits Russia he will support it.

I don't want to get into politics here and discuss what type of government would be better for Russia. I was more interested in estimating the probability of nuclear war (or the other catastrophes mentioned in the main post).

Comment by Just Learning on Is nuclear war indeed unlikely? · 2021-05-24T06:04:34.786Z · LW · GW

When I say "use," I mean actually detonating - not necessarily destroying a big city; initially it may be just something small.
Within the territory is possible, though I think outside is more realistic. (I think the army will eventually be too weak to fight external enemies with modern technology, but will always be able to fight unarmed citizens.)

Comment by Just Learning on Is nuclear war indeed unlikely? · 2021-05-24T06:02:59.544Z · LW · GW

Sorry, I didn't get what you mean by "non-dominant political controllership"; can you rephrase it?

Comment by Just Learning on What Do We Mean By "Rationality"? · 2021-05-23T23:19:29.251Z · LW · GW

Thank you, wonderful series!

Comment by Just Learning on What Do We Mean By "Rationality"? · 2021-05-14T20:11:16.550Z · LW · GW

How should we deal with cases where epistemic rationality contradicts instrumental rationality? For example, we may want to use the placebo effect, because among our values are that healthy is better than sick and less pain is better than more pain. But the placebo effect relies on believing that a pill is working medicine, which is false. Is there any way to satisfy both epistemic and instrumental rationality?

Comment by Just Learning on Simulation theology: practical aspect. · 2021-05-08T21:15:31.381Z · LW · GW

Hmmm, but I am not saying that the benevolent-simulators hypothesis is false and that I just choose to believe it because it brings a positive effect. Rather the opposite - I think benevolent simulators are highly likely (more than a 50% chance). So it is not a method "to believe things known to be false." It is rather an argument for why they are likely to be true (of course, I may be wrong somewhere in this argument, so if you find an error, I will appreciate it).

In general, I don't think people here want to believe false things.

Comment by Just Learning on Simulation theology: practical aspect. · 2021-05-06T23:54:27.162Z · LW · GW

Of course, placebo is useful from an evolutionary point of view, and it is the subject of quite a lot of research. (The main idea: it is energetically costly to keep your immune system on high alert all the time, so you boost it at particular moments correlated with pleasure - usually from eating, drinking, or sex, which is when germs usually enter the body. If you are interested, I will find the link to the research paper where this is discussed.)

I am afraid I have still failed to explain what I mean. I am not trying to deduce from observation that we are in a simulation; I don't think that is possible (unless the simulators decide to allow it).
I am trying to see how the belief that we are in a simulation with benevolent simulators can change my subjective experience. Notice, I can't just trick myself into believing it merely because believing is healthy. This is why I needed all the theory above: to show that benevolent simulators are indeed highly likely. Then, and only then, can I hope for the placebo effect (or for a real intervention masquerading as a placebo effect), because now I believe it may work. If I could just make myself believe whatever I needed, of course I would not need all these shenanigans - but after being a faithful LW reader for a while, that is really hard, if possible at all.

Comment by Just Learning on Simulation theology: practical aspect. · 2021-05-05T18:07:19.467Z · LW · GW

It is exactly the point that there should be no proof of simulation unless the simulators want there to be one. Namely, there should be no observable (to us) difference between a universe governed simply by the laws of nature and one with interventions from simulators. We can't look at any effect and say: this happens, therefore we are in a simulation.
The point was the opposite. Assume we are in a simulation with benevolent simulators (which, according to what I wrote in the theoretical part of the post, is highly likely). What can they do such that we are still unable to classify the intervention as something outside the laws of nature, but our well-being is improved? What are the practical consequences for us?
By the way, we don't even have to require the ability to change probabilities; the placebo effect alone is good enough. Consider a person who was suffering from depression, or addiction, or akrasia - and now is much better. Can a strong placebo (like a very strong religious experience) do that? Well, yes, there have been multiple cases. Does it improve well-being? Certainly yes. So the practical point is that if such an intervention masquerading as placebo can help, it is certainly worth trying. Of course, one could say that I am just tricking myself into believing it and then the placebo simply works, but the point is that I have reasons to believe it (see the theoretical part), and that is what makes the placebo work.

Thank you for directing my attention to the post, I will certainly read it.

Comment by Just Learning on Why We Launched LessWrong.SubStack · 2021-04-02T23:05:30.684Z · LW · GW

I would suggest a by-the-minute subscription. At approximately $1/minute, it would actually be close to my akrasia fine for spending time on job-unrelated websites.

Comment by Just Learning on Nutrition for brain. · 2021-03-19T21:29:03.551Z · LW · GW

Thank you. There was one paper in the post about older adults and calorie restriction. However, it is somewhat biased - they had slightly overweight people in the experiment. So yes, calorie restriction is good for the overweight. Duh.
Do you know of any other studies? Thank you!

Comment by Just Learning on Chaotic era: avoid or survive? · 2021-02-23T04:30:22.493Z · LW · GW

It sounds possible. However, before even the first people get it, there should be some progress with animals, and right now there is nothing. So I would bet it is not going to happen in, let's say, the next 5 years. (Well, unless we suddenly get radical progress in creating a superAI that will do it for us, but that is a huge question on its own.)
I would say I wanted first to think about the very near future, without a huge technological breakthrough. Of course, immortality and superAI are far more important than anything I mentioned in the original post. However, I think there is a non-negligible likelihood of something from the original post happening very soon (maybe even this year), while the likelihood of immortality before the end of this year seems quite negligible.