LessWrong 2.0 Reader
Originally I felt happy about these, because “mostly agreeing” is an unusually positive outcome for that opening. But these discussions are grueling. It is hard to express kindness and curiosity towards someone yelling at you for a position you explicitly disclaimed. Any one of these stories would be a success but en masse they amount to a huge tax on saying anything about veganism, which is already quite labor intensive.
The discussions could still be worth it if they changed the arguer's mind, or at least how they approached the next argument. But I don't get the sense that's what happens. Neither of us has changed our minds about anything, and I think they're just as likely to start a similar fight the next week.
May I ask what your motivation was when you decided to spend your time on the aforementioned discussions? Were you hoping to learn something or to persuade the other, or both?
It sounds to me as though the solution here is to be more cautious before spending time arguing. Sometimes (often) it is IMO wiser to cut your losses and stop replying.
mondsemmel on Why is AGI/ASI Inevitable?
In the war example, wars are usually negative sum for all involved, even in the near term. And so while they do happen, wars are pretty rare, all things considered.
Meanwhile, the problem with AI development is that there are enormous financial incentives for building increasingly powerful AI, right up to the point of extinction. Which also means that you need not just some but all people to refrain from developing more powerful AI. This is a devilishly difficult coordination problem. What you get by default, absent coordination, is that everyone races to be the first to develop AGI.
Another problem is that many people don't even agree that developing unaligned AGI likely results in extinction. So from their perspective, they might well think they're racing towards a utopian post-scarcity society, while those who oppose them are anti-progress Luddites.
david-fendrich on Which skincare products are evidence-based?
A simplistic model of your metabolism is that you have two states: an anabolic state (building up) and a catabolic state (breaking down).
A common theme in scientific anti-aging is that you need to balance both states, and that modern life leads us to spend too long in the anabolic state (well fed, in conditions of abundance and moderate temperature, and not physically stressed). Anabolic interventions can lead to quick results and good short-term outcomes, but can potentially be bad long-term.
Cycling in this context would mean something like doing it every other day or every other week (what is optimal? probably no one knows). It could also mean timing it for periods when you aren't fasting, for people who do alternate-day fasting or other longer fasts. Fasting and calorie restriction would typically be catabolic activities.
anders-lindstroem on Which skincare products are evidence-based?
You mean in a positive or a negative way? Harmful? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5615097/ , and/or useless? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1447210/
viliam on European Soylent alternatives
I haven't paid attention to this recently (I have small kids, so we need to cook anyway), but I think it is magnesium and calcium -- they somehow interfere with each other's absorption.
Just a random thing I found on Google, but I didn't read it: https://pubmed.ncbi.nlm.nih.gov/1211491/
(Plus there is a more general concern about what other similar relations may exist that no one has studied yet, because most people do not eat like "I only eat X at the same time as Y, mixed together".)
vanessa-kosoy on Which skincare products are evidence-based?
Can you say more? What are "anabolic effects"? What does "cycling" mean in this context?
david-fendrich on Which skincare products are evidence-based?
David Sinclair mentioned in a podcast that he is also a bit worried about the long-term anabolic effects of retinoids. He suggested cycling it, possibly synchronized with other catabolic cycling such as fasting.
tailcalled on Ironing Out the Squiggles
(…Unless you do conditional sampling of a learned distribution, where you constrain the samples to be in a specific a-priori-extremely-unlikely subspace, in which case sampling becomes isomorphic to optimization in theory. (Because you can sample from the distribution of (reward, trajectory) pairs conditional on high reward.))
Does this isomorphism actually go through? I know decision transformers kinda-sorta show how you can do optimization-through-conditioning in practice, but in theory the loss function which you use to learn the distribution doesn't constrain the results of conditioning off-distribution, so I'd think you're mainly relying on having picked a good architecture which generalizes nicely out of distribution.
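As a toy numerical sketch of the conditioning-as-optimization point (my own illustration, not from the thread; the one-dimensional "trajectory" and the 0.7 optimum are made up for the example): if you sample (action, reward) pairs from a learned joint distribution and then condition on an a-priori-unlikely high-reward region, the surviving actions concentrate near the reward-maximizing action, even though the unconditional sampler almost never proposes it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a learned joint distribution over (action, reward) pairs:
# actions are scalars drawn from a broad prior, reward is a noisy
# function of the action, maximized at action = 0.7.
def sample_pairs(n):
    actions = rng.normal(0.0, 1.0, size=n)
    rewards = -(actions - 0.7) ** 2 + rng.normal(0.0, 0.1, size=n)
    return actions, rewards

actions, rewards = sample_pairs(100_000)

# Unconditional sampling: typical actions sit near the prior mode, 0.
print(np.mean(actions))  # ≈ 0

# Conditional sampling: keep only the samples whose reward lands in the
# top 0.1% -- an a-priori-extremely-unlikely region. The surviving
# actions cluster near the optimum, so conditioning acts like optimization.
threshold = np.quantile(rewards, 0.999)
good = actions[rewards >= threshold]
print(np.mean(good))  # ≈ 0.7
```

This is exactly the rejection-sampling picture behind "sample (reward, trajectory) pairs conditional on high reward"; decision transformers amortize the same conditioning into the model instead of filtering samples after the fact.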
tailcalled on Ironing Out the Squiggles
One could reply, "Oh, sure, it's obvious that you can conditionally sample a learned distribution to safely do all sorts of economically valuable cognitive tasks, but that's not the danger of true AGI." And I ultimately think you're correct about that. But I don't think the conditional-sampling thing was obvious in 2004.
Idk. We already knew that you could use basic regression and singular-vector methods to do lots of economically valuable tasks, since that was something being done in 2004. Conditional sampling "just" adds noise around these sorts of methods, so it stands to reason that this might work too.
Adding noise obviously doesn't matter in 1 dimension except for making the outcomes worse. The reason we use it for e.g. images is that adding the noise does matter in high-dimensional spaces, because without the noise you end up with the highest-probability outcome, which is out of distribution. So in a way it seems like a relatively minor fix to generalize something we already knew was profitable in lots of cases.
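The "highest-probability outcome is out of distribution" point can be checked numerically (a minimal sketch of my own, using a standard Gaussian as the stand-in distribution): in high dimensions the single highest-density point is the origin, yet essentially no samples land anywhere near it; they concentrate on a thin shell of radius about sqrt(d).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000  # dimension

# 10,000 samples from a standard Gaussian in d dimensions.
x = rng.normal(size=(10_000, d))
norms = np.linalg.norm(x, axis=1)

# The density is maximized at the origin (norm 0), but every sample's
# norm is far from 0: the typical set is a shell of radius ~sqrt(d) ≈ 31.6.
print(norms.min(), norms.mean())  # both far from 0; mean ≈ 31.6
```

So a denoiser-style procedure that deterministically returns the most probable point would output something (the all-zeros vector here) that looks nothing like any actual sample, which is why the sampling noise is load-bearing in high dimensions.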
On the other hand, I didn't learn the probability thing until playing with some neural network ideas for outlier detection and learning they didn't work. So in that sense it's literally true that it wasn't obvious (to a lot of people) back before deep learning took off.
And I can't deny that people were surprised that neural networks could learn to do art. To me this became relatively obvious with early GANs, which were later than 2004 but earlier than most people updated on it.
So basically I don't disagree, but in retrospect it doesn't seem that shocking.
cousin_it on Let's Design A School, Part 1
If a student is genuinely acting in bad faith—attending a class and ruining it for their peers—then they should be removed from the class and sent to a counselor/social worker.
The number of such students is larger than you think. But the more important question is what the social worker would do with the student - what tools would be available to them. Because by default the student will just disrupt another class tomorrow and so on. There isn't any magic method to make disruptive students non-disruptive; schools would love to have access to such magic if it existed.