LessWrong 2.0 Reader
Based on the link, it seems you follow the Theravada tradition. The ideas you present go against Theravada teachings. You need to go study the Pali Canon. This information is all wrong, I'm afraid. I won't talk more on the matter.
chris_leong on On Llama-3 and Dwarkesh Patel’s Podcast with Zuckerberg
Do you have any thoughts on whether it would make sense to push for a rule that forces open-source or open-weight models to be released behind an API for a certain amount of time before they can be released to the public?
dr_s on Priors and Prejudice
To be fair, any beliefs you form will be informed by your previous priors. You try to evaluate evidence critically, but your critical sense was itself developed by previous evidence, and so on, all the way back to the brain you came out of the womb with. As long as your original priors were open-minded enough, you can probably reach the point of believing almost anything given sufficiently strong evidence - but how strong depends on your starting point.
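A minimal sketch of that last point in Bayesian terms (the function and numbers are illustrative assumptions, not from the comment): under Bayes' rule in odds form, the likelihood ratio needed to reach a fixed posterior grows as the prior shrinks.

```python
import math

def required_likelihood_ratio(prior: float, target_posterior: float) -> float:
    """Likelihood ratio needed to move `prior` up to `target_posterior`
    under Bayes' rule in odds form: posterior_odds = LR * prior_odds."""
    prior_odds = prior / (1 - prior)
    target_odds = target_posterior / (1 - target_posterior)
    return target_odds / prior_odds

# The same 99% posterior requires wildly different evidence
# depending on where you start:
for prior in (0.5, 0.1, 0.001):
    lr = required_likelihood_ratio(prior, 0.99)
    print(f"prior {prior:>6}: likelihood ratio needed = {lr:,.0f} "
          f"(~{math.log10(lr):.1f} orders of magnitude)")
```

With a 50/50 prior, 99:1 evidence suffices; starting from a 0.1% prior, you need odds near 100,000:1 - which is the comment's point that "how strong depends on your starting point".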
dr_s on Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm)
I am sceptical of recommender systems - I think they are bound to end up in self-reinforcing loops. I'd be happier with a more transparent system - we already have tags, upvotes, the works, so you could have something like a series of "suggested searches" (e.g. the most common combinations of tags a user has visited) that the user has fast access to, while also seeing precisely what it is they're clicking on. A sketch of what I mean follows below.
That said, I do trust this website of all things to acknowledge if things aren't going to plan and revert. If we fail to align this one small AI to our values, well, that's a valuable lesson.
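A minimal sketch of the "suggested searches" idea above, assuming a hypothetical log of the tag sets on posts a user has opened (the `Counter`-based ranking is an illustration, not a proposed LessWrong implementation):

```python
from collections import Counter
from itertools import combinations

def suggested_searches(visited_tag_sets: list[set[str]],
                       top_n: int = 3) -> list[tuple[str, ...]]:
    """Rank tag pairs by how often they co-occur in the posts a user
    has visited, and return the most common pairs as suggested searches."""
    pair_counts = Counter()
    for tags in visited_tag_sets:
        for pair in combinations(sorted(tags), 2):
            pair_counts[pair] += 1
    return [pair for pair, _ in pair_counts.most_common(top_n)]

# Hypothetical visit history:
history = [
    {"AI", "Alignment", "Forecasting"},
    {"AI", "Alignment"},
    {"Rationality", "AI"},
    {"AI", "Alignment", "Governance"},
]
print(suggested_searches(history))
# e.g. [('AI', 'Alignment'), ('AI', 'Forecasting'), ...]
```

Because each suggestion is just a visible tag combination, the user can see exactly why it is being offered, unlike with an opaque recommender.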
lesswronguser123 on The losing identity of Twitter
Ever since they killed (or made it harder to host) Nitter, RSS, guest accounts, etc., Twitter has been out of my life for the better. I find the Twitter UX sub-optimal in terms of performance, chronological posts, and subscriptions. If I do create an account, my "home" feed has too much ingroup-vs-outgroup content (even within tech-enthusiast circles, thanks to the AI safety vs. e/acc debate), and verified users are over-represented by design, which buries the good posts from non-verified users. Elon is trying way too hard to prevent AI web scrapers, which is ruining my workflow.
ryan_greenblatt on Adam Shai's Shortform
Terminology point: When I say "a model has a dangerous capability", I usually mean "a model has the ability to do XYZ if fine-tuned to do so". You seem to be using this term somewhat differently, as model organisms like the ones you discuss are often (though not always) looking at questions related to inductive biases and generalization (e.g. if you train a model to have a backdoor and then train it in XYZ way, does this backdoor get removed?).
capisce on Thoughts on seed oil
And they all eat a lot of butter and dairy products.
nathan-helm-burger on CHAT Diplomacy: LLMs and National Security
I think this perhaps overstated the present risks at the time, but the forecasting is still relevant for the near future. AI is continuing to improve. At some point, people will be able to build agents that can do a lot of harm. Given the risks, we can't rely on compute governance with the level of confidence we would need to be comfortable with it as a solution.
An example of recent work showing the potential for compute governance to fail: https://arxiv.org/abs/2403.10616v1
nathan-helm-burger on How to Model the Future of Open-Source LLMs?
Unless there is a 'peak-capabilities wall' that current architectures hit and that the combined compute-efficiency-improving algorithmic advances don't overcome. In that case, the gap would close: any big company that tried to pull ahead by naively increasing compute, plus a few hidden algorithmic advantages, couldn't get very far, because of the wall. It would get cheaper to reach the wall, but once there, extra money/compute/data would be wasted. Thus, a shrinking-gap world.
I'm not sure whether there will be a 'peak-capabilities wall' of this kind, or whether algorithmic advances will be creative enough to get around it. The shape of the future here seems highly uncertain to me. I do think it's theoretically possible to get substantial improvements in peak capabilities as well as in training/inference efficiency. Will such improvements keep arriving relatively gradually, as they have been? Will there be a sudden glut once models hit a threshold where they can be used to search for algorithmic improvements themselves? Very unclear. (A toy model of the shrinking-gap case follows below.)
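A toy numeric sketch of the shrinking-gap scenario (the saturating curve, the compute figures, and the efficiency multipliers are all illustrative assumptions, not anything from the comment): if capability saturates toward a wall, shared algorithmic efficiency gains move the laggard more than the leader.

```python
import math

def capability(compute: float, wall: float = 100.0) -> float:
    """Toy saturating capability curve: grows with log-compute but
    asymptotes at `wall`, so extra compute past the knee is wasted."""
    return wall * (1 - math.exp(-math.log10(compute) / 3))

leader_compute = 1e9   # hypothetical: big lab with 100x the compute
laggard_compute = 1e7  # hypothetical: open-source effort

for year, efficiency_gain in enumerate([1, 10, 100], start=2024):
    # Algorithmic progress acts like a multiplier on effective compute.
    lead = capability(leader_compute * efficiency_gain)
    lag = capability(laggard_compute * efficiency_gain)
    print(f"{year}: leader {lead:.1f}, laggard {lag:.1f}, gap {lead - lag:.1f}")
```

Because the curve flattens near the wall, each shared efficiency gain narrows the capability gap even though the leader's raw compute advantage never changes.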
adam_scholl on Paul Christiano named as US AI Safety Institute Head of AI Safety
My guess is more that we were talking past each other than that his intended claim was false/unrepresentative. I do think it's true that EAs mostly talk about people doing gain-of-function research as the problem, rather than about the insufficiency of the safeguards; I just think the latter is why the former is a problem.