LessWrong 2.0 Reader
Sure, he's trying to cause alarm via alleged excerpts from his life. But surely society should have some way to move to a state of alarm iff that's appropriate; do you see a better protocol than this one?
algon on Why you should learn a musical instrument
No, it's just that my prior says nootropics almost never work, so I was wondering if you had some data suggesting this one did, e.g. by doing an RCT on yourself, or by using signal processing techniques to detect whether supplementing this stuff led to a causal change in reflex times, or so forth.
EDIT: Though I am vegan and I'm really ignorant about what makes for a good diet. So I'd be curious to hear why it's helpful for vegans to take this stuff.
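A self-experiment along these lines can be sketched with a simple two-sample permutation test on reflex times measured on supplement vs. non-supplement days. This is only an illustration of the kind of analysis being suggested; the reflex-time numbers below are made up, not data from the thread:

```python
import random
import statistics

def permutation_test(on_days, off_days, n_perm=10_000, seed=0):
    """One-sided permutation test: are reflex times lower (faster)
    on supplement days than on non-supplement days?"""
    rng = random.Random(seed)
    observed = statistics.mean(off_days) - statistics.mean(on_days)
    pooled = list(on_days) + list(off_days)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel days at random
        perm_on = pooled[:len(on_days)]
        perm_off = pooled[len(on_days):]
        if statistics.mean(perm_off) - statistics.mean(perm_on) >= observed:
            count += 1
    return count / n_perm  # fraction of relabelings at least as extreme

# Illustrative reflex times in milliseconds (placeholder data).
on_days = [210, 205, 198, 215, 202, 208]
off_days = [220, 218, 211, 225, 216, 221]
p = permutation_test(on_days, off_days)
print(f"one-sided p-value: {p:.3f}")
```

A permutation test needs no distributional assumptions, which suits small noisy n-of-1 data; blinding and randomizing which days get the supplement would still be needed to call any effect causal.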
Can anybody confirm whether Paul is likely systematically silenced re OpenAI?
keltan on Stephen Fowler's Shortform
I'd like to see people who are more informed than I am have a conversation about this. Maybe at Less.online?
https://www.lesswrong.com/posts/zAqqeXcau9y2yiJdi/can-we-build-a-better-public-doublecrux [LW · GW]
keltan on Why you should learn a musical instrument
Only because I'm vegan. If I wasn't, I wouldn't be supplementing it.
I wish I could say I had a more accurate model. But my understanding doesn’t go deeper than DHA = Myelin = Faster processing
Was this purely a question? Or is there something I should look into here?
algon on Why you should learn a musical instrument
Why do you think DHA algae powder works?
martin-vlach on Language Models Model Us
Honestly, the code linked is not that complicated: https://github.com/eggsyntax/py-user-knowledge/blob/aa6c5e57fbd24b0d453bb808b4cc780353f18951/openai_uk.py#L11
martin-vlach on Language Models Model Us
To work around the top-n logprobs limit, you can supply a logit_bias map to the API.
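A minimal sketch of what such a request body might look like: the OpenAI chat completions API accepts a `logit_bias` map from token-ID strings to a bias in [-100, 100], so biasing candidate tokens upward can surface their logprobs even when they wouldn't make the default top-n cut. The model name, bias value, and token IDs below are placeholders, not real tokenizer output:

```python
import json

# Placeholder token IDs; in practice you'd get these from the
# model's tokenizer for the candidate answers you care about.
candidate_token_ids = [1234, 5678, 9012]

payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Describe this user."}],
    "logprobs": True,
    "top_logprobs": 5,
    # logit_bias maps token-ID strings to a bias in [-100, 100];
    # a large positive bias pushes the token into the returned top-n.
    "logit_bias": {str(t): 50 for t in candidate_token_ids},
    "max_tokens": 1,
}

print(json.dumps(payload, indent=2))
```

This only constructs the request body; sending it requires an API key and the usual client or HTTP call, and the bias magnitude would need tuning so it reveals relative logprobs without fully forcing the output.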
martin-vlach on Language Models Model Us
As the Llama 3 70B base model is said to be very clean (unlike the DeepSeek base model, for example, which is already instruction-contaminated) and similarly capable to GPT-3.5, you could explore that hypothesis.
Details: check Groq or TogetherAI for free inference; not sure if the test data would fit in the Llama 3 context window.
I just realized that Paul Christiano and Dario Amodei both probably have signed non-disclosure + non-disparagement contracts since they both left OpenAI.
That impacts how I'd interpret Paul's (and Dario's) claims and opinions (or the lack thereof) relating to OpenAI, or to alignment proposals entangled with what OpenAI is doing. If Paul has systematically silenced himself, and a large amount of Open Phil and SFF money has been misallocated because of systematically skewed beliefs those organizations held due to Paul's opinions or lack thereof, well. I don't think this is the case, though -- I expect Paul, Dario, and Holden have all converged on similar beliefs (whether or not those beliefs track reality) and have taken actions consistent with them.