LessWrong 2.0 Reader
avturchin · 2018-09-28T10:07:30.042Z · comments (6)
Some thoughts about the intro video: it's excellent but ends a little abruptly. It would be good to explain how AISafety.com plans to help. Also, I personally very strongly dislike flashing images (around the 45-second mark).
algon on Why you should learn a musical instrument
Since you seem interested in nootropics, I wonder if you've read Gwern's list of nootropic self-experiments? He covers a lot of supplements, some of which are pretty obscure AFAICT.
EDIT: https://gwern.net/nootropic/nootropics
This ability has been observed more prominently in base models. Cyborgs [LW · GW] have termed it 'truesight':
the ability (esp. exhibited by an LLM) to infer a surprising amount about the data-generation process that produced its prompt, such as a user's identity, motivations, or context.
Two cases of this are mentioned at the top of this linked post [LW · GW].
---
One of my first experiences with the GPT-4 base model [LW · GW] also involved being truesighted by it. Below is a short summary of how that went.
I had spent some hours writing and {refining, optimizing word choices, etc}[1] a more personal/expressive text, to see how the model would continue it. I formatted it as a blog post and requested multiple completions via the API.
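As a rough sketch of this kind of setup (the model name, blog-post framing, and sampling parameters here are my own illustrative assumptions, not the author's actual configuration):

```python
# Sketch: frame a personal text as a blog post, then sample several
# independent continuations from a base (completions) model.
def format_as_blog_post(title: str, body: str) -> str:
    """Frame raw text as a blog post so the model continues in that genre."""
    return f"# {title}\n\n{body}\n\n"

def sample_completions(prompt: str, n: int = 5, model: str = "davinci-002"):
    """Request n independent continuations via the legacy Completions endpoint."""
    from openai import OpenAI  # deferred import; needs OPENAI_API_KEY set

    client = OpenAI()
    resp = client.completions.create(
        model=model,          # placeholder: any completions-capable base model
        prompt=prompt,
        n=n,                  # number of independent samples
        max_tokens=512,
        temperature=1.0,
    )
    return [choice.text for choice in resp.choices]

if __name__ == "__main__":
    prompt = format_as_blog_post("some personal reflections", "I have been thinking about...")
    for completion in sample_completions(prompt):
        print(completion)
```

Each of the `n` samples is an independent continuation of the same prompt, which is how one completion can happen to be a simulated follow-up post.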
One of those completions happened to be a (simulated) second post titled 'ideas i endorse'. Its contents were very surprising to me because some of the included beliefs were all of the following: {ones I'd endorse}, {statistically rare}, and {not ones I thought were indicated by the text}.[2]
I also tried conditioning the model to continue my text with...
Also, one thing the model failed to truesight was my current focus on AI and longtermism. AI was not mentioned once.
The sum of those choices probably contained a lot of information about my mind, just not information that humans are attuned to detecting. Base models learn to glean information about authors because it is useful for next-token prediction.
Also note that using base models for this kind of experiment avoids the issue of the RLHF persona being unwilling to speculate, or being decoupled from the true beliefs of the underlying simulator.
To be clear, it also included {some beliefs that I don't have}, and {some that I {{hadn't so far and probably wouldn't have otherwise} spent cognitive resources on considering}, but would agree with on reflection}.
It also included some highly eccentric beliefs that I wouldn't agree with, like wanting to accelerate the 'entropic death of the universe.' (Though I can see a possible rationale: wanting to end suffering sooner. I'm deeply sympathetic to that, and I think it's tragic if the current understanding of reality is true, such that suffering will probably continue until the universe ends, even if only in 'small portions of the universe' (which are still quantitatively vast compared to life on Earth).)
I don't think 'accelerating entropy' is the optimal way to reduce suffering, though, for the independently sufficient reasons {I don't think we can actually accelerate it in a significant or universe-wide way} and {the local lightcone would be more effectively used on things like reducing suffering directly via acausal trade [LW(p) · GW(p)]}.
I mean, if Paul doesn't confirm that he is not under any non-disparagement obligations to OpenAI, like Cullen O'Keefe did, we have our answer.
In fact, given this information asymmetry, it makes sense to assume that Paul is under such an obligation until he claims otherwise.
keltan on Why you should learn a musical instrument
I wouldn't say I have a good grasp on nutrition either. But I spent a bit of time last year making sure I could parry any uncomfortable comments about my nutrition my family might make because of my veganism.
It seems the main thing is B12. Even the hardcore vegan types, who don't want to give an inch to the "other side", will admit this one is necessary. That makes me believe it really is.
What I’ll say in this next paragraph might be very wrong. If someone sees this and can call me on anything I’m wrong about, I’d love that.
Before going vegan I took fish oil. That's because I'd heard Omega-3 was "beneficial for brain function". That carried over when I went vegan, but I mostly ate walnuts as my source. Then I learnt that there are three Omega-3 acids. (I should have noticed my confusion about that "3", but I was not a rationalist at the time.) I then learnt that ALA gets converted into EPA or DHA. So by skipping ALA and going straight to DHA you potentially don't lose anything.
Looking back on this, I think when I’m nearing the end of my current DHA supply I might need to take another look at Omega 3 and its functions. Something about it still feels a little off.
gurkenglas on "No-one in my org puts money in their pension"
Sure, he's trying to cause alarm via alleged excerpts from his life. Surely society should have some way to move to a state of alarm iff that's appropriate; do you see a better protocol than this one?
algon on Why you should learn a musical instrument
No, it's just that my prior says nootropics almost never work, so I was wondering if you had some data suggesting this one did, e.g. by doing an RCT on yourself, or by using signal processing techniques to detect whether supplementing this stuff led to a causal change in reflex times, or so forth.
EDIT: Though I am vegan and I'm really ignorant about what makes for a good diet. So I'd be curious to hear why it's helpful for vegans to take this stuff.
Can anybody confirm whether Paul is likely systematically silenced re OpenAI?
keltan on Stephen Fowler's Shortform
I'd like to see people who are more informed than I am have a conversation about this. Maybe at Less.online?
https://www.lesswrong.com/posts/zAqqeXcau9y2yiJdi/can-we-build-a-better-public-doublecrux [LW · GW]
keltan on Why you should learn a musical instrument
Only because I'm vegan. If I wasn't, I wouldn't be supplementing it.
I wish I could say I had a more accurate model. But my understanding doesn’t go deeper than DHA = Myelin = Faster processing
Was this purely a question? Or is there something I should look into here?