LessWrong 2.0 Reader
The critiques of Lumina can stand separate from what the author thinks about Bay Area rationalism. "Haters gonna occasionally make some valid points" and such. Sometimes people who unfairly dislike you can also make valid critiques.
eggsyntax on Language Models Model Us
On reflection I somewhat endorse pointing the risk out after discovering it, in the spirit of open collaboration, as you did. It was just really frustrating when all my experiments suddenly broke for no apparent reason. But that's mostly on OpenAI for not announcing the change to their API (other than emails sent to a few people). Apologies for grouching in your direction.
akash-wasil on robo's Shortform
There are some conversations about policy & government response taking place. I think there are a few main reasons you don't see them on LessWrong:
If anyone here is interested in thinking about "40% agreement" scenarios, or more broadly in how governments should react in worlds where there is greater evidence of risk, feel free to DM me. Some of my current work focuses on the idea of "emergency preparedness": how we can improve the government's ability to detect & respond to AI-related emergencies.
jonas-hallgren on Examples of Highly Counterfactual Discoveries?
Sure! Anything more specific that you want to know about? Practice advice or more theory?
stephen-fowler on Stephen Fowler's Shortform
So the case for the grant wasn't "we think it's good to make OAI go faster/better".
I agree. My intended meaning is not that the grant is bad because its purpose was to accelerate capabilities. I apologize that the original post was ambiguous.
Rather, the grant was bad for numerous reasons, including but not limited to:
This last claim seems very important. I have not been able to find data that would let me confidently estimate OpenAI's value at the time the grant was given. However, Wikipedia mentions that "In 2017 OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone." This certainly makes it seem that the grant provided OpenAI with a significant amount of capital, enough to have increased its research output.
Keep in mind, the grant needs to have generated $30 million in EV just to break even. I'm now going to suggest some other uses for the money; these are just rough estimates, I haven't adjusted for inflation, and I'm not claiming they are the best uses of $30 million.
The money could have funded an organisation the size of MIRI for roughly a decade (basing my estimate on MIRI's 2017 fundraiser [EA · GW], using 2020 numbers gives an estimate of ~4 years).
Imagine the shift in public awareness if there had been an AI safety Super Bowl ad for 3-5 years.
Or it could have saved the lives of ~1300 children [EA · GW].
This analysis is obviously much worse if in fact the grant was negative EV.
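For concreteness, here is a minimal back-of-envelope sketch of the arithmetic behind these comparisons. The per-year and per-life figures are simply back-calculated from the numbers given above (a ~$30M grant, ~a decade / ~4 years of a MIRI-sized budget, ~1300 lives); they are illustrations of the comment's own estimates, not independent data.

```python
# Back-of-envelope arithmetic implied by the figures in this comment.
# All values are back-calculated from the comment's own numbers,
# not independent estimates, and ignore inflation.

grant_usd = 30_000_000  # size of the grant; also the break-even EV threshold

# "~a decade" on 2017 numbers and "~4 years" on 2020 numbers imply
# roughly these annual budgets for a MIRI-sized organisation:
implied_miri_budget_2017 = grant_usd / 10   # ~$3.0M per year
implied_miri_budget_2020 = grant_usd / 4    # ~$7.5M per year

# "~1300 children" implies roughly this cost per life saved:
implied_cost_per_life = grant_usd / 1300    # ~$23,000 per life

print(f"Implied MIRI annual budget (2017 basis): ${implied_miri_budget_2017:,.0f}")
print(f"Implied MIRI annual budget (2020 basis): ${implied_miri_budget_2020:,.0f}")
print(f"Implied cost per life saved: ${implied_cost_per_life:,.0f}")
```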
quila on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
I think this post is a red flag about your mental health. "I work so hard that I ignore broken glass and then walk on it" is not healthy.
Seems like a rational prioritization to me if they were in an important moment of thought and didn't want to disrupt it. (Noting of course that 'walking on it' was not intentional and was caused by forgetting it was there.)
Also, I would feel pretty bad if someone wrote a comment like this after I posted something. (Maybe it would have been better as a PM)
nnotm on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
In principle, I prefer sentient AI over non-sentient bugs. But the concern is that if non-sentient superintelligent AI is developed, it's an attractor state that is hard or impossible to get out of. Bugs certainly aren't bound to evolve into sentient species, but at least there's a chance.
weightt-an on LLMs could be as conscious as human emulations, potentially
I think I generally got your stance on that problem, and I think you are kind of latching onto an irrelevant bit and slightly transferring your confusion onto the relevant bits. (You could summarize it as "I'm conscious, and other people look similar to me, so they are probably conscious too, and by making the dissimilarity larger in some aspects, you make them less likely to be similar to me in that respect too" maybe?)
Like, the major reasoning step is "if EMs display human behaviors and they work by extremely closely emulating the brain, then by cutting off all other causes that could have made meaty humans display these behaviors, you get strong evidence that meaty humans display these behaviors because of the computational function that the brain performs".
And it would be very weird if some factors conspired to align and make emulations behave that way for a different reason than the one that causes meaty humans to display these behaviors. Like, alternative hypotheses are either extremely fringe (e.g. there is an alien puppet master that puppets all EMs as a joke) or have very weak effects (e.g. while interacting with meaty humans you get some weak telepathy that is absent while interacting with EMs).
So like, there is no significant loss of probability in going from meaty humans to high-res human emulations with identical behavior.
I said it at the start of the post:
It would be VERY weird if this emulation exhibited all these human qualities for some reason other than the one for which meaty humans exhibit them. Like, very extremely what the fuck surprising. Do you agree?
referring exactly to this transfer of the marker, whatever it could be. I'm not pulling it out of nowhere; I am presenting some justification.
As it stands, I can determine that I am conscious but I do not know how or why I am conscious.
Well, presumably it's a thought in your physical brain, "oh, looks like I'm conscious"; we could extract it with an AI mind reader or something. You are embedded into physics and cells and atoms, dude. Well, probably embedded. You can explore that further by affecting your physical brain and feeling the change from the inside. Just accumulating that intuition of how exactly you are expressed in the arrangement of cells. I think the near future will give us that opportunity, with fine control over our bodies and good observational tools. (And we can update on that predictable development in advance of it.) But you can start now by, I don't know, drinking coffee.
I would be very surprised if other active fleshy humans weren't conscious
But how exactly could you get that information, what evidence could you get? Like, what form of evidence are you envisioning here? I kind of get the feeling that you have "conscious" as a free-floating marker in your epistemology.
teatieandhat on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
"As there were no showers, on the last day of the project you could literally smell all the hard work I had put in.": that's the point where I'd consider dragging out the history nerds. This, for instance, could have been useful :-)
yanni-kyriacos on Examples of Highly Counterfactual Discoveries?
Hi Jonas! Would you mind saying a bit more about TMI + Seeing That Frees? Thanks!