I think this post is a red flag about your mental health. "I work so hard that I ignore broken glass and then walk on it" is not healthy.
Seems like a rational prioritization to me if they were in an important moment of thought and didn't want to disrupt it. (Noting of course that 'walking on it' was not intentional and was caused by forgetting it was there.)
Also, I would feel pretty bad if someone wrote a comment like this after I posted something.
nnotm on "If we go extinct due to misaligned AI, at least nature will continue, right? ... right?"
In principle, I prefer sentient AI over non-sentient bugs. But the concern is that if non-sentient superintelligent AI is developed, it's an attractor state that is hard or impossible to get out of. Bugs certainly aren't bound to evolve into sentient species, but at least there's a chance.
weightt-an on LLMs could be as conscious as human emulations, potentially
I think I generally get your stance on that problem, and I think you are latching onto an irrelevant bit and slightly transferring your confusion onto the relevant bits. (You could summarize it as: "I'm conscious, and other people look similar to me, so they are probably conscious too; and by making the dissimilarity larger in some aspect, you make them less likely to be similar to me in that respect too," maybe?)
Like, the major reasoning step is: if EMs display human behaviors, and they work by extremely closely emulating the brain, then by cutting off all other causes that could have made meaty humans display these behaviors, you get strong evidence that meaty humans display these behaviors because of the computational function the brain performs.
And it would be very weird if some factors conspired to make emulations behave that way for a different reason than the one that causes meaty humans to display these behaviors. The alternative hypotheses are either extremely fringe (e.g., there is an alien puppet master that puppets all EMs as a joke) or have very weak effects (e.g., while interacting with meaty humans you get some weak telepathy that is absent while interacting with EMs).
So like, there is no significant loss of probability in going from meaty humans to high-res human emulations with identical behavior.
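One way to make "no significant loss of probability" precise is a toy Bayesian update. The hypothesis names and every number below are invented purely for illustration; the only point is structural: the fringe alternative (the puppet master) starts with a tiny prior, and the weak-effect alternative (telepathy) poorly predicts identical EM behavior, so the posterior stays almost entirely on the brain-computation explanation.

```python
# Toy Bayes update with made-up numbers: why identical behavior from a
# high-res brain emulation leaves nearly all the probability on the
# computational explanation.

priors = {
    "brain computation": 0.98,   # behavior comes from what the brain computes
    "alien puppeteer":   1e-6,   # extremely fringe alternative: tiny prior
    "weak telepathy":    0.02,   # weak-effect alternative
}
# P(EM shows identical human behavior | hypothesis):
likelihoods = {
    "brain computation": 0.99,
    "alien puppeteer":   0.99,   # a puppeteer could fake it, but the prior is tiny
    "weak telepathy":    0.10,   # telepathy is absent for EMs, so this predicts poorly
}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posterior = {h: priors[h] * likelihoods[h] / evidence for h in priors}
for h, p in posterior.items():
    print(f"{h}: {p:.4f}")   # "brain computation" ends up near 0.998
```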
I said this at the start of the post:
It would be VERY weird if this emulation exhibited all these human qualities for some reason other than the one for which meaty humans exhibit them. Like, very extremely what-the-fuck surprising. Do you agree?
referring exactly to this transfer of the marker, whatever it might be. I'm not pulling it out of nowhere; I'm presenting some justification for it.
As it stands, I can determine that I am conscious but I do not know how or why I am conscious.
Well, presumably it's a thought in your physical brain, "oh, looks like I'm conscious"; we could extract it with an AI mind reader or something. You are embedded in physics and cells and atoms, dude. Well, probably embedded. You can explore that further by affecting your physical brain and feeling the change from the inside, accumulating an intuition of how exactly you are expressed in the arrangement of cells. I think the near future will give us that opportunity, with fine control over our bodies and good observational tools (and we can update on that predictable development in advance of it). But you can start now by, I don't know, drinking coffee.
I would be very surprised if other active fleshy humans weren't conscious
But how exactly could you get that information? What evidence could you get; what form of evidence are you envisioning here? I get the feeling that you are treating "conscious" as a free-floating marker in your epistemology.
teatieandhat on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University
"As there were no showers, on the last day of the project you could literally smell all the hard work I had put in.": that's the point where I'd consider dragging out the history nerds. This, for instance, could have been useful :-)
yanni-kyriacos on Examples of Highly Counterfactual Discoveries?
Hi Jonas! Would you mind saying a bit more about TMI + Seeing That Frees? Thanks!
keltan on On Privilege
Hmmm, I think the original post was an interesting idea. I think your comment points to something related but different. Perhaps taboo words?
keltan on keltan's Shortform
I've seen a lot about GPT-4o being kinda bad, and I've experienced that myself. This surprises me.
Now I will say something that feels like a silly idea. Is it possible that having the audio/visual part of the network cut off results in 4o's poor reasoning? As in, the whole model does some sort of audio/visual reasoning, but we don't have the whole model, so it can't reason in the way it was trained to.
If that is the case, I'd expect that when those parts are publicly released, scores on benchmarks will shoot up.
Do people smarter and more informed than me have predictions about this?
emrik-1 on The power of finite and the weakness of infinite binary point numbers
Learning math fundamentals from a textbook, rather than via one's own sense of where the densest confusions are, is sort of an oxymoron. If you want to be rigorous, you should do anything but defer to consensus.
And from a socioepistemological perspective: if you want math fundamentals to be rigorous, you'd encourage people to try to come up with their own fundamentals before they einstellung on what's been written before. If the fundamentals are robust, they're likely to rediscover them; if they aren't, there's a chance they'll revolutionize the field.
quetzal_rainbow on robo's Shortform
It depends on the overall probability distribution. Previously Eliezer thought something like p(doom | trying to solve alignment) = 50% and p(doom | trying to get an AI ban without alignment) = 99%, and then updated to p(doom | trying to solve alignment) = 99% and p(doom | trying to get an AI ban without alignment) = 95%, which makes pushing for an AI ban, even if pretty much doomed, worthwhile. But if you are, say, Alex Turner, you could start with the same probabilities but update towards p(doom | trying to solve alignment) = 10%, which makes publishing papers on steering vectors very reasonable.
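To make the comparison concrete: the decision rule here is just "pick the action with the lowest p(doom | action)". A minimal Python sketch, using the illustrative numbers paraphrased in the comment (not anyone's actual published estimates):

```python
# Pick whichever action minimizes the conditional probability of doom.
# All numbers are the illustrative estimates paraphrased in the comment.

def best_strategy(p_doom_given: dict) -> str:
    """Return the action with the lowest p(doom | action)."""
    return min(p_doom_given, key=p_doom_given.get)

# Earlier estimates attributed to Eliezer: alignment work wins.
early = {"solve alignment": 0.50, "AI ban without alignment": 0.99}
print(best_strategy(early))   # -> solve alignment

# After the update: an AI ban wins, even though both options look grim.
late = {"solve alignment": 0.99, "AI ban without alignment": 0.95}
print(best_strategy(late))    # -> AI ban without alignment

# Someone who instead updates to p(doom | alignment) = 10% (the Alex
# Turner case) keeps working on alignment, e.g. steering-vector papers.
turner = {"solve alignment": 0.10, "AI ban without alignment": 0.95}
print(best_strategy(turner))  # -> solve alignment
```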
The other reasons:
Replied in PM.