trevor's Shortform
post by trevor (TrevorWiesinger) · 2024-08-01T23:26:56.024Z · LW · GW · 4 comments
Comments sorted by top scores.
comment by trevor (TrevorWiesinger) · 2024-08-01T23:27:52.057Z · LW(p) · GW(p)
TL;DR "habitually deliberately visualizing yourself succeeding at goal/subgoal X" is extremely valuable, but also very tarnished. It's probably worth trying out, playing around with, and seeing if you can cut out the bullshit and boot it up properly.
Longer:
The universe is allowed to have tons of people intuitively notice that "visualize yourself doing X" is an obviously winning strategy, one that typically makes doing X a downhill battle if it's possible at all. So many different people pick it up that you first encounter it in an awful way: e.g. in middle/high school you hear about it for the first time, but the speaker says, in the same breath, that you should use it to feel more motivated to do your repetitive math homework for ~2 hours a day.
I'm sure people could find all sorts of improvements, e.g. an entire field of self-visualization-mancy that provably helps a lot of people do stuff, but the important thing I've noticed is to simply not skip that critical step. Eliminate ugh fields [? · GW] around self-visualization, or take whatever means necessary to prevent ugh fields from forming in your idiosyncratic case [EA · GW]. (Also, social media algorithms could have been measurably increasing user retention by boosting content that places ugh fields in spots that decrease agency/motivation, with or without the devs being aware of this, since they are looking at inputs and outputs or maybe just outputs; this could be a lot more adversarial than you were expecting.) Notice the possibility that this might have been a core underlying dynamic, without their awareness, in Yudkowsky's old Execute by Default post [LW · GW] or Scott Alexander's silly hypothetical talent differential comment [LW(p) · GW(p)].
The universe is allowed to give you a brain that perversely hinges on self-image instead of just taking the action. The brain is a massive kludge of parallel processing spaghetti code [LW · GW], and regardless of whether or not you see yourself as a very social-status-minded person, the modern human brain was probably heavily wired to gain social status in the ancestral environment, and whatever departures you have from that might be tearing down Chesterton-Schelling fences [LW · GW].
If nothing else, a takeaway from this was that the process of finding the missing piece that changes everything is allowed to be ludicrously hard and complicated, while the missing piece itself is simultaneously allowed to be very simple and easy once you've found it.
comment by trevor (TrevorWiesinger) · 2024-05-29T21:22:07.884Z · LW(p) · GW(p)
I just found out that hypnosis is real and not pseudoscience. Apparently the human brain has a zero day such that other humans can find ways to read and write to your memory, and everyone is insisting that this is fine and always happens with full awareness and consent?
Wikipedia says as many as 90% of people are at least moderately susceptible, and depending on how successful people have been over the last couple of centuries at finding ways to reduce detection risk per instance (e.g. developing and selling various galaxy-brained misdirection ploys), that seems like very fertile ground for salami-slicing attacks [LW · GW] which wear down partially resistant people.
The odds that something like this would be noticed and tested/scaled/optimized by competent cybersecurity experts and power lawyers seem pretty high (e.g. oscillating the screen refresh rate in non-visible ways to increase feelings of stress or discomfort and then turning it off whenever the user's eyes are about to go over specific kinds of words; slightly altering the color output of specific pixels across the screen in the shape of words and measuring effectiveness by whether it causally increases the frequency of people using those words; some way to combine these two tactics; something derived from the millions of people on YouTube trying hard to find a video file that hypnotizes them; etc.).
It's really frustrating living in a post-MKUltra world, where every decade our individual sovereignty as humans becomes more reliant either on very senior government officials (who are probably culturally similar to the type of person who goes to business school, and have been for centuries) consistently failing at all of the manipulation science they are heavily incentivized to diversify their research investment into, or on taking them at their word when they insist that they genuinely believe in protecting democracy and that the bad things they get caught doing are in service of that end. Also, they seem to remain uninterested in life extension, possibly due in part to being buried deep in a low-trust dark forest (is trust even possible at all if you're trapped on a planet with hypnosis?).
Aside from the incredibly obvious move to cover up your fucking webcam right now, are there any non-fake defensive strategies to reduce the risk that someone walks up to you/hacks your computer and takes everything from you? Is there some reliable way to verify that the effects are consistently weak or that scaling isn't viable? The error bars are always really wide for the prevalence of default-concealed deception (especially when it comes to stuff that wouldn't scale until the 2010s), making solid epistemics a huge pain to get right, but the situation with directly reading and writing to memory is just way way too extreme to ignore.
↑ comment by trevor (TrevorWiesinger) · 2024-05-30T22:21:47.366Z · LW(p) · GW(p)
The best thing I've found so far is to watch a movie, and whenever the screen flashes, or any moment you feel weirdly relaxed or notice any other weird feeling, quickly turn your head and eyes ~60 degrees and gently but firmly bite your tongue.
Doing this a few minutes a day for 30 days might substantially improve resistance to a wide variety of threats.
Gently but firmly biting my tongue, for me, also seems like a potentially very good general-use strategy for returning the mind to an alert and clear-minded base state; it seems like something Critch recommended [LW · GW], e.g. for initiating a TAP flowchain [? · GW]. I don't think this can substitute for a whiteboard, but it sure can nudge you towards one.
comment by trevor (TrevorWiesinger) · 2024-04-05T21:45:07.413Z · LW(p) · GW(p)
I think that "long-term planning risk" and "exfiltration risk" are both really good ways to explain AI risk to policymakers. Also, "grown not built".
They delineate pretty well some criteria for what the problem is and isn't. Systems that can't do long-term planning or exfiltrate themselves are basically not the concern here (although theoretically there might be a small chance of very strange things ending up growing in the mind-design space [LW · GW] that cause human extinction without long-term planning or knowing how to exfiltrate).
I don't think these are better than the fate-of-humans-vs-gorillas analogy, which is a big reason why most of us are here, but splitting the AI risk situation into easy-to-digest components, instead of logically/mathematically simple components, can go a long way (depending on how immersed the target demographic is in social reality [? · GW] and low-trust [LW · GW] environments).