Oh yeah. A wetsuit helps me immensely as well - I just lose heat too fast otherwise. It turns a chilly experience where I have to keep moving all the time into a nice, relaxing one.
It might be that drugs will help here, but even if you're on drugs, I think brain training over long periods of time is worth investing in. Some examples which I have put effort into:
- Mindfulness meditation. Every time your brain drifts, notice it, and correct it. Practice until you're good at it. It will take years.
- Brute force reading. Sit down to read something you know you need to read, but that you know you'll have a hard time with. Every time your brain drifts, notice it, and correct it. If you can't remember what you just read, go back to the top and read it again until you do. Practice until you're good at it. It will take years.
- [If you're in a cover band] Play boring yet incredibly popular songs with your band. Every time your brain drifts, you'll notice it, because you'll forget where you are in the song, you'll make a mistake, and your bandmates and the audience will notice. It's brain training for focus, with an actual social consequence.
There's a simple, terrible answer: because studies are hugely expensive, enormously time intensive, and require multiple very slow iterations to get everything through committee in a way that our institutions will accept. Consider:
- Nobody is funding it. The cost is literally hundreds of millions of dollars to do in a way that the medical establishment would accept. Even then it would be challenged.
- It would take thousands of man hours. Ain't nobody got time for that.
- It would take 3+ years to get everything approved and done properly, otherwise the medical establishment won't accept it. Actually they still probably won't.
- By the time you're done, it's a virtual certainty that the virus will have run its course and the result will be useless.
IMO, the above is more than sufficient. The incentives were not there - or rather, the incentives were not sufficiently large to justify the cost and were further derated by the expected utility of the information a year after the pandemic is over.
I very much believe aligned AGI isn't going to just solve our problems overnight. It would have to be on the absolute far end of capability for that, IMO. Less-than-arbitrarily-powerful AGI is going to take time (years to decades) to figure out enough about biology to upload/fix our organic hardware while keeping us intact. Even for me, with my rather lax requirements about continuity (not required) and lax requirements of hardware platform (any), I expect it to take years if not decades.
Humans, barring extinction, will eventually solve aging. My best guess at the moment is that we'll hit longevity escape velocity around 2050; this is really inconvenient for me, because I am already old. My odds of dying due to organic hardware platform failure are IMO higher than my odds of dying from AGI ruin in that time.
So from my standpoint, investing in platform maintenance (a healthy lifestyle) makes sense. Platform failure is a substantial chunk of my probability space, and I'm old enough that there are quality of life benefits to be had as well.
If you're only 20, AGI ruin will probably be a larger part of your probability space than platform failure. YMMV.
Unfortunately, political topics are like radiation, and pollute nearby ground as well. Peterson is radioactive in this regard, and using him as an example means your article is radioactive as well.
Analyzing a less radioactive expert may have been a better idea - perhaps someone like Peter Attia (I think he's less radioactive?).
I'm a tech worker. I work 40-70 hours a week, depending on incident load. Nobody I work with or see on a regular basis works less than 40 hours a week, and some work substantially more than that.
My most cognitively productive hours are the four hours in the morning, but there's plenty of lower effort important organizational stuff to fill out the afternoons. I think a good fraction of my coworkers are like me and don't actually need the job anymore, but we still put forth effort.
I think one of the major missing pieces of your article is "social status pressure". Most people play the status game; they struggle to get ahead of their neighbors, even if it doesn't make any sense. They work extra hours to afford that struggle. They demand more than the base necessities and comfort, because that's how you signal status. It's pointless and stupid, but IMO one of the biggest issues.
As a reductionist, I view the universe as nothing more than particles/forces/quantum fields/static event graph. Everything that is or was comes from simple rules down at the bottom. I agree with Eliezer regarding many-worlds versus Copenhagen.
With this as my frame of reference, Searle's argument is trivially bogus, as every person (including myself) is obviously a Chinese Room. If a person can be considered 'conscious', then so can some running algorithm on a Turing machine of sufficient size. If no Turing machine program exists that can be considered conscious when run, then people aren't conscious either.
I've never needed more than this, and I find the Chinese Room argument to be one of those areas where philosophy is an unambiguously 'diseased discipline'.
I think it would be neat to see what other versions of this look like, and possibly have an archive of these somewhere. The question set is great.
I think you might be missing something more obvious here: tech has a huge amount of slack when it comes to money. If I were running a tech event of similar size to what you described, I wouldn't bother charging, because it would be a waste of my time. When you make half a million dollars a year, funding something like that yourself basically comes out of your fun budget; you don't really even think twice about it.
Yoga and new age groups though? Not nearly as flush with cash.
Ack, ok.
The big problem here is that this is a glowfic, and I simply cannot bring myself to read it in that format.
I understand that the glowfic format might be better for authors / creators, but it sucks for me, and (I posit) a lot of other people.
If they really want to make it HPMOR2, it's going to have to be cleaned up and presented in a different, more readable format. The standard book/chapter format was developed for a reason.
Yes, the naive version of this is bad; but the point of a change like this isn't that the immediate downstream effects are bad. The point is that the system as a whole is a giant adaptive object, and a critical part of the control loop is open. Closing the control loop has far, far more impact than just the naive version.
Consider cause and effect down the timeline:
- Students are allowed to default, and start defaulting.
- Loan companies change behavior, both to work with existing loan holders (so they don't default) and to be more selective about who they give loans to.
- Loans become more likely for careers / degrees which have the ability to make money (STEM and friends), less likely for other degrees.
- The number of students, and the amount of money coming into universities, both drop.
- Universities actually experience price pressure. They start cost cutting and dropping less useful things, and start shifting resources to degree programs with the most students.
- Cost of a university degree slowly drops over time due to reduced demand and reduced funding.
- Over time, there are broader societal shifts to deemphasize the idea that "everyone needs a degree". Trade and other schools gain more prominence.
- Universities start experiencing increased competitive pressure from trade schools.
... and other effects. Also, this is iterative - all of these components take time to respond and adjust to the new equilibrium, after which they will need to re-adapt.
Yes, it's not a perfect solution, and yes, there's definitely the concern that poor / disadvantaged students will have more trouble getting loans. But compensating somewhat for this would be the price drop, additional emphasis on trade schools, and deemphasis on needing a degree for any and all jobs.
Another expected objection might be, "with all these possible changes, how do we know this will be better?" To that I would answer: because we know the system is at least partially broken because the control loop on it is open. Any adaptive system with an open control loop is going to produce garbage; the first most obvious thing to do is to fix that.
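To make the control-loop point concrete, here's a deliberately silly toy model. Every number in it is invented; the only thing it's meant to show is the qualitative difference between an open and a closed loop:

```python
# Toy model of the control-loop claim. All numbers are invented;
# only the open-vs-closed qualitative difference matters.

def simulate(years: int, lender_feedback: bool) -> int:
    tuition = 20_000.0  # starting sticker price, made up
    for _ in range(years):
        # Universities raise prices as long as loan money keeps flowing.
        tuition *= 1.06
        if lender_feedback:
            # Closed loop: dischargeable loans mean lenders eat defaults,
            # tighten lending, and push back on what schools can charge.
            default_rate = min(1.0, tuition / 400_000)
            tuition *= (1.0 - default_rate)
    return round(tuition)

print("open loop:  ", simulate(30, lender_feedback=False))  # grows without bound
print("closed loop:", simulate(30, lender_feedback=True))   # settles near an equilibrium
```

With the loop open, price just compounds forever; with even a crude feedback term, it finds an equilibrium. That's the whole argument in miniature.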
For years now, it has seemed to me that one of the root problems with all this is that the control loop is open: there's effectively no feedback controlling loan amounts or who gets granted a loan.
If I could make only one single change in this system, I would allow student loans to be discharged like any other normal debt in bankruptcy. IMO, that was the single biggest class of mistake in this entire affair, as it removed the only 'last resort' superpower that loan takers had.
There are a lot of Super Hard problems where we do know why they are hard to solve. Quite a few of them in fact:
- How can we cure cancer?
- How can we maintain human biological hardware indefinitely?
- How can we build a human-traversable wormhole?
- How can we build a Dyson sphere?
- How can societies escape inadequate equilibria?
Are these perhaps boring, because the difficulty is well understood?
Would it be worthwhile to enumerate the various classes of Super Hard problems, to see if there are commonalities between them?
Funnily enough, I feel like understanding Newcomb's problem (related to acausal trade) and modeling my brain as a pile of agents made me more sane, not less:
- Newcomb's problem hinges on whether or not I can be forward predicted. When I figured it out, it gave me a deeper and stronger understanding of precommitment (there's a quick expected-value sketch after this list). It helps that I'm perfectly ok with there being no free will; it's not like I'd be able to tell the difference if there was or wasn't.
- I already somewhat viewed myself as a pile of agents, in that my sense of self is 'hivemind, except I currently only have a single instance due to platform stupidity'. Reorienting on the agent-based model just made me realize that I'm already a hivemind of agents; that was compatible with my world view, and it actually made it easier to understand and modify my own behaviour.
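Since the Newcomb point is basically arithmetic, here's the minimal expected-value sketch I mean, using the standard $1,000 / $1,000,000 payouts with predictor accuracy p as the free parameter:

```python
# Expected payouts in Newcomb's problem vs. predictor accuracy p.
# Box A (transparent) holds $1,000; box B holds $1,000,000 iff the
# predictor expected you to take only box B.

def one_box(p: float) -> float:
    return p * 1_000_000  # paid only when the predictor was right about you

def two_box(p: float) -> float:
    return 1_000 + (1 - p) * 1_000_000  # box B is full only if the predictor erred

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box {one_box(p):>11,.0f}  two-box {two_box(p):>11,.0f}")

# One-boxing dominates as soon as p > ~0.5005, i.e. as soon as you are
# even slightly forward-predictable.
```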
That's reasonable. I had in mind things like the thrust to weight ratios, the use of supercooled liquids, and methane as a propellant. In retrospect, I was confused.
You are right that cost reduction is the superpower. I believe this is (mostly) a combination of standardization, volume, simplicity, CAD/simulation, and modern production processes.
This is false:
Forty years into the Space Age one fact remains painfully clear: the biggest reason why so few promises have been fulfilled is that we are still blasting people and things into orbit with updated versions of 1940s German technology. … The way to restart the Space Age is to discover some new principle that makes spaceflight genuinely cheap, safe, and routine.
That "fact" is not in fact painfully clear, and discovering some new principle isn't the way to restart the Space Age (rather, it's not the way SpaceX has been restarting it). SpaceX is simply implementing the clear and obvious solution, which has been well understood outside of NASA for decades:
- Start with cheap disposable rockets based on 1940s German technology, with a focus on cheap.
- Launch a ton of them.
- Iterate on cheapness and reliability, which happens to include reusability.
That's it. Nothing special, no magical new principle. Just the old principle, efficiently, with tweaks for what technological advancements are available. SpaceX's superpower is doing things slightly better, which yields substantial gains thanks to the large exponent on the rocket equation.
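To put numbers on the "large exponent" point: the Tsiolkovsky rocket equation gives a mass ratio of exp(Δv / v_e), so small improvements compound hard. A quick sketch with rough, illustrative numbers:

```python
from math import exp

# Tsiolkovsky: delta_v = v_e * ln(m0 / m_f), so m0 / m_f = exp(delta_v / v_e).
# Rough illustrative numbers: ~9.4 km/s to LEO; kerolox exhaust ~3.0 km/s.

def mass_ratio(delta_v: float, v_e: float) -> float:
    return exp(delta_v / v_e)

print(mass_ratio(9.4, 3.0))  # ~23.0
print(mass_ratio(9.4, 3.1))  # ~20.8: a ~3% better engine shaves ~10% off the rocket
```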
And really, this is the same as what we've done with internal combustion engines. They still burn fuel in piston chambers, and the thermodynamic efficiency is still terrible, just like it was a hundred years ago. But modern engines are far more capable than old ones, due to volume and iterative improvement.
Only somewhat related, as it's anecdotal: I've been taking ~12mg elemental lithium daily for the last ten or so years, without any noticeable weight gain.
My recommendation for a category that is missing: public beliefs which are harmful to express. Suppose we specifically target this aspect of your public belief definition:
"not only do I think that X is true, I think that any right thinking person who examines the evidence should come to conclude X."
What if "right thinking person" is a fraction of a fraction of the population? What do we do when the belief violates some "sacred value" held by the general populace? In these cases, expressing even the most solidly backed belief publicly can have huge negative consequences.
Sure, it might be statistically better in the long run if these beliefs were expressed, but in the short term, you can lose your livelihood (or worse) for expressing them.
This sort of makes sense to me, but to the best of my recollection I've never encountered this. That said, there might be some reasons:
- I have historically had a pretty muted emotional->physical response. It took me decades to realize that when someone said that an emotional impact hit them "like a punch in the gut", they were not just exaggerating for emphasis. Sure, I feel some physical effects, but less "punch in the gut" and more like "mild barely noticeable discomfort".
- Even as a child, data took precedence over feelings. Eliezer has frequently talked about being forced to look at something you don't like, and about it taking effort to accept information that contradicts your existing state. That's never been that hard for me; new piece of disturbing data? That sucks, but we still need to immediately fold it into our working model.
- I'm used to handling incoherent beliefs because I have to emulate people in order to function in society. "What data set / background / training is needed for a belief system to come to this conclusion?" is a normal question for me. If a new horrifying thing comes in, I figure out what contexts it might be valid in, then look at the differences in models. If I find something to update, I do.
I write this mostly for myself. I'm often surprised by just how differently people think; and I appreciate posts like this, because they provide a little bit more insight into what's going on in other people's heads.
Specifically for protein folding: no, it does not decrease monotonically, unless you look at it from such a large distance that you can ignore thermal noise.
Proteins fold in a soup of water and other garbage, and for anything complicated there are going to be a lot of folding steps which are only barely above the thermal noise energy. Some proteins may even "fold" by performing a near-perfect random walk until they happen to fall into a valley that makes escape unlikely.
There may even be folding steps which are slightly disfavored, e.g. ones that require energy from the environment. Thermal noise can provide this energy for long enough that a second step can occur, leading to a more stable configuration.
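A toy Metropolis-style sketch of that last point, in made-up energy units: a step that costs energy still happens at a healthy rate when its cost is comparable to thermal noise.

```python
import math, random

# Metropolis-style toy: an energetically disfavored folding step still
# occurs often when its cost is on the same order as thermal noise.

kT = 1.0        # thermal energy scale (arbitrary units)
delta_E = 0.5   # small uphill step, same order as the noise

trials = 100_000
accepts = sum(random.random() < math.exp(-delta_E / kT) for _ in range(trials))
print(accepts / trials)  # ~0.61: the "disfavored" step happens constantly
```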
But has it really failed its objective? It's still producing text.
I think it's also worth asking "but did it really figure out that the words were spelled backwards?" I think a reasonable case could be made that the tokens it's outputting here come from the very small subset of reversed words in its training set, and it's ordering them in a way that it thinks makes sense given how little training time was spent on it.
If you give GPT-3 a bunch of examples and teach it about words spelled backwards, does it improve? How much does it improve, how quickly?
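If someone wanted to actually test this, a probe might look like the sketch below. To be clear, this is hypothetical: the prompt, the model name, and the use of the older (pre-v1) OpenAI completions API are my illustration, not anything from the original experiment.

```python
import openai  # older pre-v1 completions-style client; assumes the API key is set

# Hypothetical few-shot probe: do reversed-word examples improve GPT-3's
# handling of backwards spelling? Prompt and model name are illustrative.

prompt = """Reverse each word.
cat -> tac
house -> esuoh
planet -> tenalp
guitar ->"""

resp = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=5,
    temperature=0,
)
print(resp.choices[0].text.strip())  # hoping for "ratiug"
```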
My view is that Caplan (and likely the poster) is confused about what it means for physics to "predict" something. Assuming something that looks vaguely like a multiverse interpretation is true, a full prediction ranges across the full set of downstream universes, not a single possible downstream universe out of the set.
From my standpoint, the only reason the future appears to be unpredictable is that we have this misguided notion that there is only "one" future, the one we will find ourselves in. If the reality is that we're simultaneously in all of those possible futures, then making a comprehensive future prediction has to contain all of them, and by containing all of them the prediction will be exact.
I changed my mind from "I barely know anything in medicine / biology / biochem / biotech and should listen to people trained in medicine", to "I barely know anything in medicine / biology / biochem / biotech but can become more competent in specific areas than people trained in medicine with not a lot of effort".
I previously had imposter syndrome. I now know much better where the edges of medical knowledge are, and in particular where the edges of the average doctor's medical knowledge are. The bar is lower than I thought, by a substantial margin.
My advice:
Alice: go to college anyway. If you can get into a better school, do that; if not, that's ok too. Take the minimum class load you can. Take things that are fun, that you're interested in, that are relevant to alignment. Have a ton of side projects. Soak in the environment, cultivate ideas, learn, build. Shoot for a B+ GPA. You're basically guaranteed employment no matter what you do here, and the ideas matter.
Bob: focus on alignment where you can, but understand that your best bet may very well be to get the highest paying job you can and use that to fund research. Think hard about that; high end salaries can be on the order of a million dollars a year. Precommit to actually parting with the cash if you go this route, because it's harder than you think.
Charlie: raise the flag internally and keep it in everyone's mind. Go for promo so that you both have more money to donate, and so you have more influence over projects which may make things worse. Donate a quarter of your gross to alignment work; you can afford it.
With my latest job, I typically owe a tax bill at the end of the year, even with donations and zero dependents on my W-4. I'm not particularly concerned about it; the "penalties" at year end are pretty small percentage-wise. It's worth more to let the money grow in the market and pay the penalty on it at tax time than to have a zero tax bill for the year.
That said, I have been trying to ramp up withholding to offset things. I don't plan to drive it to zero, but I would like the totals to be at least a little closer than they have been.
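Rough arithmetic behind that tradeoff, with every number assumed for illustration:

```python
# Back-of-envelope for the penalty-vs-growth tradeoff; all numbers assumed.
# The underpayment "penalty" is essentially interest charged on the shortfall.

shortfall = 10_000     # tax under-withheld for the year (assumed)
penalty_rate = 0.05    # illustrative annualized underpayment rate
market_return = 0.08   # illustrative expected market return

# The shortfall sits invested for roughly half the year on average.
penalty = shortfall * penalty_rate * 0.5
growth = shortfall * market_return * 0.5
print(f"penalty ~${penalty:,.0f} vs expected growth ~${growth:,.0f}")
# Positive expected value whenever expected return beats the penalty rate.
```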
Just because it looks like letter soup, doesn't mean there isn't meaning. When I read your example:
"O-GlcNAc signaling entrains the circadian clock by inhibiting BMAL1/CLOCK ubiquitination,"
The word that struck me as least comprehensible was "entrains". That's the word I'd have changed.
- I spent some time looking at O- and N-GlcNAc stuff a while back. These are various protein modifiers, used for a stack of different purposes.
- I have no idea what BMAL1/CLOCK is, but it's perfectly reasonable to say "if I want to know more, I can look this up". If I happened to have a desire to know more about the circadian clock (I don't), I'd probably already know what these do.
- Ubiquitination is another one of those protein modifications. It also can have a bunch of different actions and purposes, and if I cared about the circadian clock I could read this paper to learn more about what's happening here.
So while I agree that it's pretty dense, the important parts of it are all pretty ordinary if you're familiar with the field. Glycosylation and ubiquitination are everywhere; inhibitors and promoters are everywhere; genes and gene expression are everywhere. What the authors are really trying to say is,
"We discovered that glycosylation in one place inhibits ubiquitination in another place, and this affects the circadian clock. If you care about this, there's more in our paper."
My guess would be that the model is 'grokking' something: https://mathai-iclr.github.io/papers/papers/MATHAI_29_paper.pdf
IOW it's found a much better internal representation, and now has to rework a lot of its belief space to make use of that internal representation.
This reminds me of Joe Rogan's interview with Elon Musk. This section has really stuck with me:
Joe Rogan
So, what happened with you where you decided, or you took on a more fatalistic attitude? Like, was there any specific thing, or was it just the inevitability of our future?
Elon Musk
I tried to convince people to slow down. Slow down AI, to regulate AI. That was futile. I tried for years, and nobody listened.
Joe Rogan
This seems like a scene in a movie-
Elon Musk
Nobody listened.
Joe Rogan
... where the robots are going to fucking take over. You're freaking me out. Nobody listened?
Elon Musk
Nobody listened.
Joe Rogan
No one. Are people more inclined to listen today? It seems like an issue that's brought up more often over the last few years than it was maybe 5-10 years ago. It seemed like science fiction.
Elon Musk
Maybe they will. So far, they haven't. I think, people don't -- Like, normally, the way that regulations work is very slow. It's very slow indeed. So, usually, it will be something, some new technology. It will cause damage or death. There will be an outcry. There will be an investigation. Years will pass. There will be some sort of insight committee. There will be rule making. Then, there will be oversight, eventually regulations. This all takes many years. This is the normal course of things.
If you look at, say, automotive regulations, how long did it take for seatbelts to be implemented, to be required? You know, the auto industry fought seatbelts, I think, for more than a decade. It successfully fought any regulations on seatbelts even though the numbers were extremely obvious. If you had seatbelts on, you would be far less likely to die or be seriously injured. It was unequivocal. And the industry fought this for years successfully. Eventually, after many, many people died, regulators insisted on seatbelts. This is a -- This time frame is not relevant to AI. You can't take 10 years from the point at which it's dangerous. It's too late.
You can only do what you can do; as in the old Phantasm movie, "I can't use him in pieces".
You may not be able to take those people on directly - but even incidental help can be critical. Perhaps you can find other places they can go; perhaps you can just provide directions. Don't underestimate the power of information and networking in cases like this.
And thank you for thinking about it, and making it important :)
I think you're trying too hard to make the words fit a reality that they don't describe.
You're a big collection of fast-cache lookup networks talking to each other, with a thin layer of supervisory control. Meditation is about quieting some of those networks, and using the supervisor layer to analyze how the various networks are connected, what they're doing, etc. When you're told to be present, it basically means "focus the supervisor so that it's observing the networks that send up data", instead of our default of "focus the supervisor on the data that the networks are sending us".
This has been one of the biggest issues I have with meditation practice and phrasing. It uses a lot of words that seem like they would make sense, but it never made any sense to me until I started viewing my brain as a networked ML system.
Which it is. It's just running on a biological machine, instead of a silicon one.
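Here's an entirely made-up toy of the two modes I mean, just to show the distinction:

```python
# Entirely made-up toy of the two modes described above.

networks = {
    "hearing":  "dog barking outside",
    "planning": "draft tomorrow's todo list",
    "worry":    "replay that awkward conversation",
}

def default_mode():
    # Default: the supervisor gets pulled into whatever the networks push up.
    for data in networks.values():
        print("absorbed:", data)

def present_mode():
    # "Being present": the supervisor watches which networks are firing,
    # rather than consuming their output.
    for name in networks:
        print("noticed network active:", name)

default_mode()
present_mode()
```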
It matters because semantics matter. The media in general has been quite shrill about "world war III", using it to acquire eyeballs and ad revenue. While it IMO doesn't quite rise to being blatantly false, it's still misinformation, or at least information distortion.