I think most people have short term, medium term, and long term goals. E.g., right about now many people probably have the goal of doing their taxes, and depending on their situation those may match many of your desiderata.
I used to put a lot of effort into creating exercises, simulations, and scenarios that matched up with various skills I was teaching, but ultimately found it much more effective to just say "look at your todo list, and find something that causes overwhelm". Deliberate practice consists of finding a thing that causes overwhelm, seeing how to overcome that overwhelm, working for two minutes, then finding another task that induces overwhelm. I also use past examples, imagining in detail what it would have been like to act in this different way.
You're operating in a slightly different domain, but I still imagine people have plenty of problems and sub-problems in either their life or research where the things you're teaching apply, and you can scope them small enough to get tighter feedback loops.
Why not just have people spend some time working with their existing goals?
I usually explain my process these days to clients with the acronym LIFE: Learn New Tools, Integrate Resistance, Forge an Identity, Express Yourself.
Learn New Tools is cognitive-emotional strategies, of which TYCS is an example. Fwiw, some of TYCS is actually deliberate practice to discover cognitive strategies (as compared to something like CFAR, which extracts and teaches them directly), but the result is the same.
The important thing is just to have a clear tool: give people something they know they can use in certain situations, that works immediately to solve their problems.
But the thing is, people don't use them, because they have resistance. That's where parts work and other resistance integration tools come into play.
Even when that's done, there's still the issue that you don't automatically use the techniques. This is where Forge an Identity comes in, where you use identity-change techniques to bring the way you see yourself into alignment with the way of being that the technique brings out. (This is one thing TYCS gets wrong in my opinion: trying to directly reinforce the cognitive strategies instead of creating an identity and reinforcing the strategies as affirming that identity.)
Finally, that identity needs to propagate to every area of your life, so there are no situations where you fail to use the technique and way of being. This is just a process of looking at each area, seeing where it's not in alignment with the identity, then deliberately taking an action to bring it to that area.
IME all of these pieces are needed to make a life change from a technique, although it's rarely as linear as I describe it.
The way I do this with my clients is that we train cognitive tools first, then find the resistance to those habits and work on it using parts work.
can you give examples?
I can hover over quick takes to get the full comment, but not popular comments.
Why not show the top-rated review, like you do at the top of the page?
The art change is pretty distracting, and having to hover to see the author is also a bummer, plus no way to get a summary (that I can see).
It's seemingly optimized for a "judge a book by its cover" type of thing where I click around until I see a title and image I like.
Appreciated this writeup.
How long have you been using the tool, and do you find any resistance to using it?
Do you always assume the underlying issue, or do you do Focusing each time? Or do you find a contradictory experience through intuition without knowing why it works?
I haven't looked into this recently, but last time I looked at the literature behavioral interviews were far more predictive of job performance than other interviewing methods.
It's possible that they've become less predictive as people started preparing for them more.
Thanks. Appreciate this. I'm going to give another shot at writing this.
Request for feedback: Do I sound like a raving lunatic above?
A surprising thing I've found as I begin to study and integrate skillful coercive motivation is the centrality of belief in providence and faith to this way of motivating yourself. Here are some central examples: the first from War of Art, the second from The Tools, the third from David Goggins. These aren't cherry-picked (this is a whole section of War of Art and a whole chapter of The Tools).
This has interesting implications given that as a society (at least in America) we've historically been motivated by this type of masculine, apollonian motivation - but have increasingly let go of faith in higher powers as a tenet of our central religion, secular humanism. This means the core motivation that drives us to build, create, transcend our nature... is running on fumes. We are motivated by gratitude, w/o a sense of to what or whom we should be grateful, told to follow our calling w/o a sense of who is calling.
We've tried to hide this contradiction. Our seminaries separate our twin religions (Secular Humanism and Scientific Materialism) into STEM and humanities tracks to hide that what motivates the humanities to create is invalidated by the philosophy that allows STEM to discover. But this is crumbling: the cold philosophy of scientific materialism is eroding the shaky foundations that allow secular humanists to connect to these higher forces - this is one of the drivers of the meaning crisis.
I don't really see any way we can make it through the challenges we're facing with these powerful new technologies w/o a new religion that connects us to the mystical truly wise core that allows us to be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism, and what Monastic Academy is trying to do with a new, mystical form of dataism - but both these projects are moonshots to massively change the direction of culture.
Or invisible?
The original reasoning that Eliezer gave if I remember correctly was that it's better to make people realize there are unknown unknowns, rather than taking one specific strategy and saying "oh, I know how I would have stopped that particular strategy"
Some quick calculations from ChatGPT put the value of a British government bond (Britain being considered the world power then) at about equal to the value of gold, assuming a fixed interest rate of 3%, with gold coming out slightly ahead.
I haven't really checked these calculations, but they pass the sniff test (except the part where ChatGPT tried to adjust today's dollars for inflation).
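The compounding side of that comparison is easy to sanity-check yourself. Here's a minimal sketch; the 3% rate comes from the comment above, but the 200-year horizon is an assumption for illustration, and this ignores taxes, defaults, and reinvestment friction.

```python
def bond_multiple(rate: float, years: int) -> float:
    """Nominal growth multiple of a fixed-rate bond with coupons reinvested."""
    return (1 + rate) ** years

# A bond at an assumed fixed 3%, compounded over an assumed 200 years:
multiple = bond_multiple(0.03, 200)
print(f"{multiple:.0f}x")  # roughly 369x in nominal terms
```

To finish the comparison you'd set this multiple against the ratio of gold's price today to its price at the start of the period, which is where the inflation-adjustment subtleties mentioned above come in.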
This is not what I mean by wisdom.
Compared to what? My guess is it's a better bet than most currencies during that time, aside from a few winners that it would have been hard to predict ahead of time. E.g., if 200 years ago, you had taken the most powerful countries and their currencies, and put your money into those, I predict you'd be much worse off than gold.
I think oftentimes what's needed to let go of grief is to stop pushing it away; in doing that, it may be felt more fully, and once the message is received, you can let it go. This process may involve fully feeling pain that you were suppressing.
It doesn't hurt the way pity or lamenting might; there's no grief in it, just well-wishing.
While true, I think there's a caveat that often the thing preventing the feeling of true love from coming forth can be unprocessed grief that needs to be felt, or unprocessed pain that needs to be forgiven.
I think there's a danger in saying "if love feels painful you're doing this wrong" as often that's exactly the developmentally correct thing to be experiencing in order to get to the love underneath.
I couldn't pass an ITT for advocates of Islam or extrasensory perception. On the one hand, this does represent a distinct deficit in my ability to model what the advocates of these ideas are thinking, a tragic gap in my comprehension of reality, which I would hope to remedy in the Glorious Transhumanist Future if that were a real thing. On the other hand, facing the constraints of our world, my inability to pass an ITT for Islam or ESP seems ... basically fine? I already have strong reasons to doubt the existence of ontologically fundamental mental entities. I accept my ignorance of the reasons someone might postulate otherwise, not out of contempt, but because I just don't have the time.
I think there's a hidden or assumed goal here that I don't understand. The goal clearly isn't truth for its own sake, because then there wouldn't be a distinction between the truth of what they believe and the truth of what's real. You can of course make a distinction such as Simulacra levels, but ultimately it's all part of the territory.
If the goal is instrumental ability to impact the world, I think probably a good portion of the time it's as important to understand people's beliefs as the reality, because a good portion of the time your impact will be based on not just knowing the truth, but convincing others to change their actions or beliefs.
So what actually is the goal you are after?
I think this post has decent financial advice if you believe in near-term AGI.
https://www.lesswrong.com/posts/CTBta9i8sav7tjC2r/how-to-hopefully-ethically-make-money-off-of-agi
I dunno, I still think Bitcoin is actually a good store of value and hedge against problems in fiat currency. Probably as good a bet as gold at this point.
I think one of the things rationalists try to do is take the numbers seriously from a consequentialist/utilitarian perspective. This means that even if there's a small chance of doom, you should put vast resources towards preventing it since the expected loss is high.
I think this makes people think that the expectations of doom in the community are much higher than they actually are, because the expected value of preventing doom is so high.
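The expected-value logic in the two paragraphs above can be made concrete with a toy calculation. The numbers here are purely illustrative assumptions, not anyone's actual estimates of doom probability or stakes.

```python
def expected_loss(p_doom: float, stakes: float) -> float:
    """Expected loss from a low-probability, high-stakes event."""
    return p_doom * stakes

# Even an assumed "small" 2% probability, multiplied by enormous assumed
# stakes (say 10^15 in arbitrary value units), yields a huge expected loss:
loss = expected_loss(0.02, 1e15)
print(f"{loss:.0e}")  # on the order of 2e13
```

This is the sense in which advocating vast preventive spending is consistent with a fairly modest probability of doom: the conclusion is driven by the stakes, not the probability.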
I'm curious about the down votes to these comments. Do people think they weren't adding to the discussion?
I don't think I would describe Hattie's thinking as shoddy; he's one of the more careful thinkers in educational theory, both doing a careful review of the literature and then testing his insights through his work implementing them in schools and being careful with the results. Of course, when you want your material implemented, there are tradeoffs you make between implementability and depth. But your assessment seems premature based on looking at just my one comment.
It's true that you're doing transfer with the previous skills that already went through the process while doing surface with new skills that are going through, but I don't think that means it comes first. It comes last for those previous skills. If you want more specifics you can of course read Hattie.
In general I've found it practically very useful for my own learning, which is reason enough for me.
The model itself fell out of Hattie's work trying to pull out all of the important meta-studies in education and understand the data - he found that by applying these 3 stages, he could better understand why certain interventions were effective in some cases and not others.
I would be very surprised if there are clear delineated boundaries between the 3, as such an abstraction rarely corresponds to reality so cleanly. And yet I still find it an incredibly useful model.
The Hattie model of learning posits surface -> deep -> transfer learning as a general rule for how learning progresses. I suspect that flashcards are excellent for surface learning, and the integration you're talking about is transfer learning. It's possible that you could try to skip straight to transfer learning, but I suspect it would actually take longer, as you'd be using transfer learning methods to get the surface and deep learning done.
I think the main effect will be AI boyfriends, which aren't already saturated.
Well, I don't actually know what "crybullying" or "sociosexuality" mean, but I definitely know that male sociopaths make use of reputation destruction.
This felt unnecessarily gendered to me. There are obviously masculine manipulative sociopaths.
You're not going to just be able to stop the train at the moment the costs outweigh the benefits. The majority of negative consequences will most likely come from grey swans that won't show up in your nuanced calculations of costs and benefits.
I don't think anyone is saying this outright so I suppose I will - pushing forward the frontier on intelligence enhancement as a solution to alignment is not wise. The second order effects of pushing that particular frontier (both the capabilities and overton window) are disastrous, and our intelligence outpacing our wisdom is what got us into this mess in the first place.
Nothing wrong with it, in fact I recommend it. But seeing oneself as a hero and persuading others of it will indeed be one of the main issues leading to hero worship.
Potatoes are relatively low calorie density and high satiety.
Potatoes aren't just satiating, they're weirdly satiating.
You can of course say that satiety explains the weight loss, but then you have to ask... what explains the satiety?
I like LW, and think that it does a certain subset of things better than anywhere else on the internet.
In particular, in terms of "sane takes on what's going on," I can usually find them somewhere in the highly upvoted posts or comments.
I think in general my issue with LW is it just reflects the pitfalls of the rationalist worldview. In general the prevailing view conflates intelligence with wisdom, and therefore fails to grasp what is sacred on a moment to moment level that allows skillful action.
I think the fallout of SBF, the fact that rationalists and EAs keep building AI capabilities organizations, rationality adjacent cults centered around obviously immoral world views etc., are all predictable consequences of doing a thing where you try to intelligence hard enough that wisdom comes out.
I don't really expect this to change, and expect LW to continue to be a place that has the sanest takes on what's going on and then leads to incredible mistakes when trying to address that situation. And that previous sentence basically sums up how I feel about LW these days.
I think the video is mostly faked as a sequence of things Gemini can kind of, sort of do. In the blog post they do it with few-shot prompting and 3 screenshots, and say Gemini sometimes gets it wrong:
https://developers.googleblog.com/2023/12/how-its-made-gemini-multimodal-prompting.html?m=1
I think one of the issues with Eliezer is that he sees himself as a hero, and it comes through both explicitly and in vibes in the writing, and Eliezer is also a persuasive writer.
I think that it's risky to have a simple Waluigi switch that can be turned on at inference time. Not sure how risky.
The <good> <bad> thing is really cool, although it leaves open the possibility of a bug (or leaked weights) causing the creation of a maximally misaligned AGI.
Even Jaan Tallinn is “now questioning the merits of running companies based on the philosophy.”
The actual quote by Tallinn is:
The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes... So the world should not rely on such governance working as intended.
which to me is a different claim than questioning the merits of running companies based on the EA philosophy - it's questioning an implementation of that philosophy via voluntarily limiting the company from being too profit motivated at the expense of other EA concerns.
"responsibility they have for the future of humanity"
As I read it, it only wanted to capture the possibility of killing currently living individuals. If they had to also account for 'killing' potential future lives it could make an already unworkable proposal even MORE unworkable.
Did you think they were going too easy on their children or too hard? Or some orthogonal values mismatch?
I, being under the age of 30, have a ~80% chance of making it to LEV in my lifespan, with an approximately 5% drop for every additional decade older you are at the present.
You, being a relatively wealthy person in a modernized country? Do you think you'll be able to afford the LEV by that time, or only that some of the wealthiest people will?
My sense is that most people who haven't done one in the last 6 months or so would benefit from at least a week long silent retreat without phone, computer, or books.
I don't have any special knowledge, but my guess is their code is like a spaghetti tower (https://www.lesswrong.com/posts/NQgWL7tvAPgN2LTLn/spaghetti-towers) because they've prioritized pushing out new features over refactoring and making a solid code base.
I have ~70% confidence that in the absence of superhuman AGI or other x-risks in the near term, we have a shot at getting to longevity escape velocity in 20 years.
Is the claim here a 70% chance of longevity escape velocity by 2043? It's a bit hard to parse.
If that is indeed the claim, I find it very surprising, and I'm curious about what evidence you're using to make that claim? Also, is that LEV for like, a billionaire, a middle class person in a developed nation, or everyone?
- Note that if camelidAI is very capable, some of these preventative measures might be very ambitious, e.g. “make society robust to engineered pandemics.” The source of hope here is that we have access to a highly capable and well-behaved GPT-SoTA.
I think there are many harms that are asymmetric in terms of creating them vs. preventing them. For instance, I suspect it's a lot easier to create a bot that people will fall in love with than to create a technology that prevents people from falling in love with bots (maybe you could create, like, a psychology bot that helps people once they're hopelessly addicted, but that's already asymmetric).
There of course are things that are asymmetric in the other direction (maybe by the time you can create a bot that reliably exploits and hacks software, you can create a bot that rewrites that same software to be formally verified) but all it takes is a few things that are asymmetric in the other direction to make this plan infeasible, and I suspect that the closer we get to general intelligence, the more of these we get (simply because of the breadth of activities it can be used for.)
I think virtue ethics is a practical solution, but if you just say "if corner cases show up, don't follow it", then you're doing something else other than being a virtue ethicist.
The elegance of this argument and arguments like it is the reason people like utilitarianism, myself included.
Excessive bullet biting in the pursuit of elegance is a road to moral ruin. Human value is complex. To be a consistent agent in Deontology, Virtue Ethics, or Utilitarianism, you necessarily have to (at minimum) toss out the other two. But morally, we actually DO value aspects of all 3 - we really DO think it's bad to murder someone apart from the consequences of doing so, and justifying that moral intuition with further reasons feels like adding epicycles when there is indeed a deontological core to some of our moral intuitions. Of course, there's also a core of utilitarianism and virtue ethics that would all suggest not murdering - but throwing out things you actually value in terms of your moral intuitions in the name of elegance is bad, actually.