Posts

LessWrong Community Weekend 2023 Updates: Keynote Speaker Malcolm Ocean, Remaining Tickets and More 2023-05-31T21:53:57.908Z
LessWrong Community Weekend 2023 [Applications now closed] 2023-05-01T09:31:19.215Z
Harry Potter and the Methods of Psychomagic | Chapter 3: Intelligence Explosions 2022-01-20T13:11:58.598Z
Harry Potter and the Methods of Psychomagic | Chapter 2: The Global Neuronal Workspace 2021-10-26T18:54:49.386Z
Harry Potter and the Methods of Psychomagic | Chapter 1: Affect 2021-09-15T17:03:06.901Z

Comments

Comment by Henry Prowbell on The Compendium, A full argument about extinction risk from AGI · 2024-11-02T15:06:59.466Z · LW · GW

My model of a non-technical layperson finds it really surprising that an AGI would turn rogue and kill everyone. For them it’s a big and crazy claim.

They imagine that an AGI will obviously be very human-like, and that the default is that it will be cooperative and follow ethical norms. They will say you need some special reason why it would decide to do something as extreme and unexpected as killing everyone.

When I’ve talked to family members and non-EA friends that’s almost always the first reaction I get.

If you don’t address that early in the introduction I think you might lose a lot of people.

I don’t think you need to fully counter that argument in the introduction (it’s a complex counter-argument) but my marketing instincts are that you need to at least acknowledge that you understand your audience’s skepticism and disbelief.

You need to say early in the introduction: Yes, I know how crazy this sounds. Why would an AGI want to kill us? There are some weird and counter-intuitive reasons why, which I promise we’re going to get to.

Comment by Henry Prowbell on steve2152's Shortform · 2024-08-07T13:47:29.418Z · LW · GW

I’ll give it a go.

I’m not very comfortable with the term enlightened, but I’ve been on retreats teaching non-dual meditation and received ‘pointing out instructions’ in the Mahamudra tradition. I’ve experienced some bizarre states of mind in which it seemed to make complete sense to think of a sense of awake awareness as the ground thing being experienced spontaneously, with sensations, thoughts and emotions appearing to it, rather than there being a separate me, distinct from awareness, that was experiencing things ‘using my awareness’, which is how it had always felt before.

When I have (or rather awareness itself has) experienced clear and stable non-dual states, the normal ‘self’ stuff still appears in awareness and behaves fairly normally (e.g. there’s hunger, thoughts about making dinner, impulses to move the body, the body moving around the room making dinner…). Being in that non-dual state seemed to add a very pleasant quality of effortlessness and okayness to the mix, but beyond that it wasn’t radically changing what the ‘small self’ in awareness was doing.

If later the thought “I want to eat a second portion of ice cream” came up followed by “I should apply some self control. I better not do that.” they would just be things appearing to awareness.

Of course another thing in awareness is the sense that awareness is aware of itself and the fact that everything feels funky and non-dual at the moment. You’d think that might change the chain of thoughts about the ‘small self’ wanting ice cream and then having to apply self control towards itself.

In fact the first few times I had intense non-dual experiences there was a chain of thoughts that went “what the hell is going on? I’m not sure I like this? What if I can’t get back into the normal dualistic state of mind?” followed by some panicked feelings and then the non-dual state quickly collapsing into a normal dualistic state.

With more practice, doing other forms of meditation to build a stronger base of calmness and self-compassion, I was able to experience the non-dual state and the chain of thoughts that appeared would go more like “This time let’s just stick with it a bit longer. Basically no one has a persistent non-dual experience that lasts forever. It will collapse eventually whether you like it or not. Nothing much has really changed about the contents of awareness. It’s the same stuff just from a different perspective. I’m still obviously able to feel calmness and joyfulness, I’m still able to take actions that keep me safe — so it’s fine to hang out here”. And then thoughts eventually wander around to ice cream or whatever. And, again, all this is just stuff appearing within a single unified awake sense of awareness that’s being labelled as the experiencer (rather than the ‘I’ in the thoughts above being the experiencer).

The fact that thoughts referencing the self are appearing in awareness, whilst it’s awareness itself that feels like the experiencer, doesn’t seem to create as many contradictions as you would expect. I presume that’s partly because awareness itself is able to be aware of its own contents but not do much else. It doesn’t, for example, make decisions or have a sense of free will like the normal dualistic self does. Those again would just be more appearances in awareness.

However it’s obvious that awareness being spontaneously aware of itself does change things in important and indirect ways. It does change the sequences of thoughts somehow, and the overall feeling tone, and therefore behaviour. But perhaps in less radical ways than you would expect. For me, at different times, this ranged from causing a mini panic attack that collapsed the non-dual state (which obviously would have been visible from the outside) to subtly imbuing everything with nice effortlessness vibes and taking the sting out of suffering-type experiences, without changing my thought chains and behaviour enough to be noticeable from the outside to someone else.

Disclaimer: I felt unsure at several points writing this and I’m still quite new to non-dual experiences. I can’t reliably generate a clear non-dual state on command; it’s rather hit and miss. What I wrote above is written from a fairly dualistic state, relying on memories of experiences from a few days ago. And it’s possible that the non-dual experience I’m describing here is still rather shallow and missing important insights versus what very accomplished meditators experience.

Comment by Henry Prowbell on How I internalized my achievements to better deal with negative feelings · 2024-02-27T15:27:53.698Z · LW · GW

Great summary, and really happy that this helped you!

I'd recommend people read Rick Hanson's paper on HEAL, if they're interested too: https://rickhanson.net/wp-content/uploads/2021/12/LLPE-paper-final2.pdf

Comment by Henry Prowbell on How to (hopefully ethically) make money off of AGI · 2023-11-09T09:45:37.177Z · LW · GW

Does it make sense to put any money into a pension given your outlook on AGI?

Comment by Henry Prowbell on Announcement: AI Narrations Available for All New LessWrong Posts · 2023-07-21T10:55:55.820Z · LW · GW

I really like the way it handles headlines and bullet point lists!

In an ideal world I'd like the voice to sound less robotic. Something like https://elevenlabs.io/ or https://www.descript.com/overdub.  How much I enjoy listening to text-to-speech content depends a lot on how grating I find the voice after long periods of listening.

Comment by Henry Prowbell on Harry Potter and the Methods of Psychomagic | Chapter 3: Intelligence Explosions · 2023-06-11T14:59:02.872Z · LW · GW

Honestly, no plans at the moment. Writing these was a covid lockdown hobby. It's vaguely possible I'll finish it one day but I wouldn't hold your breath. Sorry.

Comment by Henry Prowbell on How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century? · 2023-02-17T14:00:26.120Z · LW · GW

"But I rarely see anyone touch on the idea of 'what if we only make something as smart as us?'"

But why would intelligence reach human level and then halt there? There's no reason to think there's some kind of barrier or upper limit at that exact point.

Even in the weird case where that were true, aren't computers going to carry on getting faster? Just running a human-level AI on a very powerful computer would be a way of creating a human scientist that can think at 1000x speed, create duplicates of itself and modify its own brain. That's already a superintelligence, isn't it?

Comment by Henry Prowbell on How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century? · 2023-02-17T13:54:21.520Z · LW · GW

A helpful way of thinking about 2 is imagining something less intelligent than humans trying to predict how humans will overpower it.

You could imagine a gorilla thinking "there's no way a human could overpower us. I would just punch it if it came into my territory." 

The actual way a human would overpower it is literally impossible for the gorilla to understand (invent writing, build a global economy, invent chemistry, build a tranquilizer dart gun...)

The AI in the AI takeover scenario is that jump of intelligence and creativity above us. There's literally no way a puny human brain could predict what tactics it would use. I'd imagine it almost definitely involves inventing new branches of science.

Comment by Henry Prowbell on How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century? · 2023-02-17T13:40:00.132Z · LW · GW

I think that's true of people like Steven Pinker and Neil deGrasse Tyson. They're intelligent but clearly haven't engaged with the core arguments, because they're saying stuff like "just unplug it" and "why would it be evil?"

But there's also people like...

Robin Hanson. I don't really agree with him but he is engaging with the AI risk arguments, has thought about it a lot and is a clever guy.

Will MacAskill. One of the most thoughtful thinkers I know of, who I'm pretty confident will have engaged seriously with the AI Risk arguments. His p(doom) is far lower than Eliezer's. I think he says 3% in What We Owe The Future.

Other AI Alignment experts who are optimistic about our chances of solving alignment and put p(doom) lower (I don't know enough about the field to name people.)

And I guess I am reserving some small amount of probability for "most of the world's most intelligent computer scientists, physicists and mathematicians aren't worried about AI Risk; could I be missing something?" My intuition from playing around on prediction markets is that you have to adjust your bets slightly for those kinds of considerations.

Comment by Henry Prowbell on How seriously should we take the hypothesis that LW is just wrong on how AI will impact the 21st century? · 2023-02-16T16:11:10.559Z · LW · GW

I find Eliezer and Nate's arguments compelling, but I do downgrade my p(doom) somewhat (-30% maybe?) because there are intelligent people (inside and outside of LW/EA) who disagree with them.

I had some issues with the quote:

"Will continue to exist regardless of how well you criticize any one part of it."

I'd say LW folk are unusually open to criticism. I think if there were strong arguments they really would change people's minds here. And especially arguments that focus on one small part at a time.

But have there been strong arguments? I'd love to read them.

"There's basically little reason to engage with it. These are all also evidence that there's something epistemically off with what is going on in the field."

For me the most convincing evidence that LW is doing something right epistemically is how it did better than basically everyone else on Covid. Granted, that's not the Alignment Forum, but it was some of the same people and the same weird epistemic culture at work.

Comment by Henry Prowbell on My Model Of EA Burnout · 2023-01-31T11:40:42.524Z · LW · GW

For me the core of it feels less like "satisfying the values you think you should have, while neglecting the values you actually have" and more like having a hostile orientation to certain values I have.

I might be sitting at my desk working on my EA project while the parts of me that are asking to play video games, watch arthouse movies, take the day off and go hiking, or find a girlfriend are like yapping dogs that won't shut up. The attitude is: I'll respond to their complaints once I've finished saving the world.

Through CFAR workshops, lots of goal factoring, journaling, and Focusing I'm getting some traction on changing that pattern. 

I've realised that values (or perhaps 'needs' fits better) are immutable facts about myself. Like my height or hair colour.  And getting annoyed at them for not being different makes about as much sense as shouting at the sky for raining.

The part of me that wants to maximize impact has accepted that moving to the Bay Area and working 80 hours a week at an EA org is a fabricated option. A realistic plan takes into account my values that constrain me to want to live near my family, have lots of autonomy over my schedule and work independently on projects I control. Since realising that, my motivation, productivity and sense of agency (and, ironically, my expected impact) have improved. The future feels a lot brighter – probably because a whole load of internal conflict I wasn't acknowledging has been resolved.

Comment by Henry Prowbell on Noting an unsubstantiated communal belief about the FTX disaster · 2022-11-15T11:22:16.579Z · LW · GW

"You are however only counting one side here"

In that comment I was only offering plausible counter-arguments to "the amount of people that were hurt by FTX blowing up is a rounding error."

"How to model all the related factors is complicated. Saying that you easily know the right answer to whether the effects are negative or positive in expectation without running any numbers seems to me unjustified."

I think we basically agree here.

I'm in favour of more complicated models that include more indirect effects, not less.

Maybe the difference is: I think in the long run (over decades, including the actions of many EAs as influential as SBF) an EA movement that has strong norms against lying, corruption and fraud actually ends up more likely to save the world, even if it gets less funding in the short term. 

The fact that I can't predict and quantify ahead of time all the possible harms that result from fraud doesn't convince me that those concerns are unjustified.

We might be living in a world where SBF stealing money and giving $50B to longtermist causes very quickly really is our best shot at preventing AI disaster, but I doubt it. 

Apart from anything else I don't think money is necessarily the most important bottleneck.

Comment by Henry Prowbell on Noting an unsubstantiated communal belief about the FTX disaster · 2022-11-14T20:51:49.079Z · LW · GW

"If you believe that each future person is as valuable as each present person and there will be 10^100 people in the future lightcone, the amount of people that were hurt by FTX blowing up is a rounding error."

But you have to count the effect of the indirect harms on the future lightcone too. There's a longtermist argument that SBF's (alleged and currently very likely) crimes plausibly did more harm than all the wars and pandemics in history if...

  • Governments are now 10% less likely to cooperate with EAs on AI safety
  • The next 2 EA mega-donors decide to pass on EA
  • (Had he not been caught:) The EA movement drifted towards fraud and corruption
  • etc.

Comment by Henry Prowbell on You are Underestimating The Likelihood That Convergent Instrumental Subgoals Lead to Aligned AGI · 2022-09-26T16:12:31.017Z · LW · GW

"The frequency with which datacenters, long range optical networks, and power plants, require human intervention to maintain their operations, should serve as a proxy to the risk an AGI would face in doing anything other than sustaining the global economy as is."

Probably those things are trivially easy for the AGI to solve itself, e.g. with nanobots that can build and repair things.

I'm assuming this thing is to us what humans are to chimps, so it doesn't need our help in solving trivial 21st-century engineering and logistics problems.

The strategic consideration is: does the upside of leaving humans in control outweigh the risks? Humans realising you've gone rogue or humans building a competing AGI seem like your two biggest threats... much bigger considerations than whether you have to build some mines, power plants, etc. yourself.

"keeping alive a form of intelligence with very different risk profiles might be a fine hedge against failure"

Probably you keep them alive in a prison/zoo though. You wouldn't allow them any real power.

Comment by Henry Prowbell on Which LessWrong content would you like recorded into audio/podcast form? · 2022-09-13T15:47:21.470Z · LW · GW

I was looking for exactly this recently.

Comment by Henry Prowbell on Meditation course claims 65% enlightenment rate: my review · 2022-08-01T21:10:49.946Z · LW · GW

"I haven’t looked into his studies’ methodologies, but from my experience with them, I would put high odds that the 65% number is exaggerated."

From his sales page:

"In our scientific study involving 245 people...

65% of participants who completed The 45 Days to Awakening Challenge and Experiment persistently awakened.

...

Another couple hundred people entered the program already in a place of Fundamental Wellbeing..."

Sounds like he's defining enlightenment as something that ~50% of people already experience.

Elsewhere he describes 'Location 1' enlightenment as a background sense of okayness that doesn't include any kind of non-dual experience and can be interrupted by negative thoughts and emotions.

I can believe that people agreeing with a statement like 'underneath everything I feel that I'm fundamentally okay' might be measuring something psychologically important – but it isn't what most people mean when they use words like enlightenment or persistently awakened.

(P.S. thanks for writing this up and really happy that you got so much from the loving-kindness meditation!)

Comment by Henry Prowbell on A summary of every "Highlights from the Sequences" post · 2022-07-18T13:26:14.597Z · LW · GW

Does anybody know if the Highlights From The Sequences are compiled in ebook format anywhere?

Something that takes 7 hours to read, I want to send to my Kindle and read in a comfy chair.

And maybe even have audio versions on a single podcast feed to listen to on my commute.

(Yes, I can print out the list of highlighted posts and skip to those chapters of the full ebook manually, but I'm thinking about user experience, the impact of trivial inconveniences, and what would make LessWrong even more awesome.)

Comment by Henry Prowbell on Scott Aaronson and Steven Pinker Debate AI Scaling · 2022-06-28T17:17:26.181Z · LW · GW

I love his books too. It's a real shame.

"...such as imagining that an intelligent tool will develop an alpha-male lust for domination."

It seems like he really hasn't understood the argument the other side is making here.

It's possible he simply hasn't read about instrumental convergence and the orthogonality thesis. What high-quality, widely shared introductory resources do we have on those, after all? There's Robert Miles, but you could easily miss him.

Comment by Henry Prowbell on What’s the contingency plan if we get AGI tomorrow? · 2022-06-24T10:41:07.500Z · LW · GW

I'm imagining the CEO having a thought process more like...

- I have no idea how my team will actually react when we crack AGI 
- Let's quickly Google 'what would you do if you discovered AGI tomorrow?'*
- Oh Lesswrong.com, some of my engineering team love this website
- Wait what?!
- They would seriously try to [redacted]
- I better close that loophole asap

I'm not saying it's massively likely that things play out in exactly that way but a 1% increased chance that we mess up AI Alignment is quite bad in expectation.

*This post is already the top result on Google for that particular search

Comment by Henry Prowbell on What’s the contingency plan if we get AGI tomorrow? · 2022-06-23T09:07:48.160Z · LW · GW

I immediately found myself brainstorming creative ways to pressure the CEO into delaying the launch (seems like strategically the first thing to focus on) and then thought 'is this the kind of thing I want to be available online for said CEOs to read if any of this happens?'

I'd suggest for those reasons people avoid posting answers along those lines.

Comment by Henry Prowbell on Debating Whether AI is Conscious Is A Distraction from Real Problems · 2022-06-22T21:20:12.870Z · LW · GW

Somebody else might be able to answer better than me. I don't know exactly what each researcher is working on right now.

“AI safety are now more focused on incidental catastrophic harms caused by a superintelligence on its way to achieve goals”

Basically, yes. The fear isn’t that AI will wipe out humanity because someone gave it the goal ‘kill all humans’.

For a huge number of innocent-sounding goals, ‘incapacitate all humans and other AIs’ is a really sensible precaution to take if all you care about is getting your chances of failure down to zero. As is hiding the fact that you intend to do harm until the very last moment.

“rather than making sure artificial intelligence will understand and care about human values?”

If you solved that then presumably the first bit solves itself. So they’re definitely linked.

Comment by Henry Prowbell on Debating Whether AI is Conscious Is A Distraction from Real Problems · 2022-06-21T18:52:39.253Z · LW · GW

I read the article and, to be honest, I struggled to follow her argument or to understand why it impacts your decision to work on AI alignment. Maybe you can explain further?

The headline "Debating Whether AI is Conscious Is A Distraction from Real Problems" is a reasonable claim but the article also makes claims like...

"So from the moment we were made to believe, through semantic choices that gave us the phrase “artificial intelligence”, that our human intelligence will eventually contend with an artificial one, the competition began... The reality is that we don’t need to compete for anything, and no one wants to steal the throne of ‘dominant’ intelligence from us."

and

"superintelligent machines are not replacing humans, and they are not even competing with us."

Her argument (elsewhere in the article) seems to be that people concerned with AI Safety see Google's AI chatbot, mistake its output for evidence of consciousness and extrapolate that consciousness implies a dangerous competitive intelligence.

But that isn't at all the argument for the Alignment Problem that people like Yudkowsky and Bostrom are making. They're talking about things like the Orthogonality Thesis and Instrumental Convergence. None of them agree that the Google chatbot is conscious. Most, I suspect, would disagree that an AI needs to be conscious in order to be intelligent or dangerous.

Should you work on mitigating social justice problems caused by machine learning algorithms rather than AI safety? Maybe. It's up to you.

But make sure you hear the Alignment Problem argument in its strongest form first. As far as I can tell, that form doesn't rely on anything this article is attacking.

Comment by Henry Prowbell on What is Going On With CFAR? · 2022-05-29T20:04:48.711Z · LW · GW

I suspect you should update the website with some of this? At the very least copying the above comment into a 2022 updates blog post.

The message 'CFAR did some awesome things that we're really proud of, now we're considering pivoting to something else, more details to follow' would be a lot better than the implicit message you may be sending currently: 'nobody is updating this website, the CFAR team lost interest, and it's not clear what the plan is or who's in charge anymore.'

Comment by Henry Prowbell on ProjectLawful.com: Eliezer's latest story, past 1M words · 2022-05-11T15:57:51.745Z · LW · GW

I strongly agree

Comment by Henry Prowbell on The case for turning glowfic into Sequences · 2022-05-05T08:55:38.986Z · LW · GW

If somebody has time to pour into this I'd suggest recording an audio version of Mad Investor Chaos.

HPMOR reached a lot more people thanks to Eneasz Brodski's podcast recordings. That effect could be much more pronounced here if the weird glowfic format is putting people off.

I'd certainly be more likely to get through it if I could play it in the background whilst doing chores, commuting or falling asleep at night.

That's how I first listened to HPMOR, and then once I'd realised how good it was I went back and reread it slowly, taking notes, making an effort to internalize the lessons.

Comment by Henry Prowbell on Monks of Magnitude · 2022-02-18T11:10:30.348Z · LW · GW

I have a sense of niggling confusion.

This immediately came to mind...

"The only way to get a good model of the world inside your head is to bump into the world, to let the light and sound impinge upon your eyes and ears, and let the world carve the details into your world-model. Similarly, the only method I know of for finding actual good plans is to take a bad plan and slam it into the world, to let evidence and the feedback impinge upon your strategy, and let the world tell you where the better ideas are." - Nate Soares, https://mindingourway.com/dive-in-2/

Then I thought something like this...

What about 1,000-day problems that require you to go out and bump up against reality? Problems that require a tight feedback loop?

A 1,000-day monk working on fixing government AI policy probably needs to go for lunch with 100s of politicians, lobbyists and political donors to develop intuitions and practical models about what's really going on in politics.

A 1,000-day monk working on an intelligence-boosting neurofeedback device needs to do 100s of user interviews to understand the complex ways in which the latest version of the device affects its wearers' thought patterns.

And you might answer: 1-day monks do that work and report their findings to the 1,000-day monk. But there's an important way in which being there, having the conversation yourself, taking in all the subtle cues and body language, and being able to ask clarifying questions develops intuitions that you won't get from reading summaries of conversations.

Maybe on your island the politicians, lobbyists and political donors are brought to the 1,000-day monk's quarters? But then 'monk' doesn't feel like the right word because they're not intentionally isolating themselves from the outside world at all. In fact, quite the opposite – they're being delivered concentrated outside-world straight to their door every day.

If the 1,000-day problem is maths-based you can bring all the relevant data and apparatus into your cave with you – a whiteboard with numbers on it. But for many difficult problems the apparatus is the outside world.

I think the nth-order monks idea still works, but you can't specify that the monks isolate themselves, or else they would be terrible at solving a certain class of problem: problems where the deep thoughts are powered by intuitions developed through bumping into reality over and over again, or that require data you can only pick out if you've been working on the problem for years.

Comment by Henry Prowbell on How do you think about mildly technical people trying to advance science and technology? · 2022-02-18T09:42:27.026Z · LW · GW

If you haven't already, I'd suggest you put a weekend aside and read through the guides on https://80000hours.org/

They have some really good analyses on when you should do a PhD, found a startup, etc.

Comment by Henry Prowbell on Harry Potter and the Methods of Psychomagic | Chapter 2: The Global Neuronal Workspace · 2021-12-02T17:09:14.441Z · LW · GW

This was the paper: https://www.cell.com/neuron/pdf/S0896-6273(08)00575-8.pdf

Comment by Henry Prowbell on Frame Control · 2021-11-28T14:44:54.883Z · LW · GW

what are some signs that someone isn’t doing frame control? [...]

  1. They give you power over them, like indications that they want your approval or unconditional support in areas you are superior to them. They signal to you that they are vulnerable to you.

 

There was a discussion on the Sam Harris podcast where he talks about the alarming frequency with which leaders of meditation communities end up abusing, controlling or sleeping with their students. I can't seem to find the episode name now.

But I remember being impressed with the podcast guest, a meditation teacher, who said they had seen this happening all around them and, before they took over as the leader of their meditation centre, had tried to put things in place to stop themselves falling into the same traps.

They had taken their family and closest friends aside and asked them for help, saying things to this effect: "If you ever see me slipping into behaviour that looks dodgy I need you to point it out to me immediately and in no uncertain terms. Even though I've experienced awakening I'm still fallible and I don't know how I'm going to handle all this power and all these beautiful young students wanting to sleep with me."

This kind of mindset is a norm I'd love to see encouraged and supported in the leaders of the rationalist community.

Comment by Henry Prowbell on App and book recommendations for people who want to be happier and more productive · 2021-11-07T16:04:47.999Z · LW · GW
  • "Erasable pens. Pens are clearly better than pencils in that you can write on more surfaces and have better colour selection. The only problem is you can’t erase them. Unless they’re erasable pens that is, then they strictly dominate. These are the best I’ve found that can erase well and write on the most surfaces."

I also loved these Frixion erasable pens when I discovered them. 

But an even bigger step up in my writing-by-hand experience was the reMarkable tablet. It genuinely feels like writing on paper, but with infinite pages, everything synced to the cloud, pages organised in folders, the ability to reorder pages/paragraphs, and a passcode on the lock screen so you can write your most embarrassing secrets or terrible first drafts without fear of someone accidentally reading them.

Comment by Henry Prowbell on Harry Potter and the Methods of Psychomagic | Chapter 2: The Global Neuronal Workspace · 2021-10-31T13:26:24.853Z · LW · GW

Thanks Richard. Edited.

Comment by Henry Prowbell on Harry Potter and the Methods of Psychomagic | Chapter 2: The Global Neuronal Workspace · 2021-10-31T13:23:39.119Z · LW · GW

Thanks for the encouragement. Appreciate it :)

Comment by Henry Prowbell on Harry Potter and the Methods of Psychomagic | Chapter 2: The Global Neuronal Workspace · 2021-10-31T13:23:17.765Z · LW · GW

I've got this printed out on my desk at home but unfortunately I'm away on holiday for the next few weeks. I'll find it for you when I get back.

For what it's worth, most of the ideas for this chapter come from Stanislas Dehaene's book Consciousness and the Brain. Kaj Sotala has a great summary here, and I'd recommend reading the whole book too if you've got the time and interest.

Comment by Henry Prowbell on Harry Potter and the Methods of Psychomagic | Chapter 1: Affect · 2021-09-17T08:13:31.597Z · LW · GW

Well spotted! The Psychomagic for Beginners excerpt certainly takes some inspiration from that. I read that book a few years ago and really enjoyed it too.

Comment by Henry Prowbell on Harry Potter and the Methods of Psychomagic | Chapter 1: Affect · 2021-09-16T13:40:02.399Z · LW · GW

Thanks Ustice!

I've already written first drafts of a couple more chapters which I'll be polishing and posting over the next few months.

So I can guarantee at least a few more installments. After that it will depend on what kind of response I get and whether I'm still enjoying the writing process.

Early in HPMOR there's a bit where Harry mentions the idea of using magic to improve his mind but it's never really taken much further.

I wanted to write about that: if you lived in a universe with magic, how could you use it to improve your intelligence and rationality? If Harry and Hermione studied Legilimency using the scientific method, what would they discover? I also wanted to tie in some things I've been reading recently about neuroscience, psychotherapy and theories of consciousness.

If anybody fancies reading some early drafts of the next few chapters and giving me some feedback please do get in touch.