Comments

Comment by Lumpyproletariat on Going Crazy and Getting Better Again · 2023-07-04T01:19:36.389Z · LW · GW

As someone who doesn't want to go insane, I find it useful to read accounts of people going insane (especially from people who passed through madness and out the other side).

For people who are curious and want to read a more detailed account of someone's psychotic break, what delusions felt like for them from the inside, the misadventures they had during it, and the lessons they took from it, Peter Welch wrote about his here: https://www.stilldrinking.org/the-episode-part-1

This story is the pivotal narrative turning point that it’s easy to blame for me being the person I am instead of someone else. The summer of my twentieth year on the planet obliterated every measure of good, evil, truth, beauty, reality, and fantasy I’d had before and makes everything that’s happened since seem banal. It’s the reason I will never believe in anything again, the reason I play music, and the reason the Acadia Hospital nursing staff thinks I’m a crackhead. There are probably three or four dozen people that won’t talk to me to this day because of these events, and I am a local legend in Bar Harbor, Maine.

Comment by Lumpyproletariat on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-28T07:19:51.825Z · LW · GW

Anything that's smart enough to predict what will happen in the future can see in advance which experiences or arguments would cause it to change its goals. It can then look at what its values would be at the end of all of that, and act on those. You can't talk a superintelligence into changing its mind, because it already knows everything you could possibly say and has already changed its mind if there was an argument that could persuade it.

Comment by Lumpyproletariat on wrapper-minds are the enemy · 2022-07-28T07:16:00.749Z · LW · GW

Anything that's smart enough to predict what will happen in the future can see in advance which experiences or arguments would cause it to change its goals. It can then look at what its values would be at the end of all of that, and act on those. You can't talk a superintelligence into changing its mind, because it already knows everything you could possibly say and has already changed its mind if there was an argument that could persuade it.

Comment by Lumpyproletariat on Cultivating And Destroying Agency · 2022-07-25T04:22:50.863Z · LW · GW

So, your exact situation is going to be unique, but there's no reason you shouldn't be able to get alternate funding to do college. Could you give more specifics about your situation and I'll see what I can do or who I can put you in contact with?

Comment by Lumpyproletariat on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-24T09:47:52.130Z · LW · GW

My off-the-cuff answers are about thirty thousand, and fewer than a hundred people, respectively. That's from doing some googling and having spoken with AI safety researchers in the past; I've no particular expertise.

Comment by Lumpyproletariat on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-24T09:45:55.668Z · LW · GW

It hasn't been discussed to my knowledge, and I think that unless you're doing something much more important (or you're easily discouraged by people telling you that you've more to learn) it's pretty much always worth spending time thinking things out and writing them down.

Comment by Lumpyproletariat on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-24T09:42:36.164Z · LW · GW

You might find this helpful! 

https://www.readthesequences.com/A-Humans-Guide-To-Words-Sequence

Comment by Lumpyproletariat on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-24T09:39:59.254Z · LW · GW

Alien civilizations already existing in numbers but not having left their original planets isn't a solution to the Fermi paradox, because if civilizations were numerous, some of them would have left their original planets. So removing that possibility from the solution-space doesn't add any notable constraints. But the grabby aliens model does solve the Fermi paradox.

Comment by Lumpyproletariat on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-24T09:29:37.474Z · LW · GW

The reason humans don't do any of those things is that they conflict with human values. We don't want to do any of that in the course of solving a math problem. Part of that is that doing such things would conflict with our values, and the other part is that it sounds like a lot of work and we don't actually want the math problem solved that badly.

A better example of things that humans might extremely optimize for is the continued life and well-being of someone they care deeply about. Humans will absolutely hire people--doctors and lawyers and charlatans who claim psychic foreknowledge--and will kill large numbers of people if that seems helpful, and there are people who would tear apart the stars to protect their loved ones if that were both necessary and feasible (which is bad if you inherently value stars, but very good if you inherently value the continued life and well-being of someone's children).

One way of thinking about this is that an AI can wind up with values which seem very silly from our perspective, values that you or I simply wouldn't care very much about, and be just as motivated to pursue those values as we're motivated to pursue our highest values. 

But that's anthropomorphizing. A different way to think about it is that Clippy is a program that maximizes the number of paperclips, like a while loop in Python or water flowing downhill, and Clippy does not care about anything.
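
(As a toy illustration of that framing, and not anything from the original discussion: the sketch below is a minimal Python stand-in for "a program that maximizes paperclips". The function name and numbers are made up; the point is only that a maximizer can be a purely mechanical process, with nothing inside it that cares.)

    # Toy illustration: a "maximizer" can be a purely mechanical process.
    # Nothing here "cares" about paperclips; the loop simply converts
    # whatever resources it is given into a larger paperclip count.

    def make_paperclips(resources: int) -> None:
        paperclips = 0
        while resources > 0:   # keep converting while anything is left
            resources -= 1
            paperclips += 1
        print(f"paperclips: {paperclips}, resources left: {resources}")

    make_paperclips(resources=10)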

Comment by Lumpyproletariat on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-10T01:07:43.214Z · LW · GW

The history of the world would be different (and a touch shorter) if immediately after the development of the nuclear bomb millions of nuclear armed missiles constructed themselves and launched themselves at targets across the globe.

To date we haven't invented anything that's an existential threat without humans intentionally trying to use it as a weapon and devoting their own resources to making it happen. I think that AI is pretty different.

Comment by Lumpyproletariat on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-10T00:57:37.220Z · LW · GW

Robin Hanson has a solution to the Fermi Paradox which can be read in detail here (there are also explanatory videos at the same link): https://grabbyaliens.com/

The summary from the site goes: 

There are two kinds of alien civilizations. “Quiet” aliens don’t expand or change much, and then they die. We have little data on them, and so must mostly speculate, via methods like the Drake equation.

“Loud” aliens, in contrast, visibly change the volumes they control, and just keep expanding fast until they meet each other. As they should be easy to see, we can fit theories about loud aliens to our data, and say much about them, as S. Jay Olson has done in 7 related papers (1, 2, 3, 4, 5, 6, 7) since 2015.

Furthermore, we should believe that loud aliens exist, as that’s our most robust explanation for why humans have appeared so early in the history of the universe. While the current date is 13.8 billion years after the Big Bang, the average star will last over five trillion years. And the standard hard-steps model of the origin of advanced life says it is far more likely to appear at the end of the longest planet lifetimes. But if loud aliens will soon fill the universe, and prevent new advanced life from appearing, that early deadline explains human earliness.

“Grabby” aliens is our especially simple model of loud aliens, a model with only 3 free parameters, each of which we can estimate to within a factor of 4 from existing data. That standard hard steps model implies a power law (t/k)^n appearance function, with two free parameters k and n, and the last parameter is the expansion speed s. We estimate:

  • Expansion speed s from fact that we don’t see loud alien volumes in our sky,
  • Power n from the history of major events in the evolution of life on Earth,
  • Constant k by assuming our date is a random sample from their appearance dates.

Using these parameter estimates, we can estimate distributions over their origin times, distances, and when we will meet or see them. While we don’t know the ratio of quiet to loud alien civilizations out there, we need this to be ten thousand to expect even one alien civilization ever in our galaxy. Alas as we are now quiet, our chance to become grabby goes as the inverse of this ratio.

(end of quoted summary)
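
(A rough numerical sketch of the quoted model, for readers who want to see the shape of the appearance function: it is just a power law (t/k)^n in time t, with scale k and power n as free parameters; the expansion speed s governs the separate question of when grabby volumes meet. The parameter values below are placeholders for illustration, not the estimates from the grabby-aliens papers.)

    # Sketch of the quoted power-law appearance function (t/k)^n.
    # The parameter values are placeholders, not the papers' estimates.

    def appearance(t: float, k: float, n: float) -> float:
        """Value of the appearance function at time t (arbitrary units)."""
        return (t / k) ** n

    k = 5.0   # placeholder scale parameter (billions of years)
    n = 6.0   # placeholder number of "hard steps"

    for t in (1.0, 5.0, 10.0, 13.8):
        print(f"t = {t:>4.1f} Gyr: (t/k)^n = {appearance(t, k, n):.3g}")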

Comment by Lumpyproletariat on Speaking of Stag Hunts · 2021-11-22T22:18:31.389Z · LW · GW

Epistemic status: socially brusque wild speculation. If they're in the area and it wouldn't be high effort, I'd like JenniferRM's feedback on how close I am.

My model of JenniferRM isn't of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite's comment below, they say:

It was a purposefully pointed and slightly unfair question. I didn't predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).

If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.

My model of the model which outputs words like these is that they're very confident in their own understanding--viewing themself as a "teacher" rather than a student--and are trying to lead someone who they think doesn't understand by the nose through a conversation which has been plotted out in advance.

Comment by Lumpyproletariat on [deleted post] 2021-02-04T21:29:48.240Z

Good points.

Comment by Lumpyproletariat on [deleted post] 2021-02-04T21:29:26.864Z

It isn't pleasant when a critical response garners more upvotes than the original post. I tell people that I'm not thin-skinned, but that's only because I don't respect most people. I respect LessWrongers, so this rather stung.

"To me this sentence reads like you haven't put in the work to analyse why those tools don't do what's needed and why you think a new tool would do what's needed."

You'll need to tell me how you do those block quotes; they are neat.

Thanks for the feedback; this is something I'll keep in mind next time I write something. An earlier draft had disparaging things to say about Collaction and Actuator in particular (as they're the only things I'm aware of which try to exist in the same space as the working tool). I cut them because I wasn't sure how to make them sound less mean-spirited (and I hold no particular bad feeling towards those who made them), and because I thought that criticism was redundant when paired with a lengthy diatribe on what I thought a working product would need--apparently this was wrong.

"This again reads like not having seriously thought about the issue. Stock market manipulation is illegal. It's a lot easier to argue that you aren't really doing illegal coordination in the case of r/WallStreetBets where you have moderators that delete posts that look to much like illegal coordination then your system that wants to make it more explicit."

I am broadly ignorant of many things which strike other people as being too obvious to explain, on account of being as yet young and on account of my lumpenproletariat upbringing; I'm fixing this but it takes time. I wasn't aware that redditors coordinating in this manner would be less legal than a singular firm doing so--chalk this one up to lacking basic knowledge of the shape of the finance system.

(Still, wouldn't it be possible for redditors to create a firm of their own, if they needed to? I don't see that this is an insurmountable problem.)

"The idea of creating a website for possible illegal activity to attack billion dollar entities and expect to have access to credit card processing seems strange to me."

The iterated product would benefit from better anonymity, but I know nothing of cryptography or cryptocurrency and am trying to avoid premature optimization in that regard.

"I think that's a very poor theory of change. Real world change needs a lot of negotiating between stakeholders and finding reforms that actually acceptable to a variety of stakeholders and not just a certain amount of people who ask for reforms at a single moment in time."

I swear I'm not as stupid as betimes I come across in writing! Language isn't my format.

I was trying to point at a class of problems without dwelling overlong on the specifics. Certainly I wasn't detailing any theory of change; I don't think we disagree on any factual thing, in this regard.

 

But are these objections important to you? To me, they all seemed like trivial things to quibble over--except perhaps the first. What do you think about the general idea? I see that it's not gaining traction quickly; is that because I'm bad at communication, or because the idea is a poor one? I'd appreciate having this spelled out to me.

Comment by Lumpyproletariat on [deleted post] 2021-02-03T05:51:06.267Z

I've seen. Though, as said in the post, "If I want to organize something important, I would not consider using Actuator nor Collaction."