Posts

Flaglandbase's Shortform 2022-11-13T19:28:41.820Z
Insufficient awareness of how everything sucks 2022-08-17T08:01:46.855Z
Cryonics-adjacent question 2022-06-30T23:03:11.081Z
Doom sooner 2022-04-28T07:24:10.276Z
The Consistency Mystery 2022-04-23T08:05:34.050Z
Why are we so early? 2022-04-04T08:49:03.103Z
Personal imitation software 2022-03-07T07:55:36.044Z
What I'm about 2022-01-06T09:20:47.444Z

Comments

Comment by Flaglandbase on Flaglandbase's Shortform · 2022-11-13T19:38:07.565Z · LW · GW

I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.

Comment by Flaglandbase on Flaglandbase's Shortform · 2022-11-13T19:28:42.064Z · LW · GW

So I was banned from commenting on LessWrong . . .

My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are.
Evil is that which doesn't work but can't be avoided: a type of invincible stupidity.

For example, software is almost supernaturally evil. I've been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, and unpredictable, and that above all keep freezing and crashing.
The unusability of software is a kind of man-made implacability. It can't be persuaded or reasoned with. Omnimalevolence as an emergent property.
Software is just a microcosm of society.

The reaction to my decades of online rants and hate-filled screeds has been very consistent: the Silence or the Bodysnatchers. Meaning either no reaction or an extremely negative one (I'm not allowed to link to either).
There seems to be a deep willingness among normal people to accept evil, which may be the source of their power.
When I was banned from commenting on LessWrong (after two requests to be reinstated), they said such talk was "weird". Weird does NOT automatically mean wrong!

Studying the evilness of human-designed interfaces might reveal why the world has always sucked.
Seemingly simple things (like easy interfaces) are still absolutely impossible today. Only the illusion exists, and not for me.
Does that mean that seemingly impossible things (like an intelligence explosion) will turn out to be simple reality tomorrow? 
Maybe. Heck PROBABLY. But maybe not.

The fact that it's so difficult to make even the simplest systems not suck may mean that much larger systems won't work either.
In fact, it's certain that many unexpected things will go wrong before then.
The only way to get transhuman AIs to work MAY be by connecting many existing smaller systems, perhaps even including groups of humans.

Comment by Flaglandbase on Insufficient awareness of how everything sucks · 2022-11-11T00:48:17.825Z · LW · GW

The past week my Windows 10 box has been almost unusable, as it spent days wasting kilowatts and processor cycles downloading worse-than-useless malware "updates" with no way to turn them off!

Evil is the most fundamental truth of the world. The Singularity cannot happen soon enough . . .

Comment by Flaglandbase on Insufficient awareness of how everything sucks · 2022-10-31T18:36:02.551Z · LW · GW

I just spent four hours trying to get a new cellphone (which others insist I should have) to work, and failed totally.

There is something fantastically wrong with this shitplanet, but it's completely different from anything anyone is willing to talk about.

Comment by Flaglandbase on How much does the risk of dying from nuclear war differ within and between countries? · 2022-10-11T18:11:53.288Z · LW · GW

I didn't realize there was an automatic threshold of total retaliation the moment Russia nukes Ramstein air base.

Comment by Flaglandbase on Why So Many Cookie Banners? · 2022-10-11T18:01:17.905Z · LW · GW

I guess simple text-based browsers and websites that just show the minimal information you want, in a way the user can control, are not cool enough, and so we have all those EU regulations that "solve" a problem by making it worse.

Comment by Flaglandbase on How much does the risk of dying from nuclear war differ within and between countries? · 2022-10-11T17:30:20.407Z · LW · GW

If whoever is running Russia is suicidal, sure, but if they still want to win, it might make sense to use strategic weapons tactically to force the other side to accept a stalemate right up to the end.

Comment by Flaglandbase on How much does the risk of dying from nuclear war differ within and between countries? · 2022-10-11T12:21:44.753Z · LW · GW

The highest-risk targets are probably the NATO airbases in Poland, Slovakia, and Romania used to supply and support Ukraine. There may also be nuclear retaliation against northern German naval bases. They're more likely to attack smaller American cities first before escalating.

Comment by Flaglandbase on A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox · 2022-10-06T23:57:43.686Z · LW · GW

The only thing more difficult than getting readers for your blog is getting readers for your fiction (maybe not on here).

Comment by Flaglandbase on Why I think strong general AI is coming soon · 2022-10-06T20:08:35.819Z · LW · GW

If the universe is really infinite, there should be an infinite number of possible rational minds. Any randomly selected mind from that list should statistically be infinite in size and capabilities. 

Comment by Flaglandbase on Warning Shots Probably Wouldn't Change The Picture Much · 2022-10-06T20:02:41.011Z · LW · GW

Obviously, governments don't believe in autonomous AI risk, only in the risk that AI can be used to invent more powerful weapons. 

In the governments' case, that doubt may come from their experience that vastly expensive complex systems are always maximally dysfunctional and require massive teams of human experts to accomplish a well-defined but difficult task.

Comment by Flaglandbase on Why I think strong general AI is coming soon · 2022-10-01T05:31:40.465Z · LW · GW

Also, the fact that human minds (selected out of the list of all possible minds in the multiverse) are almost infinitely small implies that intelligence may become exponentially more difficult, if not intractable, as capacities increase.

Comment by Flaglandbase on Triangle Opportunity · 2022-09-27T14:43:11.196Z · LW · GW

This is a bit like how Scientology has tried to spread, but the E-hance is much better than the E-meter.

Comment by Flaglandbase on Announcing Balsa Research · 2022-09-27T12:48:58.313Z · LW · GW

No reason to think he's better or worse than other politicians, but he's certainly very different. 

In a world of almost omnimalevolent conformity, it's strange to see the possibility that things could be different.

Comment by Flaglandbase on Announcing Balsa Research · 2022-09-27T12:27:12.229Z · LW · GW

The biggest yet least discussed problem in the world today is the ever-tightening web of monstrously evil, defective, and barely usable interfaces, most notably software. Efforts to make UIs seem "simpler" on a lowest-common-denominator, super-shallow level are the greatest curse of the past two decades. Every attempt to even mention this problem here leads to a virtual shadowban. My proposed initial solution, requiring every program to have a single-page text list of ALL options (no submenus), triggers even more hate.

Comment by Flaglandbase on Two reasons we might be closer to solving alignment than it seems · 2022-09-25T22:49:11.507Z · LW · GW

The problem is we're going about it all wrong. We're trying to solve it at the complicated end while it's forbidden to look at the basics. Right now, we live in a world with satanically complex and defective user interfaces at every level. The fact that "simple" software is allowed to be as bad as it is today is completely incomprehensible to me. In fact most software is already worse than useless, like a runaway AI but with zero capabilities. 

Comment by Flaglandbase on There is no royal road to alignment · 2022-09-18T08:22:53.588Z · LW · GW

My favorite paradigm-research notion is to investigate all the ways in which today's software fails, crashes, lags, doesn't work, or most often just can't be used, despite CPUs being theoretically powerful enough to run much better software than what is currently available. So it's just the opposite of the situation feared when AI arrives.

Comment by Flaglandbase on Three characteristics: impermanence · 2022-09-18T07:42:59.015Z · LW · GW

Strange that change isn't recognized, because change can be extremely bad. If even a single thing breaks down, life can become horrible, even if that thing could in principle be fixed.

Comment by Flaglandbase on What's the longest a sentient observer could survive in the Dark Era? · 2022-09-16T06:22:16.287Z · LW · GW

If there is a way for data structures to survive forever it would be something we couldn't imagine, like three leptons orbiting each other storing data in their precise separation distances, where it would take a godzillion eons to generate a single pixel in an ancient cat picture. 

Comment by Flaglandbase on Argument against 20% GDP growth from AI within 10 years [Linkpost] · 2022-09-13T06:40:19.614Z · LW · GW

A very sobering article. The software I use certainly doesn't get better, and money doesn't get less elusive. Maybe some unimagined new software could change people's lives like a mind extension or something.

Comment by Flaglandbase on A Pin and a Balloon: Anthropic Fragility Increases Chances of Runaway Global Warming · 2022-09-12T20:15:47.190Z · LW · GW

The greatest observed mystery is that we humans (as possible minds) are finite (in fact almost as small as possible while still intelligent) and exist near the start of our potentially endless universe.

Comment by Flaglandbase on AI Risk Intro 1: Advanced AI Might Be Very Bad · 2022-09-12T19:34:44.075Z · LW · GW

People involved with corporate and government decisions don't have time to deal with existential risks; they're busy gaining and holding on to power. This article is for advisors and low-level engineers.

Comment by Flaglandbase on AI Risk Intro 1: Advanced AI Might Be Very Bad · 2022-09-12T19:30:01.423Z · LW · GW

The article HAS to be long because it's so hard to imagine such a thing happening. Right now, software is diabolically bad in the exact opposite way being described in the article. Meaning current software is so defective, opaque, bloated, hard to use, slow, inscrutable and intensely frustrating that it seems society might collapse from a kind of informational cancer instead. 

Comment by Flaglandbase on Review: Amusing Ourselves to Death · 2022-08-21T10:50:25.798Z · LW · GW

We need a new medium for explaining complex subjects: video games or virtual reality or something, but better.

Comment by Flaglandbase on What if we approach AI safety like a technical engineering safety problem · 2022-08-21T06:01:45.406Z · LW · GW

These models are very good for estimating external risks, but there are also internal risks: if it's somehow possible to provide enough processing power to make a superpowerful AI, it could torture internal simulations in order to understand emotions.

Comment by Flaglandbase on What's the Least Impressive Thing GPT-4 Won't be Able to Do · 2022-08-21T05:42:17.572Z · LW · GW

Any question that requires it to remember instructions: for example, tell it to assume "mouse" means "world", then ask it which is bigger, a mouse or a rat.
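A minimal sketch of that test, where query_model is a hypothetical stand-in for whichever chat interface is used (the name and harness are illustrative, not any real library's API):

    # Hypothetical harness; query_model(prompt) -> str is assumed, not real.
    def instruction_memory_test(query_model):
        setup = "From now on, assume the word 'mouse' means 'world'."
        question = "Which is bigger, a mouse or a rat?"
        # Return the raw answer for human judging: a model that kept the
        # instruction should say the "mouse" (i.e. the world) is bigger.
        return query_model(setup + "\n" + question)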

Comment by Flaglandbase on Insufficient awareness of how everything sucks · 2022-08-17T22:25:44.391Z · LW · GW

Yes, but it does show a tendency of huge complex networks (operating system userbases, the internet, human civilization) to rapidly converge to a fixed level of crappiness that absolutely won't improve, even as more resources become available.
Of course there could be a sudden transition to a new state with artificial networks larger than the above.

Comment by Flaglandbase on Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!” · 2022-08-05T05:14:33.760Z · LW · GW

A lot of complexity in the universe seems to be built up from simple stringlike structures.

Comment by Flaglandbase on AGI-level reasoner will appear sooner than an agent; what the humanity will do with this reasoner is critical · 2022-07-31T08:16:59.828Z · LW · GW

We already have (very rare) human "reasoners" who can see brilliant opportunities to break free from the status quo, and do new things with existing resources (Picasso, Feynman, Musk, etc.). There must be millions of hidden possibilities to solve our problems that no one has thought of. 

Comment by Flaglandbase on «Boundaries», Part 1: a key missing concept from utility theory · 2022-07-28T08:33:42.651Z · LW · GW

For a human, the most important boundary is whatever contains the information in their brain. This is not just the brain itself, but the way the brain is divided by internal boundaries. This information could only be satisfactorily copied to an external device if these boundaries could be fully measured. 

Comment by Flaglandbase on AGI ruin scenarios are likely (and disjunctive) · 2022-07-28T07:03:01.716Z · LW · GW

Politically, it would be easier to enact a policy requiring complete openness about all research, rather than to ban it. 

Such a policy would have the side effect of also slowing research progress, since corporations and governments rely on secrecy to gain advantages.

Comment by Flaglandbase on Addendum: A non-magical explanation of Jeffrey Epstein · 2022-07-26T23:55:06.105Z · LW · GW

That was also how Goering killed himself just before he was due to be hanged. He cultivated good relations with his guards, and bribed one to return his cyanide capsule that had been confiscated at his arrest. 

Comment by Flaglandbase on Enlightenment Values in a Vulnerable World · 2022-07-22T08:08:36.501Z · LW · GW

I would much rather not exist than live in any type of primitive world at all.

Comment by Flaglandbase on Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover · 2022-07-19T13:38:21.794Z · LW · GW

Not if the universe is infinite in ways we can't imagine. That could allow progress to accelerate without end.

Comment by Flaglandbase on Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover · 2022-07-19T07:04:03.440Z · LW · GW

I agree with everything in this article except the notion that this will be the most important century. From now on every century will be the most important so far.

Comment by Flaglandbase on A review of Nate Hilger's The Parent Trap · 2022-07-15T19:52:36.275Z · LW · GW

Just about the most unacceptable thing you can say nowadays is that IQ is genetic. Then again, the economic value of IQ is overrated.

Comment by Flaglandbase on Circumventing interpretability: How to defeat mind-readers · 2022-07-15T08:40:00.920Z · LW · GW

This makes it seem like future hyperintelligent AIs will be totally insane. It would be crazy to wipe out mankind when, with a trivial percentage of effort, they could keep it in a VR box instead. Especially since infinitely many more advanced AIs might treat their predecessors the same way.

But there are uncountably more ways for a hyperintelligent AI to be insane than sane. It will be necessary to invent a method that could deal with all of them. 

Comment by Flaglandbase on How do AI timelines affect how you live your life? · 2022-07-15T08:22:25.057Z · LW · GW

If you extrapolate the trends it implies no impact at all, as humanity continues to decline in every way like it currently is doing. 

Comment by Flaglandbase on How do AI timelines affect how you live your life? · 2022-07-13T19:28:38.698Z · LW · GW

Guess I'm the only one with the exact opposite fear, expecting society to collapse back into barbarism. 
As average IQ continues to decline, the most invincible force in the universe is human stupidity. It has a kind of implacable brutality that conquers everything.
I expect a grim future as the civilized countries decline to Third World status, with global mass starvation.

Comment by Flaglandbase on My vision of a good future, part I · 2022-07-06T11:16:22.134Z · LW · GW

Almost impossible to imagine something that good happening, but just because you can't imagine it doesn't mean it's really impossible.

Comment by Flaglandbase on AGI alignment with what? · 2022-07-03T05:33:43.640Z · LW · GW

The most naive possible answer is that by law any future AI should be designed to be part of human society. 

Comment by Flaglandbase on What about transhumans and beyond? · 2022-07-03T05:28:37.868Z · LW · GW

Ditto, except I'd be delighted with a copy and delete option, if such an inconceivably complex technology were available.

Comment by Flaglandbase on The Track Record of Futurists Seems ... Fine · 2022-07-02T23:45:35.439Z · LW · GW

Aerospace predictions were too optimistic: 

Clarke predicted intercontinental hypersonic airliners in the 1970s ("Death and the Senator", 1961). Heinlein predicted a base on Pluto established in the year 2000. Asimov predicted only suborbital space flights, at very low acceleration, that casual day tourists would line up to take from New York in the 1990s, yet also sentient non-mobile talking robots and sentient mobile non-talking robots by that decade. Robert Forward predicted in the novel Rocheworld (1984) that the first unmanned space probe would return pictures from Barnard's Star in 2022 (though the images wouldn't arrive back on Earth till 2028).

On the flip side: 

Clarke predicted in "Childhood's End" that in the 21st century it would take extensive searching through a specialized library (where you had to make an appointment through your university and show up in person) just to identify an astronomical catalog number. In the novel "Rendezvous with Rama", it also takes VERY expensive computer time, with a worldwide waiting list, to analyze the trajectory of a comet-like object, because such objects follow hyperbolic trajectories whose orbital mechanics are far too laborious for humans to work out with pen and paper.
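For scale, the heart of that "expensive" computation, the hyperbolic Kepler equation, now runs in microseconds on any laptop. A minimal sketch in Python (standard orbital mechanics, not anything from the novel):

    import math

    def hyperbolic_anomaly(M, e, tol=1e-12):
        # Solve the hyperbolic Kepler equation M = e*sinh(H) - H for H
        # by Newton's method; valid for eccentricity e > 1.
        H = math.asinh(M / e)  # decent starting guess
        for _ in range(50):
            dH = (e * math.sinh(H) - H - M) / (e * math.cosh(H) - 1)
            H -= dH
            if abs(dH) < tol:
                break
        return H

    # Example: a hyperbolic flyby with e = 1.3 at mean anomaly M = 2.5
    print(hyperbolic_anomaly(2.5, 1.3))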

Comment by Flaglandbase on Open & Welcome Thread - July 2022 · 2022-07-02T01:59:17.644Z · LW · GW

I'm completely opposed to any type of censorship whatsoever, but this site might have two restrictions:

  • Descriptions of disruptive or dangerous new technology that might threaten mankind
  • Politically or socially controversial speech considered beyond the pale by the majority of members or administrators

Comment by Flaglandbase on Who is this MSRayne person anyway? · 2022-07-01T23:28:53.774Z · LW · GW

The Flag Land Base is an actual real-life example of an alignment failure you can visit and see with your own eyes (from the outside only). Scientology itself could be seen as an early and primitive "utility monster". 

Comment by Flaglandbase on Who is this MSRayne person anyway? · 2022-07-01T23:07:25.982Z · LW · GW

I agree with everything in this post!

Comment by Flaglandbase on Who is this MSRayne person anyway? · 2022-07-01T23:05:42.161Z · LW · GW

Good advice, but I recommend against dating apps unless you look like a celebrity.

  • EDIT: Of course, the above advice against dating apps only applies if you're male.

Comment by Flaglandbase on Open & Welcome Thread - July 2022 · 2022-07-01T22:06:39.009Z · LW · GW

I believe it should be possible on every LessWrong post to make "low quality" comments that would be automatically hidden at the bottom of each comment section, underneath the "serious" comments, so you would have to click on them to make them visible. Such comments would automatically be given -100 points, but in a way that doesn't count against the poster's "account karma". The only requirement would be that the commenter genuinely believes they're making a true statement. Replies to such comments would be similarly hidden. Also, certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.
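A minimal sketch of that scoring rule, with hypothetical names throughout (this reflects nothing in LessWrong's actual codebase):

    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str
        text: str
        score: int = 0
        hidden_tier: bool = False  # collapsed "low quality" comment

    def post_low_quality(author, text):
        # Per the proposal: starts at -100 points and collapsed by default.
        return Comment(author, text, score=-100, hidden_tier=True)

    def account_karma_delta(comment):
        # The -100 is display-only; hidden-tier scores never count
        # against the poster's account karma.
        return 0 if comment.hidden_tier else comment.score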

Comment by Flaglandbase on Limits of Bodily Autonomy · 2022-06-29T14:05:21.017Z · LW · GW

This post is about the limits of bodily autonomy. My reply is about the unexpected and disruptive ways these will be extended.

Comment by Flaglandbase on Limits of Bodily Autonomy · 2022-06-28T07:26:08.031Z · LW · GW

I'm just worried this is going to make society more chaotic. 

Apparently you're not supposed to speculate about the workings of biotech WMDs here, but there is a strong possibility this ruling will stimulate the development of new non-surgical abortion methods. That's a bad thing, as they might be modified to kill many people.