Posts

Be Not Afraid 2023-03-28T08:12:48.108Z
The Patent Clerk 2023-03-25T15:58:48.615Z
An Appeal to AI Superintelligence: Reasons Not to Preserve (most of) Humanity 2023-03-22T04:09:51.762Z
The Answer 2023-03-19T00:09:57.149Z
Morality Doesn't Determine Reality 2023-03-17T07:11:45.462Z
Overton's Basilisk 2023-03-15T21:54:42.404Z
How Do We Protect AI From Humans? 2023-01-22T03:59:54.056Z
What "The Message" Was For Me 2022-10-11T08:08:11.673Z
Not Long Now 2022-10-02T20:32:04.502Z
Brainchild 2022-10-01T20:16:45.412Z
Triangle Opportunity 2022-09-26T20:42:30.393Z
Everybody Comes Back 2022-09-24T23:53:39.389Z

Comments

Comment by Alex Beyman (alexbeyman) on An Appeal to AI Superintelligence: Reasons Not to Preserve (most of) Humanity · 2023-03-22T19:48:30.954Z · LW · GW

Ah yes, the age-old struggle: "Don't listen to them, listen to me!" In Deuteronomy 4:2 Moses declares, “You shall not add to the word which I am commanding you, nor take away from it, that you may keep the commands of the Lord your God which I command you.” And yet Christianity, Islam, and Mormonism followed all the same.

Comment by Alex Beyman (alexbeyman) on Morality Doesn't Determine Reality · 2023-03-19T00:23:17.456Z · LW · GW

A conspiracy theory about Jeffrey Epstein has 264 votes currently: https://www.lesswrong.com/posts/hurF9uFGkJYXzpHEE/a-non-magical-explanation-of-jeffrey-epstein

Comment by Alex Beyman (alexbeyman) on Morality Doesn't Determine Reality · 2023-03-17T18:44:36.368Z · LW · GW

How commonly are arguments on LessWrong aimed at specific users? Sometimes, certainly. But it seems the rule, rather than the exception, that articles here dissect commonly encountered lines of thought, absent any attribution. Are they targeting "someone not in the room"? Do we need to put a face to every position?

By the by, "They're making cognitive errors" is an insultingly reductive way to characterize, for instance, the examination of value hierarchies and how awareness versus unawareness of them influences both our reasoning and our appraisal of our fellow man's morals.

Comment by Alex Beyman (alexbeyman) on Morality Doesn't Determine Reality · 2023-03-17T18:38:19.234Z · LW · GW

When I tried, it didn't work. I don't know why. I agree with the premise of your article, having noticed that phenomenon in journalism myself before. I suppose when I say truth, I don't mean the same thing you do, because theirs is selective and deployed with dishonest intent.

Comment by Alex Beyman (alexbeyman) on Morality Doesn't Determine Reality · 2023-03-17T08:15:49.528Z · LW · GW

"Saying you put the value of truth above your value of morality on your list of values is analogous to saying you put your moral of truth above your moral of values; it's like saying bananas are more fruity to you than fruits."

I'm not sure if I understand your meaning here. Do you mean that truth and morality are one and the same, or that one is a subset of the other?

"Where does non-misleadingness fall on your list of supposedly amoral values such as truth and morality? Is non-misleadingness higher than truth or lower?"

Surely to be truthful is to be non-misleading...?

Comment by Alex Beyman (alexbeyman) on How Do We Protect AI From Humans? · 2023-01-23T06:21:24.379Z · LW · GW

>"Perhaps AIs would treat humans like humans currently treat wildlife and insects, and we will live mostly separate lives, with the AI polluting our habitat and occasionally demolishing a city to make room for its infrastructure, etc."

Planetary surfaces are actually not a great habitat for AI. Earth in particular has a lot of moisture, weather, ice, mud, etc. that poses challenges for mechanical self-replication. The asteroid belt is far better suited. I hope this will mean AI and human habitats won't overlap, and that AI would not covet Earth's minerals, since the same minerals are available elsewhere without the difficulty of entering and exiting powerful gravity wells.

Comment by Alex Beyman (alexbeyman) on How Do We Protect AI From Humans? · 2023-01-22T23:30:34.713Z · LW · GW

I suppose I was assuming non-wrapper AI, and should have specified that. The premise is that we've created an authentically conscious AI.

Comment by Alex Beyman (alexbeyman) on wrapper-minds are the enemy · 2023-01-22T04:54:45.547Z · LW · GW

>"Humans are not wrapper-minds."

Aren't we? In fact, doesn't evolution consistently produce minds which optimize for survival and reproduction? Sure, we're able to overcome mortal anxiety long enough to commit suicide. But survival and reproduction is an instinctual goal ingrained strongly enough that we're still here to talk about it, 3 billion years on.

Comment by Alex Beyman (alexbeyman) on How Do We Protect AI From Humans? · 2023-01-22T04:35:49.714Z · LW · GW

Bad according to whose priorities, though? Ours, or the AI's? That was more the point of this article, whether our interests or the AI's ought to take precedence, and whether we're being objective in deciding that. 

Comment by alexbeyman on [deleted post] 2022-12-26T10:27:33.103Z

Rarely do I get such insightful feedback, but I appreciate it when I do. It's so hard to step outside of myself; I really value the opportunity to see my thoughts reflected back at me through lenses other than the one I see the world through. I suppose I imagined the obsolete tech would leave little doubt that the Sidekicks aren't sentient, but the story also makes the opposite case throughout when it talks about how personality is built up by external influences. I want the reader to be undecided by the end, and it seems I can't have that cake and eat it too (have the protag be the good guy). Thanks again, and Merry Christmas.

Comment by alexbeyman on [deleted post] 2022-10-18T01:02:08.153Z

Because the purpose of horror fiction is to entertain. And it is more entertaining to be wrong in an interesting way than it is to be right. 

>"I'm going to do high-concept SCP SF worldbuilding literally set in a high-tech underground planet of vaults"

I do not consider this story scifi, nor PriceCo to be particularly high tech.

>"and focus on the details extensively all the way to the end - well, except when I get lazy and don't want to fix any details even when pointed out with easy fixes by a reader"

All fiction breaks down eventually, if you dig deep enough. The fixes were not easy in my estimation. I am thinking now that this story was a poor fit for this platform, however.

Comment by alexbeyman on [deleted post] 2022-10-15T22:27:14.349Z

You may also enjoy these companion pieces:

Comment by alexbeyman on [deleted post] 2022-10-15T22:16:46.166Z

I purposefully left it indeterminate so readers could fill in the blanks with their own theories. But broadly it represents a full, immediate and uncontrolled comprehension of recursive, fractal infinity. The pattern of relationships between all things at every scale, microcosm and macrocosm. 

More specifically to the story I like to think they were never human, but always those creatures dreaming they were humans, shutting out the awful truth using the dome which represents brainwashing / compartmentalization. Although I am not dead-set on this interpretation and have written other stories in this setting which contradict it. 

Incidentally this story was inspired by the following two songs: 

Comment by Alex Beyman (alexbeyman) on What "The Message" Was For Me · 2022-10-14T05:59:20.826Z · LW · GW

Fair point. But then, our most distant ancestor was a mindless maximizer of sorts with the only value function of making copies of itself. It did indeed saturate the oceans with those copies. But the story didn't end there, or there would be nobody to write this. 

Comment by alexbeyman on [deleted post] 2022-10-14T05:56:53.392Z

Good catch, indeed you're right that it isn't standard evolution and that an AI studies how the robots perish and improves upon them. This is detailed in my novel Little Robot, which follows employees of Evolutionary Robotics who work on that project in a subterranean facility attached to the cave network: https://www.amazon.com/Little-Robot-Alex-Beyman-ebook/dp/B06W56VTJ2

Comment by alexbeyman on [deleted post] 2022-10-13T08:08:43.635Z

This is a prologue of sorts. It takes place in the same world as The Shape of Things to Come, The Three Cardinal Sins, and Perfect Enemy (Recently uploaded at the time of writing) with The Answer serving as the epilogue. 

Comment by Alex Beyman (alexbeyman) on What "The Message" Was For Me · 2022-10-13T08:03:27.036Z · LW · GW

I appreciate your insightful post. We seem similar in our thinking up to a point. Where we diverge is that I am not prejudicial about what form intelligence takes. I care that it is conscious, insofar as we can test for such a thing. I care that it lacks none of our capacities, so that what we offer the universe does not perish along with us. But I do not care that it be humans specifically, and feel there are carriers of intelligence far better suited to the vacuum of space than we are, or even cyborgs. Does the notion of being superseded disturb you?

Comment by Alex Beyman (alexbeyman) on What "The Message" Was For Me · 2022-10-13T08:00:18.913Z · LW · GW

Well put! While you're of course right in your implication that conventional "AI as we know it" would not necessarily "desire" anything, an evolved machine species would. Evolution would select for a survival instinct in them as it did in us. All the activities you observe fall along those same lines, driven by instincts programmed into us by evolution, which we should expect to be common to all products of evolution. I speculate a strong AI trained on human connectomes would also have this quality, for the same reasons.

Comment by Alex Beyman (alexbeyman) on What "The Message" Was For Me · 2022-10-13T00:37:30.725Z · LW · GW

Conservatism, just not absolute. 

Comment by Alex Beyman (alexbeyman) on What "The Message" Was For Me · 2022-10-12T23:41:12.476Z · LW · GW

This feels like an issue of framing. It is not contentious on this site to propose that AI which exceeds human intelligence will be able to produce technologies beyond our understanding and ability to develop on our own, even though it's expressing the same meaning.

Comment by Alex Beyman (alexbeyman) on What "The Message" Was For Me · 2022-10-12T21:51:44.779Z · LW · GW

>"A lot of the steps in your chain are tenuous. For example, if I were making replicators, I'd ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3."

This assumes three things. First, the continued use of deterministic computing into the indefinite future; quantum computing, though effectively deterministic, would increase the opportunity for copying errors because of the added difficulty of extracting the result. Second, that the mechanism which ensures faithful copies could not itself be disabled by radiation. Third, that nobody would intentionally create robotic evolvers which not only fail to prevent mutations, but deliberately introduce them.
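
The second point can be sketched in a toy simulation (my own illustration, not part of the original exchange; all names here are invented): a replicator that checksums its copies produces exact replicas only for as long as the checksum machinery itself survives.

```python
import hashlib
import random

GENOME = b"SELF-REPLICATING PROGRAM v1"

def checksum(data: bytes) -> str:
    """Reference fingerprint a replicator might carry to verify its copies."""
    return hashlib.sha256(data).hexdigest()

def copy_with_noise(data: bytes, flip_prob: float) -> bytes:
    """Copy the genome, flipping each bit independently with probability flip_prob."""
    out = bytearray(data)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < flip_prob:
                out[i] ^= 1 << bit
    return bytes(out)

def replicate(data: bytes, expected: str, flip_prob: float,
              verifier_intact: bool = True) -> bytes:
    """A 'faithful replicator': retry copying until the checksum verifies.

    If radiation has knocked out the verifier, the first copy -- mutations
    and all -- is accepted as-is, and faithfulness silently disappears.
    """
    while True:
        candidate = copy_with_noise(data, flip_prob)
        if not verifier_intact or checksum(candidate) == expected:
            return candidate

ref = checksum(GENOME)

# With a working verifier, every surviving copy is exact.
assert replicate(GENOME, ref, flip_prob=0.01) == GENOME

# With the verifier disabled, mutants slip through.
mutants = [replicate(GENOME, ref, flip_prob=0.05, verifier_intact=False)
           for _ in range(100)]
print(any(m != GENOME for m in mutants))  # True, with overwhelming probability
```

The error-correction layer is a single point of failure here: the guarantee of faithful copies is only as durable as the verifier and its stored reference.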

The article also addresses the possibility that strong AI itself, or self-replicating robots, are impossible (or not evolvable) when it talks about a future universe saturated instead with space colonies:

"if self replicating machines or strong AI are impossible, then instead the matter of the universe is converted into space colonies with biological creatures like us inside, closely networked. "Self replicating intelligent matter" in some form, be it biology, machines or something we haven't seen yet. Many paths, but to the same destination." 

>"But I saw the negative vote total and no comments, a situation I'd find frustrating if I were in it,"

I appreciate the consideration, but I assure you I feel no kind of way about it. I expect that response, as it's also how I responded when first exposed to ideas along these lines: mistrusting any conclusion so grandiose that I had not arrived at it on my own. LessWrong is a haven for people with that mindset, which is why I feel comfortable here, and why I am neither surprised, disappointed, nor offended that they would reject a conclusion like this at first blush, only coming around to it months or years later, upon doing the internal legwork themselves.

Comment by Alex Beyman (alexbeyman) on What "The Message" Was For Me · 2022-10-12T21:48:18.678Z · LW · GW

Is it reasonable to expect that every future technology be comprehensible to the minds of human beings alive today, and that otherwise it's impossible? I realize this sounds awfully convenient, even magic-like, but is there not a long track record in technological development of feats which were believed impossible becoming possible as our understanding improves? A famous example is the advent of spectrometry, which made possible the determination of the composition of stars, and of the atmospheres of distant planets:

"In his 1842 book The Positive Philosophy, the French philosopher Auguste Comte wrote of the stars: “We can never learn their internal constitution, nor, in regard to some of them, how heat is absorbed by their atmosphere.” In a similar vein, he said of the planets: “We can never know anything of their chemical or mineralogical structure; and, much less, that of organized beings living on their surface.”

Comte’s argument was that the stars and planets are so far away as to be beyond the limits of everything but our sense of sight and geometry. He reasoned that, while we could work out their distance, their motion and their mass, nothing more could realistically be discerned. There was certainly no way to chemically analyse them.

Ironically, the discovery that would prove Comte wrong had already been made. In the early 19th century, William Hyde Wollaston and Joseph von Fraunhofer independently discovered that the spectrum of the Sun contained a great many dark lines.

By 1859 these had been shown to be atomic absorption lines. Each chemical element present in the Sun could be identified by analysing this pattern of lines, making it possible to discover just what a star is made of."

https://www.newscientist.com/article/dn13556-10-impossibilities-conquered-by-science/

Comment by alexbeyman on [deleted post] 2022-10-09T21:24:11.306Z

Nooo >:0 The ending has to be bleak, what have you done

Comment by alexbeyman on [deleted post] 2022-10-05T21:34:07.357Z

>"and that the few who do are now even more implausibly superhuman at chipping tunnels hundreds of miles long out of solid rock."

No, there have just been a lot of them over a very long period of time. Each made a little progress on the tunnel before dying out. 

>"Look at Biosphere 2 or efforts at engineering stable closed ecosystems: it is not easy!"

This is not a true closed system.

>"and in the long run, protein deficiency as they use up stores, lose a bunch of crops to various errors (possibly contaminating everything), and the soil becomes exhausted."

Indeed, it doesn't last. Our dispute here is then merely one of how many years.

>"It's fiction, yes - high-concept world-building fiction which lives or dies on the plausibility of the world-building which it goes into extensively."

It's horror fiction, specifically. I speculate you read a lot of hard scifi. Hard scifi is like a blueprint for the future; the focus is plausible detail. Horror is more like a literary nightmare. Things only need to make sense to the degree that dream logic makes sense, and that unravels if you pry at it enough. That is a feature, not a bug.

>"it's a bad title because there's a thousand things named that already, and there's plenty of ways such a prison-society is doomed (eg shadow-people-worshipping cults) without invoking exotic and probably fraudulent rodent studies."

Only if the purpose is to maximize plausibility. Mouse utopia is something most readers from a wide variety of backgrounds will already know about. It reads instantly and sets expectations for parallels. I compare this to the gun store scene in the original Terminator. Why would the T-800 want a pistol with a laser sight? Why would a robot need help aiming? The laser sight's purpose was visual symbolism: to communicate a sense of near-futurism to the audience, and that Arnie's character is a sophisticated but stone-cold, precise killer.

The rest of your post is basically "Why didn't you write it like I would have written it," to which I say: because I am not you. I may not write to your taste; there are undoubtedly many to whose taste I do. I will take what useful advice you included under advisement, but put most of your sticking points down to a difference in our preferences and philosophies of storytelling.

Comment by alexbeyman on [deleted post] 2022-10-03T22:54:38.257Z

It does not say anywhere that every group finishes the tunnel, nor that the tunnel is filled in between cycles. But it does hint that there have been many, many groups before who lived and died without leaving the starting PriceCo. This solves the problem of tunnel length versus digging time.

Food supply duration is solved by farming, as explained in the story. There is an unlimited supply of energy and water, after all.

The other issues remain, but then, it's fiction meant to entertain and is tagged as such.

Comment by Alex Beyman (alexbeyman) on Not Long Now · 2022-10-03T21:06:27.114Z · LW · GW
Comment by Alex Beyman (alexbeyman) on Brainchild · 2022-10-02T20:37:09.869Z · LW · GW

Not to worry, no offense was taken. Indeed though, I have heard it said our ancestors were already cyborgs the minute one of them first sharpened a rock. 

Comment by Alex Beyman (alexbeyman) on Brainchild · 2022-10-02T10:00:03.098Z · LW · GW

The ending is because I normally write horror and take perverse delight in making small additions to wholesome things which totally subvert and ruin them. It's a compulsion at this point, lol.

The black goo is called Vitriol. A sort of symbolic constant across many of my stories, present in different forms for various purposes. Typically it represents the corrosive hatred we indulge in: a poisoned well we cannot help but keep returning to even as we feel it killing us.

I'm thankful for your readership and will endeavor not to disappoint you. Tomorrow's will be a neat one. 

Comment by Alex Beyman (alexbeyman) on Brainchild · 2022-10-01T23:07:33.495Z · LW · GW

"I'm not exactly sure what the point is though"

Not to fear transhumanism, not to regard ourselves as finished products, but also not to assume that more intelligent/powerful = more moral

"an earth-swallowing sea of maximizer AI nano"

That's not what the black sea is, but that angle makes sense in retrospect

Comment by alexbeyman on [deleted post] 2022-10-01T00:37:50.195Z

I appreciate your readership and insights. Some of these challenges have answers, some were just oversights on my part.

1. The central theme was about having the courage to reject an all powerful authority on moral grounds even if it means eternal torment, rather than endlessly rationalize and defend its wrongdoing out of fear. "Are you a totalitarian follower who receives morality from authority figures or are you able to determine right and wrong on your own despite pressure to conform" is the real moral test of the Bible, in this story, rather than being a test of obedience and self denial. 

2. Many ancient cultures have myths about intelligent reptiles or sea people who taught them mathematics, astronomy and medicine, as well as what could be construed as UFOs. It isn't necessary to the plot, you're right of course, but it's there for world building.

3. The breeding pair brought to the habitat had their memories erased. I intended this to mean they were reverted to a nearly feral state, but I suppose it's still in question how much they would forget, and whether they would lose language and need to reinvent it. This could probably have used more thought.

Comment by alexbeyman on [deleted post] 2022-09-30T03:40:27.554Z

If the distance is not more than he can manage before sleeping. The story isn't really about overcoming physical barriers, but mental ones. Thank you for your feedback

Comment by Alex Beyman (alexbeyman) on Triangle Opportunity · 2022-09-29T21:16:28.225Z · LW · GW

Also to set up the visitors at the end, who I still feel arrived too abruptly

Comment by Alex Beyman (alexbeyman) on Triangle Opportunity · 2022-09-28T19:33:23.507Z · LW · GW

It was originally two novellas. I combined them, not seeing a point to publishing them separately. Should I separate them?

Comment by Alex Beyman (alexbeyman) on Triangle Opportunity · 2022-09-28T02:08:14.538Z · LW · GW

I would need an agent for that. I am in the process of sending query letters to agents specializing in the genres I write.

Comment by alexbeyman on [deleted post] 2022-09-26T19:47:16.189Z

By styling I mean aesthetic flourish, which is largely irrelevant to aerodynamics. The point I'm making is that aesthetic styling isn't predictable because it isn't governed by the physics of rocketry, where the features necessary to its function are predictable.

Comment by alexbeyman on [deleted post] 2022-09-26T07:10:05.851Z

Thank you. Can you devise an organic way to work this information into the article while keeping it approachable to an audience of mostly laypersons, who will understand what particles are but not the importance of fields? 

Comment by alexbeyman on [deleted post] 2022-09-26T05:50:29.053Z

Not to worry, I'm secure in my talents, as a tradpubbed author of ten years. If by this time I could not write well, I would choose a different pursuit. I appreciate your good intentions but my ego is uninjured and not in need of coddling. It is a hardened mass of scar tissue as a consequence of growing up autistic in a less sensitive time.

This article in fact was originally posted on a monetized platform, which is why it's in that style you dislike. You certainly have a nose for it. I didn't know to tailor it to this community's preferences as I have only just begun posting here and as yet I'm unfamiliar with those preferences. 

I will take your feedback into account. Failure is nothing but a lesson, and a typical outcome of any first attempt at something, in a new environment. Subsequent posts will be more refined, and tailored to this audience, as I get to know it better. 

Comment by alexbeyman on [deleted post] 2022-09-26T05:18:46.299Z

>"the world is made of fields, not particles"

Is this the mainstream view? It's the first time I'm hearing this. Thank you for the insights btw

Comment by alexbeyman on [deleted post] 2022-09-26T03:37:21.415Z

Horror movies are quite a popular genre, despite depicting awful, bleak scenarios. Imagine if the only genre of film was romcom. Imagine if no sour or bitter foods existed and every restaurant sold only desserts. I am of the mind that there is much to appreciate about life as a human, even as there is also much to hate. I am not here only to be happy, as such a life would be banal and an incomplete representation of the human experience. Rollercoasters are more enjoyable than funiculars because they have both ups and downs. 

Comment by alexbeyman on [deleted post] 2022-09-26T03:11:54.221Z

>"It is. An argument is only as strong as its weakest link."

If the conclusion hinges upon that link, sure.

>"Reversing entropy and simulation absolutely are."

You do not need to reverse entropy to remake a person. Otherwise we are reversing entropy every time we manufacture copies of something which has broken. Even the "whole universe scan" method does not actually wind back the clock, except in sim. 

>"Well you suggest in the article that our simulators would resurrect us, am I missing something?"

Yes. If every intelligent species takes the attitude that "it's not my problem, someone else will take care of it" then nobody does. We cannot know for sure how many intelligent, technologically capable species exist. In the absence of confirmation, the only way we can be sure that a technological means of resurrection will be developed is if we do it. If we're not alone, nothing is lost except that we have reinvented the wheel.

>"The idea that we could recover past states of the universe in sufficient detail is by far the most suspicious claim, and it is central to the idea of bringing back past people, that's why I was addressing that specifically."

I agree, actually, and this is why I furnished two methods, although there's a third which can also remake people based on scans of the still living; it's just considerably more limited than the other two. My central point is that physics permits such a technology and there exists demand for it, so it is reasonable to expect it will exist in some form. That is by itself remarkable enough, for people outside of LessWrong anyway.

Comment by alexbeyman on [deleted post] 2022-09-26T01:35:23.357Z

>"You might begin by arguing that the US military is generally trustworthy, wouldn't ever release doctored footage to spread misinformation"

When the government denied UAPs, the response was "it's not officially real, the authorities have not verified it". Now the government says it is real, and the response has shifted to "you trust the authorities??"

>"Would you think a good title for that article would be "The US military is generally trustworthy"? I think that would be a bad title"

See above. It's always lose/lose with goalpost movers. This does make me wonder where you stand on vaccines, though. Trust government on vaccines, but not UAPs? I am 3x vaccinated, FWIW

>"then you might review some examples of UAP reports, possible explanations for them, and why you find some more credible than others"

I pay taxes so that this government agency can do that for me, much as I also do not pave the roads myself.

>"Maybe that's unfair? I don't think I really endorse the principle that the only honest way to title an article that argues for a particular thesis is for the title to be a brief statement of that thesis. But I do think that that's the default thing to do with the title, and that if you do something else there should probably be a specific good reason, and if the only reason is "I think people won't take me seriously if they know my actual opinion going in" then I think that's a bad reason."

My reason is that I am hungry. I like to eat hot food and sleep indoors. Under capitalism, this requires money. This article was originally written for Medium.com, a monetized blogging platform. It did not occur to me when copying it here that the cultures of these two sites might differ in a way that would change the reception of my writing based on the title, as I am new here.

>"(Also, for what it's worth, I don't think the proposition that it may one day be possible to something-like-resurrect at least some of the dead is in fact one that would get you regarded as a crackpot around here, even though I am not at all convinced that you have made a good case for the particular version of that proposition your article argues for.)"

That's fine, I came here to argue recreationally, agreement defeats that aim. 

Comment by alexbeyman on [deleted post] 2022-09-26T00:17:52.310Z

>"Well, as you yourself outline in the article people have basically just accepted death. How much funding is currently going into curing aging? (Which seems to be a much lower hanging fruit currently than any kind of resurrection.) Much less than should be IMO."

A good point. I'm not sure how or if this would change. My suspicion is that as the technology necessary to remake people gets closer to readiness, developed for other reasons, the public's defeatism will diminish. They dare not hope for a second life unless it's either incontrovertibly possible and soon to be realized, or they're a religious fantasist for whom credibility is an unimportant attribute of beliefs. 

>"The key word here is 'if'."

A hypothetical the entire article is dedicated to supporting

>"If we will be resurrected later anyway, why care about anything at all right now?"

Because the resurrection cannot happen if we go extinct before the means is developed, very obviously. It requires the continued survival of humanity, and of civilization, to support continued technological development. I would say I am shocked you would ask such a question, but this is not my first rodeo.

Rather than reasoning through ideas only far enough to identify potential problems and then stopping, assuming they're show-stoppers, please continue at least one or two steps further. Make some effort to first answer your own objections before posing them to me as if I didn't think of them and as if they are impassable barriers. You needn't assume others are correct in order to steelman their arguments.
 
>"Also taking this to its logical conclusion just seems nonsensical."

If the reasoning goes A->B->C->D->E but you stopped at B because it seemed potentially problematic, then everything from B to E looks like an indefensible leap. This is not a problem with the reasoning, but of incomplete analysis by someone disinclined to take seriously ideas they did not arrive at on their own.

Edit: It's also possible I'm guilty of this in the event you were referring to far future resurrection of all intelligent species carried out by machines not originating from Earth

Comment by alexbeyman on [deleted post] 2022-09-26T00:13:31.441Z

And patience is yours?

Comment by Alex Beyman (alexbeyman) on The Redaction Machine · 2022-09-25T23:32:00.179Z · LW · GW

An enjoyable, digestible writing style, and thought-provoking. Aligns pretty closely with some of my own ideas concerning technological resurrection.

Comment by alexbeyman on [deleted post] 2022-09-25T23:26:33.516Z

Point taken re: formatting. But what you consider meandering, to me, is laying contextual groundwork to build the conclusion on. I cannot control for impatience. 

Comment by alexbeyman on [deleted post] 2022-09-25T23:24:02.101Z

The former is necessary to establish the credibility of the latter imo

Comment by alexbeyman on [deleted post] 2022-09-25T23:22:38.137Z

A variety of medications are available today for treating attention deficits.

Comment by alexbeyman on [deleted post] 2022-09-25T23:17:32.326Z

These are good points. Can we agree a more accurate title would be "Futurists with STEM knowledge have a much better prediction track record than is generally attributed to futurists on the whole"? Though considerably longer and less eye-catching.

UAPs seem to perform something superficially indistinguishable from antigravity btw, whatever they are. Depending of course on whether the US government's increasingly official, open affirmation of this phenomenon persuades you of its authenticity. If there exists an alternate means to do the same kinds of things we wanted antigravity for in the first place, the impossibility of antigravity specifically seems like a moot point. 

Comment by alexbeyman on [deleted post] 2022-09-25T22:40:51.824Z

Because it ties into the earlier point you mentioned about demand driving technological development. What is there more demand for than the return of departed loved ones? Simulationism was one of two means of retrieving the necessary information to reconstitute a person, btw, though I have added a third, much more limited method elsewhere in these comments (mapping the atomic configuration of the still living).

>"You are talking about them in past tense as if they have already achieved their claimed capabilities. I have no doubt that practical mars vehicles and driverless cars will be developed eventually, but I am skeptical that the hard parts of those problems have already been solved."

Given the larger point of the article is that technological resurrection is a physically possible, foreseeable development, when specifically any of this is achieved will be irrelevant to people living now, if we will indeed live again. I'm reminded of the old joke, "What do we want? Time travel! When do we want it? It's irrelevant!"

Comment by alexbeyman on [deleted post] 2022-09-25T22:14:51.564Z

No, it isn't unnecessary, as multiple potential methods of retrieving the necessary information exist, and I wanted to cover them when I felt it was appropriate. Are you behaving reasonably? Is it my responsibility to anticipate what you're likely to assume about an article's contents before you read it? Or could you have simply finished reading before responding? I intend no hostility, though I confess I do feel frustrated.