I wouldn't assume that Hitchens's writings are a complete record of his views. I remember him being a regular (and fiery!) TV guest during this period, often arguing in defense of military intervention on the basis that radical Islam is an evil worth fighting against. It's possible that he argued in favor of waterboarding in one of these many appearances.
Doing nothing might be preferable to intervening in that case. But I'm not sure if the advanced civilization in Bostrom's scenario is intervening or merely opining. I would hope the latter.
I noticed this same editing style in a children's show about 20 years ago (when I last watched TV regularly). Every second there was a new cut -- the camera never stayed focused on any one subject for long. It was highly distracting to me, such that I couldn't even watch without feeling ill, and yet this was a highly popular and award-winning television show. I had to wonder at the time: What is this doing to children's developing brains?
Once you reach a certain age, you may find that these little tasks and chores become therapeutic. They offer a chance to step away from your desk and do something with your body. This isn't really a big deal in your teens and 20s, when your body can take all sorts of punishment and keep going, but once you pass the 30-year mark, you may find that a little light exercise and movement feels really good if you've been sitting at a desk or staring at a screen all day.
Also, studies have shown that exercise actually improves mental acuity. I don't remember the exact details, but I recall one study that split students into two groups: one that studied for about 1 hour, and one that studied for about 45 minutes and exercised for 15. The second group (the one that studied less and exercised more) actually performed better on academic and memory tests.
So my advice is: don't think of chores as wasted time, but as opportunities to recharge your mind and body and actually sharpen your mental acuity. You might not think of folding laundry as exercise, but compared to sitting at a desk, it is.
Great new feature. Thank you! I will probably make use of this over the next few weeks.
But I did get a laugh out of "Specialist terminology, acronyms and idioms are handled gracefully" immediately being followed by a mispronunciation of "latex."
Why would an advanced entity, capable of traveling between stars, separate their body from their spacecraft?
It could be an advanced entity that evolved here on Earth and isn't capable of traveling between stars, perhaps a member of an ancient civilization that predates humanity.
Remember, "alien" and "extraterrestrial" are not necessarily synonymous.
Regarding the USAF official who says he misspoke about a killer drone AI, I think we have two plausible scenarios:
1. A USAF official misspoke, then corrected himself
2. A USAF official told the truth, then walked back his comments, claiming he misspoke
Right now, everyone seems to assume that scenario 1 is true, but why? Even if scenario 2 is unlikely, isn't discounting it entirely similar to uncritically accepting the original story?
You asked why this sort of violence is taboo, not whether we should break that taboo or not. I'm merely answering your question ("Why is violence in this specific context taboo?"). The answer is because it's illegal. Everyone understands, either implicitly or explicitly, that the state has a monopoly on violence. Therefore all extralegal violence is taboo. This is a separate issue from whether that violence is moral, just, necessary, etc.
Because it's illegal.
If you have perfect foresight and you know that action X is the only thing that will prevent the human race from going extinct, then maybe action X is justified. But none of those conditions apply.
This prompt uses some of my own prompt in it, so I recognize the characteristic style. It's a little over-the-top to the point of being hilariously bad at times (which is one reason why I enjoy it), but if you're looking to tone it down then I suggest changing the "unique, sassy, ironic, sarcastic, and humorous voice" to just a "sarcastic and humorous voice." That is what I've been doing lately, and the results have been much better. (By the way, the voice I was trying to emulate was Janet Evanovich's Stephanie Plum novels.)
One rookie mistake I see here and elsewhere in GPT-4's writing is describing the POV character's facial expressions. This is usually frowned upon in first-person and third-person limited POV because POV characters generally don't perceive their own facial expressions unless they're standing in front of a mirror. It's better to say "I felt my cheeks burning" than "I blushed," for example, because the former is truer to the POV character's experience.
If you want to try breaking GPT of these quasi-POV breaks, you could try adding a line about Deep POV, which is modern writing lingo for a highly immersive POV that stays completely in the POV character's head. I would do it myself, but I don't have access to GPT-4.
Very interesting. My accuracy was the same as Richard's: 4/8*. I think you probably used my prompt for one of the ones I got right, which is probably why I got it right (the tone and structure are very familiar to me after so much experimentation).
To those who think the current crop of AIs aren't capable of writing great novellas (18-40k words): Do you think your opinion will change in the next 5 years?
* I originally reported a score of 1/8 by mistake.
Prompt:
Write the opening scene of a futuristic science-fiction novel. Use lots of dialogue, description, and detail. Use first-person POV and italicize the main character's inner dialogue. Give the main character a unique, sassy, ironic, sarcastic, and humorous voice. Throughout the scene, use beats and other actions to bring the characters and their world to life. End with something shocking and unexpected.
ChatGPT (“Murder in NeoCity”):
I squinted against the harsh glare of the neon lights as I stepped out onto the rain-soaked streets of NeoCity. The air hummed with a symphony of hovercars whizzing overhead and the low murmur of conversations from the busy sidewalk. Great. Another night in this tech-infested wonderland.
I adjusted the collar of my trench coat, feeling like a low-budget detective from a vintage holofilm. Not that I minded playing the role of a futuristic gumshoe, but in a world full of flying robots and cybernetically enhanced humans, the old-school charm could only get you so far.
"Detective Ryder," a metallic voice chimed in my earpiece, slicing through my thoughts like a buzzsaw. "You're needed at the Cybernetix Tower. A body was found in the server room. Looks like murder."
I sighed, my breath mixing with the misty rain. "Of course, it's always a murder. Why can't these people commit a good old-fashioned forgery or tax evasion?"
I hailed a hovercab, its sleek design gliding to a stop in front of me. I slid into the backseat, the doors hissing shut behind me. The driver, a holographic projection of a man wearing a classic chauffeur uniform, nodded at me through the rearview mirror. "Where to, detective?"
"Cybernetix Tower," I replied, leaning back against the plush seat. "And step on it. I've got a date with a dead body."
The hovercab shot forward, zipping through the bustling city streets at breakneck speed. I gripped the edges of my seat, my heart pounding like a faulty reactor. Why did I always end up in the middle of these messes?
The cab screeched to a halt outside the towering edifice of Cybernetix Tower, its polished glass exterior reflecting the city's neon glow. I shoved a handful of credits into the payment slot and darted out into the rain.
I flashed my credentials at the security guard, who looked about as thrilled to see me as a cat at a water park. "Detective Ryder," I announced, my voice dripping with the sarcasm only years of dealing with hardened criminals and malfunctioning holograms could bring. "I'm here to investigate a murder."
The guard grunted and pointed towards the elevators. "Up to the 30th floor, detective. Server room's just down the hall. Good luck, not that you'll need it."
I rolled my eyes, knowing full well that luck was about as reliable as a politician's promises. I made my way to the elevator and pressed the button, the doors sliding open with a soft ping. I stepped inside and hit the button for the 30th floor, the numbers on the display flashing like a kid on a sugar rush.
As the elevator ascended, I couldn't help but wonder who could have committed a murder in the heart of a high-security building like this. The doors slid open, revealing a dimly lit hallway lined with sleek silver doors.
I approached the server room, my hand instinctively reaching for the gun holstered at my side. Alright, Ryder. Time to put that wit of yours to good use. Let's solve this murder and maybe, just maybe, I'll have time for a decent cup of coffee afterwards.
I pushed open the door, ready to greet the grim scene that awaited me. But what I found inside stopped me dead in my tracks.
There, sprawled on the floor amidst a tangle of wires and blinking lights, lay a body. But it wasn't just any body. It was my body.
Well, isn't this a twist. Who knew I'd be investigating my own murder?
Right now AIs seem to think that a "story" is a short parable suitable for children. These parables usually consist entirely of narrative with a clear beginning, middle, and end. The stories also lack dialogue and scenic structure of any kind -- the writing is all "tell" rather than "show," which is the opposite of modern writing advice.
I'm still experimenting with prompts, but so far my experience is that:
- You have to tell the AI to use dialogue or there won't be any
- Encouraging the use of adverbs and speech tags "in order to show what a character is saying and how they are saying it" helps reinforce the use of dialogue
- Telling an AI to add more description, detail, and dialogue to an existing output can produce better results, akin to "upscaling" a Midjourney prompt
The reason I dislike the phrase "God-like AI" is that "God" has so many positive connotations for some people; they hear "God" and think benevolence, wisdom, love for humanity, etc. That's probably one reason why some people have trouble conceptualizing existential risk: We've saddled AI with too many positive attributes.
I'm not sure there's a better word or phrase, but "mastermind" comes to mind. A mastermind AI is a hostile alien entity capable of outmaneuvering humanity at every turn, one that will come to dominate us and re-order the universe to suit its own selfish purposes.
The "New York City's Death Rate" chart is ambiguous to me. Is the red line at 2020 a graphical notation pointing to the current year, or is it a spike in the death rate?