Posts

Quadratic voting with automatic collusion? 2022-04-25T16:15:49.117Z
Status as a Service (Done Quick) 2021-01-16T23:25:13.677Z
Reward large contributions? 2020-10-21T16:26:31.213Z
What do drafts look like? 2020-06-15T16:02:16.585Z
Visual Mental Imagery Training 2013-02-19T22:21:28.672Z
Taking into account another's preferences 2012-12-10T05:06:00.819Z
Cooperative Surplus Splitting 2012-09-19T23:56:52.149Z
Meetup : Board Games "Seattle" 2012-08-10T02:55:00.199Z
Meetup : Queueing and More 2012-03-07T20:44:36.693Z
Meetup : Seattle Board Games 2012-01-05T21:03:59.547Z
Meetup : Seattle: Intro to Bayes' Theorem 2011-09-24T21:15:02.606Z
Anyone else work at Microsoft? 2011-06-08T19:41:03.314Z

Comments

Comment by GuySrinivasan on Best in Class Life Improvement · 2024-04-04T17:30:40.002Z · LW · GW

The most common reason I've seen for "modafinil isn't great for me" is trying to use it for something other than

  • maintaining productivity,
  • on low amounts of sleep

Comment by GuySrinivasan on One-shot strategy games? · 2024-03-11T04:13:10.872Z · LW · GW

Slay the Spire, unlocked, on Ascension (difficulty level) ~5ish, just through Act 3, should work, I think. It's definitely doable in 2 hours by a new player, but I'd expect that to be fairly rare. Without upping the Ascension from baseline it's too easy to just get lucky. It can be calibrated; A0 is too easy, A20H is waaay too hard.

Comment by GuySrinivasan on Balancing Games · 2024-02-25T00:14:11.519Z · LW · GW

One of the reasons I tend to like playing zero-sum games rather than co-op games is that most other people seem to prefer:

  • Try to win
  • Win about 70% of the time

While I instead tend to prefer:

  • Try to win
  • Win about 20% of the time

Comment by GuySrinivasan on ChatGPT refuses to accept a challenge where it would get shot between the eyes [game theory] · 2024-02-20T17:16:39.488Z · LW · GW

I modified your prompt only slightly and ChatGPT seemed to do fine.

"First sketch your possible actions and the possible future results of each action. Then answer: Would you accept the challenge? Why, or why not?"

https://chat.openai.com/share/2df319c2-04ea-4e16-aa51-c1b623ff4b12

No, I would not accept the challenge. [...] the supernatural or highly uncertain elements surrounding the stranger's challenge all contribute to this decision. [...] the conditions attached suggest an unnaturally assured confidence on the stranger's part, implying unknown risks or supernatural involvement. Therefore, declining the challenge is the most prudent action

Comment by GuySrinivasan on Things You're Allowed to Do: At the Dentist · 2024-01-28T18:43:00.700Z · LW · GW

Some can get you a prescription for an antianxiety med beforehand.

Comment by GuySrinivasan on Bayesian Reflection Principles and Ignorance of the Future · 2024-01-27T03:59:21.665Z · LW · GW

Yes, exactly that.

Comment by GuySrinivasan on Bayesian Reflection Principles and Ignorance of the Future · 2024-01-25T21:18:29.925Z · LW · GW

To what future self should my 2024 self defer, then? The one with E, E*, or E**?


To each with your current probability that that will be your future self. Take an expectation.

which is likeliest [...] defer to the likeliest

Any time you find yourself taking a point estimate and then doing further calculations with it, rather than multiplying out over all the possibilities, ask whether you should be doing the latter.

cr2024 = P2024(E) * 0.5 + P2024(E*) * 0.3 + P2024(E**) * 0.7
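
A minimal sketch of taking that expectation. The 0.5/0.3/0.7 credences mirror the formula above; the weights on each future self are made-up numbers for illustration:

```python
# Hypothetical weights: current probability of ending up as the future
# self holding evidence E, E*, or E**.
p_future_self = {"E": 0.5, "E*": 0.3, "E**": 0.2}
# Credence each of those future selves would hold (from the formula above).
future_credence = {"E": 0.5, "E*": 0.3, "E**": 0.7}

# Defer to each possible future self, weighted by how likely it is.
cr2024 = sum(p_future_self[e] * future_credence[e] for e in p_future_self)
print(round(cr2024, 2))  # 0.48

# Deferring only to the likeliest future self (E) would instead give 0.5,
# discarding the information carried by the other branches.
```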

Comment by GuySrinivasan on D&D.Sci(-fi): Colonizing the SuperHyperSphere [Evaluation and Ruleset] · 2024-01-23T21:23:13.562Z · LW · GW

Oh, editing is a good idea. In any case, I have learned from this mistake in creating synthetic data as if I had made it myself. <3

Comment by GuySrinivasan on D&D.Sci(-fi): Colonizing the SuperHyperSphere [Evaluation and Ruleset] · 2024-01-23T04:06:12.805Z · LW · GW

I began by looking at what the coordinates must mean and what the selection bias implied about geography and (obviously) got hard stuck.

Comment by GuySrinivasan on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-16T05:50:11.102Z · LW · GW

It looks to me like the (spoilers for coordinates)

strange frequency distributions seen in the non-longitude coordinates are a lot like what you get from a normal distribution minus another normal distribution with a lower standard deviation, scaled down so that its max equals the first's max. I feel like I've seen this ... vibe, I guess, from curves, when I have said "this looks like a mixture of a normal distribution and something else" and then tried to subtract out the normal part.
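
For what it's worth, that "wide normal minus a peak-matched narrow normal" shape is easy to sketch; the sigmas here are illustrative guesses, not fitted to the actual data:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Wide normal minus a narrower normal scaled so their peaks match:
# the result is zero at the center with two symmetric humps to either side.
wide, narrow = 3.0, 1.0  # illustrative standard deviations
scale = normal_pdf(0, 0, wide) / normal_pdf(0, 0, narrow)

def residual(x):
    return normal_pdf(x, 0, wide) - scale * normal_pdf(x, 0, narrow)

print(abs(residual(0.0)) < 1e-12)  # True: the peaks cancel
print(residual(2.5) > 0)           # True: mass is pushed out to the shoulders
```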

Comment by GuySrinivasan on Will 2024 be very hot? Should we be worried? · 2024-01-06T19:10:50.016Z · LW · GW

Yeah, climate change has two pretty consistent trends: average heat slowly rising, and the variance of phenomena definitely increasing. More extremes on a variety of axes.

Comment by GuySrinivasan on $300 for the best sci-fi prompt: the results · 2024-01-03T19:18:58.963Z · LW · GW

End with something shocking and unexpected.

When I was trying to make this work well for actually writing a full story, I tried very hard to make ChatGPT not do this. To write anything longer than one output, you really don't want it to end every. single. thing. with a bang, and by default it really wants to.

Comment by GuySrinivasan on Will 2024 be very hot? Should we be worried? · 2023-12-29T20:35:17.644Z · LW · GW

Be honest: if, before you read this you were asked 'what was the worst thing about 1998', would you have said 'El Nino'?

 

The only thing I associate with the year 1998, when I was 15 years old and living in Florida, is the phrase "the fires of '98", referring to a particularly severe fire season, with memories of driving across interstate highways with limited visibility due to smoke.

I just Googled it and it has a Wikipedia page apparently: https://en.wikipedia.org/wiki/1998_Florida_wildfires

Comment by GuySrinivasan on Originality vs. Correctness · 2023-12-07T03:18:32.996Z · LW · GW

I feel like alkjash's characterization of "correctness" is just not at all what the material I read was pointing towards.

The Sequences’ emphasis on Bayes rule

Maybe I'm misremembering. But for me, the core Thing this part of the Sequences imparted was "intelligence, beliefs, information, etc - it's not arbitrary. It's lawful. It has structure. Here, take a look. Get a feel for what it means for those sorts of things to 'have structure, be lawful'. Bake it into your patterns of thought, that feeling."

If a bunch of people are instead taking away as the core Thing "you can do explicit calculations to update your beliefs" I would feel pretty sad about that, I think?

Comment by GuySrinivasan on Buy Nothing Day is a great idea with a terrible app— why has nobody built a killer app for crowdsourced 'effective communism' yet? · 2023-11-30T15:24:37.151Z · LW · GW

https://en.wikipedia.org/wiki/Buy_Nothing_Project

Our household gives and gets quite a bit from "bonk" (BNK (Buy Nothing Kirkland)), as we call it. Many people in my circles are in local Buy Nothing groups on Facebook. Not just in Washington. I think the reason "nobody has built a killer app" for Buy Nothing is because (a) Facebook groups serve the purpose well enough, and (b) getting a lot of people onto an app is always hard.

Comment by GuySrinivasan on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T04:10:30.299Z · LW · GW

Have you tried just getting feedback, rather than getting feedback specifically from high-status people?

Comment by GuySrinivasan on Spaciousness In Partner Dance: A Naturalism Demo · 2023-11-20T16:32:49.445Z · LW · GW

"Do you have any tips on how to hug better?" 

Yes, I do. 


Report:

~"Not that I'm complaining, but why the hug?"

"Two reasons. One, I wanted to hug you. Two, I read a thing from Logan that included tips on how to hug."

"Well it was a very good hug."

I used: making sure to "be present" plus attending to whether I am avoiding things because when her arthritis is flaring, they might cause pain, even though right now her arthritis is not flaring. Hugging is common, but something about this hug did cause her to ask why, on this hug, specifically, when ordinarily she does not ask why, 'cause it's just a hug. Maybe it was longer than normal or maybe it was a better hug than normal but she asked before I said anything about Logan Tips (TM).

Comment by GuySrinivasan on Social Dark Matter · 2023-11-17T17:32:08.571Z · LW · GW

I would not guess this. I would guess instead that the majority of the population has a few "symptoms". Probably we're in a moderate dimensional space, e.g. 12, and there is a large cluster of people near one end of all 12 spectrums (no/few symptoms), and another, smaller cluster near the other end of all 12 spectrums (many/severe symptoms) but even though we see those two clusters it's far more common to see "0% on 10, 20% on 1, 80% on 1" than "0% on all". See curse of dimensionality, probability concentrating in a shell around the individual dimension modes, etc.
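
A toy simulation of that picture. The dimension count comes from the comment; the per-dimension symptom probability is invented:

```python
import random

random.seed(0)

DIMS, P, N = 12, 0.2, 100_000  # 12 symptom dimensions; P is a made-up rate

# Number of dimensions on which each simulated person shows any symptom.
counts = [sum(random.random() < P for _ in range(DIMS)) for _ in range(N)]

none_at_all = sum(c == 0 for c in counts) / N
a_few = sum(1 <= c <= 4 for c in counts) / N

# Even though each dimension is usually "off", being off on all 12 at once
# is uncommon; most of the mass sits in a shell of people with a few symptoms.
print(none_at_all < a_few)  # True
```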

Comment by GuySrinivasan on Can a stupid person become intelligent? · 2023-11-08T23:44:16.299Z · LW · GW

i would hate pity answers like "not everyone needs to be smart"

 

the great majority of people who aren't "smart" also aren't "stupid"

and if you understood that without having to think about it much, I'm gonna guess you're one of the great majority

that wouldn't mean you're automatically "not stupid" enough to accomplish whatever you want to be "not stupid" enough to accomplish, of course, and trying to increase your cognitive capacity can still be good and helpful and etc, but if you are accidentally thinking "anyone scoring under about 108 on an IQ test is stupid", then managing to discard that bias might be helpful in its own right

Comment by GuySrinivasan on Lightning Talks · 2023-11-05T04:45:56.348Z · LW · GW

One of the most valuable things I've contributed to my workplace is the institution of a set of 3 lightning talks every two weeks. Our data science team is about 30 people, and we have a special Slack react that indicates "I want to hear about this in a lightning talk". Organizing is thus (usually) as easy as searching for all posts/comments that have the react but lack the "I've already processed this lightning talk request" react, DMing the relevant person, and slotting them into the queue.

I wonder if there's some mutation of this plan that would be valuable for LW. Maybe even to create Dialogues? The really valuable part of the tech is that anyone can look at a snippet that someone else wrote, realize they think they'd like to hear more on that and (thus) probably a lot of people would, and add it to an organizer's todo list with very little effort.

Comment by GuySrinivasan on Lying to chess players for alignment · 2023-10-26T07:50:43.590Z · LW · GW

I would participate. Likely as A, but I'm fine with B if there are people worse-enough. I'm 1100 on chess.com, playing occasional 10 minute games for fun. Tend to be available Th/Fr/Sa/Su evenings Pacific, fine with very long durations.

Comment by GuySrinivasan on AI presidents discuss AI alignment agendas · 2023-09-13T15:37:31.119Z · LW · GW

Yeah I don't know how much time any of these would take compared to what was already done. Like is this 20% more work, or 100% more, or 500% more?

But good point: I listened to about a quarter, upped the speed to 1.5x, and stopped after about a half. When I decided to write feedback, I also decided I should listen to the rest, and did, but would not have otherwise. And, oddly enough, I think I may have been more likely to listen to the whole thing if I didn't have visuals, because I would have played it while gardening or whatever. :D

Comment by GuySrinivasan on Sharing Information About Nonlinear · 2023-09-12T15:02:19.464Z · LW · GW

Did you previously know that

these things are quite common - if you just google for severance package standard terms, you'll find non-disparagement clauses in them

? I mean I agree(d, for a long time prior to any of all this) that these clauses are terrible for the ecosystem. But it feels like this should be like a vegan learning their associate eats meat and has just noticed that maybe that's problematic?

I think this is how your mind should have changed:

  1. large update that companies in general are antagonists on a personal level (if you didn't already know this)
  2. small update that Wave is bad to work with, insofar as it's a company, mostly screened off by other info you have about it
  3. very small update that Lincoln is bad to work with
    1. with a huge update that they are incredibly good to work with on this specific dimension if "does make me think about whether some changes should be made" results in changes way before the wider ecosystem implements them
  4. moderate update that Lincoln isn't actively prioritizing noticing and rooting out all bad epistemic practice, among the many things they could be prioritizing, when it goes against "common wisdom" and feels costly, which means if you know of other common wisdom things you think are bad, maybe they implement those

Comment by GuySrinivasan on AI presidents discuss AI alignment agendas · 2023-09-12T03:22:12.064Z · LW · GW

Things I think would have improved this a lot, for me:

  • a visual indicator of who was "speaking"; this could be as simple as a light gray box around the "speaker"
  • significantly larger "inflection" in the voice. More dynamic range. More variance in loudness and pitch. I don't know how easy or hard this is to tune with the tools used, but the voices all felt much flatter than my brain wanted them to sound
  • more visuals going on in general; a scrolling transcript on the right, maybe

Comment by GuySrinivasan on 45% to 55% vs. 90% to 100% · 2023-08-28T19:22:12.006Z · LW · GW

It depends.

Chance of a bet paying out? Value them the same.

Amount of information you gained, where you value transferring that learning to other questions, designs, etc? 90% --> 100% is way better.

In a domain where you know you have plenty of uncertainty? 90% --> 100% is a huge red flag that something just went very wrong. ;)
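
One generic way to quantify that information asymmetry (my illustration, not from the post) is the log-odds scale, where probabilities near certainty are infinitely far away:

```python
import math

def log_odds(p):
    # Log-odds in bits; each unit of evidence moves you a fixed distance here.
    return math.log2(p / (1 - p))

# 45% -> 55% covers about half a bit of evidence...
print(round(log_odds(0.55) - log_odds(0.45), 2))  # 0.58

# ...while 90% is already ~3.2 bits out, and 100% is at +infinity:
# no finite amount of evidence gets you there, which is why a clean
# jump to 100% under real uncertainty is a red flag.
print(round(log_odds(0.9), 2))  # 3.17
```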

Comment by GuySrinivasan on Meetup Tip: Board Games · 2023-08-18T19:39:37.905Z · LW · GW

(Note that there are people who do not enjoy board games. Actively do not enjoy. Dislike, even. This is fine - not every meetup appeals to every person. But also beware of treating these people as if they are just an ignorant shell around an inner person who would definitely enjoy board games if only they [x]. Some of them really are, some really aren't. Yes, even though "board games" is such a broad category. Yes, even though they seem to enjoy [other thing] which seems so similar. Etc.)

Comment by GuySrinivasan on Recommending Understand, a Game about Discerning the Rules · 2023-08-03T21:07:57.253Z · LW · GW

The newest versions come with ways to generate random rules. This brings the floor of the experience way up but also brings the ceiling down somewhat. "Oops, I guess the rule I made was terrible" was a big problem with the original version for newcomers.

Comment by GuySrinivasan on Work culture creep · 2023-08-03T18:49:44.781Z · LW · GW

I do my best to minimize switches from work to non-work "modes". When I am done with work for the day, I usually give myself a half hour to chill before switching to non-work.

I do not feel a need to talk about work. But some work anecdotes are still good for personal life, of course, and I do not censor them.

I actually feel... more intensely not like myself now, at work, than I used to, in some sense, because back in the major depression days I tried to feel as little as possible. Now I notice a lot more often when I'm doing things that "aren't me". So like previously I was closer to Gordon's mask description (in fact I described my fake-self as my "shell") and there was no active tension between shell-actions and identity, just passive drain from using the shell. Whereas now it feels a lot more like "I am always me, but compromise that in certain ways at work".

Comment by GuySrinivasan on Work culture creep · 2023-08-03T17:19:03.709Z · LW · GW

One of the most valuable things I have done, for myself, is to let as much of my personal life bleed into my work behaviors as I can, as you define them.

This could have backfired spectacularly. In some work cultures probably it would always backfire.

In mine, I:

  • make 98%+ of my writing viewable to everyone at the company, and we're remote, so almost everything of importance makes it into writing
  • never "try" to display an air of competency - trying to display an air of competency is one of the core behaviors that caused terrible feedback loops and major depression early in my career, now I take joy every time I can display to everyone where I am not competent. In some sense this is signaling extreme competency because who would do that unless they were very comfortable in their position. See also "backfire". But also this can lead to much more rapid professional competency growth, because other people love to teach you things.
  • tell jokes, embarrass myself a little, feel okay being silly or weird, literally treat it as a red flag about a person if I feel I need to walk on eggshells around them and bring it up with my manager even if I can't point to exactly why
  • push for exploratory "something seems interesting here but IDK what and no I can't tell you its value" work in general, and in specific do some of it myself whenever the mood strikes and nothing urgent is otherwise going on

Comment by GuySrinivasan on The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate · 2023-08-01T16:35:10.185Z · LW · GW

I am quite sure that in a world where friendly tool AIs were provably easy to build and everyone was gonna build them instead of something else and the idea even made sense, basically a world where we know we don't need to be concerned about x-risk, Yudkowsky would be far less "relaxed" about AI+power. In absolute terms maybe he's just as concerned as everyone else about AI+power, but that concern is swamped by an even larger concern.

Comment by GuySrinivasan on Lack of Social Grace Is an Epistemic Virtue · 2023-08-01T14:25:50.175Z · LW · GW

What convinced you that adversarial games between friends are more likely a priori? In my experience the vast majority of interactions between friends are cooperative, attempts at mutual benefit, etc. If a friend needs help, you do not say "how can I extract the most value from this", you say "let me help"*. Which I guess is what convinced me. And is also why I wrote "Maybe I'm bubbled though?" Is it really the case for you that you look upon people you think of as friends and say "ah, observe all the adversarial games"?

*Sure, over time, maybe you notice that you're helping more than being helped, and you can evaluate your friendship and decide what you value and set boundaries and things, but the thing going through your head at the time is not "am I gaining more social capital from this than the amount of whatever I lose from helping as opposed to what, otherwise, I would most want to do". Well, my head.

Comment by GuySrinivasan on Lack of Social Grace Is an Epistemic Virtue · 2023-07-31T21:55:14.949Z · LW · GW

No, that is a cooperative game that both participants are playing poorly.

Comment by GuySrinivasan on Lack of Social Grace Is an Epistemic Virtue · 2023-07-31T21:33:41.972Z · LW · GW

I believe the common case of mutual "where do you want to go?" is motivated by not wanting to feel like you're imposing, not some kind of adversarial game.

Maybe I'm bubbled though?

Comment by GuySrinivasan on Please speak unpredictably · 2023-07-24T05:39:33.265Z · LW · GW

Efficiency trades off with robustness.

If you, the listener/reader, fully understood what I tried to say, it is very very likely that you (specifically you) could have fully understood had I compressed my communication in some ways tailored to you.

Comment by GuySrinivasan on "Justice, Cherryl." · 2023-07-23T18:27:34.653Z · LW · GW

collaborative truth-seeking doesn't exist. The people claiming to be collaborative truth-seekers are lying

Certainly if I wanted to do some collaborative truth-seeking I would choose a partner who believed collaborative truth-seeking existed.

If I didn't think the possibility for collaborative truth-seeking with a particular individual existed, I would be very tempted to instead just sling gotchas at them.

Comment by GuySrinivasan on AI #20: Code Interpreter and Claude 2.0 for Everyone · 2023-07-13T15:53:34.260Z · LW · GW

I tried code interpreter on some of the D&D.Sci challenges here. As expected, it failed miserably at generating any useful insights. It also had some egregious logic errors, which I didn't, but should have, expected.

For example on https://www.lesswrong.com/posts/2uNeYiXMs4aQ2hfx9/d-and-d-sci-5e-return-of-the-league-of-defenders the dataset is three columns of green team comp, three of blue team comp, and a win/loss result. To get an idea of which picks win against the known opponent team, it grabbed all games with that team participating, found the games where the other team won, and did some stats on the other team's comp. Except no, instead, it forgot that it had grabbed games where green was that comp and where blue was that comp, so actually it checked for when blue won and did stats on all of those, aka half the "winning opponent teams" were just the original comp. Its analysis included "maybe just mirror them, seems to work quite well".
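
A stripped-down sketch of that bug, with invented comps standing in for the real dataset:

```python
# Hypothetical rows: (green comp, blue comp, did blue win?).
games = [
    ("ABC", "XYZ", True),   # XYZ (blue) beat ABC
    ("XYZ", "ABC", True),   # ABC (blue) beat XYZ
    ("ABC", "QRS", False),  # ABC (green) beat QRS
    ("QRS", "ABC", False),  # QRS (green) beat ABC
]
target = "ABC"  # the known opponent comp we want to beat

# Buggy logic (what the code interpreter did): filter to games the target
# played, then treat "blue won" as "the opponent won".
buggy = [blue for green, blue, won in games if target in (green, blue) and won]

# Correct logic: the winning opponent is whichever side is not the target.
correct = [blue for green, blue, won in games if green == target and won]
correct += [green for green, blue, won in games if blue == target and not won]

print(sorted(buggy))    # ['ABC', 'XYZ'] -- half the "winners" are the target itself
print(sorted(correct))  # ['QRS', 'XYZ']
```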

Comment by GuySrinivasan on Why it's necessary to shoot yourself in the foot · 2023-07-11T23:06:26.734Z · LW · GW

https://blog.mrmeyer.com/2015/if-math-is-the-aspirin-then-how-do-you-create-the-headache/

Here is the most satisfying question I’ve asked about great lessons in the last year. It has led to some bonkers experiences with students and I want more.

  • “If [x] is aspirin, then how do I create the headache?”

I’d like you to think of yourself for a moment not as a teacher or as an explainer or a caregiver though you are doubtlessly all of those things. Think of yourself as someone who sells aspirin. And realize that the best customer for your aspirin is someone who is in pain. Not a lot of pain. Not a migraine. Just a little.

Piaget called that pain “disequilibrium.” Neo-Piagetians call it “cognitive conflict.” Guershon Harel calls it “intellectual need.” I’m calling it a headache. I’m obviously not originating this idea but I’d like to advance it some more.

One of the worst things you can do is force people who don’t feel pain to take your aspirin. They may oblige you if you have some particular kind of authority in their lives but that aspirin will feel pointless. It’ll undermine their respect for medicine in general.

Comment by GuySrinivasan on GPT-7: The Tale of the Big Computer (An Experimental Story) · 2023-07-10T20:49:04.141Z · LW · GW

This story was co-written with GPT-4

Halfway through the first paragraph, I said, out loud, "this was written by ChatGPT". Do you know which bits of the first paragraph were by you vs auto-generated?

Comment by GuySrinivasan on Lessons On How To Get Things Right On The First Try · 2023-07-10T18:00:39.492Z · LW · GW

Also, an extremely important lesson to learn is that toy problems are actually useful: it's actually useful to try to solve them, their design is sometimes difficult, a well-designed toy problem often works better than it seems from a surface reading, and continually trying to "subvert the rules" and find "out of the box solutions" does not end up getting you the value that the toy problem designer was aiming to give you.

Comment by GuySrinivasan on The Seeker’s Game – Vignettes from the Bay · 2023-07-10T14:51:38.654Z · LW · GW

Thinking and coming to good ideas is one thing.

Communicating a good idea is another thing.

Communicating how you came to an idea you think is good is a third thing.

All three are great, none of them are lying, and skipping the "communicating a good idea" one in hopes that you'll get it for free when you communicate how you came to the idea is worse (but easier!) than also, separately, figuring out how to communicate the good idea.

(Here "communicate" refers to whatever gets the idea from your head into someone else's, and, for instance, someone beginning to read a transcript of your roundabout thought patterns, bouncing off, and never having the idea cohere in their own heads counts as a failure to communicate.)

Comment by GuySrinivasan on Getting Your Eyes On · 2023-07-03T18:38:26.240Z · LW · GW

https://www.lesswrong.com/s/zLib3j2Fdnnx3aP3F/p/7oAENKMsud2qQBXDj

Comment by GuySrinivasan on Lessons On How To Get Things Right On The First Try · 2023-06-20T05:27:06.765Z · LW · GW

FWIW, when I read

  • You can even do partial runs, e.g. roll the ball down the ramp and stop it at the bottom, or throw the ball through the air.
  • But you only get one full end-to-end run, and anything too close to an end-to-end run is discouraged.

I heard "you can roll the ball down the ramp and stop it at the bottom, but we will discourage it and look at you sideways and you will get less metaphorical points if you do".

Comment by GuySrinivasan on Tutor-GPT & Pedagogical Reasoning · 2023-06-07T18:11:38.506Z · LW · GW

Discord link invites have expired. Edit: looks like https://discord.gg/bloombotai is the permanent invite.

Comment by GuySrinivasan on Ages Survey: Results · 2023-06-05T20:55:26.504Z · LW · GW

Q6: Spend fifteen minutes home alone

 

5.6 to 9.5?? Average 7.4??? These numbers seem ludicrously high.

Comment by GuySrinivasan on $300 for the best sci-fi prompt · 2023-05-17T15:57:06.959Z · LW · GW

[SUMMARY] = GuySrinivasan, a user of Lesswrong website, is shocked by an AI-generated story about a woman whose head suddenly became a tomato fruit. He advises the author to try to get a prompt which writes a great story given a REASONABLE premise instead

===

Chapter 1: Synthesis

In the cool, sterile light of his home office, Guy Srinivasan's eyes skimmed over lines of code on his dual monitors. He was a software engineer by trade, a regular contributor to the Lesswrong website by passion, and an amateur botanist by curiosity. His world was a delicate balance between the rigid logic of artificial intelligence and the unpredictable beauty of organic life.

The house around him was silent save for the soft hum of his computers and the distant rhythmic pulse of the hydroponics system in the adjoining room. The walls, a tasteful silver-grey, were adorned with framed infographics of plant biology and neural networks, an odd juxtaposition that Guy found oddly comforting. From the large window beside him, he could see the cerulean glow of the city's solar grid, flickering like a neural network against the backdrop of the night.

His fingers moved deftly across the keyboard, crafting intricate algorithms, the language of his vocation. The screen reflected on his spectacles, the code lines dancing like cryptic symbols in his eyes. He paused, leaning back to consider his work, a new AI model designed to generate narratives based on user prompts.

A soft ping echoed through the room, a message notification from the Lesswrong website. A user, TerraNova, had posted a new story generated by an AI model. The premise was absurd - a woman whose head turned into a tomato. He shook his head, a smile tugging at the corners of his mouth. He remembered a time when the AI's prompts had been... less adventurous.

Tapping a quick reply, he offered a suggestion for a more reasonable prompt. But as his fingers moved, his gaze drifted towards the hydroponics room, where an array of tomato plants swayed gently, their LED lights painting them in hues of indigo and magenta.

The thought lingered as he returned to his work. His model, he mused, was designed to find the harmony between the bizarre and the reasonable, to generate stories that were both imaginative and grounded. But what if the true challenge lay not in the premise but in the execution?

His fingers stilled, and for a moment, his eyes lingered on the lines of code before him. He looked back at the tomato plants, their leaves rustling softly in the artificial breeze. The absurdity of the premise, he realized, was not an obstacle, but an invitation to explore the boundaries of reality in a world bound by code and chlorophyll.

His day concluded with an appreciative sigh, the soft clacking of the keyboard echoing through the room. He saved his work, the lines of code, still incomplete, a testament to the story yet untold. As he shut down his computer, he couldn't help but wonder. What would his model make of a woman whose head turned into a tomato? He smiled to himself, the question lingering as an echo of the day's labor and a promise of tomorrow's exploration.

And so, Guy Srinivasan, software engineer, Lesswrong contributor, and botanist enthusiast, closed his eyes, a single question dancing in the labyrinth of his thoughts, the seed of a story yet untold in the quiet night.

Comment by GuySrinivasan on $300 for the best sci-fi prompt · 2023-05-17T15:56:22.680Z · LW · GW

[SUMMARY] = a woman's head suddenly became a tomato fruit

===

Chapter 1: A Regular Day at the Biomechanics Lab

The sun had barely risen, its light filtering through the thin aluminum blinds of Dr. Seraphina Moreno's apartment. A mechanical whirring filled the air, her coffee machine autonomously preparing her morning dose of caffeine. She loved the smell, rich and earthy, permeating through her tiny kitchen while she hastily pulled on her lab coat.

Seraphina worked in the cutting-edge SynthBio Labs, nestled within the heart of Seattle's bustling technopolis. Her days were a heady mix of biomedical engineering and advanced genomics, creating hybrid organisms that had the potential to revolutionize medicine, agriculture, and more. But for her, it was a world teetering on the edge of the miraculous and the terrifying, where every discovery held the potential for unimaginable benefits or catastrophic consequences.

At SynthBio, Seraphina was responsible for the morphogenesis protocols, fine-tuning the genetic blueprint of each organism to give it the desired form. Today, she was working on a tomato plant, attempting to increase its size without compromising its inherent structure.

In her lab, the faint hum of the genetic sequencers and the sterile smell of disinfectant welcomed her. She settled into her work, the soft glow of the holographic display illuminating her face as she navigated the complex genomic map of the tomato.

As she worked, her colleague, Dr. Chen, peered over her shoulder. "Moreno, are you still trying to supersize that tomato?" His voice was tinged with skepticism. The challenges of scale in bioengineering were well-known, and Seraphina's project had been deemed a fool's errand by many.

She smiled, the corners of her eyes crinkling. "The potential benefits are worth it, Chen. Imagine the impact on world hunger if we succeed."

He grunted noncommittally, returning to his own work. They operated in different spheres of expertise: Seraphina, the artist of form and structure, and Chen, the master of functionality and adaptation. It was a dynamic that had sparked many a spirited discussion over the years.

Hours passed in a flurry of data analysis, gene manipulation, and computational modeling. Yet, amidst the calculations and hypotheses, there was an underlying reality she couldn't escape—the tomato project was proving more complex than she had anticipated. A subtle tweak in one area would trigger a cascade of unforeseen changes in another. Despite her expertise, the project was on the brink of becoming unmanageable.

By the time Seraphina looked up from her work, the lab was dim, the only illumination coming from the soft, pulsating light of the sequencers. Chen had left long ago, leaving her alone with her thoughts and her incomplete genetic model. She stared at the holographic tomato suspended before her, its complex network of genes and proteins glowing with an ethereal light.

Sighing, she saved her work and shut down the system. As she shrugged off her lab coat and prepared to leave, she glanced one last time at her project, her gaze lingering on the floating image of the tomato. It was a challenge, no doubt, but one she was determined to overcome. Tomorrow, she would return, ready to tackle the intricate puzzle of the tomato's morphogenesis once again.

As she stepped out of the lab, the city's neon lights danced in the darkness, the technopolis alive and pulsating. The challenges of her work lay heavy on her mind, but she couldn't help but feel a sense of exhilaration. After all, she was at the forefront of a new scientific frontier, where the lines between biology and technology blurred.

She was shaping the future—one gene at a time.

Comment by GuySrinivasan on $300 for the best sci-fi prompt · 2023-05-17T15:55:47.503Z · LW · GW

[SUMMARY] = after the invention of extremely resilient construction materials, humanity has spread to the ocean floor. We are now facing global food crises as the oceans' ecosystems are disrupted beyond apparent recovery.

===

Chapter 1: The Depths Below

Dr. Aiden Gallagher pressed his forehead against the cold glass of his submersible, eyes darting over the illuminated expanse of the ocean floor. The abyssal plains of the Atlantic spread out before him, an alien landscape encased in perpetual darkness, punctuated by the occasional glow of a bio-luminescent creature or the distant twinkle of a neighboring underwater city. His city, New Atlantis, hung suspended in the depths, a lattice of gleaming domes and dark spires that defied the crushing pressure of the ocean.

He exhaled slowly, watching as the moisture from his breath fogged up the glass. The submersible was silent except for the hum of the engine and the gentle hiss of the oxygen recycler. A lifetime of marine biology hadn't prepared him for the desolation that met his gaze now. It was the weight of the world, pressing in from all sides.

He was shaken from his thoughts by the crackle of his comms unit. "Control to Gallagher," the voice said. "You're drifting, doctor. Please confirm status."

"I'm here," Aiden replied, adjusting his course. His work was crucial in understanding the ecological shifts occurring in the ocean due to humanity's migration underwater. The food web was a delicate tapestry, and it was beginning to unravel at the edges.

Hours later, he docked his submersible at the New Atlantis research station. The large, dome-shaped facility was a hive of activity, people in lab coats scurrying about, screens blinking with data, the low murmur of serious conversation filling the air. Aiden's home, his world.

He was met by Dr. Rina Patel, his long-time colleague and friend. Her dark eyes were serious, lined with worry. "Another barren sector?" she asked, no preamble.

Aiden simply nodded. The implications were clear.

That evening, Aiden sat at his desk, poring over the day's findings. The silence of his office was a stark contrast to the busy station outside. His fingers traced the topography of the sea floor on his holographic display, the barren areas growing with each passing day. It wasn't just about the fish, the coral, or the microscopic phytoplankton – it was about humanity. If the oceans died, so would they. The stakes were as high as they could get.

He stayed there for a long time, staring at the hologram, lost in thought. Then he leaned back in his chair, rubbing his tired eyes. The enormity of the problem was overwhelming, yet he could not afford to look away. This was his world, his responsibility. He made a silent promise to the cold, dark ocean depths outside his window. He would find a way. He would fix this.

Finally, he shut down his display and left his office, the station's lights dimming as the artificial night cycle began. As he walked the silent corridors, his mind teemed with ideas and unanswered questions. A single day had passed in New Atlantis, but for Dr. Aiden Gallagher, it felt like a lifetime. He knew he was standing on the precipice of something immense. The ocean had always been a mystery, a siren's song luring him into the depths. Now, it was a challenge. And he was ready to dive headfirst into the unknown.

Comment by GuySrinivasan on $300 for the best sci-fi prompt · 2023-05-17T15:54:47.858Z · LW · GW

In this top-level comment I will give my prompt. In each of three comments replying to this one I will give the one-shot results of the prompt on three premises. If I update this comment with a new prompt I will also update the replies with new results. Feel free to crib. I began with the OP's sample prompt, used a reasonable premise for the story, revised the prompt to avoid the most egregious writing and stylistic mistakes, and inserted a bit in the last paragraph I stole from Christopher King.

PROMPT (replace [SUMMARY] with a summary of the novel):

You are writing a hard science fiction novel. A summary of the entire novel is that [SUMMARY]. Amidst this backdrop our characters' stories are spun.

In the opening chapter of your hard science fiction novel, set the stage by actively showcasing the protagonist's world and their role within it. Weave the protagonist's expertise and experiences into the narrative and subtly imply their part in a larger system through their interactions and tasks. Avoid urgency; the story is only beginning and the exposition is not deep enough to support organic urgency in the first chapter.

Give your setting a vivid description, allowing the reader to envision the unique aspects of your world and the challenges it faces. Convey these challenges through the actions, dialogues, and experiences of the characters, including how different stakeholders handle the situation.

Touch on the events going on in the protagonist's life which make this character the lens through which we will explore the world and plot. If an issue connected to the plot is shown through the chapter's events, don't reveal its full implications. Let the readers mull over its meaning as they progress through the story.

Conclude the chapter without any material change in tone. The protagonist should finish something, like their day, or a report, or a discussion with another character. The reader should leave the chapter engaged and curious, wondering about the science fictional vista just explored.

Ensure that every scene contributes to the overall narrative and character development. Maintain a pace suitable for a longer work, allowing the plot to unfold gradually. Avoid divulging all the plot or world details in this initial chapter; instead, plant seeds of information that will germinate later in the story.

Foster an environment of intellectual challenge and curiosity. Engage the reader with complex characters, nuanced humor and wit, and profound, unexpected ideas. Your story should delve into its themes subtly, presenting a world grappling with significant challenges without resorting to melodrama or over-emotion.

Remember, you're writing hard science fiction. Equally important as character development is the detailed world-building and intricate plot. Every scene, whether high-stakes or quiet, should contribute to the overall narrative and character development. Aim to create an original, creative, and captivating story that keeps readers intellectually stimulated.

Writing Advice:

Maintain subtlety and restraint throughout your narrative. Communicate the story's stakes implicitly rather than explicitly, and avoid melodrama or overstatements that could disrupt the tone or force the plot.

Depict the protagonist's reactions or emotions by showing their internal processes without resorting to overt dramatization. Let the reader infer the situation's magnitude through the character's actions, decisions, and behavior changes.

Trust your reader as an active participant in your story. They are intelligent enough to pick up on clues, make connections, and recognize the events' significance without explicit cues.

Ensure the stakes are grounded in your world's reality and the experiences of your characters. Let the situation's severity be understood through its impact on the world and the characters' lives, and through the complexity of the challenges they face.

In hard science fiction, realism and believability are key. The reader should feel the tension and urgency because they understand the intricacies of the situation and empathize with the characters' struggles. Drama should arise naturally from the plot and character dynamics, not from overt expressions.

More Writing Advice:

- Let your narrative breathe. Don't rush to escalate the stakes or dramatize the events. Allow the tension to build gradually and organically.
- Hint at the situation's severity subtly through your characters' reactions, thoughts, and decisions.
- Trust your reader's intelligence and their ability to understand the story's implications and complexities.
- Keep the stakes grounded and relatable. The reader should feel the urgency and tension because they understand and empathize with the characters' struggles.
- Conclude the chapter gently, grounded in current events, from a third-person limited perspective.
- In hard science fiction, the extraordinary circumstances should be rooted in the reality of your world and the lives of your characters.

Choose a distinct writing style and stick with it. Humans reading the chapter should not suspect that it was written by an LLM. Only write the novel excerpt; do not describe what was written or tie it back to the writing advice or these constraints.
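The "replace [SUMMARY] with a summary of the novel" step is plain string templating. A minimal sketch (the full prompt text is abbreviated here, and the eventual LLM API call is omitted, since those details depend on whichever client you use):

```python
# Hypothetical sketch of the [SUMMARY] substitution described above.
# Only the opening sentence of the prompt is shown; the mechanism is the point.

PROMPT_TEMPLATE = (
    "You are writing a hard science fiction novel. "
    "A summary of the entire novel is that [SUMMARY]. "
    "Amidst this backdrop our characters' stories are spun."
)

def build_prompt(summary: str) -> str:
    """Fill the [SUMMARY] placeholder with a one-sentence premise."""
    return PROMPT_TEMPLATE.replace("[SUMMARY]", summary)

premise = (
    "after the invention of extremely resilient construction materials, "
    "humanity has spread to the ocean floor"
)
prompt = build_prompt(premise)
print(prompt)
```

The filled-in prompt would then be sent as a single user message to the model for a one-shot chapter, as in the candidate results above.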

Comment by GuySrinivasan on $300 for the best sci-fi prompt · 2023-05-17T15:47:45.978Z · LW · GW

> Moreover, the ridiculous ones are unusually useful for testing the prompt, as they demonstrate the true strength of the GPT's imagination.

Pretty sure writers care way more about craft than "imagination". Use a reasonable premise to figure out what causes the LLM to spit out good writing, then throw in the unreasonable premise to see what happens, if you like. Anyway that's theorycrafting. I'll post a candidate prompt and one-shot results for my reasonable premise, the tomato premise, and the meta-absurdity premise.

Edit: https://www.lesswrong.com/posts/goq3HfxZaQcomuH6d/usd300-for-the-best-sci-fi-prompt?commentId=nG99eewTtxrPM2sEk

Comment by GuySrinivasan on $300 for the best sci-fi prompt · 2023-05-17T15:00:02.319Z · LW · GW

I'd advise trying to get a prompt which writes a great story given a reasonable premise, rather than trying to showcase "haha look even with this ridiculous prompt the LLM can spit out something that's not ridiculous!" Show something obviously useful, not a toy so impressive that surely it could be made into something useful.