The need for speed in web frameworks?

post by Adam Zerner (adamzerner) · 2023-01-03T00:06:15.737Z · LW · GW · 2 comments

Contents

  Core features vs add-ons
  Minimalism, YAGNI and the wisdom of saying no
  Tradeoffs
  Valuing performance
  Impact on velocity
  Conclusion
2 comments

Recently I've been spending a lot of time investigating JavaScript web frameworks that use server-side rendering (SSR). I've noticed that there are a lot of features that are ultimately about improving performance.[1]

IMO, these all introduce complexity though. In which case, the question becomes: is it worth it?

Core features vs add-ons

Here is how I think about this. There are certain things that I see as core features that I'd want out of one of these web frameworks. The rest I see as add-ons. With that model, we can then (eventually) go through each add-on and ask if it's worth including.

Here are the core features:

The rest I see as add-ons. Partial hydration, prefetching, the rest of those bullet points in the first section of this post: those are all add-ons to me.

Unfortunately, I haven't been able to find a framework that offers me these core features without any add-ons. react-ssr is the closest thing I found, but it's old, unmaintained, and a little buggy, and after experiencing the speed and simplicity of Vite I just can't go back to Webpack. I spent some time trying to hack together my own version of react-ssr using Vite, but that effort was unsuccessful.
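To give a sense of the shape such a setup takes, here's a rough sketch: an Express server that runs Vite in middleware mode for dev-time transforms and renders React to a string on each request, roughly following Vite's SSR guide. The file names (index.html, src/entry-server.jsx) and the render() export are placeholders of my own choosing, not code from any particular framework.

  // server.js: sketch of dev-time SSR with Express + Vite in middleware mode.
  // File names and the render() export are placeholders.
  import fs from 'node:fs'
  import express from 'express'
  import { createServer as createViteServer } from 'vite'

  const app = express()

  // Run Vite in middleware mode so Express owns the HTTP server.
  const vite = await createViteServer({
    server: { middlewareMode: true },
    appType: 'custom',
  })
  app.use(vite.middlewares)

  app.use('*', async (req, res) => {
    try {
      // Load the HTML shell and let Vite inject its dev client / HMR hooks.
      let template = fs.readFileSync('index.html', 'utf-8')
      template = await vite.transformIndexHtml(req.originalUrl, template)

      // Load the server entry through Vite so JSX is transformed on the fly.
      // entry-server.jsx is assumed to export render(url), which calls
      // ReactDOMServer.renderToString on the app.
      const { render } = await vite.ssrLoadModule('/src/entry-server.jsx')
      const appHtml = await render(req.originalUrl)

      res.status(200)
        .set({ 'Content-Type': 'text/html' })
        .end(template.replace('<!--ssr-outlet-->', appHtml))
    } catch (e) {
      vite.ssrFixStacktrace(e)
      res.status(500).end(e.message)
    }
  })

  app.listen(3000)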

Minimalism, YAGNI and the wisdom of saying no

Speaking broadly, I align pretty strongly with all three of those things mentioned in the heading. However, I'm not dogmatic about them. At least I try not to be. I see them all as heuristics and starting points.

I think I make my best decisions when I use them as my default: when the burden of proof is on showing that extra "stuff" is worth it. "No" until proven "yes".

Tradeoffs

Of course, the performance improvements these add-ons provide are still nice! Some people will argue that this means they're worthwhile. After all, we should give users the best experience possible, right?

Well, no, we shouldn't. I mean yes, of course we should. But... er... let me start over.

There are tradeoffs at play. To keep things simple, consider a hypothetical. There are three tasks that you can assign to a developer this sprint:

  1. Improve the page load speed by 50ms.
  2. Fix a usability problem that was identified with one of the form fields.
  3. Implement a new feature that allows people to use markdown in the textarea.

It would be nice if you could do all three of them, but as anyone who's ever seen a backlog before understands, you can't. We are forced to prioritize. Picking one means not picking the others.

It's the pigeonhole principle[2]. There are fewer slots available than there are pigeons, so you have to let some of them fly away. And when you choose to keep the performance pigeon, it's the usability and feature-development pigeons who are forced to migrate.

Bringing this back to web frameworks: including performance-improving add-ons comes at the cost of complexity. That added complexity makes development take longer (it decreases velocity), which means fewer features and usability improvements than there would be otherwise. So the questions we have to ask ourselves are 1) how much do we value the performance improvements and 2) how much do they decrease velocity?

Valuing performance

Response Times: The 3 Important Limits has some good information on how much performance matters.

Summary: There are 3 main time limits (which are determined by human perceptual abilities) to keep in mind when optimizing web and application performance.

The basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]:

  • 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
  • 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
  • 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.

Check out the article and the corresponding video for more information. The takeaway I get from this is the one-second threshold. If you navigate to a new page and it takes longer than a second, your flow of thought gets interrupted. That matches my experience on the internet: when a page takes 3-4 seconds to load, I get slightly impatient and feel that my flow has been interrupted.
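To make that concrete, here's a little sketch of one way to apply the 0.1s / 1s guidance to loading feedback: fast responses never flash a spinner, and a spinner only appears once the wait has crossed the threshold where flow is already interrupted. The #spinner element and helper names are made up for illustration.

  // Sketch: only surface loading feedback once a wait crosses ~1 second.
  // The #spinner element and helper names are made up for illustration.
  async function fetchWithDelayedSpinner(url, { spinnerAfterMs = 1000 } = {}) {
    const timer = setTimeout(showSpinner, spinnerAfterMs)
    try {
      const res = await fetch(url)
      return await res.json()
    } finally {
      clearTimeout(timer) // fast responses never show a spinner at all
      hideSpinner()
    }
  }

  function showSpinner() { document.querySelector('#spinner')?.removeAttribute('hidden') }
  function hideSpinner() { document.querySelector('#spinner')?.setAttribute('hidden', '') }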

But sometimes that is ok. For example, if I am paying my credit card, I don't mind waiting. Or if I'm updating some settings in GitHub. Or writing a blog post. Or buying something online.

Actually, buying something online is an interesting one. And illuminating. If I have a particular brand of dark soy sauce that I want to purchase, I'll go to, e.g., Amazon, type it in the search bar, add it to my cart, and purchase it. If any of those steps takes a few seconds, I don't care. I'm there to do a job.

However, if I'm not sure what brand of dark soy sauce I want — or even what I'm shopping for in the first place — then I'll probably get a little impatient if the pages take a while to load. I'll probably spend less time hanging out and shopping.

It's the same thing with a physical store. If you were at, let's say, Target, browsing around, and it was mildly unpleasant for some reason (maybe it was dirty and smelled bad), you'd probably decide not to hang out as long. But if you went to the store specifically to get more napkins, you probably wouldn't mind. You're not going to turn around and leave without your napkins because it smells a little funny. Especially if the quality and/or price is better than the competitors'.

From what I can tell, there are a lot of data points indicating that performance improvements lead to increases in revenue. WPO stats has a lot of examples, NN Group says so, Jeff Atwood says so. You have to be careful in how you interpret these data points though. There's a big risk of selection bias: maybe only the cases where performance improvements mattered get reported, while the cases where they didn't really matter go unreported.

It's hard to say in general when it matters and when it doesn't. I spent a few hours digging into this and haven't found good research. I wish there were good research, though. If anyone knows of any, please send it my way. But in the absence of that information, my take is something like this:

When people are kinda just browsing the web, performance matters, because if load times are slow they'll get impatient and go somewhere else. But when people come to you for a specific reason, like to pay their credit card, they don't mind waiting a little. So for e-commerce, social media, checking sports scores, watching YouTube: performance matters in those situations. If the tweets keep loading slowly you'll switch over to your browser tab for Reddit. But for more functional and business-oriented tasks, it doesn't really matter. At least not as much.

Impact on velocity

Valuing the performance improvements was difficult. Unfortunately, I think that figuring out the impact on velocity is also pretty difficult to do.

For starters, none of the web frameworks are pitched as having this as a downside. They don't say "we help you build highly performant websites but this comes at the cost of complexity and velocity". Instead they kinda say the opposite: that they provide a great developer experience (DX). That you can have your cake and eat it too.

Hey, maybe that is possible! Ideally you would just make the add-ons opt-in. That way you have them if you want them and can ignore them if you don't. It doesn't really feel like that's the case to me though. Partially because of leaky abstractions, partially because you need to understand them well enough to not shoot yourself in the foot, partially because they change your mental models.

That's just speaking in the abstract though. In my experience, you really need to spend time with a given framework to get a feel for what the DX is and how fast you can move with it. And that takes time.

It's also personal. For example, some people love Ruby on Rails and claim it really makes them more productive. I'm the opposite. On the other hand, I love functional programming techniques. I think they make me more productive. But a lot of Rails enthusiasts I've worked with have felt the opposite. Certain things click well for some people but not others.

Another consideration is the upfront cost vs the variable cost. For example, Next has a fair amount of bells and whistles, IMO. It takes time to learn them. But once you learn them, maybe then things are just fine? I.e. your first project takes extra time, but your second doesn't, because you already know how the bells and whistles work. I think there is something to be said for this. The upfront cost will be larger than the variable cost. But at the same time, the variable cost won't be zero.

Ultimately I don't have a great sense for what the impact on velocity will be. If I had to guess, I'd fall back on my minimalism and "complexity is the enemy" heuristics and say that, qualitatively, at best the add-ons would be a "nuisance" and at worst a "notable impediment".

Conclusion

This has been a "thinking out loud" type of post, not one that offers confident opinions and battle tested pieces of advice.

It also leans more theoretical than practical. All of the popular, good web frameworks out there seem to include a bunch of add-ons. It's not like there are good frameworks out there that optimize for low complexity over performance and omit these add-ons. If there were, there'd be a practical decision to make about when to use that minimal framework and when to use the performance-oriented ones. But that's not the situation we face. Personally I plan on giving Next a shot. Hopefully the DX ends up being good.

I do feel pretty confident though that the minimal framework I'm describing is something that "deserves to exist", in the sense that it would be the right choice for a very non-trivial number of apps. But things don't really come to life because they "deserve" to. It takes a lot of resources to build a good web framework. Lots of smart-engineer-hours.

We see these smart-engineer-hours often coming from big companies like Google and Facebook. But Google and Facebook do this not only as a service to the community, but primarily (I assume) to meet their own business needs. And as mega-high scale companies, their needs are different from the rest of ours.

Still, there is precedent for large-scale open-source efforts succeeding. Vue is a good example in the world of JavaScript frameworks. And there is, I think, some sort of Invisible Hand that is powered by something other than profit. If there is some library that would be super, super useful to developers but that you'd have to offer for free, well, I think the Invisible Hand pushes towards that sort of thing being created. Not as hard as it pushes towards things that make money, but still somewhat hard. Instead of a $100 bill lying on the ground, it's more like 100 units of internet karma. Hopefully that will be a large enough force to bring to life my vision of a minimal, server-side-rendered web framework that uses JavaScript and full hydration.
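(For concreteness, by "full hydration" I mean the plain approach where the client entry re-attaches React to the entire server-rendered document in one pass, rather than selectively hydrating islands. Something like the sketch below, where the App component and #root element are placeholders.)

  // client entry: sketch of full hydration. React re-attaches to the whole
  // server-rendered document in one pass; no islands, no partial hydration.
  // The App component and #root element are placeholders.
  import React from 'react'
  import { hydrateRoot } from 'react-dom/client'
  import App from './App'

  hydrateRoot(document.getElementById('root'), <App />)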


  1. Don't treat this as comprehensive or authoritative. I skimmed through the docs of these frameworks but didn't read them thoroughly. ↩︎

  2. Technically the pigeonhole principle says something slightly different: if you fit n items into m slots, with n > m, then at least one slot must contain more than one item. I'm saying that if you have n items and m slots where m < n, then you won't be able to give every item its own slot. ↩︎

2 comments


comment by Brendan Long (korin43) · 2023-01-03T03:16:29.681Z · LW(p) · GW(p)

One issue with making features optional is that it's usually harder to write plugins/addons than core features (since you also need to design and maintain an interface for your plugin, and then constrain yourself to using it). In some cases this might be long-term beneficial (better encapsulation), but it's additional work.

The GNOME people used to talk about this a lot: the reason there are so few settings or plugins in GNOME is that they make applications much harder to write and test, so the developers strip out options in order to give the best experience for the cases people care about most.

There are also issues with plugin interface overhead, which normally isn't a huge problem but is a problem if the whole point of the plugin is to improve performance.

comment by Adam Zerner (adamzerner) · 2023-01-03T07:06:05.362Z · LW(p) · GW(p)

Good point, that makes sense as a consideration. It sounds like a surface area thing. Suppose you have plugins A, B and C. Now you have to make sure that things work with 1) just A, 2) just B, 3) just C, 4) A and B, 5) A and C, 6) B and C, 7) A, B and C, and 8) none. That's a larger surface area where things can potentially go wrong.