Web Performance
A battle against the laws of physics
Perceived Performance
- < 100ms - Instant
- < 1 sec - Keep attention
- 1-10 sec - WTF?
- > 10 sec - Rage Quit
http://blog.teamtreehouse.com/perceived-performances
Basically we want to be sub-100ms
Backend Performance
Dreams of an old school rails dev
Hypothesis
- Static rendering preferable to SPAs
- Write simple and testable code in our favourite languages
- SSR is fast since there's less asset loading / fewer API calls / no spinners
- Leverage "turbolinks" to hide page/asset reload cycle
- Render as fast as possible (< 100ms)
Also inspired by Honeybadger who went to SPA and back again...
So how fast can we render?
Rails performance in general
- Middleware stack ~20ms
- Fetching small indexed documents from the database ~2ms
- Rendering an erb template ~10-100ms
- Can add up pretty fast (see the rough budget below)
- People solve this with caching (hard)
- Not the best concurrency scenario (unless JRuby)
- Can we do better?
Note that the middleware stack's time doesn't show up in the Rails request logs.
Lots of dynamic content and helpers (or even database calls) in views will slow them down.
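To see how quickly that adds up, a back-of-envelope using the ballpark figures above (a sketch, not measurements; the number of queries is an assumption):

```typescript
// Back-of-envelope request budget using the ballpark numbers above.
// All figures are rough assumptions for illustration, not measurements.
const middlewareMs = 20;      // Rails middleware stack
const dbMs = 3 * 2;           // e.g. three small indexed lookups at ~2ms each
const templateMsBest = 10;    // a lean erb template
const templateMsWorst = 100;  // a helper- and query-heavy erb template

console.log(`best:  ~${middlewareMs + dbMs + templateMsBest}ms`);   // ~36ms
console.log(`worst: ~${middlewareMs + dbMs + templateMsWorst}ms`);  // ~126ms, budget blown
```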
Rails has always put happiness and productivity over performance.
Possible to get an order of magnitude better performance in compiled languages.
Express, like Ruby, needs cluster to use multiple cores efficiently (sketch below).
Mention that Startram had almost Phoenix-level performance using only one core.
https://github.com/mroth/phoenix-showdown
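A minimal sketch of the usual cluster pattern - one worker process per core (port, route and restart handling here are just placeholders):

```typescript
// Minimal sketch: fork one Express worker per CPU core with Node's cluster module.
import cluster from "node:cluster";
import { cpus } from "node:os";
import express from "express";

if (cluster.isPrimary) {
  // The primary process only forks workers; it serves no traffic itself.
  for (let i = 0; i < cpus().length; i++) cluster.fork();
  cluster.on("exit", () => cluster.fork()); // respawn a worker if one dies
} else {
  const app = express();
  app.get("/", (_req, res) => res.send(`hello from worker ${process.pid}`));
  app.listen(3000); // workers share the port via the primary process
}
```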
Looking good
- Phoenix (Elixir) does most of what Rails does at 10x the performance
- Startram (Crystal) has the potential to do it at 50x the performance
- Sub 100ms is easy, why don't we go for sub 1ms?
- Let's put it in the cloud and rejoice!
With all the caching done right we render in about 6ms, so what the hell is this?
Teh Internets
No hyperdrive
Stuck at the speed of light
Speed of Light, in theory
- 299,792 km per second
- Stockholm -> N. Virginia distance: 7,000 km
- Should take ~23ms
- Stockholm -> Dublin distance: 2,000 km
- Should take ~6ms
Doesn't look so bad... (quick check below)
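Those numbers are just distance divided by c; a quick check (rough straight-line distances, one-way times, so double them for a round trip):

```typescript
// One-way travel time at the speed of light in a vacuum, ignoring routing entirely.
const KM_PER_MS = 299_792 / 1000; // ~300 km per millisecond

const oneWayMs = (km: number) => km / KM_PER_MS;

console.log(oneWayMs(7000).toFixed(1)); // Stockholm -> N. Virginia: ~23.3ms one way
console.log(oneWayMs(2000).toFixed(1)); // Stockholm -> Dublin: ~6.7ms one way
```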
Speed of Light, in reality
- Zig zagging infrastructure
- Routers add latency
- Light moves 30% slower in glass
- Pinballing through the fiber
- About twice as slow
Routers - like going through customs
http://royal.pingdom.com/2007/06/01/theoretical-vs-real-world-speed-limit-of-ping/
Speed of Light, in practice
- Ping Dublin ~50ms
- Ping N. Virginia ~130ms
- So what's up with 750ms?
130ms not cool but what's up with 750ms?
http://www.cloudping.info/
1x RT - TCP handshake: you're not getting anything until the connection is up
2x RT - TLS encryption handshake
1x RT - the actual request and server response
Then wait for the content to download (back-of-envelope below)
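Stacking those round trips on top of the measured ping shows where the time goes - a back-of-envelope assuming a full TLS 1.2-style handshake and ignoring DNS, server render time and download:

```typescript
// Round trips before the first byte of HTML arrives over HTTPS.
// Assumes a full TLS 1.2-style handshake and no connection reuse.
const rttMs = 130;        // measured ping Stockholm -> N. Virginia
const tcpHandshake = 1;   // SYN / SYN-ACK / ACK
const tlsHandshake = 2;   // certificate exchange + key agreement
const httpRequest = 1;    // the GET itself and the response headers

const total = (tcpHandshake + tlsHandshake + httpRequest) * rttMs;
console.log(`~${total}ms on the network alone`); // ~520ms before DNS, rendering or download
```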
Speed of Light, on HTTPS
- 4 round trips for first response
- Similar handshakes for asset loading
- VERY slow first render
- Optimization 1: Browser will cache connections
- Optimization 2: HTTP/2 requires only one connection
- Still adds a ~140ms round trip for each request
Example of H2 via Cloudflare
When using a Cloudflare-enabled domain, HTTP/2 is used and only one connection is created.
Assets are then multiplexed over that single connection. About half the time to load assets.
Single Page Apps
Nothing will stand in our way
Tricking the speed of light
- Optimistic Updates (sketch below)
- Client side data caching
- Websockets
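Optimistic updates are the core trick: apply the change locally right away and reconcile with the server afterwards. A framework-agnostic sketch (the api.save call and rollback strategy are made up for illustration):

```typescript
// Optimistic update: change local state immediately, sync with the server afterwards,
// and roll back if the server rejects the change.
type Todo = { id: string; title: string; done: boolean };

async function toggleTodo(
  todo: Todo,
  api: { save(t: Todo): Promise<void> }, // hypothetical persistence call
  render: () => void                     // whatever redraws the UI
) {
  const previous = todo.done;
  todo.done = !todo.done; // 1. update local state instantly, no waiting on the network
  render();               // 2. the user sees the result in well under 100ms

  try {
    await api.save(todo); // 3. reconcile with the server in the background
  } catch {
    todo.done = previous; // 4. roll back on failure so the UI matches reality
    render();
  }
}
```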
Challenges
- Two way replication
- Stateful servers
- Persistent connections
If we never reload, how do we get fresh data?
If we need fresh data, servers need to know where we're at
Scaling persistent connections is very different from scaling polling.
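The usual answer to both questions is a persistent connection the server can push on. A bare-bones browser-side sketch (the wss://example.com/updates endpoint and message shape are assumptions):

```typescript
// Bare-bones live updates: subscribe once over a WebSocket and let the server
// push changes instead of the client polling or reloading.
const socket = new WebSocket("wss://example.com/updates"); // hypothetical endpoint

socket.addEventListener("open", () => {
  // Tell the server what we're looking at so it knows which changes concern us.
  socket.send(JSON.stringify({ subscribe: "todos" }));
});

socket.addEventListener("message", (event) => {
  const update = JSON.parse(event.data); // message shape is an assumption
  console.log("fresh data without a reload:", update);
});
```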
Solutions
- Meteor (the only mature complete package)
- Phoenix (websocket pub/sub, massive scalability)
- Relay (graphql with optimistic updates)
- Firebase (cloud store with live queries)
- Om (optimistic updates, coolest kid on the block)
- Dato (next-gen Meteor... in theory)
Scalability concerns
- Sticky sessions
- PaaS might limit number of persistent connections
- Live query computation is expensive (Meteor)
- JavaScript isn't multi-threaded and is hard to optimize (Node)
A PaaS like Heroku is known to handle about 6,000 connections per dyno.
The best is yet to come
- Doable but hard to be productive
- Use Meteor or roll your own (not trivial)
- Game style network programming scales differently
- Fast and concurrent backend more important than ever
- Isomorphic Clojure or Elixir might make for an optimal solution?
- It's not the dark side!