HTTP - The Next Generation presentation @ BreizhCamp 2016

On Github rluta / http-nextgen

HTTP - The Next Generation

@raphaelluta #BzhCmp

Internet, the final frontier.

These are the voyages of the HTTP protocol.

Its continuing mission: to explore strange new devices, to seek out new use cases and new networks,

to boldly go where no browser has gone before!

HTTP/1.1: The Original

Slow
Inefficient
Complex

Arguably the most successful application protocol ever

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=utf-8
    Date: Mon, 01 Dec 2014 16:34:12 GMT
    Server: apache
    Vary: Accept
    Cache-Control: public, max-age=7200
    Cache-Control: s-maxage=86400
    Expires: Mon, 01 Dec 2014 17:34:12 GMT
    ETag: W/"CEFE1C6A-B5DB-4E65-B965-F6356676FC57"
    Transfer-Encoding: chunked
    Content-Encoding: gzip
    Content-MD5: a76aad98ae2b51c35296a4ab222268db

8 RFCs to define the protocol (RFC 7230-7237)

Damn it, Jim! I'm a text protocol,

not a warp core!

Parser fun

GET    /    HTTP   /   1  .   1
Accept:  text/plain
                     ; q=         0.01
               ,,,,,                      ,,,,,,,
    ,,,,,,,  ,,     ,,                      ,,  ,,
    ,,,,,,,,,    ,    ,,                   ,,   ,,
           ,,   ,,,    ,,           ,,,,,,,,,   ,,
    ,,,,,,,,,    ,    ,,                   ,,   ,,
    ,,,,,,,  ,,     ,,                      ,,  ,,
               ,,,,,                      ,,,,,,,
    text/*          ; q=         1.00
Host:
    www.apache.org

Inefficient

GET /js/app.js HTTP/1.1
Host: www.breizhcamp.org
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept: */*
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36
Referer: http://www.breizhcamp.org/
Accept-Encoding: gzip, deflate, sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4

HTTP/1.1 200 OK
Server: GitHub.com
Content-Type: application/javascript; charset=utf-8
Last-Modified: Fri, 18 Mar 2016 16:36:19 GMT
Access-Control-Allow-Origin: *
Expires: Sat, 19 Mar 2016 09:17:09 GMT
Cache-Control: max-age=600
Content-Encoding: gzip
X-GitHub-Request-Id: B91F1131:38FF:51D3672:56ED16BD
Content-Length: 2480
Accept-Ranges: bytes
Date: Sat, 19 Mar 2016 16:00:32 GMT
Via: 1.1 varnish
Age: 51
Connection: keep-alive
X-Served-By: cache-fra1231-FRA
X-Cache: HIT
X-Cache-Hits: 1
X-Timer: S1458403232.597602,VS0,VE0
Vary: Accept-Encoding
X-Fastly-Request-ID: 4ffe3e91de4fbfdbb8a504fbe406735996cb685e
Request: 383 bytes
Response headers: 609 bytes (24% overhead)
Response body: 2480 bytes

Request - Response only

Synchronous

Limited parallelism (6 to 8 connections)

[Diagram: client opens connection, sends request, server responds, connection closed]

Current web stats

  • 100 requests / page
  • 40 TCP connections
  • 2,281 KB transferred / page

Server-sent events

aka EventSource

A text/event-stream document type

GET /events HTTP/1.1
Host: localhost:7001

HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8

data: Beam me up, Scotty !
retry: 500
data: Damn you, Scotty
data: Beam me up !
id: Scotty-1
event: failure
data: The engines are blowing up !
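The fields above can be interpreted with a few lines of code; a minimal sketch (simplified: real EventSource parsing is incremental and also handles ":" comment lines and CR/CRLF line endings):

```javascript
// Parse a complete text/event-stream body into events.
// Simplified sketch -- not the full algorithm from the HTML spec.
function parseEventStream(body) {
  const events = [];
  let data = [], type = 'message', id;
  for (const line of body.split('\n')) {
    if (line === '') {               // blank line: dispatch accumulated event
      if (data.length) events.push({ type, id, data: data.join('\n') });
      data = []; type = 'message';   // id persists (last event ID)
      continue;
    }
    const sep = line.indexOf(':');
    const field = sep === -1 ? line : line.slice(0, sep);
    const value = sep === -1 ? '' : line.slice(sep + 1).replace(/^ /, '');
    if (field === 'data') data.push(value);
    else if (field === 'event') type = value;
    else if (field === 'id') id = value;
    // "retry" would update the reconnection delay; ignored in this sketch
  }
  if (data.length) events.push({ type, id, data: data.join('\n') });
  return events;
}
```

Multiple `data:` lines in one event are joined with a newline, and `event:` only changes the type of the event being accumulated.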

Javascript API

var source = new EventSource("http://localhost:7001/stream");

source.addEventListener('twitter', function (evt) {
    var obj = JSON.parse(evt.data);
    $('#twitter').html(
        '<p><strong>' + obj.from + '</strong> ' + obj.message + '</p>'
    );
});

Use cases

Anything that suits the PubSub architecture

  • Tickers and notifications
  • Local storage management
  • Reactive UI or dashboards
  • etc...

Example Backend

// Vert.x HTTP server broadcasting SSE events to connected clients
def server = vertx.createHttpServer(), rm = new RouteMatcher()
def clients = []  // currently connected event-stream responses

// SSE endpoint: keep the response open and remember it for broadcasting
rm.get('/stream') { HttpServerRequest req ->
    req.response.putHeader('Content-Type','text/event-stream')
    req.response.putHeader('Access-Control-Allow-Origin','*')
    req.response.putHeader('Cache-Control','public, no-cache')
    req.response.chunked = true
    req.response.write('retry: 1000\nevent: hello\ndata: {"type":"hello"}\n\n')

    clients << req.response

    req.response.closeHandler { clients.remove(req.response) }
}

// Relay every event-bus message to all connected clients
vertx.eventBus.registerHandler('events') { Message msg ->
    def jsonBody = new JsonObject((Map)msg.body().data)
    def dataStr = "event: ${msg.body().type}\ndata: ${jsonBody.encode()}\n\n"

    clients.each { HttpServerResponse resp -> resp.write(dataStr) }
}

server.requestHandler(rm.asClosure()).listen(7001)
// Twitter4J streaming source: publish matching statuses on the event bus
def twitterFactory = new TwitterStreamFactory().getInstance()
def queries = ['http2']

final StatusListener statusListener = new StatusAdapter() {
    @Override
    public void onStatus(Status status) {
        vertx.eventBus.publish('events',[type:'twitter',
            data:['id':status.id,'from':status.user.name,
                  'message':status.text,'lang': status.lang
            ]
        ])
    }
}

def connectTwitterStream(twitter, listener, query) {
    twitter.cleanUp()
    twitter.clearListeners()

    twitter.addListener(listener)
    FilterQuery filterQuery = new FilterQuery().track(query as String[])
        .language(['fr','en'] as String[])
    twitter.filter(filterQuery)
}

connectTwitterStream(twitterFactory, statusListener, queries)

WebSockets

Bi-directional, low latency communication for anything

Setting up a Websocket

  • Custom protocol schemes ws:// and wss://
  • Negotiated using HTTP/1.1 Upgrade mechanism
  • Optional subprotocols for application framing
GET / HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: echo.websocket.org
Origin: http://www.websocket.org
Sec-WebSocket-Key: i9ri`AfOgSsKwUlmLjIkGA==
Sec-WebSocket-Version: 13
Sec-WebSocket-Protocol: chat
HTTP/1.1 101 Web Socket Protocol Handshake
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: Qz9Mp4/YtIjPcdpbvG8bs=
Sec-WebSocket-Protocol: chat

Sockets API overview

    var ws = new WebSocket("ws://localhost:9000")

    ws.addEventListener('open', function (evt) {
        console.log("Socket is connected")
    });

    ws.addEventListener('message', function (evt) {
        receiveMessage(evt.data);
    });

    ws.send('Beam me up, scotty !');

    ws.addEventListener('close', function (evt) {
        console.log("Socket is closed");
    });

Use cases

Anything server-sent events can do, and more

Compatibility matrix

  • Server-Sent Events: IE/Edge via polyfill; Android 4.4+
  • WebSockets: IE 10+; Android 4.4+
  • WebRTC: 44+
Excluding users that couldn't even do a basic HTTP transaction over the transport layer (6.37%, 10.15% and 7.91%, respectively), the success rates [for WebSockets] are:
HTTP (port 80)      67%
HTTP (port 61985)   86%
HTTPS (port 443)    95%
This results in overall success rates of 63%, 77% and 87%, respectively.

Adam Langley, Google, on the IETF TLS mailing list

I don't always deploy new protocols on Internet

but when I do, I do it over TLS

HTTP/2

Improving the Web performance

Goals

  • Faster HTTP
  • Simpler webperf
  • Better use of network
  • Easily deployable

How it works

  • Single TCP connection
  • Binary framing
  • HTTP/1.1 semantics
  • Multiplexed streams with priority
  • Header compression
  • Server push
HTTP/1.1
HTTP/2 binary stream
TLS (Optional)
TCP
IP Network

HTTP/2 Variants

h2
  • TLS 1.2
  • ALPN negotiation

h2c
  • Clear-text TCP
  • HTTP/1.1 Upgrade

Browser support

Binary Framing

POST /request HTTP/1.1
Host: localhost:9000
Accept: text/html, image/jpeg, */*
User-Agent: USS Stargazer (NCC-2893)
Content-Type: application/json

{"name":"Picard","role":"Captain"}
        
HEADERS
  :authority     localhost:9000
  :method        POST
  :path          /request
  :scheme        https
  user-agent     USS Stargazer (NCC-2893)
  content-type   application/json
DATA
  {"name":"Picard","role":"Captain"}

HTTP/2 Streams

HPACK: Header compression

  • Endpoints maintain an index table of headers for connection
  • Each header frame received updates the index state
  • Huffman coding may be used to further reduce bitsize
Repeated headers across multiple requests (like User-Agent, cookies, etc...) cost 1 byte after first transmission
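The real HPACK wire format is more involved (static table, integer prefix coding, Huffman strings), but the indexing idea behind that 1-byte figure can be sketched as a toy model; the byte costs below are illustrative, not the actual encoding:

```javascript
// Toy illustration of HPACK-style indexing (NOT the real wire format):
// both peers keep a table of headers seen on the connection; a repeated
// header is sent as a small index instead of the full name/value pair.
class ToyHeaderTable {
  constructor() { this.table = new Map(); this.next = 1; }
  // Returns an approximate on-the-wire cost in bytes for one header.
  encode(name, value) {
    const key = name + ': ' + value;
    if (this.table.has(key)) return 1;  // already indexed: ~1 byte
    this.table.set(key, this.next++);   // literal: both sides add it
    return key.length + 1;              // literal: full name+value bytes
  }
}

const ctx = new ToyHeaderTable();
const ua = ['user-agent', 'USS Stargazer (NCC-2893)'];
const first = ctx.encode(...ua);   // full literal on the first request
const repeat = ctx.encode(...ua);  // ~1 byte on every later request
console.log(first, repeat);        // 37 1
```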

Fascinating

Demo

HTTP/1.1
H2
H2+PUSH

Data Fun

Page load improvements

When simply activating SPDY or HTTP/2:

  • Speed increase between -15% and +300%
  • Typical gain around 20%

Some results on mobile networks

Towards a SPDY’ier Mobile Web?

h2o benchmarks

Apache 2.4.17 Scaling

~0 latency, 2k resource, 500k x GET, localhost, 4 clients, using ab + h2load

[Chart: requests/sec, up to ~50k] HTTP/1 baseline: 17600 requests/sec. HTTP/2 results, varying #connections/client (1 to 6) and #max requests/connection (1 to 96):

27500 vs. 17600 (56% gain)
31300 vs. 17600 (78% gain)
22900 vs. 17600 (30% gain)
35500 vs. 17600 (102% gain)
38900 vs. 17600 (121% gain)
40200 vs. 17600 (128% gain)
40600 vs. 17600 (131% gain)
41000 vs. 17600 (133% gain)

httpd+h2, Tales of Mystery and Imagination by Stefan Eissing

Make it so !

Deploying HTTP/2

CDN

  • Easiest
  • Scalable
  • Akamai
  • Cloudflare
  • MaxCDN

Standard

  • Familiar
  • Inclusive
  • Nginx
  • Apache
  • h2o

Applicative

  • Control
  • Performance
  • Netty, Jetty
  • Node
  • Go

HTTP/2 for devs

Can be used just as a better HTTP/1.1

Webperf adjustments may be necessary

Possible advanced usages:

  • Remote client cache control
  • Fine grained prioritization
  • Asynchronous messaging

Webperf rules

  • Improve network latency
  • Reduce number of bytes transmitted
  • Reduce number of requests
  • Prioritize resources
  • Increase parallelism

HTTP/2 on Java

API normalization

  • Client API in JDK 9 (JEP 110)
  • Server support in Servlet 4.0 (JSR 369)

Currently usable:

  • Netty 4.1
  • Jetty 9.3
  • Undertow
  • OkHttp (client)

JDK 8 is the minimum requirement

HTTP/2 on the JVM

JDK 8 SSLEngine doesn't implement ALPN extension (JEP 244)

Current options:

  • Use Openssl JNI bridge (aka the Netty way)
  • Override the JDK SSLEngine (aka the Jetty way)

If using an SSLEngine override:

    java -Xbootclasspath/p:_path_to_alpn_boot_jar ...
    

JDK 9 current release target: mid-2017

ORAAAAAAAAAAAACLE !

HTTP/2 Push strategies

  • Hard-coded at HTTP/2 server level
  • Manually coded at application level
  • Automatic association like Jetty Push
  • Use response header as server hint
Link: </style.css>; rel=preload; as=style

Link: </app.js>; rel=preload; as=script

Link: <https://fonts.example.com/font.woff>; rel=preload; as=font; crossorigin

Async Messaging

Use chunkable mime-types for request and response

Server starts reply while request frames are still coming

Implementation examples:

  • Google gRPC with Protobuf messages
  • Other formats may be used, such as text/event-stream

Testing and troubleshooting

Harder than HTTP/1.1 due to binary and encryption

  • Chrome chrome://net-internals/#http2 invaluable
  • nghttp as standalone CLI client
  • h2load as benchmarking tool
  • Wireshark still de facto tool for in-depth investigations

Gotchas and limitations

  • HTTP/2 not compatible with WebSockets
  • To push or not to push
  • Very sensitive to TCP head of line blocking
  • Proxy and caching unfriendly
  • Proxy work in progress

HTTP Next ?

QUIC

  • Current experiment by Google
  • In a nutshell: HTTP/2 over UDP
  • No more Head of Line blocking
  • Persistent connections across IP endpoints
  • But NAT traversal issues
  • Perfect for IPv6 ?

Thank you

Raphaël Luta
Freelance technical consultant, Web & Data
@raphaelluta