GitHub: austinsmorris / speak-softly-and-carry-a-big-stack
How to leverage open source projects (nginx, HAProxy, Redis, Varnish, etc.) to turbocharge your PHP application stack
Austin Morris / @austinsmorris Slides: https://austinsmorris.github.io/speak-softly-and-carry-a-big-stack
(In the traditional sense...)
Problem: My PHP is too slow!
(Or at least PHP 5.5, which bundles OPcache)
php.ini settings:
; Enable the opcode cache for the web SAPI and the CLI.
opcache.enable=1
opcache.enable_cli=1
; Shared memory for compiled scripts, in megabytes.
opcache.memory_consumption=128
; Shared memory for interned strings, in megabytes.
opcache.interned_strings_buffer=8
; Should be at least the number of PHP files in your project.
opcache.max_accelerated_files=4000
opcache.fast_shutdown=1
; With validate_timestamps=0, scripts are never re-checked for changes
; (revalidate_freq is ignored); restart PHP-FPM to deploy new code.
opcache.revalidate_freq=60
opcache.validate_timestamps=0
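`opcache.max_accelerated_files` should cover every PHP file OPcache might compile. A quick way to size it (a sketch; `/path/to/project` is a placeholder for your codebase):

```shell
# Count the .php files OPcache would need to track, and compare the
# result against opcache.max_accelerated_files (4000 above). If your
# codebase is larger, raise the setting.
find /path/to/project -type f -name '*.php' | wc -l
```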
Problem: My web server is too slow!
nginx.conf
user austin admin;
worker_processes 1;

events {
  multi_accept on;
  worker_connections 1024;
}
nginx.conf (cont.)
http {
  sendfile on;
  tcp_nopush on;
  keepalive_timeout 65;
  server_tokens off;

  include mime.types;
  default_type application/octet-stream;

  access_log /usr/local/var/log/nginx/access.log;
  error_log /usr/local/var/log/nginx/error.log;

  upstream php-fpm-socket {
    server unix:/tmp/php-fpm.sock;
  }

  include /usr/local/etc/nginx/sites-enabled/*;
}
sites-enabled/mysite
server {
  listen 80;
  server_name mysite.dev;
  root /path/to/public/dir;
  index index.php;

  location / {
    try_files $uri $uri/ /index.php?$query_string;
  }

  location ~ \.php$ {
    fastcgi_pass php-fpm-socket;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}
php-fpm.conf
[dev]
listen = /tmp/php-fpm.sock
pm = dynamic
pm.max_children = 100
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 1000
pm.status_path = /php_status
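`pm.max_children` is ultimately bounded by memory: every PHP-FPM child is a full process. A common sizing rule (a sketch with assumed figures, not from the slides) is RAM reserved for PHP-FPM divided by the average resident size of one worker:

```python
def max_children(available_mb: int, per_child_mb: int) -> int:
    """Rough ceiling for pm.max_children: memory reserved for PHP-FPM
    divided by the average memory footprint of one worker process."""
    return available_mb // per_child_mb

# e.g. 4 GB reserved for PHP-FPM, ~40 MB per worker (assumed figures):
print(max_children(4096, 40))  # → 102
```

If the computed ceiling is lower than the `pm.max_children = 100` above, the pool can drive the box into swap under load, so measure a real worker's footprint before settling on a number.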
Problem: I only have one web server!
haproxy.cfg
global
  maxconn 50000
  user haproxy
  group haproxy
  stats socket /tmp/haproxy
  node lb1
  nbproc 1
  daemon

defaults
  log global
  retries 3
  timeout connect 5000ms
  timeout client 5000ms
  timeout server 5000ms
  maxconn 50000
haproxy.cfg (cont.)
frontend tcp_proxy
  bind *:80
  mode tcp
  default_backend my-backend

backend my-backend
  mode tcp
  balance roundrobin
  # option httpchk HEAD / HTTP/1.1\r\nHost:\ example.com
  server my-server-1 10.0.2.2 check port 80 inter 1000
  server my-server-2 10.0.2.3 check port 80 inter 1000
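`balance roundrobin` simply hands each new connection to the next server in the list. A minimal sketch of the rotation (server names taken from the backend above):

```python
from itertools import cycle

# The two servers declared in backend my-backend.
backends = cycle(["my-server-1", "my-server-2"])

# Incoming connections alternate between the two servers.
assignments = [next(backends) for _ in range(4)]
print(assignments)  # → ['my-server-1', 'my-server-2', 'my-server-1', 'my-server-2']
```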
haproxy.cfg (cont.)
listen stats *:1936
  mode http
  stats enable
  stats uri /
  stats hide-version
  stats auth Username:Password
Problem: I have more than one web server!
php.ini
extension=igbinary.so
extension=redis.so
redis.conf
(demo for full effect)
php.ini
session.save_handler = redis session.save_path = "tcp://192.168.0.100:6379"
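The point of moving sessions into Redis is that every web server reads the same store, so HAProxy can send a request to any box without logging the user out. A toy illustration (plain dicts standing in for per-server session files vs. one shared Redis):

```python
# File-based sessions: each server keeps its own store, so a user
# whose next request lands on the other server loses their session.
server_a_files, server_b_files = {}, {}
server_a_files["sess123"] = {"user": "austin"}
print("sess123" in server_b_files)  # → False

# Redis-backed sessions: both servers read and write one shared store,
# so the session is visible no matter which server handles the request.
shared_redis = {}
shared_redis["sess123"] = {"user": "austin"}
print(shared_redis.get("sess123"))  # → {'user': 'austin'}
```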
Problem: I want more!
a.k.a. a caching HTTP reverse proxy
/etc/default/varnish
# Should we start varnishd at boot? Set to "no" to disable.
START=yes

# Maximum number of open files (for ulimit -n)
NFILES=131072

# Maximum locked memory size (for ulimit -l)
MEMLOCK=82000

DAEMON_OPTS="-a :80 \
  -T localhost:6082 \
  -S /etc/varnish/secret \
  -f /etc/varnish/my.vcl \
  -s malloc,256m"
-s malloc,256m            # store the cache in memory (256 MB)
-s file,/tmp/varnish,500G # store the cache in a file on disk (500 GB)
/etc/varnish/my.vcl
vcl 4.0;

import directors;

backend server1 {
  .host = "192.168.0.100";
  .port = "80";
}

backend server2 {
  .host = "192.168.0.200";
  .port = "80";
}

acl purge {
  "192.168.0.200";
  "192.168.0.100";
}

sub vcl_init {
  new backends = directors.round_robin();
  backends.add_backend(server1);
  backends.add_backend(server2);
}
/etc/varnish/my.vcl (cont.)
sub vcl_recv {
  if (req.method == "PURGE") {
    if (!client.ip ~ purge) {
      return (synth(405, "Not allowed"));
    }
    return (purge);
  }
  if (req.http.Cache-Control ~ "no-cache") {
    return (pass);
  }
  return (hash);
}
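The `vcl_recv` routine makes three decisions: PURGE is allowed only from IPs in the `purge` ACL, `Cache-Control: no-cache` requests bypass the cache, and everything else gets a cache lookup. A quick model of that logic (Python standing in for VCL, IPs taken from the ACL above):

```python
# IPs allowed to purge, per the acl purge block.
PURGE_ACL = {"192.168.0.100", "192.168.0.200"}

def recv(method, client_ip, headers):
    """Mirror vcl_recv: purge, pass, or hash (cache lookup)."""
    if method == "PURGE":
        if client_ip not in PURGE_ACL:
            return "synth 405"
        return "purge"
    if "no-cache" in headers.get("Cache-Control", ""):
        return "pass"
    return "hash"

print(recv("PURGE", "10.0.2.2", {}))                          # → synth 405
print(recv("PURGE", "192.168.0.100", {}))                     # → purge
print(recv("GET", "1.2.3.4", {"Cache-Control": "no-cache"}))  # → pass
print(recv("GET", "1.2.3.4", {}))                             # → hash
```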
/etc/varnish/my.vcl (cont.)
sub vcl_backend_response {
  set beresp.ttl = 120s;
}