On GitHub: bithound / 2015-jsDay
Presented by Tony Thompson
www.bithound.io / @bithoundio
#!/bin/bash
nodemon \
  -e js,html,css \
  --exec "bash ./restart.sh"

Also, it's possible to work around some of these issues. For example, we know file reads are slow inside the VM, right? That makes things like nodemon very slow, too. But we don't have to run it inside the VM. It's not part of our production setup! There's no reason we can't take it and run it in the host environment. I'll get into where we'd stash this in the next section.
Vagrant.configure(2) do |config|
  config.vm.provider :virtualbox do |vb|
    vb.cpus = 3
    vb.memory = 2048
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end
end

Most VMs can be tuned to meet your needs. Vagrant's default is (I think) 512MB of RAM and a single CPU. This kind of tuning helps, but regardless, your app will not be as fast inside a virtualizer -- no matter which one -- as it is on bare metal. But keep in mind, you're almost certainly deploying to a virtualized environment, unless you own your own hardware.
|-- Vagrantfile
|-- Dockerfile
|-- app
|-- bin
|   |-- start.sh
|   +-- stop.sh
|-- etc
+-- scripts
    |-- provision.sh
    |-- deploy.sh
    +-- scripts.sh

We actually have two separate places where we keep scripts; our main repo looks something like the tree above. And just to make things difficult, we keep our systems-level code in a separate repo from our actual app. We include the app as a submodule.
|-- Vagrantfile
|-- Dockerfile
|-- app
|   |-- bin
|   |   |-- start.sh
|   |   +-- stop.sh
|   +-- scripts
|       +-- migrate.sh
|-- bin
|   |-- start.sh
|   +-- stop.sh
|-- etc
+-- scripts
    |-- provision.sh
    |-- deploy.sh
    +-- scripts.sh

If we pull in that submodule, our directory tree looks like this. Scripts in 'app/bin' handle regular operation (e.g. starting, stopping) of our app from within the VM. 'bin' handles the same, but from the host. 'app/scripts' holds our app-specific management scripts, and 'scripts' holds our system-wide, common scripts. This is our convention. There are many like it, but this works for us at bitHound. So depending on your path, you can tell if you are trying to interact from inside or outside the guest.
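That inside-or-outside check can be sketched as a small helper. This is a sketch, not bitHound's actual code: the function name is invented, and it assumes Vagrant's default synced-folder mount point of /vagrant inside the guest.

```shell
#!/bin/bash
# Vagrant mounts the project at /vagrant inside the guest by default,
# so a script can branch on its own location to tell host from guest.
where_am_i() {
  case "$1" in
    /vagrant|/vagrant/*) echo "inside guest" ;;
    *)                   echo "on host" ;;
  esac
}

where_am_i "$(pwd)"
where_am_i /vagrant/app/bin   # -> inside guest
```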
#!/bin/bash
# start app
vagrant ssh --command "cd /vagrant/app && ./bin/cli server start"

So here's a really naive way of starting an app in a Vagrant box. This runs, but never comes back.
#!/bin/bash
# start app
vagrant ssh --command "cd /vagrant/app && ./bin/cli server start &"
#!/bin/bash
# start app
vagrant ssh --command "cd /vagrant/app && ./bin/cli server start" &

So we can push it into the background, either on the guest or the host, but then we don't have a handle, or any way to stop or restart the app.
#!/bin/bash
# start app
vagrant ssh --command "cd /vagrant/app; forever --uid 'app' -a start ./bin/cli server start"
#!/bin/bash
# stop app
vagrant ssh --command "forever stop app"
And here's a version using 'forever'. It's a JS tool, a lot like supervisor, built to manage long-running processes: start, stop, restart. Here it's running inside the guest, but we're managing it from the host.
#!/bin/bash
vagrant ssh --command "cd /vagrant/app && ./bin/bithound.js $*"

Utility / proxy scripts let you interact with your project inside the VM. You could call this a trampoline or a thunk, uh, if you lived through the win16 to win32 transition. (I didn't, but I was just after that and still had to deal with old developer documentation.) $* expands to the parameter list the script itself was called with.
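Since the whole trick of the proxy script is argument forwarding, here's a quick illustration of how $* behaves next to "$@". A sketch only; the function names are made up.

```shell
#!/bin/bash
# "$*" joins all arguments into a single word; "$@" keeps them separate.
# The proxy script above interpolates $* into the --command string, so
# the arguments arrive at bithound.js as one flat, space-joined list.
forward_star() { printf '%s\n' "$*"; }
forward_at()   { printf '%s\n' "$@"; }

forward_star server start   # one line:  "server start"
forward_at   server start   # two lines: "server", "start"
```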
unless Vagrant.has_plugin?('vagrant-s3auth')
  # Attempt to install ourself. Bail out on failure so we don't get
  # stuck in an infinite loop.
  system('vagrant plugin install vagrant-s3auth') || exit!

  # Relaunch Vagrant so the plugin is detected.
  # Exit with the same status code.
  exit system('vagrant', *ARGV)
end
Vagrant.configure(2) do |config|
  config.vm.define "app" do |app|
    app.vm.box = "trusty64"
    app.vm.provision "shell", path: "scripts/provision_app.sh"
    app.vm.network "private_network", ip: "10.10.11.11"
  end

  config.vm.define "worker1" do |worker|
    worker.vm.box = "trusty64"
    worker.vm.provision "shell", path: "scripts/provision_worker.sh"
    worker.vm.network "private_network", ip: "10.10.11.12"
  end

  config.vm.define "worker2" do |worker|
    worker.vm.box = "trusty64"
    worker.vm.provision "shell", path: "scripts/provision_worker.sh"
    worker.vm.network "private_network", ip: "10.10.11.13"
  end
end

So if you are building a distributed system, develop on a distributed system! Almost all virtualization environments right now allow you to define multiple VMs and private networks.
#!/bin/bash
# start app
vagrant ssh app --command "cd /vagrant/app; forever --uid 'app' -a start ./bin/cli server start"

# start workers
vagrant ssh worker1 --command "cd /vagrant/app; forever --uid 'worker' -a start ./bin/bithound.js worker 10.10.11.11"
vagrant ssh worker2 --command "cd /vagrant/app; forever --uid 'worker' -a start ./bin/bithound.js worker 10.10.11.11"
## Do we need fake SSL keys?
ssl_pem=/etc/ssl/private/www_bithound_io.pem
ssl_key=/etc/ssl/private/www_bithound_io.key
ssl_crt=/etc/ssl/private/www_bithound_io.crt

if [ ! -e $ssl_pem ]; then # No PEM.
  if [ ! -e $ssl_key ] || [ ! -e $ssl_crt ]; then # No keys.
    country=CA
    state=Ontario
    locality=Kitchener
    organization=bitHound
    name=app.bithound.io

    openssl req -x509 \
      -newkey rsa:2048 \
      -subj "/C=$country/ST=$state/L=$locality/O=$organization/CN=$name" \
      -keyout $ssl_key \
      -out $ssl_crt \
      -days 90 \
      -nodes
  fi
  cat $ssl_crt $ssl_key > $ssl_pem
fi

We also deliver everything over SSL, so we do that on our dev boxes too. We autogenerate SSL certs if there isn't one present already. This approach is useful in other places too. We use Amazon S3 for file storage in production. We do static analysis of code and generate a lot of data. In development, we use an S3 simulator called 's3rver' to stand in for S3, because it's cheaper not to send that data out if we don't have to.
#!/bin/bash
BASE='/vagrant'

apt-get update
apt-get install -y build-essential curl git mongodb-clients nginx tmux vim

cp "$BASE/etc/nginx/nginx.conf" /etc/nginx/nginx.conf
/etc/init.d/nginx restart

npm -g install forever
npm -g install nodemon

Here's a really early version of our provision script. We don't need to template our nginx config because it's pretty simple. Chef recipes and Puppet rules tend to be written to be flexible and generic. That's great if you're maintaining hundreds of servers, each configured slightly differently. If you want to configure your servers identically, just blindly copying a file is much simpler.
Vagrant.configure(2) do |config|
  config.vm.provision "shell", path: "scripts/provision.sh"
end

And then calling a shell provisioner is really simple. In real life, we have parameters that we can pass to our provisioning script. We can tell it what user our app will run as, and what the source and destination paths actually are. We actually use the same scripts to provision our dev machines and our production machines.
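The parameter passing might look something like the sketch below; the names (app_user, app_root) and defaults are placeholders, not bitHound's actual ones. In the Vagrantfile you'd pass them via the shell provisioner's args option, e.g. `config.vm.provision "shell", path: "scripts/provision.sh", args: ["deploy", "/srv/app"]`.

```shell
#!/bin/bash
# Minimal sketch of a parameterized provision script: positional
# arguments pick the app user and source path, with dev-friendly
# defaults so the same script serves both Vagrant and production.
app_user="${1:-vagrant}"
app_root="${2:-/vagrant/app}"

echo "provisioning as user=$app_user root=$app_root"
```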
FROM ubuntu:14.04
# The scripts directory has to be copied into the image
# before the provision script can run.
ADD scripts /scripts
RUN /scripts/provision.sh
ADD app /app
RUN cd /app && npm install
CMD ["/app/scripts/entrypoint.sh"]

Incidentally, you can then use the same bash file with Docker.