
The Next Adventure

The past four years here in Indianapolis have been wonderful and a time of growth for me. I have many happy memories as I look back at our time here. But it is time for us to move on. This summer my family and I will be moving to San Francisco.

Goodbye Pinnacle of Indiana

Sadly, this means it's time to leave my job at Pinnacle of Indiana. I've really enjoyed the last few years working there. They're a great development team, and I will miss working with them.

If you are looking for a team to help with your next .NET project, be sure to give them a call.

Indianapolis Meetups

One of the best choices I made was to go to the JavaScript and Node meetups. Indy.js has been a great source of information on various topics, as well as a great networking group for meeting developers from around the city. I've even been lucky enough to present there a few times. Node.Indy has grown from a handful of people to a well-attended meetup. The presentations have ranged from high-speed web scraping to opening garage doors via Arduinos to websockets and WebRTC.

If you are in the Indianapolis area, I highly recommend both of these meetups.

Hello Doyle Software

While we're still in Indianapolis, I'm going to be doing freelance work under my own company. You can check out my site at https://doylesoftware.com. I currently have work lined up, but if you have a project you're looking for help with, let me know and I'll see if it's a fit for me. My focus is on projects where Node.js and Angular.js make sense to provide an interactive and efficient solution. I'm not against doing some small, short-term .NET projects as well.

San Francisco!

We're both excited about moving to San Francisco. We're at a point in our lives where we get to choose anywhere we want to live, so why not go somewhere warm? My wife has secured a great job downtown doing what she loves. I'm not exactly sure what I want to do next, but I'm sure I can find it in the Bay Area. It's also not a bad place to be as a software developer interested in Node!

I'll still be around in Indianapolis until mid-summer if you want to chat or get together!


Using Karma for JavaScript Testing

Getting the tooling to do TDD with JavaScript is something I've been struggling with for the last year. There are lots of tools that handle one aspect or another of JavaScript testing, but nothing was a complete solution. I thought our needs would be fairly common, since we're using pretty standard tool sets.

I wanted:

  • Ability to run JS tests automatically or with a simple key command within Visual Studio (à la ReSharper)
  • The ability to use wildcards for our source and test files. Listing each file out is too painful on a large project.
  • TeamCity integration just like we have for our C# unit tests
  • Code coverage generation, preferably that could also hook into TeamCity

Some Nice To Haves:

  • Allow devs to generate code coverage locally
  • A configuration that could be checked into our source control

I'm finally happy with the setup we're using now. We've set up Karma, which fits our needs and hits just about every point we wanted.

Our Setup

Here's a bit more detail on what we're using and testing against.

Our JS code is mostly using Knockout.js. We try to keep jQuery use to a minimum, and keep it out of our ViewModels completely, with the exception of $.ajax. Knockout makes it very easy to test our client side logic because there is no reliance on the DOM.
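
To make that concrete, here's a minimal sketch of the kind of ViewModel we're talking about. The names (CartViewModel, /api/cart) are made up for illustration, not from our actual codebase:

function CartViewModel() {
    var self = this;
    self.items = ko.observableArray([]);
    // Computed values like this are what we test: pure logic, no DOM.
    self.total = ko.computed(function () {
        var sum = 0;
        self.items().forEach(function (item) {
            sum += item.price;
        });
        return sum;
    });
    self.load = function () {
        // The one place jQuery appears: fetching data.
        return $.ajax({ url: '/api/cart' }).done(function (data) {
            self.items(data.items);
        });
    };
}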

On the testing side we use QUnit mainly because it is very close to NUnit which is our testing framework on the C# side of things. We've recently introduced Sinon.js for our mocking/spies/stubbing framework. We had been using one I wrote, but Sinon is just so much better.
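
Here's a sketch of how a QUnit test with Sinon might look against the hypothetical ViewModel above, stubbing out $.ajax so no real request is made:

module('CartViewModel', {
    setup: function () {
        // Stub $.ajax to return an already-resolved promise with fake data.
        this.ajax = sinon.stub($, 'ajax').returns(
            $.Deferred().resolve({ items: [{ price: 5 }] }).promise()
        );
    },
    teardown: function () {
        this.ajax.restore();
    }
});

test('load fills items from the server', function () {
    var vm = new CartViewModel();
    vm.load();
    ok(this.ajax.calledOnce, 'one request was made');
    equal(vm.items().length, 1, 'items were populated');
    equal(vm.total(), 5, 'total is computed from the items');
});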

A Brief History of Testing

When we started with JavaScript testing, we just used a web page set up from the QUnit tutorials. That was fine for local testing, but didn't work with TeamCity. It didn't take long to get PhantomJS set up and have our tests run in TeamCity that way.

To get code coverage working, we found the YUI-Coverage tool. It's a Java app that instruments your code, then parses the output created when the tests run. It worked, but was a pain to maintain. Since the files were modified when they were instrumented, we had to make sure we saved off a copy of the originals; otherwise we'd see coverage percentages like 56000%. It has no issue instrumenting an already instrumented file for bonus coverage fun.

We were able to get this setup working, but it wasn't quite where we wanted it to be.

Enter Angular.js & Karma

I had seen the limits of Knockout when it came to the very complicated Single Page Apps (SPAs) we had worked on. Knockout worked, but the code was not as clean and clear as I would have liked. I started reading about Angular.js and its approach as a client-side framework. I came across the test framework that the Angular team had created. At the time it had a rather unfortunate name (which has since been corrected), but it appeared to be everything we were looking for.

Karma is a command line tool that runs in Node.js. It takes a more modern approach to testing by being just a modular test runner. It supports all the major testing libraries, including QUnit. It also has a code coverage module which runs Istanbul, also by the YUI team. Istanbul uses another library called Esprima, which allows the instrumentation to be done in memory, saving us the step of saving off the originals.

How it works is actually really cool. You configure Karma with your source and test files and tell it how you want the results reported back to you. There are a variety of reporters; we just use the progress one. You also tell Karma which browsers you would like your tests run in. It defaults to Chrome, but supports the major browsers and PhantomJS. You can configure as many as you like and have your tests run on each concurrently.
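
For the curious, here's a sketch of what a karma.conf.js along those lines might look like (Karma 0.10 style; assumes the karma-qunit plugin is installed, and the paths are placeholders, not our real layout):

module.exports = function (config) {
    config.set({
        frameworks: ['qunit'],
        // Wildcards mean we never have to list files one by one.
        files: [
            'src/js/**/*.js',
            'test/js/**/*.tests.js'
        ],
        reporters: ['progress'],
        browsers: ['PhantomJS'],
        // Watch the files and re-run the tests on every save.
        autoWatch: true
    });
};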

Karma hosts the web server itself and uses websockets to establish a connection to the testing browser. When you update your files, it re-sends them to the browsers and re-runs your tests. This provides instant and automatic feedback. Exactly what we want for doing TDD.

As of 0.10, Karma is plugin-based. The team did a good job of breaking out the existing functionality into Node modules, and the community has filled the gaps. The TeamCity reporter works great, so we're still covered there.

Karma on Windows

For the most part, getting Karma to work on Windows was painless. We're using Node 0.10.15 and all of the Node modules that are used compile just fine. We did run into an issue with how the location of the Chrome and Firefox executables is determined, but I have already submitted pull requests to correct that (Chrome Reporter, Firefox Reporter).

We have two Karma config files set up: one for local development that runs after files are saved, and another for TeamCity with code coverage enabled. This allows us to see the coverage without having to check in, which is actually pretty nice.
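
As a sketch, the TeamCity/coverage config differs mainly in the reporters and preprocessors (again, the paths are placeholders; this assumes the karma-coverage and karma-teamcity-reporter plugins are installed):

module.exports = function (config) {
    config.set({
        frameworks: ['qunit'],
        files: [
            'src/js/**/*.js',
            'test/js/**/*.tests.js'
        ],
        // Instrument only the source files, not the tests.
        preprocessors: {
            'src/js/**/*.js': ['coverage']
        },
        reporters: ['teamcity', 'coverage'],
        coverageReporter: { type: 'html', dir: 'coverage/' },
        browsers: ['PhantomJS'],
        // One pass and exit, which is what a build server wants.
        singleRun: true
    });
};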

My Contribution to the Karma Community

As I was learning how to get Karma going, I didn't like that I had to keep the console window visible to know if my tests failed. I wanted to hear that my tests failed.

Introducing Karma-Beep-Reporter. It's a simple reporter that outputs the ASCII character 0x07 (Bell) when you have a failed test or your tests fail to run altogether. It's meant to run alongside one of the other reporters, since it only beeps. I've only tested it on Windows so far, but it works great. I welcome comments and feedback!
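
Using it is just a matter of adding it alongside your existing reporter in karma.conf.js, something like this (assuming the plugin registers itself under the name 'beep'):

module.exports = function (config) {
    config.set({
        // ...the rest of your config as before...
        reporters: ['progress', 'beep']
    });
};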


Getting started with Node.js and Nginx

I've started to move on to the next phase of learning about Node.js. I have a few sites created, and for the most part IISNode has done a good job allowing me to run Node within IIS. Enabling output and kernel-level caching gives a nice boost to performance as well. While this is all well and good, it's not how Node.js is generally run in production. I decided it was time to learn about hosting Node.js sites on Linux behind nginx.

The Goal

Here's what I want to accomplish.

  1. Get a Linux VM set up (Ubuntu 13.04 x64)
  2. Install Node.js & nginx
  3. Configure nginx to proxy my site with caching enabled for static files
  4. Setup my site to start when the server boots

Installing Linux

There's not much exciting here. Just a vanilla Ubuntu server install. I made sure I had OpenSSH installed so I could manage it remotely. I've done this part before.

Important!
I am not an experienced Linux administrator. I can get around and do some basics, but Linux is undiscovered country for me. The steps below are what I've been able to scrape together off the internet. It worked for me. If there's something I did wrong or there's a better way, I'd love to hear about it!

Installing Node.js & nginx

A little Google magic points out that while Ubuntu has a Node.js package, it's not maintained or up to date. The Node repo has a nice GitHub wiki page covering the steps needed to add a reference to the up-to-date package.

sudo apt-get update
sudo apt-get install python-software-properties python g++ make
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs

This worked like a charm. Now I have Node v0.10.13 running.

I followed a similar process with nginx. They have straightforward documentation for each of the main Linux distros.

The first step is to install the nginx public key. I downloaded it to the server, then ran this command:

sudo apt-key add nginx_signing.key

Next I added these two lines to the end of /etc/apt/sources.list:

deb http://nginx.org/packages/ubuntu/ raring nginx
deb-src http://nginx.org/packages/ubuntu/ raring nginx

Now I'm ready to install.

sudo apt-get update
sudo apt-get install nginx

Success! nginx installed.

Configure nginx

This is where things got fun. I found a good post on StackOverflow with an answer that looked like what I needed, so I started at the top and went to create a new file in /etc/nginx/sites-available. Only, I didn't have a sites-available directory. Did I miss a step?

Again, StackOverflow to the rescue! It turns out that the sites-available/sites-enabled setup is part of the Ubuntu-maintained package, not the main package from the nginx folks. I like the concept of the sites-available/sites-enabled setup, so I decided to implement it. I created the directories, edited the /etc/nginx/nginx.conf file, restarted nginx (sudo service nginx restart), and then went back to getting the site set up.
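
For reference, the directory setup amounted to roughly this (reconstructed for illustration; the nginx.conf include line change is shown further down):

sudo mkdir /etc/nginx/sites-available
sudo mkdir /etc/nginx/sites-enabled
sudo service nginx restart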

I used an article I found on the ARG! Team Blog on Hardening Node.js for Production. It looked like exactly what I wanted! Instead of putting the server configuration directly in nginx.conf, I put mine in the sites-available directory and created a symbolic link to it in the sites-enabled directory. For those that want to see the commands:

cd /etc/nginx/sites-enabled
sudo ln -s /etc/nginx/sites-available/test.conf test.conf

Here's the test.conf file:

 1  upstream testsite {
 2      server 127.0.0.1:3500;
 3  }
 4
 5  server {
 6      listen 80;
 7      access_log /var/log/nginx/test.log;
 8
 9      location ~* ^/(images/|img/|javascript/|js/|css/|stylesheets/|favicon.ico) {
10          root /home/joe/testsite/public;
11          access_log off;
12          expires max;
13      }
14
15      location / {
16          proxy_set_header X-Real-IP $remote_addr;
17          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
18          proxy_set_header Host $http_host;
19          proxy_set_header X-NginX-Proxy true;
20
21          proxy_pass http://testsite;
22          proxy_redirect off;
23      }
24  }

The article from the ARG! Team Blog goes into detail about what's going on in this file. Here are the highlights:

Lines 1-3:
This defines where my Node.js site lives. In my case, it's on the same machine on port 3500. This can be another server, or multiple servers to round-robin against, as shown in the sketch below.
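
For example, a hypothetical multi-server setup would just list more entries, and nginx round-robins across them by default (these addresses are made up):

upstream testsite {
    server 127.0.0.1:3500;
    server 127.0.0.1:3501;
    server 192.168.1.20:3500;
}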

Lines 9-13:
This defines where the static content is that nginx should serve instead of Node.js. Notice that it points to my public directory inside my site.

Lines 15-23:
This defines the root of the site that nginx should proxy for. We add a bunch of headers to tell Node.js/Express that there's a proxy in front of it.

Line 21:
The URL here isn't the URL used to access the site. Instead, it refers back to the upstream block on line 1, which lists the backend servers to send requests to.
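
On the Node.js side, Express needs to be told to trust those headers so that things like req.ip report the real client address instead of nginx's. Here's a minimal sketch of a hypothetical app.js:

// Hypothetical app.js for the site behind the proxy.
var express = require('express');
var app = express();

// Trust the X-Forwarded-* headers set by nginx.
app.enable('trust proxy');

app.get('/', function (req, res) {
    res.send('Client IP: ' + req.ip);
});

// Listen on the port the upstream block points at.
app.listen(3500);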

Time to test it!

After I got all this set up, I started up my site. I opened it up in the browser and...


Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.


Not quite what I was expecting. At least I know nginx is running. But what went wrong? I rechecked everything and I thought it looked right. Then I remembered the instructions for enabling the sites-available/sites-enabled. I had added this line as directed:

include /etc/nginx/sites-enabled/*;

What I missed was to remove the line that was already there:

include /etc/nginx/conf.d/*.conf;

I commented it out by putting a # in front of it and restarted nginx again. When I tested this time, success!

Here's my final nginx.conf after adding the rest of the parts from the ARG! Team blog:

user  nginx;
worker_processes  4;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    proxy_cache_path  /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=3000m inactive=600m;
    proxy_temp_path /var/tmp;
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip  on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    #include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Ok, time for the last step.

Start the site when the server boots

I'm used to Windows services, which are compiled programs. Ubuntu has Upstart, which is a nicer, script-driven system. It looks like that's the modern approach for what I want to do.

Important
I'm not running a module to restart Node if it goes down! This is just a test. When I move a real production site behind nginx I will use a module like Forever.

I started with this StackOverflow post and couldn't get it to work. I did more searching and ran across the Upstart Cookbook, which helped explain what I was even trying to do, and then I found this post about Node.js and the Forever module. The example it gave was much simpler.

To create an Upstart script, create a file in /etc/init. I called mine test.conf for simplicity. Here's what I ended up with in the file:

 1  #!upstart
 2
 3  description "Test Node.js Site"
 4
 5  env FULL_PATH="/home/joe/testsite"
 6  env FILE_NAME="app.js"
 7
 8  start on startup
 9  stop on shutdown
10
11  script
12      exec node $FULL_PATH/$FILE_NAME > /home/joe/testsite/test.log
13  end script

I start it up with:

sudo start test

And the site is live!

I reboot the server and... the site is down. Hmm. Back to the Google.

This time it's AskUbuntu (a StackExchange Network Site) which has a perfectly named post: Why isn't my upstart service starting on system boot? It led me to try changing my start event from on startup on line 8 to:

start on net-device-up IFACE=eth0

I reboot once again... and the site is up!

What next?

Now that I have a basic site set up, I want to play around with moving a few other sites onto this server and off of IIS. Since I still have sites that I want to keep on IIS, I'm also planning on having nginx proxy for those. If things go well, I'll probably move this site as well.