Joe Doyle's Coding Blog

Posts tagged "Node.js"

Getting the tooling to do TDD with JavaScript has been something I've struggled with for the last year. There have been lots of tools that can handle one aspect or another of JavaScript testing, but nothing was a complete solution. I thought our needs would be fairly common, since we're using pretty standard tool sets. I wanted:

- The ability to run JS tests automatically, or with a simple key command within Visual Studio (a la ReSharper)
- The ability to use wildcards for our source and test files; listing each file out is too painful on a large project
- TeamCity integration, just like we have for our C# unit tests
- Code coverage generation, preferably something that could also hook into TeamCity

Some nice-to-haves:

- Allow devs to generate code coverage locally
- A configuration that could be checked into our source control

I'm finally happy with the setup we're using now. We've set up Karma, which fits our needs and hits just about every point we wanted.

Our Setup

Here's a bit more detail on what we're using and testing against. Our JS code mostly uses Knockout.js. We try to keep jQuery use to a minimum, and keep it out of our ViewModels completely, with the exception of $.ajax. Knockout makes it very easy to test our client-side logic because there is no reliance on the DOM. On the testing side we use QUnit, mainly because it is very close to NUnit, which is our testing framework on the C# side of things. We've recently introduced Sinon.js as our mocking/spies/stubbing framework. We had been using one I wrote, but Sinon is just so much better.

A Brief History of Testing

When we started with JavaScript testing, we just used a web page set up from the QUnit tutorials. That was fine for local testing, but didn't work with TeamCity. It didn't take long to get PhantomJS set up and have our tests run in TeamCity that way. To get code coverage working, we found the YUI-Coverage tool.
It's a Java app that instruments your code, then parses the output created when the tests run. It worked, but was a pain to maintain. Since the files were modified when they were instrumented, we had to make sure we saved off a copy of the originals; otherwise we'd see coverage percentages like 56000%. It has no issue instrumenting an already-instrumented file, for bonus coverage fun. We were able to get this setup working, but it wasn't quite where we wanted it to be.

Enter Angular.js & Karma

I had seen the limits of Knockout when it came to the very complicated Single Page Apps (SPAs) we had worked on. Knockout worked, but the code was not as clean and clear as I would have liked. I started reading about Angular.js and its approach as a client-side framework, and I came across the test framework that the Angular team had created. At the time it had a rather unfortunate name (which has since been corrected), but it appeared to be everything we were looking for.

Karma is a command line tool that runs in Node.js. It takes a more modern approach to testing by being just a modular test runner. It supports all the major testing libraries, including QUnit. It also has a code coverage module which runs Istanbul, also by the YUI team. Istanbul uses another library called Esprima, which allows the instrumentation to be done in memory, saving us the step of saving off the originals.

How it works is actually really cool. You configure Karma with your source and test files and tell it how you want the results reported back to you. There are a variety of reporters; we just use the progress one. You also tell Karma which browsers you would like your tests run in. It defaults to Chrome, but it supports the major browsers and PhantomJS. You can configure as many as you like and have your tests run on each concurrently. Karma hosts the web server itself and uses websockets to establish a connection to the testing browser.
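To make that concrete, a Karma configuration along these lines covers the points above. This is a sketch, not our actual file: the paths are hypothetical, and the full set of options is in the Karma docs.

```javascript
// karma.conf.js -- sketch of a setup like the one described above.
// The src/ and test/ paths are hypothetical; adjust to your layout.
module.exports = function (config) {
  config.set({
    // QUnit support comes from the karma-qunit plugin.
    frameworks: ['qunit'],

    // Wildcards instead of listing every file.
    files: [
      'src/**/*.js',
      'test/**/*.js'
    ],

    // We just use the progress reporter.
    reporters: ['progress'],

    // Headless via PhantomJS; Chrome, Firefox, etc. also work.
    browsers: ['PhantomJS'],

    // Re-run the tests whenever a watched file changes.
    autoWatch: true
  });
};
```

A second config for the build server can swap in the TeamCity reporter and enable Istanbul coverage through the coverage preprocessor.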
When you update your files, it re-sends them to the browsers and re-runs your tests. This provides instant, automatic feedback; exactly what we want for doing TDD. As of 0.10, Karma is plugin based. The team did a good job of breaking out the existing functionality into Node modules, and the community has filled the gaps. The TeamCity reporter works great, so we're still covered there.

Karma on Windows

For the most part, getting Karma to work on Windows was painless. We're using Node 0.10.15, and all of the Node modules that are used compile just fine. We did run into an issue with how the location of the Chrome and Firefox executables is determined, but I have already submitted pull requests to correct that (Chrome Reporter, Firefox Reporter). We have two Karma config files set up: one for local development that runs after files are saved, and another for TeamCity with code coverage enabled. This lets us see the coverage without having to check in, which is actually pretty nice.

My Contribution to the Karma Community

As I was learning how to get Karma going, I didn't like how I had to keep the console window visible to know if my tests failed. I wanted to hear that my tests failed. Introducing Karma-Beep-Reporter. It's a simple reporter that outputs the ASCII character 0x07 (Bell) when you have a failed test, or when your tests fail to run altogether. It's meant to run alongside one of the other reporters, since it only beeps. I've only tested it on Windows so far, but it works great. I welcome comments and feedback!
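For the curious, the whole trick behind the beeping is just writing the bell character to stdout. A sketch of the idea (not the actual karma-beep-reporter source):

```javascript
// Sketch: emit the ASCII bell character (0x07) when any tests fail,
// so you hear the failure without watching the console.
function beepIfFailed(failedCount) {
  var failed = failedCount > 0;
  if (failed) {
    process.stdout.write('\u0007');
  }
  return failed;
}

beepIfFailed(0); // silent
beepIfFailed(3); // terminal beeps
```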

Using Karma for JavaScript Testing

I've started to move on to the next phase of learning about Node.js. I have a few sites created, and for the most part IISNode has done a good job allowing me to run them within IIS. Enabling output and kernel-level caching gives a nice boost to performance as well. While this is all well and good, it's not how Node.js is generally run in production scenarios. I decided it was time to learn about hosting Node.js sites on Linux behind nginx.

The Goal

Here's what I want to accomplish:

- Get a Linux VM set up; Ubuntu 13.04 x64
- Install Node.js & nginx
- Configure nginx to proxy my site, with caching enabled for static files
- Set up my site to start when the server boots

Installing Linux

There's not much exciting here. Just a vanilla Ubuntu server install. I made sure I had OpenSSH installed so I could manage it remotely. I've done this part before.

Important! I am not an experienced Linux administrator. I can get around and do some basics, but Linux is undiscovered country for me. The steps below are what I've been able to scrape together off the internet. It worked for me. If there's something I did wrong or there's a better way, I'd love to hear about it!

Installing Node.js & nginx

A little Google magic points out that while Ubuntu has a Node.js package, it's not maintained or up to date. The Node repo has a nice GitHub wiki page covering the steps you need to add a reference to the up-to-date package:

```
sudo apt-get update
sudo apt-get install python-software-properties python g++ make
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs
```

This worked like a charm. Now I have Node v0.10.13 running. I followed a similar process with nginx. They have straightforward documentation for each of the main Linux distros. The first step is to install the nginx public key.
I downloaded it to the server, then ran this command:

```
sudo apt-key add nginx_signing.key
```

Next I added these two lines to the end of /etc/apt/sources.list (they point at nginx's own package repository):

```
deb http://nginx.org/packages/ubuntu/ raring nginx
deb-src http://nginx.org/packages/ubuntu/ raring nginx
```

Now I'm ready to install:

```
apt-get update
apt-get install nginx
```

Success! nginx installed.

Configure nginx

This is where things got fun. I found a good post on StackOverflow with an answer that looked like what I needed! So I started at the top and went to create a new file in /etc/nginx/sites-available. Only, I didn't have a sites-available directory. Did I miss a step? Again, StackOverflow to the rescue! It turns out that the sites-available/sites-enabled setup is part of the Ubuntu-maintained package, not the main package from the nginx folks. I like the concept of the sites-available/sites-enabled setup, so I decided to implement it. I created the directories, edited the /etc/nginx/nginx.conf file, restarted nginx (sudo service nginx restart), and then went back to getting the site set up. I used an article from the ARG! Team Blog I found on Hardening Node.js For Production. It looked like what I wanted! Instead of putting the server configuration directly in nginx.conf, I put mine in the sites-available directory and created a symbolic link to it in the sites-enabled directory. For those that want to see the command:

```
cd /etc/nginx/sites-enabled
sudo ln -s /etc/nginx/sites-available/test.conf test.conf
```

Here's the test.conf file:

```
upstream testsite {
    server 127.0.0.1:3500;  # the Node.js site (same machine, port 3500)
}

server {
    listen 80;
    access_log /var/log/nginx/test.log;

    location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|favicon.ico) {
        root /home/joe/testsite/public;
        access_log off;
        expires max;
    }

    location / {

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://testsite;
        proxy_redirect off;
    }
}
```

The article from the ARG! Team Blog goes into detail about what's going on in this file.
Here are the highlights:

- Lines 1-3: This defines where my Node.js site is. In my case, it's on the same machine on port 3500. This could be another server, or multiple servers to round-robin against.
- Lines 9-13: This defines the static content that nginx should serve instead of Node.js. Notice that it points to the public directory inside my site.
- Lines 15-23: This defines the root of the site that nginx should proxy for. We add a bunch of headers to tell Node.js/Express that there's a proxy in front of it.
- Line 21: The URL here isn't the URL used to access the site. Instead, it refers back to line 1 as the backend servers to send requests to.

Time to test it! After I got all this set up, I started up my site. I opened it up in the browser and...

Welcome to nginx! If you see this page, the nginx web server is successfully installed and working. Further configuration is required. For online documentation and support please refer to nginx.org. Commercial support is available at nginx.com. Thank you for using nginx.

Not quite what I was expecting. At least I know nginx is running. But what went wrong? I rechecked everything, and I thought it looked right. Then I remembered the instructions for enabling sites-available/sites-enabled. I had added this line as directed:

```
include /etc/nginx/sites-enabled/*;
```

What I missed was removing the line that was already there:

```
include /etc/nginx/conf.d/*.conf;
```

I commented it out by putting a # in front of it and restarted nginx again. When I tested this time, success! Here's my final nginx.conf after adding the rest of the parts from the ARG!
Team blog:

```
user nginx;
worker_processes 4;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=3000m inactive=600m;
    proxy_temp_path /var/tmp;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    gzip on;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_buffers 16 8k;

    #include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```

Ok, time for the last step.

Start the site when the server boots

I'm used to Windows services, which are compiled programs. Ubuntu has Upstart, which is a nicer, script-driven system. It looks like that's the modern approach for what I want to do.

Important: I'm not running a module to restart Node if it goes down! This is just a test. When I move a real production site behind nginx, I will use a module like Forever.

I started with this StackOverflow post and couldn't get it to work. I did more searching and ran across the Upstart Cookbook, which helped explain what I was even trying to do, and then I found this post about Node.js and the Forever module. The example it gave was much simpler. To create an Upstart script, create a file in /etc/init. I called mine test.conf for simplicity.
Here's what I ended up with in the file:

```
#!upstart

description "Test Node.js Site"

env FULL_PATH="/home/joe/testsite"
env FILE_NAME="app.js"

start on startup
stop on shutdown

script
    exec node $FULL_PATH/$FILE_NAME > /home/joe/testsite/test.log
end script
```

I start it up with:

```
sudo start test
```

And the site is live! I reboot the server and... the site is down. Hmm. Back to the Google. This time it's AskUbuntu (a StackExchange network site) which has a perfectly named post: Why isn't my upstart service starting on system boot? It led me to try changing my start event on line 8 from "start on startup" to:

```
start on net-device-up IFACE=eth0
```

I reboot once again... and the site is up!

What next?

Now that I have a basic site set up, I want to play around with moving a few other sites onto this server and off of IIS. Since I still have sites that I want to keep on IIS, I'm also planning on having nginx proxy for those as well. If things go well, I'll probably move this site too.

Getting started with Node.js and Nginx

A common question I've seen on StackOverflow asks for the best way to open a connection to MongoDB when starting up your Express app. Folks generally don't care for just putting all of the Express setup in the callback of the MongoDB connect call, but it seems to be the generally accepted approach. I didn't like it either, and felt there must be a better way. Here's what I came up with.

The Callbacks

You can't really escape the callbacks when dealing with the native MongoDB driver; pretty much every call expects one. The way I deal with that is by using promises via the Q library. Q goes beyond just providing a way to use promises: it also provides helper functions for wrapping existing Node.js APIs that use the standard function(err, result) callback pattern. Promises are a deep topic themselves, so I won't go into them in detail here. Just know that they can help turn the callback "Pyramid of Doom" (or "Callback Christmas Tree") into a chained series of function calls, which greatly improves the readability of your code. Google can hook you up if you want to know more.

The Database object

The first step that made the most sense when I started using MongoDB in Node.js was to create my data access object. It's used for creating the connection, holding the references to the collections used, and the methods that perform the specific actions against MongoDB.
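Before getting to the Database object, here's the wrapping idea in miniature. This sketch uses the built-in Promise and a fake connect function so it runs standalone; the actual code below uses Q's nfcall against the real MongoClient.connect:

```javascript
// A simplified version of what Q.nfcall does: call a Node-style
// function(err, result) API and get a promise back instead of
// nesting callbacks.
function nfcall(fn) {
  var args = Array.prototype.slice.call(arguments, 1);
  return new Promise(function (resolve, reject) {
    fn.apply(null, args.concat(function (err, result) {
      if (err) { reject(err); } else { resolve(result); }
    }));
  });
}

// Stand-in for an async API like MongoClient.connect.
function fakeConnect(url, callback) {
  setImmediate(function () {
    callback(null, { url: url });
  });
}

// The "Pyramid of Doom" becomes a flat chain:
nfcall(fakeConnect, 'mongodb://localhost/test')
  .then(function (db) {
    console.log('connected to ' + db.url);
    return db;
  });
```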
So here's what my Database object looks like:

```javascript
var Q = require('Q'),
    MongoClient = require('mongodb').MongoClient,
    ObjectId = require('mongodb').ObjectID,
    Server = require('mongodb').Server,
    ReplSet = require('mongodb').ReplSet,
    _ = require('underscore');

var Database = function(server, database) {
    this.server = server;
    this.database = database;
};

Database.prototype.connect = function(collections) {
    var self = this;
    var connectionString = "mongodb://" + this.server + "/" + this.database + '?replicaSet=cluster&readPreference=secondaryPreferred';
    return Q.nfcall(MongoClient.connect, connectionString)
        .then(function(db) {
            _.each(collections, function(collection) {
                self[collection] = db.collection(collection);
            });

            return db;
        });
};

Database.prototype.findDocs = function(term) {
    return this.mydocs.find({ Title: term }).stream();
};

Database.prototype.saveDoc = function(postData) {
    return Q.npost(this.mydocs, "update",
        [{ id: postData.id },  // selector value assumed; lost in the original formatting
         postData, { w: 1, upsert: true }]);
};

module.exports = Database;
```

So what's going on here? For the most part, nothing very exciting. We take in the server(s) and database we want to connect to in the constructor. The first interesting part starts on line 16: Q.nfcall is the Q helper function wrapping MongoClient.connect and giving us back a promise. We chain a then() function, which is called after the connection is made to MongoDB. That function receives the connected db object, from which we save a reference to each collection we want to use in our app. We then return the db object from the function so we can keep passing it along. The end result, the chain of our two functions, is still a promise, which is returned to the caller. Just to show a little more detail, you can also see the Q library in use for performing an upsert when we want to save a new document. Again, the promise is returned, which means we don't need to use a callback.
Line 27 also shows that the find function can utilize streams instead of a callback. I hope that feature spreads around more!

The Express App Configuration

For the Express configuration, I decided to keep most of it wrapped in a function. Most of it could be pulled out and just run before we initialize the database, but I like that it's wrapped up, personally. So here's what our app.js looks like:

```javascript
var database = new Database(settings.databaseServers, settings.database);

function startServer(db) {
    app.set('port', process.env.PORT || 3000);
    app.set('views', __dirname + '/views');

    // The rest of the setup is excluded for brevity...

    console.log('Connected to the database');
    app.locals.database = database;

    routes.registerRoutes(app);

    http.createServer(app).listen(app.get('port'), function onServerListen() {
        console.log('Express server listening on port ' + app.get('port'));
    });
}

database.connect(['Posts', 'Stats', 'Log'])
    .then(startServer);
```

The code that really starts things off is at the bottom. We call connect on our Database, passing in the array of collections we want. Since connect returns a promise, we can tack on another function using then(), which will also receive our connected db object. In this case, it's our startServer function, which loads up Express and starts our server listening.

Accessing the Database in your Routes

In our app.js snippet, something I do is attach the database to app.locals on line 10. I'm not sure if this is the best approach, but it has been working for me so far. Now in my routes, I can access the database using req.app.locals.database. It could also be passed in to the registerRoutes function and passed around from there. For my blog, instead of accessing the database directly from the reference, I have another layer which resembles the Repository pattern. For simpler apps I've been OK with the direct-reference approach.

Can it be better? Like most of the code we write, it looks pretty good today. Much better than how we did it last year.
I'm not sure if there's a better way, with better falling into [simpler, more scalable, something I just don't know about]. If you know of or use a better approach, I'd love to hear about it!
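As a footnote on the app.locals approach: the lookup from inside a route handler goes through the request's reference to the app. Sketched here with stand-in objects so it runs without Express (in a real app, req.app is supplied by the framework):

```javascript
// Stand-ins to illustrate the lookup path; Express supplies req.app itself.
var app = { locals: {} };
app.locals.database = {
  findDocs: function (term) { return ['doc matching ' + term]; }
};

// A route handler like those described above: the database is
// pulled off app.locals via the request.
function searchRoute(req, res) {
  var database = req.app.locals.database;
  res.send(database.findDocs(req.query.term));
}

// Simulate a request the way Express would wire it up.
var sent;
searchRoute(
  { app: app, query: { term: 'express' } },
  { send: function (body) { sent = body; } }
);
console.log(sent); // → [ 'doc matching express' ]
```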

A Pattern for Connecting to MongoDB in an Express App

The World of WordPress

About a year and a half ago, I made the switch from Posterous to WordPress for my blog. I figured I might as well learn how to use the 800-pound gorilla in the room. For the most part, things went well, considering that I wanted to run it on Windows under IIS and use SQL Server as the database. I added some plugins for the basics, like commenting, syntax highlighting, and the like. The import from Posterous was smooth, with nothing lost. And it was good.

I'm not sure if it was a WordPress upgrade or an update to one of the plugins, but one day a few months ago I tried to create a new post, only to have 90% of it just disappear upon hitting save. I hit the edit button and retyped a paragraph to see if that would save. It didn't. Typed a little less, then previewed the post this time. Gone. I did a few more experiments with creating new posts and editing them in various stages. They all seemed to auto-save early in the entry and then get locked forever.

A Ghost in the Darkness

I wasn't sure what I wanted to do about my blog. I wasn't in the mood to re-install WordPress. I looked at a few blogs written in .NET, but none of them really appealed to me, since most are written using WebForms. Then I saw the Kickstarter for Ghost pop up on Twitter. It's basically the start of a new platform designed to focus on blogs, versus the CMS-style product that WordPress has become. It's written in Node.js, with SQLite as the default backend database. Markdown is used as the input language, with a real-time preview as you create a post. It looks to leverage the best of HTML5 to make a state-of-the-art blogging platform. My initial reaction was probably the same as most developers' when they see something cool on the web: I can build that! And so I did.

"I see you have constructed a new lightsaber."

There's a bit of me that feels writing your own blog is a rite of passage as a developer.
I know most people use existing packages, because why would you want to waste time creating something that has been created hundreds of times before? For me, this was a chance to not only give it my personal touch, but to really experiment with new technologies and practice skills outside of my comfort zone. Some might say it's like a Jedi building his first lightsaber.

At work I almost exclusively use ASP.NET MVC 4. And while I really do like using it, I felt this was the perfect time to try building a website in Node.js and Express. I really liked the idea of using Markdown instead of a WYSIWYG editor or plain HTML. I also liked the idea of having the layout update in real time while writing a post. I'm using MongoDB, since it's my go-to datastore due to how easy and fast it is. So far the core is done. It's still mostly MVF (minimum viable functionality), but I'll keep tweaking it as I go. Here are some of the highlights that I'm proud of or really happy with.

Editing

To get the dual Markdown/HTML rendering, I'm using Pagedown, which is from the folks at Stack Exchange; it's the editor they use on their sites. It was really easy to implement, and there's even a third-party add-on (Pagedown.Extra) which extends the Markdown a bit more for things such as tables and code syntax highlighting. For syntax highlighting I'm using SyntaxHighlighter. For uploading images and files, I integrated Dropzone.js by overriding the image dialog in Pagedown. Dropzone is amazingly simple to implement and provides thumbnails of the images as you upload. Just eye candy, I know, but the effect is sweet. Here's a screenshot of me writing this post.

Styling

If there's anything I need more practice at, it's design. Thanks to Twitter Bootstrap, I got a running start. I like the clean and simple look, so I tried to keep plenty of whitespace and let it define the sections. I'm using LESS for the CSS. I'm not yet customizing Bootstrap, but it's on the list. Font Awesome is used for the icons.
I went pretty minimalistic on the colors, sticking to really-dark-grey and black on white. I'm still iterating over the layouts, but I think I'm pretty close.

Hosting

I run my own servers, so I wanted to continue hosting my blog locally. For now I'm using iisnode with Node.js 0.10. One of the benefits is that I can have IIS host all of the static content and only have Node host the dynamic pages. This is the standard Node configuration I hear about, except with IIS in place of nginx; the concept is the same. I have Grunt set up to do my build and deployment, so I can test locally and then push out the live site. I really like Grunt and am looking at how feasible it would be to use in the .NET world for things like project scaffolding.

Performance

I wanted the site to be fast. Really fast. I tried to do all that I could to optimize the site. Grunt combines and minifies my JavaScript and CSS. Express gzips the content. The slowest part of the site is Disqus, which is used for comments. Without Disqus, page load times are sub-70ms. Someone said on Twitter that a blog without comments is not a blog (and I agree), so it's a price I'm willing to pay. One way I make things fast is by loading all posts into memory and keeping them there. I don't have thousands of posts, so I can get away with that. Right now Node is only using ~60MB of memory, so I'm not too concerned.

Almost there

I still have a few behind-the-scenes sections to create. I want to build a dashboard for some stats. It probably won't be as amazing as what Ghost will provide, but I'm not sure I need that much. I still have Google Analytics running anyway, and it's not like I'm going to beat that. I also want to pretty up the Edit page to use auto-completion for the tags, and to have the URL built from the title automatically. Just a bit of extra polish, really. I do have an RSS feed, so if you're interested in .NET and JavaScript posts, please do subscribe. Until next time...
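The all-posts-in-memory idea from the Performance section can be sketched like this (hypothetical code, not the blog's actual implementation):

```javascript
// Hypothetical sketch: load every post once, keep them in memory,
// and answer lookups without touching the database.
function PostCache() {
  this.posts = [];
}

// In the real blog this data would come from MongoDB at startup.
PostCache.prototype.load = function (postsFromDb) {
  this.posts = postsFromDb.slice().sort(function (a, b) {
    return b.date - a.date; // newest first
  });
};

PostCache.prototype.bySlug = function (slug) {
  for (var i = 0; i < this.posts.length; i++) {
    if (this.posts[i].slug === slug) {
      return this.posts[i];
    }
  }
  return null;
};

var cache = new PostCache();
cache.load([
  { slug: 'karma-testing', title: 'Using Karma for JavaScript Testing', date: 1 },
  { slug: 'new-blog', title: 'The Creation of My New Blog', date: 2 }
]);

console.log(cache.bySlug('new-blog').title); // → The Creation of My New Blog
```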

The Creation of My New Blog

I recently picked up Trevor Burnham's CoffeeScript book. So far it's a great introduction to CoffeeScript and also Node.js, two topics I wanted to learn more about. I started running through the first examples to see them run. I downloaded the latest node.exe and found a way to add the CoffeeScript module without NPM. I wrote up a simple test just to make sure it worked:

```coffeescript
console.log "Hello World!"
```

I ran this command to run it:

```
node %coffee%
```

That worked. Node gladly printed my string, passed through CoffeeScript. Of course there isn't much that CoffeeScript is doing here, but there were no errors. My next step was to try the first full sample in the book. It's part of the larger app the book builds up to. I use Notepad++ for most of my plain-text editing. I typed in the code, saved it, and ran it.

```
Error: In, Parse error on line 14: Unexpected 'POST_IF'
```

The function at line 14 looks like this:

```coffeescript
promptForTile2 = ->
  console.log "Please enter coordinates for the second tile."
  inputCallback = (input) ->
    if strToCoordinates input
      console.log "Swapping tiles...done!"
      promptForTile1()
```

Everything looked correct. I just didn't get it. Googling for 'Unexpected POST_IF' brings up that it's a parsing error, and most posts have to do with multi-line if statements. I didn't think that was what I was running into here. Or was I? I read through the multi-line if posts, and it dawned on me that maybe the error was being more helpful than I thought. I went back through my code and re-counted the spaces just to make sure I was consistent. Turns out I wasn't, exactly. Notepad++ was helping me out by automatically starting the next line at the same indentation level as the last line. The issue I ran into was that Notepad++ was inserting a tab instead of 4 spaces when the indentation was 4 spaces or more. CoffeeScript didn't like the tab starting the line after the if statement. It wants spaces, not tabs. The fix was easy enough. Like all great apps, Notepad++ is flexible.
I just had to turn off the option to automatically align the next line. After cleaning out the tabs and changing them to spaces, we were good to go. Since I didn't really find anything on Google, I thought this might help someone else who runs into it. I'm pretty sure it's the kind of thing that only us Windows users will run into, with all of our overly helpful tools.
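If you'd rather not eyeball the whitespace, a few lines of Node can flag tab-indented lines before CoffeeScript complains (a hypothetical helper, not from the book or the compiler):

```javascript
// Hypothetical helper: return the 1-based line numbers whose leading
// whitespace contains a tab, which trips up CoffeeScript's
// significant-indentation parser when mixed with spaces.
function findTabIndentedLines(source) {
  var bad = [];
  source.split('\n').forEach(function (line, i) {
    var indent = line.match(/^[ \t]*/)[0];
    if (indent.indexOf('\t') !== -1) {
      bad.push(i + 1);
    }
  });
  return bad;
}

var sample = 'if ready\n    go()\n\tstop()\n';
console.log(findTabIndentedLines(sample)); // → [ 3 ]
```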

CoffeeScript gently reminds me that tabs are not spaces
