Working the command line

Last week A Book Apart released a couple of new brief books. Anyone who has read an A Book Apart book could call them all “brief”, but these are about half the size of the regular titles. I have quite the collection of their books (not all of them), and they are great small reads that get across all the required information on the subject of the book in an easy-to-read manner, while also letting you try out the techniques (if it’s a technical book) straight away.

When A Book Apart released two new books in their brief collection last week I instantly bought and downloaded “Working the command line” by Remy Sharp. As someone who uses the command line on a daily basis, I was intrigued to see if there was anything I did not know, or any tips or techniques I could pick up. Knowing it was a small book, I thought if there was at least one takeaway from it then that would be perfect. Also, for just $8 it’s a way to support people from the industry who spend time creating these things.

I recommend this book to anyone who works with the command line and doesn’t consider themselves an expert. I am comfortable using the command line, yet sometimes have to Google my way around. For some of the commands I still need Google for, Remy has done an excellent job of explaining what they do and how to use them. With this being a short book you might think it covers just the basics, but you will be happily surprised at the depth Remy goes to in his examples. The piping examples in particular are something I learned a lot from, and now understand far better.

React Placeholder loading state

Loading Messages

It seems like loading transitions and states come and go over time, and people try to get as creative as they can with them, from making cool animations out of parts of their logos to using nice imagery to give the impression of loading. You only need to search around your favorite search engine to find some amazing examples of loading UIs.

It may have been around longer than I think, but it seems the “placeholder” style loading UI is becoming a lot more popular these days, or maybe it’s just that the sites I visit most regularly are starting to use them. It’s popular enough that I have been working on a “placeholder” loader for my day job. Just in case you’re wondering what I am calling a “placeholder” loader, it’s a loading UI that simulates the look of the content that will be loaded, but using a wireframe-like design, so a user sees some shapes that are light in color.

Sample Placeholder loading UI


You may have seen something very similar used on Facebook; the image is taken from a blog post that explains how it’s achieved. After reading the post, I was quite surprised at the amount of markup needed to achieve the desired effect. Having a requirement to produce a similar-looking loading UI for a project, I took some time to see if there was a nicer approach for the project I needed to apply it to. That project was a little simpler in terms of what was being displayed to the user: the UI I was applying this to was a list of messages, each consisting of an icon, a couple of items of text and another icon.

Rather than use divs to mask the background and fill in the spots where I did not want to show the background animation, I took the elements of the messages from the loaded state and applied a loading state to them. An example of my demo can be seen here, with the code available on GitHub.

In the demo a message has a title and a created prop in its loaded state, both of which should be text; this is what you see once the messages have loaded (in the case of the demo I use a setTimeout to simulate loading). To get the loading effect I set up the React component with some initial state for the messages, but with the values of the props empty, also telling the component we are in a loading state, which applies a placeholder class to the messages div. Then, using CSS, I give the p elements in the message component a min-height of 1em to ensure they are rendered out as blocks rather than not appearing because their contents are empty.

While it’s not a one-stop solution for all placeholder loading within a web application, for simple items it’s a nice, simple way to simulate the content’s UI.
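The approach can be sketched without any framework at all; the function and prop names below are illustrative, not lifted from the demo repo:

```javascript
// A minimal sketch of the idea (illustrative names, not the demo's exact
// code): render the same message markup in both states. While loading, the
// props are empty strings and a "placeholder" class is added, so CSS can
// give the empty <p> tags a min-height and a light background color.
function renderMessage(message, loading) {
  var cls = loading ? 'message placeholder' : 'message';
  return '<div class="' + cls + '">' +
    '<p class="title">' + (message.title || '') + '</p>' +
    '<p class="created">' + (message.created || '') + '</p>' +
    '</div>';
}

// Loading state: empty props plus the placeholder class.
var placeholder = renderMessage({ title: '', created: '' }, true);
// Loaded state: real content, no placeholder class.
var loaded = renderMessage({ title: 'Hello there', created: '2 mins ago' }, false);

console.log(placeholder);
console.log(loaded);
```

The key point is that the loading and loaded states share the same markup, so there is no extra masking markup to maintain.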

Slack command with API Gateway and AWS Lambda

Stop Sprint 99, and the winner is...

At work we operate the Agile methodology and work in two-week sprints. At the end of each sprint we hold a retrospective, and something that was introduced a few sprints back was voting for the past sprint’s MVP (Most Valuable Player), or as I like to say, Most Valuable Programmer. At the end of the sprint the team lead asks everyone to send in their votes for MVP, and for a couple of days leading up to the retro this is asked over and over, and then asked again during the retro itself. So I made an assumption as to why people might not be voting:

People are not voting because it is not anonymous

With this in mind, and having wanted to make a bot for Slack, I thought it could not be too hard to create a slash command that users could use to cast their vote for MVP.

For it to be simple and not get too complicated the minimum requirements I set myself were:

  1. People vote by using a slash command and the user’s name, e.g.: /mvp @matthewroach
  2. You can not vote for yourself
  3. You can only vote once
  4. Voting is per sprint, so there needs to be a way to start and stop voting
  5. Only one active vote topic at a time
  6. Upon stopping the vote, the bot sends an in-channel message saying who the winner was

Not exactly a small list to accomplish. Over the course of a weekend I created a slash command that did all of the above.

One requirement that Slack enforces for integrations is that they must use HTTPS. With this in mind, and not wanting to set up SSL and host things myself for something that’s likely to be used very infrequently, I decided to use AWS services to handle this, most notably API Gateway and Lambda. For storing the data I went with MongoDB via mLab, mainly because I am familiar with Mongo, and mLab offer a free 500 MB sandbox database that is ideal for this.

Slack slash commands

Slash commands allow users to interact with a third-party service. The part after the / (slash) is the command name, and any text after the command is used by the service to do what it needs to do. A slash command can use either a GET or POST request; I decided to use the POST verb to pass along the data from the command.

Slash commands can either post back to the user privately, or send the result back to the channel the command was triggered in. By default the response is private. The other options you have, like better formatting of messages and attachments, are covered in their documentation.

AWS API Gateway

API Gateway acts as a “front door” for applications to access data, business logic or functionality from your back-end services.

API Gateway is not limited to the AWS infrastructure; for the slash command I hooked up a POST endpoint to a Lambda function.

Amazon allows you to deploy your API to multiple stages, so you can have a test, staging and production set-up. With each stage you get a different URL you can use to call your endpoints.

The UI for setting up APIs via the AWS console is not the greatest and takes quite a few clicks to get through the different steps. Also, when you hook up an API to a Lambda function you need to create a body mapping template that takes the incoming request and converts it to a format you wish to consume in your Lambda function. In this case I added a mapping for the content type application/x-www-form-urlencoded that looks like this:

## convert HTTP POST data to JSON for insertion directly into a Lambda function
## first we set up our variable that holds the tokenised key value pairs
#set($httpPost = $input.path('$').split("&"))
{
## next we set up our loop inside the output structure
#foreach( $kvPair in $httpPost )
 ## now we tokenise each key value pair using "="
 #set($kvTokenised = $kvPair.split("="))
 ## finally we output the JSON for this pair and add a "," if this isn't the last pair
 "$kvTokenised[0]" : "$kvTokenised[1]"#if( $foreach.hasNext ),#end
#end
}

Hopefully the comments in the code make it easy to understand what’s happening. Basically we are converting the form body Slack passes us into a JSON object of key/value pairs.

AWS Lambda

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you.

Lambda is where all the magic happens. I wrote some simple Node.js code that handles the different inputs from the slash command, does its logic, reads/stores data in MongoDB and then responds with a result for the user issuing the command.

I have pushed the code to a repository on my GitHub account if you wish to take a look at the node code.

As I mentioned earlier, I converted the incoming data from Slack into a JSON object that is available to my Node code on the event object. With this JSON object available within my function I am able to look at the keys I need and take the required actions. The main thing we are after is the text key from the object; this holds the text after the /mvp part of the slash command. I use this key to work out what action I should be taking for the caller.

There are only three commands available to the user via /mvp: start, stop and voting. Voting is worked out by looking for an @ as the first character of the text. If I don’t match any of these three, I tell the user they can not perform that action.

Some of the other keys I am using in the function are team_domain, which is used to determine the MongoDB collection I need to look in. This keeps different teams’ data away from each other, and avoids having one huge collection of data. I also use the user_id to track whether the user has already voted. The command does not track who voted for whom, it will not let you vote more than once, and you can only vote if there is an active MVP vote, which also means it’s only possible to have one MVP vote at a time.
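The routing of the text key can be sketched like this (a hypothetical function name, not lifted from the repo):

```javascript
// Hypothetical sketch of routing the text key to an action. "start" and
// "stop" manage the active vote; anything starting with "@" is treated as
// a vote; everything else is rejected.
function routeCommand(text) {
  if (text === 'start') return 'start';
  if (text === 'stop') return 'stop';
  if (text.charAt(0) === '@') return 'vote';
  return 'unknown';
}

console.log(routeCommand('start'));         // "start"
console.log(routeCommand('@matthewroach')); // "vote"
console.log(routeCommand('help'));          // "unknown"
```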

I added some sample JSON files that I was using for testing the code locally. I used lambda-local to test my function locally, which makes for a much better experience than having to deal with the AWS interface all the time for writing code and testing.

Without going into great depths of lambda, you have up to three function arguments available to you within your main function, event, context, and callback. Data comes in on the event, context contains information about the lambda function that is executing and the callback is used to return information to the caller. The callback is an optional argument. You can read more about this in the lambda documentation.
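A minimal handler showing those three arguments looks like this (the response wording and branching here are illustrative, not my function's exact code):

```javascript
// Minimal shape of a Lambda handler (in the real function this would be
// exports.handler): data comes in on event, and callback returns the
// response. Slack shows the "text" of the returned object to the caller.
function handler(event, context, callback) {
  // event holds the mapped form fields: text, user_id, team_domain, ...
  if (!event.text) {
    return callback(null, { text: 'Sorry, you can not perform that action' });
  }
  callback(null, { text: 'Thank you, your vote has been cast' });
}

// Invoking it locally, much as lambda-local would:
handler({ text: '@docbrown' }, {}, function (err, res) {
  console.log(res.text); // "Thank you, your vote has been cast"
});
```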

Screenshots of working /mvp

Starting/opening the voting for a given item; I called this vote Sprint 99

Start Voting

Casting your vote for the MVP

/mvp @docbrown

Vote has been cast

Thank you, your vote has been cast

Stopping/ Closing the voting for Sprint 99 and seeing who was the MVP!

Stop Sprint 99, and the winner is...

Documenting your CSS with Styledown

Generating a styleguide for your CSS is something that can help other developers, and go a long way in reducing confusion and the possibility of someone reproducing the same styles. Ever jumped between a few projects over time and had to dive into all the CSS to find a style you thought was there, only to find there wasn’t one and you wrote it, or you may have even ended up duplicating some work, which then gets brought up in a code review?

“Did you know we already had a style for that?”

Generating a styleguide should not be a chore, and it is something you want your entire team to buy into. It should fit easily into your flow, and not be too intimidating for other developers to start and continue with.

There are quite a lot of CSS styleguide generators around; do a quick Google search and you will be greeted with many choices. I decided to go with Styledown, as it looked to be the simplest and required very little in terms of getting up and running. Also, the comments you need to add to your CSS are very minimal, which suited what I was after. Another bonus was that Styledown is not Sass-specific; it’s just CSS comments, and you can even use markdown files if you wish. Styledown is a node package, available on npm as styledown.

Documenting your CSS with Styledown is as simple as follows:

/*
 * Buttons:
 */

/*
 * Button:
 * `button` - Button stylings for default buttons on the site
 *     @example
 *     button Standard button
 */

The first comment acts as the header, so in this case we are documenting our buttons. A file can contain multiple headers, so you don’t have to worry too much about splitting your Sass up into lots of partials. We can then add comments throughout our CSS to give a description and example of the styles. The example part of the comment can be written as either Jade or HTML. The only required thing, which can trip you up, is that the first line of the block you are documenting, in this case Button, has to end with a colon (:).

On top of adding comments to your CSS, Styledown allows you to have a config file. This file is markdown and lets you define what will be output in the head/body of the generated HTML file.

Generating the styleguide is as simple as running the following from your command line, assuming you have installed the package globally:

styledown scss/**/*.scss > index.html

If you are using gulp as your build tool there is a gulp-styledown module, which makes things nicer if you want to rebuild the styleguide on each change. I recently added the gulp-styledown module to a new project and have the styleguide generating on save of any .scss file.

I am running Styledown on the little CSS framework I use for personal projects; it’s more of a big reset and normalize in one. You can see the source on GitHub, or view the styleguide at

Setting up an Amazon Cloudfront CDN

Heard people talk about using a CDN (Content Delivery Network) to serve assets and resources for your website? Ever thought it was too hard or complicated to set up? Well, guess again: using Amazon S3 and CloudFront is very straightforward to get set up and running. It’s also not that expensive in the grand scheme of things. If you are concerned about cost it’s probably not the thing for you, as it’s an extra cost on top of your hosting package, but if you want to try it out it’s going to cost you cents rather than dollars. (I’ve been running the assets for my RSS Reader from Amazon’s CDN for the past 6 months, serving around 8,000 combined requests, and it’s costing me $0.09, yes, nine cents, a month.)

One thing to note is that costs will vary depending on how you set up your CDN distribution, and you also have to pay for the S3 storage. In total my S3 and CloudFront bills are around $0.60 (sixty cents) a month. Regarding the CDN costs, I have mine set up for best performance, using all Amazon edge locations, meaning my assets are distributed worldwide to ensure every user gets the best performance.

Depending on how you want to access your content from the CDN, there are a maximum of 6 steps to follow. I am going to walk you through setting up an Amazon S3 bucket that will contain your content and then using that bucket as your CDN. This means anything you put inside the S3 bucket will also be accessible via the CDN.

To follow the steps you will need an Amazon AWS account.

  • Log in to your AWS account, create a new S3 bucket and add some content to it (you can upload via the website)

  • Make sure the bucket is set up for static website hosting: after creating the bucket click the bucket name, then click the “Properties” button in the top right. Here you get access to many different options for configuring your bucket.

  • Now go to the CloudFront control panel using the services menu, click “Create Distribution”, then click “Get Started” under the web option
  • Choose your bucket from the Origin Domain Name dropdown (this will be the bucket you created in step 1)

  • If you wish to use a custom domain for your CDN, add the domain you wish to use in the “Alternate Domain Names” input.

  • If you did step 5 you will need to update the DNS for your domain, adding a CNAME for the subdomain pointed at the domain name shown in your distribution listing

  • Now you have your CDN set up, you need to go back to your S3 bucket and add a bucket policy. This is in your bucket properties under the Permissions tab; click “Edit bucket policy” and add the policy shown below, replacing <enter your s3 bucket here> with your bucket name, e.g. matthewroach-images


Bucket Policy

    {
        "Version": "2008-10-17",
        "Id": "Policy1407892490897",
        "Statement": [
            {
                "Sid": "Stmt1407892483586",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "*"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::<enter your s3 bucket here>/*"
            },
            {
                "Sid": "2",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E23JA8NDC54WON"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::<enter your s3 bucket here>/*"
            }
        ]
    }
Now you have everything set up. If you have changed DNS you may need to wait a few hours, or up to 24, for it to take effect, but after that you can access your content using the CNAME or the distribution domain, e.g.


SVN Developer Branch Flow

Ever wanted to use a branch workflow with SVN, but had trouble getting it to work or finding information on how to manage branches? I have, and I spent the best part of two days working it out, only to realise it was not as bad as I thought. I was just about to ditch the idea when I finally worked it out by re-reading about SVN merging.

The Strategy

The idea is to have developer branches, so each developer can have their own working copy and manage it themselves. Once they have completed each ticket of work and it is ready to go back onto the mainline (trunk), they merge the batch of revisions down to trunk ready for release.

The Issue

Note: I am not branching trunk as a whole, but branching a subfolder within trunk.

All seemed to be going well: I was making changes on my branch and committing as and when I needed to. I then finished my first ticket and merged the code down to trunk. Another ticket was finished, so I merged that code down to trunk too. A couple of days later another developer had finished their work and merged to trunk. Now I needed to pull their changes into my branch to keep it in sync, but this is where it all started to go wrong. Upon doing a sync merge to bring in all the changes on trunk, my branch did not know about the previous merges I had made from my branch to trunk. It was trying to bring back the merges I had made from my branch to trunk, and throwing errors about conflicts.

The error was “Reintegrate can only be used if revisions X through Z were previously merged from {repo} to reintegrate source, but this is not the case”

The strategy of developer branches seemed like a simple idea, but it was causing many issues. My research led me to find out that in SVN 1.8, the server's merge handling had been updated to be smarter about moving changes between trunk and branches. We got a 1.8 server running and copied over the repository to check if this would help, but still no difference: I eventually ran back into the issue above.

The Solution

As these are long-running branches, merges happen but the branches are then kept running rather than being reintegrated back to trunk. So you need to keep them alive by keeping each branch in the loop about what has been merged to trunk. One might think that if you merge from a branch to trunk, the branch will know what you merged, but that’s not the case with SVN. When you merge to trunk, you are only applying the changes from the merge to your local copy of trunk; nothing is recorded until you commit those changes to trunk. Committing the merged changes to trunk creates a new revision number, and this new revision number is never passed back to the branch. Normally you would be terminating the branch at this point because your feature is complete, but as we want long-running branches we have to keep the branch alive.

In order to do what we need and keep the branches alive, we need to follow the flow below (with a diagram to help follow along):


SVN Long running branch flow

  • Rev. 10: we create a branch of trunk (branch-a), this creates revision 11
  • Another branch is created from trunk (branch-b), this creates revision 12
  • Marty is developing on branch-a and makes a change, creating revision 13
  • Meanwhile Jen is developing on branch-b, has done a fix and commits, making revision 14
  • Jen is happy for revision 14 to be pushed back to trunk, so she merges her revision 14 to trunk; all goes OK, so she commits the merged changes, creating revision 15 on trunk
  • As the merge created revision 15, branch-b does not know about it, and in future sync merges on branch-b SVN will try to bring revision 15 back to the branch, causing conflicts. So Jen needs to merge revision 15 to branch-b, but not like a normal merge: she only needs to do a record-only (--record-only) merge. This tells branch-b not to try and merge revision 15 into branch-b in the future
  • Marty then makes a fix and creates revision 17
  • Marty realises Jen made a fix he needs on his branch, so he does a sync merge onto branch-a and commits the merged code as normal
  • Marty has fixed the issue he was working on in revisions 13 & 17 and it’s time to merge into trunk; Marty merges his code to trunk and commits the applied changes, creating revision 19
  • Now Marty needs to merge revision 19 as a record-only merge to branch-a to avoid SVN trying to merge it in sync merges later on


--record-only is the flag if you are using the command line to do your merges; if you are using a GUI there should be an option for it. In SmartSVN it’s in the advanced tab, see below:


--record-only merge in Smart SVN


Always remember to commit the folder when doing merges, as this contains the merge info! Not doing so will cause issues in the future!

Google Chrome (Canary) Emulation


I am a massive Google Chrome user, so much so that I have two versions installed on my work machine. I have the stable Google Chrome that everyone should be using, and I also use Chrome Canary, which is the bleeding edge of the web, or so they say.

By default I do all my development in Canary; this keeps my history, cookies, and everything else separate from my main browser, which is regular stable Chrome.

If, like me, you deal with responsive (device-aware) web sites/applications, you will be aware of the challenges of testing and viewing your sites in all the different browsers and devices. While you can never beat looking at your creations on real devices, I for one love speed and the ability to do it all from my desktop while in the depths of development and prototyping. This is where Chrome Canary comes in. I know I have written about this sort of thing before (Mobile device detection in Google Chrome), but it turns out the Chrome team have updated the device emulation parts of Chrome Dev tools. There is a new mobile icon next to the magnifying glass when you have Dev tools open; click on this icon and you will be greeted with the new emulation features, which contain a whole bunch of cool things like:

  • Better UI for seeing the breakpoints and size screen
  • Added more predefined Devices
  • Bar graph at top for quickly jumping between media queries
  • Ability to set your Network Speed (throttling)

The most useful of the new UI, layout and features I have found so far is the Network drop-down, which lets you see how your site performs on different network speeds. Again, while it’s not as good as testing for real on a device, it certainly helps during development to be able to watch the Network tab and the timeline while looking at how your site performs over different connection speeds.

My Sublime Text


Over the past few weeks I’ve added a few more packages to my Sublime Text workflow; it had been a good while since I last added a package. I had got myself into a nice flow and was happy, but then I saw someone mention something about Sublime Text and decided it was time to see if there were any new packages that might help with my workflow or speed up my development.

I’ve been using Sublime Text 2 for over two years now and love it to bits. I have tried other editors and can not bring myself to move away, or spend the time to learn something new. I believe that the more time I spend within Sublime learning its little shortcuts the better I’ll be, so every now and then I’ll spend a bit of time seeing if there is anything else I should learn, or any other packages I should install. I don’t believe you can become an expert with your editor very quickly, but over time you will begin to gel with it and learn more about it; the trick for me is to spend some time every now and then learning something new with it.

As you may know if you are a Sublime user, it’s a good editor straight out of the box, but you really need the package manager for it to come into its own, and for you to make it your own. It’s the first thing you should do after installing Sublime, even if it’s just for one or two packages.

I will not go into the basics of setting up Sublime and its settings, but I am going to talk about the settings I have, the packages I use, etc.

User Preferences

Listed below is a copy of my User Preferences ( Menu > Preferences > Settings – User )

{
  "auto_complete": false,
  "color_scheme": "Packages/Color Scheme - Default/Monokai.tmTheme",
  "detect_indentation": false,
  "ensure_newline_at_eof_on_save": true,
  "font_size": 8,
  "highlight_modified_tabs": true,
  "line_padding_bottom": 1,
  "line_padding_top": 1,
  "overlay_scroll_bars": "disabled",
  "preview_on_click": false,
  "tab_size": 2,
  "trim_trailing_white_space_on_save": true,
  "word_separators": "./\\()\"':,.;<>~!@#$%^&*|+=[]{}`~?",
  "word_wrap": true
}

You may notice that I like my font_size small, I hate word wrap, and hate trailing white space, and yes I use tabs, but only indent by 2.


I use the package manager and have a bunch of packages installed to help with my development.

12 packages help with my daily development. A few of them are for syntax highlighting, so I'm not sure if they count, but without them it would be difficult.

The first two, ColdFusion and Enhanced HTML and CFML along with Sass are for syntax highlighting along with auto completion.

Grunt is quite a new package to me, and one I wish I had found earlier. It means one less terminal window open for me, and I can access all my grunt commands without leaving Sublime, using just the keyboard. You can run any of your grunt commands, including the watch commands, and you can also kill running grunt tasks.

HTTP Requester is a package I sometimes forget I have installed. It’s a great little package for making HTTP requests from within Sublime; take a few minutes to look at the documentation, as it’s not just for basic HTTP requests.

SideBarEnhancements is the first package I tell anyone who installs Sublime Text 2 to go and get. In my eyes it contains everything that Sublime should do out of the box; it provides some basic right-click menu actions for the left sidebar of Sublime Text, for example New Folder/File.

SVN is exactly what you might think: Subversion control from within Sublime. While I don’t use this a great deal, as I get a little nervous committing files without using a GUI to check my changes, it can come in very handy when I need to do an update or check the status of things. I use a Windows machine to develop on, and getting SVN set up via the command line, and then through Sublime, took me a whole evening. I should have documented it but never did; I might try to find the settings and things I did and document them. So bear in mind you might be in for a little bit of configuration and banging your head against the wall getting this package to work.

VCS Gutter, again, is a fairly new package for me, and a great enhancement to the Sublime interface. The basics of the package are that it provides visual indicators in the gutter of an open file of its local state against the repository’s state. It requires SVN and a diff tool to be available on your path; I have it set to do the diff check on save of the file.

Code Conventions

Recently I put together a set of markdown files for the few different web languages I write a lot of code in, these being HTML, CSS (Sass), JavaScript/jQuery, and CFML. I have these located on my GitHub page under a repository named Conventions.

I put together the set of conventions for a few different reasons. I thought someone might be able to get something from it; when I release open source code/libraries I can point people to them; and finally, I can show people the method to my madness if they ever cross my code paths.

While the conventions documents are not 100% foolproof, I will continue to add and tweak little items as I continue to write more code and find my rhythm.
A lot of what is already documented is straight from how I write my code at the moment: if I see a pattern in the way I write certain parts of code, I document it as a convention, simple as that. I don’t go out of my way to write a convention to change the way I code; I write the conventions based on the way I am coding.

I feel conventions should not be something you force, obviously this can break down within a team environment, but that’s a whole different story. While the conventions I have documented do contain some ways of writing faster/better code, a lot of what I have documented is personal preference and more style based.

A simple example of one of my conventions, spacing around if statements:

if ( x ) {

Instead of

if(x){

Following my gut with Hill Valleu

If you have read my previous entry or perhaps follow me on Twitter, you may know that I am currently working on a web app, Hill Valleu. Yes, the last update was a few months ago, but a lot has changed since then.

When I published that post, I was using a working copy of Hill Valleu in a beta stage. Within a week of launching it I had some friends and colleagues using the app. Feedback was good and the users seemed to like it. I knew a few items were missing, and the limited set of users confirmed this by suggesting a few of the same enhancements. So I got to work on these, and within a couple of days I had a few new features launched to the small user base.

Everything seemed to be going great. A few app users, no major issues… so I planned to give it a couple of weeks before getting more users into the system, and then a few more weeks before launching it for real. But over this period I noticed that my own usage of the app had dropped. This was not a good sign, when an app I was supposed to be building for myself was not even being used by me… and it turned out the beta users had pretty much all stopped using it too.


Now that’s a good question. I am not sure why the others stopped using it. A few said they were using other services, which is fair enough. I am trying to break into a crowded market – hard going if you only do the basics but miss a lot of the features of the bigger apps.

The main question I needed to answer was why did I stop using a service I built to fill a need for myself?

After sitting down and working out the issue, I discovered the problem: I had stopped building the app for myself and started building it based on what others said they wanted. To get my heart back into the project, I made the big decision to go back to the root of why I started down this path to begin with: to fill my own need, and build something I would use every day.

I deleted all the content and beta user data and started afresh. My next plan was to get back to where I wanted to be to start with. I ripped out a load of code, reworked the design and features to drastically reduce the complexity I had built in.

Within the week I had a skeleton app back up online, and I have been using it every day since, slowly tweaking little bits here and there, adding a couple of new little enhancements, and building a list of the basic feature set I want in the app before I let anyone else into the system.

To see the current state and get a preview of the all new reworked Hill Valleu, go over to the website – and if you like what you see and would like to hear when I launch the app, drop your email in the box at the bottom of the page.

Mobile device detection in Google Chrome

This article covers how I used Google Chrome to help with development when serving different content to different devices.


One thing to note: we did have a range of different devices to do actual testing on, but I found it much quicker to iterate basic things on the desktop and then fire up the device(s) to test for real. What I am about to explain is great for quickly checking that what you are writing performs as intended, but I suggest you always check on a real device.

In Google Chrome you can override the user agent that the browser sends to the web server. If you are doing user agent detection, as we are, the server receives this override, and whatever mobile detection you are doing on the server will respond as if the request came from your mobile or tablet. You can even set the user agent to that of other browser vendors (such as IE). Bear in mind that this override does not make Chrome behave like the browser you are overriding to – it only mimics its user agent string. So if you are doing server side or client side checking of the user agent string, this is pretty perfect for quick development.

The user agent is changed for the server side as well as the client side, so any code you have written that uses the User-Agent string in the server request, or the navigator.userAgent string in JavaScript, will see the overridden value.
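The server side detection code itself isn't shown in this article, but a simplistic sketch of what such a check might look like (the function name and regex here are my own, not from the article; real detection libraries are far more thorough) is:

```javascript
// Crude user agent check. The same function works whether the string
// comes from the server's request header or from navigator.userAgent,
// which is why Chrome's override is useful on both sides.
function isMobileUA(ua) {
  return /iPhone|iPad|Android|Mobile/i.test(ua);
}

// In the browser you would call: isMobileUA(navigator.userAgent)
```

With the override active, the same check fires on the desktop exactly as it would on the device.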

Google Chrome Overrides

  1. Open the Chrome Developer Tools (I assume you know how to do this; if not, see here)
  2. Click the gear icon in the bottom right corner; this will give you the settings overlay
  3. Click Overrides in the left menu, and you will see something similar to the screenshot above
  4. Start changing the user agent from the dropdown and do your testing
  5. Remember to refresh your page each time you change the user agent

You can play around with the JS Bin I used for the screenshot above if you so wish.

Dev Tip: I had multiple tabs open with the user agent set to different devices, and I also used a little bit of JavaScript to put the device name (eg. iPhone) at the start of the title, so I could easily see which tab was which.
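The exact snippet isn't in the original, but it was along these lines (treat the helper name as my own):

```javascript
// Prefix a page title with a device label so each open tab is easy
// to identify at a glance.
function labelTitle(device, title) {
  return '[' + device + '] ' + title;
}

// Run something like this in each tab's console:
// document.title = labelTitle('iPhone', document.title);
```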

As you will see, you can do more with Overrides than I am going to explain in this article, and I suggest you go and read up on it. For this article I am only using User Agent and Device Metrics.

Multiple OpenBD installs with one jetty

This is a copy of my reply to a message on the OpenBD Mailing List; as I think it could be useful to others, I have decided to post it here.

If I have understood correctly, you want an OpenBD JAM install that can run two OpenBD sites from the one ./openbdjam command.

For example:

to be served from the same server, and the same OpenBD JAM install?

If this is right, this is what you need to do to get it working:

Note – This is from a clean install

By default the webroot for the JAM install is /opt/openbdjam/webroot/ – you will need to duplicate this folder (or upload your own webroot containing a site). I created a webrootb folder like: /opt/openbdjam/webrootb/

Now you need to navigate to the jetty contexts folder: /opt/openbdjam/jetty/contexts/
You will see a file called webroot-context.xml here; this is the file that points to the default webroot. We need to make another one of these files (use the cp command, then we only need to edit a couple of lines). I called mine webrootb.xml

Open up webroot-context.xml and uncomment the virtualHosts block, and inside the <Item></Item> put the web address you want to use to access the site located in /opt/openbdjam/webroot/
eg. <Item></Item>
Save and close the file

Now open the webrootb.xml file and edit the 7th line to point to the new webrootb folder, then also uncomment the virtualHosts block and change the line to the web address you wish to access this OpenBD site from
eg. <Item></Item>
Save and close the file
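For reference, a jetty context file of this shape looks roughly like the sketch below. This is an assumption based on the default JAM layout: the exact class name and element names may differ between jetty versions, so check against your own webroot-context.xml rather than copying this verbatim.

```xml
<Configure class="org.mortbay.jetty.webapp.WebAppContext">
  <!-- point this context at the second webroot -->
  <Set name="resourceBase">/opt/openbdjam/webrootb/</Set>
  <Set name="virtualHosts">
    <Array type="String">
      <!-- the domain you want this site served from -->
      <Item></Item>
    </Array>
  </Set>
</Configure>
```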

Note : You need to point the A record for your domain(s) to the IP of your server

Now stop OpenBDJAM, and then start it again

Try to access your two sites

jQuery Performance : building DOM elements

This is the third post regarding jQuery performance. It's not strictly jQuery performance this time, as I am not comparing two different ways of doing something within jQuery, but rather comparing jQuery against native JavaScript methods – mainly showing that just because you are using jQuery does not mean you have to use it for everything.

As jQuery is JavaScript, a very nice way of interacting with the DOM, and handles a lot of the cross browser issues for you, I see a lot of cases where jQuery is overused and becomes a performance issue in an application. Maybe for your average Joe doing some simple bits and bobs to make their site stand out, this is not an issue. But me being me, I love performance metrics and testing different ways of doing things. I am always asking: why?

The code

If you are working on a web application, or anything relatively big, it generally involves an AJAX call to fetch data from a remote source – and chances are you are going to have to build some DOM elements with the data you have just fetched. Sounds simple enough.

For the purpose of this article we are not hitting any remote services to get our data, but we are building a table of users that has 4 columns and 50 rows: a name, company, admin, and actions column. In this example we insert the same data for each row of the table.
You may be thinking this is not going to show much in the way of performance, but hold onto your hat and keep reading!

Example 1 – jQuery

The first example is using all jQuery methods to build all the DOM elements, and add them to variables then append the entire new row to the table.

    var $table = $('#users'); // assumed target table element

    for ( var i = 0; i < rows; i++ ) {

      var $newRow = $('<tr />');

      $newRow.append($('<td />').text('Jim'));

      $newRow.append($('<td />').text('Testing'));

      $newRow.append($('<td />').text('No'));

      $newRow.append($('<td />').text('Delete'));

      $table.append($newRow);

    }


Example 2 – JavaScript variable with jQuery append

The second example uses one jQuery call to append the HTML string I have built up in a variable using standard HTML markup.

    var html = '';

    for ( var i = 0; i < rows; i++ ) {

      html += '<tr><td>Jim</td>';

      html += '<td>Testing</td>';

      html += '<td>No</td>';

      html += '<td>Delete</td></tr>';

    }

    $('#users').append(html); // one jQuery call for the whole string


The Results

Test on jsPerf

You may or may not be surprised by the results. If you think about it logically, you may realise that the jQuery way is naturally going to be slower: we are calling jQuery methods to do the work for us, which adds another layer to go through compared to simply building the HTML string(s) in a variable.

The one major thing I took from this (even though I have always used the HTML way) was the size of the difference in performance between the two methods. I was seeing an average of a 75% difference in speed between the two when running the jsPerf test.
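If you want to sanity-check numbers like this outside of jsPerf, a crude harness is enough to see the gap. This sketch is mine, not from the article, and jsPerf's methodology is far more rigorous – treat it only as a rough check:

```javascript
// Run a function many times and report the elapsed milliseconds.
function time(fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  return Date.now() - start;
}

// e.g. compare time(buildWithJQuery, 1000) against time(buildWithStrings, 1000)
```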

On a non-performance note, the HTML way is actually nicer to read (in my eyes).
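On that readability point: with real fetched data, the string approach stays compact when wrapped as a function. A sketch – the field names here are my assumption, not part of the article's benchmark:

```javascript
// Build the table rows from an array of user objects rather than
// repeating hard-coded values.
function buildRows(users) {
  var html = '';
  for (var i = 0; i < users.length; i++) {
    html += '<tr><td>' + users[i].name + '</td>' +
            '<td>' + users[i].company + '</td>' +
            '<td>' + (users[i].admin ? 'Yes' : 'No') + '</td>' +
            '<td>Delete</td></tr>';
  }
  return html;
}

// Then a single jQuery call: $('#users').append(buildRows(users));
```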

Basic jQuery testing with Jasmine – Spies – Part 2

In my previous jQuery testing with Jasmine post I went over the basics of getting everything set up, and even got a few tests running. As this couple of posts is about testing a jQuery plug-in rather than standard JavaScript, we are using jasmine-jquery. The main reasons I am using jasmine-jquery are that I am testing a jQuery plug-in, so that makes sense, and that my plug-in has click events in it, so we can test those events.

The examples in this post relate to the testing of my simple jQuery tabs plug-in (fluabs), which is available on GitHub with all the tests.

Following on from the previous post, we tested that the plug-in was set up and initiated on the fixture we loaded, and that the default actions of the plug-in had happened – in this case, only one content area should be shown.

Now, we know the plug-in can and should do more than that. We now need to test the click events that the plug-in should have bound to the element to hide/show the relevant tabs.

I set up another describe block for testing events, aptly named fluabs events:

describe('fluabs events', function() { });

In here I will test the plug-ins events for clicking on the tabs.

Test One

The first test I perform checks whether any of the tabs have a click event bound to them. As the plug-in does the event binding, we can test whether the event has been bound properly.

My test looks like:

  it("click tab", function() {
    var spyEvent = spyOnEvent('#tabs li a', 'click');
    $('#tabs li a').click();
    expect( 'click' ).toHaveBeenTriggeredOn( '#tabs li a' );
    expect( spyEvent ).toHaveBeenTriggered();
  });

Ok, so let's take a look at what this test is doing. It is the basic example shown in the jasmine-jquery readme on their GitHub page, modified to work with the fluabs plug-in.

First we assign a spyOnEvent() – given the element I want to perform the event on and the type of event to perform – to the variable spyEvent. We then trigger the event using jQuery.
The next two lines are the test's checks that a click event has happened on the element we specified. So we expect a click to happen on the a inside the li under the #tabs element, and the second expect checks that our spyOnEvent has been triggered.

This first test is a bit open ended, as all it checks is whether a click event has been fired. The plug-in could be binding the click event to the correct element, but that does not mean the click event is working properly – we will test that next. Still, this is a good test to ensure we have got our event bound properly.

Test Two

The second test I perform checks whether the click events we bind in the plug-in are bound properly and do what they are supposed to do.

My test for this looks like:

  it("click tab 2 - hide all tabs except tab 2, tab 2 to have class current", function() {
    var spyEvent = spyOnEvent('#tabs li a[href=#two]', 'click');
    $('#tabs li a[href=#two]').click();
    expect( 'click' ).toHaveBeenTriggeredOn( '#tabs li a[href=#two]' );
    expect( spyEvent ).toHaveBeenTriggered();
    expect( $('[data-tab="#two"]') ).toBe( ':visible' );
    expect( $('#tabs li a[href=#two]') ).toHaveClass( 'current' );
    expect( $('[data-tab="#one"]') ).not.toBe( ':visible' );
    expect( $('[data-tab="#three"]') ).not.toBe( ':visible' );
  });

You will see some similarities to the first test, as we are again testing the click event. But this time we are not testing just that a click event happens; we are testing the click event on a specific element, and whether it has performed the actions the plug-in is set up to do.

This means the spyOnEvent() function has to be more specific about the element we target the click event on, so that the actual plug-in click event happens.

As this is a tabs plug-in, the way I have written it is that the href part of the tabs unordered list elements is used to target the div within the .tabcontent div. The tabs list looks like:

  <ul id="tabs">
    <li><a href="#one" class="current">Tab One</a></li>
    <li><a href="#two">Tab Two</a></li>
    <li><a href="#three">Tab Three</a></li>
  </ul>

and my tabs content div looks like:

  <div class="tabcontent">
    <div class="tab" data-tab="#one">
        This is some sample tab content in <strong>Tab One</strong>
    </div>
    <div class="tab" data-tab="#three">
        This is some sample tab content in <strong>Tab Three</strong>
    </div>
    <div class="tab" data-tab="#two">
        This is some sample tab content in <strong>Tab Two</strong>
    </div>
  </div>

So the href part corresponds with the data-tab attribute of a div within the .tabcontent div.

With that all sorted, back to the test. We have set up the click event to happen on the second tab like so :

$('#tabs li a[href=#two]').click();

Then the following expects check that the element has been triggered on and that the spyEvent was triggered; after that, we check that the elements on the page have changed based on what the plug-in is supposed to do.

We expect the data-tab="#two" div to be visible, as we clicked the link with an href of #two, and the plug-in adds a class of current to the a element, so we check to ensure this has happened.

Now that we have checked the relevant tab is visible and the link we triggered the event on has the right class, we need to make sure none of the other tabcontent divs are showing, so we check that they are not visible.

Basic jQuery testing with Jasmine Part 1

I have had JavaScript testing on my radar for a good few months, and in the past few months I've made it a big priority to learn. So after rewriting my blog system to give me the kick up the backside, the next thing on my list was JavaScript unit testing.

So for the past couple of months, in my spare time, I have been playing around with the Jasmine JavaScript testing framework. I am going to run through setting up Jasmine and jasmine-jquery, and also how I tested one of my simpler plug-ins. The plug-in I am going to use in this article is a simple jQuery tabs plug-in named fluabs. It's available on GitHub along with all the tests, which are not all covered in this article.

What is Jasmine?

Jasmine is a behavior-driven development framework for testing JavaScript code. It does not depend on any other JavaScript frameworks. It does not require a DOM. And it has a clean, obvious syntax so that you can easily write tests.

Jasmine on Github

Why Jasmine?

I do not have a huge list of pros and cons for why I chose Jasmine. I was researching the different JavaScript testing frameworks, and Jasmine seemed to come up a lot, along with QUnit and Mocha. I tried Jasmine, then had a play with QUnit. I never actually tried Mocha, as it just did not sit right with me for some reason.

After playing a little with Jasmine and QUnit, and hitting a couple of little issues with QUnit, I just decided to stick with Jasmine rather than fully evaluate them both and make a proper decision. So that's how I chose Jasmine – probably not the best process, but I am getting on really well with it. At first the whole testing thing was really alien to me, and it was Jasmine that actually made it click. Or the fact that I stuck at it.

On top of Jasmine, I am also using jasmine-jquery.

jasmine-jquery provides two extensions for Jasmine JavaScript Testing Framework

  1. a set of custom matchers for the jQuery framework
  2. an API for handling HTML fixtures in your specs

Getting Started

As I am using jasmine-jquery, you need to download Jasmine as well as jasmine-jquery. Also, as we are testing jQuery, you will need a copy of jQuery. You could use a CDN, but I have downloaded it and put it in with my tests so I have the exact version I used to develop the plug-in. This also allows us to re-run our tests when a new version of jQuery is released, to check whether our plug-in is compatible with it.

My folder structure, which I use for all my jQuery/JavaScript plug-ins/code, is:

  • demo – contains a HTML file that demos my plug-in, pulling the JS from the other folders
  • dist – contains a production ready minified version of the plug-in
  • src – this contains the unminified development plug-in code
  • tests – this contains Jasmine, jasmine-jquery, and my test stuff

Getting the files

  • First, download the latest standalone Jasmine package from their GitHub downloads page,
  • Extract and place it somewhere. I put everything under a folder named tests within the folder of the plug-in I am going to be testing,
  • Download jasmine-jquery from their GitHub downloads page – jasmine-jquery is a single file; this needs to be put in the lib folder that is inside the tests folder we created in step 2,
  • Put a copy of the version of jQuery you need for your plug-in in the lib folder too

Adding jasmine-jquery and jquery

Now that you have Jasmine and jasmine-jquery downloaded, you need to update the SpecRunner.html file to work with jasmine-jquery. It's not too difficult – just a case of adding a couple of script tags.

You need to add the jasmine-jquery.js file on the line after the jasmine-html.js file like so:

<script type='text/javascript' src='lib/jasmine-jquery.js'></script>

And as we are testing jQuery, you need to add jQuery to your page, which you will be familiar with. You add this on the line below jasmine-jquery.js like so:

<script type='text/javascript' src='lib/jquery.js'></script>

Now that we have the testing libraries all set up, we need to add our test specs and jQuery plug-in so we can run tests against it.

Adding plug-in and Spec

You need to add your jQuery plug-in code just like you would if you were putting it on any site, using a script tag. Rather than keep duplicate copies of the plug-in, I reference it from the src folder, as this is the development code we want to test against. So I add the plug-in to SpecRunner.html like:

<script type='text/javascript' src='../src/fluabs.js'></script>

We also need to create a Spec file that will contain our tests. I called it SpecFluabs.js, saved it in the spec folder, and referenced it in the SpecRunner.html file like:

<script type='text/javascript' src='spec/SpecFluabs.js'></script>

You will notice in the SpecRunner.html file there is the following code:

<!-- include source files here... -->
<script type='text/javascript' src='src/Player.js'></script>
<script type='text/javascript' src='src/Song.js'></script>
<!-- include spec files here... -->
<script type='text/javascript' src='spec/SpecHelper.js'></script>
<script type='text/javascript' src='spec/PlayerSpec.js'></script>

This is the default spec code and the JavaScript code it tests against; I remove these script tags and replace them with references to my own JavaScript and Spec files.

Testing the plug-in

As I had already written the plug-in, I've just added tests to prove it works. With the testing framework set up, we could then do test-driven development if I or someone else wanted to contribute to the plug-in.

Now that we have the tests folder all set up, we can start writing some tests to check the plug-in does what it's supposed to do.

The basics of Jasmine: Jasmine uses suites to describe your tests; specs, which contain one or more expectations; and expectations, which take a value that a matcher is responsible for checking to be true or false.

For complete documentation see the jasmine documentation


Right, we have everything set up, so let's write some tests. Ah, one more thing: as we are testing a jQuery plugin, we need to load the HTML required for the plug-in and bind the plug-in to it to simulate it being in the DOM. With jasmine-jquery you get HTML fixtures, which are a way to load HTML content that is needed for your tests.

For this plug-in we need a fixture that contains the sample HTML needed for the plug-in. I put my fixtures in a folder named fixtures within tests/spec/. You need to update jasmine-jquery.js to reflect where you have put your fixtures folder; this is on line 76 of jasmine-jquery.js.

So we create a file named fluabs.html inside of the fixtures folder and add the HTML needed for our plug-in.

We now need to make this fixture available to our tests, which you do using loadFixtures() in your test spec.

For testing this plug-in we want to reset the state of our test code and plug-in, so that it's as if the page has just loaded and nothing has happened. We do this using beforeEach(). So we want to load our fixture inside the beforeEach() function and initiate the plug-in. We do this like:

var fixture;
beforeEach(function () {
  loadFixtures('fluabs.html');
  fixture = $('#tabs');
  fixture.fluabs();
});


We have defined a variable named fixture which we can use later on to test against in our Specs. We then use beforeEach() to load our fixture and initiate our plug-in. As we have told the plug-in and HTML to load beforeEach(), it will load before each spec in the describe, so we want to reset everything using afterEach(). So we set our Suite up with the afterEach() function:

  afterEach(function () {
    // reset/remove the plug-in here so each spec starts clean
  });

Now that we have the plug-in and its HTML loaded before each test, and the plug-in removed after each, we can start writing some tests.

We use the describe() function to set up a suite, which will contain multiple specs and can contain a nested suite with more specs.

describe('fluabs', function() {

Now we need to add our first spec. The first test we need is to check that the plug-in fixture has been defined, and we do this like:

it('to be defined', function() {
  expect( fixture ).toBeDefined();
});

Open SpecRunner.html in your browser (use Firefox or Safari, or read the documentation on how to use Chrome), and this should pass – if not, we have an issue!

Our second spec checks that the first content area is shown and the others are not. As we are using the plug-in with default options, we know this is what should happen, so we set up our spec, expectations, and matchers to check all this:

it('tab one to be shown and others to be hidden', function() {
  expect( $('[data-tab="#one"]') ).toBe( ':visible' );
  expect( $('[data-tab="#two"]') ).not.toBe( ':visible' );
  expect( $('[data-tab="#three"]') ).not.toBe( ':visible' );
});

As we are using jasmine-jquery and jQuery, we can use jQuery selectors in our expects.

Refresh SpecRunner.html (sometimes you may need to clear your cache) and see all green.

So we have some basic tests set up for our plug-in. In another article I will go into more depth on using spies to test click events and make sure everything works.