Valid ALT text?

This tweet the other day from Addy Osmani, showing some CSS that highlights images on a page with no alt attribute, got me thinking. Addy’s tip is great, and shows how developers can provide guidance within the apps/sites they are building to the editors/admins on missing attributes that are required for accessibility. The part I was stuck on was that this only shows what I would call error states. There is also the state where the alt attribute is present but empty, meaning the red border isn’t shown. This can produce false positives, because even though a blank alt attribute is valid, it is only valid for decorative images. With frameworks like React and others it’s easy to put yourself outside of the error state without knowing it, so it appears like you are valid. A much better tip would be to also highlight the “decorative” state with a warning – an orange border or so forth. This then allows the user to quickly check the images that have an alt attribute but no value, and gives them the opportunity to confirm that’s the correct state for the given image.

For example, take a web application where a developer is using a utility function to display an image/graphic and assumes the alt text is being set because they pass a value, or where they just pass an object of data from an API or other datasource. Think of a CMS where the UI is built in React and you pass the image data from the API straight into your component – the issue here is you are hoping the admin/editor added the alt text in the photo/media library tool.
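To make that concrete, here is a hypothetical sketch (the `renderImage` helper and the data shape are made up for illustration) of how an empty alt slips through unnoticed:

```javascript
// Hypothetical helper: builds an <img> tag from API data.
// If the record has no altText, this silently emits alt="",
// which CSS like img:not([alt]) will never flag.
function renderImage(photo) {
  const alt = photo.altText || '';
  return `<img src="${photo.src}" alt="${alt}">`;
}

// Data straight from a CMS/media library - the editor never set alt text.
const fromApi = { src: '/uploads/team.jpg' };

console.log(renderImage(fromApi));
// Renders with alt="" - valid markup, but only correct if the
// image is truly decorative.
```

The markup is valid either way, which is exactly why a warning state for `alt=""` is worth having.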

Having non-decorative images with a blank/null alt text fails accessibility under WCAG 1.1.1 Non-text Content – H67: Using null alt text and no title attribute on img elements for images that AT should ignore.

/* Error - missing alt */
img:not([alt]) {
    border: 5px solid rgb(255, 0, 0);
}

/* Warning - has alt attribute but no text value */
img[alt=''] {
    border: 5px solid rgb(255, 165, 0);
}

The CSS above is used to produce the output you see in the image below. I also put together a quick demo to provide a working example:


Demo of highlight alt missing and alt blank

HTML, CSS and netlify

Netlify is everything you need to build fast, modern websites: continuous deployment, serverless functions, and so much more.

I’ve been looking for a small project/excuse to use netlify for a little while now, and then something came along this week that seemed like the perfect place to at least try it without any fallout.

For some reason I’ve always thought netlify was for all the “modern” JAMStack style builds – where you have a static site generator that needs to execute some build steps before it can be deployed. But as I found out this week, I was pretty wrong.

I was building out some prototypes for something at work and wanted to get it online somewhere so I could show a few people. Normally I would use AWS S3 or upload to my own domain on a shared web host. There are many steps involved in either of those options that I wasn’t keen on having to set up or wait for DNS and so forth.

Here comes netlify – the prototype I was building was using no libraries or frameworks, using just HTML and CSS. I already had the prototype set up as a Git repository (I highly recommend always using source control, it’s a great “undo” tool and many other benefits).

With my Git repository set up (I am using a private repository for my prototype; I have made a demo one for this article) and my netlify account waiting, you just connect the two together and netlify will make a new build each time you commit. As I am committing directly to master, the domain netlify gives you for the project will always have the latest master commit deployed to it.


  1. Have a Git repository set up with your code (my example is here:
  2. Have a netlify account – You can sign up with OAuth and use your Github account if you like
  3. Choose “New site from Git”
  4. Continuous deployment – Choose Github, it will run through some steps and ask how you want to connect your account
  5. Choose the repository you want to use for this project/site
  6. Deploy settings – make sure the branch is your main branch – in my case I am using master. Don’t enter anything in the build section
  7. Click “Deploy site”

You will then get taken to the overview screen for your site. First it will show as in progress; once it’s finished doing its thing you will have a link where you can view your deployed code. Netlify will auto generate a name for the site and a URL for you to view it at – for me it gave me the URL –

If you look at the code ( and the deployed site – you can see there is nothing specific to netlify. It’s deployed the repo as a static site; I have no build steps or any tools or frameworks in place. This allows me to quickly push my code up and have it available for others to view and interact with. Just having an index.html in the root of the repository is enough.

For local development I am using npx to run a local server using the node module http-server.

netlify site overview

For each commit you make to the branch you set as the main branch, netlify will make a build for that commit. The URL for the site will always show you the most recent deploy.

You can view all the deploys that have happened in netlify and even view what each build looks like. Each build has its own URL you can view it at – for example, when I connected my repo it only had a readme file; you can preview that build here – https://5d6a3e7e0c2e32cbcfc25ddf–

I then added some HTML and CSS files, and after pushing to Github, netlify picked up the commit and triggered another build. You can see my two builds in the deploys tab of the site.

netlify site deploys

Working the command line

Last week A Book Apart released a couple of new brief books. For anyone who has read any of the A Book Apart books, you could call them all “brief” books, but these are about half the size of the regular ones. I have quite the collection of their books (not all), and they are great small reads that get across all the required information on the subject in an easy to read manner, while also allowing you to try out the techniques (if it’s a technical book) straight away.

When A Book Apart released two new books in their brief collection last week, I instantly bought and downloaded “Working the command line” by Remy Sharp. As someone who uses the command line on a daily basis, I was intrigued to see if there was anything I did not know, or any tips and techniques I could pick up. Knowing it was a small book, I thought if there was at least one takeaway from it then that would be perfect. Also, at just $8 it’s a way to support people from the industry who spend time creating these things.

I recommend this book to anyone who does things with the command line and doesn’t consider themselves an expert. I am comfortable using the command line, and sometimes have to Google my way around; for some of the commands I still need to Google, Remy has done an excellent job of explaining what they do and how to use them. It also contains a lot of information on web development in general. With this being a short book you might think it covers just the basics, but you will be happily surprised at the depths Remy has gone to in his examples. The piping examples are something I learned a lot from, and now understand far better.

React Placeholder loading state

Loading Messages

It seems like loading transitions and states come and go over time, and people try to get as creative as they can with them – making cool animations from parts of their logos, or using nice imagery to give the impression of loading. You only need to search around your favorite search engine to find some amazing examples of loading UIs.

It may have been around longer than I think, but the “placeholder” style loading UI seems to be becoming a lot more popular these days – or maybe it’s just that the sites I visit most regularly are starting to use them. It’s popular enough that I have been working on a “placeholder” loader for my day job. Just in case you’re wondering what I am calling a “placeholder” loader: it’s a loading UI that simulates the look of the content that will be loaded, but using a wireframe-like design, so the user sees some shapes that are light in color.

Sample Placeholder loading UI


You may or may not have seen something very similar used on Facebook; the image is taken from a blog post that explains how it’s achieved. After reading the post, I was quite surprised at the amount of markup needed to achieve the desired effect. Having a requirement to produce a similar looking loading UI for a project, I took some time to see if there was a nicer approach for the project I was needing to apply it to. That project was a little simpler in terms of what was being displayed to the user – the UI I was going to be applying this to was a list of messages, consisting of an icon, a couple of items of text and another icon.

Rather than use divs to mask the background and fill in the spots where I did not want to show the background animation, I applied a loading state to the same elements the messages use in their loaded state. My demo can be seen here, with the code available on GitHub.

In the demo a message has a title and a created prop in its loaded state, both of which should be text – this is what you see once the messages have loaded (in the case of the demo I use a setTimeout to simulate loading). To get the loading effect I set up the React component with some initial state for the messages, but with the values of the props empty, also telling the component we are in a loading state, which applies a placeholder class to the messages div. Then, using CSS, I give the p elements in the message component a min-height of 1em to ensure they are rendered as blocks rather than not appearing because their contents are empty.
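Stripped of the React specifics, the idea can be sketched as plain functions (the names here are mine for illustration, not taken from the demo code):

```javascript
// Initial state: the same shape as a loaded message, but with empty
// values, plus a flag telling the component we are still loading.
function initialState(count) {
  const messages = [];
  for (let i = 0; i < count; i++) {
    messages.push({ title: '', created: '' });
  }
  return { loading: true, messages };
}

// The wrapper div gets a "placeholder" class while loading; CSS then
// gives the empty p elements a min-height of 1em so they still render
// as blocks.
function wrapperClass(state) {
  return state.loading ? 'messages placeholder' : 'messages';
}

// Simulate the data arriving (the demo uses a setTimeout for this).
function loaded(state, data) {
  return { loading: false, messages: data };
}

const state = initialState(3);
console.log(wrapperClass(state)); // "messages placeholder"
```

Because the loading state reuses the loaded markup, swapping the real data in needs no extra placeholder divs at all.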

While it’s not a one-stop solution for simulating the content’s UI in every placeholder loading scenario within a web application, for simple items it’s a nice, simple approach to apply.

Slack command with API Gateway and AWS Lambda

Stop Sprint 99, and the winner is...

At work we follow the Agile methodology and work in two week sprints. At the end of each sprint we hold a retrospective, and something that was introduced a few sprints back was voting for the past sprint’s MVP (Most Valuable Player) – or as I like to say, Most Valuable Programmer. At the end of the sprint the team lead asks everyone to send their votes for MVP, and for a couple of days up to the sprint end this is asked over and over, and also asked during the retro. So I made an assumption about why people might not be voting.

People are not voting due to it not being anonymous

With this in mind, and having wanted to make a bot for Slack for a while, I thought it could not be too hard to create a slash command that users could use to cast their vote for MVP.

For it to be simple and not get too complicated the minimum requirements I set myself were:

  1. People vote by using a slash command and the user’s name eg: /mvp @matthewroach
  2. You can not vote for yourself
  3. You can only vote once
  4. Voting is per sprint, need a way to start and stop voting
  5. Only one active vote topic at a time
  6. Upon stopping the vote the bot would send an in channel message saying who the winner was

Maybe not a small list to accomplish. Over the course of a weekend I created a slack command that did all the above.

One requirement that Slack enforces for integrations is that they must use https. With this in mind, and not wanting to set up SSL and host things myself for something that’s likely to be used very infrequently, I decided to use AWS services to handle this – most notably API Gateway and Lambda. For storing the data I went with MongoDB using mLab, mainly because I am familiar with Mongo, and mLab offer a free 500mb sandbox database that would be ideal for this.

Slack slash commands

Slash commands allow users to interact with a third party service. The part straight after the / (slash) is the command name, and any text after the command is used by the service to do what it needs to do. A slash command can use either a GET or POST request; I decided to use the POST verb to pass along the data from the command.

Slash commands can either post back to you privately, or send the result back to the channel they were triggered in – by default the response is only visible to you. The other options you have, like better formatting of messages and attachments, you can see in their documentation.

AWS API Gateway

API Gateway acts as a “front door” for applications to access data, business logic or functionality from your back-end services.

API Gateway is not limited to the AWS infrastructure; for the slack command I hooked up a POST interface to a Lambda function.

Amazon allows you to deploy your API to multiple stages, so you can have a test, staging and production set up. With each stage you get a different URL you can use to call your endpoints.

The UI for setting up APIs via the AWS console is not the greatest and takes quite a few clicks to go through the different steps. Also, when you hook up an API to a Lambda function you need to create a body mapping template that takes the incoming request and converts it to a format you wish to consume in your Lambda function. In this case I added a mapping for the content type application/x-www-form-urlencoded that looks like this:

## convert HTTP POST data to JSON for insertion directly into a Lambda function
## first we set up our variable that holds the tokenised key value pairs
#set($httpPost = $input.path('$').split("&"))
## next we set up our loop inside the output structure
{
#foreach( $kvPair in $httpPost )
 ## now we tokenise each key value pair using "="
 #set($kvTokenised = $kvPair.split("="))
 ## finally we output the JSON for this pair and add a "," if this isn't the last pair
 "$kvTokenised[0]" : "$kvTokenised[1]"#if( $foreach.hasNext ),#end
#end
}

Hopefully the comments in the code make it easy to understand what’s happening. Basically we are converting the form body Slack passes us into a JSON object of key value pairs.
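If you want to sanity check what the template produces, the same conversion can be sketched in a few lines of Node (just an illustration for local testing, not part of the deployed function):

```javascript
// Convert an x-www-form-urlencoded body (what Slack POSTs) into a
// plain object of key/value pairs - the same job as the mapping template.
function formBodyToJson(body) {
  const result = {};
  for (const pair of body.split('&')) {
    const [key, value] = pair.split('=');
    result[key] = value;
  }
  return result;
}

const body = 'command=%2Fmvp&text=%40matthewroach&team_domain=example';
console.log(formBodyToJson(body));
// { command: '%2Fmvp', text: '%40matthewroach', team_domain: 'example' }
```

Note that, like the mapping template, this leaves the values URL-encoded; run them through decodeURIComponent if you need the decoded text.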

AWS Lambda

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you.

Lambda is where all the magic happens. I wrote some simple nodejs code that handles the different inputs from the slash command, does its logic, reads/stores data in MongoDB and then responds with a result for the user issuing the slash command.

I have pushed the code to a repository on my GitHub account if you wish to take a look at the node code.

As I mentioned earlier, I converted the incoming data from Slack into a JSON object that is available to my node code on the event object. With this JSON object available within my function I am able to look at the keys I need and do the required actions. The main thing we are after is the text key from the object; this holds the text after the /mvp part of the slash command. I use this key to work out what action I should be taking for the caller.

There are only three commands available to the user using /mvp: start, stop and voting. Voting is worked out by looking for an @ as the first character of the text. If I don’t match any of these three, I tell the user they can not perform that action.
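The routing boils down to a few string checks on that text value – a simplified sketch of the approach, not the exact code from the repo:

```javascript
// Work out which action the caller wants from the text after /mvp.
function routeCommand(text) {
  if (text === 'start') return 'start';
  if (text === 'stop') return 'stop';
  if (text.charAt(0) === '@') return 'vote'; // e.g. /mvp @matthewroach
  return 'unknown'; // tell the user the action is not supported
}

console.log(routeCommand('start'));     // "start"
console.log(routeCommand('@docbrown')); // "vote"
console.log(routeCommand('help'));      // "unknown"
```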

One of the other keys I am using in the function is team_domain, which is used to determine the MongoDB collection I need to look into. This keeps different teams’ data away from each other, and avoids having one huge collection of data. I also use the user_id to track whether the user has voted already. The command does not track who voted for who, it will not let you vote more than once, and you can only vote if we find an active MVP vote – which also means it’s only possible to have one MVP vote at a time.

I added some sample JSON files that I was using for testing the code locally. I used lambda-local to test my function locally, which makes for a much better experience than having to deal with the AWS interface all the time for writing code and testing.

Without going into great depths on Lambda, you have up to three arguments available within your main function: event, context, and callback. Data comes in on the event, context contains information about the Lambda function that is executing, and the callback is used to return information to the caller. The callback is an optional argument. You can read more about this in the Lambda documentation.
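A minimal handler skeleton looks like this (illustrative only – the reply text and routing are placeholders, and in the real module the function is assigned to exports.handler):

```javascript
// Minimal Lambda handler shape: data arrives on event, context describes
// the running function, and callback returns the response to the caller.
function handler(event, context, callback) {
  const text = event.text || '';
  // ...the real routing and MongoDB work happens here...
  callback(null, { text: 'You sent: ' + text });
}

// Invoking it locally, the way a tool like lambda-local would:
handler({ text: 'start' }, {}, function (err, response) {
  console.log(response.text); // "You sent: start"
});
```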

Screenshots of working /mvp

Starting/ Opening the voting for a given item, I called this vote Sprint 99

Start Voting

Casting your vote for the MVP

/mvp @docbrown

Vote has been cast

Thank you, your vote has been cast

Stopping/ Closing the voting for Sprint 99 and seeing who was the MVP!

Stop Sprint 99, and the winner is...

Documenting your CSS with Styledown

Generating a styleguide for your CSS is something that can help other developers, and go a long way towards reducing confusion and the possibility of someone reproducing the same styles. Ever jumped around a few projects over time and had to dive into all the CSS to find a style you thought was there, only to find it wasn’t and you had to write it? Or you may have even ended up duplicating some work, which gets brought up in a code review.

“Did you know we had a style already for that”

Generating a styleguide should not be a chore, and it’s something you want your entire team to buy into. It should fit easily into your flow, and not be too intimidating for other developers to start and continue with.

There are quite a lot of CSS styleguide generators around; do a quick Google search and you will be greeted with many choices. I decided to go with styledown, as it looked to be the simplest and required very little in terms of getting up and running. Also, the comments you need to add to your CSS are very minimal, which suited what I was after. Another bonus for styledown is that it is not Sass specific – it’s just CSS comments, and you can even use markdown files if you wish. Styledown is a node package, and is available on npm as styledown.

Documenting your CSS with Styledown is as simple as follows:

/**
 * Buttons:
 */

/**
 * Button:
 * `button` - Button stylings for default buttons on the site
 *     @example
 *     button Standard button
 */

The first comment acts as the header, so in this case we are documenting our buttons. A file can contain multiple headers, so you don’t have to worry too much about splitting your Sass up into lots of partials. We can then add comments throughout our CSS to give a description and example of the styles. The example part of the comment can be written as either Jade or HTML. The only required thing, which can trip you up, is that the first line of the block you are documenting – in this case Button – has to end with a colon (:).

On top of adding comments to your CSS, styledown allows you to have a config file. This file is markdown and lets you define what will be output in the head/body of the generated HTML file.

Generating the styleguide is as simple as running the following from your command line, assuming you have installed the package globally:

styledown scss/**/*.scss > index.html

If you are using gulp as your build tool there is a gulp-styledown module, which makes things nicer if you want to build the styleguide on each change. I recently added the gulp-styledown module to a new project and have the style guide generating on save of any .scss file.

I am running styledown on the little CSS framework I use for personal projects – it’s more a big reset and normalize in one. You can see the source on Github, or view the styleguide at

Setting up an Amazon Cloudfront CDN

Heard people talk about using a CDN (Content Delivery Network) to serve assets and resources for your website? Ever thought it was too hard or complicated to set up? Well, guess again – using Amazon S3 and Cloudfront is very straightforward to get set up and running. It’s also not that expensive in the grand scheme of things. If you are concerned about cost it’s probably not the thing for you, as it’s an extra cost on top of your paid hosting package, but if you want to try it out it’s going to cost you cents rather than dollars. (I’ve been running the assets for my RSS Reader from Amazon’s CDN for the past 6 months; I am generating around 8,000 combined requests a month, and it’s costing me $0.09 – yes, nine cents – a month.)

One thing to note is that costs will vary depending on how you set up your CDN distribution, and you also have to pay for the S3 storage. In total my S3 and Cloudfront bills are around $0.60 (sixty cents) a month. Regarding the CDN costs, I have mine set up for best performance using all Amazon locations, meaning my assets are distributed worldwide to ensure each user gets the best performance.

Depending on how you want to access your content from the CDN, there are only a few steps to follow. I am going to walk you through setting up an Amazon S3 bucket that will contain your content, and then using that bucket as your CDN. This means anything you put inside the S3 bucket will also be accessible via the CDN.

To follow the steps you will need an Amazon AWS account (

  1. Login to your AWS account, create a new S3 Bucket and add some content to it (you can upload via the website)
  2. Make sure the bucket is set up for static website hosting – after creating the bucket click the bucket name, then click the “Properties” button in the top right. Here you will get access to many different options you can set to configure your bucket
  3. Now go to the Cloudfront control panel using the services menu, click “Create Distribution”, then click “Get Started” under the web option
  4. Choose your bucket from the Origin Domain Name (this will be the bucket you created in step 1)
  5. If you wish to use a custom domain for your CDN like:, add the domain you wish to use in the “Alternative Domain Names” input
  6. If you did step 5 you will need to update the DNS for your domain and add a CNAME for the subdomain, pointing it at the origin URL that’s shown in your distribution listing
  7. Now you have your CDN set up you need to go back to your S3 bucket and add a bucket policy. This is in your bucket properties under the Permissions tab – click “Edit bucket policy” and add the policy shown below, replacing <enter your s3 bucket here> with your bucket name, eg. matthewroach-images


Bucket Policy

{
    "Version": "2008-10-17",
    "Id": "Policy1407892490897",
    "Statement": [
        {
            "Sid": "Stmt1407892483586",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<enter your s3 bucket here>/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E23JA8NDC54WON"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<enter your s3 bucket here>/*"
        }
    ]
}

Now you have everything set up. If you changed DNS you may need to wait a few hours, or up to 24, for it to take effect, but after that you can access your content using the CNAME or the origin domain, eg.


SVN Developer Branch Flow

Ever wanted to use a branch workflow with SVN, but had trouble getting it to work or finding information on how to manage branches? I have, and I spent the best part of two days working it out, only to realise it was not as bad as I thought. I was just about to ditch the idea when I finally worked it out by re-reading about SVN merging.

The Strategy

The idea is to have developer branches, so each developer can have their own working copy and manage it themselves. Once they have completed each ticket of work and are ready for it to go back onto the mainline (trunk), they merge the batch of revisions down to trunk ready for release.

The Issue

Note: I am not branching trunk as a whole, but branching a sub folder within trunk

All seemed to be going well – I was making changes to my branch and committing as and when I needed to. I finished my first ticket and merged the code down to trunk. Another ticket was finished, so I merged that code down to trunk too. A couple of days later another developer had finished their work and merged it to trunk. Now I needed to pull their changes into my branch to keep it in sync, but this is where it all started to go wrong. Upon doing a sync merge to bring in all the changes on trunk, my branch did not know about the previous merges I had made from my branch to trunk. It was trying to bring back the merges I had made from my branch to trunk, and throwing errors about conflicts.

The error was “Reintegrate can only be used if revisions X through Z were previously merged from {repo} to reintegrate source, but this is not the case”

The strategy of developer branches seemed like a simple idea, but it was causing many issues. My research led me to find out that in SVN 1.8, merge had been updated on the server to be smarter about moving changes between trunk and branches. We got a 1.8 server running and copied over the repository to check if this would help – still no difference. I eventually ran back into the issue above.

The Solution

As these are long running branches – merges happen but the branches are kept running rather than being reintegrated back to trunk – you need to keep them alive by keeping each branch in the loop about what has been merged to trunk. One might think that if you merge from a branch to trunk, the branch will know what you merged. But that’s not the case with SVN. When you merge to trunk, you are only applying the changes from the merge to your local copy of trunk; nothing is merged until you commit all those changes to trunk. Committing the merged changes to trunk creates a new revision number, and this new revision number is never passed back to the branch. Normally you would be terminating the branch at this point as your feature is complete, but as we want long running branches we have to keep the branch alive.

In order to do what we need and keep the branches alive, we need to follow the following flow (diagram below to help follow along):


SVN Long running branch flow

  • Rev. 10: We create a branch of trunk (branch-a), this creates revision 11
  • Another branch is created from trunk (branch-b), this creates revision 12
  • Marty is developing on branch-a and makes a change, this makes revision 13
  • Meanwhile Jen is developing on branch-b and has done a fix and commits, making revision 14
  • Jen is happy for revision 14 to be pushed back to trunk; she merges her revision 14 to trunk, all goes OK, so she commits the merged changes, creating revision 15 on trunk
  • As the merge created revision 15, branch-b does not know this, and in future sync merges on branch-b it will try to bring revision 15 back to the branch and cause conflicts. So Jen needs to merge revision 15 to branch-b – but not like a normal merge, she only needs to do a record only (--record-only) merge. This tells branch-b not to try and merge revision 15 into branch-b in the future
  • Marty then makes a fix and creates revision 17
  • Marty realises Jen made a fix he needs on his branch, so Marty does a sync merge onto branch-a and commits the merged code as normal
  • Marty has fixed the issue he was working on in revisions 13 & 17 and it’s time to merge into trunk; Marty merges his code to trunk and commits the applied changes, this creates revision 19
  • Now Marty needs to merge revision 19 as a record only merge to branch-a to avoid SVN trying to merge it in sync merges later on


--record-only is the command line flag if you are using the command line to do your merges. If you are using a GUI there should be an option for it; in SmartSVN it’s in the advanced tab, see below:


--record-only merge in Smart SVN


Always remember to commit the folder when doing merges, as this contains the merge info! Not doing so will cause issues in the future!

Google Chrome (Canary) Emulation


I am a massive Google Chrome user, so much so that I have two versions installed on my work machine. I have the stable Google Chrome that everyone should be using, and I also use Chrome Canary, which is the bleeding edge of the web, or so they say.

By default I do all my development in Canary; this is to keep my history, cookies, and everything else separate from my main browser, which is the stable Chrome.

If, like me, you deal with responsive (device aware) web sites/applications, you will be aware of the challenges of testing and viewing your sites in all the different browsers and devices. While you can never beat looking at your creations on real devices, I for one love speed and the ability to do it all from my desktop while in the depths of development and prototyping. This is where Chrome Canary comes in. I know I have written about this sort of thing before (Mobile device detection in Google Chrome), but it turns out the Chrome team have updated the device emulation parts of Chrome Dev tools. There is a new mobile icon next to the magnifying glass when you have Dev tools open; click on this icon and you will be greeted with the new emulation features, which contain a whole bunch of cool things like:

  • Better UI for seeing the breakpoints and size screen
  • Added more predefined Devices
  • Bar graph at top for quickly jumping between media queries
  • Ability to set your Network Speed (throttling)

The best part of the new UI, layout and features that I have found useful so far is the Network drop down, letting you see how your site performs at different network speeds. Again, while it’s not as good as testing for real on a device, it certainly helps during development to be able to watch the Network tab and the timeline while looking at how your site performs over different connection speeds.

My Sublime Text


Over the past few weeks I’ve added a few more packages to my Sublime Text workflow; it had been a good while since I last added a package. I had got myself into a nice flow and was happy, but then I saw someone mention something about Sublime Text and I decided it was time to see if there were any new packages that might help with my workflow or speed up my development.

I’ve been using Sublime Text 2 for over two years now, and love it to bits. I have tried other editors and can not bring myself to move away, or spend the time to learn something new. I believe the more time I spend within Sublime learning its little shortcuts the better I’ll be, so every now and then I’ll spend a bit of time seeing if there is anything else I should learn, or any other packages I should install. I don’t believe you can become an expert with your editor very quickly, but over time you will begin to gel with it and learn more about it – the trick for me is to every now and then spend some time learning something new with it.

As you may know if you are a Sublime user, it’s a good editor straight out of the box, but you really need the package manager for it to come into its own, and for you to make it your own. It’s the first thing you should install after installing Sublime, even if it’s just for one or two packages.

I will not go into the basics of setting up Sublime and its settings, but I am going to talk about the settings I have, the packages I use, etc.

User Preferences

Listed below is a copy of my User Preferences ( Menu > Preferences > Settings – User )

{
  "auto_complete": false,
  "color_scheme": "Packages/Color Scheme - Default/Monokai.tmTheme",
  "detect_indentation": false,
  "ensure_newline_at_eof_on_save": true,
  "font_size": 8,
  "highlight_modified_tabs": true,
  "line_padding_bottom": 1,
  "line_padding_top": 1,
  "overlay_scroll_bars": "disabled",
  "preview_on_click": false,
  "tab_size": 2,
  "trim_trailing_white_space_on_save": true,
  "word_separators": "./\\()\"':,.;<>~!@#$%^&*|+=[]{}`~?",
  "word_wrap": true
}

You may notice that I like my font_size small, I keep word wrap on, and I hate trailing white space; and yes, I use tabs, but only indent by 2.


Packages

I use the package manager and have a bunch of packages installed to help with my development.

I have 12 packages that help with my daily development. A few of them are for syntax highlighting, so I'm not sure if they count, but without them things would be difficult.

The first two, ColdFusion and Enhanced HTML and CFML along with Sass are for syntax highlighting along with auto completion.

Grunt is quite a new package to me, and one I wish I had found earlier. It means one less terminal window open, and I can access all my Grunt commands without leaving Sublime, using just the keyboard. You can run any of your Grunt tasks, including the watch tasks, and you can also kill running tasks.

HTTP Requester is a package I sometimes forget I have installed. It's a great little package for making HTTP requests from within Sublime; take a few minutes to look at the documentation, as it's not just for basic requests.

SideBarEnhancements is the first package I tell anyone who installs Sublime Text 2 to go and get. In my eyes it contains everything that Sublime should do out of the box: it provides some basic right-click menu actions for the left sidebar of Sublime Text, for example New Folder/File.

SVN is exactly what you might think: Subversion control from within Sublime. While I don't use it a great deal, as I get a little nervous committing file(s) without a GUI to check my changes, it can come in very handy when I need to do an update or check the status of things. I develop on a Windows machine, and getting SVN set up via the command line, and then through Sublime, took me a whole evening. I should have documented it but never did, though I might try to find the settings and steps I used and write them up. So bear in mind you might be in for a bit of configuration, and banging your head against the wall, getting this package to work.

VCS Gutter, again, is a fairly new package for me, and a great enhancement to the Sublime interface. The basics: it provides visual indicators in the gutter of an open file showing its local state against the repository's state. It requires SVN and a diff tool to be available on your path; I have it set to run the diff check on save of the file.

Code Conventions

Recently I put together a set of markdown files for the few web languages I write a lot of code in: HTML, CSS (Sass), JavaScript/jQuery, and CFML. I have these on my GitHub page under a repository named Conventions.

I put the conventions together for a few different reasons: someone might be able to get something from them; when I release open source code/libraries I can point people to them; and they let me show people the method to my madness if they ever cross my code paths.

While the conventions documents are not 100% foolproof, I will continue to add and tweak little items as I write more code and find my rhythm.
A lot of what is already documented is straight from how I write my code at the moment: if I see a pattern in the way I write certain parts of code, I document it as a convention, simple as that. I don't go out of my way to write a convention to change the way I code; I write the conventions based on the way I am coding.

I feel conventions should not be something you force. Obviously this can break down within a team environment, but that's a whole different story. While the conventions I have documented do contain some ways of writing faster/better code, a lot of it is personal preference and more style based.

A simple example of one of my conventions, spacing around if statements:

if ( x ) {

Instead of:

if(x){


Following my gut with Hill Valleu

If you have read my previous entry or perhaps follow me on Twitter, you may know that I am currently working on a web app, Hill Valleu. Yes, the last update was a few months ago, but a lot has changed since then.

When I published that post, I was using a working copy of Hill Valleu in a beta stage. Within a week of launching it I had some friends and colleagues using the app. Feedback was good and the users seemed to like it. I knew a few items were missing and the limited users who were in all confirmed this by suggesting a few of the same enhancements. So, I got to work on these and within a couple of days I had a few new features launched to the small user base.

Everything seemed to be going great: a few app users, no major issues. So I planned to give it a couple of weeks before getting more users into the system, and then a few more weeks before launching it for real. But over this period I noticed that my own usage of the app had dropped. That was not a good sign, when an app I was supposed to be building for myself was not even being used by me. And it turned out the beta users had pretty much all stopped using it too.


Why did everyone stop using it?

Now that's a good question. I am not sure why the others stopped. A few said they were using other services, which is fair enough. I am trying to break into a crowded market, which is hard going when you cover the basics but miss a lot of the features of the bigger apps.

The main question I needed to answer was why did I stop using a service I built to fill a need for myself?

After sitting down and working through the issue, I discovered the problem: I had stopped building the app for myself and started building it based on what others said they wanted. To get my heart back into the project, I made a big decision to go back to the root of why I started down this path to begin with: to fill my own need, and build something I would use every day.

I deleted all the content and beta user data and started afresh. My next plan was to get back to where I wanted to be to start with. I ripped out a load of code, reworked the design and features to drastically reduce the complexity I had built in.

Within the week I had a skeleton app up online, and I have been using it every day since, slowly tweaking little bits here and there, adding a couple of new little enhancements, and building a list of the basic feature set I want in the app before I let anyone else into the system.

To see the current state and get a preview of the all-new reworked Hill Valleu, go over to the website, and if you like what you see and would like to hear when I launch the app, drop your email in the box at the bottom of the page.

Mobile device detection in Google Chrome

This article talks about how I used Google Chrome to help with the development of serving different content to different devices.


One thing to note: we did have a range of different devices to test on, but I found it much quicker to iterate on the basics using the desktop and then fire up the device(s) to test for real. What I am about to explain is great for quickly checking that what you are writing performs as intended, but I suggest you always verify on a real device.

In Google Chrome you can override the user agent that the browser sends to the web server. If you are doing user agent detection like we are, the server will take this override, and whatever you are doing server-side for mobile detection will respond as if you were asking from your mobile or tablet. You can even set the user agent to other browser vendors (such as IE). Bear in mind that this override does not make Chrome behave like the browser you are overriding to; it just mimics its user agent string. So if you are doing server-side or client-side checking of the user agent string, this is pretty perfect for quick development.

The user agent is changed for the server side as well as the client side, so any code that reads the User-Agent string from the server request, or the navigator.userAgent string in JavaScript, will see the overridden value.
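As a sketch of the kind of client-side check this override affects, here is a minimal example; the function name and the regex are illustrative assumptions, not code from the project:

```javascript
// Hypothetical client-side mobile check; the regex is illustrative,
// not an exhaustive device list.
function isMobileUA(ua) {
  return /iPhone|iPad|Android|Mobile/i.test(ua);
}
```

In the browser you would call it with the (possibly overridden) string, e.g. `isMobileUA(navigator.userAgent)`, and the result flips as soon as you change the override and refresh.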

Google Chrome Overrides

  1. Open the Chrome Developer tools (I assume you know how to do this, if not see here)
  2. Click the gear icon in the bottom right corner; this will give you the settings overlay
  3. Click Overrides in the left menu, and you will see something similar to the screenshot above
  4. Start changing the user agent from the dropdown and do your testing
  5. Remember to refresh your page each time you change the user agent

You can play around with the JS Bin I used for the screenshot above if you so wish:

Dev tip: I had multiple tabs open with the user agent set to different devices, and I also used a little bit of JavaScript to put the device name (e.g. iPhone) at the start of the title, so I could easily see which tab was which.
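The tip above only needs a one-liner per tab; this is a sketch, and the labelTitle name and the 'iPhone' label are illustrative, not code from the post:

```javascript
// Build a labelled tab title so each tab is easy to identify at a glance.
function labelTitle(device, title) {
  return device + ' - ' + title;
}

// In each tab's console you would then run, for example:
// document.title = labelTitle('iPhone', document.title);
```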

As you will see, you can do more than I am going to explain in this article, and I suggest you go and read up on it. But for this article I am only using the User Agent and Device Metrics overrides.

Multiple OpenBD installs with one jetty

This is a copy of my reply to a message on the OpenBD Mailing List; as I think it could be useful to others, I have decided to post it here.

If I have got this right, you want an OpenBD JAM install and to be able to run two OpenBD sites from the one ./openbdjam command.

For example:

to be served from the same server, and the same OpenBD JAM install?

If this is right, this is what you need to do to get it working:

Note – This is from a clean install

By default the webroot for the JAM install is /opt/openbdjam/webroot/ – you will need to duplicate this folder (or upload your own webroot containing a site). I created a webrootb folder: /opt/openbdjam/webrootb/

Now you need to navigate to the jetty contexts folder: /opt/openbdjam/jetty/contexts/
You will see a file called webroot-context.xml here; this is the file that points to the default webroot. We need to make another one of these files (use the cp command, then we only need to edit a couple of lines). I called mine webrootb.xml

Open up webroot-context.xml and uncomment the virtualHosts block, and inside the <Item></Item> put the web address you want to use to access the site located in /opt/openbdjam/webroot/
eg. <Item></Item>
Save and close the file

Now open the webrootb.xml file and edit the line that points to the webroot (line 7 in my file) to point to the new webrootb folder, then also uncomment the virtualHosts block and change the <Item> line to the web address you wish to access this OpenBD site from
eg. <Item></Item>
Save and close the file
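For reference, an uncommented virtualHosts block ends up looking something like the sketch below; the domain is a placeholder, and the exact element layout may differ slightly between Jetty versions, so check it against the commented-out block already in your context file:

```xml
<!-- Sketch of the virtualHosts block inside a Jetty context file;
     "siteb.example.com" is a placeholder for your real domain. -->
<Set name="virtualHosts">
  <Array type="String">
    <Item>siteb.example.com</Item>
  </Array>
</Set>
```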

Note : You need to point the A record for your domain(s) to the IP of your server

Now stop OpenBDJAM, and then start it again

Try and access your two sites

jQuery Performance : building DOM elements

This is the third post regarding jQuery performance. It's not technically about jQuery performance, as I am not comparing two different ways of doing something with just jQuery, but rather comparing jQuery against native JavaScript methods, mainly showing that just because you are using jQuery does not mean you have to use it for everything.

jQuery is JavaScript, a very nice way of interacting with the DOM, and it handles a lot of cross-browser issues for you. But I see a lot of cases where jQuery is overused and becomes a performance issue in an application. For your average Joe doing some simple bits and bobs to make their site stand out, this may not be an issue. But me being me, I love performance metrics and testing different ways of doing things. I am always asking: why?

The code

If you are working on web applications, or something relatively big that involves an AJAX call to fetch data from a remote source, chances are you are going to have to build some DOM elements with the data you have just fetched. Sounds simple enough.

For the purpose of this article we are not hitting any remote services to get our data, but we are building a table of users that has 4 columns and 50 rows: a name, company, admin, and actions column. In this example we are inserting the same data for each row of the table.
You may be thinking this is not going to show much in the way of performance, but hold onto your hat and keep reading!

Example 1 – jQuery

The first example uses all jQuery methods to build the DOM elements, appending each new cell to the row and then the entire new row to the table.

    for ( var i = 0; i < rows; i++ ) {

      var $newRow = $('<tr />');

      $newRow.append($('<td />').text('Jim'));
      $newRow.append($('<td />').text('Testing'));
      $newRow.append($('<td />').text('No'));
      $newRow.append($('<td />').text('Delete'));

      // append the completed row to the table (assumes $table = $('table'))
      $table.append($newRow);
    }


Example 2 – JavaScript variable with jQuery append

The second example uses one jQuery call to append the HTML string I have built up in a variable using standard HTML markup.

    var html = '';

    for ( var i = 0; i < rows; i++ ) {

      var row = '<tr><td>Jim</td>';
      row += '<td>Testing</td>';
      row += '<td>No</td>';
      row += '<td>Delete</td></tr>';

      html += row;
    }

    // one jQuery call to append the whole string (assumes $table = $('table'))
    $table.append(html);



The Results

You can run the test yourself on jsPerf:

You may or may not be surprised by the results. If you think about it logically, you may realise the jQuery way is naturally going to be slower: we are calling jQuery methods to do the work for us, so there is another layer to go through compared with simply building the HTML string(s) in a variable.

The one major thing I got from this (even though I have always used the HTML-string way) was the size of the performance difference between the two methods: I was seeing an average of a 75% difference in speed when running the jsPerf test.

On a non-performance note, the HTML way is actually nicer to read (in my eyes).