Remote: Office Not Required

Having read their two previous books as soon as they came out, I finally got around to picking up Jason Fried and David Heinemeier Hansson’s latest book, “Remote: Office Not Required”. I’m a bit late to the party on this one.

I decided to pick up a copy for a few reasons: I was looking for a new read and this came to mind, and for the past few months I have been working much more closely with a team based in the U.S.

I figured that if nothing else I could pick up a few tips on remote working. Technically I don’t work remotely, as I am employed by a Scottish company and do work for a client based in the U.S., but the way we work is close enough to remote working that I expected to find some good advice.

The book is a collection of short essays, each one to two pages long. They don’t tell you how you should work remotely; instead they explain what has worked for the authors as a company with remote employees worldwide. Some of the items they discuss you may have already read about, or they may seem like simple ideas, but together the ideas presented in the book add up to some great information.

One thing I took from the book is that when they speak of remote working, they are not just speaking of people in other countries; they call everyone a remote worker. While some employees have desks in their office, they are still remote workers, as they don’t have to work from the office 9 till 5. With this methodology you treat everyone as a remote worker, even if they are all in the same city, or some are office based and some are remote. You have to work as if everyone is remote, and employ certain methods of communication and workflows, to avoid leaving people who are not in the office out and to ensure everyone is treated the same.

Setting up an Amazon Cloudfront CDN

Heard people talk about using a CDN (Content Delivery Network) to serve assets and resources for your website? Ever thought it was too hard or complicated to set up? Well, guess again: using Amazon S3 and CloudFront is very straightforward to get set up and running. It’s also not that expensive in the grand scheme of things. It is an extra cost on top of your hosting package, so if every penny counts it’s probably not for you, but if you want to try it out it’s going to cost you cents rather than dollars. (I’ve been serving the assets for my RSS reader from Amazon’s CDN for the past 6 months; I generate around 8,000 combined requests, and it’s costing me $0.09, yes, nine cents, a month.)

One thing to note is that costs will vary depending on how you set up your CDN distribution, and you also have to pay for the S3 storage. In total my S3 and CloudFront bills come to around $0.60 (sixty cents) a month. Regarding the CDN costs, I have mine set up for best performance using all Amazon edge locations, meaning my assets are distributed worldwide to ensure each user gets the best performance.

Depending on how you want to access your content from the CDN, there are up to seven steps to follow. I am going to walk you through setting up an Amazon S3 bucket that will contain your content, and then using that bucket as your CDN. This means anything you put inside the S3 bucket will also be accessible via the CDN.

To follow the steps you will need an Amazon AWS account.

  1. Log in to your AWS account, create a new S3 bucket and add some content to it (you can upload via the website).

  2. Make sure the bucket is set up for static website hosting. After creating the bucket, click the bucket name, then click the “Properties” button in the top right. Here you get access to the many different options you can configure for your bucket.

  3. Now go to the CloudFront control panel using the services menu, click “Create Distribution”, then click “Get Started” under the web option.

  4. Choose your bucket from the Origin Domain Name (this will be the bucket you created in step 1).

  5. If you wish to use a custom domain for your CDN, add the domain you wish to use in the “Alternate Domain Names” input.

  6. If you did step 5, you will need to update the DNS for your domain: add a CNAME for the subdomain and point it at the origin URL shown in your distribution listing.

  7. Now you have your CDN set up, you need to go back to your S3 bucket and add a bucket policy. This is in your bucket properties under the Permissions tab; click “Edit bucket policy” and add the policy shown below, replacing <enter your s3 bucket here> with your bucket name, e.g. matthewroach-images.
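If you prefer the command line to the AWS console, the bucket creation and upload above can also be scripted with the AWS CLI. This is only a sketch, assuming you have the CLI installed and configured with your credentials; the bucket name and paths are examples:

```shell
# Create a new S3 bucket (bucket names are globally unique, so pick your own)
aws s3 mb s3://my-cdn-assets

# Upload a local folder of content into the bucket
aws s3 cp ./assets s3://my-cdn-assets --recursive

# Enable static website hosting on the bucket
aws s3 website s3://my-cdn-assets/ --index-document index.html
```

The CloudFront distribution itself is quicker to set up through the console, as described in the steps above.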


Bucket Policy

    "Version": "2008-10-17",
    "Id": "Policy1407892490897",
    "Statement": [
            "Sid": "Stmt1407892483586",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<enter your s3 bucket here>/*"
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E23JA8NDC54WON"
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<enter your s3 bucket here>/*"

Now you have everything set up. If you have changed DNS you may need to wait a few hours, or up to 24, for the changes to take effect, but after that you can access your content using the CNAME or the origin domain.


UI Testing with nightwatch.js

Nightwatch.js is an easy to use Node.js based End-to-End (E2E) testing solution for browser based apps and websites. It uses the powerful Selenium WebDriver API to perform commands and assertions on DOM elements.

Testing your website as a user would interact with it may not be your first thought when talking about testing code. We can unit test our code to ensure it’s doing what it’s supposed to, but the more code we put into our UI, the more the UI and the code powering it depend on each other.

Most of the code and web apps I create are driven by a server side language with a front end interface, with JavaScript applied on top to add a nice user experience.

With UI testing, or integration testing, we can test that all the parts of the web application are working as intended. Ever upgraded a third-party library, or adjusted a little bit of code on a single page, only to later get a bug report that another page in the app is now not working? Yes, unit testing should have caught these issues, but one of the added benefits of using a UI testing tool is that you can be confident the code and the tests you have written work together at the final stage of delivery.

Installing nightwatch.js

If you have node and npm installed on your machine, it’s pretty simple to get nightwatch set up. You can install it locally to your application or as a global module (-g); I have installed mine globally. Just run the following from your command line (you may need to do this as an admin, or with sudo on a Mac):

npm install nightwatch -g

Once you have it installed you can confirm the install by checking the version:

nightwatch -v
nightwatch v0.6.8

Setting up

The code and tests are available on GitHub if you wish to take a look.

When using nightwatch.js you will need a nightwatch.json file. This is the config file, and it contains a lot of different options; for this post I’ll not cover them all, just the ones I’ve used, changed or added. I have this file located at the root of my project. You can place it anywhere under your project, but you’ll need to update the config items to point to the relevant locations.

I am using BrowserStack’s Automate service to run my tests. You don’t have to use BrowserStack to run these tests; you can use Selenium to run them locally. Using Selenium takes a bit more set up to get running, so for the purpose of this post we will use BrowserStack.

Below is the nightwatch.json file I have located at the root of my project. The src_folders config points to my tests folder, which is also located in the root of the project. You can change this to point anywhere you like.

You will also notice some BrowserStack-specific settings; these tell nightwatch to use BrowserStack as our Selenium runner. The two items you need to update are browserstack.user and browserstack.key. You will find these in your BrowserStack account.

  "src_folders": ["tests"],
  "output_folder": "reports",
  "custom_commands_path": "",
  "custom_assertions_path": "",
  "page_objects_path": "",
  "globals_path": "",

  "selenium": {
    "start_process": false,
    "server_path": "",
    "log_path": "",
    "host": "",
    "port": 80,
    "cli_args": {
      "": "",
      "": ""

  "test_settings": {
    "default": {
      "launch_url": "",
      "selenium_port" : 80,
      "selenium_host" : "",
      "silent": true,
      "screenshots": {
        "enabled": false,
        "path": ""
      "desiredCapabilities": {
        "browserName": "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts": true,
        "browserstack.user": "",
        "browserstack.key": ""


Tests are written in JavaScript. I separate my tests out into a tests folder to keep them apart from the app code. On bigger projects I’ve added extra folders within the tests folder to mimic the web app structure, so you are able to run selected areas on their own.

Each test file can have one or multiple tests. Each test is a JavaScript function. A sample test is shown below:

module.exports = {
  'Login Page Initial Render': function( _browser ) {
    _browser.url( _browser.launchUrl )
      .waitForElementVisible( 'body', 1000 )
      .verify.visible( 'input[name=username]' )   // example selectors, match your markup
      .verify.visible( 'input[name=password]' )
      .verify.value( 'input[type=submit]', 'Log In' )
      .verify.hidden( '.error' )
      .end();
  }
};

The test shown above is a very simple example that opens the launch URL and checks that the username and password fields are visible, the submit button has a value of Log In, and the error element is not visible.

One thing to note here is that I am using the .verify object rather than the .assert object. The reason I prefer verify over assert is that the test will abort on a failed assert, whereas verify records the failure and carries on running the remaining tests.

Running Tests

Now you have your first test and nightwatch all set up, you can just call nightwatch from the terminal in your project root, and you will see output like the following. You will notice that it still outputs the test details even though you are running the tests on BrowserStack. The output is also saved within BrowserStack, where it looks slightly different.

Nightwatch Output

When a test causes an error the output looks as follows

Nightwatch error output
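Beyond the bare nightwatch command, a few runner flags are handy once you have more than one test file. The file, environment and config names below are examples for illustration:

```shell
# Run every test under the folders listed in src_folders
nightwatch

# Run a single test file
nightwatch -t tests/login.js

# Run against a named environment from test_settings ("default" shown here)
nightwatch -e default

# Use a config file that is not ./nightwatch.json in the current directory
nightwatch -c config/nightwatch.json
```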


On Friday 1st May 2015 at 17:48 my wife gave birth to our little girl, Alice Mary Roach, who weighed in at a mere 8lb. A small complication after the birth led to a couple of days in hospital, but Todd and I finally brought them home on Sunday.


I have never been one for politics, and on more than one occasion I have had my fair share of moans about the way things are, so this election I am voting.

Who I vote for is personal. I have voted based on pure reading, not having followed any of the discussions or news reports. I read through all the parties’ key points and looked deeper into specific areas I wished to know more about, mainly items that may have a direct or partial impact on my life and family. Now I have a young family it’s my right to take part and make my vote count. Some folk may say there’s no point in voting as it’s not going to make a difference, but imagine if the majority of people did that; then what would be the point of an election? Yes, your vote might not help with the seat in your area, but by adding your vote you can be assured you did your part and had your say!

SVN Developer Branch Flow

Ever wanted to use a branch workflow with SVN, but had trouble getting it to work or finding information on how to manage branches? I have, and I spent the best part of two days working it out, only to realise it was not as bad as I thought. I was just about to ditch the idea when I finally worked it out by re-reading about SVN merging.

The Strategy

The idea is to have developer branches, so each developer can have their own working copy and manage it themselves. Once they have completed each ticket of work and it is ready to go back onto the mainline (trunk), they merge the batch of revisions down to trunk ready for release.

The Issue

Note: I am not branching trunk as a whole, but branching a sub folder within trunk

All seemed to be going well. I was making changes to my branch and committing as and when I needed to. I finished my first ticket and merged the code down to trunk. Another ticket was finished, so I merged that code down to trunk too. A couple of days later another developer had finished their work and merged it to trunk. Now I needed to pull their changes into my branch to keep it in sync, but this is where it all started to go wrong. Upon doing a sync merge to bring in all the changes on trunk, my branch did not know about the previous merges I had made from my branch to trunk. It tried to bring back the merges I had made from my branch to trunk, and threw errors about conflicts.

The error was “Reintegrate can only be used if revisions X through Z were previously merged from {repo} to reintegrate source, but this is not the case”

The strategy of developer branches seemed like a simple idea, but it was causing many issues. My research led me to find that in an SVN 1.8 server, merge had been updated to be smarter about moving changes between trunk and branches. We got a 1.8 server running and copied over the repository to check if this would help. Still no difference; I eventually ran back into the issue above.

The Solution

As these branches are long running, merges happen but the branches then keep going; they are not reintegrated back into trunk and terminated. So you need to keep them alive by keeping each branch in the loop about what has been merged to trunk. One might think that if you merge from a branch to trunk, the branch will know what you merged to trunk, but that’s not the case with SVN. When you merge to trunk, you are only applying the changes from the merge to your local copy of trunk. Nothing lands on trunk until you commit all those changes, and committing the merged changes creates a new revision number, which is never passed back to the branch. Normally you would be terminating the branch at this point because your feature is complete, but as we want long running branches we have to keep the branch alive.

In order to do what we need and keep the branches alive, we need to follow the flow below (there is a diagram to help you follow along).


SVN Long running branch flow

  • Rev. 10: we create a branch of trunk (branch-a); this creates revision 11
  • Another branch is created from trunk (branch-b); this creates revision 12
  • Marty is developing on branch-a and makes a change; this makes revision 13
  • Meanwhile Jen is developing on branch-b, finishes a fix and commits, making revision 14
  • Jen is happy for revision 14 to be pushed back to trunk, so she merges her revision 14 to trunk. All goes OK, so she commits the merged changes, creating revision 15 on trunk
  • As the merge created revision 15, branch-b does not know about it, and future sync merges on branch-b would try to bring revision 15 back to the branch and cause conflicts. So Jen needs to merge revision 15 to branch-b, but not like a normal merge: she only needs to do a record only (--record-only) merge. This tells branch-b not to try to merge revision 15 in the future
  • Marty then makes a fix and creates revision 17
  • Marty realises Jen made a fix he needs on his branch, so he does a sync merge onto branch-a and commits the merged code as normal
  • Marty has fixed the issue he was working on in revisions 13 & 17 and it’s time to merge into trunk. Marty merges his code to trunk and commits the applied changes; this creates revision 19
  • Now Marty needs to merge revision 19 as a record only merge to branch-a, to avoid SVN trying to merge it in sync merges later on
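The flow above, sketched with the command-line client. The repository paths are assumptions (remember I am branching a sub folder of trunk, so adjust ^/trunk/project and the branch paths to your own layout), and the revision numbers follow the example:

```shell
# Jen, in her branch-b working copy: record-only merge of r15 (her own merge
# commit on trunk) so later sync merges do not try to re-apply it
svn merge --record-only -c 15 ^/trunk/project .
svn commit -m "Record-only merge of r15 from trunk"

# Marty, in his branch-a working copy: a normal sync merge to pull in
# Jen's fix from trunk, committed as usual
svn merge ^/trunk/project .
svn commit -m "Sync merge of trunk into branch-a"

# Marty, in a trunk working copy: merge branch-a down and commit (this is r19)
svn merge ^/branches/branch-a/project .
svn commit -m "Merge branch-a (r13, r17) to trunk"

# Marty, back in branch-a: record-only merge of r19 to keep the branch alive
svn merge --record-only -c 19 ^/trunk/project .
svn commit -m "Record-only merge of r19 from trunk"
```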


--record-only is the flag if you are using the command line to do your merges. If you are using a GUI there should be an option for it; in SmartSVN it’s in the advanced tab, see below:


--record-only merge in Smart SVN


Always remember to commit the folder when doing merges, as this contains the merge info! Not doing so will cause issues in the future!

Nexus Minimal Fountain Pen

nexus pen stood up

Back last August I came across a Kickstarter project for a fountain pen. After landing on the page I knew I needed to back the project; the minimal nature of the pen appealed to me. Because I found the project late on, I missed out on the early backer deals, but even paying the full price of £26 I felt that was a reasonable price for what should be a beautiful pen.

With this being my first Kickstarter backed project I was a bit wary of the whole process; I’ve read horror stories and seen lots on Twitter about bad experiences. In total I had committed £29 to the project, £26 for the pen and £3 for a converter, with an estimated shipping date of November, just over 2 months after the project was successfully funded. I thought that was a bit ambitious for such an item, but I kept my hopes up. As you can probably predict, I never received my pen in November; it finally made it into my hands in February, 6 months after the project was funded. Even though the project was a little late (not too bad in my eyes), the communication from the company behind it was amazing. They kept sending regular updates, which kept my faith in actually getting my pen.

nexus pen lay on side

Having received my pen just over two weeks ago, I’ve made it my daily pen since then. From first opening the envelope and the small case it came in, I have been over the moon with the pen. Pulling the pen from the case you get to feel its true weight; it’s not overly heavy, but it has a good weight to it and you certainly know you are picking it up. While it has a bit of weight, this does not affect long periods of use.

The design of the pen is amazing: CNC machined aluminium that you can tell has been finished to near perfection, with a perfectly smooth surface. The grip is machined into the pen and looks a bit clunky, but after a few moments of writing it does not affect you too much. If you have a firm grip, I can imagine a slight adjustment of your hold will be needed to get the grooves sitting within your fingers nicely.

Once you get into a flow of writing with this pen it becomes effortless. The nib is longer than on some of my other pens, so I needed to adjust my hold and writing posture a fraction. This is also the first fine nib pen I have owned, and while I don’t have much to compare it with, it does not seem all that fine. I am very happy with the thickness of the line coming from this nib, and it is making me consider switching to more fine nib pens.

nexus pen exploded

The pen is available to buy direct from Namisu. If you like a nice pen, and want something that is a great, strong everyday pen, I would highly recommend it.


50 shades of grey

On Wednesday I took my wife to the cinema for her birthday to watch 50 Shades of Grey; she has read the trilogy and really wanted to see the film. I have not read the books, and had only heard and seen what has been in the media about the movie, but having now watched it, for me it was not as bad as the media made it out to be.

What I took from the movie is that, yes, it is a movie about BDSM, and while that might not be everyone’s taste in sexual activity, remember that these are two adults consenting to these activities with each other. You may have your own tastes, but just because this is a movie, don’t judge! Either of them can walk away at any point if they are not comfortable (which does happen), and within the movie there was never a point at which either of them was forced into something they could not have stopped.

While people get hung up on the erotic parts of the story, there is also a sub-story going on: the woman is slowly turning the man into a romantic.

Microsoft Sculpt Keyboard

Sculpt keyboard and mouse

Just over a year ago I purchased the Microsoft Sculpt Ergonomic Keyboard and Mouse set, and it is by far the best keyboard and mouse set I have used. When I say it’s the “best” I have used, obviously I cannot vouch for every keyboard and mouse on the planet, but after trying numerous different keyboard and mouse combinations I have finally found something I’ll struggle to move away from.

Over the course of my working life, working at a computer or laptop on a daily basis, I have tried many different sets: desktop computers with multiple monitors, then a single monitor; laptops on their own, or laptops with a monitor or two connected; a laptop with a monitor and keyboard/mouse combination; and many other variations. Over time I have tried tens of different set-ups.

Before I purchased the Microsoft Sculpt set I was back to using just a laptop, and this was causing some issues with my hands and wrists. I would leave work with my hands and wrists aching, and being me I would just carry on; as I carried on tinkering in the evenings, the aching pain only increased. After some research it became apparent that the amount of keyboard use was the cause, and I knew I needed to find a solution sooner rather than later to avoid any long term damage. In the lead up to purchasing the Sculpt set I tried a couple of different keyboards, mainly because I was not sure about splashing out so much on the Sculpt. In hindsight I would have saved myself time and money by just buying the Sculpt first.

Another reason I never bought the Sculpt straight off was that I had tried the wired version (the Microsoft Natural Ergo) and struggled to get to grips with it. Having grown up on computers, I found the split keyboard just a little too strange. Having read a load of reviews, the Sculpt was ticking all the boxes apart from the split issue, but this was my health I was talking about, so I decided to just go for it and told myself it could not be that bad to learn. One of the biggest selling points for me is that I need quite a compact keyboard so the reach for the mouse is not so far; I prefer the mouse to sit quite close to the right side of the keyboard to avoid stretching my shoulder and back, and the Sculpt was the only narrow, wireless keyboard I could find.

Having used the keyboard and mouse for just over a year I am completely satisfied with my purchase, and when (or if) it packs in on me I will purchase the exact same set again in an instant! In the set you get a keyboard, a mouse and a separate numeric keypad. I’ve never once used the numeric keypad.

My first impressions on opening it were a bit daunting, as I was not confident I could get used to using it. The mouse seemed to be huge! Compared to an Apple mouse it’s double the height plus some, but after putting it on the desk and resting my hand on it properly, the size never really bothered me; it actually made the mouse feel more usable, and my hand sat in a more natural position. As the hold is slightly angled it does not force your arm to twist inwards as much, which reduces the stress and forces on your arm.

For the keyboard, I fitted the extra support bar underneath, making the keyboard angle away from me. You can use it without this rest if you prefer; I’ve not used it without for any great length of time, as it feels more natural with the support bar. Using it helps set your fingers over the keys in a more natural way, keeping your arms and fingers straight from your shoulders down (you may need to adjust your seating position to help).

One thing I noticed from using this keyboard on a daily basis is that when I travel with just my laptop, within a couple of days of laptop use I can feel the ache and pain returning in my hands and wrists. On one of my trips I decided to take the set with me, which helped, but it’s not exactly the smallest thing to carry around in your bag, and it is prone to getting damaged. If I travelled much more I would invest in another one, but as it’s generally only a week a few times a year, it’s not too bad to cope with for the time being.

Where did my time go?

Over the past 6 months there have been days I would leave work and ask myself: what did I achieve today?

It was not always like this. As time has gone on I have taken on a more leadership based developer role, which means I also have to task other developers with work during the day, help them out, and handle more communication and meetings.

When I first started I would be doing 6 to 7 hours of development time a day; now I’m probably lucky to hit 4 or 5 hours of decent development time. This obviously comes with the territory of moving up in the world.

At the beginning of the year I decided I want to work out exactly where my time was being spent during a working day, to see if I could change how I plan my work days to be more productive. Much searching of the good old internet did not really lead me anywhere. I struggled to find something that would give me the insight I was after. Until one day someone on Twitter mentioned Rescue Time. I headed straight over to check it out, and since installing it 5 weeks ago I haven’t looked back.

Rescue Time

Rescue Time sits quietly in the background of your computer monitoring the applications you are using; for web browsers it can monitor the actual websites you visit. Once installed there is pretty much nothing to do: give it a couple of days to monitor your usual activity, then log in to the Rescue Time dashboard and check it’s tracking everything OK. You may need to categorise some of the software and websites you visit, as the application does not have categories for everything.

One tip I found: if you do web development, use host names for each client/project you work on, so you can easily see which clients or projects your time is being spent on.

As Rescue Time sits quietly in the background just monitoring your application usage, there’s not much else to do. You can log in to the dashboard at any time to see your stats; they update relatively quickly, as you will see after installing it.
You can set daily goals. I currently have two goals set:
1. More than two hours per day on Software Development
2. Less than 2 hours per day on All Distracting Time

These are the defaults, I’ve not changed them as I wanted to set a base to work from then adjust over time.
Each week you can choose to receive a report via email on the week just past. It’s a good way to get insight into the previous week; just the other week I had forgotten I had Rescue Time installed until I got my weekly email.

At the moment I am still using the Lite version, and it’s providing great insight into how I spend my time. The reports are detailed enough when I want to drill down into where exactly my time is going.

Having had the application installed for over a month, I have gained a good insight into my work days and weeks: where my time is being spent, and how it is distributed between different items over the course of a day. One of the biggest shocks from using Rescue Time was how much time is actually spent in different applications; you may think you are only spending a few minutes here and there, but add those minutes up over the course of a day and they can easily turn into hours. The biggest shock for me was the amount of time spent in communication (email and chat).

Based on these findings I am going to adjust some things to see if I can find a better structure to my working day and improve my time distribution. Obviously this is quite hard due to factors that are out of my control, for example when another developer needs my time to discuss work, or to chat about something I may have asked them to do.
These sorts of distractions are hard to account for, but I think a better structured day will help in general to ensure I am getting the best use out of my time.

Below is my dashboard showing my logged time for February up until the end of work today.

Rescue Time Dashboard