2017 Perseid Meteor Shower

First time out to watch the Perseid Meteor Shower, and also my first time taking photos of the night sky. After reading up during the afternoon on some general settings to use, we set out around 9:30 PM into AE Forest to a loch to set up camp for a few hours and see what we could see. My son lasted about an hour before he climbed back into the car and fell asleep, and I lasted till just after midnight before calling it a night.

Having a Canon 80D, I'd thought the built-in WiFi features were a bit overkill for my needs, but doing long exposures without a remote was going to make camera shake on the tripod tricky to avoid. (Should remember to charge my phone before going out though.) Using my iPhone to shoot live also allowed me to see the images captured, and to change settings like ISO, aperture and shutter speed.

Micro.blog Desktop

I've been using micro.blog for a few weeks now. The new way of posting and owning my own content is growing on me, and I am really enjoying posting to my own site again. One thing I found was that I wasn't really interacting much during the day, which I put down to not having a desktop app. Yes, I could use the micro.blog website, but I prefer to keep my browsers for work-related things, and I am trying to reduce my tab count.

So with that in mind, and knowing there is an API for micro.blog, I decided to put together a desktop app for micro.blog using the Electron framework. What better way to build a desktop application than to use the tools I already use on a daily basis.

Last week I had a quick play around with Electron and calling the micro.blog API, and it all seemed too simple to be true, but it turns out it didn't actually get much harder. To keep things simple, and to have a consistent UX/UI with the micro.blog ecosystem as it stands, I decided to reuse a lot of what Manton had already created in terms of the UI, and just wrap it all up in Electron, calling out to the API.

I have currently only built the app for Mac OS X; until I can test on the other platforms, I haven't made use of Electron's ability to cross-compile.

Currently you can follow your timeline, see your mentions and favorites, and reply to posts. I plan to expand on this over time, but if you would like to try out Micro.blog on Mac OS X you can download it from here: Download Micro.blog OS X

Micro.blog – My initial thoughts

First and foremost, I'd like to thank Manton for creating Micro.blog, and more importantly thank him for all the hard work he's put in, even more so this past week during the launch. It cannot be easy launching something like this without some teething problems, and he's always been around to answer people's questions, and very quickly too. It's no small feat to do this, even with the small fires that have arisen.

Having followed Manton for a few years, I've been trying to get myself into the "IndieWeb" of owning your own content, more so in the past year or so. I found there are many gaps in all the different steps of fully controlling your own content without feeling like you are missing out on the "other" services. It's not overly difficult if you are quite tech savvy and don't mind diving into some code, but for non tech savvy people the "IndieWeb" still has a little way to go. I've found myself a little confused on certain aspects and still need to read up on some of them. When Manton mentioned he was working on something to allow for IndieWeb-style short posts, which he has called Micro.blog, I dropped my name into the hat (added my email to the list), and then when he made it into a Kickstarter, I jumped straight on board and signed up.

A few months on, after the Kickstarter got funded, Manton invited the backers into the system first. Being an early backer inside the first 100, I got into the system within the first couple of days. Which was great, but to be honest maybe a little intimidating, as I wasn't sure what to expect. How would I use this, and what would it replace? You can follow me on micro.blog if you are a backer, at micro.blog/matthew.

Currently I don't do the whole POSSE (Publish (on your) Own Site, Syndicate Elsewhere); I do more PESOS: Post Elsewhere, Syndicate (on your) Own Site. Mainly due to the tools and amount of tinkering that's required. There are many people doing POSSE very well. But I like that I can use the apps of other social networks, publish via them, and then have my site pick it up. This is what I do with Instagram. I have a couple of plugins installed to fetch the posts over, and they even pull the images across and put them in my S3 bucket. (Instagram plugin by DsgnWrks, Amazon Web Services, and WP Offload S3 Lite)

I plan to initially use Micro.blog in the following ways, based on the current feature set. Over time I am sure I'll adjust as it grows. I believe the more people start to use the platform, the more it will evolve, and I'll adjust as I see fit.

  1. As an IndieWeb RSS Reader
  2. Posting from the Micro.blog iOS app – once I can get the titles of the posts in my WordPress install to save how I want

IndieWeb RSS Reader

This is the biggest thing missing in the IndieWeb at the moment: a way to truly follow other people's "micro" blogs in a nice timeline manner. I started down a path with my own RSS reader to try to emulate a timeline-based view, but never delved into the IndieWeb consumption side. Now that Micro.blog is around I may have a bit more of a play with my own reader, but I am hoping Micro.blog's platform will mean I don't have to. (RSS is only a text medium, but it's a pretty crazy world to go into and start consuming and parsing.)

The only downside I see at the moment to the Micro.blog approach is that if I reply to someone's post, I would love for that to create an entry on my site. It creates a webmention on the user's site, but I would love for it to also create an entry on my own site with a reply format. This way it keeps a history of the conversations I've had.

Posting from the Micro.blog iOS app

Another thing that keeps me going back to Twitter or Instagram to post there first is the ability to do it quickly. This is where the Micro.blog iOS app will win me over (once I fix the blank titles). Having the ability to easily and quickly post "micro" posts to my site will mean I'm more likely to do that than reach for Twitter. My content will still reach Twitter via cross-posting, but I'll truly be doing POSSE.

UI Testing with Nightwatch.js – Page Objects

I have already written a little about UI testing with Nightwatch.js. That was a little while ago, and Nightwatch.js has changed a little since then. In v0.7.0 they changed their implementation of page objects and added enhanced support. Nightwatch is a great framework for writing UI tests, and it's easy to pick up and write some basic tests. But if you are going to be writing lots of tests, which you more than likely will be to ensure your UI is working as planned, then you should really be making use of page objects within the Nightwatch framework. Page objects will save you a lot of time and prevent duplicated test code. By using page objects you are able to abstract away a lot of the HTML actions needed when you are accessing and manipulating your pages. You can also increase your test coverage, as you reuse the objects in multiple tests.

There are two parts to page objects in the Nightwatch framework: elements and sections. For me, you need to use both combined to get the most from page objects. To highlight this I've put together a couple of examples to explain how elements and sections work. Following on from my first post, UI testing with Nightwatch.js, I will be using the test from that post and making two more test files: one that uses elements, and another that uses elements and sections together.

Getting started with page objects

Before you can start using page objects you need to update your config file to tell Nightwatch where to look for your page object files. In your Nightwatch config file you need to set the page_objects_path property. Mine looks like:

"page_objects_path": "page-objects",

I am putting all my page objects inside one folder, so I use a single value in my config. Nightwatch allows you to set the value of page_objects_path as an array, with each element in the array being a folder path. As your UI tests grow, it might be worth splitting them up into multiple folders for better maintainability. Each page object should be within its own file. Nightwatch will read the contents of each folder, and these files become the page objects you can use within your tests.
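For example, the array form might look like the following sketch (the folder names here are hypothetical):

```json
{
  "page_objects_path": ["page-objects/auth", "page-objects/dashboard"]
}
```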

Using Page Objects in your tests

Once you have updated your config to pull in the page objects, you need to reference them inside your tests to be able to call their commands or use their elements. You can name your files however you like; I name mine as one word, using camel case if needed. You can use hyphens, underscores or even spaces in the file names if you so wish. How you name your files does matter though, as it changes how you reference the page object inside your test files. All your page objects are available to your tests on the function argument's .page context. I stick with the argument being named browser as per the Nightwatch documentation, but you can use whatever you wish.

The reason I keep the filenames to camelCase is that browser.page is a JavaScript object of all your page object files and folders. I have a page object file named simpleLogin.js, which I reference in my tests as:

var login = browser.page.simpleLogin();

If you were to use hyphens in your file name, you would need to use the following syntax:

var login = browser.page['simple-login']();

You can also have nested folders inside your page objects folder. This results in the browser.page object having a key for the folder name, which is an object containing the page object files within that folder. You could then call that object like:

var login = browser.page.folder['simple-login']();

Page Elements and Sections

Page elements, or elements as they are referred to within the code, are a way to keep the selectors you use in your tests DRY (Don't Repeat Yourself). Rather than having to write the same selector over and over again in your test files, you can add it as an element. An element is a property of a page object that has a selector attached to it. You can then use the element name in your test to reference the selector of the HTML element you want to test. This allows us to remove the duplicated selector references from our test files and move them into a single file. By doing this we have only one location to change: if at a later point you refactor your application and change the class or ID of an element you are using in your tests, you just need to update the selector value in your page object file, and all the tests using that element will get the new value.

From the tests in my previous post, switching the selectors I was using into page elements gives me the following elements:

elements: {
  username: {
    selector: '#username'
  },
  password: {
    selector: '#password'
  },
  submit: {
    selector: 'input[type=submit]'
  },
  error: {
    selector: '.error'
  }
}

If you look at the Nightwatch documentation for page elements, you will see there are a few different ways to set up your elements object. I go with the name of the element, then an object containing the selector and its value. As you can see from my example we have four elements: username, password, submit, and error. Each has a selector value that corresponds to an HTML element within our page. If your page object is dealing with a lot of elements, you could split your elements object into sections to allow for better maintainability. See the Nightwatch documentation for more on using element sections.

To show you how you could use page elements in a test, here’s our initial test written without using page objects:

'Login Page Initial Render': function(browser) {
  browser
    .init()
    .waitForElementVisible( 'body', 1000 )
    .verify.visible('#username')
    .verify.visible('#password')
    .verify.value( 'input[type=submit]', 'Log In' )
    .verify.elementNotPresent('.error')
    .end()
}

Here’s the same test written using a page object that contains just page elements:

'Login Page Initial Render': function(browser) {
  var login = browser.page.simpleLogin();

  login.navigate()
    .waitForElementVisible( 'body', 1000 )
    .verify.visible('@username')
    .verify.visible('@password')
    .verify.value( '@submit', 'Log In' )
    .verify.elementNotPresent('@error')

  browser.end();
}

As you can see, it's not any shorter in length, possibly longer if anything. First you need to set up the page object you wish to use in your test; you do this by setting a variable with the value being the page object. Then you start your test chain using this new variable. You can then make use of the elements from the page object by using the name of the element prefixed with an @. As you can see, I've replaced #username with @username.

Page Commands

Page commands, or commands as they are called by the Nightwatch documentation, are what make using page objects all the more worthwhile. By using commands you can make your test files really DRY, and move the bulk of your test logic into the page objects, allowing future developers or testers who need to contribute to your tests to reuse blocks of tests without having to reinvent the wheel.

Commands are functions that contain logic for reuse in your tests. If you find yourself writing the same assertions or verify statements over and over, you should look at using a command. A really common example would be having to click submit on a form: rather than each test having the same code to click and verify something to do with the form submission, you can wrap this up in a command and call the command inside your test. Commands will still output to the terminal or your report for assertions and verify statements; there is no drastic difference between using a command versus having the test code within the test file. Using commands inside your page objects makes things more maintainable, and helps others who may need to write tests that touch parts of the page you have worked on.

Let's take our first test, which is not using page objects, and show how we could write it using a page object that makes use of elements and commands. Here's the original test again:

'Login Page Initial Render': function(browser) {
  browser
    .init()
    .waitForElementVisible( 'body', 1000 )
    .verify.visible('#username')
    .verify.visible('#password')
    .verify.value( 'input[type=submit]', 'Log In' )
    .verify.elementNotPresent('.error')
    .end()
}

Here’s the same test written to use page objects with elements and commands:

'Login Page Initial Render': function(browser) {
  var login = browser.page.commandsLogin();

  login.navigate()
    .validateForm()

  browser.end();
}

That sure makes our test a lot smaller. That's because we have moved the logic into a page object command named validateForm(), which now contains the test logic for validating that the form is present. Inside the page object we have the following:

var loginCommands = {
  validateForm: function() {
    return this.waitForElementVisible('body', 1000)
      .verify.visible('@username')
      .verify.visible('@password')
      .verify.value('@submit', 'Log In')
      .verify.elementNotPresent('@error')
  }
};

module.exports = {
  commands: [loginCommands],
  url: function() {
    return this.api.launchUrl;
  },
  elements: {
    username: {
      selector: '#username'
    },
    password: {
      selector: '#password'
    },
    submit: {
      selector: 'input[type=submit]'
    },
    error: {
      selector: '.error'
    }
  }
};

As you can see, we have page elements and commands combined in our page object file for our login page. Overall it's a bit more code than our very first test, but this is a simple example. If you look at our other tests, which are all on my GitHub, you will be able to see how I have used page objects that combine both elements and commands.

Take a look around the updated nightwatch-demo repository to see the other tests and how I've switched them over to use page objects. You can even clone or download the repository; I've updated it to contain all its required dependencies, and it all runs from an npm install (tested on a Mac and Travis-CI only).

Photo taken at: Southerness lighthouse

Hobonichi Techo Planner 2016

Back towards the end of 2015 I saw a lot of people posting about the Hobonichi Techo planner. Around the same time I was looking at diaries, notebooks and planners to keep track of my year. The plan was to start writing more by keeping a diary. Writing something, whether it be a random thought, how my day went, or a doodle; just whatever I wanted to make note of that day. I wanted something that was not just your average diary, or something cheap that's going to start to wear and fall apart halfway through the year. I also wanted something that could be used with a fountain pen.

After some detailed research into the Hobonichi Techo planner I knew it was what I was after. As this was the tail end of the year, I asked Santa (my wife) for an English version of the Hobonichi Techo 2016 planner, and it turns out Santa was able to fulfill my request even quite late into the year. My wife ordered the planner direct from 1101.com, the supplier of the diary in Japan. It only took a couple of weeks to arrive, and it arrived with no issues. No issues with customs or any unexpected charges, apart from the bank's extra charge for a foreign transaction (which was a new thing for my wife).

With my new Hobonichi in hand, and it being the first day of a new year, I sat down looking at the beautiful planner, slightly nervous to put pen to paper without a concrete plan for what I should put in it, or how I should start.

I started off by making a list of goals: items I would like to achieve in the coming year. Not your normal New Year resolutions, but some targets I could aim for, some small and some quite large and possibly out of reach for the coming year. This would give me a base for the coming year and things to aim for, and also help with long term goals.

My first impression of the Hobonichi was that it was super small and slim for what you are getting. I actually never realized it was an A6 diary when I was looking at them; part of me thought it was an A5 size. With it being a day-a-page diary, the thickness is surprisingly narrow. The reason the diary is so narrow is down to the Tomoe River paper. It's the thinnest paper I've seen, and the best part is you are able to use fountain pens on it without bleed-through. It's not quite pocket size, unless you have quite thick pockets. Overall the size is very nice, and after using it for a few weeks I actually enjoyed the size of the pages.

It took a little getting used to the narrow lines of the square grid pages. I don't normally use grid-based paper, more so one where the grid is quite narrow. I was not sure if I would write over two lines or try to write within the narrow lines. I tried a couple of pages using two lines of the grid as one for my writing, but soon changed to using just one line, as using two lines felt too much and I did not like being able to see the grid through the middle of my writing. The grid lines on the pages are quite dark, darker than I would like, but by writing on a single grid line it was much better. I found that a fine nib fountain pen was best for my style of writing; the medium nib pens I have are just a little too broad and heavy on the ink flow, and made it very hard to write within a single grid row.

For the year I decided to aim to write in my planner every day, just whatever I felt like. More of a journal: a place I could write down thoughts, what happened during the day, or the things on my mind. Over the course of the year I have written all sorts in my planner. While I never managed to write every single day, I did start off very well; it was towards the end of the year where I missed a lot of days, but I soon picked it back up and wished I had made more of an effort to not miss any days. I tried not to back-fill days, as that would not be the same as writing what was on my mind and fresh that day. There were a couple of occasions, mainly when I was ill, that I would fill in the previous day. Even when I travelled to the US I managed to write each day.

I found the best time to write was before bed; it would allow me to offload the day's thoughts and issues into my planner and clear my head for going to sleep (not that I have any issues sleeping). During the year I have not looked back through the days that much. Just the other week I re-read some of the things I wrote towards the start of the year, and it was nice to read and to refresh my memory on some of the things that have happened. I don't plan on letting others read through the planner just yet. My wife saw me writing in my planner the other day and asked what I was doing; she never realized what I was using the planner for, or if I was even using it. She asked if she could read some of the stuff I had written. I was very shy and embarrassed to let her read it; I was scared of what she would think of me keeping a diary. I did let her read a couple of entries, and the feedback from her was very positive. Maybe I'll let her read some more over time, or over the coming year as a look back (Timehop style).

For 2017 I plan to do the same and keep a daily journal. I may mix it up a little and keep it open during the day to jot things down more often. When it comes to the end of the day and I start to put pen to paper to release my thoughts on the day, it's sometimes hard to remember everything, especially if it's been a very busy day or a day full of events that I should have noted down as they happened. Having the Hobonichi open all day and capturing items as they happen may give me a different insight into my days; I'll then end the day with a summary recap after I've had time to digest the day's items.

Purchase a 2017 edition direct from the Hobonichi Store.

Waterstones – The Online and the High Street

My wife loves to read, and she loves to read real paper books, even after all my efforts to switch her 100% to the e-book world. (She has got better over the past 12 months or so.) But every now and again she will buy a real book, mainly ones she wants to keep forever and add to her bookshelves (which have been reduced dramatically over the past few years, and I can see them growing again).

While out in town a few days after Christmas, we popped into the big well-known high street bookshop Waterstones, where my wife spotted a book she had noticed on their website. But on their website it was showing as half price at £8.50, while in the store there was no sign of any reduction on the book itself. We asked the nice shop assistant if the book was half price as per their website. Their reply was: no, sorry, the online shop is different to the stores; we have different deals and they don't apply to each other. Very odd, but the shop assistant went to check with another shop employee, came back, and said they would give us the book at the online price this time, and suggested next time we just do a click and collect from their website to the store to ensure we get the online price.

A couple of things here are very odd and broken:

  1. The online and high street stores seem to act as separate entities. I can make sense of that for accounting/business reasons, but I can't work out why they are selling items at different prices.
  2. I can use the click and collect feature from the online store to a high street store to get the cheaper price, but the book I actually get is from the high street store? Confusing.
  3. Next time I see a book in store that I like, I could check their website and, if it's cheaper, do a click and collect there and then, picking the book up in store within minutes at the reduced price. This then skews the online/high street business figures, as I was technically a high street customer first.

And they wonder why the high street is struggling and in decline. Online almost always wins, I find, but this seems like a broken system from Waterstones.

Working the command line

Last week A Book Apart released a couple of new brief books. Anyone who has read any of the A Book Apart books could call them all "brief" books, but these are about half the size of the regular ones. I have quite the collection of their books (not all), and they are great small reads that get across all the required information on the subject in an easy to read manner, while also allowing you to try out the techniques (if it's a technical book) straight away.

When A Book Apart released the two new books in their brief collection last week, I instantly bought and downloaded "Working the Command Line" by Remy Sharp. As someone who uses the command line on a daily basis, I was intrigued to see if there was anything I did not know, or any tips and techniques I could pick up. Knowing it was a small book, I thought if there was at least one takeaway from it then that would be perfect. Also, at just $8, it's a way to support people from the industry who spend time creating these things.

I recommend this book to anyone who does things with the command line and doesn't consider themselves an expert. I am comfortable using the command line, and sometimes have to Google my way around. Some of the commands I used to need Google for, Remy has done an excellent job of explaining what they do and how to use them. With this being a short book you might think it covers just the basics, but you will be happily surprised at some of the depths Remy goes to in his examples. The piping examples are something I learned a lot from, and now understand far more.

Quickly snooze all notifications in Mac OS

Ever been on a conference call and heard notifications going off? Or been giving a demo of something via screen share and had notifications popping up on the screen? Or had your next-meeting notifications keep popping up from your calendar application? Yep! It's pretty annoying. I find it more annoying on the receiving end, and very embarrassing for the person who is creating them or having to dismiss them during calls.

It's so simple to avoid; it's a quick keyboard-and-mouse shortcut. ⌥ + Click the Notifications icon in the top right of your Mac's menu bar and you will snooze all notifications on your Mac.

Mac OS Notifications Snooze

What password would you like to use?

As use of the internet has increased over the last few years, people are creating more and more accounts. Accounts for this credit card, accounts for that store card, your mobile phone account, your bank account, and all the rest. The accounts I just mentioned are ones a lot of people will have, and more than likely you will have an account that you are able to sign into on the internet. The reason I mentioned those types of accounts is that a majority of the time they are opened in person, in a shop or bank.

In the past six months I've witnessed someone open one of these accounts, or opened one myself, and on each occasion when it came to the part of creating the online side of the account, it went something like this:

Shop: First Name?

Customer: Matthew

Shop: Last Name?

Customer: Roach

Shop: Email address?

Customer: matthew@mydomain.com

Shop: What password would you like to use for this account?

Customer: Umm…..

Shop: Can be something simple, I’ll type it in

Each time this has occurred I've requested that I enter the password myself. On one occasion the shop assistant replied:

“I’ve never had anyone ask to enter it themselves”.

It is astounding that no one else has requested to enter their own password, and even more astounding that the shop assistant found it OK to be asking the customer for a password to enter for their account. Whether this is a temporary password you intend to change when you get home does not matter: you should never give a password to a random person!