Slack command with API Gateway and AWS Lambda

At work we follow the Agile methodology and work in two-week sprints. At the end of each sprint we hold a retrospective, and something introduced a few sprints back was a vote for the past sprint's MVP (Most Valuable Player), or as I like to say, Most Valuable Programmer. At the end of the sprint the team lead asks everyone to send in their votes for MVP; for the couple of days leading up to the retro this gets asked over and over, and then it's asked again during the retro itself. So I made an assumption about why people might not be voting:

People are not voting because the vote is not anonymous

With this in mind, and having wanted to make a bot for Slack for a while, I thought it could not be too hard to create a slash command that users could use to cast their vote for MVP.

To keep things simple, the minimum requirements I set myself were:

  1. People vote by using a slash command plus the user's name, e.g. /mvp @matthewroach
  2. You cannot vote for yourself
  3. You can only vote once
  4. Voting is per sprint, so there needs to be a way to start and stop voting
  5. Only one vote topic can be active at a time
  6. Upon stopping the vote, the bot sends an in-channel message saying who the winner was

Maybe not such a small list to accomplish. Over the course of a weekend I created a Slack command that did all of the above.

One requirement that Slack enforces for integrations is that they must use HTTPS. With this in mind, and not wanting to set up SSL and host things myself for something likely to be used very infrequently, I decided to use AWS services, most notably API Gateway and Lambda. For storing the data I went with MongoDB via mLab, mainly because I am familiar with Mongo; mLab offers a free 500MB sandbox database that is ideal for this.

Slack slash commands

Slash commands allow users to interact with a third-party service. The word directly after the / (slash) is the command name, and any text after the command is passed to the service to do what it needs to do. A slash command can use either a GET or POST request; I decided to use the POST verb to pass along the data from the command.
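For illustration, the body of that POST is form-encoded; a typical payload looks roughly like this, with made-up values and shown one field per line for readability. Note that Slack URL-encodes the text, so the @ arrives as %40:

token=gIkuvaNzQIHg97ATvDxqgjtO
team_domain=example
channel_name=general
user_id=U2147483697
user_name=marty
command=/mvp
text=%40matthewroach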

Slash commands can either respond back to the user privately or send the result back to the channel the command was triggered in; by default the response is private to the user. The other options you have, like richer message formatting and attachments, are covered in Slack's documentation.
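To post the response into the channel instead, the command returns a JSON payload with response_type set to in_channel; something like:

{
  "response_type": "in_channel",
  "text": "Voting has stopped. The MVP for Sprint 99 is @docbrown!"
}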

AWS API Gateway

API Gateway acts as a “front door” for applications to access data, business logic or functionality from your back-end services.

API Gateway is not limited to fronting AWS infrastructure; for the Slack command I hooked a POST method up to a Lambda function.

Amazon allows you to deploy your API to multiple stages, so you can have test, staging and production set-ups. With each stage you get a different URL to call your endpoints with.
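Each stage URL follows the same pattern; with a hypothetical API ID and region, the test and production endpoints might look like:

https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/test/mvp
https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod/mvp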

The UI for setting up APIs via the AWS console is not the greatest and takes quite a few clicks to get through the different steps. Also, when you hook an API up to a Lambda function you need to create a body mapping template that takes the incoming request and converts it into a format you wish to consume in your Lambda function. In this case I added a mapping for the content type application/x-www-form-urlencoded that looks like this:

## convert HTTP POST data to JSON for insertion directly into a Lambda function
 
## first we set up our variable that holds the tokenised key value pairs
#set($httpPost = $input.path('$').split("&"))
 
## next we set up our loop inside the output structure
{
#foreach( $kvPair in $httpPost )
 ## now we tokenise each key value pair using "="
 #set($kvTokenised = $kvPair.split("="))
 ## finally we output the JSON for this pair and add a "," if this isn't the last pair
 "$kvTokenised[0]" : "$kvTokenised[1]"#if( $foreach.hasNext ),#end
#end
}

Hopefully the comments in the code make it easy to understand what's happening. Basically, we are converting the form body Slack passes us into a JSON object of key/value pairs.
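So the sample body from earlier comes out the other side as a flat JSON object (trimmed here). One thing to be aware of: the template only splits on & and =, it does not URL-decode the values, so the encoded @ stays as %40 until you decode it in the function:

{
  "token": "gIkuvaNzQIHg97ATvDxqgjtO",
  "team_domain": "example",
  "user_id": "U2147483697",
  "command": "/mvp",
  "text": "%40matthewroach"
}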

AWS Lambda

Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you.

Lambda is where all the magic happens. I wrote some simple Node.js code that handles the different inputs from the slash command, does its logic, reads/stores data in MongoDB, and then responds with a result for the user issuing the command.

I have pushed the code to a repository on my GitHub account if you wish to take a look at the Node code.

As I mentioned earlier, the incoming data from Slack is converted into a JSON object that is available to my Node code on the event object. With this JSON object available within my function, I can look at the keys I need and take the required actions. The main thing we are after is the text key, which holds the text after the /mvp part of the slash command; I use it to work out what action the caller wants.

There are only three commands available to the user via /mvp: start, stop and voting. Voting is worked out by looking for an @ as the first character of the text. If the text doesn't match any of these three, I tell the user they cannot perform that action.
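As a rough sketch (simplified, not the exact code from the repo), the routing boils down to a few checks on that text value:

// Rough sketch of the command routing: simplified from the real handler.
exports.handler = function (event, context, callback) {
  // undo the form URL-encoding the mapping template leaves in place
  var text = decodeURIComponent((event.text || '').replace(/\+/g, ' ')).trim();

  if (text.indexOf('start') === 0) {
    // open a new active vote, e.g. "/mvp start Sprint 99"
  } else if (text === 'stop') {
    // close the active vote and announce the winner in channel
  } else if (text.charAt(0) === '@') {
    // record a vote, after checking there is an active vote,
    // the caller hasn't already voted, and isn't voting for themselves
  } else {
    callback(null, { text: 'You can not perform that action.' });
  }
};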

Another key I use is team_domain, which determines the MongoDB collection to look in. This keeps each team's data away from the others and avoids having one huge collection of data. I also use the user_id to track whether the user has already voted. The command does not track who voted for whom, it will not let you vote more than once, and you can only vote if there is an active MVP vote, which also means it's only possible to have one MVP vote at a time.

I added some sample JSON files that I was using for testing the code locally. I used lambda-local to test my function, which makes for a much better experience than dealing with the AWS interface every time you want to write and test code.
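Assuming the function lives in index.js with a handler export, and one of those sample events is saved as start.json (file names here are illustrative), a local test run looks something like this:

lambda-local -l index.js -h handler -e start.json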

Without going into great depths on Lambda, you have up to three arguments available in your main function: event, context, and callback. Data comes in on the event, context contains information about the Lambda function that is executing, and the callback is used to return information to the caller; the callback is an optional argument. You can read more about this in the Lambda documentation.

Screenshots of working /mvp

Starting/opening the voting for a given item; I called this vote Sprint 99

Start Voting

Casting your vote for the MVP

/mvp @docbrown

Vote has been cast

Thank you, your vote has been cast

Stopping/closing the voting for Sprint 99 and seeing who was the MVP!

Stop Sprint 99, and the winner is...

UI Testing with nightwatch.js

Update 26 December 2016: The code in the repo https://github.com/matthewroach/nightwatch-demo has been updated to run from a clone/download. All dependencies are part of the repository or part of the npm install.

Nightwatch.js is an easy to use Node.js based End-to-End (E2E) testing solution for browser based apps and websites. It uses the powerful Selenium WebDriver API to perform commands and assertions on DOM elements.

Testing your website the way a user interacts with it may not be your first thought when talking about testing code. We can unit test our code to ensure it's doing what it's supposed to, but the more code we add to our UI, the more dependencies build up between the UI and the code powering it.

Most of the code and web apps I create are driven from a server side language with a front end interface, and then JavaScript applied on top to add a nice user experience.

With UI testing (or integration testing) we can test that all the parts of the web application are working as intended. Ever upgraded a third-party library, or adjusted a little bit of code on a single page, only to later get a bug report that another page in the app is now broken? Yes, unit testing should have caught these issues, but one of the added benefits of a UI testing tool is that you can be confident the code and tests you have written work together at the final stage of delivery.

Installing nightwatch.js

If you have node and npm installed on your machine, it's pretty simple to get nightwatch set up. You can install it locally to your application or as a global module (-g); I have installed mine globally. Just run the following from your command line (you may need to do this as an admin, or with sudo on a Mac):

npm install nightwatch -g

Once you have it installed you can confirm it's there by checking the version:

nightwatch -v
nightwatch v0.6.8

Setting up

The code and tests are available on GitHub if you wish to look: https://github.com/matthewroach/nightwatch-demo

When using nightwatch.js you will need a nightwatch.json file. This is the config file and it contains a lot of different options; I'll not cover them all in this post, just the ones I've used, changed or added. I have this file located at the root of my project; you can place it anywhere under your project, but you'll need to update config items to point to the relevant locations.

I am using BrowserStack's Automate service to run my tests. You don't have to use BrowserStack to run these tests; you can use Selenium to run them locally. Using Selenium takes a bit more set-up to get running, so for the purpose of this post we will use BrowserStack.

Below is the nightwatch.json file I have at the root of my project. The src_folders config points to my tests folder, which is also at the root of the project; you can change this to point anywhere you like.

You will also notice some references to hub.browserstack.com; this tells nightwatch to use BrowserStack as our Selenium runner. The two items you need to update are browserstack.user and browserstack.key, which you will find in your BrowserStack account.

{
  "src_folders": ["tests"],
  "output_folder": "reports",
  "custom_commands_path": "",
  "custom_assertions_path": "",
  "page_objects_path": "",
  "globals_path": "",

  "selenium": {
    "start_process": false,
    "server_path": "",
    "log_path": "",
    "host": "hub.browserstack.com",
    "port": 80,
    "cli_args": {
      "webdriver.chrome.driver": "",
      "webdriver.ie.driver": ""
    }
  },

  "test_settings": {
    "default": {
      "launch_url": "http://hub.browserstack.com",
      "selenium_port" : 80,
      "selenium_host" : "hub.browserstack.com",
      "silent": true,
      "screenshots": {
        "enabled": false,
        "path": ""
      },
      "desiredCapabilities": {
        "browserName": "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts": true,
        "browserstack.user": "",
        "browserstack.key": ""
      }
    }
  }
}

Tests

Tests are written in JavaScript. I separate mine out into a tests folder to keep them apart from the app code. On bigger projects I've added extra folders within the tests folder to mimic the web app structure, so selected areas can be run on their own.

Each test file can contain one or multiple tests, and each test is a JavaScript function. A sample test is shown below:

module.exports = {
  'Login Page Initial Render': function( _browser ) {
    _browser
    .url('http://dev.matthewroach.me/login/')
    .waitForElementVisible( 'body', 1000 )
    .verify.visible('#username')
    .verify.visible('#password')
    .verify.value( 'input[type=submit]', 'Log In' )
    .verify.elementNotPresent('.error')
  }
}

The test shown above is a very simple example that opens the URL http://dev.matthewroach.me/login/ and checks that the username and password fields are visible, the submit button has a value of Log In, and the error element is not present.

One thing to note here is that I am using the .verify object rather than the .assert object. The reason I prefer verify over assert is that the run will abort on an assert failure, whereas a verify failure is recorded and the remaining tests carry on running.

Running Tests

Now you have your first test and nightwatch all set up, you can call nightwatch from the terminal in your project root. You'll notice that it still outputs the test details locally even though the tests are running on BrowserStack; the output is also saved within BrowserStack, where it looks slightly different. A couple of example invocations are shown below, followed by the output you can expect.
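The bare command runs everything under src_folders; if you want to run a single file instead, nightwatch's -t flag takes a path (the file name here is illustrative):

nightwatch
nightwatch -t tests/login.js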

Nightwatch Output

When a test causes an error, the output looks as follows:

Nightwatch error output

 

Now you have the basics down, look at using page objects to make your tests DRY and reusable; see my post on UI Testing with nightwatch.js – Page Objects.

SVN Developer Branch Flow

Ever wanted to use a branch workflow with SVN, but had trouble getting it to work or finding information on how to manage branches? I have, and I spent the best part of two days working it out, only to realise it was not as bad as I thought. I was just about to ditch the idea when I finally worked it out by re-reading about SVN merging.

The Strategy

The idea is to have developer branches, so each developer has their own working copy and manages it themselves. Once a ticket of work is complete and ready to go back onto the mainline (trunk), they merge that batch of revisions down to trunk ready for release.

The Issue

Note: I am not branching trunk as a whole, but branching a sub-folder within trunk.

All seemed to be going well. I was making changes to my branch and committing as and when I needed to. I finished my first ticket and merged the code down to trunk; another ticket was finished, so I merged that down too. A couple of days later another developer finished their work and merged to trunk. Now I needed to pull their changes into my branch to keep it in sync, but this is where it all started to go wrong. Upon doing a sync merge to bring in all the changes on trunk, my branch did not know about the previous merges I had made from my branch to trunk. It was trying to bring those merges back into my branch, and throwing errors about conflicts.

The error was “Reintegrate can only be used if revisions X through Z were previously merged from {repo} to reintegrate source, but this is not the case”

The strategy of developer branches seemed like a simple idea, but it was causing many issues. My research led me to find out that in SVN 1.8 the server's merge logic had been updated to be smarter about moving changes between trunk and branches. We got a 1.8 server running and copied the repository over to check if this would help, but there was no difference: I eventually ran back into the issue above.

The Solution

As these are long-running branches, merges happen but the branches carry on afterwards; they are never reintegrated back to trunk. So you need to keep them alive by keeping each branch in the loop about what has been merged to trunk. You might think that if you merge from a branch to trunk, the branch would know what you merged. That is not the case with SVN. When you merge to trunk, you are only applying the changes from the merge to your local copy of trunk; nothing is recorded until you commit those changes to trunk. Committing the merged changes creates a new revision number, and that new revision number is never passed back to the branch. Normally you would be terminating the branch at this point because your feature is complete, but as we want long-running branches we have to keep the branch alive.

To do this and keep the branches alive, we need to follow the flow below (with a diagram to help follow along):

 

SVN Long running branch flow

  • Rev. 10: we create a branch of trunk (branch-a); this creates revision 11
  • Another branch is created from trunk (branch-b); this creates revision 12
  • Marty is developing on branch-a and makes a change; this makes revision 13
  • Meanwhile Jen is developing on branch-b, finishes a fix and commits, making revision 14
  • Jen is happy for revision 14 to be pushed back to trunk, so she merges revision 14 to trunk; all goes OK, so she commits the merged changes, creating revision 15 on trunk
  • Because the merge created revision 15, branch-b does not know about it, and future sync merges on branch-b will try to bring revision 15 back to the branch and cause conflicts. So Jen needs to merge revision 15 to branch-b, but not like a normal merge: she only needs to do a record-only (--record-only) merge. This tells branch-b not to try to merge revision 15 in the future
  • Marty then makes a fix, creating revision 17
  • Marty realises Jen made a fix he needs on his branch, so he does a sync merge onto branch-a and commits the merged code as normal
  • Marty has fixed the issue he was working on in revisions 13 & 17 and it's time to merge into trunk; Marty merges his code to trunk and commits the applied changes, creating revision 19
  • Now Marty needs to merge revision 19 as a record-only merge to branch-a, to stop SVN trying to merge it in sync merges later on

 

--record-only is the flag if you are doing your merges on the command line; if you are using a GUI there should be an option for it (in SmartSVN it's in the advanced tab, shown in the screenshot further below).
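Using Jen's half of the walkthrough above, the command-line version looks roughly like this (repository paths are illustrative):

# in a trunk working copy: merge branch-b down and commit (creates r15)
svn merge ^/branches/branch-b .
svn commit -m "Merge branch-b r14 to trunk"

# in the branch-b working copy: record r15 without applying it,
# so future sync merges skip it
svn merge --record-only -c 15 ^/trunk .
svn commit -m "Record-only merge of r15 from trunk"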

 

--record-only merge in Smart SVN

 

Always remember to commit the folder when doing merges; this is what carries the merge info! Not doing so will cause issues in the future!

Gigaw.at: Back from the future

Gigaw.at

A week ago I had an email about a domain name due to expire, which got me thinking; then two days ago I wrote a blog post about open sourcing more of the web apps I have built that are sat lying on my computer.

Today, I open sourced my first web app: Gigaw.at.

Gigawat was a web app I built nearly two years ago. I ran it for a good while on its own VPS with a separate database server, but over time it sort of dwindled out and was not very active, and it was costing me each month with very little reward (personally), so I closed it down and decided to save my money for other projects. As per my blog post the other day, I want to bring some of my web apps back to life, and keep a history of the code I've written.

I decided that gigaw.at would be my first web app to be open sourced, as I knew it was in a pretty good state and would run with very little work... I did say little work, which turned out to be wrong, as I decided to adjust a couple of things and also removed a couple of features. This meant I could spend more time working out how to use Heroku and getting it all up and running, which was very simple; just a few kinks to work out, mainly around the config settings to ensure I did not commit any sensitive information.

So, gigaw.at is on my GitHub under a GPL3 license, and it's now back up and running on Heroku, using Compose as my MongoDB provider. Feel free to grab the code and play around (note it's made using OpenBD, and I need to write up a readme with information on how to get it running).

Time to Open Source

On Saturday I tweeted:

And since then I've been thinking about all the projects I have, be they ideas not started, half started, or finished but not online anywhere, and what I could do with them. I don't want them to just disappear. Generally they are very lightweight and will not see great demand, but keeping all these projects running on servers would soon start to cost me a small fortune, and I don't want them to just sit on my computer without people seeing them.

So! I have decided that I will be open sourcing many more of my projects. These will be going onto my GitHub account, and I will be using Heroku to host them; this way people can see the code and interact with the apps.

I have started playing with Heroku and am in the middle of understanding how to use it. My next step is to get my apps ready to run on Heroku, and I need to ensure I don't commit any sensitive credentials to GitHub.

By releasing my code and apps I may help someone out with an issue, but maybe not. Another good thing about doing this is that I get a history of a lot of the code I have written; I am forever going back to old projects to grab snippets of code.

Basic image resizing with nginx

I came across an nginx module (ngx_http_image_filter_module) that allows you to resize, crop and compress images before sending them back to the user, letting you create a simple image-processing server. After some reading I wanted to have a play around with it to see what it could do, and whether it was something I could use, or something we could use at work.

A couple of requirements I had before I started were that I wanted it to be able to do the following:

  1. Handle resizing of images based on the user agent without having to change URLs on the fly or with JavaScript, e.g. server.com/test.jpg should return different sized images based on the user agent (360px wide for iPhone, and 460px wide for Android)
  2. Handle general resizing based on a URL string, e.g. server.com/resize/400x400/test.jpg

Not many requirements to start with; over time I'll adjust these and play more with nginx, but this is enough to play around with and see what's possible.

I got started by spinning up a 512MB Droplet from DigitalOcean and installing nginx using yum (there's a tutorial on DigitalOcean if you need one), then I spent the rest of the time tweaking the conf file (/etc/nginx/conf.d/default.conf). See below for what I added to the default.conf file.

Here’s the part of my nginx config that’s doing all the magic:

  location ~ /resize/([\d-]+)x([\d-]+)/(.*) {
     proxy_pass                  http://$server_addr/images/$3;
     image_filter                resize $1 $2;
     image_filter_jpeg_quality   80;
     image_filter_buffer         10M;
  }

  location ~ /(.*) {
     try_files                   /$1 @img;
  }

  location @img {
     if ( $http_user_agent ~ iPhone ) {
        proxy_pass              http://$server_addr/resize/360x-$uri;
     }

     if ( $http_user_agent ~ Android ) {
        proxy_pass              http://$server_addr/resize/460x-$uri;
     }

     proxy_pass http://$server_addr/images$uri;
 }

Original Image Requested

  • http://i.matthewroach.me/test.jpg – Will request the test.jpg image and return it as is. But if you are on an iPhone you will get an image resized to a max width of 360px, and on an Android device the image comes back 460px wide

Using the resize URL to get smaller image

  • http://i.matthewroach.me/resize/523x400/test.jpg – Will return the image scaled to 523px wide, with the height proportional to the width. Try changing the 523 value and see what happens!
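If you want to check the user-agent behaviour without reaching for a phone, curl can spoof the agent (the -A flag sets the User-Agent header; the output file names are arbitrary):

curl -A "iPhone" -o test-360.jpg http://i.matthewroach.me/test.jpg
curl -A "Android" -o test-460.jpg http://i.matthewroach.me/test.jpg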

So, I have just touched the basics here; I am looking into the more advanced things it can do, to see how far I can take it and whether it's a viable solution for a production environment.

URLrewrite with OpenBD

I love a good clean URL, and recently I decided to clean up the way I was handling URLs in my web apps. While this is easily achieved using, say, nginx or Apache, or whichever server you are using, I had a particular requirement: I wanted to handle the rewrites without installing nginx or anything else on top of what I was already using, and I did not want a separate server config to maintain. I was after a way to have the settings checked in with the web application code so it would work both on the server and locally.

Locally I develop using the jettydesktop launcher and I deploy to a server running jetty. While I could have jetty handle the rewrites, that would mean losing the ability to test locally easily by having the settings within the web app.

After some research I came across UrlRewriteFilter. This is a JAR file that can be dropped into the lib directory, with a small update to the web.xml file to ensure it's loaded; then you have a configuration file where you set up all your URL rewrites. The details are shown on the UrlRewriteFilter website, and it was relatively painless to get up and running. But I did run into a small issue: after the base config was working correctly, I applied the changes I wanted for my web app and started getting lots of Java string overflow errors. Some Googling led me no further, so it became a process of elimination to get it working.
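For reference, the rewrite rules live in WEB-INF/urlrewrite.xml; a hypothetical rule mapping a clean article URL onto a CFML template would look like this:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE urlrewrite PUBLIC "-//tuckey.org//DTD UrlRewrite 4.0//EN"
        "http://www.tuckey.org/res/dtds/urlrewrite4.0.dtd">
<urlrewrite>
    <rule>
        <!-- /article/123 is served by /article.cfm?id=123 (illustrative rule) -->
        <from>^/article/([0-9]+)$</from>
        <to>/article.cfm?id=$1</to>
    </rule>
</urlrewrite>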

The default config for the web.xml is:

<filter>
    <filter-name>UrlRewriteFilter</filter-name>
    <filter-class>org.tuckey.web.filters.urlrewrite.UrlRewriteFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>UrlRewriteFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>FORWARD</dispatcher>
</filter-mapping>

But to remove the errors I was seeing, I had to remove the <dispatcher> lines, giving me the following config:

<filter>
    <filter-name>UrlRewriteFilter</filter-name>
    <filter-class>org.tuckey.web.filters.urlrewrite.UrlRewriteFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>UrlRewriteFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

After restarting jetty all was good. A few more tests and then I released it to my web server, and all seems to be working.

This jar file is now part of my base web app structure that I use for all new web apps, and I’ll be updating my other apps soon.

Twitter Cards for HillVall.eu

I've had a little idea in my head for a while now about how I could use Twitter Cards to display the content I share from HillVall.eu in a much richer way on Twitter, but without taking the user away from the original website.

The idea is that I want to share a link to an article, blog post, review, or whatever I happen to find interesting from the RSS reader application I have written (still in private beta), but I don't want to handle the traffic or take users away from the real website where the information is; I always want to push the traffic to the original source. When Twitter released Twitter Cards for tweets, I wondered if there was a way I could use them when sharing from HillVall.eu while still not handling the traffic. Another reason for doing this is that it gives me an excuse to look at the Twitter analytics.

So tonight I set about implementing it, and to my surprise it was relatively easy once I had worked out what Twitter was doing to get the card information.
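Twitter builds the card by fetching the shared URL and reading a handful of meta tags from the page; a minimal summary card needs something like this (the content values here are examples):

<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@hillvalleu">
<meta name="twitter:title" content="Article title">
<meta name="twitter:description" content="A short summary of the article.">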

As you can see from my tweet below, when I share something from HillVall.eu it has a summary card in the expanded display and drives traffic directly to the original website.

For the time being I am only using the summary card, but over time I will look into using the other Twitter cards depending on the type of content being shared; guess I can put that on the feature list for launch.