By Olivia Swain, Sep 6
It is said that behind every great man is an even greater woman.
Well, taking that theme, I suggest that behind every great developer is an even greater tester.
I know, not the humblest opening line for a blog post, but when the developer:tester ratio is 6:1, fortune favours the brave.
Testers are the people who don’t want the glory. We’re not the rock stars of the I.T. world.
Put it this way: if I.T. was a band, we’d be the drummers; sat at the back, keeping everyone in time with a steady beat, not showing off at the front of the stage rocking long hair and a black AC/DC t-shirt (oh, hang on, that sounds like a Sysadmin!).
We see the new-born code when it’s freshly delivered, and usually we have happy, healthy code that is ready to fulfill its destiny in the Brandwatch app.
However, sometimes, we do have to tactfully say to a developer “erm … sorry: your baby’s a touch ugly!”
We do this with a smile, a steady gaze, and sometimes a mild hint of sarcasm: “did you actually test this yourself before checking it in?”
We don’t stand for “works on my machine” comments and won’t play the “it’s a backend bug”/“no it’s a front-end bug” tennis that some tickets turn into.
We’re the last line of defence before the customers see new things, and want to make sure it does exactly what it says on the tin.
I always think a good tester has to have thick skin, a degree in diplomacy, the ability to ‘know what they actually meant’ and to be just a teeny bit cynical.
Anyway, what do we do at Brandwatch?
Well … lots! Every day is the same but different: issues are fixed, issues are raised, releases are tested, sprints are updated, test plans are written, manual tests are run, automated tests are made and, as Chris “Bones” Skilton mentioned in his recent post: tea is drunk in traditional British quantities and customs (I shall refrain from mentioning the foosball).
We do a lot of our testing manually, which, given that we release a new version every two weeks and given the richness of functionality in our app, is a tall order. However, somehow we always get there, albeit often only just!
More recently, we’ve been investing heavily in automation and I now find myself having conversations about Cucumber and Gherkin, feature files and step definitions, as well as the ins and outs of creating classes and the pros and cons of BDD and continuous integration.
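For anyone new to those terms, here’s a minimal sketch of how the pieces fit together: a Gherkin feature file describes behaviour in plain English, and step definitions bind each line of it to code. The feature, scenario, and step names below are invented for illustration, not lifted from our actual test suite.

```gherkin
Feature: Saved searches
  Scenario: Running a saved search
    Given a user has a saved search named "coffee"
    When the user runs that search
    Then matching mentions are shown
```

A matching step-definition class might look like this (a Cucumber-JVM sketch; the method bodies are placeholders):

```java
// Hypothetical Cucumber-JVM step definitions for the feature above.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;

public class SavedSearchSteps {

    @Given("a user has a saved search named {string}")
    public void userHasSavedSearch(String name) {
        // set up test data, e.g. via the API or a fixture
    }

    @When("the user runs that search")
    public void userRunsSearch() {
        // drive the app, e.g. through a browser-automation tool
    }

    @Then("matching mentions are shown")
    public void mentionsAreShown() {
        // assert on what the page actually displays
    }
}
```

When the feature file runs, Cucumber matches each Given/When/Then line against these annotations and executes the corresponding method, so the plain-English scenario doubles as an automated test.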
Not bad for an old fella, eh? (I’m only 43, but that’s ancient to most of the people who fill Brandwatch Towers.)
My team already includes two automation guys: not only have they been tasked with implementing the framework, they’re also dragging the rest of us up to scratch in the automation world.
This includes lots of “how do you …” questions, which fill the pages of Skype or the inboxes of staff on a daily basis.
The goal by the end of the year is that most of what we do will be automated, meaning testing will take less time, give us more coverage and let us release even more, even quicker.
This reflects the company ethos: we move forward constantly, ideas are raised and then challenged, allowing the good ones (and very occasionally one of the bad ones) to be implemented.
So, to sum up, the testing team at Brandwatch is a busy place to be.
We have a lot of talented developers who have built the massive, feature-rich Brandwatch platform.
We have our platform, our new features, our admin systems, our website, our API and soon a currently-confidential but truly amazing new release to work on.
All of which, of course, have to be constantly tested to ensure we deliver the services and products our customers love and expect.
We do very occasionally let bugs pass us by (sorry about that). It’s not intentional and we do our utmost to fix ‘em quick and make sure they don’t happen again.
Anyway, I’m off to discuss our next automation project: I’m trying to somehow automate the tea-making process.