The Rise and Fall of the Rockstar Developer

For some time the tech industry was plagued by the term “rockstar developer”: a programmer who was staggeringly productive, who could sit down at a keyboard and be “in the zone”, producing some complicated piece of software and solving problems that others couldn’t.

It’s a very appealing idea for anyone even remotely interested in coding, and one perpetuated by Hollywood montages of programmers tapping away at keyboards, all to a thumping electronica soundtrack. Most (probably all) programmers spend some of their free time writing code on their own, so it’s tempting to buy into the notion that each one of us could build something incredible while being entirely focused.

It’s also not that unreasonable. Some incredible pieces of software have been written by solo developers, but I’m not convinced it’s the natural state for a software engineer. The subject matter is truly enormous, and evolving at a terrifying pace, so interacting with your peers is necessary just to learn more and to build software that can be used by others.

The result is a notion that became dangerous. A rockstar developer is someone a team depends on entirely, and who believes (rightly or wrongly) that they are head and shoulders above their colleagues. Decisions start to go unchallenged, and mistakes are ignored by team members who respond to any question with “because X told us to write it that way”. As the product starts to break down, the rockstar developer in question is inundated with work and (understandably) gets stressed, until things disintegrate (or explode spectacularly). What’s more, rockstar developers are indispensable, and people who are indispensable can’t be promoted and can’t go on holiday.

Despite all this grim reading, I’ve used the past tense above because the industry seems to have moved on. The vast majority of Google search results for “rockstar developer” are critical of the phrase. Software engineers are no longer judged by how many lines of code they write; instead, working out whether someone fits well as part of a team has become a crucial aspect of the hiring process. Teams have started citing “empathy” as a crucial requirement, and while it might seem a little woolly at times, it underscores how much a good application is the product of a good team rather than of one key individual.

That’s not to say individuals can’t do great work. I would still encourage anyone even remotely interested in programming to have a go and do some research on their own. Similarly, engineers frequently investigate things in their spare time. Work is always different though, and when writing software it’s always invaluable to bounce ideas off other people (beyond rubber duck debugging).

If you want a great example of this, the story of Brent in the wonderful The Phoenix Project is a perfect fable. Brent is depended upon by everyone, and quickly becomes a blocker for the team. It’s not that he needs to be fired (far from it – he’s an excellent engineer), but that he is so overloaded with work and depended upon that everything gets stuck.

As the industry moves away from a culture of “rockstars” and key individuals, the resulting reliance on cohesive teams makes for an incredibly reassuring environment.

Software Development with Empathy

I’m hoping this will be the first in a series of posts about some recent trends in the way software is developed, going beyond the raw technologies involved.

Around nine months ago I was fortunate enough to be working with a team at Pivotal Labs – a consultancy (although I suspect they might baulk at the term) specialising in Extreme Programming which works with clients to train engineers in pair programming and test-driven development.

One of the core principles by which Pivotal operate is “Always be kind”, which might easily be dismissed as comparable to Google’s famous “Don’t be evil” mantra. At its root is the concept that a team functions best when there is a sense of empathy. As one team member phrased it to me: “egos are left at the door”.

It’s worth defining what this actually means, as it’s very easy for a company to get lost in a series of platitudes in an effort to change its culture. Empathy means taking the time to listen to everyone, seeking to solve problems as a team, and seeking to understand colleagues. Adopting empathy as a core principle involves recognising that no team member is capable of doing all of the work – and if one is, there is something fundamentally wrong with the team structure.

Crucially, empathy does not just mean giving way to the loudest voice or ignoring bad work. I’m currently reading the excellent book Radical Candor by Kim Scott which describes “ruinous empathy” – the idea of skirting around an issue so it isn’t addressed. Building a team founded on empathy involves caring about people, and caring about how they do their work and how they learn.

It’s a very difficult balance to achieve, but an important one. Empathy might seem like an obvious requirement, but historically it’s been discarded in favour of “rockstar developers” – a fantasy that a genius-level programmer can sit at a terminal and build brilliant software provided they are left undisturbed. The balance seems to be tipping the other way, which is good news for all engineers.

Scheduling content

An interesting coding problem: how do you schedule content on a reasonably substantial scale?

This might be a script you have to run constantly (normally fixable with a cron), or something that needs to be triggered after an arbitrary length of time – delayed blog posts, campaigns coming to an end etc.

This problem gets considerably more complex when you add a caching layer in, where you might have to expire the cache on several pieces of data simultaneously at a time when you can’t reasonably expect someone to hit a “clear cache” button.

Laravel 5 implements one solution to this: a cron that runs every minute triggering a script that checks a database for scripts to run. Entries in this database can be created in the code of your app.

For example, when a post is created in the database to be published in the future, another entry is created to trigger a cache clear at the right moment. Each minute the cron checks the database for entries matching the current time and, when it encounters the relevant entry, clears the cache. Without this, requests for the list of articles might return stale cache data instead of the accurate list.

There are plenty of alternatives to this – changing the length of time the data is cached for, for example – but I haven’t encountered any quite as simple.

There are of course drawbacks to this approach. In particular, how do you handle a system like this when you have a large number of items to schedule? What if you have an application where every user has an opportunity to add data which requires scheduling? If your cron script is hit every minute and on each occasion returns multiple scripts to run, it could potentially put a lot of pressure on the server.

I’d be interested to hear any alternatives to Laravel’s approach. I’m aware of node-schedule, which takes advantage of the way NodeJS is structured to run scheduled tasks without resorting to crons, so I’m curious as to whether there are any other solutions using Apache or nginx. If you know of any, please do post them in the comments.
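For comparison, the idea behind an in-process scheduler like node-schedule can be sketched in a few lines (node-schedule itself is JavaScript; this Python sketch just illustrates the approach): the long-running server process sets a timer per task, so nothing polls a database every minute.

```python
import threading
import time

def schedule_in(delay_seconds, callback):
    """Run `callback` after `delay_seconds` inside the current process."""
    timer = threading.Timer(delay_seconds, callback)
    timer.daemon = True  # don't keep the process alive just for the timer
    timer.start()
    return timer  # keep the handle so the task can be cancelled

# Illustrative use: clear a (pretend) cache shortly after "publication".
results = []
schedule_in(0.1, lambda: results.append("cache cleared"))
time.sleep(0.3)  # in a real server the event loop keeps running instead
```

The obvious trade-off is that scheduled tasks live in memory, so they are lost if the process restarts – which is why the cron-plus-database approach persists jobs somewhere durable.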

Site content: more tech posts

I’m changing the overall theme of what I write about, and planning to write more about tech and web development in the future.

For roughly a year this site has been in an unintended hibernation, partly due to a lack of time, but mostly due to a lack of effort on my part. Once again, it’s about time to start writing again.

It’s also that time of year when bloggers tend to make New Year’s resolutions to write more, and sites filled with posts apologising for not writing more gain a new article.

So this isn’t a post promising to post more, but rather an indicator of what I am going to post about in future, namely more tech posts. It’s likely a lot of them will be related to web development and web technologies, but there should be a few more general tech posts as well.

Any posts older than this one are a random assortment of blog posts ranging from ice cream recipes to film reviews, but I’ve left them intact (albeit uncategorised) on the basis that I try not to delete posts.

Oh, and the photo isn’t mine I’m afraid – it’s from the excellent Unsplash.

Man of Steel

SPOILERS: The below contains major spoilers, so don’t read any further if you haven’t seen the film!

Last night I finally got around to watching Man of Steel. It’s been something I’ve been meaning to do for some time, but after some pretty lacklustre reviews (and a housemate declaring it to be “the worst film [he’d] ever paid to see”), I hadn’t treated it as a priority.

Being a fan of Christopher Nolan’s work, I desperately wanted the Superman reboot to be a success. It was always going to be a tricky one to get right – gone are the days when Superman could declare he would fight for “truth, justice, and the American way” without being ridiculed by audiences, and the film acknowledges this – for a start he’s no longer wearing his underwear over his trousers. There are references to the old version of the character, such as leaping over mountains, but the name “Superman” isn’t even referenced until the last 30 minutes of the film.

With so many changes, you’d have thought director Zack Snyder had been given enough flexibility to avoid any disastrous clichés, and for the most part that’s true. The script is broadly believable (save for Michael Shannon repeatedly yelling “I will find him!”*), even if the story feels a little simplistic at times.

A lot of people speculated before the film was released that if it was good Christopher Nolan would be credited with its success, while any failure would be blamed on Zack Snyder. Most critics then promptly tried to dodge this by saying it wasn’t a badly directed film, and I’d agree that the story needed more. But after 150 minutes of things blowing up, I find it difficult not to attribute the bulk of the problems to direction, cinematography and editing.

There are two main issues. The first is film length. The film opens with all sorts of sci-fi explosions on Krypton, and although Zod and Jor-El turn up, there’s no Superman yet (technically he is there, but given he’s just been born I wasn’t expecting him to deliver any lines). The cold-open with the protagonist’s parents is exactly what was done in Star Trek (2009), but that rather neatly sums up the problem – Star Trek took 11 minutes for that opening sequence, while Man of Steel takes nearly twice as long to show what’s happening to Clark Kent. There’s no need for the film to be nearly 2hr 30m.

The other issue is just how much time is spent showing things that blow up. The Director of Photography was Amir Mokri, who previously worked on Bad Boys II and Transformers: Dark of the Moon, and the action sequences feel very similar to Michael Bay’s work. The final fight sequence (which probably shouldn’t have been shot) is very similar to the fight sequence in The Matrix Revolutions – two super-humans capable of flight beat the living daylights out of each other while causing all sorts of destruction and mayhem. Hot Fuzz mocked this style of false-endings and multiple fights, but apparently the satire has gone unnoticed.

There is also a massive planet-sized plot hole (spoilers ahoy!) – if Zod wanted a planet to colonise and knew Superman wanted to protect humans, why on earth didn’t he pick another planet? The whole last 45 minutes of film could have been avoided with the words: “Well I actually quite like these humans given they raised me in the first place, but I’d be happy to help if you want to colonise Mars. The humans have been wanting to do that for ages but can’t get the hang of it, so if you tell us how it’s done and let us watch we can help with supplies while you’re getting set-up.” There’s genuinely no reason for Zod not to take this offer up, but he’s hell-bent on terraforming earth because he’s Generically Evil.

Even having watched it and with all the above faults, I still want to like Man of Steel. There are no bad performances, Henry Cavill is perfectly believable as Superman (although he doesn’t really get a chance to play Clark Kent), and the underlying issues of family and origins had the potential to be interesting. The problem is that everything feels under-utilised, and content is replaced with huge explosions and fancy graphics.

I’m less pessimistic about the upcoming Superman vs Batman film now that I’ve seen Man of Steel. Once Zod has been dealt with there’s less of an excuse for levelling half of Metropolis, so perhaps everything will calm down and the next film will be the intelligent and thoughtful production everyone was hoping for. But I can’t help but think they’ll find an excuse anyway.


  • Hilariously, Wikipedia describes this as “Zod then cryptically warns Jor-El’s widow that he will find her son”. There’s very little cryptic about a man screaming “I will find him” over and over again.

Steam OS

After something of a quiet period, Valve are back with some interesting product launches. On Monday they announced Steam OS – a complete free-to-download operating system which will be optimised for their games and open source. This was followed by yesterday’s announcement of the anticipated Steam Machines (often previously referred to as the Steam Box) – a series of PCs with Steam OS already installed. There’s one more announcement due tomorrow, but given the first two crashed Valve’s website, now seems like a good time to think about what they’ve said so far.

I suspect Steam OS will be the closest Linux* – and indeed any open source operating system – gets to mainstream desktop usage for some time. Every so often (usually around a major Windows revision) small crowds of Linux users declare “this will be the year Linux goes mainstream”, and to be fair, it’s had a lot of success. Android is Linux-based, a vast number of servers use Linux (including this one), and Ubuntu seems to be on a permanent upward trend.

Valve want to encourage people to mess around with the code and see what they come up with. Releasing the code base is a pretty bold move, and completely separates them from the console market. They already know what people are capable of having seen the effect of mods on their own games – Counter-Strike has made a vast amount of money and began life as a mod. The fact they’ve extended this to hardware is where things get a little more interesting – a return to the philosophy of Steve Wozniak in allowing and even encouraging people to hack their hardware and add to it. To quote Valve’s FAQ:

Can I hack this box? Run another OS? Change the hardware? Install my own software? Use it to build a robot?

However much I’m interested in the prospect of an openly modifiable gaming OS and accompanying PC, I can’t help but feel the whole thing rests on price point. Here’s a quick estimate of prices I found on Amazon for comparison:

  • Playstation 4: £350
  • Wii U: £250
  • XBox One: £450
  • Desktop PC (first search result): £390

If Valve can do it for less than a Playstation 4, they’ll be sorted, but they’ll have a hard time convincing people that they need a PC designed just for gaming if it costs much more than an average desktop. Valve say there will be a range of Steam Machines to choose from at different price points to try and accommodate everyone, so I wouldn’t be surprised if we see machines at both ends of that spectrum.

There’s one announcement left. As much as I really want this to be Half-Life 3 (oh please oh please oh please), it’s looking increasingly like the other theories will prove correct and it will be a new controller. Valve have talked before about measuring biometrics such as heart rate to feed back into a game (if your pulse rate goes up in a zombie game, the developer can change the scenario accordingly), and I can see how this would make for better games, but it will be much easier to sell the idea to developers than to gamers.

Valve clearly want to massively change the gaming industry, and they might well just do that. Steam has already done so for the PC gaming market, although it’s worth remembering how hated Steam was at first. The announcements are impressive, but there’s a lot of marketing for Valve to do.

* I include Ubuntu in this – yes, I know it’s Debian rather than pure Linux!

A new domain

You might have noticed a slight change to the site: specifically the name and domain. It was about time I changed the old to something a little more relevant, and given I’m now doing much more tech (now working as a PHP Developer), this seemed more appropriate. So, without further ado, welcome to

For anyone wanting to know what on earth the webscale thing is about: I refer tech-minded folk to (apologies for the less-than-savoury language).

For those less tech-minded: “webscale” is the notion of a technology being able to handle traffic with relative ease, without collapsing under the sheer weight of users. It’s become something of a catchphrase in some circles, to the point where people sometimes ignore other considerations. The hashtag thing is Twitter’s fault.