“All employees are equal, but some employees are more
equal than others”
– George Orwell (mostly)
When it comes to the subject of access to the Internet, developers are quite clearly far more equal than any other sort of employee in the business. Or at least some think so; but are we?
Over the last decade I’ve found myself working at some big corporations – the kind of places where IT is a part of the business, but not the actual business itself (despite what you might choose to believe about its importance today). As a consequence there is almost a Mexican standoff between the security team, whose purpose is to keep the company safe, and the developers, testers and support staff, whose job is to provide and maintain solutions to business problems. Performing that function effectively invariably requires accessing some content and/or downloading additional tools that the company does not already provide. So what’s the big deal?
The landscape has changed dramatically over the last 20 years: malicious content has evolved from being the work of misguided individuals with something to prove to being a business – if you consider organised crime a business, that is. At least, that’s if you believe what the security industry tells us. Throw in the recent revelations about government spying and I don’t think it’s hard to see why the paranoia levels in the security departments of these big corporations are at Spinal Tap [1] levels.
Take my own personal web site as an example of where the level of corporate trust is almost certainly very low. I have always made the source code available alongside every tool I’ve ever published as a courtesy to anyone who might be interested. But let’s face it: how many developers download the source code, sift through it to make sure there aren’t any exploits, then build it and finally see if it’s going to be useful? Virtually none, I’d wager. In fact I’d question whether the license agreement even gets an airing.
No, what we do is see if the web site looks legitimate, i.e. it’s not just a random IP address for an FTP site somewhere on the planet, and if we think it looks trustworthy we’ll go ahead and try it. In the case of the really popular sites, like NuGet, I bet we don’t even give security a second thought; after all, if big companies like Microsoft are posting content on there it has to be totally legit, right? From a security team’s perspective, seeing how some of us behave, I’d suggest that free, 3rd Party components and frameworks are like dancing pigs [2] for developers.
OK, I get that executable content can be really dangerous, in the same way that granting my normal account admin rights on the production system is dangerous; I want to be protected from my own stupidity. What I definitely don’t get, though, is the apparent danger posed by non-executable content, like blogs. Are “The Powers That Be” afraid we’ll somehow become subverted by poisonous articles that will generate an uprising to overthrow the management? Or are they just afraid we’ll waste time looking at the football scores, which will undoubtedly lead to even more wasted time as we argue over the relative merits of our respective teams?
Talking of admin rights, why is it so hard to obtain those on my local machine? Luckily plenty of software copes these days without needing local admin rights. In fact I’m writing this in Word 2002 using an LUA-style [3] account under Windows XP. Even modern versions of Visual Studio play nicely, but there are still times when elevation is required by my job – unless testing, debugging and deployment have been removed from my job description. Once again, what are they afraid of? Surely the worst damage I can really do is screw up my own machine? Or perhaps they’re afraid I’ll install DOOM and flood the network with deathmatch traffic.
The only other answer I can come up with is that I’m “the wrong sort of developer”. Is the vast majority of software development in the enterprise actually the writing of Excel macros and the customisation of 3rd party products? If the development of custom services (native or managed) is in the minority I can see how we “builders” might appear overly demanding compared to the “tweakers”. Perhaps there is an assumption too that the modern technique of unit testing helps us eliminate all those nasty external dependencies and so reduces the need to do any sort of system-level testing on our own machines, right? In fact isn’t that why they have a QA department?
So far in this article I’ve pretty much failed to be even vaguely objective, and that really was my goal when writing it. We all know what the status quo is; the question is how to overcome it. What can we do to convince those in control that to do our jobs effectively we need the reins to be loosened, so that we can access more of the internet than our peers? Whilst unfettered internet access for all, with a focus on educating employees to act responsibly, is possibly the desired end state, I’m not convinced that’s a realistic expectation for an enterprise in the short to medium term.
When it comes to non-executable content I don’t believe we need to have any more rights than someone in, say, the Accounts or Marketing department. In a large organisation with open-plan offices it should be as culturally unacceptable to view inappropriate content as it would be to stick Page 3 pin-ups around your desk. In an agile working environment the demands of constant communication make it virtually impossible to do anything other than your job as you’ll be collaborating regularly. And that’s before you take into account the effects of pair programming for keeping you honest.
What I suspect the non-developer employees don’t realise is quite how much we rely on the internet to do our job. Whilst there are the obvious vendor support sites for the core products we use, there are also the big self-help sites like Stack Overflow. But even allowing access to these is only part of the equation because often the simple answer is just not enough and the salient details are in some blog post that is then linked to. Like it or not blogs are the modern knowledge base for programmers so categorising them as “personal pages” along with Twitter and Facebook is to completely miss the point. When your job is also your hobby, which it is for many in our profession, then the meaning of “personal” no longer distinguishes it from “professional”.
When they introduced a draconian content filter at a major financial institution I had recently started at, I decided to seek out the team responsible for the change and try to describe how much pain they were causing and to see if they couldn’t loosen it somewhat. After a few days of quizzing everyone I knew I managed to track down a chap in the security department and met with him to put forward my case. After showing some examples of blogs that were clearly relevant to “the business of programming” he accepted that the filter was too coarse. However he played the “but it’s group policy” card to explain why content categorised by the 3rd party content filter as “social” was forbidden. Exceptions, he said, could be made, but he also intimated that the process for getting content checked and accepted was not a priority.
Later, back at my desk, I noticed that any content could also be given a secondary category. Knowing that the primary category of “Computers/Internet” was allowed, I went back to the security team with a proposal: if I could convince the 3rd party content filter vendor to re-categorise blocked blog content from simply “personal pages” to the more specific “personal pages and computers/internet”, would they allow it? They said “yes” and added that re-categorisation by the vendor was much quicker than their own process.
OK, so this is far from the perfect outcome, but it did feel like I had at least managed to make some progress as I opened access to a number of popular blogs. More importantly I had found out who to contact and got to discuss the issue with them. In the end I decided the only way to instigate any further change would be to show how an outage was a direct result of an inability to do my job properly so that there was a monetary value to the loss. Unsurprisingly the right opportunity never arose because I just needed to get on with my job and I’d got used to using my phone to view blocked content instead. I briefly looked into trying to claim my phone bill back as an expense, but that just created far more pain for me (being a contractor) than for them due to the ridiculous paperwork involved.
Access to static information is only one cornerstone of our jobs; another is tools. This includes both entire programs, such as the classic UNIX command line utilities, and libraries which we consume directly within our own applications. Whilst it might be an interesting personal exercise to write an XML or JSON parser, that’s not really the best way for us to be spending our customer’s money. Component-level reuse is mandatory if we are to create our own applications as efficiently as possible, unless there is an especially good reason to build them ourselves (e.g. legal). The same goes for tools – we shouldn’t be writing our own compiler or web browser either.
This rather thorny issue is probably where a large part of the problem lies. Executable content is mostly opaque, except for scripts or where you’re building it from source code, and that means it’s hard to trust by default. Anyone who has ever been involved in a virus clean-up operation knows how time-consuming it can be. Most organisations now run virus scanners by default and although they do interfere with compilation and development duties, the impact is minimal unless it’s set to scan everything (which sadly does happen). One option to reduce their impact is to get a company-wide policy agreeing that certain folders (e.g. C:\Work) or certain processes (e.g. DEVENV.EXE) are excluded.
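As an illustration, on a modern Windows machine running Microsoft Defender such exclusions can be set up with a couple of PowerShell commands from an elevated prompt (the folder and process names below are just the examples from above, not a recommendation – and other scanners have their own equivalent settings):

```shell
# Exclude the source tree from real-time scanning (path is illustrative).
Add-MpPreference -ExclusionPath "C:\Work"

# Exclude the Visual Studio IDE process so builds and debugging aren't scanned.
Add-MpPreference -ExclusionProcess "DEVENV.EXE"

# Verify the exclusions took effect.
Get-MpPreference | Select-Object ExclusionPath, ExclusionProcess
```

Of course, in the kind of organisation being discussed here, running these commands requires exactly the admin rights and policy sign-off that are the subject of this article.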
Aside from a virus attack, allowing developers and support staff to run arbitrary programs, especially ones they don’t truly understand, also comes with the risk of consuming other resources, such as network bandwidth. One morning I arrived at work faced with investigating why our clients couldn’t access our services. It turned out we’d used up our entire month’s bandwidth allowance overnight. This, it transpired, was down to someone not understanding how BitTorrent works. Aside from the direct financial cost of paying for more bandwidth, there is an indirect cost too in the loss of client confidence.
There is also the subject of license agreements. Most blog posts don’t come with a multi-page license agreement for you to consume before reading on – programs and libraries do, however. Ensuring that your staff are only using correctly licensed software is hard. Apart from the obvious common collections, like the UNIX toolset, there are many smaller, more specialised tools that we need only temporarily to solve an immediate problem. However we might not even know if it’s the right tool for the job without first trying it out – the classic chicken-and-egg problem. If choice is going to be limited, then there must also be an understanding that we’ll sometimes have to resort to using the wrong tool for the job, and that comes with a price.
The ironic thing is that one of the reasons we choose to use third party tools and libraries in the first place is because we know writing security-conscious software is hard, and most of us are not qualified to do it reliably ourselves. By tightening security around their employees they have inadvertently reduced their ability to produce secure applications.
Black-listing dangerous content is never going to be workable, so the only recourse is white-listing. The question is whether it’s possible to put together a white-list of low-risk software that strikes a balance between providing the most common tools we need and not leaving anything obvious out. Whilst the perfect tool for a one-off problem might be a ground-breaking new programming language, I’d wager the problem is also solvable, albeit with a little more effort, in one of the more common, general-purpose languages such as C++ or Python. Specialised tools definitely have their place, but they also have a cost, and getting access to them is just one part of it.
Disallowing local admin rights on a machine for a normal user is a good defence-in-depth measure; it helps the user to protect themselves from their own mistakes. But as we’ve just established developers need these same rights because they often have to evaluate tools. If they’re expected to handle deployment or do some form of local end-to-end testing they may need to install, configure and debug their applications whilst running as services. The alternative is essentially remote debugging, which is a painful experience at the best of times. It also takes a test environment out of action and creates a single point of contention which is the whole reason we have our own machines in the first place.
Once again the problem is no doubt one of trust. The theory must be that if you allow someone the right to install software, any software, they will clearly use it to abuse their position. Whilst we’ve seen that it’s possible to create bigger problems by being granted such powers, this kind of problem only goes to highlight a lack of partitioning within the organisation’s infrastructure. Production services must be isolated from development services and there should be some kind of airlock required to bridge them; in fact this problem is probably one of the best adverts for cloud-based computing.
Ultimately though, if you can’t trust your developers and support staff with admin rights to their own machines, how on earth are you going to trust them with the keys to the crown jewels? I’ve tried in the past to have a grown-up, responsible conversation to help establish trust, by promoting a policy where my day-to-day account should not also be used for support. I was told that, whilst it sounded like a good idea in theory, they didn’t have enough licenses for their 3rd party account management tool to allow it. I then suggested that the cost of an accidental failure would probably dwarf that, but my comment was not appreciated.
If I really had to give up this right my (Windows) development machine would need to come pre-configured with a bare minimum of: Visual Studio, the VCS client, a decent Notepad replacement and Gnu on Windows (or equivalent). Of course I’d be the first one out the door the moment the contract came up for renewal.
They say every problem in Computer Science can be solved with an extra level of indirection; can that work here? One option would be to stop doing in-house development altogether and just outsource it all. That way we’d be working for a company whose bread-and-butter is IT, and so one that stands a better chance of understanding our needs. Sadly the rise of Agile means close collaboration with our customer is called for, and that puts us straight back into the client’s offices again, but this time with even less influence.
The other candidate for overseeing our well-being in a big corporation is the Enterprise Architecture team. These people are allegedly the gatekeepers of the company’s IT strategy. Their role, as I’ve always understood it, is to look after the big picture, and I can’t think of a bigger IT picture than providing the basic tools that every architect, developer, tester and administrator needs to do their job. Sadly it’s an exclusive club concerned only with “design”; what programmers apparently do is merely “an implementation detail”. Where is the equivalent department for us? There is no “Enterprise Implementation” team that I’ve heard of.
The closest thing I have ever seen to something like this was called The Technology Council. Their role was to try and maintain a level of consistency across the various tool chains that the in-house projects used, so that the skill sets of both developers and support staff were more portable across its applications. It also tackled the licensing issue and potential impedance-mismatch problems between applications and operations. However it didn’t seem to include internet access as one of its mandates.
The only other approach I’ve thought of would be to take a leaf out of the eXtreme Programming manual and get someone from the security team to sit with us in a pair programming kind of way. Then they might at least see what we face and the compromises we have to make.
Whatever solution we come up with starts with finding the right person to talk to, and that’s often the first hurdle we stumble at. Blocked content is usually presented in a “big brother” fashion, with stern words to make you feel intimidated. Nowhere on the page does it give the details of the person or team you should consult if you feel access to the content would be beneficial to getting your job done, and is therefore in the business’s own interest to grant.
As a freelance programmer I live and work at the bottom of the corporate food chain. As such I know I can’t begin to imagine what it must be like to try and manage even a small company, let alone a large corporation with thousands of workers, each with different roles and abilities. I know it’s not personal and that I’m only being tarred with the same brush as the rest of the workers because it’s easier to create a homogeneous environment.
But surely there must be a way forward; a way for me to do my job without even having to contemplate doing things that either put me out of pocket, risk violating the terms of my contract or just make me look incompetent because everything takes longer than it should.
[1] http://en.wikipedia.org/wiki/Up_to_eleven
[2] http://en.wikipedia.org/wiki/Dancing_pigs
[3] http://en.wikipedia.org/wiki/Principle_of_least_privilege
14 February 2014
Bio
Chris is a freelance developer who started out as a bedroom coder in the 80s writing assembler on 8-bit micros; these days it’s C++ and C#. He also commentates on the Godmanchester duck race and can be contacted via gort@cix.co.uk or @chrisoldwood.