We found out a couple of days ago that there’s a proposal to change policy so that only people working in Operations would have card-swipe access to the data centers; the rest of us who work on the machines there would (under the proposed policy) have to be checked in and out by the operators.
Well, this seems like a great idea. Now every time we need to go into the data center we’ll get to stand around and wait for an operator, who will have to interrupt whatever he’s doing to let us in. And what benefit does this provide? It certainly isn’t going to make things any more secure. (The amount of effort being expended on physical security for the data centers already seems way out of proportion to the risks.) It will, however, increase the feeling that management doesn’t trust us or respect our abilities. (And hurt morale, as was so eloquently expressed in yesterday’s ITS all-staff meeting.)
Some people seem to have the idea that if you set up sufficient rules you can prevent bad things from happening. This never actually works: as long as you’re doing something, there’s a chance that things will go wrong. The best way to get things done with minimal risk is to trust people but hold them accountable for the results.
I think rules can slow down the rate of bad things happening. (Think pre-flight checklists.) But they can also slow down the rate of good things happening. This cost is often invisible, or nearly so (beyond griping about red tape): it’s hard to measure the things that haven’t happened.
Trusting people requires accepting risk. This is not a risk-tolerant culture, and all the “fresh looks” are not going to improve that situation.
I saw this linked over on Hacker News and thought it was a nice contrast: http://en.wikipedia.org/wiki/Nordstrom#Employee_handbook
It also seems to me that it’s kind of like screening the pilot before he gets on the plane. Um, he’s the pilot. He can already crash the plane if he wants to, and the only thing keeping him from hijacking the plane is the definition of “hijack”.
Does anybody really believe that the people being signed into the data centers, who work on the equipment there as part of their jobs, will be prevented from doing something bad to that equipment once the operators have signed them in? Here we have only the cost of the security, without the actual improvement in security. You have to trust the people working on the equipment, and signing them in doesn’t change that fact.
Of course, they DO screen pilots, even putting them through the new full-body scans and making some of them furious ( http://www.baltimoresun.com/wreg-pilot-protest,0,774981,full.story ). Makes no sense, but they do it.
In our culture of distrust, I think such crazy rituals are perhaps intended to emphasize that no one is trusted. “Don’t confuse yourself with a trusted person just because you have some responsibility.” And perhaps sometimes to increase fear, making people more compliant with decisions from higher up and more reluctant to protest. But yes, they are patently senseless in terms of their ostensible purpose.
Terrible for societal cohesion OR work morale.
Someone wants to look like he is “improving security”, so the next eval will be better. If you can’t trust the folks who have to maintain the hardware and software, you should replace them with folks you do trust. The Ops folks are (in my experience) really good at their job, but unless they are watching over your shoulder while you work on the equipment, the signing-in business does NOTHING.
sigh, management…