It's particularly true in the field of Web application security.
I continuously monitor the major Web application security discussion forums, and occasionally chime in on a few items, but it has become increasingly clear that too often the folks discussing Web security (as experts) are operating on a plane several layers removed from how real people use those applications and what "security" really means to the end user. That's not to say the security experts are wrong - they're not - but the root cause of the security threat is often quite remote from the technical or technological solution the experts propose.
An area of interest to me within Web application security is "security ergonomics" - a label I tend to use to describe how well the security elements of an application fit (and work) with the non-security end user, and how much they impact the safe use of the application. The net effect of a poorly thought out security mechanism is typically counterproductive to the overall security of the application it was designed to protect. Which, when translated to non-geek, basically means "if it gets in the way of how the user wants to use the application, or forces them to think about it, it'll be ignored and bypassed".
For technical minds, the sign on the machine in the photo above - "CAUTION - This machine has no brain use your own" - is what is expected of application users today. Unfortunately, I don't believe that such warnings work, and more effort needs to be applied by Web application developers to embed their security and make it invisible to the end user.
Reconsider the warning above. While it sounds like a great message that should be applied to Web application security, there are some key reasons why it is inappropriate. Firstly, the probability that Joe Public can randomly walk around within an industrial environment and come face to face with this machinery is pretty remote. Secondly, the warning was designed for the skilled and trained employees who regularly work within that particular environment - not as a warning, but as a reminder of the system's capabilities.
By way of example of where this divergence in security thinking occurs, examine the login panels for various Web mail portals to the right.
Most technical users, and those with some degree of security paranoia, would know what to do next. But, as for Joe Public, what the hell is "Advanced Login"? Why should it be important? Why should they use it? What's the difference between "advanced" and "secure"?... "let me in to my webmail, damn it, and don't ask me stupid questions!"
I'll let you in on a dirty little secret: 99.9% of the users of these webmail portals believe they're secure already. The "Advanced" and "Secure" login options are redundant for them - not because they willingly choose to be insecure, but because they have assumed that they are already secure (or secure enough).
Sure, the security geeks can argue all they like about the merits of providing these enhanced security options, but for 99.9% of their user base those options are redundant. On top of that, they add to the users' confusion about how to use the application.
So, with that in mind, and in the context of "security ergonomics", the correct Web application security model should have been:
- The maximum security option is the default - i.e. HTTPS - not the other way around.
- There should be no other login options visible on the first login panel.
- Only after they fail to log in the first time should they be offered the facility to lessen their security settings (e.g. offer to turn off HTTPS - and provide a visible warning that it is now less secure).
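The three rules above can be sketched as a small decision function. This is purely illustrative - the URLs, function names, and parameters are my own invention, and no real webmail portal is implied to work exactly this way - but it shows the flow: HTTPS is the unconditional default, and the downgrade path only becomes available after a failed login, with an explicit warning attached.

```python
# Hypothetical sketch of the "secure by default" login model described above.
# All names and URLs are illustrative assumptions, not a real portal's API.

def downgrade_offered(failed_attempts: int) -> bool:
    """The weaker (plain-HTTP) option is only ever shown after a failed login."""
    return failed_attempts >= 1

def login_url(failed_attempts: int, user_chose_http: bool = False) -> str:
    """Return the login endpoint the portal should present.

    The maximum-security option (HTTPS) is the default; there is no other
    option on the first login panel. Only after a failure, and only if the
    user explicitly accepts the downgrade, do we fall back to HTTP - and
    the fallback carries a visible "less secure" warning.
    """
    if not downgrade_offered(failed_attempts) or not user_chose_http:
        return "https://mail.example.com/login"
    # User failed once and explicitly accepted the weaker setting.
    return "http://mail.example.com/login?warning=insecure"
```

Note the design choice: the user never has to understand what "Advanced" or "Secure" means up front - the safest path is simply what they get, and the insecure path requires a deliberate, warned-about opt-out.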