Monday, December 7, 2009

Northrop Grumman Funnels Money Into Security Research

Universities getting together to spend millions of Northrop Grumman's dollars on security research? Sounds good to me. Earlier this month, Northrop Grumman announced its new Cybersecurity Research Consortium, a partnership among three university research organizations.
Among the topics to be addressed are trusted computing platforms, control systems security, and mobile phone forensics. The effort aims to push information assurance and security development ahead of emerging threat trends. Read more at CyLab's CyBlog.

Friday, October 30, 2009

Cloud Compliance and A6

Commenting on the same blog's posts (TaoSecurity) twice in a row is probably bad blogger form, but I do read it often. Anyway, in a recent post titled "Initial Thoughts on Cloud A6," Richard Bejtlich gives his feedback on an idea to create an API that lets cloud consumers retrieve security information about the cloud from the provider. The API is called A6 (Audit, Assertion, Assessment, and Assurance API), and Richard points out that while this work claims to address issues with auditing cloud security, it really addresses compliance-driven issues related to the cloud.

You can read much more about the API effort on the blog of Chris Hoff, one of its creators, specifically the post titled "Extending the Concept: A Security API for Cloud Stacks."

There is no question that this is more about compliance than about actual security. As a service provider employee who spends a lot of time dealing with customer-furnished, compliance-driven audit questionnaires, I have thought about this problem extensively.

As you can imagine (or recall, if you are employed similarly), the questions asked in most of these questionnaires overlap heavily, since they are based on the same industry compliance standards and regulations that peer organizations must adhere to. I have often thought there should be a standardized way to provide the information requested in most of these questionnaires, and I have even found a few attempts to implement such standardized information gathering. Take a look at the Shared Assessments program's SIG questionnaire, which I think is an excellent example of work in this area. The idea is to have an agreed-upon set of questions, acceptable responses, and procedures for receiving and evaluating the answers. In doing so, you make it easy for service providers to maintain and provide this information in a format that consumer organizations will likely accept.
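To make the idea concrete, here is a minimal sketch of what such a standardized exchange could look like. The question IDs, answer values, and acceptance criteria are all invented for illustration; they are not taken from the actual SIG questionnaire. The point is simply that one maintained answer set can serve many consumers, each applying their own criteria:

```python
# Hypothetical sketch: the provider maintains one canonical answer set keyed
# by standard question IDs, and each consumer evaluates it against their own
# acceptance criteria instead of mailing a bespoke spreadsheet.

# Provider-side: one maintained set of answers (question IDs are made up).
provider_answers = {
    "AC-01": "yes",        # Is access to systems role-based?
    "IR-02": "partial",    # Is there a documented incident response plan?
    "PE-05": "yes",        # Are data centers physically access-controlled?
}

# Consumer-side: acceptance criteria expressed against the same question IDs.
acceptance_criteria = {
    "AC-01": {"yes"},
    "IR-02": {"yes", "partial"},
    "PE-05": {"yes"},
}

def evaluate(answers, criteria):
    """Return the question IDs whose answers fall outside the criteria."""
    return [qid for qid, acceptable in criteria.items()
            if answers.get(qid) not in acceptable]

gaps = evaluate(provider_answers, acceptance_criteria)
print(gaps)  # an empty list means every answer met this consumer's criteria
```

A second consumer with stricter criteria (say, requiring "yes" for IR-02) would run the same `evaluate` against the same answer set and get a different gap list, which is exactly the economy of scale a shared standard is after.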

The problem is not that it cannot be done. The problem is that for this type of system to work, everyone has to implement the in-house procedures to support it. Unfortunately, audit procedures tend to grow from the inside outward: by the time a questionnaire reaches us, it is tightly wound into the customer's own corporate policies and procedures. To quote Craig Balding's description of the typical security questionnaire:

1. it’s the result of 100 hours of internal team meetings
2. it’s gone through 14 drafts, 20 reviewers inboxes, 76 yellow highlighter comment fields and was printed at least 6 times
3. it only asks IT security questions..
4. it’s laced with a few tricky landmine questions based on potential security issues raised (but not satisfactorily answered) in online forums and provider support forums
5. it contains 25 attachments detailing all the company security policies that *must* be followed...

I would add my own #6: it is revised or completely changed every year to request more and more information and documentation from the service provider, regardless of the practicality or security implications of providing such detail about an infrastructure shared by a myriad of consumer organizations, including competitors...

That being said, the A6 API is just another program like Shared Assessments, possibly offering a more dynamic way of presenting the agreed-upon information to the consumer organization. Instead of sending and receiving a spreadsheet, the consumer would run an application to gather and evaluate the information, and follow up on any perceived gaps. This implies costs for implementing and changing procedures on both sides, and again, both the service provider and the consumer organization have to agree that this is the best, most cost-effective method for them. I'm not saying it isn't a good idea, but I do not think it solves the underlying issue.
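For a rough idea of the consumer side of that dynamic exchange, here is a sketch of tooling that pulls a provider's current security assertions and flags perceived gaps automatically. Everything here is an assumption for illustration: the endpoint, the assertion fields, and the requirement checks are invented, and the network request is replaced with a canned JSON response, since the A6 proposal I'm describing does not define a concrete schema:

```python
import json

def fetch_assertions(provider_url):
    # Stand-in for an authenticated HTTPS request to a hypothetical A6-style
    # endpoint; a real client would make an actual HTTP call here.
    canned = '{"patch_sla_days": 30, "encryption_at_rest": true, "soc2_report": true}'
    return json.loads(canned)

def find_gaps(assertions, requirements):
    """Compare the provider's live assertions against consumer requirements."""
    return [key for key, check in requirements.items()
            if not check(assertions.get(key))]

# Consumer-defined checks, keyed by (invented) assertion names.
requirements = {
    "patch_sla_days": lambda v: v is not None and v <= 30,
    "encryption_at_rest": lambda v: v is True,
    "soc2_report": lambda v: v is True,
}

assertions = fetch_assertions("https://provider.example/a6")
print(find_gaps(assertions, requirements))  # [] -> nothing to follow up on
```

Note that the hard part is unchanged: both sides still have to agree on the assertion names, their meanings, and acceptable values before any of this automation pays off, which is the underlying issue the post is getting at.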

Thursday, October 8, 2009

Richard Bejtlich on Technical Visibility

Security guru Richard Bejtlich's latest post describes a scale with which we can measure the Technical Visibility of a piece of technology. In the post he suggests that the technology we use is becoming increasingly feature-rich without a corresponding move toward open architecture, threatening our ability to trust what a device, application, or machine is really doing behind the scenes. The question is: how do you measure the true *need* for a certain level of Technical Visibility? In other words, I think we would benefit from a scale that relates both the properties of a piece of technology (for example, IP-enabled vs. non-IP-enabled) and the context in which it is used (for example, storing confidential information vs. storing my grocery list) to a specific level of Technical Visibility. The concept does, in my mind, bring up familiar questions about how certain companies decide how much access the consumer should have to study the inner workings of a device like, for example, the iPhone...
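A toy sketch of the kind of scale suggested above: map a technology's properties and its usage context to a minimum Technical Visibility level it should offer. The property names, weights, and thresholds are all invented here for illustration; Bejtlich's post defines the visibility scale itself, not this mapping:

```python
# Invented weights: how much each property or context raises the need
# for visibility into what the technology is actually doing.
PROPERTY_WEIGHTS = {"ip_enabled": 2, "runs_third_party_code": 2, "always_on": 1}
CONTEXT_WEIGHTS = {"stores_confidential_data": 3, "stores_grocery_list": 0}

def required_visibility(properties, contexts):
    """Return a required TV level: 0 = opaque is fine, 3 = full openness."""
    score = sum(PROPERTY_WEIGHTS.get(p, 0) for p in properties)
    score += sum(CONTEXT_WEIGHTS.get(c, 0) for c in contexts)
    if score >= 6:
        return 3
    if score >= 3:
        return 2
    return 1 if score > 0 else 0

# An IP-enabled device running third-party code and holding confidential data
# should demand far more visibility than an offline gadget holding a grocery list.
print(required_visibility({"ip_enabled", "runs_third_party_code"},
                          {"stores_confidential_data"}))
print(required_visibility(set(), {"stores_grocery_list"}))
```

The numbers are arbitrary; the point is only that pairing properties with context yields a defensible *required* level, which you can then compare against the visibility a vendor actually grants.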