Tuesday, October 29, 2013

Disclosure Policies vs. Security Researchers

So, I suck at blogging consistently.  In my defense, it's been a tough month (but that's another story for another time).  This post is a follow-up to two previous posts.  In the first post, I made an argument for bug bounties.  My good friend Lenny Zeltser posted a response, making a couple of good points, which I addressed in a follow-up post.  But I failed to address Lenny's question there, deferring it to a second follow-up.  Unfortunately, that took almost a month to write.  For the sake of completeness (and for those too lazy to click the links), Lenny's comment/question was:
While some companies have mature security practices that can incorporate a bug bounty program, many organizations don't know about the existence of the vulnerability market. Such firms aren't refusing to pay market price for vulnerabilities--they don't even know that vulnerability information can be purchased and sold this way. Should vulnerability researchers treat such firms differently from the firms that knowingly choose not to participate in the vulnerability market?
I addressed everything but the last question (I think) in the last post.  But Lenny raises a serious ethical concern here.  Should we as security researchers treat firms differently based on their participation in (or knowledge of) the vulnerability market?  There is an implied question here that may be difficult to examine: namely, how do you as a security researcher determine whether a firm has knowledge of the vulnerability market at all?

I would propose that one way to confirm knowledge is an explicit "we don't pay for bugs" message on the website.  This implies that they know other companies pay for bugs, but they refuse to lower themselves to that level.  IMHO, these guys get no mercy.  They don't give away their research (their software), so I'm not really interested in giving mine away either.  Ethically, I think I'm good here to release anything I find (and fire for effect).

Generally, I put any company with a disclosure policy (but no bug bounty) in the same category as those who simply refuse to pay.  If you have a published disclosure policy, the claim that you don't also know about bug bounties doesn't pass the sniff test.  Even if there's no explicit policy on paying (or not paying) bounties, the omission in and of itself signals that you're not paying.  Bad on you.  Again, I argue for no mercy, using the same "your time isn't free, why should mine be" argument.

In the two categories above, it's pretty easy to slam a company with full public disclosure or a third-party sale.  What about when neither of those conditions has been met?  What sorts of disclosure are appropriate in those cases?  Is a third-party sale of the vulnerability appropriate?

In my opinion, this can be handled on a case-by-case basis.  However, I'm going to take the (probably unpopular) position that the answer has as much to do with the security researcher as it does with the target company.  For instance, I would expect a large vulnerability research firm to exercise some level of responsible disclosure when dealing with a software company that employs two full-time developers.  I would hope that they would work to perform a coordinated disclosure of the vulnerability.

However, I don't think an independent vulnerability researcher with no budget has much motivation to work closely with a large software vendor that has no disclosure policy.  If the software firm is making money, why expect an independent researcher to work for free?  The security researcher may find himself in a sticky situation if the company has no public bug bounty.  Does the company have an explicit policy not to pay for bugs?  Is the lack of a disclosure policy just an oversight?  

The independent researcher might prefer to give the vulnerability to the vendor, but also has rent to pay.  In this case, should the researcher approach the vendor and request payment in exchange for the bug?  This seems to be at the heart of what Lenny originally asked about.  Clearly this is an ethical dilemma.

If the researcher approaches the vendor asking for money, only three possible outcomes exist:
  1. The vendor pays a reasonable (market price) bounty
  2. The vendor offers a pittance for the vuln (see Yahoo! t-shirtgate)
  3. The vendor refuses to pay any price (and may attempt legal action to prevent disclosure)
Two of these outcomes are subpar for the researcher.  Assuming all three have equal probabilities of occurrence (in my experience they don't), the answer is already clear.  Worse, in the two subpar cases, the researcher may have limited his ability to sell the vulnerability to another party: in one case because of pending legal action, in the other because he has already released enough detail to the vendor to substantiate the bug, allowing the vendor to discover and patch it on their own.

So my answer to Lenny's question is a fair "it depends."  I'm not at all for a big corporate entity picking on the little guy.  But if the tables are turned, it sounds like a payday to me (whether or not the existence of a vulnerability market can be provably known).

Only one question remains in my mind: what if there is no bug bounty, but the attack surface for the vulnerability is so small that there is also no market for it?  Well, in this case disclosure is coming; it's just a question of whether the disclosure is coordinated with the vendor.  I don't have strong opinions here, but I feel it's up to the researcher to evaluate which disclosure option works best for him.  Since he's already put in lots of free labor, don't be surprised when he chooses the one most likely to bring in future business.
