Accuvant's Ryan Smith talks about disclosure, bounty programs, and vulnerability marketing with CSO, in the first of a series of topical discussions with industry leaders and experts.
Hacked Opinions is an ongoing series of Q&As with industry leaders and experts on a number of topics that impact the security community. The first set of discussions focuses on disclosure and how pending regulation could impact it. In addition, we asked about marketed vulnerabilities, such as Heartbleed, and bounty programs: do they make sense?
CSO encourages everyone to take part in the Hacked Opinions series. If you would like to participate, email Steve Ragan with your answers to the questions presented in this Q&A, or feel free to suggest topics for future consideration.
Where do you stand: Full Disclosure, Responsible Disclosure, or somewhere in the middle?
Ryan Smith (RS), Vice President & Chief Architect, Accuvant and FishNet Security:
Full disclosure and responsible disclosure are not mutually exclusive philosophies. Full disclosure is the philosophy that secrecy is bad for security. Responsible disclosure is the philosophy of reducing the potential harm to security. Coordinated disclosure is the practice of coordinating with vendors when publishing vulnerability information.
When discussing the best form of disclosure, there are many facets to consider. Vendors differ in their ability to respond effectively to security vulnerabilities. One vendor may have an entire team dedicated to vulnerability disclosure, while another may have nobody tasked with disclosure and no prior experience handling it.
The target of the vulnerability research differs tremendously these days as well. Finding a vulnerability in a traditional software product is different than finding a vulnerability in a website, and both differ from finding a vulnerability in an embedded device. Patching is easy and straightforward in traditional software. With websites like Facebook or Google, it may not benefit anyone to know the details of the vulnerability, since only Facebook or Google need to patch. With embedded devices, addressing the vulnerability can be difficult.
Sometimes it requires a complete board revision, and sometimes deploying a patch requires boots on the ground to visit millions of locations. A rigid, one-size-fits-all policy will cause unnecessary chaos in some types of systems. It’s better to intelligently determine how best to disclose and to ensure that your disclosure policy allows enough room to adjust.
If a researcher chooses to follow responsible / coordinated disclosure and the vendor goes silent -- or CERT stops responding to them -- is Full Disclosure proper at this point? If not, why not?
RS: When a researcher is working closely with a vendor to help them fix a vulnerability and the vendor goes silent, it can be frustrating. Did they go silent because they decided not to address the vulnerability? Did the contact have a medical emergency? Or do they disagree about what constitutes a vulnerability?
The point here is that silence is never a good communication option as it usually causes people to synthesize information for themselves. Sometimes, this synthesis causes people to believe that going public with the vulnerability information is the only way to allow people to secure themselves.
In this scenario, it’s important for the researcher to take a step back and evaluate the situation from an altruistic perspective rather than from the passion that comes from having spent months discovering the vulnerability. The researcher must determine how he or she can best help the security of the people who are using the affected system.
Bug Bounty programs are becoming more common, but sometimes the reward being offered is far less than the perceived value of the bug / exploit. What do you think can be done to make it worth the researcher's time and effort to work with a vendor directly?
RS: Every researcher has individualized motivations for performing the research and perceives value differently. It’s important to consider that performing vulnerability research while not under contract for the vendor is effectively working without getting paid.
My motivation has always been to expand my and the security community’s knowledge of how systems work, not to make money. But if I were in it for the money, when you add up all of the associated costs (devices, hardware, software, and time), many of the bug bounties appear to woefully undervalue the researcher’s time and investment.
Vendors can better engage knowledge-motivated researchers by creating partnerships and sharing. They could give researchers free access to their hardware, software, or services. They could invite researchers into private beta tests.
They could invite researchers to speak with the product development teams. By engaging meaningfully and sharing in kind, vendors give knowledge-motivated researchers better access and allow them to serve as partners who deliver real value.
Do you think vulnerability disclosures with a clear marketing campaign and PR process, such as Heartbleed, POODLE, or Shellshock, have value?
RS: When it comes to vulnerability marketing, you have to consider why organizations are undertaking such measures. Vulnerability research is expensive, so companies want to extract the most value from those endeavors. With Heartbleed, I think there was value in giving it a name and promoting awareness of the vulnerability. Some of the marketed vulnerabilities that followed had more questionable value. If you break down the phenomenon of marketed vulnerabilities into elements, there are good qualities and bad qualities.
First, anti-virus companies have been naming malware since their inception. Giving vulnerabilities a memorable, pronounceable name allows better discussion, and allows people to more easily remember the nuances of the vulnerability so the mistake isn’t repeated.
If a company were to set up a website, design a logo, and hype the vulnerability to the media more than it warranted, that’s less productive. It improperly focuses the attention of the security community on things that don’t matter, possibly diverting resources from more important vulnerabilities. But if the attention is warranted and the facts aren’t skewed, then it doesn’t matter if companies throw vulnerability launch parties.
If the proposed changes pass, how do you think Wassenaar will impact the disclosure process? Will it kill full disclosure with proof-of-concept code, or move researchers away from the public entirely, preventing serious issues from seeing the light of day? Or, perhaps, could it see a boom in responsible disclosure out of fear of being on the wrong side of the law?
RS: If the proposed Wassenaar laws pass, the community will adjust over the course of a year or two. If nobody is pursued for Wassenaar violations, then it’s likely to be business as usual. If people are pursued, researchers will likely stop communicating their findings to vendors, reverting the security industry to 1997, when 0-day exploits were traded in tight-knit circles.
The proposed Wassenaar laws, as they’re written and as I understand them, make it so that disclosure of vulnerabilities to an organization outside of the US would require an export license for each disclosure.