Category Archives: Vulnerability Disclosure

NTIA, Bug Bounty Programs, and Good Intentions

[Note: This blog had been sitting as a 99% completed draft since early September. I lost track of time and forgot to finish it off then. Since this is still a relevant topic, I am publishing now despite it not being quite as timely in the context of the articles cited in it.]

An article by Kim Zetter in Wired, titled “When Security Experts Gather to Talk Consensus, Chaos Ensues”, talks about a recent meeting that tried to address the decades-old problem of vulnerability disclosure.

The article covers a recent meeting organized by the National Telecommunications and Information Administration (NTIA), a division of the US Commerce Department (DoC). The meeting itself is not the main reason I am blogging, though most would assume I would speak up on this topic. I’ll give you an odd insight into this from my perspective, then get to the real point. The person who organized this was virtually introduced to me on July 31, 2015. He replied the same day, with a great introduction that showed intent to make this a worthwhile effort:

The US Dept of Commerce is trying to help in the old dance between security researchers and system/SW vendors. We think that folks on most sides want a better relationship, with more trust, more efficiency, and better results. Our goal is to bring the voices together in a “multistakeholder process” to find common ground in an open participant-led forum.

I’ve been trying to reach out to many people in this area, on all sides. I know that you’re both veterans of many discussions like this, and I’d like to chat with you about lessons learned, how things may (or may not) have changed, and generally get your insight into how to make this not suck.

That level of understanding from a U.S. government employee is rewarding and encouraging. But, in the interest of fairness and pointing out the obvious, the first thing I asked:

In the past couple of weeks, I have heard mention of the Dept. of Commerce becoming involved in vulnerability disclosure. One thing that is not clear to myself, and many others, is what your intended role or “jurisdiction” (for lack of better words) is?

That is an important question, one the DoC must consider and understand how to answer.

That question should still be on everyone’s minds, as it has not been answered in a factual manner. Good intentions only get you so far. Having another government player in this mess, one with no power, no jurisdiction, and only “the best of intentions”, can only muddy the waters at best. The notes of the ~6-hour meeting are now online. I haven’t read them, and don’t plan to. This is pure academic masturbation that has escaped from academia, and it misses the age-old point that so many refuse to admit: “When the $government outlaws chinchillas, only the outlaws will have chinchillas.” Seriously, it is a stupidly simple concept that is basically as much the basis for our society as anything else you hold dear.

I’m jumping past that, because a vulnerability tourist said something else that needs addressing. If you haven’t seen me use that term, ‘vulnerability tourist’ refers to someone who dabbles in the bigger world of vulnerability aggregation, disclosure, and the higher-level ideas surrounding them. This doesn’t speak to finding vulns (although it helps), disclosing them (although it helps), or aggregating them for long-term analysis (although it helps). It speaks to someone who thinks they are doing good by speaking as some form of expert, but is actually harming the cause due to a lack of real knowledge on the topic. When someone who has next to no experience in those realms speaks on behalf of the industry, it becomes difficult to take them seriously… but unfortunately, journalists do. Especially when it comes to hot topics or the dreaded “0day” vulnerabilities, which happen every day while the media cherry-picks the occasional one. As usual in our industry, give such a person a bit of rope and you are likely to find them hanging a few days or weeks later.

Focusing on a single aspect of the Wired article:

Members of the audience snickered, for example, when a representative from the auto industry pleaded that researchers should consider “safety” when testing for vulnerabilities. At a bar after the event, some attendees said automakers are the ones who don’t seem concerned about consumer safety when they sell cars that haven’t been pen-tested for vulnerabilities or when it takes them five years to fix a known vulnerability.

And when Corman sought community support for new companies entering the bug bounty arena, some attendees responded with derision. He noted that after United Airlines launched its bug bounty program this year—the first for the airline industry—it suffered backlash from the security community instead of support.

Just Google “volkswagen emissions software” and you will see why the public cannot trust auto manufacturers. That has nothing to do with vulnerability research, and yet at least one manufacturer manipulated its software to defraud the public, in ways that may hurt the world environment more than we can comprehend. If that isn’t enough for you, consider that the same company spent two years doing everything in its power to hide a vulnerability in its software, one that may have allowed criminals to more easily steal consumers’ cars.

So first, ask yourself why anyone from the security arena is so gung-ho about supporting the auto industry. Sure, it would benefit our industry, and more importantly benefit the average consumer. Effecting that change would be incredible! But given the almost three decades of disclosure debate centered on questionable dealings between researchers and vendors, it is not a fight you can win quickly. And more importantly, it is not one you capitulate on for your own gain.

Moving past that, the quote from Corman speaks to me personally. When United announced their bug bounty program, it received a lot of media attention, and this is critical to note. A company not known for a bug bounty program, in an industry not known for them, offering perks that were new to the bounty scene (i.e. United, airlines, and frequent flier miles). Sure, that is interesting! Unfortunately, none of the journalists that covered the new bounty program read its terms, or if they did, they couldn’t comprehend why it was a horrible program that put researchers at great risk. To anyone mildly familiar with bounty programs, it screamed “run, not walk, away…”. I was one of the more vocal in criticizing the program on social media. When a new bounty opens up, hundreds of (largely) neophyte researchers flock to it, trying to find the lowest-hanging fruit and get the quick and easy bounties. If you have run a vulnerability reporting program, even one without bounties, you have likely witnessed this (I have). I was also very quick to start reaching out to my security contacts, trying to find back-channel contacts at United to give them feedback on their offering.

United’s original bounty program was not just a little misleading; it was entirely irresponsible. It basically said, in layman’s terms, “if you find a vuln in our site, report it and you may get up to 1 million airline miles!” It also said you cannot test ANY united.com site, you cannot test our airplanes (on the back of the Chris Roberts / United / FBI drama), our mobile application, or anything else remotely worth testing. The program had a long list of what you could NOT test, which excluded every single target a malicious hacker would go after. Worse? It did not offer a test/dev network, a test airplane, or any other ‘safe’ target. In fact, it even excluded the “beta” United site! Uh… what were bounty-seekers supposed to do here? If United thinks that bug bounty seekers read past the “mail bugs to” and “this is the potential bounty” sections, they need to reconsider their bounty program. The original bounty excluded:

“Code injection on live systems”

Bugs on customer-facing websites such as:
united.com
beta.united.com
mobile.united.com

Yet, despite that exclusionary list, they did NOT include what was ALLOWED to be tested. That, in modern security terms, is called a honeypot. Don’t even begin to talk about “intent”, because in a court of law, with some 17-year-old facing CFAA charges, intent doesn’t enter the picture until way too late. The original United program was set up so that they could trivially file a case with the FBI and go after anyone attacking any of their Internet-addressable systems, their mobile apps, or their airplanes. And this came shortly after the messy public drama of a white hat hacker telling the media he had tested the airplane systems and, in so many words, could have done “bad things”.

My efforts to reach out to United via back-channels worked. One of their security engineers who is part of the bounty program was happy to open a dialogue with me. See, this is where we get to the bits that are the most important. We traded a few mails, where I outlined my issues with the bounty program and gave them extensive feedback on how to better word it, so that researchers could not only trust the program, but help them with their efforts. The engineer replied quickly saying they would review my feedback, and I never heard back. That is the norm for me when I reach out and try to help a company. So now, when I go to write this blog, of course I look to see if United revised their bounty program! Let’s look at the current United bug bounty offering:

Bugs that are eligible for submission:

Authentication bypass
Bugs on United-operated, customer-facing websites such as:
united.com
beta.united.com
mobile.united.com
mystatus.united.com
smartphone.continental.com
Bugs on the United app
Bugs in third-party programs loaded by united.com or its other online properties

Wow… simply WOW. That is a full 180 from the original program! Not only do they allow testing of the sites they excluded before, they opened up more sites to the program. They better defined what is allowed, both in terms of technical attacks and targets, and are considerably more lenient about testing. I still disagree a bit on a few things that are “not allowed”, but completely understand their reasons for doing so. They have to balance the safety of their systems and customers with the possibility that a vulnerability exists. And they are wise to consider that a vulnerability may exist.

So after all that, let’s jump back to the quote which drew my ire.

And when [Josh] Corman sought community support for new companies entering the bug bounty arena, some attendees responded with derision. He noted that after United Airlines launched its bug bounty program this year—the first for the airline industry—it suffered backlash from the security community instead of support.

This is why a soundbite in a media article doesn’t work. It doesn’t help vendors, and it doesn’t help researchers. It only helps someone who has largely operated outside the vulnerability world for a long time. Remember, “security researcher” is an overly vague term that has many meanings. Nothing about that soundbite suggests Corman has experience in any of the disciplines that matter in this debate, and as a result, his lack of experience shows clearly here. Perhaps it was his transition from being a “DevOps” expert for a couple years, to being a “Rugged Software” expert, to becoming a “vulnerability disclosure” expert shortly after?

First, United’s original bug bounty offering was horrible in every way. There was basically zero merit to it, and it only served to put researchers at risk. Corman’s notion that scaring companies away from such an example is ‘bad’ is irresponsible, and contradictory to his stated purpose with the I Am The Cavalry initiative. While Corman’s intentions are good, the delivery simply wasn’t helpful to our industry, the automobile industry, or the airline industry. Remember, “the road to hell is paved with good intentions”.

A quick, factual reminder on the value and reality of a “EULA”… (aka MADness)

This post is in response to the drama of the last few days, where Mary Ann Davidson posted an inflammatory blog about security researchers who send Oracle vulnerability reports while violating their End-user License Agreement (EULA… that thing you click without reading for every piece of software you install). The post was promptly deleted by Oracle, which then said it was not the corporate line, all while journalists of course felt obligated to cover the drama. You can read up on the background elsewhere, because it has absolutely no bearing on reality, which this very brief blog covers.

This is such an absurdly simple concept to grasp, yet the CISO of Oracle (among others) is oblivious to it. Part of me wants to write a scathing 8 page “someone is wrong on the Internet” blog. The other part of me says sleep is more valuable than dealing with these mouth-breathing idiots, of which Davidson is one. Sleep will win, so here are the cold, hard facts and the reality of the situation. Anything else should be debated at some obscure academic conference, but we know Oracle pays big money to argue it to politicians. Think about that.

Reality #1: Now, let’s start with an age-old saying… “when chinchillas are outlawed, only outlaws will have chinchillas.” Fundamentally, the simple fact that cannot be argued by any rational, logical human is that laws apply to law-abiding citizens. Those who break the law (i.e. criminal, malefactor, evildoer, transgressor, culprit, felon, crook, hoodlum, gangster, whatever…) do not follow laws. Those who ignore criminal law likely do not give two fucks about civil law, which a EULA violation would fall under.

Reality #2: Researchers get access to crappy Oracle software in the process of performing their job duties. A penetration test or audit may give them temporary access, and they may find a vulnerability. If the client doesn’t mandate they keep it in-house, the researcher may opt to share it with the vendor, doing the right thing. Where exactly does the EULA fit in here? It was agreed to by the customer, not the third-party researcher. Even if there is a provision in the EULA for such a case, if the company doesn’t warn the researcher of said provision, how can they be held liable?

Reality #3: Tying back into #1 here, what are the real consequences? This is civil law, not criminal. Even when it comes to criminal law, which is a lot more clear, the U.S. doesn’t have solid extradition case-law backing it. We tend to think “run to Argentina!” when it comes to evading U.S. law. In reality, you could possibly just run to the U.K. instead. Ignore the specific crime involved; that is not what is relevant when it comes to the law in this context. If you focus on “oh, but the death penalty was involved”, you are not understanding Law 101.

In the case of Soering v. United Kingdom, the European Court of Human Rights ruled that the United Kingdom was not permitted under its treaty obligations to extradite an individual to the United States, because the United States’ federal government was constitutionally unable to offer binding assurances that the death penalty would not be sought in Virginia courts.

Now, consider all of the countries that have no extradition treaty with the U.S. There are a lot. How many? Think less on the volume, and more on how well-known this is… a quick Google shows that U.S. news outlets tell us where to run! CNBC says “10 hideout cities for fugitives” and DailyFinance says “Know Where to Run to: The 5 Best Countries With No Extradition”. Not enough? Let’s look at the absolute brilliance that local news can deliver, since my search was intended to find a short list of countries with no extradition, and Wikipedia failed me. Leave it to WSFA 12 in Alabama to give us a very concise list of countries with no extradition treaty with the US! Criminals, send a spoofed email of thanks to this station for cliff-noting this shit.

These countries currently have no extradition treaty with the United States:

Afghanistan, Algeria, Andorra, Angola, Armenia, Bahrain, Bangladesh, Belarus, Bosnia and Herzegovina, Brunei, Burkina Faso, Burma, Burundi, Cambodia, Cameroon, Cape Verde, the Central African Republic, Chad, Mainland China, Comoros, Congo (Kinshasa), Congo (Brazzaville), Djibouti, Equatorial Guinea, Eritrea, Ethiopia, Gabon, Guinea, Guinea-Bissau, Indonesia, Ivory Coast, Kazakhstan, Kosovo, Kuwait, Laos, Lebanon, Libya, Macedonia, Madagascar, Maldives, Mali, Marshall Islands, Mauritania, Micronesia, Moldova, Mongolia, Montenegro, Morocco, Mozambique, Namibia, Nepal, Niger, Oman, Qatar, Russia, Rwanda, Samoa, São Tomé & Príncipe, Saudi Arabia, Senegal, Serbia, Somalia, Sudan, Syria, Togo, Tunisia, Uganda, Ukraine, United Arab Emirates, Uzbekistan, Vanuatu, Vatican, Vietnam and Yemen.

Now, can anyone arguing in favor of Davidson’s “EULA speech”, which Oracle officially disagreed with, explain how a EULA protects a company in any way, in a real-world scenario?

Quite simply, there are two major issues at play. First, the absurd idea that a EULA will protect you from anything, other than chasing Intellectual Property (IP) lawsuits against other companies. That happens a lot, to be sure. But it has no bearing, in any way, on security research.

Second, I think back to something an old drunk friend told me a few times. “Never lick a gift-whore in the mouse.” I said he was a drunk friend. Security researchers who ply their trade, find vulnerabilities in your product, report them to you, and wait for you to release a patch? Embrace them. Hug them. Pay them if you can. They are your allies… and every vulnerability they help you squash, is one less vulnerability a bad guy can use to pop your customers. No one in their right mind would ever alienate such a process.

Vendors sure like to wave the “coordination” flag… (revisiting the ‘perfect storm’)

I’ve written about coordinated disclosure and the debate around it many times in the past. I like to think that I do so in a way that is above and beyond the usual old debate. This is another blog dedicated to an aspect of “coordinated” disclosure that vendors fail to see. Even when a vendor is proudly waving their own coordination flag, decrying the actions of another vendor, they still miss the most obvious point.

In order to understand just how absurd these vendors can be, let’s remember what the purpose of “coordinated disclosure” is. At a high level, it is to protect their consumers. The idea is to provide a solution for a security issue at the same time a vulnerability becomes publicly known (or ideally, days before the disclosure). For example, in the link above we see Microsoft complaining that Google disclosed a vulnerability two days before a patch was available, putting their customers at risk. Fair enough, we’re not going to debate how long is enough for such patches. Skip past that.

There is another simple truth about the disclosure cycle that has been outlined plenty of times before. After a vendor patch becomes public, it takes less than 24 hours for a skilled researcher to reverse it and determine what the vulnerability is. From there, it could be a matter of hours or days before functional exploit code is created, depending on the complexity of the issue. Factor in all of the researchers and criminals capable of doing this, and the worst-case scenario is that a working exploit exists within a few hours. Best-case scenario, customers may have two or three days.

Years ago, Steve Christey pointed out that multiple vendors had released patches on the same day, leading me to write about how that was problematic. Jump to today, and that has become a reality organizations face at least once a year. But… it got even worse. On October 14, 2014, customers got to witness the dark side of “coordinated disclosure”, one that these vendors are quick to espouse, but equally quick to disregard themselves. In one day we received 25 Microsoft vulnerabilities, 117 Oracle vulnerabilities, 12 SAP vulnerabilities, 8 Mozilla advisories, 6 Adobe vulnerabilities, 1 Cisco vulnerability, 1 Chrome OS vulnerability, 1 Google V8 vulnerability, and 3 Linux Kernel vulnerabilities. That covers almost every major IT asset in an organization, and it forces administrators to triage in ways that were unheard of years prior.
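To put that day in perspective, just tally the counts listed above (a trivial sketch in Python; the counts are only the ones this post names, and as noted below, the day’s full total was higher still):

```python
# Tally of the October 14, 2014 disclosures listed in this post.
counts = {
    "Microsoft": 25,
    "Oracle": 117,
    "SAP": 12,
    "Mozilla": 8,
    "Adobe": 6,
    "Cisco": 1,
    "Chrome OS": 1,
    "Google V8": 1,
    "Linux Kernel": 3,
}
print(sum(counts.values()))  # 174 vulnerabilities from these vendors alone
```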

Do any of these vendors feel that an IT organization is capable of patching all of that in a single day? Two days? Three days? Isn’t it more likely that even a full week would be an impressive turnaround, and that organizations which must run patches through their own testing before deployment might get it done in two to four weeks? Do they forget that bad guys can reverse these patches and have a working exploit in as little as a day, putting their customers at serious risk?

So basically, these vendors who consistently or frequently release on a Tuesday (e.g. Microsoft, Oracle, Adobe) have been coordinating the exact opposite of what they frequently preach. They are not necessarily helping customers by having scheduled patches. This year, we can look forward to Oracle quarterly patches on April 14 and July 14, both of which fall on the “second Tuesday” Microsoft / Adobe patch day. Throw in other vendors like IBM (which has been publishing as many as 150 advisories in 48 hours lately), SAP, Google, Apple, Mozilla, and countless others that release frequently, and administrators are in for a world of hurt.

Oh, did I forget to mention the kicker in all of this? October 14, 2014 saw 254 vulnerabilities disclosed, on the same day that the dreaded POODLE vulnerability was disclosed, impacting thousands of different vendors and products. That same day, OpenSSL, perhaps the most oft-used SSL library, released a patch for the vulnerability as well, perfectly “coordinated” with all of the other issues.

Microsoft’s latest plea for CVD is as much propaganda as it is sincere.

Earlier today, Chris Betz, senior director of the Microsoft Security Response Center (MSRC), posted a blog calling for “better coordinated vulnerability disclosure”.

Before I begin a rebuttal of sorts, let me be absolutely clear. The entire OSVDB team is very impressed with Microsoft’s transition over the last decade as far as security response goes. The MSRC has evolved and matured greatly, which is a benefit to both Microsoft and their customers world-wide. This post is not meant to undermine their efforts at large, rather to point out that, since day one, propaganda has been a valuable tool for the company. I will preface this with a reminder that this is not a new issue. I have personally blogged about this as far back as 2001, after Scott Culp (Microsoft at the time) wrote a polarizing piece about “information anarchy” that centered on disclosure issues. At some point Microsoft realized this was a bad position to take and that it didn’t endear them to the researchers providing free vulnerability information to them. Despite that, it took almost ten years for Microsoft to drop the term “responsible” disclosure (also biased against researchers) in favor of “coordinated” disclosure. Again, Microsoft has done a phenomenal job advancing their security program, especially the last three to five years. But… it is on the back of a confrontational policy toward researchers.

Reading yesterday’s blog, there are bits and pieces that stand out to me for various reasons. It is easy to gloss over many of these if you aren’t a masochist who spends most of your waking time buried in vulnerability aggregation and related topics.

In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD).

Not sure I have seen “CVD” as a formal initialism until now, which is interesting. After trying to brand “information anarchy” and pushing “responsible disclosure”, it is good to see you embrace a better term.

Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.

And this line, early on in the blog, demonstrates you do not live in the real world of vulnerability disclosure. Microsoft has enjoyed its ‘ivory tower’, so to speak. Many researchers find and disclose vulnerabilities for entirely selfish reasons (e.g. bug bounties), which you basically do not offer. Yes, you have a bounty program, but it is very different from most and does not reward the vast majority of vulnerabilities reported to you. Microsoft has done well in creating a culture of “report vulnerabilities to us for free, for the honor of being mentioned in one of our advisories”. And I get that! Being credited in a Microsoft advisory is itself advertising of researcher talent. However… the researchers who chase that honor are a minority in the greater picture.

Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree.

Oh sorry, let me qualify: your black and white tower. Full disclosure absolutely does work for some vendors, especially those with a poor history of dealing with vulnerability reports. You may not have been one of them for the last 10 years, but you once were. Back in the late ’90s, Microsoft had a reputation for being horrible when dealing with researchers. No vulnerability disclosure policy, no bug bounty (even five years after Netscape had implemented one), and no standard process for receiving and addressing reports. Yes, you have a formal and mature process now, but many of us in the industry remember your beginnings.

It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack.

This is a great point. But, let’s read on and offer some context using your own words…

Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small amount are ever exploited.

Wait, if only a very small amount of vulnerabilities are exploited after a fix, and ‘almost none’ are exploited before a fix… why do you care if it is coordinated? You essentially invalidate any argument for a researcher coordinating disclosure with you. Why should they care, if you clearly state that coordination doesn’t matter and that the vulnerability will “almost [never]” be exploited? You can’t have this both ways.

CVD philosophy and action is playing out today as one company – Google – has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so.

And this is where you move from propaganda to an outright lie. The issue in question was disclosed on December 29, 2014. That is 15 days, not two days, before your January Patch Tuesday. I’d love to hold my breath waiting for MSRC or Betz to explain this minor ‘rounding error’ on dates, but I have a feeling I would come out on the losing side. Or is Microsoft simply not aware of public vulnerability disclosures, and should perhaps invest in a solution for such vulnerability intelligence? Yes, a blatant sales opportunity, but they are desperately begging for it given this statement. =)

[Update: Apparently Microsoft is unhappy over Issue 123, which was auto-published on January 11, as opposed to Issue 118 linked above, auto-published on December 29. So they are correct on two days, but it is curious they aren’t complaining about 118 at the same time, when both are local privilege escalation vulnerabilities.]

One could also argue that this is a local privilege escalation vulnerability, which requires a level of access to exploit that simply does not apply to a majority of Windows users. Betz goes on to say that software is complicated (it is), and that not every vulnerability is equal (also true), but he glosses over the fact that Google is in the same boat. A little over four years ago, the Google security team posted a blog talking about “rebooting” responsible disclosure and said this:

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

To be fair, Google also did not publish a timeline of any sort with this disclosure. We don’t know anything that happened after the September 30, 2014 report to Microsoft. Did you ask for more time, Google? Did Microsoft say it was being patched in January? If so, you look like total assholes, disclosure policy be damned. If they didn’t mention January specifically and only asked for more time, maybe it was fair that you kept to your schedule. One of the two parties should publish all of the correspondence now. What’s the harm, the issue is public! Come on… someone show their cards and prove the other wrong. Back to Microsoft’s blog…

What’s right for Google is not always right for customers.

This is absolutely true. But you forgot the important qualifier: what is right for Microsoft is not always right for customers.

For example, look at CVE-2010-3889 (heavily referenced), aka “Microsoft Windows on 32-bit win32k.sys Keyboard Layout Loading Local Privilege Escalation”. This is one of four vulnerabilities used by Stuxnet. Unfortunately, Microsoft has no clear answer as to whether this is even patched, four years later. That CVE identifier doesn’t seem to exist in any Microsoft security advisory. Why not? Did you really let a vulnerability that may have aided an attack on an Iranian nuclear facility go unpatched? Think of the ethics questions there! Or is this a case of the Microsoft security response process not being as mature as I give it credit for, and this is a dupe of CVE-2010-2743? Why does it take a third party four years to figure this out while writing a blog on a whim?

It is a zero sum game where all parties end up injured.

What does this even mean, other than propaganda? It is rarely, if ever, a case where “all parties” are injured. If a researcher discloses something to you and publishes prematurely, or publishes on their own without contacting you, usually that party is not ‘injured’ in doing so. That is simple fact.

Betz’ blog goes on to quote the Microsoft CVD policy which states:

Microsoft’s Approach to Coordinated Vulnerability Disclosure
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately.

Perhaps you should qualify that statement, as US-CERT has a 45-day disclosure policy in most cases. That is half the time Google gave you. Quoting from the US-CERT policy:

Q: Will all vulnerabilities be disclosed within 45 days?
A: No. There may often be circumstances that will cause us to adjust our publication schedule. Threats that are especially serious or for which we have evidence of exploitation will likely cause us to shorten our release schedule. Threats that require “hard” changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule. We may not publish every vulnerability that is reported to us.

Note that it does not qualify this with “unless the vendor asks for more time”. That is the United States government saying a vendor gets 45 days to patch, with rare exceptions. Oh wait, Mr. Betz, before you go quoting “changes to core operating system components”, I will stop you there. Vulnerabilities in win32k.sys are not new. That 3.1 meg binary (on Windows 7) alone is the cause of a lot of grief for Windows users. Given that history, you cannot say that changes to that file meet the US-CERT criteria.

Finally, this isn’t the first pissing match between Google and Microsoft on vulnerability disclosure. While Microsoft has routinely played the victim card and Google certainly seems more aggressive with their disclosure policy, there is more than one bit of irony if one looks deeper. In random order…

Microsoft disclosed a vulnerability in Google Chrome, but didn’t do proper research. This vulnerability may be in WebKit, as one person notes, meaning it could affect other browsers like Apple Safari. If it does, then Apple would get blindsided by this disclosure, and it would not be ‘coordinated’ or ‘responsible’; it would qualify as ‘information anarchy’, as Microsoft once called it. While we don’t know if it was ultimately in WebKit, we do know this vulnerability exists because Google Chrome was trying to work around issues with Microsoft software.

Look at MSVR11-011 and MSVR11-012 from 2011, where Microsoft “coordinated” two vulnerabilities with the FFmpeg team. To be sure, the FFmpeg team is outstanding at responding to and fixing vulnerabilities. However, in the real world, there are thousands of vendors that use FFmpeg as a library in their own products. While it may have been fixed in the base code, it can easily take somewhere between months and a decade for vendors to learn about and upgrade the library in their software. Only in a completely naive world could Microsoft call this “coordinated”.

Even better, let’s go back to the inaugural Microsoft Vulnerability Research (MSVR) advisory, MSVR11-001. This was a “Use-After-Free Object Lifetime Vulnerability in Chrome” that in reality was a vulnerability in WebKit, the underlying rendering library used by Chrome. The problem is that WebKit is used by a lot more than Chrome. So the first advisory from MSVR conveniently targets a Google product, but completely botches the “coordinated” disclosure, going to only a single vendor using WebKit code, because the Microsoft researchers apparently didn’t diagnose the problem fully. No big deal, right?

Wrong. I am sure Adobe, Samsung, Amazon, Tizen, Symbian, BlackBerry, Midori, and Android web browser users would disagree strongly. Do you really want to compare the number of users you blindsided with this “coordinated” disclosure to the ones you protected? Microsoft was a bigger jackass on this disclosure than Google ever was, plain and simple.

Finally, do I even need to go into the absolute mess that you call the “Advanced Notification Service” (ANS)? In case readers aren’t aware, this is not a single program. This is several different programs with various names like MAPP and others. Just three days ago, you, Mr. Betz, announced that ANS was changing. This is after another program was changed drastically, multiple companies were kicked out of the MAPP program, and who knows what else happened. All of this was founded on Microsoft giving advance, and sometimes detailed, vulnerability information to questionable companies that may not be friendly parties.

The entire notion of “coordinated” disclosure went out the window as far as Microsoft goes, when they first implemented these programs. You specifically gave a very limited number of organizations details about vulnerabilities, before other customers had access. That, by definition, is not coordination. That is favoritism in the name of the bottom line, and speaks strongly against any intent you outline in yesterday’s blog post.

While Microsoft has taken great effort to improve their security process, it is disingenuous to call this anything but propaganda.

The Five High-level Types of Vulnerability Reports

Based on a Twitter thread started by Aaron Portnoy, and a reply from @4Dgifts asking why people would debunk vulnerability reports, I offer this quick high-level summary of what we see and how we handle it.

Note that OSVDB uses an extensive classification system (one that is very close to being overhauled for more clarity and granularity), in addition to CVSS scoring. Part of our classification system allows us to flag an entry as ‘not-a-vuln’ or ‘myth/fake’. I’d like to briefly explain the difference between those, but also put them in the bigger picture. When we process vulnerability reports, we usually only have time to go through the information disclosed. In some cases we will spend extra time validating or debunking the issue, as well as digging up information the researcher left out, such as vendor URL, affected version, script name, parameter name, etc. That leads to the high-level types of disclosures:

  • Invalid / Not Enough – We are seeing cases where a disclosure doesn’t have enough actionable information. There is no vendor URL, the stated product name doesn’t come up on various Google searches, the proof-of-concept (PoC) provided is only for one live site, etc. If we can’t replicate it or dig up the vendor in five minutes, we have to move on.
  • Site-specific – Some of the disclosures from above end up being specific to one web site. In a few rare cases, they impact several web sites due to the companies all using the same web hosting / design shop that re-uses templates. Site-specific does not qualify for inclusion in any of the big vulnerability databases (e.g. CVE, BID, Secunia, X-Force, OSVDB). We aggregate vulnerabilities in software and hardware that is available to multiple consumers, on their premises. That means that big offerings like Dropbox or Amazon or Facebook don’t get included either. OSF maintains a separate project that documents site-specific issues.
  • Vulnerability – There is enough actionable information to consider it valid, and nothing that sets off warnings that it may be an issue. This is the run-of-the-mill event we deal with in large volumes.
  • Not a Vulnerability – While a valid report, the described issue is just considered a bug of some kind. The most common example is a context-dependent ‘DoS’ that simply crashes the software, such as a media player or browser. The issue was reported to crash the software, so that is valid. But in ‘exploiting’ the issue, the attacker has gained nothing. They have not crossed privilege boundaries, as the issue can quickly be recovered from. Note that if the issue is a persistent DoS condition, it becomes a valid issue.
  • Myth/Fake – This was originally created to handle rumors of older vulnerabilities that simply were not true. “Do you remember that remote Solaris 2.5 bug in squirreld??” Since then, we have started using this classification more to denote when a described issue is simply invalid. For example, the researcher claims code execution and provides a PoC that only shows a DoS. Subsequent analysis shows that it is not exploitable.

Before you start sending emails: as @4DGifts reminds us, you can rarely say with 100% assurance that something isn’t exploitable. We understand and agree with that completely. But it is also not our job to prove a negative. If a researcher is claiming code execution, then they must provide the evidence to back their claim: either an additional PoC that is more than a stability crash, or a full explanation of the conditions required to exploit it. Oftentimes when a researcher does this, we see that while it is an issue of some sort, it may not cross privilege boundaries. “So you need admin privs to exploit this…” and “If you get a user to type that shellcode into a prompt on local software, it executes code…” Sure, but that doesn’t cross privilege boundaries.

That is why we encourage people like Aaron to help debunk invalid vulnerability reports. We’re all about accuracy, and we simply don’t have time to test and figure out every vulnerability disclosed. If it is a valid issue but requires dancing with a chicken at midnight, we want that caveat in our entry. If it is a code execution issue, but only with the same privileges as the attacker exploiting it, we want to properly label that too. We do not use CVSS to score bogus reports as valid. Instead, we reflect that they do not impact confidentiality, integrity, or availability, which gives them a 0.0 score.
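For those wondering how an entry works out to exactly 0.0, the CVSSv2 base score math makes it plain: if the confidentiality, integrity, and availability impacts are all ‘None’, the score bottoms out no matter how exploitable the issue is. Here is a minimal sketch in Python using the standard public v2 constants (illustrative only, not our actual scoring tooling):

```python
# Minimal CVSSv2 base score calculation, per the public v2 equations.
def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# A "vulnerability" with no C/I/A impact (C:N/I:N/A:N) scores 0.0, even if
# remotely reachable with no authentication (AV:N/AC:L/Au:N):
print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.0, i=0.0, a=0.0))    # 0.0

# Contrast with a partial confidentiality impact (C:P/I:N/A:N):
print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.275, i=0.0, a=0.0))  # 5.0
```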

The Death and Re-birth of the Full-Disclosure Mail List

After John Cartwright abruptly announced the closure of the Full Disclosure mail list, there was a lot of speculation as to why. I mailed John Cartwright the day after and asked some general questions. In so many words, he indicated it was essentially the emotional wear and tear of running the list. While he did not name anyone specifically, the two biggest suspects being speculated about were ‘NetDev’, due to years of being a headache, and the more recent thread started by Nicholas Lemonias. Through other channels, not via Cartwright, I obtained a copy of a legal threat Lemonias made against at least one hosting provider for having copies of the mails he sent. This mail was no doubt sent to Cartwright among others. As such, I believe this is the “straw that broke the camel’s back”, so to speak. A copy of that mail can be found at the bottom of this post, and it should be a stark lesson that disclosure mail list admins are not only facing threats from vendors trying to stifle research, but now from security researchers as well. This includes researchers who openly post to a list, have a full discussion about the issue, desperately attempt to defend their research, and then change their mind and want to erase it all from public record.

As I previously noted, relying on Twitter and Pastebin dumps is not a reliable alternative to a mail list. Others agree with me, including Gordon Lyon, the maintainer of seclists.org and author of Nmap. He has launched a replacement Full Disclosure list to pick up the torch. Note that the subscriber list was not transferred; if you were previously subscribed, you will need to subscribe to the new list to continue participating. The new list will be lightly moderated by a small team of volunteers. The community owes great thanks to both John and now Gordon for their service in helping to ensure that researchers have an outlet to disclose. Remember, it is a mail list on the surface; behind the scenes, they deal with an incredible number of trolls, headaches, and legal threats. Until you run a list or service like this, you won’t know how emotionally draining it is.

Note: The following mail was voluntarily shared with me and I was granted permission to publish it by a receiving party. It is entirely within my legal right to post this mail.

From: Nicholas Lemonias. (lem.nikolas@googlemail.com)
Date: Tue, Mar 18, 2014 at 9:11 PM
Subject: Abuse from $ISP hosts
To: abuse@

Dear Sirs,

I am writing you to launch an official complaint relating to Data
Protection Directives / and Data Protection Act (UK).

Therefore my request relates to the retention of personal and confidential
information by websites hosted by Secunia.

These same information are also shared by UK local and governmental
authorities and financial institutions, and thus there are growing
concerns of misuse of such information.

Consequently we would like to request that you please delete ALL records
containing our personal information (names, emails, etc..) in whole, from
your hosted websites (seclists.org) and that distribution of our
information is ceased . We have mistakenly posted to the site, and however
reserve the creation rights to that thread, and also reserve the right to
have all personal information deleted, and ceased from any electronic
dissemination, use either partially or in full.

I hope that the issue is resolved urgently without the involvement of local
authorities.

I look forward to hearing from you soon.

Thanks in advance,

*Nicholas Lemonias*

Update 7:30P EST: Andrew Wallace (aka NetDev) has released a brief statement regarding Full Disclosure. Further, Nicholas Lemonias has threatened me in various ways in a set of emails, all public now.

Missing Perspective on the Closure of the Full-Disclosure Mail List

This morning I woke to the news that the Full-Disclosure mail list was closing its doors. Assuming this is not a hoax (we are dangerously close to April 1st) and not spoofed mail that somehow got through, there seems to be perspective missing on the importance of this event. Via Facebook posts and Twitter I see casual disappointment, insults that the list was low signal-to-noise, and comments that many had stopped reading it a while back. I don’t begrudge the last comment one bit. The list has certainly had its share of noise, but that is the price we pay as a community and industry for having a better source for vulnerability disclosure. Speaking to the point of mail lists specifically, there were three lists that facilitated disclosure: Bugtraq, Full-Disclosure, and Open Source Security (OSS). Bugtraq has been around the longest and is really the only alternative to Full-Disclosure (remember that VulnWatch didn’t last, and was ultimately low traffic). OSS is a list that caters to open source software and does not traffic in commercial software. A majority of the posts come from open source vendors (e.g. Linux distributions), the software’s maintainer, etc. It is used as much for disclosure as for coordination between vendors and getting a CVE assigned.

One of the first things that should be said is a sincere “thank you” to John Cartwright for running the list so long. For those of you who have not moderated a list, especially a high-traffic list, it is no picnic. The amount of spam alone makes list moderation a pain in the ass. Add to that the fake exploits, discussions that devolve into insults, and topics that are on the fringe of the list’s purpose. Trying to sort out which should be allowed becomes more difficult than you would think. More importantly, he has done it in a timely manner for so long. Read that last sentence again, because it is absolutely critical here. When vulnerability information goes out, it is important that it goes out to everyone equally. Many mails sent to Bugtraq and Full-Disclosure are also sent to other parties at the same time. For example, every day we get up to a dozen mails to the OSVDB Moderators with new vulnerability information, with those lists and other sources (e.g. Exploit-DB, OffSec, 1337day) in the CC. If you use one or a few of those places as your primary source for vulnerability intelligence, you want that information as fast as anyone else. A mail sent on Friday afternoon may hit just one of them before appearing two days later on the rest. This is due to the sites being run with varying frequency, work schedules, and dedication. Cartwright’s quick moderation made sure those mails went out quickly, often at all hours of the day and over weekends.

While many vulnerability disclosers will send to multiple sources, you cannot assume that every disclosure will hit every source. Some of these sites specialize in a type of vulnerability (e.g. web-based), while some accept most but ignore a subset (e.g. some of the more academic disclosures). Further, not every discloser sends to all of these sources. Many will send to a single mail list (e.g. Bugtraq or FD), or to both of them. This is where the problem arises. The people still posting to the two big disclosure lists are losing the one that was basically guaranteed to post their work. Make no mistake, that guarantee does not hold for both lists.

This goes back to why Full-Disclosure was created in the first place (July 11, 2002). This was days before Symantec announced they were acquiring SecurityFocus (July 17, 2002). That was not a coincidence. While I can’t put a finger on exactly when BugTraq changed for the worse, I can assure you it has. Back in 2003, security researchers were noticing curious delays in their information being posted. One company challenged SecurityFocus/Bugtraq publicly, forcing them to defend themselves.

“The problem with SecurityFocus is not that they moderate the lists, but the fact that they deliberately delay and partially censor the information,” said Thomas Kristensen, CTO of Secunia, based in Copenhagen, Denmark. “Since they were acquired by Symantec they changed their policy regarding BugTraq. Before they used to post everything to everybody at the same time. Now they protect the interests of Symantec, delay information and inform their customers in advance.” Wong says there is no truth to these accusations. “The early warnings that our DeepSight customers get come from places like BugTraq and events and incidents that we monitor,” Wong said. “We dont give those alerts [from BugTraq] to our customers any sooner than anyone else gets them.”

Unfortunately for our community, Mr. Wong is absolutely incorrect. I have witnessed this behavior firsthand several times over the years, as have others. From a series of mails in 2006:

* mudge (mudge @ uidzero org) [060120 20:04]:
Actually, this advisory is missing some important information. bugtraq engaged in this prior to the “buy out”. Security Focus engaged in this practice as well where there were some advisories that would go out only to the Security Focus paid private list and not be forwarded to the public list to which they were posted.

On Fri, 20 Jan 2006, H D Moore wrote:
FWIW, I have noticed that a few of my own BT posts will not reach my mailbox until they have already been added to the securityfocus.com BID database. It could be my subscriber position in the delivery queue, but it does seem suspicious sometimes. Could just be paranoia, but the list behavior/delivery delays definitely contribute to it.

In each case, moderators of Bugtraq vehemently denied the allegations. In one case, Al Huger (with Symantec at the time) reminded everyone that the combined lists of SecurityFocus were delivering over 7 million mails a day. That alone can cause issues in delivery, of course. On the other hand, Symantec surely has the resources to run a set of mail servers that can churn out mail in such volume and ensure prompt delivery. Jump to more recently, and you can still see incredible delays that have nothing to do with delivery issues. For example, RBS posted an advisory simultaneously to both Bugtraq and Full-Disclosure. Notice that the mail was posted on Sep 10 to Full-Disclosure and Sep 19 to Bugtraq. A nine-day delay in moderating vulnerability information is not acceptable in today’s landscape of threats and bad actors. Regardless of intent, such delays simply don’t cut it.

In addition to such delays, the Bugtraq moderators will sometimes reject a post for trivial reasons such as “using a real IP address” in an example (one time using the vendor’s IP, another time using a public IP I control). They rejected those posts while frequently allowing “target.com” in disclosures, despite that domain belonging to a real company.

With the death of Full-Disclosure, Bugtraq is now our primary source of vulnerability disclosure in the scope of mail lists, and the only source for vulnerabilities in commercial software (out of scope for OSS). To those who argue that people “use mail a lot less now”, I suggest you look at the volume of Bugtraq, Full-Disclosure, and OSS. That is a considerable amount of disclosures made through that mechanism. Another mindset is that disclosing vulnerabilities can be done with a Tweet using a hashtag and a link to Pastebin or another hosting site. To this I can quickly say that you have never run a VDB (and try finding a full set of your original l0pht or @stake advisories; many have largely vanished). Pastebin dumps are routinely removed. Researcher blogs, even those hosted on free services such as WordPress and Blogger, disappear routinely. Worse, vendors that host advisories for their own products will sometimes remove their own historical advisories. The “Tweet + link” method simply does not cut it unless you want vulnerability provenance to vanish in large amounts. It is bad enough that VDBs have to rely on the Internet Archive so often (speaking of, donate to them!), but forcing us to set up a system to mirror all original disclosures is a burden. Last, for those who argue that nothing good is posted to Full-Disclosure, Lucian Constantin points out a couple of good examples to counter the argument in his article on the list closing.

Mail lists, instead, provide an open, distributed method for releasing information. As you can see, these lists are typically mirrored on multiple sites, as well as in personal collections of incoming email. It is considerably easier and safer to use such a method for vulnerability disclosures going forward. In my eyes, and the eyes of others who truly appreciate what Full-Disclosure has done, the loss of that list is devastating in the short term. Not only will it introduce a small amount of bias in vulnerability aggregation, it will take time to recover from. Even if someone else picks up the torch under the same name, or starts a new list to replace it, it will take time for people to transition to the new list.

To conclude, I would also ask that John Cartwright practice full disclosure himself. Shuttering the list is one thing, but blaming the action on an unnamed person with no real details isn’t what the spirit of the list is about. Give us details in a concise and factual manner, so that the industry can better understand what you are facing and what they may be getting into should they opt to run such a list.

More tricks than treats with today’s Metasploit blog disclosures?

Today, Tod Beardsley posted part one and part two on the Metasploit blog, titled “Seven FOSS Tricks and Treats”. Unfortunately, this blog comes with as many tricks as it does treats.

In part one, he gently berates the vendors for their poor handling of the issues. In many cases, they are labeled as “won’t fix” without an explanation of why. During his berating, he also says “I won’t mention which project … filed the issue on a public bug tracker which promptly e-mailed it back in cleartext”. In part two, the only disclosure timeline including a bug report is for Moodle and ticket MDL-41449. If this is the case he refers to, then he should have noted that the tracker requires an account, and that a new account / regular user cannot access this report. Since his report was apparently mailed in the clear, the ticket system mailing it back is not the biggest concern. If this is not the ticket he refers to, now that the issues are public, the ticket should be included in the disclosure for completeness.

Next, we have the issue of “won’t fix”. The Zabbix, NAS4Free, and arguably the OpenMediaVault issues all involve functionality intended by the vendor. In each case, they require administrative credentials to use the function being ‘exploited’ by the Metasploit modules. I won’t argue that additional circumstances such as XSS or default credentials make exploitation easier, but intended functionality is often a reason a vendor will not “fix” the bug. As you say in part one, a vendor should make the dangers of this type of functionality very clear. Further, they should strive to avoid making it easier to exploit. This means quickly fixing vulnerabilities that may disclose session information (e.g. XSS), and not shipping with default credentials. Only at the bottom of the first post do you concede that they are design decisions. Like you, we agree that admin access to a web interface does not imply the person was intended to have root access on the underlying operating system. In those cases, we consider them a vulnerability, but flag them ‘concern’ and include a technical note explaining why.

One of the most discouraging things about these vulnerability reports is the lack of version numbers. It is clear that Beardsley downloaded the software to test it. Why not include the tested version so that administrators can more easily determine if they may be affected? For example, if we assume that the latest version of Moodle was 2.5.2 when he tested, it is likely vulnerable. This matters because version 2.3.9 does not appear to be vulnerable, as it uses an alternate spell-check method. This kind of detail is extremely helpful to the people who have to mitigate the vulnerability, and to the type of people who use vulnerability databases as much as penetration testers do.

Finally, the CVE assignments are questionable. Unfortunately, MITRE does not publish the “CVE ID Reservation Guidelines for Researchers” on their CVE Request Page, instead offering to mail it. Publishing it might cut down on improper assignments, and its absence may explain why these CVEs were assigned. When an application has intended functionality that can only be abused by an attacker with administrator credentials, that does not meet the criteria for a CVE assignment. Discussion with CVE over each case would help ensure assignment is proper (see above re: implied permission / access).

As always, we love seeing new vulnerabilities disclosed and quickly fixed. However, we also prefer to have disclosures that fully explain the issue and give actionable information to all parties involved, not just one side (e.g. penetration testers). Keep up the good work and kindly consider our feedback on future disclosures!

An Open Letter to @InduSoft

InduSoft,

When referencing vulnerabilities in your products, you have a habit of only using an internal tracking number, one that is kept confidential between you and the reporter (e.g. ICS-CERT, ZDI). For example, from your HotFix page (which requires registration):

WI2815: Directory Traversal Buffer overflow. Provided and/or discovered by: ICS-CERT ticket number ICS-VU-579709 created by Anthony …

The ICS-CERT ticket number is assigned as an internal tracking ID while the relevant parties figure out how to resolve the issue; ultimately, that ticket number is not published by ICS-CERT. (I have already sent them a mail suggesting they include it in advisories moving forward, to help third parties match up vulnerabilities, fixes, and initial reports.) Instead of the ticket number, you should use the public ICS-CERT advisory ID, because the details you provide are not specific enough to determine which public advisory this issue corresponds to.

In another example:

WI2146: Improved the Remote Agent utility (CEServer.exe) to implement authentication between the development application and the target system, to ensure secure downloading, running, and stopping of projects. Also addressed problems with buffer overrun when downloading large files. Credits: ZDI reports ZDI-CAN-1181 and ZDI-CAN-1183 created by Luigi Auriemma

In this case, these likely correspond to OSVDB 77178 and 77179, but it would be nice to know that for sure. Further, we'd like to associate those internal tracking numbers with our entries, but vendors do not reliably list them in order, so we don't know whether ZDI-CAN-1181 corresponds to the first or the second.

In another:

WI1944: ISSymbol Virtual Machine buffer overflow Provided and/or discovered by: ZDI report ZDI-CAN-1341 and ZDI-CAN-1342

In this case, you give two ZDI tracking identifiers but mention only a single vulnerability. ZDI has a history of abstracting issues very well, so the presence of two identifiers, to us, means there are two distinct vulnerabilities.

This is one of the primary reasons CVE exists, and why ZDI, ICS-CERT, and most vendors now use it. In most cases, these larger reporting bodies will have a CVE number to share with you during the process, or if not, will have one at the time of disclosure.

Like your customers, we appreciate clear information regarding vulnerabilities. Many large organizations use a central clearing house like ours for vulnerability alerting, rather than trying to monitor hundreds of vendor pages. Helping us understand your security patches in turn helps your customers.

Finally, adding the date each patch was made available would help clarify these issues and provide another piece of information that is useful to organizations.

Thank you for your consideration in improving your process!

howdoireportavuln.com – Good intentions, needs fix-ups though.

Tonight, shortly before retiring from a long day of vulnerability imports, I caught a tweet mentioning a web site about reporting vulnerabilities. Created on 15-aug-2013 per whois, the site's footer shows it was written by Fraser Scott, aka @zeroXten on Twitter.

http://howdoireportavuln.com/

I love focused web sites that are informative and make a point through their simplicity. Of course, most such sites are humorous, parodies, or simply making fun of the common Internet user.

This time, the web site is directly related to what we do. I want to be very clear here: I like the goal of this site. I like the simplistic approach to helping the reader decide which path is best for them. I want to see this site become the top result when searching for "how do I disclose a vulnerability?" This commentary is only meant to help the author improve the site. Please take this advice to heart, and don't hesitate to reach out if you would like additional feedback. [Update: After starting this blog last night, before publishing this morning, he already reached out. Awesome.]


Under the ‘What’ category, there are three general disclosure options:

NON DISCLOSURE, RESPONSIBLE DISCLOSURE, and FULL DISCLOSURE

First, you are missing a fourth option: 'limited disclosure'. Researchers can announce they have found a vulnerability in a given piece of software, state the implications, and be done with it. A public report of code execution in some software will encourage the vendor to prioritize the fix, as customers may begin putting pressure on them, and adding a video showing the code execution reinforces the severity. This often doesn't help a VDB like ours, because such a disclosure typically doesn't contain enough actionable information, but it is one way a researcher can disclose and still protect themselves.

Second, "responsible"? No. The term was possibly coined by Steve Christey and further used by Russ Cooper; it was then polarized by Cooper, as well as Scott Culp at Microsoft ("Information Anarchy", really?), in a (successful) effort to brand researchers as "irresponsible" if they don't conform to vendor disclosure demands. The more appropriate and widely recognized term, and one fair to both sides, is "coordinated" disclosure. Culp's term forgets that vendors can be irresponsible too, if they don't prioritize critical vulnerabilities while customers are known to be vulnerable and public exploit code is floating about. Since then, Microsoft and many other companies have adopted "coordinated" to refer to the disclosure process.

Under the ‘Who’ category, there are more things to consider:

SEND AN EMAIL

These days, it is rare to see domains honoring the RFC-compliant addresses (e.g. security@example.com, per RFC 2142); that practice is mostly lost to the old days. Telling readers to try the "Contact us" tab/link that invariably shows up on web pages is better. Oh wait, you do that. However, it comes well after the big header reading TECHNICAL SUPPORT, which may throw people off.
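For readers unfamiliar with the old convention, RFC 2142 defines role mailboxes that a researcher could try before hunting for a contact form; a trivial sketch (the domain is a placeholder):

# RFC 2142 role addresses most relevant to reporting security issues.
RFC2142_MAILBOXES = ["security", "abuse", "postmaster"]

def candidate_contacts(domain):
    return ["%s@%s" % (box, domain) for box in RFC2142_MAILBOXES]

print(candidate_contacts("example.com"))
# ['security@example.com', 'abuse@example.com', 'postmaster@example.com']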

As a quick side note: the site says "how to notifying them of security issues". This is one of many spelling or grammar errors; please run the text through a basic grammar checker.

Under the ‘How’ category:

STAY ANONYMOUS

This is excellent advice, except for the bit about using Tor, since there are serious questions about its security and anonymity. If researchers are worried, they should look at a variety of options, including using a coffee shop's wireless, hotel wireless, etc.

BE YOURSELF

This is also a great point, but more to the point: make sure your mail is polite and NOT THREATENING. Don't threaten to disclose on your own timeline. See how the vendor handles the vulnerability report without any indication that you plan to disclose it, and give them the benefit of the doubt. If you get hints that they are stalling at some point, then gently suggest it may be in the best interest of their customers to disclose. Remind them that vulnerabilities are rarely discovered by a single person, and that they can't assume you are the only one who has found it; you are just the only one who apparently decided to help the vendor.

THE DISCLOSURE

Posting to Full-Disclosure is fine, but consider other options that may be more beneficial to you. Bugtraq has a history of stronger moderation, and they tend to weed out the crap. You can also send the report directly to vulnerability databases and let them publish it anonymously; VDBs like Secunia generally validate all vulnerabilities before posting them to their database, which may help you down the road if your intentions are called into question. If the vulnerability is in open-source software, post to the OSS-security mailing list so you get the community involved. On that list, getting a CVE identifier and having others verify or sanity-check your findings brings more positive attention to the technical issues instead of the politics of disclosure.

FOR SALE

Using a bug bounty system is a great idea, as it generally keeps the new researcher from dealing with disclosure politics; let people experienced with the process, who have an established relationship and history with the vendor, handle it. However, don't steer newcomers to ZDI immediately. In fact, don't name them specifically unless you have a vested interest in helping them, and if so, state it. Instead, break the options down into vendor bug bounty programs and third-party programs, and provide a link to Bugcrowd's excellent list of current bounty programs.

FINALLY

The fine print, of course. Under CITATIONS, I love that you reference the Errata legal threats page, but it should go much higher on the page. Make sure new disclosers know the potential mess they are getting into; we know people don't read the fine print. This could also be a good lead-in to using a third-party bounty or vulnerability handling service.


It's great that you make this easy to share with everyone and their dog, but please consider getting a bit more feedback before publishing a site like this. It appears you did it in less than a day, when an extra 24 hours could have made for a stronger offering. You are clearly eager to make it better; you have already reached out to me, and likely Steve Christey, if not others. As I said, with some edits and fix-ups, this will be a great resource.