Since 2019, LifterLMS has maintained a Vulnerability Disclosure Program. Our program has evolved since its initial iteration, and today we’ve opened our formerly private program to any researcher on the Bugcrowd platform.
We’ve always taken care to ensure our software has best-in-class security but, like any software company, we’ve grown, evolved, and learned.
We’ve had failures and successes and in all things we’ve endeavored to build secure software so our users can focus on education and training first. We are not, nor do we pretend to be, an enterprise solution. But we do aim to ensure our software is as safe and secure as any enterprise alternative.
While considering whether or not to announce the public launch of our program on Bugcrowd, I found myself thinking back on how we arrived where we are today. In doing so, I decided to write something of a history of our team’s security journey.
Our First Vulnerability Disclosure
In the fall of 2019 an anonymous security researcher disclosed a vulnerability in the LifterLMS plugin to the WordPress.org Plugins Review Team. As a result, our plugin was de-listed from the Plugin Repository.
My initial reaction to the email was shame.
I reviewed the offending code in disbelief. How could I have written and shipped this code? It was so obviously flawed it should never have been released. I should have known better, and I did know better. Yet, here it was. The facts of the vulnerability were indisputable.
So, we fixed it. As we’ve found over the years, fixing a vulnerability is often quite simple. It’s trivial to see an exploit once you’re alerted to it. A few hours after we were de-listed, the issue was resolved and the next morning the plugin was re-listed.
Before this incident we thought about proactive security. Our coding process requires code review from another developer before any code is published. We talked about security. We kept ourselves apprised of best practices and common exploits.
We ran automated tests and performed static analysis against all our code. We knew this wasn’t enough but we didn’t know what was better.
This vulnerability demonstrated, publicly, that our intentions and processes were limited and fallible. Human error and oversight could be mitigated but not entirely prevented.
Launching a Self-Managed Bug Bounty Program
After resolving the issue my initial shame and embarrassment had time to fester. Instead of feeling confident that we’d fixed a problem, I felt terrified that there were more issues and oversights. A multi-developer audit of our codebases resulted in no additional vulnerabilities. And instead of feeling safe, I was haunted.
We don’t know what we don’t know.
Chris and I flew to Pressnomics 6 in Tucson a few weeks later. One of the talks was given by a security engineer at Pagely. After his presentation he was kind enough to act as a security therapist. He listened to my story and nodded with empathy.
He said “It will happen again” and “It will be okay.”
He said “As long as researchers know how to find you, they will.”
He said “Make it stupid easy for them to contact you.”
When I got home I contacted HackerOne and Bugcrowd and learned that it’s not terribly affordable to run a vulnerability disclosure program on these platforms. After weeks of conversations with both parties we decided that maybe there’s a reason why the only WordPress plugins I could find with bug bounties and security programs had 500,000+ active installs. At the time we had less than 10,000.
So we launched our own security program and bug bounty. We published the first version of our security program at lifterlms.com/security. The page outlined our security disclosure and research policy. It included a relatively low-paying bounty schedule.
We paid out a few bounties over the next six months. The program was not a success but we decided it was better than nothing. If a researcher found something, they’d be able to get it to us safely. Our primary goal was to prevent any future de-listing by ensuring security researchers knew how to contact us should they discover any vulnerabilities in the future.
This superficial goal arose out of the implications of a statement the plugin review team made to us in their relisting email:
“…once a plugin is closed, many people will think it is because it was insecure, even if it wasn’t. That means your plugin becomes a target for hackers.”
In other words, de-listing due to a security vulnerability alerts malicious actors to the presence of an unpatched vulnerability.
Ensuring researchers could communicate with us directly, rather than going through the plugins review team, meant we could fix vulnerabilities without being de-listed, and in doing so reduce the number of people, specifically malicious actors, who were aware of an unpatched vulnerability.
Triage is Difficult and Time Consuming or The Next Human Oversight
In the late spring of 2020 our program was discovered by a group of student security researchers. They shared and posted our policy to various Facebook groups and forums.
Over the course of three weeks, we triaged nearly 200 vulnerability reports; almost all were invalid or informational reports related to things like security headers on LifterLMS.com (not the LifterLMS codebase). Often these emails were hostile and contained minimal useful information. The researchers insisted that we pay them for their efforts even when their reports were invalid or duplicates, or when we requested more information.
I did my best to arrive at a mutually beneficial agreement with this group. In the moment, while overwhelmed, and not truly understanding what was happening, I mistakenly assumed that it was a small group of friends or acquaintances. I tried to hire them as a team and pay them a monthly stipend for security research.
However, while trying to discuss agreeable terms with a person who identified themselves as a “leader” of the group, reports continued to come in. It became clear that the problem was of our own making.
I made an enormous error in drafting our security program: I had no idea how to effectively communicate with security researchers and I didn’t understand how to write a research brief with a meaningful scope.
We decided to cease communications, delete many of these emails, and suspend the self-managed program.
Launching and Maintaining a Managed Vulnerability Disclosure Program
So I returned phone calls to Bugcrowd and in July we reopened our security program, this time with managed triage on the Bugcrowd platform.
We launched the program as a private, invitation-only program and over the past two years invited more than 300 Bugcrowd researchers to test LifterLMS, our websites, and codebases.
Today there are 66 individual researchers who have joined our program. We’ve received 111 total submissions and accepted (and fixed) 20.
| Submission Outcome | Count |
|---|---|
| Valid | 20 |
| Informational | 14 |
| Invalid | 67 |
| Duplicate | 10 |
| **Total** | **111** |
Of the accepted submissions, 2 were high severity, 7 were medium severity, and 10 were low severity; the remaining submission was informational.
Growing our Program and Improving the Security and Stability of LifterLMS
We consider our partnership with Bugcrowd a success. By partnering, we’ve removed the bulk of the triage effort from our developers: Bugcrowd’s Application Security Engineers intake reports and let us know when a report has been validated, or when they require our assistance to validate it.
We’ve successfully patched two high-severity vulnerabilities before they were disclosed publicly. On one hand, it’s terrible that we’ve had any high-severity vulnerabilities. But patching them following responsible, private disclosure is something to celebrate. To our knowledge these issues were never publicly exploited.
This is the ultimate goal of security research: to improve the security of our software by leveraging the knowledge and experience of security experts.
Together with our partners at Bugcrowd, we’ve determined that the path toward further growth and improvement is to open our program to any interested researcher.