PixieBrix can now fully automate end-to-end tests of a browser extension

The situation: PixieBrix was growing and needed a scalable way to accelerate testing — particularly for their browser extension, the core of their offering. QA was important, partly because the process of securing Chrome Web Store approval meant it could take days for bug fixes to reach customers. But their initial QA approach — a long, manual QA checklist — took up too much engineering time.

The result: Of all the testing tools they tried, only Rainforest could automate end-to-end tests outside the browser window, which allowed them to fully test the browser extension. Now, with Rainforest integrated into their CI pipeline, the team catches more bugs, spends less time hotfixing, and keeps engineers focused on building product.

Misha Holtz
Full-Stack Engineer at PixieBrix

Employees

20

Industry

Productivity software

We use Rainforest QA as a way to ship faster with confidence.

Spending engineering time manually testing a browser extension

The PixieBrix browser extension lets users customize and automate web applications directly in their browser, such as Chrome. The extension includes a low-code editor that lives in the browser's developer tools, which a user might use, for example, to add a button to a frequently visited web page to cut down on repetitive actions, or to respond to messages faster by injecting knowledge base search results into a chat window.

Before Rainforest, the PixieBrix engineering team spent valuable time testing everything manually before each release. Because they didn't have any QA on staff, they were recruiting anyone (and sometimes everyone) internally to do it.

When Misha joined as an engineer, she was new to the team and new to browser extension testing. As she recalls:

“Was it difficult? Oh definitely. We didn't have anything integrated into CI. In principle, everything should be automated if possible except maybe exploratory testing. But we on the engineering team were running through a big manual QA checklist.”

Every one to two weeks, the team would test their beta builds, cut a final release, and publish it to the Chrome Web Store, which conducts a thorough review that usually takes one to three days but can take up to two weeks. So if a bug slipped into an approved release and PixieBrix needed to publish a patch, it could take a long time for the fix to reach customers.

“Ideally we would do more releases, but the Chrome Web Store's review process is very extensive. We wanted to have tight feedback loops and adhere to CI as well as we could, but we didn't have the tools to do that. And taking up engineering team time isn't great because it's better we do our work rather than that rudimentary kind of testing.”

With the engineering team growing quickly, they needed a more scalable way to do end-to-end testing of their Chrome extension. They began investigating test automation tools, including Rainforest, which they'd already been using to test visual regressions on their website.

Discovering (only) Rainforest can automate browser extension tests

Misha also looked at the open source tools Playwright and Puppeteer, while her CTO evaluated a combination of Selenium and BrowserStack. It all came down to which could test interactions with PixieBrix's extension in Chrome's toolbar and in DevTools. 

“I ran into the same limitations with Playwright that my CTO did with the Selenium-BrowserStack combination: You can't operate outside the browser window. We couldn't interact with the browser bar, the browser nav bar, or the browser toolbar. And importantly, we couldn't make assertions in the browser toolbar. All these are crucial to testing our product.”

Because of that limitation, they ruled out Playwright and Selenium entirely. A PixieBrix user might use their Chrome extension on any web page on the internet — which means the PixieBrix team needs to test on a variety of pages. They simply couldn't keep doing this by hand.
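For context, the usual way to exercise a Chrome extension in Playwright is to load an unpacked build into a persistent browser context. The sketch below (assuming a hypothetical local build directory at ./dist, not PixieBrix's actual setup) shows roughly what that looks like, and why it falls short for a product like PixieBrix: Playwright can automate the pages an extension modifies, but it exposes no API for clicking the extension's toolbar icon or driving the panel the extension adds to DevTools.

```typescript
// Minimal sketch: loading an unpacked Chrome extension into Playwright.
// The ./dist path is a hypothetical local extension build.
import { chromium } from 'playwright';

(async () => {
  const pathToExtension = './dist';
  const context = await chromium.launchPersistentContext('', {
    // Extensions only load in headed Chromium, via these flags.
    headless: false,
    args: [
      `--disable-extensions-except=${pathToExtension}`,
      `--load-extension=${pathToExtension}`,
    ],
  });

  const page = await context.newPage();
  await page.goto('https://en.wikipedia.org');

  // From here, Playwright can drive and assert against content the extension
  // injects into the page itself. But the browser chrome is out of reach:
  // there is no API for clicking the extension's toolbar icon, reading its
  // badge, or interacting with a panel the extension adds to DevTools.
  await context.close();
})();
```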

“The DevTools extension is a big part of our product. Our low-code editor lives in that part. People see changes that they make in our DevTools extension on an arbitrary web page. So being able to end-to-end test those interactions is crucial to being able to catch regressions as they happen.”

Rainforest was the only tool that allowed them to test the Chrome extension, including interactions with it in the browser toolbar and DevTools.

“Rainforest is the only solution that we found that we could do that at all, and easily.”

They began the process of converting the tests in their manual QA checklist to automated Rainforest tests.

Freeing up the engineering team from manual testing, bugs, and hotfixes

With the power to automate their browser extension tests end-to-end, Misha and the team began incorporating them into their CI workflow. They now run their Rainforest regression suite post-merge. 

“We're able to fully automate end-to-end flows for our user onboarding interactions within our Chrome extension, which is pretty awesome. And we're running Rainforest tests on arbitrary pages like Wikipedia, not just some environment we set up.”

The result: fewer engineers are involved in manual testing, and there are fewer last-minute, all-hands hotfixes before and after a release. Their Chrome Web Store deployments contain fewer bugs, and engineers can focus on building the product.

“Automated testing replaced over an hour of one person's time manually queueing QA tests to run. It's also easier to write tests with Rainforest, because it takes out all of the headaches involved with mocking and finding exactly the right method to use when writing a code-based test. Especially since our team is small.” 

“You want to make development joyful, and I'm personally happier when I don't have to trudge through a release bug a few hours before we need to deploy something business critical.”

If they no longer had Rainforest? They'd have to revert to pulling engineers away from shipping code. 

“Without Rainforest, we'd have a much longer manual QA checklist. Certain features would take a lot longer to build than they would otherwise, especially because some of our most problematic features like our onboarding tend to be very fragile. We'd also be hesitant to develop new features because we'd have more regression bugs around them as well.”

Rainforest is especially helpful for testing onboarding because, as Misha acknowledges, the team carries some technical debt in that part of the infrastructure. There are so many possible user states and ways things can go wrong that it would be very time-consuming for a person to replicate them all. Plus, Rainforest has lots of small added benefits, like helping them show other teams how the product works.

“The Rainforest error recordings are also nice. Sometimes I'll point our Design or Marketing team members to our Rainforest recordings so that they can see the user onboarding experience in PixieBrix.” 

Now, PixieBrix has the rigorous test process they need to move quickly with confidence. It also helps them catch bugs they couldn't have discovered otherwise. That peace of mind is valuable to a small, fast-growing startup.

“We've seen a direct correlation between our end-to-end testing and a reduction in issues and bugs. Rainforest has also caught some bugs that none of us are able to replicate in our own dev environments. So being able to recreate those in Rainforest and also open the debugger with all the source maps in the VM? It has saved us.”