How much time has it saved you since bringing it into your business?
What are some examples of errors/regressions that StagingPilot caught?
Great question! When I first talked to Nathan Tyler about the setup, we set a projected number of hours saved per update. I was looking at it from the perspective of just hitting “update” on all plugins, themes, and core at once and doing a quick visual check of the site, not the pixel-level detail that StagingPilot’s diff provides. So the number it used to calculate time saved was artificially low. We didn’t initially frame it in terms of the best practice, which (as I later came to realize) is to do one update at a time, check each page to see if it broke the site, and repeat. If you have 10 plugins and a theme to update, plus a WP core update, the time adds up to quite a bit more for a small site and tons more for a large one.
So, based on that conservative estimate, StagingPilot reports 1.4 hours saved across 4 sites on one account, plus 1.2 hours saved across 3 sites on a separate StagingPilot account. That’s quite low compared to the actual effort. If I were doing manually what StagingPilot does automatically, it would likely take 3-4 times that amount: update one component, check the site and the ecommerce checkout paths, troubleshoot if something broke, then repeat for the next component.
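To put numbers on that, here’s a quick back-of-the-envelope calculation using the reported figures above and the 3-4x multiplier I estimated (the multiplier is my rough guess, not anything StagingPilot reports):

```python
# Hours StagingPilot reported as saved: 1.4 (4 sites) + 1.2 (3 sites)
reported_hours = 1.4 + 1.2

# My rough 3x-4x multiplier for the true one-update-at-a-time manual effort
low_estimate = reported_hours * 3
high_estimate = reported_hours * 4

print(f"Reported: {reported_hours:.1f} h")
print(f"Likely actual manual effort: {low_estimate:.1f}-{high_estimate:.1f} h")
```

So the conservative 2.6 reported hours probably corresponds to somewhere around 8-10 hours of real hands-on work.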
The most notable errors/regressions StagingPilot has caught were in the checkout process on ecommerce sites. This is honestly my favorite feature because of the tedium of doing a test purchase after every single component update. At any point in the checkout process (I’m all WooCommerce), if a form throws an error or a purchase doesn’t complete, the update stops and I get a message about it via email.
It’s also really cool how you get a visual diff of the site before and after the update. Up to 15 pages can be compared, each as a pair of screenshots with a slider down the middle that you can swipe back and forth to see the pixel-by-pixel differences in a heatmap format. Or you can click to see all the diffs at once in an overlaid screenshot.
It does throw false positives sometimes when the “before” and “after” screenshots are captured at different points in the render, so things on the page end up in different spots. Or, with certain ecommerce shops, different products appear in different places before and after. But there’s a slider to set a tolerance threshold so that on the next test, that level of difference will be ignored, and you have options for whether to halt and fix or to continue and deploy.
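StagingPilot’s actual diff engine isn’t something I have access to, but the tolerance idea is easy to picture. Here’s a minimal sketch, assuming a simple per-pixel comparison where the fraction of changed pixels is checked against a configurable threshold (all names and the 2% value are hypothetical):

```python
def diff_ratio(before, after):
    """Fraction of pixels that differ between two equal-sized
    screenshots, each modeled as a 2-D grid of pixel values."""
    total = 0
    changed = 0
    for row_before, row_after in zip(before, after):
        for px_before, px_after in zip(row_before, row_after):
            total += 1
            if px_before != px_after:
                changed += 1
    return changed / total

# Hypothetical tolerance: ignore diffs touching under 2% of pixels
TOLERANCE = 0.02

# Tiny 2x3 "screenshots"; one pixel shifted by a render-timing hiccup
before = [[0, 0, 0], [1, 1, 1]]
after = [[0, 0, 0], [1, 1, 0]]

ratio = diff_ratio(before, after)  # 1 of 6 pixels changed
print("deploy" if ratio <= TOLERANCE else "halt for review")  # prints "halt for review"
```

Raising the threshold after a false positive (the slider in StagingPilot’s UI) is just widening that tolerance so the same level of difference passes on the next run.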