I’m back! And I’m hopeful I can outline a few ways to understand the traffic reports.
At a high level, this data comes from the Global CDN: any traffic served at the CDN level or the application level counts against those traffic limits.
In our experience, there are typically two types of traffic that cause the numbers to diverge from what you'd expect:
1. Scans/probes on your site
2. Non-human-readable content
Regarding scans/probes on your site, we don't charge for known beneficial traffic (e.g. Googlebot crawling your site). We know it's outside of your control, and we also know it helps your site succeed, so we want to be sure it gets through unhindered.
At a platform level, we can identify and block widespread malicious traffic. But we can't really determine "unwanted" traffic at the site level (what's the old saying? One site's trash is another site's treasure?). So if you're getting traffic you don't want, you'll want to block it yourself, e.g. with a WAF, our Advanced CDN, or some other solution. We typically see a lot of probes on CMS-standard login pages. Your logs may also show very outdated browsers or operating systems in the User-Agent header, or lots of traffic from a location where you have no customers.
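To make that concrete, here's a rough sketch (in Python, purely illustrative, not an official tool) of the kind of pattern-spotting I mean. It assumes a combined-format nginx access log saved locally as access.log, and the probe paths and stale User-Agent strings are hypothetical examples you'd tune for your own site:

```python
import re
from collections import Counter

# Combined log format:
# IP - - [time] "METHOD path HTTP/x.y" status size "referer" "user-agent"
LINE_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
)

# Illustrative examples only -- tune these for your own site.
PROBE_PATHS = ("/wp-login.php", "/xmlrpc.php", "/user/login", "/administrator/")
STALE_AGENTS = ("MSIE 6", "MSIE 7", "Windows NT 5")  # very old browsers/OSes

probes, stale, by_ip = Counter(), Counter(), Counter()

with open("access.log") as fh:
    for line in fh:
        m = LINE_RE.match(line)
        if not m:
            continue
        ip, _method, path, _status, agent = m.groups()
        by_ip[ip] += 1
        if any(path.startswith(p) for p in PROBE_PATHS):
            probes[path] += 1
        if any(s in agent for s in STALE_AGENTS):
            stale[agent] += 1

print("Top login/probe paths:", probes.most_common(5))
print("Outdated user agents:", stale.most_common(5))
print("Chattiest IPs:", by_ip.most_common(5))
```

If one IP dominates the "chattiest" list and only ever hits login pages, that's a good candidate for a WAF rule.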
If you want to pull your logs to look for that kind of behavior, the instructions are here. If you have multiple application containers (Performance Medium & up), you'll want to be sure you're pulling from all of them; the docs have a script to make that easier.
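For what it's worth, here's a minimal sketch of that "pull from every container" loop, written in Python rather than shell. The container hostname pattern (appserver.ENV.SITE_UUID.drush.in), the port 2222 SSH endpoint, and the rsync flags are assumptions based on the docs script, so defer to the linked instructions for the authoritative version; SITE_UUID and ENV are placeholders you'd fill in for your own site:

```python
import subprocess

# Placeholders -- substitute your own site's values.
SITE_UUID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
ENV = "live"

# Assumption: DNS for this hostname returns one A record per app container.
host = f"appserver.{ENV}.{SITE_UUID}.drush.in"
ips = subprocess.run(
    ["dig", "+short", "-4", host],
    capture_output=True, text=True, check=True,
).stdout.split()

for ip in ips:
    # Pull each container's logs into its own local directory so files
    # with the same name on different containers don't overwrite each other.
    subprocess.run(
        ["rsync", "-rlz", "--size-only", "-e", "ssh -p 2222",
         f"{ENV}.{SITE_UUID}@{ip}:logs", f"app_server_{ip}"],
        check=True,
    )
```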
Note: Your logs only contain traffic that actually hits the application containers, so anything served by the CDN isn't included there. We don't currently have an auditing solution on the roadmap (it hasn't been a big request from customers), but I've let the product team know there's some interest.
Lots of other details here: https://pantheon.io/docs/traffic-limits
I’m thinking I’ll submit some changes to our documentation to try to clear some of this up, but I’m happy to keep discussing & brainstorming here.