Same issues here - client's plan was auto-bumped from Performance Small to Performance Large (roughly 3x the monthly cost). There are the nginx server logs, which you can download via SFTP, and there are the global CDN logs, which, as I understand it, determine our cost/plan - please let me know if that's not correct.
I requested the global CDN logs on Jan 24th and still haven’t received them. So I analyzed the nginx log requests from one of the highest request days, Jan 28th.
Here is the Pantheon metrics view for that day via the dashboard, alongside the nginx logs (a combo of two nginx log files - sorry, this image may appear smaller than the 1500px-wide one I uploaded).
Sum of total requests for Jan 28th: 7821
Not sure why the metrics show roughly 2,000 more visits and 10,000 more requests served for that day than the nginx logs do(?)
In analyzing the logs, I found one bot (MJ12bot) that I decided to block via robots.txt.
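For reference, the robots.txt rule I added is just this (MJ12bot is the user-agent string the bot reports; as I note below, well-behaved crawlers honor it but it can be ignored):

```
# Block Majestic's MJ12bot crawler site-wide
User-agent: MJ12bot
Disallow: /
```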
But this is like whac-a-mole: other bots will pop up in the future, and robots.txt can be ignored, so it will need to be monitored. I thought an .htaccess config would let me block IPs, but Pantheon uses nginx (not Apache), so for Drupal sites you have to block IPs via the settings.php file.
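For anyone else on Pantheon + Drupal, the settings.php approach looks roughly like this (the IP below is a placeholder TEST-NET address, not an actual offender from my logs - substitute the IPs you find):

```php
// settings.php - deny requests from specific IPs before the page is served.
// 192.0.2.10 is a placeholder address; replace with the bot IPs from your nginx logs.
$blocked_ips = ['192.0.2.10'];
if (isset($_SERVER['REMOTE_ADDR']) && in_array($_SERVER['REMOTE_ADDR'], $blocked_ips, TRUE)) {
  header($_SERVER['SERVER_PROTOCOL'] . ' 403 Forbidden');
  exit;
}
```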
Also, and more importantly, I noticed this bot and others were pinging recursive (yet still accessible) Drupal views URLs (/resource/resource/resource/resource…), which has been a known crawling issue with Drupal views before. I found this Drupal.org post that offers a resolution via a views contextual filter.
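Until the contextual filter fix is in place, a stopgap (my own sketch, not from the Drupal.org post) is to 404 the obviously recursive paths in settings.php by detecting a repeated path segment:

```php
// settings.php - return 404 for recursive paths like /resource/resource/resource/...
// Matches any path segment repeated three or more times in a row.
// This is a blunt stopgap; the proper fix is the views contextual filter.
if (isset($_SERVER['REQUEST_URI']) &&
    preg_match('#/([^/]+)(/\1){2,}#', $_SERVER['REQUEST_URI'])) {
  header($_SERVER['SERVER_PROTOCOL'] . ' 404 Not Found');
  exit;
}
```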
I’m hoping that robots.txt and this views fix will help deal with the extra requests, and possibly help others trying to bring their request counts back down. I’ve considered implementing Cloudflare to set up page rules easily, but I’m not sure I’m at that point yet.