The `find_robots_txt` check does not work properly with redirects.
The description states:

> Iterates through all service_sites in each RwsSet provided, and makes
> a get request to site/robots.txt for each. This request should return
> an error 4xx, 5xx, or a timeout error. If it does not, and the page
> does exist, then it is expected that the site contains a X-Robots-Tag
> in its header. If none of these conditions is met, an error is appended
> to the error list.
However, the code only makes a request to the root; it does not first check for a 4xx/5xx response from the service domain itself. If a redirect is in place (per the guidelines), the headers being checked belong to the destination domain, not the service domain.
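As a minimal sketch of the behavior (assuming the check uses the `requests` library, with `service.example.com` standing in for a real service site), the following shows why the status and headers end up describing the redirect destination rather than the service domain:

```python
import requests

# Hypothetical service domain standing in for a real service_site entry.
SERVICE_SITE = "https://service.example.com"

try:
    # requests follows redirects by default, so if the service domain
    # redirects (e.g. to the primary site), `response` describes the
    # destination domain, not the service domain that was requested.
    response = requests.get(f"{SERVICE_SITE}/robots.txt", timeout=10)
except requests.exceptions.RequestException:
    pass  # A timeout or connection error would satisfy the check.
else:
    if response.history:
        print(f"Redirected to {response.url}; status and headers are for that URL")
    if response.ok and "X-Robots-Tag" not in response.headers:
        print("The check would flag this site, even though the missing "
              "header belongs to the destination domain")
```

If the intent is to evaluate the service domain itself, the request could be made with `allow_redirects=False` (or the redirect detected via `response.history`) so that the 4xx/5xx status and `X-Robots-Tag` header come from the service domain's own response, but it isn't clear from the current code which behavior is intended.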
I'm working to set up our service domains to meet the requirements, but I'm running into issues because of the structure of this test, and I'm holding off on finalizing the setup until I know what the check expects.