
Register visibility-change early #637


Open · tunetheweb wants to merge 16 commits into main
Conversation

tunetheweb (Member) commented Jul 15, 2025

Fixes #502

@philipwalton I did what you suggested in #502 (comment).
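
For context, #502 is about the event-batching example in the README, where the developer queues metrics and flushes the queue from their own visibilitychange listener. A minimal sketch of that pattern (the /analytics endpoint and helper names are illustrative, not part of the library):

import {onCLS, onINP, onLCP} from 'web-vitals';

const queue = new Set();
const addToQueue = (metric) => queue.add(metric);

onCLS(addToQueue);
onINP(addToQueue);
onLCP(addToQueue);

// Developer-added listener: flush queued metrics when the page is hidden.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden' && queue.size > 0) {
    navigator.sendBeacon('/analytics', JSON.stringify([...queue]));
    queue.clear();
  }
});

If the library only registers its own visibilitychange listener asynchronously, the flush above can run before the final metric callback and miss the last update; registering the library's listener early, as this PR does, means the callbacks run first.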

@tunetheweb tunetheweb requested a review from philipwalton July 15, 2025 20:44
@tunetheweb tunetheweb requested a review from philipwalton July 16, 2025 19:38
tunetheweb (Member, Author)

I got bored of manually testing this each time, so I added a test case. Despite my worries in the original comment, it wasn't too bad to add.

@@ -34,19 +34,50 @@
<script type="module">
const {onCLS} = await __testImport('{{ modulePath }}');
const queue = new Set();
philipwalton (Member)

I think it's good to add a test for an example we show in the README, but I wonder if it's overkill to test it for every single metric.

What if, instead of testing it for every metric, we added a single new test page that tests how all of the metrics work together and includes tests for the examples shown in the README? WDYT?

philipwalton (Member)

BTW, I think it would be fine to update the tests in a separate PR, since this is behavior that wasn't previously being tested anyway.

tunetheweb (Member, Author)

Given the issues are metric-specific, I think it is good to test each metric.

It really only affects CLS at the moment, but I added tests for the other two metrics that can change over the page lifetime (INP and LCP) in case they ever add similar logic. I didn't add tests for FCP and TTFB since they shouldn't ever be affected.

WDYT?

philipwalton (Member)

It's up to you. I agree they're metric specific, but personally I'd be totally comfortable with a single test that reports all of the metrics, to test them all at the same time.

That type of test would also ensure that calling all of the metric functions together doesn't cause any of them to break (we don't currently have a test for that).
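
As a rough sketch (not the actual test harness), such a combined page would register every metric with a shared callback, using the standard web-vitals entry points:

import {onCLS, onFCP, onINP, onLCP, onTTFB} from 'web-vitals';

const results = [];
const report = (metric) => results.push(metric);

// Register all of the metrics together so the test can confirm that
// calling the functions side by side doesn't break any of them.
onCLS(report);
onFCP(report);
onINP(report);
onLCP(report);
onTTFB(report);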

tunetheweb (Member, Author)

While I agree that's a good test that we don't currently have, I don't think it applies to just this bug. We'd potentially want a similar combined test for: the basic case, reportAllChanges, reportAllChanges for some metrics but not all, this bug, double calls, INP reporting even when the page starts hidden while LCP doesn't, etc. Plus I think that should cover all 5 metrics, and I've only changed 3 test cases in this PR.

We should have a think about which tests we want to test together (in addition? instead of?) and how much duplication we'd want because of that.

Which is all obviously beyond the scope of this PR. You could argue that this test case could be the start of such a combined test suite that we expand on later, but there's some logistics/plumbing to sort out (e.g. how many of the current options the combined page supports, and code to separate out the metric beacons, which might be reported in different orders).

So I'd rather merge this and spend some more time thinking about that and tackle it in a future PR.

philipwalton (Member)

Yes, I'm fine postponing this decision to a future PR.

@@ -832,6 +832,28 @@ describe('onCLS()', async function () {
assert.strictEqual(cls1_1.navigationType, cls2_3.navigationType);
});

it('reports on document visibility change handling', async function () {
philipwalton (Member)

I think the description in this test is misleading. The library always reported on visibilitychange, and there are already numerous tests for that behavior.

What's different here is you're testing to ensure that if a developer chooses to implement their own batching (following the example in the README), all of the relevant metric callbacks have run before their code does. So I'd rename this to clarify that.

Maybe something like: "it reports before developer-added visibilitychange listeners run"

tunetheweb (Member, Author)

I changed it to "reports on batch reporting using document.visibilitychange" because the key thing was the document bit.

philipwalton (Member) commented Jul 25, 2025

The key thing is that the onXXX() function callback runs (i.e. "reports") before any user-added visibilitychange listeners run. I don't think that's clear from the wording you're suggesting. I also don't know if it's obvious what "batch reporting" means in the context of this test, which is why I think it's clearer to say something explicit like "reports before developer-added visibilitychange listeners run". (Let me know if you don't think that wording is clear.)

The problem in the original bug report was that the onCLS() logic registered its visibilitychange listeners asynchronously, after the onCLS() function was invoked. So I think the important thing to test is that developer-registered visibilitychange listeners added synchronously after the onCLS() call still get invoked after the onCLS() callback is invoked. I don't think it's important to specify or test whether the current target is document or window.

In other words, the important thing to test is that the following code logs in the expected order:

onCLS(console.log); // Logs first
document.addEventListener('visibilitychange', console.log); // Logs after the `onCLS()` log

BTW, with the changes you've made the following should also work, but I don't think it's necessary to test all of these cases:

document.addEventListener('visibilitychange', console.log); // Logs after the `onCLS()` log
onCLS(console.log); // Logs first

document.addEventListener('visibilitychange', console.log, true); // Logs after the `onCLS()` log
onCLS(console.log); // Logs first

The only case that should now fail is this one:

addEventListener('visibilitychange', console.log, true); // Will log before the `onCLS()` log
onCLS(console.log); // Logs second

tunetheweb (Member, Author)

“reports before developer-added visibilitychange listeners run” works for me

@@ -34,19 +34,50 @@
<script type="module">
const {onCLS} = await __testImport('{{ modulePath }}');
const queue = new Set();

}, {
reportAllChanges: self.__reportAllChanges,
generateTarget: self.__generateTarget && ((el) => el.dataset.target),
});
if (self.__batchReporting) {
philipwalton (Member)

Sorry, I know I suggested calling this __batchReporting, but looking more closely at this code, the concept of "batch reporting" only makes sense when tracking multiple metrics. In this case the queue Set will never be larger than size=1 because there's only one metric object being tracked, so having a queue and iterating over it with a for loop doesn't make sense.

At this point I'm now leaning toward not testing this behavior in each individual metric test suite, but rather in a combined test—which I'm also happy to have addressed in a separate PR.
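
For readers without the full diff: the __batchReporting branch follows the queue-and-flush shape discussed above. A sketch of the general idea only, not the exact test code (the beacon endpoint is illustrative):

if (self.__batchReporting) {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') {
      // Send everything queued so far. With a single metric registered
      // on the page, the Set never holds more than one entry.
      for (const metric of queue) {
        navigator.sendBeacon('/collect', JSON.stringify(metric));
      }
      queue.clear();
    }
  });
}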

tunetheweb (Member, Author)

It’s true the Set should never be > 1 with the test cases as they’re written now.

The metrics are re-inited on bfcache restores, but that would necessitate a visibilitychange, after which the queue should be cleared (though that's an assumption/dependency I don't think we should rely on). And in the soft navs branch they could be re-inited within a page view. But for now, particularly with the test cases as they exist, that shouldn't happen, as you say. I'd still rather have a loop to code this properly than make those assumptions.

Are you suggesting removing these test cases and not testing this until we get to that other PR? I think they’re still useful now.

philipwalton (Member)

Ultimately I don't think it's that big of a deal, and I don't mean to block merging this or releasing the other pending features.

I think my main concern is that it's still not super clear what's being tested. E.g. batching is really about sending multiple metrics together in the same HTTP request, but in this test the use of batching is only testing manual sending from the visibilitychange listener (rather than relying on the library to send things at the right time); it's not actually testing any sort of batch sending.

So my concern is that if for some reason someone other than you or me had to update these tests in the future, it might not be clear why these tests are needed or what they're actually testing.

tunetheweb (Member, Author)

Well, I think there are two things:

Firstly, we've added a batch-reporting facility to the test suite, so I think batchReporting is an appropriate name for that param. As I say, it can be used to collapse multiple instances of the same metric into one, or to deal with several metrics if they were reported on that one page. I know we're not doing either of those yet in this PR, but the functionality is there for it (well, we need a new page for the second, so it's only half there).

Secondly, we had a specific bug with document visibility handler registration that's now fixed and has test cases (all using the new batchReporting param, even if each "batch" is only 1 item). So the name of the test case should make clear what it's testing. And IMHO the test case doesn't need to have the same name as the batchReporting param, as the param is more generic and has more use cases for us to exploit in the future.

So I say we keep the flag as batchReporting and just agree on the test case name (which I think we have, now that I'm happy with your latest suggestion). And I can add comments to the test case explaining why we're using the batchReporting functionality to test the document visibilitychange behavior, since that's not super obvious.

Does that framing make you any more comfortable?

If it makes you happier, I can also add a test case that uses reportAllChanges with two INP changes and confirms only one beacon is sent with batch reporting (since the Set should collapse to only one instance of the metric). That way we'd have a batch-reporting test case that really does use the batch-reporting mode. And then, in a future PR, add another one that tests multiple metrics, so we have a batch-reporting test case that really does put multiple metric instances in the Set.
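
The collapsing works because the library passes the same metric object to the callback each time it updates within a page view, so repeated queue.add() calls leave a single entry in the Set. A minimal sketch:

import {onINP} from 'web-vitals';

const queue = new Set();

// With reportAllChanges the callback can fire multiple times, but it
// receives the same metric object (with updated value and delta) each
// time, so the Set still contains exactly one entry to beacon on flush.
onINP((metric) => queue.add(metric), {reportAllChanges: true});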

Successfully merging this pull request may close these issues.

Add clarification regarding event listener in event batching section