
Auction Performance Testing #287

@MarcoLugo

Description

To better understand the FLEDGE API and its behavior, I created a containerized environment to run both manual and automated tests against the API and see what I could learn.

One of the tests was an attempt to observe what would occur if Chrome had to deal with computationally-intensive bidders in the FLEDGE auction. The summary below expands on this test.

Setup

In the auction, we have 201 participants, and the experiment tries to discover what happens when a strict subset of these bidders requires significant computation. In practical terms, this was done with an infinite loop inside the bidding function; a sketch of such a bidder is shown below. Auctions are repeated many times with a randomized number of infinite-loop bidders in order to assess the impact of their presence.
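
For illustration, here is a minimal sketch of what such a bidder could look like. The generateBid() signature follows the FLEDGE explainer; the `stall` signal and the fixed bid value are placeholders for this write-up, not the exact harness code (which is open sourced, see Caveats).

```js
// Sketch of a FLEDGE bidding worklet. generateBid() is the entry point
// defined by the explainer; the `stall` signal and the fixed bid below
// are placeholders, not the exact harness code.
function generateBid(interestGroup, auctionSignals, perBuyerSignals,
                     trustedBiddingSignals, browserSignals) {
  if (perBuyerSignals && perBuyerSignals.stall) {
    // Computationally-intensive bidder: spin forever and let the browser
    // decide how (and whether) to cut it off.
    while (true) {}
  }
  // Well-behaved bidder: bid a fixed amount on the first ad.
  const ad = interestGroup.ads[0];
  return { ad: ad.metadata, bid: 1.0, render: ad.renderUrl };
}
```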

Expected

I expected that either the infinite-loop bidders would time out without affecting the rest of the auction participants or, if the system was not robust enough to handle this, the auction would freeze and thus fail.
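
For reference, the FLEDGE explainer describes a perBuyerTimeouts field on the auction configuration that caps each buyer's generateBid() runtime; a minimal sketch with placeholder origins and values follows. It is not clear whether such a limit applied in the version tested here.

```js
// Sketch of an auction config with the per-buyer timeout described in the
// FLEDGE explainer. Origins, script URL, and the 50 ms cap are placeholders.
const auctionConfig = {
  seller: 'https://seller.example',
  decisionLogicUrl: 'https://seller.example/decision-logic.js',
  interestGroupBuyers: ['https://buyer1.example', 'https://buyer2.example'],
  perBuyerTimeouts: { '*': 50 },  // cap each generateBid() call at 50 ms
};
const winner = await navigator.runAdAuction(auctionConfig);
```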

Results

The reality landed somewhere in the middle of that spectrum. Auctions did conclude and produce winners, but a clear pattern emerged: more computationally-intensive bidders translated into more time for the auction to conclude, in some cases by several seconds. See the graphs below for more detail:

One Bidder per Buyer
[graph]

More Than One Bidder per Buyer
[graph]
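
For context, the per-auction latency plotted above can be collected by timing each call to navigator.runAdAuction(); a minimal sketch of such a measurement loop, with the run count and config as placeholders:

```js
// Sketch: time repeated auctions to see how latency scales with the number
// of infinite-loop bidders. The harness randomizes that number per run;
// `runs` and `auctionConfig` here are placeholders.
async function measureAuctions(auctionConfig, runs = 100) {
  const timings = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    await navigator.runAdAuction(auctionConfig);
    timings.push(performance.now() - t0);
  }
  return timings;
}
```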

This is problematic: the longer the auction takes, the longer ads take to display, which would likely lead to higher bounce rates for websites, lower ad performance, and a degraded overall user experience.

This test also proved to be a laptop battery hog, substantially reducing battery life. I think ultimately everyone would benefit from quantifying the impact of FLEDGE on battery life under more normal conditions.

Caveats

  • The computationally-intensive bidder was created with a DoS or stress test in mind. One could argue that this is merely an edge case. However, it is reasonable to expect some bidders to be compute-heavy. A more realistic test could be warranted in the near future.
  • The test was run on a single laptop, not across many different devices. We cannot count on everyone having powerful hardware: an appreciable number of people still have only two physical CPU cores, and Steam’s statistics may be biased towards higher-end computers.
  • The test was run as the only CPU-intensive task on the computer. Under normal conditions, the user could be running other demanding processes, making the browser compete for resources and degrading the user experience beyond the browser itself.
  • There could be a mistake in my understanding of the API or in the experiment setup; if so, perhaps this is an opportunity to clarify certain things and come up with a better experiment. The fact that the test harness is open sourced may help mitigate this, and hopefully enables others to build on top of it if they wish.

Takeaways

If the results are correct, and assuming that both the number of bidders and the complexity of the bidders themselves will increase over time, then we may run into the performance issues outlined above. I welcome WebAssembly as a way to let bidders do more with the same computing resources, including calculations that would not otherwise be possible, but I do not think WebAssembly alone would fix these issues; it would perhaps just delay their appearance. Accepting a potentially infinite number of bidders while having a finite amount of computing resources does not seem like a sustainable path forward. The suggestions in #79 and/or #268 could be among the possible solutions.
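
As a purely hypothetical illustration of the WebAssembly point: later revisions of the explainer add a biddingWasmHelperURL to interest groups, handing generateBid() a pre-compiled WebAssembly.Module via browserSignals.wasmHelper. A bidder could then delegate its heavy math to wasm, roughly as sketched below; the computeBid export is invented for this sketch.

```js
// Hypothetical sketch: delegate heavy bidding math to WebAssembly via the
// wasmHelper mechanism from later revisions of the FLEDGE explainer.
// The `computeBid` export is invented for illustration.
function generateBid(interestGroup, auctionSignals, perBuyerSignals,
                     trustedBiddingSignals, browserSignals) {
  const wasm = new WebAssembly.Instance(browserSignals.wasmHelper);
  const ad = interestGroup.ads[0];
  // Same wall-clock budget, but far more work done per millisecond.
  const bid = wasm.exports.computeBid(/* numeric inputs elided */);
  return { ad: ad.metadata, bid, render: ad.renderUrl };
}
```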
