Added performance benchmarks for platform channels #81414
Conversation
Force-pushed 3b4c848 to 374976d
There's a lot of boilerplate in this PR; I hope I reviewed all the right files...
It would be good if we found a way to avoid duplicating all this boilerplate across the different benchmarks.
}

void main() {
  runApp(const MyApp());
Do you actually need the Counter app UI? To simplify, could this just be:
void main() {
WidgetsFlutterBinding.ensureInitialized();
_runTests();
}
That doesn't work, because it doesn't set up the binary messenger for platform channels. I've removed most of the UI.
That's supposed to set it up though, right?
If not, can we just directly do what's needed to set up the default binary messenger?
The documentation doesn't say anything about that. What's the concern? My understanding is that goderbauer didn't want all this extra code, and it's mostly been removed now. If we had a PlatformChannelsBinding.ensureInitialized() call, it would save maybe 6 lines of code if I pared down the UI even further. I imagine things might get a bit goofy if you don't present any widget, though. Do you want me to trim even more away?
For me it's more that it takes that much longer to find out what this test is actually doing. I have to read through a bunch of unrelated widgets to finally get to one that calls a method in initState that has nothing to do with the widget itself, and I also have to think a little about whether that could have any impact on the benchmark.
I'm kind of surprised that WidgetsFlutterBinding.ensureInitialized isn't enough to get the channels set up; that's what we tell people to do if they need to talk to platform channels before any UI is rendered, after all.
This test doesn't really need any UI to run; it would just show a black screen in that case. If that bothers you, you can have it show a Placeholder widget, or maybe some text saying it's running the benchmark.
And in case it's not clear, IMO this shouldn't block the test from landing. But future us will probably be grateful for any simplifications that can be made now :)
I think we should remove everything from the benchmark that isn't required, to make it clear what this is testing. Since this benchmark doesn't rely on any UI, why show it? If you really need to show UI, then just show a simple container.
Okay, I removed everything I could. It just shows a black screen now while executing.
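A minimal entrypoint along the lines discussed in this thread might look like the following sketch (_runBenchmarks is a hypothetical name for the benchmark driver, not code from this PR):

```dart
import 'package:flutter/material.dart';

void main() {
  // Initializes the binding (and with it the default BinaryMessenger)
  // before any platform channel traffic happens.
  WidgetsFlutterBinding.ensureInitialized();
  // Show a trivial widget so the app has something on screen while the
  // benchmark runs; the UI itself is irrelevant to the measurement.
  runApp(const ColoredBox(color: Colors.black));
  _runBenchmarks(); // hypothetical entry point for the channel benchmarks
}

Future<void> _runBenchmarks() async {
  // ... send messages over the platform channels and record timings ...
}
```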
final List<Object> largeBuffer = _makeTestBuffer(1000);
const StandardMessageCodec codec = StandardMessageCodec();
final ByteData data = codec.encodeMessage(largeBuffer);
print('Large buffer size: ${data.lengthInBytes}');
remove the print?
Leaving this in for the logs was intentional: it's difficult to make a data structure of a specific size, and its size needs to be known in order to make valid comparisons. I might actually tweak the size before I turn the tests on; I have to check that this is what we want to test.
Can we somehow capture this in the benchmark metric? Like, maybe make it us/byte or something? Or just include the number of bytes processed somewhere in the metrics?
Even if we capture us/byte, changing the size of the buffer in the test would still change the us/byte number; it won't scale linearly (the bytes are copied 4 different times, and serializing/deserializing isn't equally expensive for all data types regardless of byte size). We have to make sure that once I turn this on, we never change the size or contents of the buffer.
"I gotta check to make sure this is what we want to test."
The print alone doesn't tell me what the size should be, though, if I later come in and modify the benchmark. Ideally, this would have been expressed as assert(data.lengthInBytes == expectation), but asserts are disabled here. Maybe just throw if it doesn't have the expected size?
I removed the print statement and instead added a comment that warns about ever editing that line of code. I don't think there is anything else we can do or assert.
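For reference, a throw-based size guard like the one suggested above could look like this sketch (the expected byte count here is a made-up placeholder, not the real encoded size from this PR):

```dart
import 'package:flutter/services.dart';

// Hypothetical expected size; would need to be pinned to the actual
// encoded length of the benchmark payload once measured.
const int _expectedBufferBytes = 6650;

void _checkBufferSize(List<Object> largeBuffer) {
  final ByteData? data =
      const StandardMessageCodec().encodeMessage(largeBuffer);
  // Unlike an assert, a throw still fires in profile and release builds,
  // so an accidental change to the payload can't silently skew comparisons.
  if (data == null || data.lengthInBytes != _expectedBufferBytes) {
    throw StateError('Unexpected buffer size: ${data?.lengthInBytes}');
  }
}
```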
);
printer.addResult(
  description:
      ' BasicMessageChannel/StandardMessageCodec/Flutter->Host/Large',
same
done
I was able to remove most of that boilerplate. The test is a bit more fragile now, since I needed to keep some of android/ and ios/ for it to work. If the format of the template changes, it could break these tests. It also adds a slight complication if people want to run the tests outside of devicelab for debugging purposes; I added a note about it in the README file.
Mostly nits here. You're going to need to update dev/prod_builders.json to add a builder entry for this, and a corresponding PR will need to be opened in flutter/infra (see e.g. https://github.com/flutter/infra/pull/423) to make a builder for this actually run.
assert(false,
    "Don't run benchmarks in checked mode! Use 'flutter run --release'.");
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I'm pretty sure this was copied from elsewhere, but if we really want to check for release mode I think we want to do
-assert(false,
-    "Don't run benchmarks in checked mode! Use 'flutter run --release'.");
+if (!kReleaseMode)
+  throw Exception("Must be run in release mode! Use 'flutter run --release'.");
The assert will not be run in profile mode, and checked mode is an antiquated/deprecated notion at this point.
done (checked against kDebugMode)
Don't we want this check so that we make sure CI is running these in release mode?
These benchmarks just rely on stdout to get timing information, rather than the observatory.
No, there's a comment elsewhere that the tests have to be run in profile mode to work on iOS. Besides, the only real problem case is debug mode, if people are running them locally.
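Given the "checked against kDebugMode" note above, the mode guard as landed presumably looks something like this sketch (function name is hypothetical):

```dart
import 'package:flutter/foundation.dart';

void _ensureNotDebug() {
  // An assert would be compiled out in profile and release builds, so an
  // explicit throw is used instead. Profile mode is allowed (the benchmarks
  // need it to run on iOS); only debug builds are rejected.
  if (kDebugMode) {
    throw Exception(
        "Don't run benchmarks in debug mode! Use 'flutter run --release'.");
  }
}
```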
LGTM with nits
...chmarks/android/app/src/main/kotlin/com/example/platform_channels_benchmarks/MainActivity.kt
LGTM
This pull request is not suitable for automatic merging in its current state.
Force-pushed b8226a6 to 095c3a1
… to run the test now
Force-pushed 095c3a1 to 8021eb2
Normally the json file change should be landed after builder config changes in flutter/infra. Please see #79399
@@ -372,6 +372,12 @@
  "task_name": "linux_picture_cache_perf__e2e_summary",
  "flaky": false
},
{
  "name": "Linux platform channels benchmarks",
Shall the name be Linux platform_channels_benchmarks?
@@ -1152,6 +1158,12 @@
  "task_name": "mac_ios_platform_channel_sample_test_swift",
  "flaky": false
},
{
  "name": "Mac_ios platform channels benchmarks",
Same here.
As outlined in the doc: https://docs.google.com/document/d/1oNLxJr_ZqjENVhF94-PqxsGPx0qGXx-pRJxXL6LSagc/edit
This just adds tests for iOS and Android for BasicMessageChannels with the StandardMessageCodec.
issue: #81559
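A Flutter-to-host round trip over a BasicMessageChannel with the StandardMessageCodec, as described here, might be timed roughly like this (a sketch, not code from this PR; the channel name is hypothetical and assumes the host side echoes messages back):

```dart
import 'package:flutter/services.dart';

Future<double> _benchmarkRoundTrips(Object payload, int count) async {
  // Hypothetical channel name; the Android/iOS side must register a
  // handler on the same name that echoes each message back.
  const BasicMessageChannel<Object?> channel = BasicMessageChannel<Object?>(
      'dev.flutter.echo.basic.standard', StandardMessageCodec());
  final Stopwatch watch = Stopwatch()..start();
  for (int i = 0; i < count; i += 1) {
    await channel.send(payload);
  }
  watch.stop();
  // Average microseconds per Flutter->Host->Flutter round trip.
  return watch.elapsedMicroseconds / count;
}
```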