
Added performance benchmarks for platform channels #81414


Merged

Conversation

@gaaclarke (Member) commented Apr 28, 2021

As outlined in the doc: https://docs.google.com/document/d/1oNLxJr_ZqjENVhF94-PqxsGPx0qGXx-pRJxXL6LSagc/edit

This just adds tests for iOS and Android for BasicMessageChannels with the StandardMessageCodec.

issue: #81559
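
For context, here is a rough sketch of the kind of Flutter->Host round trip these benchmarks measure. The channel name, message count, and function name below are illustrative assumptions, not the PR's exact code; the real harness also reports results through a BenchmarkResultPrinter.

import 'package:flutter/services.dart';

// Sketch only: repeatedly sends a payload over a BasicMessageChannel with the
// StandardMessageCodec and returns the average round-trip time in microseconds.
Future<double> _benchmarkBasicMessageChannel(Object payload,
    {int numMessages = 1000}) async {
  const BasicMessageChannel<Object?> channel = BasicMessageChannel<Object?>(
    'dev.flutter.benchmark.echo', // placeholder channel name
    StandardMessageCodec(),
  );
  final Stopwatch watch = Stopwatch()..start();
  for (int i = 0; i < numMessages; i += 1) {
    await channel.send(payload);
  }
  watch.stop();
  return watch.elapsedMicroseconds / numMessages;
}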

Pre-launch Checklist

  • I read the Contributor Guide and followed the process outlined there for submitting PRs.
  • I read the Tree Hygiene wiki page, which explains my responsibilities.
  • I read and followed the Flutter Style Guide, including Features we expect every widget to implement.
  • I signed the CLA.
  • I listed at least one issue that this PR fixes in the description above.
  • I updated/added relevant documentation (doc comments with ///).
  • I added new tests to check the change I am making or feature I am adding, or Hixie said the PR is test-exempt.
  • All existing and new tests are passing.

If you need help, consider asking for advice on the #hackers-new channel on Discord.

@flutter-dashboard bot added the "c: contributor-productivity" (Team-specific productivity, code health, technical debt) label on Apr 28, 2021
@google-cla bot added the "cla: yes" label on Apr 28, 2021
@gaaclarke force-pushed the platform-channels-benchmarks branch 9 times, most recently from 3b4c848 to 374976d, on April 28, 2021 at 22:48
@gaaclarke marked this pull request as ready for review on April 28, 2021 at 23:29

@goderbauer (Member) left a comment

There's a lot of boilerplate in this PR, I hope I reviewed all the right files...

Would probably be good if we found a way to not duplicate all this boilerplate for different benchmarks.

}

void main() {
runApp(const MyApp());
Member:

Do you actually need the Counter app UI? To simplify, could this just be:

void main() {
  WidgetsFlutterBinding.ensureInitialized();
  _runTests();
}

Member Author:

That doesn't work because it doesn't set up the binary messenger for platform channels. I've removed most of the UI.

Contributor:

That's supposed to set it up though, right?

If not, can we just directly do what's needed to set up the default binary messenger?

Member Author:

The documentation doesn't say anything about that. What's the concern? My understanding is that goderbauer didn't want all this extra code, and it's mostly been removed. If we had a PlatformChannelsBinding.ensureInitialized() function call, it would save maybe 6 lines of code if I pared down the UI even further. I imagine things might get a bit goofy if you don't present any widget, though. Do you want me to trim even more away?

Contributor:

For me it's more that it takes that much longer to find out what this test is actually doing from here. I have to read through a bunch of unrelated widgets to finally get to one that calls a method in initState that has nothing to do with the widget itself. I also have to think a little bit about whether that could have any impact on the benchmark.

I'm kind of surprised the WidgetsFlutterBinding.ensureInitialized isn't enough to get the channels set up - that's what we tell people to do if they need to talk to platform channels before any UI is rendered after all.

This test doesn't really need any UI to run - it'd just show a black screen in that case. If that bothers you, you can just have it show a Placeholder widget, or maybe some text saying it's running the benchmark.

Contributor:

And in case it's not clear, IMO this shouldn't block the test from landing. But future us will probably be grateful for any simplifications that can be made now :)

Member:

I think we should remove everything from the benchmark that's not required for it to make it clear what this is testing. Since this benchmark doesn't rely on any UI, why show it? If you really need to show UI, then just show a simple container.

Member Author:

Okay, I removed everything I could. It just shows a black screen now while executing.
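
A minimal sketch of the kind of stripped-down entry point this converges on (the _runTests name and the plain black widget are illustrative assumptions, not the PR's exact code):

import 'package:flutter/material.dart';

Future<void> _runTests() async {
  // The benchmark body would go here; platform channels are usable at this
  // point because runApp has already initialized the binding and its default
  // binary messenger.
}

Future<void> main() async {
  // No real UI: just paint a black box so a valid frame is shown while the
  // benchmark runs.
  runApp(const ColoredBox(color: Colors.black));
  await _runTests();
}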

final List<Object> largeBuffer = _makeTestBuffer(1000);
const StandardMessageCodec codec = StandardMessageCodec();
final ByteData data = codec.encodeMessage(largeBuffer);
print('Large buffer size: ${data.lengthInBytes}');
Member:

remove the print?

Member Author:

This was intentionally left in for the logs, since it's difficult to make a data structure of a specific size and its size has to be known in order to make valid comparisons. I might actually tweak the size before I turn the tests on. I gotta check to make sure this is what we want to test.

Contributor:

Can we somehow capture this in the benchmark metric? Like, maybe make it us/byte or something? Or just include the number of bytes processed somewhere in the metrics?

Member Author:

Even if we capture us/byte, changing the size of the buffer in the test would still create a change in us/byte; it won't scale linearly (the bytes are copied 4 different times, and serializing/deserializing isn't equal for all data types regardless of byte size). We have to make sure that when I turn this on, we never change the size or contents of the buffer.

Member:

I gotta check to make sure this is what we want to test.

The print alone doesn't tell me what the size should be, though, if I later come in and modify the benchmark. Ideally, this would have been expressed as an assert(data.lengthInBytes == expectation). But asserts are disabled here. Maybe just throw if it doesn't have the expected size?

Member Author:

I removed the print statement and instead added a comment that warns against ever editing that line of code. I don't think there is anything else we can do or assert.
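
For illustration, here is a sketch of the throw-based guard proposed above (the PR itself went with a warning comment instead; the expected byte count is a made-up placeholder):

final List<Object> largeBuffer = _makeTestBuffer(1000);
const StandardMessageCodec codec = StandardMessageCodec();
final ByteData? data = codec.encodeMessage(largeBuffer);
// Never change the size or contents of the buffer: results are only
// comparable across runs if the serialized payload stays identical.
const int expectedBytes = 14214; // hypothetical value, for illustration only
if (data == null || data.lengthInBytes != expectedBytes) {
  throw Exception('Unexpected large buffer size: ${data?.lengthInBytes}');
}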

);
printer.addResult(
  description:
      ' BasicMessageChannel/StandardMessageCodec/Flutter->Host/Large',
Member:

same

Member Author:

done

@gaaclarke (Member Author):

There's a lot of boilerplate in this PR, I hope I reviewed all the right files...

Would probably be good if we found a way to not duplicate all this boilerplate for different benchmarks.

I was able to remove most of that boilerplate. The test is a bit more fragile now since I needed to keep some of android/ and ios/ for it to work. If the format of the template changes it could break these tests. It also adds a slight complication if people want to run the tests outside of devicelab for debugging purposes. I added a note in the README file about it.

@dnfield (Contributor) left a comment

Mostly nits here. You're going to need to update dev/prod_builders.json to add a builder entry for this, and a corresponding PR will need to be opened in flutter/infra (see e.g. https://github.com/flutter/infra/pull/423) to make a builder for this actually run.

Comment on lines 51 to 52
assert(false,
"Don't run benchmarks in checked mode! Use 'flutter run --release'.");
Contributor:

I'm pretty sure this was copied from elsewhere, but if we really want to check for release mode I think we want to do

Suggested change
- assert(false,
-     "Don't run benchmarks in checked mode! Use 'flutter run --release'.");
+ if (!kReleaseMode)
+   throw Exception("Must be run in release mode! Use 'flutter run --release'.");

The assert will not be run in profile mode, and checked mode is an antiquated/deprecated notion at this point.

Member Author:

done (checked against kDebugMode)

Contributor:

Don't we want this check so that we make sure CI is running these in release mode?

These benchmarks just rely on stdout to get timing information, rather than the observatory.

Member Author:

No, there is a comment somewhere else that the tests have to be run in profile mode to work on iOS. Plus, the only real problem case is debug mode, when people are running them locally.
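
A minimal sketch of that check, assuming a kDebugMode guard called from main (the function name is illustrative):

import 'package:flutter/foundation.dart';

void _checkBuildMode() {
  // Profile mode must stay allowed because the iOS benchmarks run in profile;
  // the only real problem case is a JIT-compiled debug build run locally.
  if (kDebugMode) {
    throw Exception(
        "Don't run benchmarks in debug mode! Use 'flutter run --release'.");
  }
}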

@dnfield (Contributor) left a comment

LGTM with nits

@gaaclarke requested a review from goderbauer on May 4, 2021 at 17:30
@goderbauer (Member) left a comment

LGTM

@fluttergithubbot (Contributor):

This pull request is not suitable for automatic merging in its current state.

  • The status or check suite Windows build_aar_module_test has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows build_tests_1_3 has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows build_tests_2_3 has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows build_tests_3_3 has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows framework_tests_libraries has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows framework_tests_misc has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows framework_tests_widgets has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows gradle_plugin_bundle_test has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows module_custom_host_app_name_test has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows module_test has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows tool_integration_tests_2_5 has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows tool_integration_tests_3_5 has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows tool_integration_tests_5_5 has failed. Please fix the issues identified (or deflake) before re-applying this label.
  • The status or check suite Windows web_tool_tests has failed. Please fix the issues identified (or deflake) before re-applying this label.

@fluttergithubbot (Contributor):

This pull request is not suitable for automatic merging in its current state.

  • The status or check suite Windows tool_tests_commands has failed. Please fix the issues identified (or deflake) before re-applying this label.

@fluttergithubbot merged commit a5f57b9 into flutter:master on May 7, 2021
@keyonghan (Contributor) left a comment

Normally the JSON file change should be landed after the builder config changes in flutter/infra. Please see #79399

@@ -372,6 +372,12 @@
"task_name": "linux_picture_cache_perf__e2e_summary",
"flaky": false
},
{
"name": "Linux platform channels benchmarks",
Contributor:

Shall the name be: Linux platform_channels_benchmarks?

@@ -1152,6 +1158,12 @@
"task_name": "mac_ios_platform_channel_sample_test_swift",
"flaky": false
},
{
"name": "Mac_ios platform channels benchmarks",
Contributor:

Same here.
