Unlike other checks, tree-status isn't tied to the pull request:
instead, it shows whether checks are passing in the main branch.
Failures can happen for a variety of reasons and should be addressed
before anything else is merged in.
What to do: Once review requirements
are met and all other checks are passing, a reviewer will add the
autosubmit
label,
and then a bot will merge the PR once tree-status succeeds.
A Google testing failure could be a flake (see below), or it might be due to changes in the PR (see Understanding Google Testing for more info). Google employees can view the test output and give feedback accordingly.
What to do: If two weeks have gone by and nobody's looked into it, feel free to reach out on Discord.
In order for checks to run correctly, the .ci.yaml file needs to stay in sync with the base branch.
What to do: This check failure can be fixed by applying the latest changes
from master.
(The Tree hygiene page recommends updating
via rebase, rather than a merge commit.)
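As a rough sketch, assuming your branch is based on flutter/flutter's master and the main repository's remote is named upstream (adjust remote names to match your setup):

```sh
# Fetch the latest commits from flutter/flutter ("upstream" is an
# assumed remote name pointing at the main repository).
git fetch upstream

# Replay this branch's commits on top of the latest master, which also
# brings .ci.yaml back in sync with the base branch.
git rebase upstream/master

# Push the rebased branch back to your fork ("origin" is assumed);
# rebasing rewrites history, so a force push is required.
git push --force-with-lease origin HEAD
```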
Oftentimes, a change inadvertently breaks expected behavior.
When this happens, the best way to find out what's wrong is usually to
view the test output.
If a customer_testing check is unsuccessful, it's a signal that something in the
Flutter customer test registry has failed.
This includes package tests
along with other tests from open-source Flutter projects.
If a pull request requires an update to those external tests, it qualifies as a
breaking change;
please avoid those when possible.
If Linux Analyze fails, it's likely that one or more changes in the PR
violated a linter rule.
Consider reviewing the steps outlined in
setting up the framework dev environment
so that most of these problems are caught by static analysis right away.
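You can also run the analysis locally before pushing. A minimal sketch, assuming a flutter/flutter checkout (the exact scripts CI invokes may differ):

```sh
# Analyze the package you changed with the standard Flutter tooling.
cd packages/flutter
flutter analyze

# The repository also has a repo-wide analysis script (path assumed
# here) that covers additional checks beyond the package-level lints.
cd ../..
dart dev/bots/analyze.dart
```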
Note
All Dart code is run through static analysis: this includes markdown code snippets in doc comments!
See Hixie's Natural Log for more details.
Click on Details for the failing test, and then click View more details on flutter-dashboard.
The full test output is linked at the bottom of the page.
Often, there will be a message that resembles the one below:
══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════════════════════════════════════════════
The following TestFailure was thrown running a test:
Expected: exactly one matching candidate
Actual: _TextWidgetFinder:<Found 0 widgets with text
"AsyncSnapshot<String>(ConnectionState.waiting, null, null, null)": []>
Which: means none were found but one was expected
When the exception was thrown, this was the stack:
#4 main.<anonymous closure>.<anonymous closure> (…/packages/flutter/test/widgets/async_test.dart:115:7)
<asynchronous suspension>
#5 testWidgets.<anonymous closure>.<anonymous closure> (package:flutter_test/src/widget_tester.dart:189:15)
<asynchronous suspension>
#6 TestWidgetsFlutterBinding._runTestBody (package:flutter_test/src/binding.dart:1032:5)
<asynchronous suspension>
<asynchronous suspension>
(elided one frame from package:stack_trace)
This was caught by the test expectation on the following line:
file:///b/s/w/ir/x/w/flutter/packages/flutter/test/widgets/async_test.dart line 115
The test description was:
gracefully handles transition from null future
════════════════════════════════════════════════════════════════════════════════════════════════════
From there, it's just a matter of finding the failing test, running it locally, and figuring out how to fix it!
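For example, using the test from the output above (a sketch; run from the root of a flutter/flutter checkout):

```sh
# Run the failing test file from the package that contains it.
cd packages/flutter
flutter test test/widgets/async_test.dart

# Or run only the failing test, matched by its description from the log.
flutter test test/widgets/async_test.dart \
  --plain-name "gracefully handles transition from null future"
```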
A check might "flake", or fail at random, for a variety of reasons.
Sometimes a flake resolves itself after new changes are pushed and the checks are re-triggered; consider rebasing to pick up the latest changes from the main branch.
Flakes are also often caused by infrastructure errors. For information on how to view and report infrastructure bugs, see the infra failure overview.