Aggregating test results from multiple kernel CI systems is hard, but masking known issues in them is next level. That's what Kernel CI's KCIDB is trying to do. Learn about the problem and our ideas, and suggest your solutions at this session!
The Linux Foundation's Kernel CI project has been aggregating CI system results in KCIDB for a while. We have six systems contributing and have started sending result notifications to maintainers.
However, as a kernel maintainer or developer, the last thing you want is someone else's issue being attributed to your patches. With 10K build and 100K test results daily, it's a given that tested revisions are often red with known issues, despite the submitting CI systems masking them independently.
We're trying to come up with a way to aggregate known issues, similarly to how we aggregate test results, and to prevent problems from being misattributed to innocent changes, equally for results coming from all CI systems. We want to give maintainers, test authors, and CI system operators the ability to submit their issue descriptions, manually or automatically, so that we can deal with them and everyone can save time and effort.
The contributed information could be a human-readable description plus a regular expression to look for in a log or output file, or it could be something more complicated. We could generate those automatically ourselves and/or rely on human contributions. We plan to process and apply them ourselves, but perhaps it would be better to let submitters do that and report the results back to us instead. We don't know yet.
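As a rough illustration only (not the actual KCIDB schema or API), a known-issue entry might pair a human-readable description with a regular expression, and a failing result whose log matches would be reported as a known failure instead of being blamed on the tested patches. The field names and the match_issues() helper below are hypothetical:

    import re

    # Hypothetical known-issue entries: a description plus a regex to
    # look for in a test's log output. Not the actual KCIDB schema.
    KNOWN_ISSUES = [
        {
            "description": "Known xHCI oops on resume, fix in progress",
            "log_regex": re.compile(r"BUG: kernel NULL pointer dereference.*xhci"),
        },
        {
            "description": "Flaky network test on slow boards",
            "log_regex": re.compile(r"TIMEOUT waiting for eth0"),
        },
    ]

    def match_issues(log_text):
        """Return descriptions of known issues whose regex matches the log."""
        return [
            issue["description"]
            for issue in KNOWN_ISSUES
            if issue["log_regex"].search(log_text)
        ]

    # A matching failure would be masked as "known" in notifications.
    print(match_issues("kernel: TIMEOUT waiting for eth0 to come up"))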
Come see how different kernel CI systems already deal with this problem and how we plan to unify that, and let us know what you think!