Hello, my name is Kamil Páral and I’m part of the Fedora QA team. I was asked to be a point of contact between Fedora CoreOS and Fedora QA. I would like to learn what the current state of QA is in your project, and how I can help you with integrating it into Fedora QA processes, or extending it.
I know there hasn’t been a Fedora CoreOS release yet, but do you have any QA processes in place already? Test cases describing what to test and how, test plans describing which tests to run and when, automated tests executed regularly (before release? after each commit?), results publicly visible somewhere? You know, the usual boring QA stuff.
As an example, here’s our Cloud-specific test plan containing just a few test cases that we used for Fedora 28 Cloud/Atomic release validation:
(scroll to the bottom). These were tested manually during release validation. The Atomic folks had some automated testing as well, I believe, but I don’t really know the details (I’m sure the right people on your team will know about it).
If it makes sense to you, we could adopt those test cases, extend them to cover the most important CoreOS functionality, and create a separate CoreOS section for them. If you have any automated test results, we can talk about how best to integrate them into our usual workflows. For the parts that our team has automated, we usually combine two things: writing test results directly into wiki matrices like the one shown above, and inspecting the failures in the tool’s own frontend. For example, everything filled out by coconut in our Installation matrix was tested automatically by openQA, and the failures are examined directly in its frontend. But there are definitely other approaches we could take.
I can also help you set up test days for your project, and try to reach out to the Fedora community to get you broader test coverage. Or… tell me what else you’d like to see or would like help with.
Looking forward to your feedback!
Do you use any abbreviation for that? FCOS?