Feature: Requirement verification functionality #2162
-
I have several thoughts regarding this:
It is a very interesting feature and it would be great to find a practical implementation path.
-
I've been playing around with some potential solutions to this issue that do not require strictdoc modifications. At the moment I'm using XSLT to transform the JUnit XML into an .sdoc, together with the Doxygen tagfile XML. I'm using Google Test, so the .cc code can look like this:
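Roughly along these lines, with made-up names (MotorTests, TC_MOTOR_001) just to show the idea that the test name emitted into the JUnit XML matches a strictdoc UID:

```cpp
#include <gtest/gtest.h>

// The test name is chosen so that the entry written to the JUnit XML
// (name="TC_MOTOR_001") can be matched against the UID of the
// corresponding TEST_CASE node in the .sdoc file.
TEST(MotorTests, TC_MOTOR_001) {
  // Placeholder check standing in for the real verification logic.
  EXPECT_TRUE(true);
}
```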
The corresponding UID for the test case specification in .sdoc looks like below so that the test name in the JUnit file matches the strictdoc UID.
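Assuming a custom TEST_CASE grammar element with UID, TITLE and STATEMENT fields, and a made-up parent requirement REQ-MOTOR-001, something like:

```
[TEST_CASE]
UID: TC_MOTOR_001
TITLE: Motor starts within timeout
STATEMENT: >>>
Verify that the motor controller reports RUNNING within 500 ms of a start command.
<<<
RELATIONS:
- TYPE: Parent
  VALUE: REQ-MOTOR-001
```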
For test cases in the JUnit XML that have corresponding strictdoc test cases, the XSLT creates a [LINK:]. I imagine that very low-level tests, e.g. parameter range checks etc., will not get a test case written in strictdoc. The generated .sdoc is then included into the strictdoc export of the requirement repo via [DOCUMENT_FROM_FILE] (snippet below). I'm adding the Python for the transformation and the XSLT here in case anyone is interested. The resulting RST renders like this for two tests, where one test case was defined in .sdoc and the other not. Still some details to iron out.
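For reference, the inclusion itself is just the standard [DOCUMENT_FROM_FILE] syntax; only the file path is made up here:

```
[DOCUMENT_FROM_FILE]
FILE: generated/test_results_ci.sdoc
```

Inside the generated document, the free text of a result node can then point back at the test case with [LINK: TC_MOTOR_001].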
-
One little problem I've encountered is that @relation does not seem to work for my custom grammar test case, i.e. I do not get forward traceability from my .sdoc test case to code. Not 100% sure if that is strictly necessary, but something to think about.
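For reference, the marker I have in mind in the test source is the strictdoc @relation comment, roughly like this (UID reused from the sketch above; see the strictdoc documentation for the exact placement and scope options):

```cpp
// @relation(TC_MOTOR_001, scope=function)
TEST(MotorTests, TC_MOTOR_001) {
  EXPECT_TRUE(true);
}
```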
-
I am heading off, but my last thought today is very similar to your message above (I had 10 seconds to scan through it): maybe a good starting point for discussing this would be to think through a minimal and quickest possible script or set of scripts that we could write to achieve what you need, then take it from there and develop it into a more general solution if needed. I will read your message in detail tomorrow.
-
This has evolved slightly today with a TEST_RESULT element, which is created by the XSLT transform in my CI pipeline from the JUnit XML and the Doxygen tagfile. This means that I get forward traceability from TEST_CASE to TEST_RESULT.
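A sketch of what one generated node could look like; the TEST_RESULT element and its RESULT/DURATION fields are my own custom grammar, and the UID scheme is made up:

```
[TEST_RESULT]
UID: TR-TC_MOTOR_001-8d9abc2
TITLE: Motor starts within timeout (CI run 8d9abc2)
RESULT: PASS
DURATION: 0.012 s
RELATIONS:
- TYPE: Parent
  VALUE: TC_MOTOR_001
```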
The report renders like below at the moment:
-
Inclusion of results of manually performed tests is a bit trickier. I'm thinking I'll have to resort to creating result files that are named with the git hash, so that the CI can automatically find the correct file and include it, e.g. test_results_8d9abc2.sdoc. But where to store it and how to version it is the question. I'd also need to figure out an easy and user-friendly way to author these manual test results from [TEST_CASE] nodes.
-
@johanenglund, do you have a solution in mind for the inclusion of manual results? The main challenge I see is to avoid duplicate MIDs & UIDs. The solution I would probably implement with existing functionality would be to have two project configurations (test_view, result_view). Tests would be copied to a results directory as they are executed. Then I would render the docs depending on the view I was interested in. This is not a complete solution, because at times we may generate multiple results for each test. To support that, I would probably develop a custom script that would copy a document and update all of the MIDs & UIDs. I could see a strictdoc command that generates result files being useful. I think this could be implemented as an export, for example:
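A hypothetical invocation could look like the following (the test-results format name does not exist today and is only meant to illustrate the idea of deriving result-recording documents from the TEST_CASE nodes):

```
# Hypothetical: derive a result-recording .sdoc from the TEST_CASE nodes
strictdoc export docs/ --formats=test-results --output-dir=results/
```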
-
Sorry, I have kicked this can down the road for a bit. I have not yet thought out a practical solution. When I started thinking about this, I got stuck on the fact that the test results for the automated tests (for which I use the solution I presented above) end up as an artifact in the CI pipeline. The CI pipeline has access to the repo with the strictdoc requirements and test cases, so it can generate test reports just fine. But how to practically "merge" these results with the results from manual tests? Where to store them? How and where to generate the "final" report that includes both automatic CI and manual tests for each release? Since the manual tests are indeed manual, perhaps it would be OK to also generate the final report manually and pull in the CI results for the correct release version somehow... Hopefully I'll get to this soon!
-
A mental note, perhaps mostly for my own particular case; the situation might be different for others. A successful CI run is a prerequisite for doing any manual tests. It would then make sense that the CI pipeline prepares a package with .sdocs tied to the SW and requirement versions that were tested in the CI job (likely by git hash or tag). The manual tester will download the CI-prepared package to a location defined in the QA plan and get on with the recording of manual evidence, preferably via the strictdoc web interface. Alternatively, the CI pipeline will automatically upload the package to an internal web server where the manual tester, via a simple web portal, can easily start/stop the strictdoc web interface for the desired release package. I'm a bit hesitant to require everyone to have a working Python setup on their own rigs; I foresee more problems with that than with maintaining an internal web portal.
-
I'm also playing around with the tracing from tests to requirements. The use case is slightly different though: the requirements are finished, and when creating a new release I have to verify that all requirements are covered by test cases and/or code reviews. The process:
The missing part is some kind of check and/or visualisation that all requirements are covered by code reviews or tests (ideally even with some kind of warning output if not everything is covered). I think it would make sense if strictdoc provided this as an optional feature, but it would probably require some additional logic/types. I haven't looked into this yet. I think everything else can be generated from templates. My current approach is to generate a detailed test report so that you can generate an sdoc:
This requires that the test report contains information such as the requirement/test case you want to link to, plus the filename and line number. I'm currently using CTRF since it can be extended, for example:

```json
"tests": [
  {
    "name": "User should be able to login",
    "status": "passed",
    "duration": 1200,
    "filepath": "tests/login_test.go",
    "tags": ["REQ-003", "TC-003"]
  }
]
```

Relying only on the test report would be nice because it contains almost everything we need, and it would decouple the generation of documents from the test code. After all, it is not relevant for strictdoc how the tests themselves are implemented. Regarding the manual tests and code reviews, I would handle it similarly: the data somehow ends up in some kind of report, and from there the tooling takes over and generates an sdoc file. How this report is created is an implementation detail; it could be manual editing or a custom web UI.
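As a rough sketch of what I mean, assuming the CTRF report above is stored as ctrf-report.json, that requirement UIDs appear in the tags field, and that the generated document uses a custom TEST_RESULT element (all of these names and file paths are placeholders):

```python
#!/usr/bin/env python3
"""Rough sketch: read a CTRF report, emit an .sdoc results document, and warn
about requirements without any covering test. File names, the TEST_RESULT
element and its fields are placeholders, not existing strictdoc features."""
import json

# Assumed set of requirement UIDs to check coverage against; in practice this
# could come from the requirements .sdoc or from a strictdoc JSON export.
REQUIREMENT_UIDS = {"REQ-001", "REQ-002", "REQ-003"}

with open("ctrf-report.json", encoding="utf-8") as handle:
    report = json.load(handle)

# CTRF nests the test list under "results"; fall back to a bare "tests" list.
tests = report.get("results", {}).get("tests") or report.get("tests", [])

covered = set()
nodes = []
for test in tests:
    req_uids = [tag for tag in test.get("tags", []) if tag.startswith("REQ-")]
    covered.update(req_uids)
    lines = [
        "[TEST_RESULT]",
        f"UID: TR-{test['name'].replace(' ', '-')}",
        f"TITLE: {test['name']}",
        f"RESULT: {test['status'].upper()}",
    ]
    if req_uids:
        lines.append("RELATIONS:")
        for uid in req_uids:
            lines.extend(["- TYPE: Parent", f"  VALUE: {uid}"])
    nodes.append("\n".join(lines))

# Grammar definition for the custom TEST_RESULT element is omitted for brevity.
with open("test_results.sdoc", "w", encoding="utf-8") as handle:
    handle.write("[DOCUMENT]\nTITLE: Test results\n\n" + "\n\n".join(nodes) + "\n")

for uid in sorted(REQUIREMENT_UIDS - covered):
    print(f"WARNING: requirement {uid} is not covered by any test result")
```

The warning loop at the end is the check I would eventually like strictdoc to provide natively.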
-
@stanislaw Depending on whether you want to support requirement verification as part of StrictDoc, this LOBSTER visualization is pretty interesting: https://bmw-software-engineering.github.io/lobster/tracing-core_online_report.html. With respect to verification, one should consider a generic, cross-programming-language approach to parse, link and verify, like e.g. https://github.com/awslabs/duvet does.
-
Description
As a developer of a software product, I want to trace requirements to their verification results so that I can produce a verification report for each release.
Some requirements are verified in the CI pipeline while others are verified by hand.
Problem
Currently there is no direct support for taking in dynamic data at strictdoc export runtime to facilitate the creation of a verification report. A feature or recommended process to enable creation of the required traceability is desired.
UIDs of test cases should ideally be preserved over the project life cycle. Redundant/duplicated information should be avoided.
Solution
?