From 9492c3ade4635119686e66f04f682c59426f1215 Mon Sep 17 00:00:00 2001
From: Simon Condon
Date: Wed, 24 Apr 2024 21:11:20 +0100
Subject: [PATCH] roadmap tweak

---
 .../wwwroot/md/roadmap.md | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/src/FlUnit.Documentation/wwwroot/md/roadmap.md b/src/FlUnit.Documentation/wwwroot/md/roadmap.md
index 1eb5643..34e82d3 100644
--- a/src/FlUnit.Documentation/wwwroot/md/roadmap.md
+++ b/src/FlUnit.Documentation/wwwroot/md/roadmap.md
@@ -1,27 +1,28 @@
 # FlUnit Roadmap
 
-Proper issue tracking would be overkill, so just a bullet list to organise my thoughts:
+Proper issue tracking and a formal plan would be overkill, so the "roadmap" consists of just a bullet list to organise my thoughts:
 
 Next up:
 
-- Fix VSTest adapter so that the test assembly is loaded in a reflection-only context when discovering tests.
+- Fix VSTest adapter so that the test assembly is loaded in a MetadataLoadContext when discovering tests.
 
 On the to-do list for soon-ish:
 
-- Allow for instantiable test fixtures rather than just static test properties, with an abstract
-  factory to allow for extensions that hook into various DI frameworks (and some mechanism for initialisationof the DI container).
 - Basic test tidy-up support. Open questions here about if/when we should consider objects (prerequisites, test function return values)
   to be "owned" by the test, and thus its responsibility to dispose of. What is the ideal default behaviour, and by what mechanisms
   should we support deviation from that.
 - Test attachment support
+- Allow for instantiable test fixtures rather than just static test properties.
+  Of course, test prerequisites and builder reusability perhaps offers an alternative way to approach this kind of thing, but there's
+  almost certainly still value in this.
 
-On the to-do list for later:
+Other things on the to-do list:
 
 - VSTest platform adapter internal improvements
   - Improvement of stack traces on test failure (eliminate FlUnit stack frames completely)
   - Get rid of some aspects of the core execution logic that are too influenced by VSTest
 - Configurability:
-  - Test case labelling is still annoying after the minor improvement made in v1.2. Better support for custom test case labelling, and perhaps further improved default labelling. Currently mulling over some options, including:
+  - Test case labelling could still be better after the minor improvement made in v1.2. Better support for custom test case labelling, and perhaps further improved default labelling. Currently mulling over some options, including:
     - ~~in default labelling, spot and eliminate *all* type names (even ones contained *within* prereq tostrings..). not trivial, it
       seems - cant e.g. verify anon type names with gettype. would probably require reflection - which id really rather avoid in a
       default behaviour.~~
@@ -43,7 +44,9 @@ On the to-do list for later:
       useless on its own - requires the labelling strategy to use it. Con: seems most useful for particular values for particular
       test cases, but config really for stuff across a whole test suite. when its for a particular test, `LabelledAs` feels more powerful?
-  - Expand on parallel partitioning control by allowing for by class name and namespace - whether thats treated as a special case or if we hook this into trait system is TBD.
+  - Parallelisation:
+    - in vstest adapter, look into using its test parallelisation setting rather than one specific to flunit.
+    - Expand on parallel partitioning control by allowing for by class name and namespace - whether thats treated as a special case or if we hook this into trait system is TBD.
   - Of strategy for duration records (which currently makes a "sensible" decision which may not be appropriate in all situations). Look at achieving greater accuracy in durations in the vstest adapter.
     Now that I realise you can record duration separately to start and end time. I could pause the the duration timing while doing framework-y things..
 - Take a look at configurability of test execution strategy in general (should different cases be different "Tests" and so on). *NTS: What this'd look like, probably:
   TestDiscovery to get Test and `Arrange` it.
@@ -54,6 +57,6 @@ TestRun would need to then act accordingly (TBD whether it could/should execute
 Of course a gotcha here is that GivenEach.. doesn't have to return the same number of cases each time (which I maintain is good behaviour - allows for storage of cases in external media). Would need to handle that gracefully.
 Problems here: simply can't if target bitness differs. Test code execution on test discovery probably not something to pursue, all things considered*
 
-Not going to do, at least in the near future:
+Not going to do:
 
 - QoL: Perhaps `Then/AndOfReturnValue(rv => rv.ShouldBe..)` and `Then/AndOfGiven1(g => g.Prop.ShouldBe..)` for succinctness? No - Lambda discards work pretty well (to my eyes at least), and `OfGiven1`, `OfGiven2` is better dealt with via complex prereq objects
 - QoL: dependent assertions - some assertions only make sense if a prior assertion has succeeded (easy for method-based test frameworks, but not for us..). Such assertions should probably give an inconclusive result? Assertions that return a value (assert a value is of a particular type, cast and return it) also a possibility - though thats probably inviting unacceptable complexity. A basic version of this could be useful though - perhaps an `AndAlso` (echoing C# operator name) - which will make all following assertions inconclusive if any prior assertion failed? No - this is best left to assertion frameworks (e.g. FluentAssertions `Which`)
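
A side note on the "Next up" item: the point of a MetadataLoadContext is to inspect the test assembly's metadata without executing any of its code during discovery. Below is a minimal sketch of the idea, assuming the System.Reflection.MetadataLoadContext package; the `FlUnit.Test` property-type check and the overall shape are illustrative assumptions, not the adapter's actual discovery code.

```csharp
// Sketch only - requires the System.Reflection.MetadataLoadContext NuGet package.
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Runtime.InteropServices;

string testAssemblyPath = args[0]; // path to the test assembly being discovered

// Resolve assemblies from the runtime directory plus the test assembly's own directory.
var searchPaths = Directory.GetFiles(RuntimeEnvironment.GetRuntimeDirectory(), "*.dll")
    .Concat(Directory.GetFiles(Path.GetDirectoryName(testAssemblyPath)!, "*.dll"));

using var metadataContext = new MetadataLoadContext(new PathAssemblyResolver(searchPaths));
Assembly testAssembly = metadataContext.LoadFromAssemblyPath(testAssemblyPath);

foreach (Type type in testAssembly.GetTypes())
{
    foreach (PropertyInfo property in type.GetProperties(BindingFlags.Public | BindingFlags.Static))
    {
        // Types loaded via MetadataLoadContext are metadata-only, so compare by full name
        // rather than with typeof() - and, crucially, no test code runs during discovery.
        if (property.PropertyType.FullName == "FlUnit.Test")
        {
            Console.WriteLine($"{type.FullName}.{property.Name}");
        }
    }
}
```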
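On the first "not going to do" bullet, the lambda-discard style it refers to looks roughly like the following. This is paraphrased from memory of FlUnit's readme-style examples, so treat the builder method names, the `Counter` subject, and the assertion parameter order as assumptions rather than a statement of the current API.

```csharp
using FluentAssertions;
using FlUnit;

public static class CounterTests
{
    // A test declared as a static property, built with FlUnit's fluent builder.
    // Each assertion receives the prerequisite(s) and the return value; lambda
    // discards (_) cover the parameters a given assertion doesn't care about,
    // which is why dedicated Then/AndOfReturnValue overloads look unnecessary.
    public static Test IncrementReturnsNewCount => TestThat
        .Given(() => new Counter())
        .When(counter => counter.Increment())
        .ThenReturns()
        .And((_, returnValue) => returnValue.Should().Be(1))
        .And((counter, _) => counter.Count.Should().Be(1));

    private class Counter
    {
        public int Count { get; private set; }

        public int Increment() => ++Count;
    }
}
```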
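And on the final bullet, the FluentAssertions `Which` chaining being deferred to is the usual "assert the type, cast, and keep asserting" pattern. A generic FluentAssertions snippet, nothing FlUnit-specific:

```csharp
using System;
using FluentAssertions;

public static class DependentAssertionExample
{
    public static void MessageOfReturnedError(object result)
    {
        // BeOfType asserts and casts in one step; the assertion hanging off .Which is
        // only evaluated if that first assertion succeeded - the "dependent assertion"
        // behaviour that the roadmap leaves to the assertion library.
        result.Should().BeOfType<InvalidOperationException>()
            .Which.Message.Should().Contain("already initialised");
    }
}
```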