Is there some formal way(s) of quantifying potential flaws, or risk, and ensuring there’s sufficient spread of tests to cover them? Perhaps using some kind of complexity measure? Or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things - user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?

  • MR_GABARISE@lemmy.world · 1 year ago

    On top of, or better, in addition to mutation testing, some amount of property-based testing is always great where it counts.
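    To illustrate the idea, here is a minimal hand-rolled sketch of a property-based test; real projects would normally use a library such as Hypothesis (Python) or jqwik (Java), and `my_sort` is just a hypothetical stand-in for the function under test:

```python
import random

def my_sort(xs):
    # Hypothetical function under test; stands in for your own code.
    return sorted(xs)

def test_sort_properties(trials=200):
    """Instead of hand-picking inputs, generate many random ones and
    check properties that must hold for *every* input."""
    rng = random.Random(42)  # seeded so failures are reproducible
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        out = my_sort(xs)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the output is a permutation of the input.
        assert sorted(xs) == sorted(out)

test_sort_properties()
```

    A dedicated library adds the important extras this sketch lacks, notably automatic shrinking of failing inputs to a minimal counterexample.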

    • sveri@lemmy.sveri.de · 1 year ago

      Additionally, I like round-trip tests.

      For example we have two data formats and support conversion between both of them.

      So I have tests that convert from A to B and back to A. Then I can go and call assertEquals on them.

      It’s a very cheap test that exercises all the functionality of the conversion itself.
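      The round-trip idea above can be sketched like this; the two formats (a nested dict as format A, a flat "dotted-key" dict as format B) and both converters are hypothetical illustrations, not the commenter's actual code:

```python
def a_to_b(a, prefix=""):
    """Flatten a nested dict into dotted keys: {'x': {'y': 1}} -> {'x.y': 1}."""
    flat = {}
    for k, v in a.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            flat.update(a_to_b(v, key))
        else:
            flat[key] = v
    return flat

def b_to_a(b):
    """Unflatten dotted keys back into a nested dict."""
    nested = {}
    for key, v in b.items():
        parts = key.split(".")
        node = nested
        for p in parts[:-1]:
            node = node.setdefault(p, {})
        node[parts[-1]] = v
    return nested

# The round-trip property: converting A -> B -> A must give back the original.
original = {"user": {"name": "Ada", "age": 36}, "active": True}
assert b_to_a(a_to_b(original)) == original
```

      Combining this with property-based generation of random inputs is particularly effective: the round-trip equality is exactly the kind of property that holds for every valid input, and generated edge cases often expose values the conversion silently drops.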